{
"paper_id": "M91-1023",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:15:25.904438Z"
},
"title": "GTE: DESCRIPTION OF THE TIA SYSTEM USED FOR MUC-3",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Dietz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "GTE Government Systems Corporation",
"location": {
"addrLine": "100 Ferguson Drive, Mountain View",
"postCode": "94039",
"region": "CA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the version of GTE's Text Interpretation Aid (TIA) system used for participation in the Third Message Understanding Conference (MUC-3). Since 1985, GTE has developed and delivered three systems that automatically generate database records from text messages. These are the original TIA, the Alternate Domain Text Interpretation Aid (AD-TIA) and the Long Range Cruise Missile Analysis and Warning System (LAWS) TIA. These systems process messages from the naval intelligence (original TIA) and air intelligence (AD-TIA and LAWS-TIA) domains. Parallel to the development of these systems, GTE has (since 1984) also been active in Natural Language Processing (NLP) Independent Research and Development. We have developed several systems that build upon the TIA/AD-TIA/LAWS TIA systems. The first system, which processes messages from the naval operations domain, was developed from the original TIA system for participation in the Second Message Understanding Conference (MUCK-II) sponsored by NOSC. The system that this paper describes was developed from the AD-TIA system to overcome weaknesses perceived in the MUCK-II system, as well as in the delivered systems.",
"pdf_parse": {
"paper_id": "M91-1023",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the version of GTE's Text Interpretation Aid (TIA) system used for participation in the Third Message Understanding Conference (MUC-3). Since 1985, GTE has developed and delivered three systems that automatically generate database records from text messages. These are the original TIA, the Alternate Domain Text Interpretation Aid (AD-TIA) and the Long Range Cruise Missile Analysis and Warning System (LAWS) TIA. These systems process messages from the naval intelligence (original TIA) and air intelligence (AD-TIA and LAWS-TIA) domains. Parallel to the development of these systems, GTE has (since 1984) also been active in Natural Language Processing (NLP) Independent Research and Development. We have developed several systems that build upon the TIA/AD-TIA/LAWS TIA systems. The first system, which processes messages from the naval operations domain, was developed from the original TIA system for participation in the Second Message Understanding Conference (MUCK-II) sponsored by NOSC. The system that this paper describes was developed from the AD-TIA system to overcome weaknesses perceived in the MUCK-II system, as well as in the delivered systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The TIA systems are semantics-based. Semantic representations are associated with each lexical word/phrase entry in the lexicon; therefore, grammatically correct syntactic constructions are not vital for message understanding. This semantic approach allows the system to be output driven: the critical information to be extracted from the text messages drives the semantic data structures constructed by the domain system developer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GTE's APPROACH TO NLP",
"sec_num": null
},
{
"text": "TIA's development thrust has been the generation of database records from free-form text. Each message is tokenized, then syntactically and semantically analyzed to detect and extract the relevant information needed to construct or update database records. Upon completion of the message analysis, the records (templates) are output in an orderly fashion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GTE's APPROACH TO NLP",
"sec_num": null
},
{
"text": "The TIA used for MUC-3 was developed from the AD-TIA (Alternate Domain TIA). This system, shown in Figure 1, sequentially performs tokenization, syntactic analysis, semantic analysis, and output translation. Each component is discussed below.",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 107,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "SYSTEM ARCHITECTURE",
"sec_num": null
},
{
"text": "Tokenization. The tokenizer finds strings of text delimited by spaces, carriage returns and punctuation marks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SYSTEM ARCHITECTURE",
"sec_num": null
},
{
"text": "It also attempts to classify the string as a known word or a member of a special token class such as a number or a latitude (e.g. 1234N). The output from the tokenizer is a list of Lisp symbols representing known words and dotted pairs representing a special token class and the text which represents it, e.g. (:number . 12).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SYSTEM ARCHITECTURE",
"sec_num": null
},
{
"text": "If a token is not recognized as either a known word or a member of a special token class, spelling correction is attempted. The spelling corrector is a simple one that looks for transposed, elided or extra letters. If spelling correction fails, the token is classified as belonging to the special token class :unknown. For example, in the MUC-3 corpus, the string \"Orrantia\" is tokenized as (:unknown . ORRANTIA).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SYSTEM ARCHITECTURE",
"sec_num": null
},
{
"text": "TIA uses a syntactic analysis stage to preprocess tokenized text before attempting semantic analysis. Syntactic analysis finds phrases which may be treated as though they were single words, such as noun phrases (NPs), and defines synonyms. As used by the TIA, the syntactic analyzer does not operate at the sentence level of free text, only at the phrase level.¹ If more than one phrase is possible at a given point in the text, ambiguities are resolved according to the following scheme: 1.) Left to right: the first phrase of overlapping phrases encountered is chosen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Analysis.",
"sec_num": null
},
{
"text": "2.) Length: if two phrases start at the same point, the longer of the two is chosen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Analysis.",
"sec_num": null
},
{
"text": "3.) Syntactic Priority: A syntactic priority may be assigned to a phrase when it is declared. If two phrases of the same length start at the same point in the text, the one with the higher declared priority is chosen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Analysis.",
"sec_num": null
},
{
"text": "The syntactic analyzer retains a parse tree for every phrase that it finds. These parse trees allow the semantic analyzer (see below) to know that, for example, a date was found, but also to extract the month from that date. In the MUC-3 domain, the parse trees tended to be rather shallow and consume only a token or two each. For example, the sentences \"Police have reported that terrorists tonight bombed the embassies of the PRC and the Soviet Union. The bombs caused damage but no injuries.\" return a list of parse trees whose roots are (<LAW-ENFORCEMENT-OR-SECURITY> <REPORT> <DETERMINER> <TERRORIST> <TONIGHT> <BOMBING> <DETERMINER> <EMBASSY> <PREPOSITION> <LOCATION> <CONJUNCTION> <LOCATION> <PERIOD> <DETERMINER> <BOMB> <CAUSE-DAMAGE> <CONJUNCTION> <NO-INJURY> <PERIOD>).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Analysis.",
"sec_num": null
},
{
"text": "The syntactic analyzer has the ability to automatically add production rules at run time. For example, the following (simplified) production rules are defined in the system:\n\n<location> ::= <district> ?<comma> <location-approx> <location>\n               <location>\n               <region>\n<location-approx> ::= near\n<district> ::= @<region> district\n<region> ::= san isidro",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text of message",
"sec_num": null
},
{
"text": "¹ The code allows sentential-level parsing; the TIA simply does not utilize the capability. These rewrite rules allow \"Orrantia district, near San Isidro\" to be recognized as <location> and cause the following rule to be added to the grammar:\n\n<region> ::= orrantia",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text of message",
"sec_num": null
},
{
"text": "In this example, the question mark indicates an optional item, and the @ sign indicates a place where a new rule might be added. As an example of how phrases are defined in the TIA, <location> above is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "<region> ::= orrantia",
"sec_num": null
},
{
"text": "(def-lexical-entry LOCATION\n  :grammar muc3\n  :syntax ((<district> ?<comma> <location-approx> <location>)\n           <location>\n           <region>\n           ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "<region> ::= orrantia",
"sec_num": null
},
{
"text": "  )\n  :predicts ((location-p input-text-s (PT-T-STR)\n              type-s (GET-REGION-TYPE (PT-ST-STR '<region>)))))\n\nSemantic Analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "<region> ::= orrantia",
"sec_num": null
},
{
"text": "The semantic analyzer (based on concepts embodied in PLUM, developed at the University of Massachusetts by Wendy Lehnert [1]) is the major component of the system. The input to the semantic analyzer is a list of syntactic units identified by the syntactic analyzer. The output of the semantic analyzer is a list of frame-like structured concepts, each consisting of a \"head\" and an unspecified number of (slot, value) pairs. The semantic analyzer predicts zero or more concepts for each parse tree found by the syntactic analyzer.",
"cite_spans": [
{
"start": 121,
"end": 124,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "<region> ::= orrantia",
"sec_num": null
},
{
"text": "Some of the slots in each concept are initialized from information found in the parse tree. For example, <location> predicts location-p, which has slots named input-text-s and type-s.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "<region> ::= orrantia",
"sec_num": null
},
{
"text": "In the above example, the first location-p predicted has the input-text-s slot filled with \"THE PRC\" (directly from the parse tree) and its type-s slot filled with 'COUNTRY (by table lookup). Slot initialization information is stored with syntax information, as shown in the sample definition of <location> above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "<region> ::= orrantia",
"sec_num": null
},
{
"text": "Other slots are filled by \"expecting\" information in other frames. For example, <bombing> predicts bombing-p, with slots which include agent-s and physical-target-s. Agent-s is expected to be filled by an actor-p, found previously in the same sentence, and physical-target-s is expected to be filled by a theme-p, found later in the same sentence. Passive voice is recognized by the syntactic analyzer, and would have predicted passive-bombing-p, with different expectations about the structure of the text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "<region> ::= orrantia",
"sec_num": null
},
{
"text": "Knowledge of expectations is stored in prediction prototypes. There is a prediction prototype for each possible type of structured concept.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "<region> ::= orrantia",
"sec_num": null
},
{
"text": "Concepts are instantiated when \"enough\" slots are filled. What is \"enough\" is specific to the individual concept type. Only instantiated concepts can fill slots. Disambiguation is handled by making a prediction for each sense of a phrase and only instantiating the prediction for the correct sense.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "<region> ::= orrantia",
"sec_num": null
},
{
"text": "The instantiation of a concept can cause actions to occur by allowing calls to Lisp. Reference resolution is one example of the type of action that might occur after instantiating a concept.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "<region> ::= orrantia",
"sec_num": null
},
{
"text": "The concepts are hierarchical. For example, terrorist-p, which is-a actor-p, can fill the agent-s slot in the bombing-p.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "<region> ::= orrantia",
"sec_num": null
},
{
"text": "A prediction prototype maintains a list of slots that need to be filled for that type of structured concept, along with information on how to fill them. An example is shown below. This particular concept is predicted by the syntactic constituent <embassy>. Output translation. The output translator transforms the internal representations of the concepts extracted from the message into the proper database update template form. This component of the system is tailored to meet the requirements of the particular domain and database under consideration. In the MUC domain, the output translator applies defaults, standardizes fillers for set list slots and performs cross referencing. This module is also responsible for deciding when not to generate templates. For example, it should determine that a military-versus-military attack should not generate a template.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "<region> ::= orrantia",
"sec_num": null
},
{
"text": "The output translator is only partially implemented. It does not incorporate many of the heuristics about generating and combining templates, and incorrectly applies those heuristics that it does incorporate. Additionally, the output translator is dependent on the order in which its input is received. It may generate an entirely different set of templates if the list of concepts produced by the semantic analyzer is reversed. This was not intended to be a feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "<region> ::= orrantia",
"sec_num": null
},
{
"text": "Reference resolution takes place in two modules: the semantic analyzer and the output translator. In the semantic analyzer, newly instantiated concepts (of specific types) are compared with other similar concepts. If no contradictions are found between a pair of concepts, the two are merged into a single concept. \"No contradictions\" is defined to mean that 1) both concepts are the same type or the type of one is a direct ancestor of the other in the concept hierarchy and 2) every slot of each matches that of the other, or is empty. Certain slots, such as input-text-s, are excluded from the matching requirement, since two concepts may refer to the same entity but be represented differently in the text. To \"match\" usually means that the slot fillers are EQUAL, but not always. For example, the persons-name-s slot may allow partial matches to succeed; \"Smith\" matches \"John Smith.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "REFERENCE RESOLUTION",
"sec_num": null
},
{
"text": "The output translator is responsible for combining templates. For example, two attacks at the same time and place should be output as one template. This module is also responsible for deleting multiple templates for a single event. For example, a bombing at a given time and place is also an attack, and an attack that results in a person being killed is a murder. In each case only one template should be produced. Unfortunately, this module is only partially implemented, so many spurious templates are produced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "REFERENCE RESOLUTION",
"sec_num": null
},
{
"text": "Conjunction processing occurs in the semantic analyzer module. The conjunction in the phrase \"embassies of the PRC and the Soviet Union.\" is handled as follows: <embassy> predicts government-building-p, which when instantiated fills the physical-target-s slot of bombing-p. The concept government-building-p has a slot country-s which can be filled by a location-p concept which has the type-s of 'country and obj-of-prep-s of 'of. The syntactic unit <preposition> predicts preposition-p, which, upon being instantiated, inserts itself as the obj-of-prep-s slot of the subsequent concept. The first <location>, i.e. \"The PRC\", predicts a location-p, which meets the constraints needed to fill the country-s slot of the government-building-p. When that slot is filled, the location-p creates a record to indicate that it has filled a slot in the government-building-p. Next, <conjunction> predicts conjunction-p, which, when instantiated, makes a note to try to join the previous concept with the next concept at a later time (the end of the sentence). The second <location> then predicts a new location-p, and the <period> predicts a number of concepts whose only function is to cause other concepts to be instantiated in the proper order. One of these tells the conjunction-p that it's time to try to join the previously noted concepts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONJUNCTION PROCESSING",
"sec_num": null
},
{
"text": "The conjunction joining mechanism verifies that the two locations are the same types of thing. Upon verification, the locations are conjoined by copying concepts in which the first location filled one or more slots, and replacing those fillers with the second location. In this instance, the government-building-p is copied with \"the Soviet Union\" filling the country-s slot. Since the government-building-p concept filled a slot in the bombing-p concept, that bombing-p concept is copied with the new government-building-p filling the physical-target-s slot of the copy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONJUNCTION PROCESSING",
"sec_num": null
},
{
"text": "GTE believes the TIA architecture described in this paper is sound, robust and practical for a message understanding system. It has in fact been the basis for several delivered systems. Its relatively poor performance in MUC-3 can be attributed to the small amount of time devoted to adapting it to the new domain. The scores were more reflective of the current state of development, especially in the output translation module, than of the system architecture.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SUMMARY",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Counselor Project -Experiments with PLUM",
"authors": [
{
"first": "W",
"middle": [],
"last": "Lehnert",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Draper",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Stucky",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sullivan",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lehnert, W., Narasimhan, Draper, B., Stucky, B. and Sullivan, M., \"The Counselor Project - Experiments with PLUM\", Department of Computer and Information Science, University of Massachusetts.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "TIA System Architecture"
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "(defpred GOVERNMENT-BUILDING-P\n  :parents (theme-p)\n  :concept-frame (input-text-s\n                  country-s = (of-country-p)\n                  type-s = \"GOVERNMENT OFFICE OR RESIDENCE\")\n  :control-structure ((expect COUNTRY-S in next location-p input-text-s until PERIOD-P)\n                      (expect COUNTRY-S in last countrian-adj-p until PERIOD-P))\n  :instantiator (np-instantiator-p))\n\nLong term memory consists of lists of concepts created by the semantic analyzer. These lists are called *SYNTAX-MEM*, *CONCEPT-MEM*, *DRAMATIS-PERSONAE*, *STORY-LINE*, and *EVENT-MEM*. The structured concepts in *CONCEPT-MEM* correspond roughly to nouns, those in *EVENT-MEM* to verbs. *SYNTAX-MEM* is a catch-all for miscellaneous structured concepts. *STORY-LINE* is a subset of *EVENT-MEM* in which concepts that refer to the same event are resolved into the same concept. *DRAMATIS-PERSONAE* is a similar subset of *CONCEPT-MEM*. The output of the semantic analyzer is the list of concepts found in *STORY-LINE*."
}
}
}
}