Dataset schema (recovered from the viewer header): id — string (8 characters); title — string (18 to 138 characters); abstract — string (177 to 1.96k characters); entities — list; relation — list.
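The rows below follow this layout. For orientation, here is a minimal Python sketch of the record structure the schema and rows imply; the type names (`Entity`, `Relation`, `Record`) are illustrative and not part of the dataset itself.

```python
from typing import List, TypedDict

class Entity(TypedDict):
    id: str          # e.g. "C82-1054.1": paper id plus a per-abstract index
    char_start: int  # character offset into the abstract
    char_end: int    # end offset (the dump does not state whether inclusive or exclusive)

class Relation(TypedDict):
    label: int       # integer relation-type code; the code table is not part of this dump
    arg1: str        # entity id
    arg2: str        # entity id
    reverse: bool    # presumably flips the argument order of the relation

class Record(TypedDict):
    id: str          # 8-character paper id, e.g. "C82-1054"
    title: str
    abstract: str
    entities: List[Entity]
    relation: List[Relation]
```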
C82-1054
AN IMPROVED LEFT-CORNER PARSING ALGORITHM
This paper proposes a series of modifications to the left corner parsing algorithm for context-free grammars . It is argued that the resulting algorithm is both efficient and flexible and is, therefore, a good choice for the parser used in a natural language interface .
[ { "id": "C82-1054.1", "char_start": 56, "char_end": 85 }, { "id": "C82-1054.2", "char_start": 90, "char_end": 111 }, { "id": "C82-1054.3", "char_start": 228, "char_end": 234 }, { "id": "C82-1054.4", "char_start": 245, "char_end": 271 } ]
[ { "label": 1, "arg1": "C82-1054.1", "arg2": "C82-1054.2", "reverse": false }, { "label": 1, "arg1": "C82-1054.3", "arg2": "C82-1054.4", "reverse": false } ]
J82-3002
An Efficient Easily Adaptable System for Interpreting Natural Language Queries
This paper gives an overall account of a prototype natural language question answering system , called Chat-80 . Chat-80 has been designed to be both efficient and easily adaptable to a variety of applications. The system is implemented entirely in Prolog , a programming language based on logic . With the aid of a logic-based grammar formalism called extraposition grammars , Chat-80 translates English questions into the Prolog subset of logic . The resulting logical expression is then transformed by a planning algorithm into efficient Prolog , cf. query optimisation in a relational database . Finally, the Prolog form is executed to yield the answer.
[ { "id": "J82-3002.1", "char_start": 54, "char_end": 96 }, { "id": "J82-3002.2", "char_start": 106, "char_end": 113 }, { "id": "J82-3002.3", "char_start": 116, "char_end": 123 }, { "id": "J82-3002.4", "char_start": 252, "char_end": 258 }, { "id": "J82-3002.5", "char_start": 263, "char_end": 283 }, { "id": "J82-3002.6", "char_start": 293, "char_end": 298 }, { "id": "J82-3002.7", "char_start": 319, "char_end": 348 }, { "id": "J82-3002.8", "char_start": 356, "char_end": 378 }, { "id": "J82-3002.9", "char_start": 381, "char_end": 388 }, { "id": "J82-3002.10", "char_start": 400, "char_end": 417 }, { "id": "J82-3002.11", "char_start": 427, "char_end": 433 }, { "id": "J82-3002.12", "char_start": 434, "char_end": 449 }, { "id": "J82-3002.13", "char_start": 466, "char_end": 484 }, { "id": "J82-3002.14", "char_start": 510, "char_end": 528 }, { "id": "J82-3002.15", "char_start": 544, "char_end": 550 }, { "id": "J82-3002.16", "char_start": 557, "char_end": 575 }, { "id": "J82-3002.17", "char_start": 581, "char_end": 600 }, { "id": "J82-3002.18", "char_start": 616, "char_end": 627 } ]
[ { "label": 1, "arg1": "J82-3002.5", "arg2": "J82-3002.6", "reverse": true }, { "label": 1, "arg1": "J82-3002.9", "arg2": "J82-3002.10", "reverse": false }, { "label": 4, "arg1": "J82-3002.11", "arg2": "J82-3002.12", "reverse": true }, { "label": 1, "arg1": "J82-3002.16", "arg2": "J82-3002.17", "reverse": false } ]
P82-1035
Scruffy Text Understanding: Design and Implementation of 'Tolerant' Understanders
Most large text-understanding systems have been designed under the assumption that the input text will be in reasonably neat form, e.g., newspaper stories and other edited texts . However, a great deal of natural language texts e.g., memos , rough drafts , conversation transcripts etc., have features that differ significantly from neat texts , posing special problems for readers, such as misspelled words , missing words , poor syntactic construction , missing periods , etc. Our solution to these problems is to make use of expectations , based both on knowledge of surface English and on world knowledge of the situation being described. These syntactic and semantic expectations can be used to figure out unknown words from context , constrain the possible word-senses of words with multiple meanings ( ambiguity ), fill in missing words ( ellipsis ), and resolve referents ( anaphora ). This method of using expectations to aid the understanding of scruffy texts has been incorporated into a working computer program called NOMAD , which understands scruffy texts in the domain of Navy messages.
[ { "id": "P82-1035.1", "char_start": 14, "char_end": 40 }, { "id": "P82-1035.2", "char_start": 96, "char_end": 100 }, { "id": "P82-1035.3", "char_start": 140, "char_end": 157 }, { "id": "P82-1035.4", "char_start": 168, "char_end": 180 }, { "id": "P82-1035.5", "char_start": 208, "char_end": 230 }, { "id": "P82-1035.6", "char_start": 237, "char_end": 242 }, { "id": "P82-1035.7", "char_start": 251, "char_end": 257 }, { "id": "P82-1035.8", "char_start": 260, "char_end": 284 }, { "id": "P82-1035.9", "char_start": 336, "char_end": 346 }, { "id": "P82-1035.10", "char_start": 394, "char_end": 410 }, { "id": "P82-1035.11", "char_start": 413, "char_end": 426 }, { "id": "P82-1035.12", "char_start": 429, "char_end": 456 }, { "id": "P82-1035.13", "char_start": 459, "char_end": 474 }, { "id": "P82-1035.14", "char_start": 531, "char_end": 543 }, { "id": "P82-1035.15", "char_start": 573, "char_end": 588 }, { "id": "P82-1035.16", "char_start": 596, "char_end": 611 }, { "id": "P82-1035.17", "char_start": 652, "char_end": 687 }, { "id": "P82-1035.18", "char_start": 714, "char_end": 727 }, { "id": "P82-1035.19", "char_start": 733, "char_end": 740 }, { "id": "P82-1035.20", "char_start": 766, "char_end": 777 }, { "id": "P82-1035.21", "char_start": 781, "char_end": 809 }, { "id": "P82-1035.22", "char_start": 812, "char_end": 821 }, { "id": "P82-1035.23", "char_start": 833, "char_end": 846 }, { "id": "P82-1035.24", "char_start": 849, "char_end": 857 }, { "id": "P82-1035.25", "char_start": 873, "char_end": 882 }, { "id": "P82-1035.26", "char_start": 885, "char_end": 893 }, { "id": "P82-1035.27", "char_start": 918, "char_end": 930 }, { "id": "P82-1035.28", "char_start": 959, "char_end": 972 }, { "id": "P82-1035.29", "char_start": 1010, "char_end": 1026 }, { "id": "P82-1035.30", "char_start": 1034, "char_end": 1039 }, { "id": "P82-1035.31", "char_start": 1060, "char_end": 1073 } ]
[ { "label": 6, "arg1": "P82-1035.8", "arg2": "P82-1035.9", "reverse": false }, { "label": 1, "arg1": "P82-1035.14", "arg2": "P82-1035.15", "reverse": true }, { "label": 3, "arg1": "P82-1035.18", "arg2": "P82-1035.19", "reverse": true }, { "label": 3, "arg1": "P82-1035.20", "arg2": "P82-1035.21", "reverse": false }, { "label": 4, "arg1": "P82-1035.23", "arg2": "P82-1035.24", "reverse": false }, { "label": 4, "arg1": "P82-1035.25", "arg2": "P82-1035.26", "reverse": false }, { "label": 1, "arg1": "P82-1035.27", "arg2": "P82-1035.28", "reverse": false } ]
P84-1020
LIMITED DOMAIN SYSTEMS FOR LANGUAGE TEACHING
This abstract describes a natural language system which deals usefully with ungrammatical input and describes some actual and potential applications of it in computer aided second language learning . However, this is not the only area in which the principles of the system might be used, and the aim in building it was simply to demonstrate the workability of the general mechanism, and provide a framework for assessing developments of it.
[ { "id": "P84-1020.1", "char_start": 29, "char_end": 52 }, { "id": "P84-1020.2", "char_start": 79, "char_end": 98 }, { "id": "P84-1020.3", "char_start": 161, "char_end": 200 } ]
[ { "label": 1, "arg1": "P84-1020.1", "arg2": "P84-1020.2", "reverse": false } ]
P84-1034
A PROPER TREATMENT OF SYNTAX AND SEMANTICS IN MACHINE TRANSLATION
A proper treatment of syntax and semantics in machine translation is introduced and discussed from the empirical viewpoint. For English-Japanese machine translation , the syntax directed approach is effective where the Heuristic Parsing Model (HPM) and the Syntactic Role System play important roles. For Japanese-English translation , the semantics directed approach is powerful where the Conceptual Dependency Diagram (CDD) and the Augmented Case Marker System (which is a kind of Semantic Role System ) play essential roles. Some examples of the difference between Japanese sentence structure and English sentence structure , which is vital to machine translation are also discussed together with various interesting ambiguities .
[ { "id": "P84-1034.1", "char_start": 25, "char_end": 31 }, { "id": "P84-1034.2", "char_start": 36, "char_end": 45 }, { "id": "P84-1034.3", "char_start": 49, "char_end": 68 }, { "id": "P84-1034.4", "char_start": 131, "char_end": 167 }, { "id": "P84-1034.5", "char_start": 174, "char_end": 198 }, { "id": "P84-1034.6", "char_start": 222, "char_end": 251 }, { "id": "P84-1034.7", "char_start": 260, "char_end": 281 }, { "id": "P84-1034.8", "char_start": 308, "char_end": 336 }, { "id": "P84-1034.9", "char_start": 343, "char_end": 370 }, { "id": "P84-1034.10", "char_start": 393, "char_end": 428 }, { "id": "P84-1034.11", "char_start": 437, "char_end": 465 }, { "id": "P84-1034.12", "char_start": 486, "char_end": 506 }, { "id": "P84-1034.13", "char_start": 571, "char_end": 598 }, { "id": "P84-1034.14", "char_start": 603, "char_end": 629 }, { "id": "P84-1034.15", "char_start": 650, "char_end": 669 }, { "id": "P84-1034.16", "char_start": 723, "char_end": 734 } ]
[ { "label": 1, "arg1": "P84-1034.4", "arg2": "P84-1034.5", "reverse": true }, { "label": 1, "arg1": "P84-1034.8", "arg2": "P84-1034.9", "reverse": false }, { "label": 6, "arg1": "P84-1034.13", "arg2": "P84-1034.14", "reverse": false } ]
P84-1047
Entity-Oriented Parsing
An entity-oriented approach to restricted-domain parsing is proposed. In this approach, the definitions of the structure and surface representation of domain entities are grouped together. Like semantic grammar , this allows easy exploitation of limited domain semantics . In addition, it facilitates fragmentary recognition and the use of multiple parsing strategies , and so is particularly useful for robust recognition of extra-grammatical input . Several advantages from the point of view of language definition are also noted. Representative samples from an entity-oriented language definition are presented, along with a control structure for an entity-oriented parser , some parsing strategies that use the control structure , and worked examples of parses . A parser incorporating the control structure and the parsing strategies is currently under implementation .
[ { "id": "P84-1047.1", "char_start": 6, "char_end": 59 }, { "id": "P84-1047.2", "char_start": 114, "char_end": 123 }, { "id": "P84-1047.3", "char_start": 128, "char_end": 150 }, { "id": "P84-1047.4", "char_start": 154, "char_end": 169 }, { "id": "P84-1047.5", "char_start": 197, "char_end": 213 }, { "id": "P84-1047.6", "char_start": 249, "char_end": 273 }, { "id": "P84-1047.7", "char_start": 304, "char_end": 327 }, { "id": "P84-1047.8", "char_start": 343, "char_end": 370 }, { "id": "P84-1047.9", "char_start": 414, "char_end": 452 }, { "id": "P84-1047.10", "char_start": 500, "char_end": 519 }, { "id": "P84-1047.11", "char_start": 567, "char_end": 602 }, { "id": "P84-1047.12", "char_start": 631, "char_end": 648 }, { "id": "P84-1047.13", "char_start": 656, "char_end": 678 }, { "id": "P84-1047.14", "char_start": 686, "char_end": 704 }, { "id": "P84-1047.15", "char_start": 718, "char_end": 735 }, { "id": "P84-1047.16", "char_start": 761, "char_end": 767 }, { "id": "P84-1047.17", "char_start": 772, "char_end": 778 }, { "id": "P84-1047.18", "char_start": 797, "char_end": 814 }, { "id": "P84-1047.19", "char_start": 823, "char_end": 841 }, { "id": "P84-1047.20", "char_start": 861, "char_end": 875 } ]
[ { "label": 3, "arg1": "P84-1047.3", "arg2": "P84-1047.4", "reverse": false }, { "label": 1, "arg1": "P84-1047.8", "arg2": "P84-1047.9", "reverse": false }, { "label": 1, "arg1": "P84-1047.12", "arg2": "P84-1047.13", "reverse": false }, { "label": 1, "arg1": "P84-1047.14", "arg2": "P84-1047.15", "reverse": true }, { "label": 4, "arg1": "P84-1047.17", "arg2": "P84-1047.18", "reverse": true } ]
P84-1064
A COMPUTATIONAL THEORY OF DISPOSITIONS
Informally, a disposition is a proposition which is preponderantly, but not necessarily always, true. For example, birds can fly is a disposition , as are the propositions Swedes are blond and Spaniards are dark. An idea which underlies the theory described in this paper is that a disposition may be viewed as a proposition with implicit fuzzy quantifiers which are approximations to all and always, e.g., almost all, almost always, most, frequently, etc. For example, birds can fly may be interpreted as the result of suppressing the fuzzy quantifier most in the proposition most birds can fly. Similarly, young men like young women may be read as most young men like mostly young women. The process of transforming a disposition into a proposition is referred to as explicitation or restoration . Explicitation sets the stage for representing the meaning of a proposition through the use of test-score semantics (Zadeh, 1978, 1982). In this approach to semantics , the meaning of a proposition , p, is represented as a procedure which tests, scores and aggregates the elastic constraints which are induced by p. The paper closes with a description of an approach to reasoning with dispositions which is based on the concept of a fuzzy syllogism . Syllogistic reasoning with dispositions has an important bearing on commonsense reasoning as well as on the management of uncertainty in expert systems . As a simple application of the techniques described in this paper, we formulate a definition of typicality -- a concept which plays an important role in human cognition and is of relevance to default reasoning .
[ { "id": "P84-1064.1", "char_start": 17, "char_end": 28 }, { "id": "P84-1064.2", "char_start": 34, "char_end": 45 }, { "id": "P84-1064.3", "char_start": 137, "char_end": 148 }, { "id": "P84-1064.4", "char_start": 162, "char_end": 174 }, { "id": "P84-1064.5", "char_start": 285, "char_end": 296 }, { "id": "P84-1064.6", "char_start": 316, "char_end": 327 }, { "id": "P84-1064.7", "char_start": 342, "char_end": 359 }, { "id": "P84-1064.8", "char_start": 539, "char_end": 555 }, { "id": "P84-1064.9", "char_start": 568, "char_end": 579 }, { "id": "P84-1064.10", "char_start": 723, "char_end": 734 }, { "id": "P84-1064.11", "char_start": 742, "char_end": 753 }, { "id": "P84-1064.12", "char_start": 772, "char_end": 785 }, { "id": "P84-1064.13", "char_start": 789, "char_end": 800 }, { "id": "P84-1064.14", "char_start": 803, "char_end": 816 }, { "id": "P84-1064.15", "char_start": 853, "char_end": 860 }, { "id": "P84-1064.16", "char_start": 866, "char_end": 877 }, { "id": "P84-1064.17", "char_start": 897, "char_end": 917 }, { "id": "P84-1064.18", "char_start": 959, "char_end": 968 }, { "id": "P84-1064.19", "char_start": 975, "char_end": 982 }, { "id": "P84-1064.20", "char_start": 988, "char_end": 999 }, { "id": "P84-1064.21", "char_start": 1172, "char_end": 1199 }, { "id": "P84-1064.22", "char_start": 1235, "char_end": 1250 }, { "id": "P84-1064.23", "char_start": 1253, "char_end": 1292 }, { "id": "P84-1064.24", "char_start": 1321, "char_end": 1342 }, { "id": "P84-1064.25", "char_start": 1361, "char_end": 1386 }, { "id": "P84-1064.26", "char_start": 1390, "char_end": 1404 }, { "id": "P84-1064.27", "char_start": 1503, "char_end": 1513 }, { "id": "P84-1064.28", "char_start": 1560, "char_end": 1575 }, { "id": "P84-1064.29", "char_start": 1599, "char_end": 1616 } ]
[ { "label": 4, "arg1": "P84-1064.6", "arg2": "P84-1064.7", "reverse": true }, { "label": 4, "arg1": "P84-1064.8", "arg2": "P84-1064.9", "reverse": false }, { "label": 3, "arg1": "P84-1064.15", "arg2": "P84-1064.16", "reverse": false }, { "label": 3, "arg1": "P84-1064.19", "arg2": "P84-1064.20", "reverse": false }, { "label": 1, "arg1": "P84-1064.21", "arg2": "P84-1064.22", "reverse": true }, { "label": 2, "arg1": "P84-1064.23", "arg2": "P84-1064.24", "reverse": false }, { "label": 4, "arg1": "P84-1064.25", "arg2": "P84-1064.26", "reverse": false }, { "label": 2, "arg1": "P84-1064.27", "arg2": "P84-1064.28", "reverse": false } ]
P84-1078
Controlling Lexical Substitution in Computer Text Generation
This report describes Paul , a computer text generation system designed to create cohesive text through the use of lexical substitutions . Specifically, this system is designed to deterministically choose between pronominalization , superordinate substitution , and definite noun phrase reiteration . The system identifies a strength of antecedence recovery for each of the lexical substitutions , and matches them against the strength of potential antecedence of each element in the text to select the proper substitutions for these elements.
[ { "id": "P84-1078.1", "char_start": 25, "char_end": 29 }, { "id": "P84-1078.2", "char_start": 34, "char_end": 65 }, { "id": "P84-1078.3", "char_start": 85, "char_end": 98 }, { "id": "P84-1078.4", "char_start": 118, "char_end": 139 }, { "id": "P84-1078.5", "char_start": 216, "char_end": 233 }, { "id": "P84-1078.6", "char_start": 236, "char_end": 262 }, { "id": "P84-1078.7", "char_start": 278, "char_end": 301 }, { "id": "P84-1078.8", "char_start": 340, "char_end": 360 }, { "id": "P84-1078.9", "char_start": 377, "char_end": 398 }, { "id": "P84-1078.10", "char_start": 430, "char_end": 463 }, { "id": "P84-1078.11", "char_start": 487, "char_end": 491 }, { "id": "P84-1078.12", "char_start": 513, "char_end": 526 } ]
[ { "label": 1, "arg1": "P84-1078.2", "arg2": "P84-1078.4", "reverse": true }, { "label": 6, "arg1": "P84-1078.5", "arg2": "P84-1078.6", "reverse": false }, { "label": 3, "arg1": "P84-1078.8", "arg2": "P84-1078.9", "reverse": false }, { "label": 3, "arg1": "P84-1078.10", "arg2": "P84-1078.12", "reverse": false } ]
C86-1081
A LOGICAL FORMALISM FOR THE REPRESENTATION OF DETERMINERS
Determiners play an important role in conveying the meaning of an utterance , but they have often been disregarded, perhaps because it seemed more important to devise methods to grasp the global meaning of a sentence , even if not in a precise way. Another problem with determiners is their inherent ambiguity . In this paper we propose a logical formalism , which, among other things, is suitable for representing determiners without forcing a particular interpretation when their meaning is still not clear.
[ { "id": "C86-1081.1", "char_start": 3, "char_end": 14 }, { "id": "C86-1081.2", "char_start": 55, "char_end": 62 }, { "id": "C86-1081.3", "char_start": 69, "char_end": 78 }, { "id": "C86-1081.4", "char_start": 191, "char_end": 206 }, { "id": "C86-1081.5", "char_start": 212, "char_end": 220 }, { "id": "C86-1081.6", "char_start": 274, "char_end": 285 }, { "id": "C86-1081.7", "char_start": 304, "char_end": 313 }, { "id": "C86-1081.8", "char_start": 343, "char_end": 360 }, { "id": "C86-1081.9", "char_start": 419, "char_end": 430 }, { "id": "C86-1081.10", "char_start": 460, "char_end": 474 }, { "id": "C86-1081.11", "char_start": 486, "char_end": 493 } ]
[ { "label": 3, "arg1": "C86-1081.2", "arg2": "C86-1081.3", "reverse": false }, { "label": 3, "arg1": "C86-1081.4", "arg2": "C86-1081.5", "reverse": false }, { "label": 3, "arg1": "C86-1081.6", "arg2": "C86-1081.7", "reverse": true }, { "label": 3, "arg1": "C86-1081.10", "arg2": "C86-1081.11", "reverse": false } ]
C86-1132
SYNTHESIZING WEATHER FORECASTS FROM FORMATTED DATA
This paper describes a system ( RAREAS ) which synthesizes marine weather forecasts directly from formatted weather data . Such synthesis appears feasible in certain natural sublanguages with stereotyped text structure . RAREAS draws on several kinds of linguistic and non-linguistic knowledge and mirrors a forecaster's apparent tendency to ascribe less precise temporal adverbs to more remote meteorological events. The approach can easily be adapted to synthesize bilingual or multi-lingual texts .
[ { "id": "C86-1132.1", "char_start": 35, "char_end": 41 }, { "id": "C86-1132.2", "char_start": 101, "char_end": 123 }, { "id": "C86-1132.3", "char_start": 131, "char_end": 140 }, { "id": "C86-1132.4", "char_start": 169, "char_end": 189 }, { "id": "C86-1132.5", "char_start": 195, "char_end": 221 }, { "id": "C86-1132.6", "char_start": 224, "char_end": 230 }, { "id": "C86-1132.7", "char_start": 257, "char_end": 296 }, { "id": "C86-1132.8", "char_start": 366, "char_end": 382 }, { "id": "C86-1132.9", "char_start": 470, "char_end": 502 } ]
[ { "label": 1, "arg1": "C86-1132.1", "arg2": "C86-1132.2", "reverse": false }, { "label": 3, "arg1": "C86-1132.4", "arg2": "C86-1132.5", "reverse": true }, { "label": 1, "arg1": "C86-1132.6", "arg2": "C86-1132.7", "reverse": true } ]
J86-1002
THE CORRECTION OF ILL-FORMED INPUT USING HISTORY-BASED EXPECTATION WITH APPLICATIONS TO SPEECH UNDERSTANDING
A method for error correction of ill-formed input is described that acquires dialogue patterns in typical usage and uses these patterns to predict new inputs. Error correction is done by strongly biasing parsing toward expected meanings unless clear evidence from the input shows the current sentence is not expected. A dialogue acquisition and tracking algorithm is presented along with a description of its implementation in a voice interactive system . A series of tests are described that show the power of the error correction methodology when stereotypic dialogue occurs.
[ { "id": "J86-1002.1", "char_start": 16, "char_end": 32 }, { "id": "J86-1002.2", "char_start": 36, "char_end": 52 }, { "id": "J86-1002.3", "char_start": 80, "char_end": 97 }, { "id": "J86-1002.4", "char_start": 130, "char_end": 138 }, { "id": "J86-1002.5", "char_start": 162, "char_end": 178 }, { "id": "J86-1002.6", "char_start": 207, "char_end": 214 }, { "id": "J86-1002.7", "char_start": 231, "char_end": 239 }, { "id": "J86-1002.8", "char_start": 295, "char_end": 303 }, { "id": "J86-1002.9", "char_start": 323, "char_end": 366 }, { "id": "J86-1002.10", "char_start": 412, "char_end": 426 }, { "id": "J86-1002.11", "char_start": 432, "char_end": 456 }, { "id": "J86-1002.12", "char_start": 518, "char_end": 546 }, { "id": "J86-1002.13", "char_start": 552, "char_end": 572 } ]
[ { "label": 1, "arg1": "J86-1002.1", "arg2": "J86-1002.2", "reverse": false }, { "label": 4, "arg1": "J86-1002.9", "arg2": "J86-1002.11", "reverse": false } ]
J86-3001
Attention, Intentions, And The Structure Of Discourse
In this paper we explore a new theory of discourse structure that stresses the role of purpose and processing in discourse . In this theory, discourse structure is composed of three separate but interrelated components: the structure of the sequence of utterances (called the linguistic structure ), a structure of purposes (called the intentional structure ), and the state of focus of attention (called the attentional state ). The linguistic structure consists of segments of the discourse into which the utterances naturally aggregate. The intentional structure captures the discourse-relevant purposes , expressed in each of the linguistic segments as well as relationships among them. The attentional state is an abstraction of the focus of attention of the participants as the discourse unfolds. The attentional state , being dynamic, records the objects, properties, and relations that are salient at each point of the discourse . The distinction among these components is essential to provide an adequate explanation of such discourse phenomena as cue phrases , referring expressions , and interruptions . The theory of attention, intention, and aggregation of utterances is illustrated in the paper with a number of example discourses . Various properties of discourse are described, and explanations for the behaviour of cue phrases , referring expressions , and interruptions are explored. This theory provides a framework for describing the processing of utterances in a discourse . Discourse processing requires recognizing how the utterances of the discourse aggregate into segments , recognizing the intentions expressed in the discourse and the relationships among intentions , and tracking the discourse through the operation of the mechanisms associated with attentional state . This processing description specifies in these recognition tasks the role of information from the discourse and from the participants ' knowledge of the domain.
[ { "id": "J86-3001.1", "char_start": 34, "char_end": 63 }, { "id": "J86-3001.2", "char_start": 90, "char_end": 97 }, { "id": "J86-3001.3", "char_start": 102, "char_end": 112 }, { "id": "J86-3001.4", "char_start": 116, "char_end": 125 }, { "id": "J86-3001.5", "char_start": 144, "char_end": 163 }, { "id": "J86-3001.6", "char_start": 256, "char_end": 266 }, { "id": "J86-3001.7", "char_start": 279, "char_end": 299 }, { "id": "J86-3001.8", "char_start": 318, "char_end": 326 }, { "id": "J86-3001.9", "char_start": 339, "char_end": 360 }, { "id": "J86-3001.10", "char_start": 381, "char_end": 399 }, { "id": "J86-3001.11", "char_start": 412, "char_end": 429 }, { "id": "J86-3001.12", "char_start": 437, "char_end": 457 }, { "id": "J86-3001.13", "char_start": 486, "char_end": 495 }, { "id": "J86-3001.14", "char_start": 511, "char_end": 521 }, { "id": "J86-3001.15", "char_start": 547, "char_end": 568 }, { "id": "J86-3001.16", "char_start": 582, "char_end": 609 }, { "id": "J86-3001.17", "char_start": 637, "char_end": 656 }, { "id": "J86-3001.18", "char_start": 698, "char_end": 715 }, { "id": "J86-3001.19", "char_start": 741, "char_end": 759 }, { "id": "J86-3001.20", "char_start": 767, "char_end": 779 }, { "id": "J86-3001.21", "char_start": 787, "char_end": 796 }, { "id": "J86-3001.22", "char_start": 810, "char_end": 827 }, { "id": "J86-3001.23", "char_start": 930, "char_end": 939 }, { "id": "J86-3001.24", "char_start": 1037, "char_end": 1056 }, { "id": "J86-3001.25", "char_start": 1060, "char_end": 1071 }, { "id": "J86-3001.26", "char_start": 1074, "char_end": 1095 }, { "id": "J86-3001.27", "char_start": 1102, "char_end": 1115 }, { "id": "J86-3001.28", "char_start": 1122, "char_end": 1183 }, { "id": "J86-3001.29", "char_start": 1237, "char_end": 1247 }, { "id": "J86-3001.30", "char_start": 1272, "char_end": 1281 }, { "id": "J86-3001.31", "char_start": 1335, "char_end": 1346 }, { "id": "J86-3001.32", "char_start": 1349, "char_end": 1370 }, { "id": "J86-3001.33", "char_start": 1377, "char_end": 1390 }, { "id": "J86-3001.34", "char_start": 1410, "char_end": 1416 }, { "id": "J86-3001.35", "char_start": 1471, "char_end": 1481 }, { "id": "J86-3001.36", "char_start": 1487, "char_end": 1496 }, { "id": "J86-3001.37", "char_start": 1499, "char_end": 1519 }, { "id": "J86-3001.38", "char_start": 1549, "char_end": 1559 }, { "id": "J86-3001.39", "char_start": 1567, "char_end": 1576 }, { "id": "J86-3001.40", "char_start": 1592, "char_end": 1600 }, { "id": "J86-3001.41", "char_start": 1619, "char_end": 1629 }, { "id": "J86-3001.42", "char_start": 1647, "char_end": 1656 }, { "id": "J86-3001.43", "char_start": 1685, "char_end": 1695 }, { "id": "J86-3001.44", "char_start": 1715, "char_end": 1724 }, { "id": "J86-3001.45", "char_start": 1781, "char_end": 1798 }, { "id": "J86-3001.46", "char_start": 1848, "char_end": 1865 }, { "id": "J86-3001.47", "char_start": 1899, "char_end": 1908 }, { "id": "J86-3001.48", "char_start": 1922, "char_end": 1934 } ]
[ { "label": 2, "arg1": "J86-3001.3", "arg2": "J86-3001.4", "reverse": false }, { "label": 3, "arg1": "J86-3001.6", "arg2": "J86-3001.7", "reverse": true }, { "label": 3, "arg1": "J86-3001.8", "arg2": "J86-3001.9", "reverse": true }, { "label": 3, "arg1": "J86-3001.10", "arg2": "J86-3001.11", "reverse": true }, { "label": 4, "arg1": "J86-3001.13", "arg2": "J86-3001.14", "reverse": true }, { "label": 3, "arg1": "J86-3001.15", "arg2": "J86-3001.16", "reverse": false }, { "label": 3, "arg1": "J86-3001.18", "arg2": "J86-3001.19", "reverse": false }, { "label": 3, "arg1": "J86-3001.28", "arg2": "J86-3001.29", "reverse": false }, { "label": 4, "arg1": "J86-3001.35", "arg2": "J86-3001.36", "reverse": false }, { "label": 4, "arg1": "J86-3001.38", "arg2": "J86-3001.39", "reverse": false }, { "label": 4, "arg1": "J86-3001.41", "arg2": "J86-3001.42", "reverse": false } ]
J86-4002
REFERENCE IDENTIFICATION AND REFERENCE IDENTIFICATION FAILURES
The goal of this work is the enrichment of human-machine interactions in a natural language environment . Because a speaker and listener cannot be assured to have the same beliefs , contexts , perceptions , backgrounds , or goals , at each point in a conversation , difficulties and mistakes arise when a listener interprets a speaker's utterance . These mistakes can lead to various kinds of misunderstandings between speaker and listener , including reference failures or failure to understand the speaker's intention . We call these misunderstandings miscommunication . Such mistakes can slow, and possibly break down, communication . Our goal is to recognize and isolate such miscommunications and circumvent them. This paper highlights a particular class of miscommunication --- reference problems --- by describing a case study and techniques for avoiding failures of reference . We want to illustrate a framework less restrictive than earlier ones by allowing a speaker leeway in forming an utterance about a task and in determining the conversational vehicle to deliver it. The paper also promotes a new view for extensional reference .
[ { "id": "J86-4002.1", "char_start": 46, "char_end": 72 }, { "id": "J86-4002.2", "char_start": 78, "char_end": 106 }, { "id": "J86-4002.3", "char_start": 119, "char_end": 126 }, { "id": "J86-4002.4", "char_start": 131, "char_end": 139 }, { "id": "J86-4002.5", "char_start": 175, "char_end": 182 }, { "id": "J86-4002.6", "char_start": 185, "char_end": 193 }, { "id": "J86-4002.7", "char_start": 196, "char_end": 207 }, { "id": "J86-4002.8", "char_start": 210, "char_end": 221 }, { "id": "J86-4002.9", "char_start": 227, "char_end": 232 }, { "id": "J86-4002.10", "char_start": 254, "char_end": 266 }, { "id": "J86-4002.11", "char_start": 308, "char_end": 316 }, { "id": "J86-4002.12", "char_start": 330, "char_end": 349 }, { "id": "J86-4002.13", "char_start": 422, "char_end": 429 }, { "id": "J86-4002.14", "char_start": 434, "char_end": 442 }, { "id": "J86-4002.15", "char_start": 455, "char_end": 473 }, { "id": "J86-4002.16", "char_start": 503, "char_end": 522 }, { "id": "J86-4002.17", "char_start": 557, "char_end": 573 }, { "id": "J86-4002.18", "char_start": 625, "char_end": 638 }, { "id": "J86-4002.19", "char_start": 683, "char_end": 700 }, { "id": "J86-4002.20", "char_start": 766, "char_end": 782 }, { "id": "J86-4002.21", "char_start": 787, "char_end": 805 }, { "id": "J86-4002.22", "char_start": 865, "char_end": 886 }, { "id": "J86-4002.23", "char_start": 972, "char_end": 979 }, { "id": "J86-4002.24", "char_start": 1001, "char_end": 1010 }, { "id": "J86-4002.25", "char_start": 1124, "char_end": 1145 } ]
[ { "label": 3, "arg1": "J86-4002.1", "arg2": "J86-4002.2", "reverse": true }, { "label": 6, "arg1": "J86-4002.3", "arg2": "J86-4002.4", "reverse": false } ]
P86-1011
The Relationship Between Tree Adjoining Grammars And Head Grammars
We examine the relationship between the two grammatical formalisms : Tree Adjoining Grammars and Head Grammars . We briefly investigate the weak equivalence of the two formalisms . We then turn to a discussion comparing the linguistic expressiveness of the two formalisms .
[ { "id": "P86-1011.1", "char_start": 47, "char_end": 69 }, { "id": "P86-1011.2", "char_start": 72, "char_end": 95 }, { "id": "P86-1011.3", "char_start": 100, "char_end": 113 }, { "id": "P86-1011.4", "char_start": 148, "char_end": 159 }, { "id": "P86-1011.5", "char_start": 171, "char_end": 181 }, { "id": "P86-1011.6", "char_start": 227, "char_end": 252 }, { "id": "P86-1011.7", "char_start": 264, "char_end": 274 } ]
[ { "label": 6, "arg1": "P86-1011.2", "arg2": "P86-1011.3", "reverse": false }, { "label": 3, "arg1": "P86-1011.4", "arg2": "P86-1011.5", "reverse": false }, { "label": 3, "arg1": "P86-1011.6", "arg2": "P86-1011.7", "reverse": false } ]
P86-1038
A LOGICAL SEMANTICS FOR FEATURE STRUCTURES
Unification-based grammar formalisms use structures containing sets of features to describe linguistic objects . Although computational algorithms for unification of feature structures have been worked out in experimental research, these algorithms become quite complicated, and a more precise description of feature structures is desirable. We have developed a model in which descriptions of feature structures can be regarded as logical formulas , and interpreted by sets of directed graphs which satisfy them. These graphs are, in fact, transition graphs for a special type of deterministic finite automaton . This semantics for feature structures extends the ideas of Pereira and Shieber [11], by providing an interpretation for values which are specified by disjunctions and path values embedded within disjunctions . Our interpretation differs from that of Pereira and Shieber by using a logical model in place of a denotational semantics . This logical model yields a calculus of equivalences , which can be used to simplify formulas . Unification is attractive, because of its generality, but it is often computationally inefficient. Our model allows a careful examination of the computational complexity of unification . We have shown that the consistency problem for formulas with disjunctive values is NP-complete . To deal with this complexity , we describe how disjunctive values can be specified in a way which delays expansion to disjunctive normal form .
[ { "id": "P86-1038.1", "char_start": 3, "char_end": 39 }, { "id": "P86-1038.2", "char_start": 74, "char_end": 82 }, { "id": "P86-1038.3", "char_start": 95, "char_end": 113 }, { "id": "P86-1038.4", "char_start": 125, "char_end": 187 }, { "id": "P86-1038.5", "char_start": 312, "char_end": 330 }, { "id": "P86-1038.6", "char_start": 365, "char_end": 370 }, { "id": "P86-1038.7", "char_start": 396, "char_end": 414 }, { "id": "P86-1038.8", "char_start": 434, "char_end": 450 }, { "id": "P86-1038.9", "char_start": 480, "char_end": 495 }, { "id": "P86-1038.10", "char_start": 522, "char_end": 528 }, { "id": "P86-1038.11", "char_start": 543, "char_end": 560 }, { "id": "P86-1038.12", "char_start": 583, "char_end": 613 }, { "id": "P86-1038.13", "char_start": 621, "char_end": 630 }, { "id": "P86-1038.14", "char_start": 635, "char_end": 653 }, { "id": "P86-1038.15", "char_start": 766, "char_end": 778 }, { "id": "P86-1038.16", "char_start": 783, "char_end": 794 }, { "id": "P86-1038.17", "char_start": 811, "char_end": 823 }, { "id": "P86-1038.18", "char_start": 897, "char_end": 910 }, { "id": "P86-1038.19", "char_start": 925, "char_end": 947 }, { "id": "P86-1038.20", "char_start": 955, "char_end": 968 }, { "id": "P86-1038.21", "char_start": 990, "char_end": 1002 }, { "id": "P86-1038.22", "char_start": 1035, "char_end": 1043 }, { "id": "P86-1038.23", "char_start": 1046, "char_end": 1057 }, { "id": "P86-1038.24", "char_start": 1149, "char_end": 1154 }, { "id": "P86-1038.25", "char_start": 1191, "char_end": 1215 }, { "id": "P86-1038.26", "char_start": 1219, "char_end": 1230 }, { "id": "P86-1038.27", "char_start": 1256, "char_end": 1275 }, { "id": "P86-1038.28", "char_start": 1280, "char_end": 1288 }, { "id": "P86-1038.29", "char_start": 1294, "char_end": 1312 }, { "id": "P86-1038.30", "char_start": 1316, "char_end": 1327 }, { "id": "P86-1038.31", "char_start": 1348, "char_end": 1358 }, { "id": "P86-1038.32", "char_start": 1377, "char_end": 1388 }, { "id": "P86-1038.33", "char_start": 1435, "char_end": 1444 }, { "id": "P86-1038.34", "char_start": 1448, "char_end": 1471 } ]
[ { "label": 3, "arg1": "P86-1038.2", "arg2": "P86-1038.3", "reverse": false }, { "label": 3, "arg1": "P86-1038.7", "arg2": "P86-1038.8", "reverse": false }, { "label": 4, "arg1": "P86-1038.11", "arg2": "P86-1038.12", "reverse": false }, { "label": 3, "arg1": "P86-1038.13", "arg2": "P86-1038.14", "reverse": false }, { "label": 4, "arg1": "P86-1038.16", "arg2": "P86-1038.17", "reverse": false }, { "label": 6, "arg1": "P86-1038.18", "arg2": "P86-1038.19", "reverse": false }, { "label": 1, "arg1": "P86-1038.20", "arg2": "P86-1038.22", "reverse": false }, { "label": 3, "arg1": "P86-1038.25", "arg2": "P86-1038.26", "reverse": false }, { "label": 3, "arg1": "P86-1038.27", "arg2": "P86-1038.28", "reverse": false } ]
A88-1001
The Multimedia Articulation of Answers in a Natural Language Database Query System
This paper describes a domain independent strategy for the multimedia articulation of answers elicited by a natural language interface to database query applications . Multimedia answers include videodisc images and heuristically-produced complete sentences in text or text-to-speech form . Deictic reference and feedback about the discourse are enabled. The interface thus presents the application as cooperative and conversational.
[ { "id": "A88-1001.1", "char_start": 62, "char_end": 96 }, { "id": "A88-1001.2", "char_start": 111, "char_end": 137 }, { "id": "A88-1001.3", "char_start": 141, "char_end": 168 }, { "id": "A88-1001.4", "char_start": 171, "char_end": 189 }, { "id": "A88-1001.5", "char_start": 198, "char_end": 214 }, { "id": "A88-1001.6", "char_start": 251, "char_end": 260 }, { "id": "A88-1001.7", "char_start": 264, "char_end": 268 }, { "id": "A88-1001.8", "char_start": 272, "char_end": 291 }, { "id": "A88-1001.9", "char_start": 294, "char_end": 311 }, { "id": "A88-1001.10", "char_start": 316, "char_end": 324 }, { "id": "A88-1001.11", "char_start": 335, "char_end": 344 }, { "id": "A88-1001.12", "char_start": 362, "char_end": 371 } ]
[ { "label": 6, "arg1": "A88-1001.7", "arg2": "A88-1001.8", "reverse": false }, { "label": 3, "arg1": "A88-1001.10", "arg2": "A88-1001.11", "reverse": false } ]
A88-1003
An Architecture for Anaphora Resolution
In this paper, we describe the pronominal anaphora resolution module of Lucy , a portable English understanding system . The design of this module was motivated by the observation that, although there exist many theories of anaphora resolution , no one of these theories is complete. Thus we have implemented a blackboard-like architecture in which individual partial theories can be encoded as separate modules that can interact to propose candidate antecedents and to evaluate each other's proposals.
[ { "id": "A88-1003.1", "char_start": 34, "char_end": 71 }, { "id": "A88-1003.2", "char_start": 75, "char_end": 79 }, { "id": "A88-1003.3", "char_start": 93, "char_end": 121 }, { "id": "A88-1003.4", "char_start": 227, "char_end": 246 }, { "id": "A88-1003.5", "char_start": 314, "char_end": 342 }, { "id": "A88-1003.6", "char_start": 363, "char_end": 379 }, { "id": "A88-1003.7", "char_start": 454, "char_end": 465 } ]
[ { "label": 4, "arg1": "A88-1003.1", "arg2": "A88-1003.2", "reverse": false }, { "label": 4, "arg1": "A88-1003.5", "arg2": "A88-1003.6", "reverse": true } ]
C88-1007
Machine Translation Using Isomorphic UCGs
This paper discusses the application of Unification Categorial Grammar (UCG) to the framework of Isomorphic Grammars for Machine Translation pioneered by Landsbergen. The Isomorphic Grammars approach to MT involves developing the grammars of the Source and Target languages in parallel, in order to ensure that SL and TL expressions which stand in the translation relation have isomorphic derivations . The principal advantage of this approach is that knowledge concerning translation equivalence of expressions may be directly exploited, obviating the need for answers to semantic questions that we do not yet have. Semantic and other information may still be incorporated, but as constraints on the translation relation , not as levels of textual representation . After introducing this approach to MT system design, and the basics of monolingual UCG , we will show how the two can be integrated, and present an example from an implemented bi-directional English-Spanish fragment . Finally we will present some outstanding problems with the approach.
[ { "id": "C88-1007.1", "char_start": 43, "char_end": 79 }, { "id": "C88-1007.2", "char_start": 100, "char_end": 119 }, { "id": "C88-1007.3", "char_start": 124, "char_end": 143 }, { "id": "C88-1007.4", "char_start": 174, "char_end": 208 }, { "id": "C88-1007.5", "char_start": 233, "char_end": 241 }, { "id": "C88-1007.6", "char_start": 249, "char_end": 276 }, { "id": "C88-1007.7", "char_start": 314, "char_end": 316 }, { "id": "C88-1007.8", "char_start": 321, "char_end": 323 }, { "id": "C88-1007.9", "char_start": 355, "char_end": 375 }, { "id": "C88-1007.10", "char_start": 381, "char_end": 403 }, { "id": "C88-1007.11", "char_start": 576, "char_end": 594 }, { "id": "C88-1007.12", "char_start": 620, "char_end": 628 }, { "id": "C88-1007.13", "char_start": 704, "char_end": 724 }, { "id": "C88-1007.14", "char_start": 744, "char_end": 766 }, { "id": "C88-1007.15", "char_start": 804, "char_end": 813 }, { "id": "C88-1007.16", "char_start": 840, "char_end": 855 }, { "id": "C88-1007.17", "char_start": 945, "char_end": 984 } ]
[ { "label": 1, "arg1": "C88-1007.1", "arg2": "C88-1007.3", "reverse": false }, { "label": 3, "arg1": "C88-1007.5", "arg2": "C88-1007.6", "reverse": false }, { "label": 6, "arg1": "C88-1007.7", "arg2": "C88-1007.8", "reverse": false }, { "label": 6, "arg1": "C88-1007.13", "arg2": "C88-1007.14", "reverse": false } ]
C88-1044
On the Generation and Interpretation of Demonstrative Expressions
This paper presents necessary and sufficient conditions for the use of demonstrative expressions in English and discusses implications for current discourse processing algorithms . We examine a broad range of texts to show how the distribution of demonstrative forms and functions is genre dependent . This research is part of a larger study of anaphoric expressions , the results of which will be incorporated into a natural language generation system .
[ { "id": "C88-1044.1", "char_start": 74, "char_end": 99 }, { "id": "C88-1044.2", "char_start": 103, "char_end": 110 }, { "id": "C88-1044.3", "char_start": 150, "char_end": 181 }, { "id": "C88-1044.4", "char_start": 212, "char_end": 217 }, { "id": "C88-1044.5", "char_start": 250, "char_end": 283 }, { "id": "C88-1044.6", "char_start": 287, "char_end": 302 }, { "id": "C88-1044.7", "char_start": 348, "char_end": 369 }, { "id": "C88-1044.8", "char_start": 421, "char_end": 455 } ]
[ { "label": 4, "arg1": "C88-1044.1", "arg2": "C88-1044.2", "reverse": false }, { "label": 4, "arg1": "C88-1044.4", "arg2": "C88-1044.5", "reverse": true } ]
C88-1066
Parsing with Category Cooccurrence Restrictions
This paper summarizes the formalism of Category Cooccurrence Restrictions (CCRs) and describes two parsing algorithms that interpret it. CCRs are Boolean conditions on the cooccurrence of categories in local trees which allow the statement of generalizations which cannot be captured in other current syntax formalisms . The use of CCRs leads to syntactic descriptions formulated entirely with restrictive statements . The paper shows how conventional algorithms for the analysis of context free languages can be adapted to the CCR formalism . Special attention is given to the part of the parser that checks the fulfillment of logical well-formedness conditions on trees .
[ { "id": "C88-1066.1", "char_start": 42, "char_end": 83 }, { "id": "C88-1066.2", "char_start": 102, "char_end": 120 }, { "id": "C88-1066.3", "char_start": 140, "char_end": 144 }, { "id": "C88-1066.4", "char_start": 149, "char_end": 167 }, { "id": "C88-1066.5", "char_start": 191, "char_end": 201 }, { "id": "C88-1066.6", "char_start": 205, "char_end": 216 }, { "id": "C88-1066.7", "char_start": 233, "char_end": 261 }, { "id": "C88-1066.8", "char_start": 304, "char_end": 321 }, { "id": "C88-1066.9", "char_start": 335, "char_end": 339 }, { "id": "C88-1066.10", "char_start": 349, "char_end": 371 }, { "id": "C88-1066.11", "char_start": 397, "char_end": 419 }, { "id": "C88-1066.12", "char_start": 486, "char_end": 508 }, { "id": "C88-1066.13", "char_start": 531, "char_end": 544 }, { "id": "C88-1066.14", "char_start": 593, "char_end": 599 }, { "id": "C88-1066.15", "char_start": 631, "char_end": 665 }, { "id": "C88-1066.16", "char_start": 669, "char_end": 674 } ]
[ { "label": 3, "arg1": "C88-1066.4", "arg2": "C88-1066.5", "reverse": false }, { "label": 3, "arg1": "C88-1066.10", "arg2": "C88-1066.11", "reverse": true }, { "label": 6, "arg1": "C88-1066.12", "arg2": "C88-1066.13", "reverse": false }, { "label": 3, "arg1": "C88-1066.15", "arg2": "C88-1066.16", "reverse": false } ]
C88-2086
Solving Some Persistent Presupposition Problems
Soames 1979 provides some counterexamples to the theory of natural language presuppositions that is presented in Gazdar 1979. Soames 1982 provides a theory which explains these counterexamples. Mercer 1987 rejects the solution found in Soames 1982 leaving these counterexamples unexplained. By reappraising these insightful counterexamples, the inferential theory for natural language presuppositions described in Mercer 1987, 1988 gives a simple and straightforward explanation for the presuppositional nature of these sentences .
[ { "id": "C88-2086.1", "char_start": 52, "char_end": 94 }, { "id": "C88-2086.2", "char_start": 348, "char_end": 403 }, { "id": "C88-2086.3", "char_start": 490, "char_end": 513 }, { "id": "C88-2086.4", "char_start": 523, "char_end": 532 } ]
[ { "label": 3, "arg1": "C88-2086.3", "arg2": "C88-2086.4", "reverse": false } ]
C88-2130
Directing the Generation of Living Space Descriptions
We have developed a computational model of the process of describing the layout of an apartment or house, a much-studied discourse task first characterized linguistically by Linde (1974). The model is embodied in a program, APT , that can reproduce segments of actual tape-recorded descriptions, using organizational and discourse strategies derived through analysis of our corpus .
[ { "id": "C88-2130.1", "char_start": 23, "char_end": 42 }, { "id": "C88-2130.2", "char_start": 124, "char_end": 138 }, { "id": "C88-2130.3", "char_start": 195, "char_end": 200 }, { "id": "C88-2130.4", "char_start": 227, "char_end": 230 }, { "id": "C88-2130.5", "char_start": 305, "char_end": 344 }, { "id": "C88-2130.6", "char_start": 377, "char_end": 383 } ]
[ { "label": 1, "arg1": "C88-2130.1", "arg2": "C88-2130.2", "reverse": false }, { "label": 4, "arg1": "C88-2130.3", "arg2": "C88-2130.4", "reverse": false } ]
C88-2132
Island Parsing and Bidirectional Charts
Chart parsing is directional in the sense that it works from the starting point (usually the beginning of the sentence) extending its activity usually in a rightward manner. We shall introduce the concept of a chart that works outward from islands and makes sense of as much of the sentence as is actually possible, and after that will lead to predictions of missing fragments . So, for any place where the easily identifiable fragments occur in the sentence , the process will extend to both the left and the right of the islands , until possibly completely missing fragments are reached. At that point, by virtue of the fact that both a left and a right context were found, heuristics can be introduced that predict the nature of the missing fragments .
[ { "id": "C88-2132.1", "char_start": 3, "char_end": 16 }, { "id": "C88-2132.2", "char_start": 20, "char_end": 31 }, { "id": "C88-2132.3", "char_start": 213, "char_end": 218 }, { "id": "C88-2132.4", "char_start": 243, "char_end": 250 }, { "id": "C88-2132.5", "char_start": 285, "char_end": 293 }, { "id": "C88-2132.6", "char_start": 373, "char_end": 382 }, { "id": "C88-2132.7", "char_start": 433, "char_end": 442 }, { "id": "C88-2132.8", "char_start": 456, "char_end": 464 }, { "id": "C88-2132.9", "char_start": 529, "char_end": 536 }, { "id": "C88-2132.10", "char_start": 573, "char_end": 582 }, { "id": "C88-2132.11", "char_start": 682, "char_end": 692 }, { "id": "C88-2132.12", "char_start": 750, "char_end": 759 } ]
[ { "label": 4, "arg1": "C88-2132.7", "arg2": "C88-2132.8", "reverse": false } ]
C88-2160
Interactive Translation: a new approach
A new approach for Interactive Machine Translation where the author interacts during the creation or the modification of the document is proposed. The explanation of an ambiguity or an error for the purposes of correction does not use any concepts of the underlying linguistic theory : it is a reformulation of the erroneous or ambiguous sentence . The interaction is limited to the analysis step of the translation process . This paper presents a new interactive disambiguation scheme based on the paraphrasing of a parser 's multiple output. Some examples of paraphrasing ambiguous sentences are presented.
[ { "id": "C88-2160.1", "char_start": 22, "char_end": 53 }, { "id": "C88-2160.2", "char_start": 64, "char_end": 70 }, { "id": "C88-2160.3", "char_start": 128, "char_end": 136 }, { "id": "C88-2160.4", "char_start": 172, "char_end": 181 }, { "id": "C88-2160.5", "char_start": 269, "char_end": 286 }, { "id": "C88-2160.6", "char_start": 341, "char_end": 349 }, { "id": "C88-2160.7", "char_start": 407, "char_end": 426 }, { "id": "C88-2160.8", "char_start": 455, "char_end": 488 }, { "id": "C88-2160.9", "char_start": 502, "char_end": 514 }, { "id": "C88-2160.10", "char_start": 520, "char_end": 526 }, { "id": "C88-2160.11", "char_start": 564, "char_end": 576 }, { "id": "C88-2160.12", "char_start": 587, "char_end": 596 } ]
[ { "label": 2, "arg1": "C88-2160.9", "arg2": "C88-2160.10", "reverse": true }, { "label": 3, "arg1": "C88-2160.11", "arg2": "C88-2160.12", "reverse": false } ]
C88-2162
NETL: A System for Representing and Using Real-World Knowledge
Computer programs so far have not fared well in modeling language acquisition . For one thing, learning methodology applicable in general domains does not readily lend itself in the linguistic domain . For another, linguistic representation used by language processing systems is not geared to learning . We introduced a new linguistic representation , the Dynamic Hierarchical Phrasal Lexicon (DHPL) [Zernik88], to facilitate language acquisition . From this, a language learning model was implemented in the program RINA , which enhances its own lexical hierarchy by processing examples in context. We identified two tasks: First, how linguistic concepts are acquired from training examples and organized in a hierarchy ; this task was discussed in previous papers [Zernik87]. Second, we show in this paper how a lexical hierarchy is used in predicting new linguistic concepts . Thus, a program does not stall even in the presence of a lexical unknown , and a hypothesis can be produced for covering that lexical gap .
[ { "id": "C88-2162.1", "char_start": 51, "char_end": 80 }, { "id": "C88-2162.2", "char_start": 98, "char_end": 118 }, { "id": "C88-2162.3", "char_start": 133, "char_end": 148 }, { "id": "C88-2162.4", "char_start": 185, "char_end": 202 }, { "id": "C88-2162.5", "char_start": 218, "char_end": 243 }, { "id": "C88-2162.6", "char_start": 252, "char_end": 279 }, { "id": "C88-2162.7", "char_start": 297, "char_end": 305 }, { "id": "C88-2162.8", "char_start": 328, "char_end": 353 }, { "id": "C88-2162.9", "char_start": 360, "char_end": 403 }, { "id": "C88-2162.10", "char_start": 430, "char_end": 450 }, { "id": "C88-2162.11", "char_start": 466, "char_end": 489 }, { "id": "C88-2162.12", "char_start": 521, "char_end": 525 }, { "id": "C88-2162.13", "char_start": 551, "char_end": 568 }, { "id": "C88-2162.14", "char_start": 640, "char_end": 659 }, { "id": "C88-2162.15", "char_start": 678, "char_end": 695 }, { "id": "C88-2162.16", "char_start": 715, "char_end": 724 }, { "id": "C88-2162.17", "char_start": 818, "char_end": 835 }, { "id": "C88-2162.18", "char_start": 862, "char_end": 881 }, { "id": "C88-2162.19", "char_start": 892, "char_end": 899 }, { "id": "C88-2162.20", "char_start": 941, "char_end": 956 }, { "id": "C88-2162.21", "char_start": 965, "char_end": 975 }, { "id": "C88-2162.22", "char_start": 1010, "char_end": 1021 } ]
[ { "label": 6, "arg1": "C88-2162.3", "arg2": "C88-2162.4", "reverse": false }, { "label": 1, "arg1": "C88-2162.5", "arg2": "C88-2162.6", "reverse": false }, { "label": 1, "arg1": "C88-2162.9", "arg2": "C88-2162.10", "reverse": false }, { "label": 4, "arg1": "C88-2162.11", "arg2": "C88-2162.13", "reverse": true }, { "label": 3, "arg1": "C88-2162.14", "arg2": "C88-2162.15", "reverse": false }, { "label": 1, "arg1": "C88-2162.17", "arg2": "C88-2162.18", "reverse": false }, { "label": 3, "arg1": "C88-2162.20", "arg2": "C88-2162.21", "reverse": true } ]
C88-2166
COMPLEX: A Computational Lexicon for Natural Language Systems
Although every natural language system needs a computational lexicon , each system puts different amounts and types of information into its lexicon according to its individual needs. However, some of the information needed across systems is shared or identical information. This paper presents our experience in planning and building COMPLEX , a computational lexicon designed to be a repository of shared lexical information for use by Natural Language Processing (NLP) systems . We have drawn primarily on explicit and implicit information from machine-readable dictionaries (MRD's) to create a broad coverage lexicon .
[ { "id": "C88-2166.1", "char_start": 18, "char_end": 41 }, { "id": "C88-2166.2", "char_start": 50, "char_end": 71 }, { "id": "C88-2166.3", "char_start": 143, "char_end": 150 }, { "id": "C88-2166.4", "char_start": 337, "char_end": 344 }, { "id": "C88-2166.5", "char_start": 349, "char_end": 370 }, { "id": "C88-2166.6", "char_start": 402, "char_end": 428 }, { "id": "C88-2166.7", "char_start": 440, "char_end": 481 }, { "id": "C88-2166.8", "char_start": 550, "char_end": 587 }, { "id": "C88-2166.9", "char_start": 600, "char_end": 622 } ]
[ { "label": 1, "arg1": "C88-2166.1", "arg2": "C88-2166.2", "reverse": true }, { "label": 4, "arg1": "C88-2166.5", "arg2": "C88-2166.6", "reverse": true }, { "label": 1, "arg1": "C88-2166.8", "arg2": "C88-2166.9", "reverse": false } ]
J88-3002
MODELING THE USER IN NATURAL LANGUAGE SYSTEMS
For intelligent interactive systems to communicate with humans in a natural manner, they must have knowledge about the system users . This paper explores the role of user modeling in such systems . It begins with a characterization of what a user model is and how it can be used. The types of information that a user model may be required to keep about a user are then identified and discussed. User models themselves can vary greatly depending on the requirements of the situation and the implementation, so several dimensions along which they can be classified are presented. Since acquiring the knowledge for a user model is a fundamental problem in user modeling , a section is devoted to this topic. Next, the benefits and costs of implementing a user modeling component for a system are weighed in light of several aspects of the interaction requirements that may be imposed by the system. Finally, the current state of research in user modeling is summarized, and future research topics that must be addressed in order to achieve powerful, general user modeling systems are assessed.
[ { "id": "J88-3002.1", "char_start": 7, "char_end": 38 }, { "id": "J88-3002.2", "char_start": 59, "char_end": 65 }, { "id": "J88-3002.3", "char_start": 122, "char_end": 134 }, { "id": "J88-3002.4", "char_start": 169, "char_end": 182 }, { "id": "J88-3002.5", "char_start": 191, "char_end": 198 }, { "id": "J88-3002.6", "char_start": 245, "char_end": 255 }, { "id": "J88-3002.7", "char_start": 315, "char_end": 325 }, { "id": "J88-3002.8", "char_start": 358, "char_end": 362 }, { "id": "J88-3002.9", "char_start": 398, "char_end": 409 }, { "id": "J88-3002.10", "char_start": 617, "char_end": 627 }, { "id": "J88-3002.11", "char_start": 656, "char_end": 669 }, { "id": "J88-3002.12", "char_start": 755, "char_end": 778 }, { "id": "J88-3002.13", "char_start": 839, "char_end": 863 }, { "id": "J88-3002.14", "char_start": 941, "char_end": 954 }, { "id": "J88-3002.15", "char_start": 1058, "char_end": 1079 } ]
[ { "label": 2, "arg1": "J88-3002.4", "arg2": "J88-3002.5", "reverse": false }, { "label": 3, "arg1": "J88-3002.7", "arg2": "J88-3002.8", "reverse": false }, { "label": 4, "arg1": "J88-3002.10", "arg2": "J88-3002.11", "reverse": false } ]
C90-1013
Generation for Dialogue Translation Using Typed Feature Structure Unification
This article introduces a bidirectional grammar generation system called feature structure-directed generation , developed for a dialogue translation system . The system utilizes typed feature structures to control the top-down derivation in a declarative way. This generation system also uses disjunctive feature structures to reduce the number of copies of the derivation tree . The grammar for this generator is designed to properly generate the speaker's intention in a telephone dialogue .
[ { "id": "C90-1013.1", "char_start": 29, "char_end": 68 }, { "id": "C90-1013.2", "char_start": 76, "char_end": 113 }, { "id": "C90-1013.3", "char_start": 132, "char_end": 159 }, { "id": "C90-1013.4", "char_start": 182, "char_end": 206 }, { "id": "C90-1013.5", "char_start": 222, "char_end": 241 }, { "id": "C90-1013.6", "char_start": 269, "char_end": 286 }, { "id": "C90-1013.7", "char_start": 297, "char_end": 327 }, { "id": "C90-1013.8", "char_start": 366, "char_end": 381 }, { "id": "C90-1013.9", "char_start": 388, "char_end": 395 }, { "id": "C90-1013.10", "char_start": 405, "char_end": 414 }, { "id": "C90-1013.11", "char_start": 452, "char_end": 471 }, { "id": "C90-1013.12", "char_start": 477, "char_end": 495 } ]
[ { "label": 1, "arg1": "C90-1013.2", "arg2": "C90-1013.3", "reverse": false }, { "label": 1, "arg1": "C90-1013.4", "arg2": "C90-1013.5", "reverse": false }, { "label": 1, "arg1": "C90-1013.6", "arg2": "C90-1013.7", "reverse": true }, { "label": 1, "arg1": "C90-1013.9", "arg2": "C90-1013.10", "reverse": false } ]
C90-2032
Sentence disambiguation by document oriented preference sets
This paper proposes document oriented preference sets (DoPS) for the disambiguation of the dependency structure of sentences . The DoPS system extracts preference knowledge from a target document or other documents automatically. Sentence ambiguities can be resolved by using domain targeted preference knowledge without using complicated large knowledgebases . Implementation and empirical results are described for the analysis of dependency structures of Japanese patent claim sentences .
[ { "id": "C90-2032.1", "char_start": 23, "char_end": 62 }, { "id": "C90-2032.2", "char_start": 93, "char_end": 113 }, { "id": "C90-2032.3", "char_start": 117, "char_end": 126 }, { "id": "C90-2032.4", "char_start": 133, "char_end": 144 }, { "id": "C90-2032.5", "char_start": 182, "char_end": 197 }, { "id": "C90-2032.6", "char_start": 207, "char_end": 216 }, { "id": "C90-2032.7", "char_start": 232, "char_end": 252 }, { "id": "C90-2032.8", "char_start": 347, "char_end": 361 }, { "id": "C90-2032.9", "char_start": 364, "char_end": 378 }, { "id": "C90-2032.10", "char_start": 383, "char_end": 400 }, { "id": "C90-2032.11", "char_start": 439, "char_end": 460 }, { "id": "C90-2032.12", "char_start": 464, "char_end": 495 } ]
[ { "label": 3, "arg1": "C90-2032.2", "arg2": "C90-2032.3", "reverse": false }, { "label": 1, "arg1": "C90-2032.4", "arg2": "C90-2032.5", "reverse": false }, { "label": 3, "arg1": "C90-2032.11", "arg2": "C90-2032.12", "reverse": false } ]
C90-3014
A phonological knowledge base system using unification-based formalism: a case study of Korean phonology
This paper describes the framework of a Korean phonological knowledge base system using the unification-based grammar formalism : Korean Phonology Structure Grammar (KPSG) . The approach of KPSG provides an explicit development model for constructing a computational phonological system : speech recognition and synthesis system . We show that the proposed approach has greater descriptive power than other approaches such as those employing a traditional generative phonological approach .
[ { "id": "C90-3014.1", "char_start": 43, "char_end": 84 }, { "id": "C90-3014.2", "char_start": 95, "char_end": 130 }, { "id": "C90-3014.3", "char_start": 133, "char_end": 174 }, { "id": "C90-3014.4", "char_start": 193, "char_end": 197 }, { "id": "C90-3014.5", "char_start": 270, "char_end": 289 }, { "id": "C90-3014.6", "char_start": 292, "char_end": 310 }, { "id": "C90-3014.7", "char_start": 315, "char_end": 331 }, { "id": "C90-3014.8", "char_start": 449, "char_end": 481 } ]
[ { "label": 1, "arg1": "C90-3014.1", "arg2": "C90-3014.2", "reverse": true }, { "label": 1, "arg1": "C90-3014.4", "arg2": "C90-3014.5", "reverse": false } ]
C90-3045
Synchronous Tree-Adjoining Grammars
The unique properties of tree-adjoining grammars (TAG) present a challenge for the application of TAGs beyond the limited confines of syntax , for instance, to the task of semantic interpretation or automatic translation of natural language . We present a variant of TAGs , called synchronous TAGs , which characterize correspondences between languages . The formalism's intended usage is to relate expressions of natural languages to their associated semantics represented in a logical form language , or to their translates in another natural language ; in summary, we intend it to allow TAGs to be used beyond their role in syntax proper . We discuss the application of synchronous TAGs to concrete examples, mentioning, primarily in passing, some computational issues that arise in its interpretation.
[ { "id": "C90-3045.1", "char_start": 28, "char_end": 57 }, { "id": "C90-3045.2", "char_start": 101, "char_end": 105 }, { "id": "C90-3045.3", "char_start": 137, "char_end": 143 }, { "id": "C90-3045.4", "char_start": 175, "char_end": 198 }, { "id": "C90-3045.5", "char_start": 202, "char_end": 243 }, { "id": "C90-3045.6", "char_start": 270, "char_end": 274 }, { "id": "C90-3045.7", "char_start": 284, "char_end": 300 }, { "id": "C90-3045.8", "char_start": 346, "char_end": 355 }, { "id": "C90-3045.9", "char_start": 402, "char_end": 434 }, { "id": "C90-3045.10", "char_start": 455, "char_end": 464 }, { "id": "C90-3045.11", "char_start": 482, "char_end": 503 }, { "id": "C90-3045.12", "char_start": 518, "char_end": 528 }, { "id": "C90-3045.13", "char_start": 540, "char_end": 556 }, { "id": "C90-3045.14", "char_start": 593, "char_end": 597 }, { "id": "C90-3045.15", "char_start": 630, "char_end": 643 }, { "id": "C90-3045.16", "char_start": 676, "char_end": 692 } ]
[ { "label": 6, "arg1": "C90-3045.3", "arg2": "C90-3045.4", "reverse": false }, { "label": 3, "arg1": "C90-3045.10", "arg2": "C90-3045.11", "reverse": true }, { "label": 3, "arg1": "C90-3045.12", "arg2": "C90-3045.13", "reverse": false }, { "label": 1, "arg1": "C90-3045.14", "arg2": "C90-3045.15", "reverse": false } ]
C90-3046
Japanese Sentence Analysis as Argumentation
This paper proposes that sentence analysis should be treated as defeasible reasoning , and presents such a treatment for Japanese sentence analyses using an argumentation system by Konolige, which is a formalization of defeasible reasoning , that includes arguments and defeat rules that capture defeasibility .
[ { "id": "C90-3046.1", "char_start": 28, "char_end": 45 }, { "id": "C90-3046.2", "char_start": 67, "char_end": 87 }, { "id": "C90-3046.3", "char_start": 124, "char_end": 150 }, { "id": "C90-3046.4", "char_start": 160, "char_end": 180 }, { "id": "C90-3046.5", "char_start": 205, "char_end": 218 }, { "id": "C90-3046.6", "char_start": 222, "char_end": 242 }, { "id": "C90-3046.7", "char_start": 259, "char_end": 268 }, { "id": "C90-3046.8", "char_start": 273, "char_end": 285 }, { "id": "C90-3046.9", "char_start": 299, "char_end": 312 } ]
[ { "label": 3, "arg1": "C90-3046.1", "arg2": "C90-3046.2", "reverse": false }, { "label": 1, "arg1": "C90-3046.3", "arg2": "C90-3046.4", "reverse": true }, { "label": 3, "arg1": "C90-3046.5", "arg2": "C90-3046.6", "reverse": false }, { "label": 3, "arg1": "C90-3046.8", "arg2": "C90-3046.9", "reverse": false } ]
C90-3072
Spelling-checking for Highly Inflective Languages
Spelling-checkers have become an integral part of most text processing software . For different reasons, among which the speed of processing prevails, they are usually based on dictionaries of word forms instead of words . This approach is sufficient for languages with little inflection such as English , but fails for highly inflective languages such as Czech , Russian , Slovak or other Slavonic languages . We have developed a special method for describing inflection for the purpose of building spelling-checkers for such languages. The speed of the resulting program lies somewhere in the middle of the scale of existing spelling-checkers for English and the main dictionary fits into the standard 360K floppy , whereas the number of recognized word forms exceeds 6 million (for Czech ). Further, a special method has been developed for easy word classification .
[ { "id": "C90-3072.1", "char_start": 3, "char_end": 20 }, { "id": "C90-3072.2", "char_start": 58, "char_end": 82 }, { "id": "C90-3072.3", "char_start": 179, "char_end": 205 }, { "id": "C90-3072.4", "char_start": 217, "char_end": 222 }, { "id": "C90-3072.5", "char_start": 279, "char_end": 289 }, { "id": "C90-3072.6", "char_start": 298, "char_end": 305 }, { "id": "C90-3072.7", "char_start": 322, "char_end": 349 }, { "id": "C90-3072.8", "char_start": 358, "char_end": 363 }, { "id": "C90-3072.9", "char_start": 366, "char_end": 373 }, { "id": "C90-3072.10", "char_start": 376, "char_end": 382 }, { "id": "C90-3072.11", "char_start": 392, "char_end": 410 }, { "id": "C90-3072.12", "char_start": 463, "char_end": 473 }, { "id": "C90-3072.13", "char_start": 502, "char_end": 519 }, { "id": "C90-3072.14", "char_start": 629, "char_end": 646 }, { "id": "C90-3072.15", "char_start": 651, "char_end": 658 }, { "id": "C90-3072.16", "char_start": 672, "char_end": 682 }, { "id": "C90-3072.17", "char_start": 706, "char_end": 717 }, { "id": "C90-3072.18", "char_start": 753, "char_end": 763 }, { "id": "C90-3072.19", "char_start": 787, "char_end": 792 }, { "id": "C90-3072.20", "char_start": 850, "char_end": 869 } ]
[ { "label": 4, "arg1": "C90-3072.1", "arg2": "C90-3072.2", "reverse": false }, { "label": 6, "arg1": "C90-3072.3", "arg2": "C90-3072.4", "reverse": false }, { "label": 3, "arg1": "C90-3072.5", "arg2": "C90-3072.6", "reverse": false }, { "label": 3, "arg1": "C90-3072.7", "arg2": "C90-3072.8", "reverse": false }, { "label": 1, "arg1": "C90-3072.14", "arg2": "C90-3072.15", "reverse": false }, { "label": 4, "arg1": "C90-3072.18", "arg2": "C90-3072.19", "reverse": false } ]
H90-1016
Toward a Real-Time Spoken Language System Using Commercial Hardware
We describe the methods and hardware that we are using to produce a real-time demonstration of an integrated Spoken Language System . We describe algorithms that greatly reduce the computation needed to compute the N-Best sentence hypotheses . To avoid grammar coverage problems we use a fully-connected first-order statistical class grammar . The speech-search algorithm is implemented on a board with a single Intel i860 chip , which provides a factor of 5 speed-up over a SUN 4 for straight C code . The board plugs directly into the VME bus of the SUN4 , which controls the system and contains the natural language system and application back end .
[ { "id": "H90-1016.1", "char_start": 101, "char_end": 134 }, { "id": "H90-1016.2", "char_start": 218, "char_end": 244 }, { "id": "H90-1016.3", "char_start": 256, "char_end": 281 }, { "id": "H90-1016.4", "char_start": 291, "char_end": 344 }, { "id": "H90-1016.5", "char_start": 351, "char_end": 374 }, { "id": "H90-1016.6", "char_start": 395, "char_end": 400 }, { "id": "H90-1016.7", "char_start": 415, "char_end": 430 }, { "id": "H90-1016.8", "char_start": 478, "char_end": 483 }, { "id": "H90-1016.9", "char_start": 488, "char_end": 503 }, { "id": "H90-1016.10", "char_start": 510, "char_end": 515 }, { "id": "H90-1016.11", "char_start": 540, "char_end": 547 }, { "id": "H90-1016.12", "char_start": 555, "char_end": 559 }, { "id": "H90-1016.13", "char_start": 605, "char_end": 628 }, { "id": "H90-1016.14", "char_start": 633, "char_end": 653 } ]
[ { "label": 1, "arg1": "H90-1016.3", "arg2": "H90-1016.4", "reverse": true }, { "label": 3, "arg1": "H90-1016.6", "arg2": "H90-1016.7", "reverse": true }, { "label": 4, "arg1": "H90-1016.11", "arg2": "H90-1016.12", "reverse": false } ]
H90-1060
A New Paradigm for Speaker-Independent Training and Speaker Adaptation
This paper reports on two contributions to large vocabulary continuous speech recognition . First, we present a new paradigm for speaker-independent (SI) training of hidden Markov models (HMM) , which uses a large amount of speech from a few speakers instead of the traditional practice of using a little speech from many speakers . In addition, combination of the training speakers is done by averaging the statistics of independently trained models , rather than the usual pooling of all the speech data from many speakers prior to training . With only 12 training speakers for SI recognition , we achieved a 7.5% word error rate on a standard grammar and test set from the DARPA Resource Management corpus . This performance is comparable to our best condition for this test suite, using 109 training speakers . Second, we show a significant improvement for speaker adaptation (SA) using the new SI corpus and a small amount of speech from the new (target) speaker . A probabilistic spectral mapping is estimated independently for each training (reference) speaker and the target speaker . Each reference model is transformed to the space of the target speaker and combined by averaging . Using only 40 utterances from the target speaker for adaptation , the error rate dropped to 4.1% --- a 45% reduction in error compared to the SI result.
[ { "id": "H90-1060.1", "char_start": 46, "char_end": 92 }, { "id": "H90-1060.2", "char_start": 132, "char_end": 165 }, { "id": "H90-1060.3", "char_start": 169, "char_end": 195 }, { "id": "H90-1060.4", "char_start": 227, "char_end": 233 }, { "id": "H90-1060.5", "char_start": 245, "char_end": 253 }, { "id": "H90-1060.6", "char_start": 308, "char_end": 314 }, { "id": "H90-1060.7", "char_start": 325, "char_end": 333 }, { "id": "H90-1060.8", "char_start": 368, "char_end": 385 }, { "id": "H90-1060.9", "char_start": 411, "char_end": 421 }, { "id": "H90-1060.10", "char_start": 425, "char_end": 453 }, { "id": "H90-1060.11", "char_start": 495, "char_end": 506 }, { "id": "H90-1060.12", "char_start": 517, "char_end": 525 }, { "id": "H90-1060.13", "char_start": 535, "char_end": 543 }, { "id": "H90-1060.14", "char_start": 559, "char_end": 576 }, { "id": "H90-1060.15", "char_start": 581, "char_end": 595 }, { "id": "H90-1060.16", "char_start": 617, "char_end": 632 }, { "id": "H90-1060.17", "char_start": 647, "char_end": 654 }, { "id": "H90-1060.18", "char_start": 659, "char_end": 667 }, { "id": "H90-1060.19", "char_start": 677, "char_end": 709 }, { "id": "H90-1060.20", "char_start": 717, "char_end": 728 }, { "id": "H90-1060.21", "char_start": 796, "char_end": 813 }, { "id": "H90-1060.22", "char_start": 862, "char_end": 885 }, { "id": "H90-1060.23", "char_start": 900, "char_end": 909 }, { "id": "H90-1060.24", "char_start": 932, "char_end": 938 }, { "id": "H90-1060.25", "char_start": 961, "char_end": 968 }, { "id": "H90-1060.26", "char_start": 973, "char_end": 1003 }, { "id": "H90-1060.27", "char_start": 1040, "char_end": 1068 }, { "id": "H90-1060.28", "char_start": 1077, "char_end": 1091 }, { "id": "H90-1060.29", "char_start": 1099, "char_end": 1114 }, { "id": "H90-1060.30", "char_start": 1137, "char_end": 1142 }, { "id": "H90-1060.31", "char_start": 1150, "char_end": 1164 }, { "id": "H90-1060.32", "char_start": 1181, "char_end": 1190 }, { "id": "H90-1060.33", "char_start": 1207, "char_end": 1217 }, { "id": "H90-1060.34", "char_start": 1227, "char_end": 1241 }, { "id": "H90-1060.35", "char_start": 1246, "char_end": 1256 }, { "id": "H90-1060.36", "char_start": 1263, "char_end": 1273 }, { "id": "H90-1060.37", "char_start": 1335, "char_end": 1337 } ]
[ { "label": 1, "arg1": "H90-1060.2", "arg2": "H90-1060.4", "reverse": true }, { "label": 6, "arg1": "H90-1060.9", "arg2": "H90-1060.11", "reverse": false }, { "label": 2, "arg1": "H90-1060.14", "arg2": "H90-1060.16", "reverse": false }, { "label": 4, "arg1": "H90-1060.18", "arg2": "H90-1060.19", "reverse": false }, { "label": 1, "arg1": "H90-1060.22", "arg2": "H90-1060.23", "reverse": true }, { "label": 3, "arg1": "H90-1060.26", "arg2": "H90-1060.27", "reverse": false }, { "label": 1, "arg1": "H90-1060.29", "arg2": "H90-1060.32", "reverse": true }, { "label": 1, "arg1": "H90-1060.33", "arg2": "H90-1060.35", "reverse": false } ]
J90-3002
AN EDITOR FOR THE EXPLANATORY AND COMBINATORY DICTIONARY OF CONTEMPORARY FRENCH (DECFC)
This paper presents a specialized editor for a highly structured dictionary . The basic goal in building that editor was to provide an adequate tool to help lexicologists produce a valid and coherent dictionary on the basis of a linguistic theory . If we want valuable lexicons and grammars to achieve complex natural language processing , we must provide very powerful tools to help create and ensure the validity of such complex linguistic databases . Our most important task in building the editor was to define a set of coherence rules that could be computationally applied to ensure the validity of lexical entries . A customized interface for browsing and editing was also designed and implemented.
[ { "id": "J90-3002.1", "char_start": 37, "char_end": 43 }, { "id": "J90-3002.2", "char_start": 68, "char_end": 78 }, { "id": "J90-3002.3", "char_start": 113, "char_end": 119 }, { "id": "J90-3002.4", "char_start": 160, "char_end": 173 }, { "id": "J90-3002.5", "char_start": 203, "char_end": 213 }, { "id": "J90-3002.6", "char_start": 232, "char_end": 249 }, { "id": "J90-3002.7", "char_start": 272, "char_end": 280 }, { "id": "J90-3002.8", "char_start": 285, "char_end": 293 }, { "id": "J90-3002.9", "char_start": 313, "char_end": 340 }, { "id": "J90-3002.10", "char_start": 434, "char_end": 454 }, { "id": "J90-3002.11", "char_start": 497, "char_end": 503 }, { "id": "J90-3002.12", "char_start": 527, "char_end": 542 }, { "id": "J90-3002.13", "char_start": 607, "char_end": 622 }, { "id": "J90-3002.14", "char_start": 638, "char_end": 647 } ]
[ { "label": 1, "arg1": "J90-3002.1", "arg2": "J90-3002.2", "reverse": false }, { "label": 1, "arg1": "J90-3002.3", "arg2": "J90-3002.4", "reverse": false }, { "label": 1, "arg1": "J90-3002.5", "arg2": "J90-3002.6", "reverse": true }, { "label": 1, "arg1": "J90-3002.8", "arg2": "J90-3002.9", "reverse": false } ]
P90-1014
Free Indexation: Combinatorial Analysis and A Compositional Algorithm
The principle known as free indexation plays an important role in the determination of the referential properties of noun phrases in the principle-and-parameters language framework . First, by investigating the combinatorics of free indexation , we show that the problem of enumerating all possible indexings requires exponential time . Secondly, we exhibit a provably optimal free indexation algorithm .
[ { "id": "P90-1014.1", "char_start": 26, "char_end": 41 }, { "id": "P90-1014.2", "char_start": 94, "char_end": 132 }, { "id": "P90-1014.3", "char_start": 140, "char_end": 183 }, { "id": "P90-1014.4", "char_start": 231, "char_end": 246 }, { "id": "P90-1014.5", "char_start": 302, "char_end": 311 }, { "id": "P90-1014.6", "char_start": 321, "char_end": 337 }, { "id": "P90-1014.7", "char_start": 380, "char_end": 405 } ]
[ { "label": 2, "arg1": "P90-1014.1", "arg2": "P90-1014.2", "reverse": false } ]
A92-1026
Robust Processing of Real-World Natural-Language Texts
It is often assumed that when natural language processing meets the real world, the ideal of aiming for complete and correct interpretations has to be abandoned. However, our experience with TACITUS , especially in the MUC-3 evaluation , has shown that principled techniques for syntactic and pragmatic analysis can be bolstered with methods for achieving robustness. We describe three techniques for making syntactic analysis more robust---an agenda-based scheduling parser , a recovery technique for failed parses , and a new technique called terminal substring parsing . For pragmatics processing , we describe how the method of abductive inference is inherently robust, in that an interpretation is always possible, so that in the absence of the required world knowledge , performance degrades gracefully. Each of these techniques has been evaluated and the results of the evaluations are presented.
[ { "id": "A92-1026.1", "char_start": 33, "char_end": 60 }, { "id": "A92-1026.2", "char_start": 194, "char_end": 201 }, { "id": "A92-1026.3", "char_start": 222, "char_end": 238 }, { "id": "A92-1026.4", "char_start": 282, "char_end": 314 }, { "id": "A92-1026.5", "char_start": 411, "char_end": 429 }, { "id": "A92-1026.6", "char_start": 447, "char_end": 477 }, { "id": "A92-1026.7", "char_start": 482, "char_end": 518 }, { "id": "A92-1026.8", "char_start": 548, "char_end": 574 }, { "id": "A92-1026.9", "char_start": 581, "char_end": 602 }, { "id": "A92-1026.10", "char_start": 635, "char_end": 654 }, { "id": "A92-1026.11", "char_start": 762, "char_end": 777 } ]
[ { "label": 1, "arg1": "A92-1026.5", "arg2": "A92-1026.6", "reverse": true }, { "label": 1, "arg1": "A92-1026.9", "arg2": "A92-1026.10", "reverse": true } ]
A92-1027
An Efficient Chart-based Algorithm for Partial-Parsing of Unrestricted Texts
We present an efficient algorithm for chart-based phrase structure parsing of natural language that is tailored to the problem of extracting specific information from unrestricted texts where many of the words are unknown and much of the text is irrelevant to the task. The parser gains algorithmic efficiency through a reduction of its search space . As each new edge is added to the chart , the algorithm checks only the topmost of the edges adjacent to it, rather than all such edges as in conventional treatments. The resulting spanning edges are ensured to be the correct ones by carefully controlling the order in which edges are introduced so that every final constituent covers the longest possible span . This is facilitated through the use of phrase boundary heuristics based on the placement of function words , and by heuristic rules that permit certain kinds of phrases to be deduced despite the presence of unknown words . A further reduction in the search space is achieved by using semantic rather than syntactic categories on the terminal and non-terminal edges , thereby reducing the amount of ambiguity and thus the number of edges , since only edges with a valid semantic interpretation are ever introduced.
[ { "id": "A92-1027.1", "char_start": 41, "char_end": 77 }, { "id": "A92-1027.2", "char_start": 81, "char_end": 97 }, { "id": "A92-1027.3", "char_start": 170, "char_end": 188 }, { "id": "A92-1027.4", "char_start": 207, "char_end": 212 }, { "id": "A92-1027.5", "char_start": 241, "char_end": 245 }, { "id": "A92-1027.6", "char_start": 277, "char_end": 283 }, { "id": "A92-1027.7", "char_start": 323, "char_end": 332 }, { "id": "A92-1027.8", "char_start": 340, "char_end": 352 }, { "id": "A92-1027.9", "char_start": 367, "char_end": 371 }, { "id": "A92-1027.10", "char_start": 388, "char_end": 393 }, { "id": "A92-1027.11", "char_start": 441, "char_end": 446 }, { "id": "A92-1027.12", "char_start": 484, "char_end": 489 }, { "id": "A92-1027.13", "char_start": 535, "char_end": 549 }, { "id": "A92-1027.14", "char_start": 629, "char_end": 634 }, { "id": "A92-1027.15", "char_start": 670, "char_end": 681 }, { "id": "A92-1027.16", "char_start": 710, "char_end": 714 }, { "id": "A92-1027.17", "char_start": 756, "char_end": 782 }, { "id": "A92-1027.18", "char_start": 809, "char_end": 823 }, { "id": "A92-1027.19", "char_start": 833, "char_end": 848 }, { "id": "A92-1027.20", "char_start": 878, "char_end": 885 }, { "id": "A92-1027.21", "char_start": 924, "char_end": 937 }, { "id": "A92-1027.22", "char_start": 950, "char_end": 979 }, { "id": "A92-1027.23", "char_start": 1001, "char_end": 1009 }, { "id": "A92-1027.24", "char_start": 1022, "char_end": 1042 }, { "id": "A92-1027.25", "char_start": 1050, "char_end": 1081 }, { "id": "A92-1027.26", "char_start": 1115, "char_end": 1124 }, { "id": "A92-1027.27", "char_start": 1148, "char_end": 1153 }, { "id": "A92-1027.28", "char_start": 1167, "char_end": 1172 }, { "id": "A92-1027.29", "char_start": 1186, "char_end": 1194 } ]
[ { "label": 1, "arg1": "A92-1027.1", "arg2": "A92-1027.2", "reverse": false }, { "label": 4, "arg1": "A92-1027.3", "arg2": "A92-1027.4", "reverse": true }, { "label": 6, "arg1": "A92-1027.11", "arg2": "A92-1027.12", "reverse": false }, { "label": 1, "arg1": "A92-1027.17", "arg2": "A92-1027.18", "reverse": true }, { "label": 4, "arg1": "A92-1027.20", "arg2": "A92-1027.21", "reverse": true }, { "label": 6, "arg1": "A92-1027.23", "arg2": "A92-1027.24", "reverse": false }, { "label": 2, "arg1": "A92-1027.26", "arg2": "A92-1027.27", "reverse": false }, { "label": 3, "arg1": "A92-1027.28", "arg2": "A92-1027.29", "reverse": true } ]
C92-1052
Temporal Structure of Discourse
In this paper discourse segments are defined and a method for discourse segmentation primarily based on abduction of temporal relations between segments is proposed. This method is precise and computationally feasible and is supported by previous work in the area of temporal anaphora resolution .
[ { "id": "C92-1052.1", "char_start": 17, "char_end": 35 }, { "id": "C92-1052.2", "char_start": 65, "char_end": 87 }, { "id": "C92-1052.3", "char_start": 107, "char_end": 116 }, { "id": "C92-1052.4", "char_start": 120, "char_end": 138 }, { "id": "C92-1052.5", "char_start": 147, "char_end": 155 }, { "id": "C92-1052.6", "char_start": 196, "char_end": 220 }, { "id": "C92-1052.7", "char_start": 270, "char_end": 298 } ]
[ { "label": 1, "arg1": "C92-1052.2", "arg2": "C92-1052.3", "reverse": true }, { "label": 3, "arg1": "C92-1052.4", "arg2": "C92-1052.5", "reverse": false } ]
C92-1055
Syntactic Ambiguity Resolution Using A Discrimination and Robustness Oriented Adaptive Learning Algorithm
In this paper, a discrimination and robustness oriented adaptive learning procedure is proposed to deal with the task of syntactic ambiguity resolution . Owing to the problem of insufficient training data and approximation error introduced by the language model , traditional statistical approaches , which resolve ambiguities by indirectly and implicitly using maximum likelihood method , fail to achieve high performance in real applications. The proposed method remedies these problems by adjusting the parameters to maximize the accuracy rate directly. To make the proposed algorithm robust, the possible variations between the training corpus and the real tasks are also taken into consideration by enlarging the separation margin between the correct candidate and its competing members. Significant improvement has been observed in the test. The accuracy rate of syntactic disambiguation is raised from 46.0% to 60.62% by using this novel approach.
[ { "id": "C92-1055.1", "char_start": 59, "char_end": 86 }, { "id": "C92-1055.2", "char_start": 124, "char_end": 154 }, { "id": "C92-1055.3", "char_start": 181, "char_end": 207 }, { "id": "C92-1055.4", "char_start": 212, "char_end": 231 }, { "id": "C92-1055.5", "char_start": 250, "char_end": 264 }, { "id": "C92-1055.6", "char_start": 279, "char_end": 301 }, { "id": "C92-1055.7", "char_start": 318, "char_end": 329 }, { "id": "C92-1055.8", "char_start": 365, "char_end": 390 }, { "id": "C92-1055.9", "char_start": 414, "char_end": 425 }, { "id": "C92-1055.10", "char_start": 536, "char_end": 549 }, { "id": "C92-1055.11", "char_start": 635, "char_end": 650 }, { "id": "C92-1055.12", "char_start": 721, "char_end": 738 }, { "id": "C92-1055.13", "char_start": 855, "char_end": 868 }, { "id": "C92-1055.14", "char_start": 872, "char_end": 896 } ]
[ { "label": 1, "arg1": "C92-1055.1", "arg2": "C92-1055.2", "reverse": false }, { "label": 2, "arg1": "C92-1055.4", "arg2": "C92-1055.5", "reverse": true }, { "label": 1, "arg1": "C92-1055.6", "arg2": "C92-1055.8", "reverse": true }, { "label": 2, "arg1": "C92-1055.13", "arg2": "C92-1055.14", "reverse": true } ]
C92-2068
Quasi-Destructive Graph Unification with Structure-Sharing
Graph unification remains the most expensive part of unification-based grammar parsing . We focus on one speed-up element in the design of unification algorithms : avoidance of copying of unmodified subgraphs . We propose a method of attaining such a design through a method of structure-sharing which avoids log(d) overheads often associated with structure-sharing of graphs without any use of costly dependency pointers . The proposed scheme eliminates redundant copying while maintaining the quasi-destructive scheme's ability to avoid over copying and early copying combined with its ability to handle cyclic structures without algorithmic additions.
[ { "id": "C92-2068.1", "char_start": 3, "char_end": 20 }, { "id": "C92-2068.2", "char_start": 56, "char_end": 89 }, { "id": "C92-2068.3", "char_start": 142, "char_end": 164 }, { "id": "C92-2068.4", "char_start": 180, "char_end": 187 }, { "id": "C92-2068.5", "char_start": 191, "char_end": 211 }, { "id": "C92-2068.6", "char_start": 281, "char_end": 298 }, { "id": "C92-2068.7", "char_start": 312, "char_end": 328 }, { "id": "C92-2068.8", "char_start": 351, "char_end": 378 }, { "id": "C92-2068.9", "char_start": 405, "char_end": 424 }, { "id": "C92-2068.10", "char_start": 458, "char_end": 475 }, { "id": "C92-2068.11", "char_start": 498, "char_end": 532 }, { "id": "C92-2068.12", "char_start": 542, "char_end": 554 }, { "id": "C92-2068.13", "char_start": 559, "char_end": 572 }, { "id": "C92-2068.14", "char_start": 609, "char_end": 626 } ]
[ { "label": 4, "arg1": "C92-2068.1", "arg2": "C92-2068.2", "reverse": false }, { "label": 3, "arg1": "C92-2068.7", "arg2": "C92-2068.8", "reverse": false } ]
C92-2115
A Similarity-Driven Transfer System
The transfer phase in machine translation (MT) systems has been considered to be more complicated than analysis and generation , since it is inherently a conglomeration of individual lexical rules . Currently some attempts are being made to use case-based reasoning in machine translation , that is, to make decisions on the basis of translation examples at appropriate points in MT . This paper proposes a new type of transfer system , called a Similarity-driven Transfer System (SimTran) , for use in such case-based MT (CBMT) .
[ { "id": "C92-2115.1", "char_start": 7, "char_end": 21 }, { "id": "C92-2115.2", "char_start": 25, "char_end": 57 }, { "id": "C92-2115.3", "char_start": 106, "char_end": 114 }, { "id": "C92-2115.4", "char_start": 119, "char_end": 129 }, { "id": "C92-2115.5", "char_start": 186, "char_end": 199 }, { "id": "C92-2115.6", "char_start": 248, "char_end": 268 }, { "id": "C92-2115.7", "char_start": 272, "char_end": 291 }, { "id": "C92-2115.8", "char_start": 337, "char_end": 357 }, { "id": "C92-2115.9", "char_start": 382, "char_end": 384 }, { "id": "C92-2115.10", "char_start": 421, "char_end": 436 }, { "id": "C92-2115.11", "char_start": 448, "char_end": 491 }, { "id": "C92-2115.12", "char_start": 510, "char_end": 530 } ]
[ { "label": 6, "arg1": "C92-2115.1", "arg2": "C92-2115.3", "reverse": false }, { "label": 1, "arg1": "C92-2115.6", "arg2": "C92-2115.7", "reverse": false }, { "label": 1, "arg1": "C92-2115.11", "arg2": "C92-2115.12", "reverse": false } ]
C92-3165
Interactive Speech Understanding
This paper introduces a robust interactive method for speech understanding . The generalized LR parsing is enhanced in this approach. Parsing proceeds from left to right, correcting minor errors. When a very noisy portion is detected, the parser skips that portion using a fake non-terminal symbol . The unidentified portion is resolved by re-utterance of that portion , which is parsed very efficiently by using the parse record of the first utterance . The user does not have to speak the whole sentence again. This method is also capable of handling unknown words , which is important in practical systems. Detected unknown words can be incrementally incorporated into the dictionary after the interaction with the user . A pilot system has shown great effectiveness of this approach.
[ { "id": "C92-3165.1", "char_start": 34, "char_end": 77 }, { "id": "C92-3165.2", "char_start": 84, "char_end": 106 }, { "id": "C92-3165.3", "char_start": 137, "char_end": 144 }, { "id": "C92-3165.4", "char_start": 216, "char_end": 223 }, { "id": "C92-3165.5", "char_start": 241, "char_end": 247 }, { "id": "C92-3165.6", "char_start": 259, "char_end": 266 }, { "id": "C92-3165.7", "char_start": 280, "char_end": 299 }, { "id": "C92-3165.8", "char_start": 319, "char_end": 326 }, { "id": "C92-3165.9", "char_start": 342, "char_end": 354 }, { "id": "C92-3165.10", "char_start": 363, "char_end": 370 }, { "id": "C92-3165.11", "char_start": 417, "char_end": 429 }, { "id": "C92-3165.12", "char_start": 443, "char_end": 452 }, { "id": "C92-3165.13", "char_start": 459, "char_end": 463 }, { "id": "C92-3165.14", "char_start": 497, "char_end": 505 }, { "id": "C92-3165.15", "char_start": 553, "char_end": 566 }, { "id": "C92-3165.16", "char_start": 619, "char_end": 632 }, { "id": "C92-3165.17", "char_start": 676, "char_end": 686 }, { "id": "C92-3165.18", "char_start": 718, "char_end": 722 }, { "id": "C92-3165.19", "char_start": 727, "char_end": 739 } ]
[ { "label": 1, "arg1": "C92-3165.5", "arg2": "C92-3165.7", "reverse": true }, { "label": 3, "arg1": "C92-3165.11", "arg2": "C92-3165.12", "reverse": false }, { "label": 4, "arg1": "C92-3165.16", "arg2": "C92-3165.17", "reverse": false } ]
C92-4199
Recognizing Unregistered Names for Mandarin Word Identification
Word Identification has been an important and active issue in Chinese Natural Language Processing . In this paper, a new mechanism, based on the concept of sublanguage , is proposed for identifying unknown words , especially personal names , in Chinese newspapers . The proposed mechanism includes title-driven name recognition , adaptive dynamic word formation , identification of 2-character and 3-character Chinese names without title . We will show the experimental results for two corpora and compare them with the results by NTHU's statistic-based system , the only system that we know has attacked the same problem. The experimental results have shown significant improvements over the WI systems without the name identification capability.
[ { "id": "C92-4199.1", "char_start": 3, "char_end": 22 }, { "id": "C92-4199.2", "char_start": 65, "char_end": 100 }, { "id": "C92-4199.3", "char_start": 159, "char_end": 170 }, { "id": "C92-4199.4", "char_start": 201, "char_end": 214 }, { "id": "C92-4199.5", "char_start": 228, "char_end": 242 }, { "id": "C92-4199.6", "char_start": 248, "char_end": 266 }, { "id": "C92-4199.7", "char_start": 301, "char_end": 330 }, { "id": "C92-4199.8", "char_start": 333, "char_end": 364 }, { "id": "C92-4199.9", "char_start": 367, "char_end": 440 }, { "id": "C92-4199.10", "char_start": 489, "char_end": 496 }, { "id": "C92-4199.11", "char_start": 538, "char_end": 567 }, { "id": "C92-4199.12", "char_start": 700, "char_end": 710 }, { "id": "C92-4199.13", "char_start": 723, "char_end": 742 } ]
[ { "label": 1, "arg1": "C92-4199.3", "arg2": "C92-4199.4", "reverse": false }, { "label": 4, "arg1": "C92-4199.5", "arg2": "C92-4199.6", "reverse": false } ]
C92-4207
Reconstructing Spatial Image from Natural Language Texts
This paper describes the understanding process of the spatial descriptions in Japanese . In order to understand the described world , the authors try to reconstruct the geometric model of the global scene from the scenic descriptions drawing a space. It is done by an experimental computer program SPRINT , which takes natural language texts and produces a model of the described world . To reconstruct the model , the authors extract the qualitative spatial constraints from the text , and represent them as the numerical constraints on the spatial attributes of the entities . This makes it possible to express the vagueness of the spatial concepts and to derive the maximally plausible interpretation from a chunk of information accumulated as the constraints. The interpretation reflects the temporary belief about the world .
[ { "id": "C92-4207.1", "char_start": 57, "char_end": 77 }, { "id": "C92-4207.2", "char_start": 81, "char_end": 89 }, { "id": "C92-4207.3", "char_start": 129, "char_end": 134 }, { "id": "C92-4207.4", "char_start": 284, "char_end": 300 }, { "id": "C92-4207.5", "char_start": 301, "char_end": 307 }, { "id": "C92-4207.6", "char_start": 322, "char_end": 344 }, { "id": "C92-4207.7", "char_start": 360, "char_end": 365 }, { "id": "C92-4207.8", "char_start": 383, "char_end": 388 }, { "id": "C92-4207.9", "char_start": 410, "char_end": 415 }, { "id": "C92-4207.10", "char_start": 442, "char_end": 473 }, { "id": "C92-4207.11", "char_start": 483, "char_end": 487 }, { "id": "C92-4207.12", "char_start": 516, "char_end": 537 }, { "id": "C92-4207.13", "char_start": 545, "char_end": 563 }, { "id": "C92-4207.14", "char_start": 571, "char_end": 579 }, { "id": "C92-4207.15", "char_start": 637, "char_end": 653 }, { "id": "C92-4207.16", "char_start": 799, "char_end": 815 }, { "id": "C92-4207.17", "char_start": 826, "char_end": 831 } ]
[ { "label": 4, "arg1": "C92-4207.1", "arg2": "C92-4207.2", "reverse": false }, { "label": 3, "arg1": "C92-4207.7", "arg2": "C92-4207.8", "reverse": false }, { "label": 4, "arg1": "C92-4207.10", "arg2": "C92-4207.11", "reverse": false }, { "label": 3, "arg1": "C92-4207.13", "arg2": "C92-4207.14", "reverse": false }, { "label": 3, "arg1": "C92-4207.16", "arg2": "C92-4207.17", "reverse": false } ]
H92-1003
Multi-Site Data Collection for a Spoken Language Corpus: MADCOW
This paper describes a recently collected spoken language corpus for the ATIS (Air Travel Information System) domain . This data collection effort has been co-ordinated by MADCOW (Multi-site ATIS Data COllection Working group) . We summarize the motivation for this effort, the goals, the implementation of a multi-site data collection paradigm , and the accomplishments of MADCOW in monitoring the collection and distribution of 12,000 utterances of spontaneous speech from five sites for use in a multi-site common evaluation of speech, natural language and spoken language .
[ { "id": "H92-1003.1", "char_start": 45, "char_end": 67 }, { "id": "H92-1003.2", "char_start": 76, "char_end": 119 }, { "id": "H92-1003.3", "char_start": 175, "char_end": 229 }, { "id": "H92-1003.4", "char_start": 312, "char_end": 347 }, { "id": "H92-1003.5", "char_start": 377, "char_end": 383 }, { "id": "H92-1003.6", "char_start": 402, "char_end": 412 }, { "id": "H92-1003.7", "char_start": 440, "char_end": 450 }, { "id": "H92-1003.8", "char_start": 454, "char_end": 472 }, { "id": "H92-1003.9", "char_start": 502, "char_end": 578 } ]
[ { "label": 3, "arg1": "H92-1003.1", "arg2": "H92-1003.2", "reverse": true }, { "label": 3, "arg1": "H92-1003.7", "arg2": "H92-1003.8", "reverse": true } ]
H92-1010
Spoken Language Processing in the Framework of Human-Machine Communication at LIMSI
The paper provides an overview of the research conducted at LIMSI in the field of speech processing , but also in the related areas of Human-Machine Communication , including Natural Language Processing , Non Verbal and Multimodal Communication . Also presented are the commercial applications of some of the research projects. When applicable, the discussion is placed in the framework of international collaborations.
[ { "id": "H92-1010.1", "char_start": 63, "char_end": 68 }, { "id": "H92-1010.2", "char_start": 85, "char_end": 102 }, { "id": "H92-1010.3", "char_start": 138, "char_end": 165 }, { "id": "H92-1010.4", "char_start": 178, "char_end": 205 }, { "id": "H92-1010.5", "char_start": 208, "char_end": 247 } ]
[ { "label": 6, "arg1": "H92-1010.2", "arg2": "H92-1010.3", "reverse": false } ]
H92-1016
The MIT ATIS System: February 1992 Progress Report
This paper describes the status of the MIT ATIS system as of February 1992, focusing especially on the changes made to the SUMMIT recognizer . These include context-dependent phonetic modelling , the use of a bigram language model in conjunction with a probabilistic LR parser , and refinements made to the lexicon . Together with the use of a larger training set , these modifications combined to reduce the speech recognition word and sentence error rates by a factor of 2.5 and 1.6, respectively, on the October '91 test set . The weighted error for the entire spoken language system on the same test set is 49.3%. Similar results were also obtained on the February '92 benchmark evaluation .
[ { "id": "H92-1016.1", "char_start": 42, "char_end": 57 }, { "id": "H92-1016.2", "char_start": 126, "char_end": 143 }, { "id": "H92-1016.3", "char_start": 160, "char_end": 196 }, { "id": "H92-1016.4", "char_start": 212, "char_end": 233 }, { "id": "H92-1016.5", "char_start": 256, "char_end": 279 }, { "id": "H92-1016.6", "char_start": 310, "char_end": 317 }, { "id": "H92-1016.7", "char_start": 354, "char_end": 366 }, { "id": "H92-1016.8", "char_start": 412, "char_end": 460 }, { "id": "H92-1016.9", "char_start": 510, "char_end": 530 }, { "id": "H92-1016.10", "char_start": 567, "char_end": 589 }, { "id": "H92-1016.11", "char_start": 602, "char_end": 610 }, { "id": "H92-1016.12", "char_start": 663, "char_end": 696 } ]
[]
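One component named in the H92-1016 entry above is a bigram language model. A generic sketch of relative-frequency bigram estimation with add-one smoothing follows; the tokenization and smoothing choices are assumptions, not the MIT system's:

```python
from collections import Counter

def train_bigram(sentences):
    """Relative-frequency bigram model with add-one smoothing.

    Returns p(w2 | w1) = (count(w1, w2) + 1) / (count(w1) + V).
    """
    unigrams, bigrams, vocab = Counter(), Counter(), set()
    for sent in sentences:
        toks = ["<s>"] + sent.split() + ["</s>"]
        vocab.update(toks)
        unigrams.update(toks[:-1])            # histories only
        bigrams.update(zip(toks, toks[1:]))
    V = len(vocab)
    return lambda w1, w2: (bigrams[(w1, w2)] + 1) / (unigrams[w1] + V)

p = train_bigram(["show me flights", "show me fares"])
print(p("show", "me"))      # 0.375
print(p("me", "flights"))   # 0.25
```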
H92-1017
Recent Improvements and Benchmark Results for Paramax ATIS System
This paper describes three relatively domain-independent capabilities recently added to the Paramax spoken language understanding system : non-monotonic reasoning , implicit reference resolution , and database query paraphrase . In addition, we discuss the results of the February 1992 ATIS benchmark tests . We describe a variation on the standard evaluation metric which provides a more tightly controlled measure of progress. Finally, we briefly describe an experiment which we have done in extending the n-best speech/language integration architecture to improving OCR accuracy .
[ { "id": "H92-1017.1", "char_start": 41, "char_end": 72 }, { "id": "H92-1017.2", "char_start": 95, "char_end": 139 }, { "id": "H92-1017.3", "char_start": 142, "char_end": 165 }, { "id": "H92-1017.4", "char_start": 168, "char_end": 197 }, { "id": "H92-1017.5", "char_start": 204, "char_end": 229 }, { "id": "H92-1017.6", "char_start": 275, "char_end": 309 }, { "id": "H92-1017.7", "char_start": 343, "char_end": 369 }, { "id": "H92-1017.8", "char_start": 511, "char_end": 558 }, { "id": "H92-1017.9", "char_start": 572, "char_end": 575 }, { "id": "H92-1017.10", "char_start": 576, "char_end": 584 } ]
[ { "label": 3, "arg1": "H92-1017.1", "arg2": "H92-1017.2", "reverse": false }, { "label": 2, "arg1": "H92-1017.8", "arg2": "H92-1017.10", "reverse": false } ]
H92-1026
Towards History-based Grammars: Using Richer Models for Probabilistic Parsing
We describe a generative probabilistic model of natural language , which we call HBG , that takes advantage of detailed linguistic information to resolve ambiguity . HBG incorporates lexical, syntactic, semantic, and structural information from the parse tree into the disambiguation process in a novel way. We use a corpus of bracketed sentences , called a Treebank , in combination with decision tree building to tease out the relevant aspects of a parse tree that will determine the correct parse of a sentence . This stands in contrast to the usual approach of further grammar tailoring via the usual linguistic introspection in the hope of generating the correct parse . In head-to-head tests against one of the best existing robust probabilistic parsing models , which we call P-CFG, the HBG model significantly outperforms P-CFG , increasing the parsing accuracy rate from 60% to 75%, a 37% reduction in error.
[ { "id": "H92-1026.1", "char_start": 17, "char_end": 67 }, { "id": "H92-1026.2", "char_start": 84, "char_end": 87 }, { "id": "H92-1026.3", "char_start": 123, "char_end": 145 }, { "id": "H92-1026.4", "char_start": 157, "char_end": 166 }, { "id": "H92-1026.5", "char_start": 170, "char_end": 173 }, { "id": "H92-1026.6", "char_start": 187, "char_end": 243 }, { "id": "H92-1026.7", "char_start": 253, "char_end": 263 }, { "id": "H92-1026.8", "char_start": 273, "char_end": 295 }, { "id": "H92-1026.9", "char_start": 321, "char_end": 350 }, { "id": "H92-1026.10", "char_start": 362, "char_end": 370 }, { "id": "H92-1026.11", "char_start": 393, "char_end": 415 }, { "id": "H92-1026.12", "char_start": 455, "char_end": 465 }, { "id": "H92-1026.13", "char_start": 498, "char_end": 503 }, { "id": "H92-1026.14", "char_start": 509, "char_end": 517 }, { "id": "H92-1026.15", "char_start": 577, "char_end": 584 }, { "id": "H92-1026.16", "char_start": 609, "char_end": 633 }, { "id": "H92-1026.17", "char_start": 672, "char_end": 677 }, { "id": "H92-1026.18", "char_start": 683, "char_end": 701 }, { "id": "H92-1026.19", "char_start": 742, "char_end": 770 }, { "id": "H92-1026.20", "char_start": 787, "char_end": 792 }, { "id": "H92-1026.21", "char_start": 798, "char_end": 807 }, { "id": "H92-1026.22", "char_start": 834, "char_end": 839 }, { "id": "H92-1026.23", "char_start": 857, "char_end": 873 } ]
[ { "label": 1, "arg1": "H92-1026.3", "arg2": "H92-1026.4", "reverse": false }, { "label": 1, "arg1": "H92-1026.5", "arg2": "H92-1026.6", "reverse": true }, { "label": 3, "arg1": "H92-1026.13", "arg2": "H92-1026.14", "reverse": false }, { "label": 1, "arg1": "H92-1026.15", "arg2": "H92-1026.16", "reverse": true }, { "label": 6, "arg1": "H92-1026.21", "arg2": "H92-1026.22", "reverse": false } ]
H92-1036
MAP Estimation of Continuous Density HMM: Theory and Applications
We discuss maximum a posteriori estimation of continuous density hidden Markov models (CDHMM) . The classical MLE reestimation algorithms , namely the forward-backward algorithm and the segmental k-means algorithm , are expanded and reestimation formulas are given for HMM with Gaussian mixture observation densities . Because of its adaptive nature, Bayesian learning serves as a unified approach for the following four speech recognition applications, namely parameter smoothing , speaker adaptation , speaker group modeling and corrective training . New experimental results on all four applications are provided to show the effectiveness of the MAP estimation approach .
[ { "id": "H92-1036.1", "char_start": 14, "char_end": 45 }, { "id": "H92-1036.2", "char_start": 49, "char_end": 96 }, { "id": "H92-1036.3", "char_start": 113, "char_end": 140 }, { "id": "H92-1036.4", "char_start": 154, "char_end": 180 }, { "id": "H92-1036.5", "char_start": 189, "char_end": 216 }, { "id": "H92-1036.6", "char_start": 236, "char_end": 257 }, { "id": "H92-1036.7", "char_start": 272, "char_end": 319 }, { "id": "H92-1036.8", "char_start": 354, "char_end": 371 }, { "id": "H92-1036.9", "char_start": 424, "char_end": 442 }, { "id": "H92-1036.10", "char_start": 464, "char_end": 483 }, { "id": "H92-1036.11", "char_start": 486, "char_end": 504 }, { "id": "H92-1036.12", "char_start": 507, "char_end": 529 }, { "id": "H92-1036.13", "char_start": 534, "char_end": 553 }, { "id": "H92-1036.14", "char_start": 652, "char_end": 675 } ]
[ { "label": 1, "arg1": "H92-1036.6", "arg2": "H92-1036.7", "reverse": false }, { "label": 1, "arg1": "H92-1036.8", "arg2": "H92-1036.9", "reverse": false } ]
H92-1045
One Sense Per Discourse
It is well-known that there are polysemous words like sentence whose meaning or sense depends on the context of use. We have recently reported on two new word-sense disambiguation systems , one trained on bilingual material (the Canadian Hansards ) and the other trained on monolingual material ( Roget's Thesaurus and Grolier's Encyclopedia ). As this work was nearing completion, we observed a very strong discourse effect. That is, if a polysemous word such as sentence appears two or more times in a well-written discourse , it is extremely likely that they will all share the same sense . This paper describes an experiment which confirmed this hypothesis and found that the tendency to share sense in the same discourse is extremely strong (98%). This result can be used as an additional source of constraint for improving the performance of the word-sense disambiguation algorithm . In addition, it could also be used to help evaluate disambiguation algorithms that did not make use of the discourse constraint .
[ { "id": "H92-1045.1", "char_start": 35, "char_end": 51 }, { "id": "H92-1045.2", "char_start": 57, "char_end": 65 }, { "id": "H92-1045.3", "char_start": 72, "char_end": 79 }, { "id": "H92-1045.4", "char_start": 83, "char_end": 88 }, { "id": "H92-1045.5", "char_start": 157, "char_end": 190 }, { "id": "H92-1045.6", "char_start": 208, "char_end": 226 }, { "id": "H92-1045.7", "char_start": 232, "char_end": 249 }, { "id": "H92-1045.8", "char_start": 277, "char_end": 297 }, { "id": "H92-1045.9", "char_start": 300, "char_end": 317 }, { "id": "H92-1045.10", "char_start": 322, "char_end": 344 }, { "id": "H92-1045.11", "char_start": 411, "char_end": 420 }, { "id": "H92-1045.12", "char_start": 443, "char_end": 458 }, { "id": "H92-1045.13", "char_start": 467, "char_end": 475 }, { "id": "H92-1045.14", "char_start": 507, "char_end": 529 }, { "id": "H92-1045.15", "char_start": 589, "char_end": 594 }, { "id": "H92-1045.16", "char_start": 701, "char_end": 706 }, { "id": "H92-1045.17", "char_start": 719, "char_end": 728 }, { "id": "H92-1045.18", "char_start": 807, "char_end": 817 }, { "id": "H92-1045.19", "char_start": 855, "char_end": 890 }, { "id": "H92-1045.20", "char_start": 945, "char_end": 970 }, { "id": "H92-1045.21", "char_start": 1000, "char_end": 1020 } ]
[ { "label": 3, "arg1": "H92-1045.2", "arg2": "H92-1045.3", "reverse": true }, { "label": 1, "arg1": "H92-1045.5", "arg2": "H92-1045.6", "reverse": true }, { "label": 4, "arg1": "H92-1045.13", "arg2": "H92-1045.14", "reverse": false }, { "label": 4, "arg1": "H92-1045.16", "arg2": "H92-1045.17", "reverse": false }, { "label": 1, "arg1": "H92-1045.18", "arg2": "H92-1045.19", "reverse": false }, { "label": 1, "arg1": "H92-1045.20", "arg2": "H92-1045.21", "reverse": true } ]
P06-1112
Exploring Correlation Of Dependency Relation Paths For Answer Extraction
In this paper, we explore correlation of dependency relation paths to rank candidate answers in answer extraction. Using the correlation measure, we compare dependency relations of a candidate answer and mapped question phrases in the sentence with the corresponding relations in the question. Different from previous studies, we propose an approximate phrase mapping algorithm and incorporate the mapping score into the correlation measure. The correlations are further incorporated into a Maximum Entropy-based ranking model which estimates path weights from training. Experimental results show that our method significantly outperforms state-of-the-art syntactic relation-based methods by up to 20% in MRR.
[ { "id": "P06-1112.1", "char_start": 42, "char_end": 67 }, { "id": "P06-1112.2", "char_start": 97, "char_end": 114 }, { "id": "P06-1112.3", "char_start": 126, "char_end": 145 }, { "id": "P06-1112.4", "char_start": 158, "char_end": 178 }, { "id": "P06-1112.5", "char_start": 212, "char_end": 228 }, { "id": "P06-1112.6", "char_start": 232, "char_end": 240 }, { "id": "P06-1112.7", "char_start": 264, "char_end": 273 }, { "id": "P06-1112.8", "char_start": 334, "char_end": 370 }, { "id": "P06-1112.9", "char_start": 391, "char_end": 404 }, { "id": "P06-1112.10", "char_start": 414, "char_end": 433 }, { "id": "P06-1112.11", "char_start": 484, "char_end": 519 }, { "id": "P06-1112.12", "char_start": 536, "char_end": 548 }, { "id": "P06-1112.13", "char_start": 649, "char_end": 681 }, { "id": "P06-1112.14", "char_start": 698, "char_end": 701 } ]
[ { "label": 1, "arg1": "P06-1112.1", "arg2": "P06-1112.2", "reverse": false }, { "label": 3, "arg1": "P06-1112.4", "arg2": "P06-1112.5", "reverse": false }, { "label": 4, "arg1": "P06-1112.9", "arg2": "P06-1112.10", "reverse": false } ]
C90-3063
Automatic Processing Of Large Corpora For The Resolution Of Anaphora References
Manual acquisition of semantic constraints in broad domains is very expensive. This paper presents an automatic scheme for collecting statistics on cooccurrence patterns in a large corpus. To a large extent, these statistics reflect semantic constraints and thus are used to disambiguate anaphora references and syntactic ambiguities. The scheme was implemented by gathering statistics on the output of other linguistic tools. An experiment was performed to resolve references of the pronoun "it" in sentences that were randomly selected from the corpus. The results of the experiment show that in most of the cases the cooccurrence statistics indeed reflect the semantic constraints and thus provide a basis for a useful disambiguation tool.
[ { "id": "C90-3063.1", "char_start": 1, "char_end": 19 }, { "id": "C90-3063.2", "char_start": 23, "char_end": 43 }, { "id": "C90-3063.3", "char_start": 149, "char_end": 170 }, { "id": "C90-3063.4", "char_start": 182, "char_end": 188 }, { "id": "C90-3063.5", "char_start": 234, "char_end": 254 }, { "id": "C90-3063.6", "char_start": 289, "char_end": 308 }, { "id": "C90-3063.7", "char_start": 313, "char_end": 334 }, { "id": "C90-3063.8", "char_start": 467, "char_end": 477 }, { "id": "C90-3063.9", "char_start": 485, "char_end": 497 }, { "id": "C90-3063.10", "char_start": 501, "char_end": 510 }, { "id": "C90-3063.11", "char_start": 548, "char_end": 554 }, { "id": "C90-3063.12", "char_start": 621, "char_end": 644 }, { "id": "C90-3063.13", "char_start": 664, "char_end": 684 }, { "id": "C90-3063.14", "char_start": 723, "char_end": 742 } ]
[ { "label": 4, "arg1": "C90-3063.3", "arg2": "C90-3063.4", "reverse": false }, { "label": 1, "arg1": "C90-3063.5", "arg2": "C90-3063.6", "reverse": false }, { "label": 3, "arg1": "C90-3063.8", "arg2": "C90-3063.9", "reverse": false }, { "label": 4, "arg1": "C90-3063.10", "arg2": "C90-3063.11", "reverse": false }, { "label": 1, "arg1": "C90-3063.12", "arg2": "C90-3063.14", "reverse": false } ]
C04-1011
Kullback-Leibler Distance Between Probabilistic Context-Free Grammars And Probabilistic Finite Automata
We consider the problem of computing the Kullback-Leibler distance, also called the relative entropy, between a probabilistic context-free grammar and a probabilistic finite automaton. We show that there is a closed-form (analytical) solution for one part of the Kullback-Leibler distance, viz. the cross-entropy. We discuss several applications of the result to the problem of distributional approximation of probabilistic context-free grammars by means of probabilistic finite automata.
[ { "id": "C04-1011.1", "char_start": 42, "char_end": 67 }, { "id": "C04-1011.2", "char_start": 85, "char_end": 101 }, { "id": "C04-1011.3", "char_start": 113, "char_end": 147 }, { "id": "C04-1011.4", "char_start": 154, "char_end": 184 }, { "id": "C04-1011.5", "char_start": 210, "char_end": 243 }, { "id": "C04-1011.6", "char_start": 264, "char_end": 289 }, { "id": "C04-1011.7", "char_start": 300, "char_end": 313 }, { "id": "C04-1011.8", "char_start": 379, "char_end": 407 }, { "id": "C04-1011.9", "char_start": 411, "char_end": 446 }, { "id": "C04-1011.10", "char_start": 459, "char_end": 488 } ]
[ { "label": 6, "arg1": "C04-1011.3", "arg2": "C04-1011.4", "reverse": false }, { "label": 4, "arg1": "C04-1011.6", "arg2": "C04-1011.7", "reverse": true }, { "label": 1, "arg1": "C04-1011.8", "arg2": "C04-1011.10", "reverse": true } ]
A94-1037
Spelling Correction In Agglutinative Languages
Methods developed for spelling correction for languages like English (see the review by Kukich (Kukich, 1992)) are not readily applicable to agglutinative languages. This poster presents an approach to spelling correction in agglutinative languages that is based on two-level morphology and a dynamic-programming based search algorithm. After an overview of our approach, we present results from experiments with spelling correction in Turkish.
[ { "id": "A94-1037.1", "char_start": 23, "char_end": 42 }, { "id": "A94-1037.2", "char_start": 47, "char_end": 56 }, { "id": "A94-1037.3", "char_start": 62, "char_end": 69 }, { "id": "A94-1037.4", "char_start": 142, "char_end": 165 }, { "id": "A94-1037.5", "char_start": 203, "char_end": 222 }, { "id": "A94-1037.6", "char_start": 226, "char_end": 249 }, { "id": "A94-1037.7", "char_start": 267, "char_end": 287 }, { "id": "A94-1037.8", "char_start": 294, "char_end": 336 }, { "id": "A94-1037.9", "char_start": 414, "char_end": 433 }, { "id": "A94-1037.10", "char_start": 437, "char_end": 444 } ]
[ { "label": 1, "arg1": "A94-1037.1", "arg2": "A94-1037.2", "reverse": false }, { "label": 1, "arg1": "A94-1037.5", "arg2": "A94-1037.6", "reverse": false }, { "label": 1, "arg1": "A94-1037.9", "arg2": "A94-1037.10", "reverse": false } ]
H94-1102
Robust Continuous Speech Recognition Technology Program Summary
The major objective of this program is to develop and demonstrate robust, high performance continuous speech recognition (CSR) techniques focussed on application in Spoken Language Systems (SLS) which will enhance the effectiveness of military and civilian computer-based systems. A key complementary objective is to define and develop applications of robust speech recognition and understanding systems, and to help catalyze the transition of spoken language technology into military and civilian systems, with particular focus on application of robust CSR to mobile military command and control. The research effort focusses on developing advanced acoustic modelling, rapid search, and recognition-time adaptation techniques for robust large-vocabulary CSR, and on applying these techniques to the new ARPA large-vocabulary CSR corpora and to military application tasks.
[ { "id": "H94-1102.1", "char_start": 92, "char_end": 138 }, { "id": "H94-1102.2", "char_start": 166, "char_end": 195 }, { "id": "H94-1102.3", "char_start": 236, "char_end": 280 }, { "id": "H94-1102.4", "char_start": 360, "char_end": 404 }, { "id": "H94-1102.5", "char_start": 445, "char_end": 471 }, { "id": "H94-1102.6", "char_start": 477, "char_end": 506 }, { "id": "H94-1102.7", "char_start": 555, "char_end": 558 }, { "id": "H94-1102.8", "char_start": 562, "char_end": 597 }, { "id": "H94-1102.9", "char_start": 651, "char_end": 669 }, { "id": "H94-1102.10", "char_start": 689, "char_end": 727 }, { "id": "H94-1102.11", "char_start": 739, "char_end": 759 }, { "id": "H94-1102.12", "char_start": 805, "char_end": 838 } ]
[ { "label": 1, "arg1": "H94-1102.1", "arg2": "H94-1102.2", "reverse": false }, { "label": 1, "arg1": "H94-1102.5", "arg2": "H94-1102.6", "reverse": false }, { "label": 1, "arg1": "H94-1102.7", "arg2": "H94-1102.8", "reverse": false }, { "label": 1, "arg1": "H94-1102.11", "arg2": "H94-1102.12", "reverse": false } ]
P98-1118
A Framework for Customizable Generation of Hypertext Presentations
In this paper, we present a framework, Presentor, for the development and customization of hypertext presentation generators. Presentor offers intuitive and powerful declarative languages for specifying the presentation at different levels: macro-planning, micro-planning, realization, and formatting. Presentor is implemented and is portable cross-platform and cross-domain. It has been used with success in several application domains including weather forecasting, object modeling, system description and requirements summarization.
[ { "id": "P98-1118.1", "char_start": 40, "char_end": 49 }, { "id": "P98-1118.2", "char_start": 92, "char_end": 125 }, { "id": "P98-1118.3", "char_start": 127, "char_end": 136 }, { "id": "P98-1118.4", "char_start": 167, "char_end": 188 }, { "id": "P98-1118.5", "char_start": 299, "char_end": 308 }, { "id": "P98-1118.6", "char_start": 465, "char_end": 480 }, { "id": "P98-1118.7", "char_start": 505, "char_end": 531 } ]
[ { "label": 4, "arg1": "P98-1118.3", "arg2": "P98-1118.4", "reverse": true } ]
A92-1023
A Practical Methodology For The Evaluation Of Spoken Language Systems
A meaningful evaluation methodology can advance the state-of-the-art by encouraging mature, practical applications rather than "toy" implementations. Evaluation is also crucial to assessing competing claims and identifying promising technical approaches. While work in speech recognition (SR) has a history of evaluation methodologies that permit comparison among various systems, until recently no methodology existed for either developers of natural language (NL) interfaces or researchers in speech understanding (SU) to evaluate and compare the systems they developed. Recently considerable progress has been made by a number of groups involved in the DARPA Spoken Language Systems (SLS) program to agree on a methodology for comparative evaluation of SLS systems, and that methodology has been put into practice several times in comparative tests of several SLS systems. These evaluations are probably the only NL evaluations other than the series of Message Understanding Conferences (Sundheim, 1989; Sundheim, 1991) to have been developed and used by a group of researchers at different sites, although several excellent workshops have been held to study some of these problems (Palmer et al., 1989; Neal et al., 1991). This paper describes a practical "black-box" methodology for automatic evaluation of question-answering NL systems. While each new application domain will require some development of special resources, the heart of the methodology is domain-independent, and it can be used with either speech or text input. The particular characteristics of the approach are described in the following section: subsequent sections present its implementation in the DARPA SLS community, and some problems and directions for future development.
[ { "id": "A92-1023.1", "char_start": 270, "char_end": 293 }, { "id": "A92-1023.2", "char_start": 445, "char_end": 477 }, { "id": "A92-1023.3", "char_start": 496, "char_end": 521 }, { "id": "A92-1023.4", "char_start": 657, "char_end": 700 }, { "id": "A92-1023.5", "char_start": 757, "char_end": 768 }, { "id": "A92-1023.6", "char_start": 864, "char_end": 875 }, { "id": "A92-1023.7", "char_start": 917, "char_end": 931 }, { "id": "A92-1023.8", "char_start": 957, "char_end": 990 }, { "id": "A92-1023.9", "char_start": 1261, "char_end": 1284 }, { "id": "A92-1023.10", "char_start": 1313, "char_end": 1342 }, { "id": "A92-1023.11", "char_start": 1513, "char_end": 1533 }, { "id": "A92-1023.12", "char_start": 1676, "char_end": 1695 } ]
[ { "label": 5, "arg1": "A92-1023.4", "arg2": "A92-1023.5", "reverse": false }, { "label": 5, "arg1": "A92-1023.7", "arg2": "A92-1023.8", "reverse": true }, { "label": 1, "arg1": "A92-1023.9", "arg2": "A92-1023.10", "reverse": false } ]
P06-1053
Integrating Syntactic Priming Into An Incremental Probabilistic Parser, With An Application To Psycholinguistic Modeling
The psycholinguistic literature provides evidence for syntactic priming, i.e., the tendency to repeat structures. This paper describes a method for incorporating priming into an incremental probabilistic parser. Three models are compared, which involve priming of rules between sentences, within sentences, and within coordinate structures. These models simulate the reading time advantage for parallel structures found in human data, and also yield a small increase in overall parsing accuracy.
[ { "id": "P06-1053.1", "char_start": 5, "char_end": 32 }, { "id": "P06-1053.2", "char_start": 55, "char_end": 72 }, { "id": "P06-1053.3", "char_start": 163, "char_end": 170 }, { "id": "P06-1053.4", "char_start": 179, "char_end": 211 }, { "id": "P06-1053.5", "char_start": 254, "char_end": 261 }, { "id": "P06-1053.6", "char_start": 265, "char_end": 270 }, { "id": "P06-1053.7", "char_start": 279, "char_end": 288 }, { "id": "P06-1053.8", "char_start": 297, "char_end": 306 }, { "id": "P06-1053.9", "char_start": 319, "char_end": 340 }, { "id": "P06-1053.10", "char_start": 395, "char_end": 414 }, { "id": "P06-1053.11", "char_start": 424, "char_end": 434 }, { "id": "P06-1053.12", "char_start": 479, "char_end": 495 } ]
[ { "label": 5, "arg1": "P06-1053.1", "arg2": "P06-1053.2", "reverse": false }, { "label": 1, "arg1": "P06-1053.3", "arg2": "P06-1053.4", "reverse": false }, { "label": 4, "arg1": "P06-1053.10", "arg2": "P06-1053.11", "reverse": false } ]
P06-1012
Estimating Class Priors In Domain Adaptation For Word Sense Disambiguation
Instances of a word drawn from different domains may have different sense priors (the proportions of the different senses of a word). This in turn affects the accuracy of word sense disambiguation (WSD) systems trained and applied on different domains. This paper presents a method to estimate the sense priors of words drawn from a new domain, and highlights the importance of using well calibrated probabilities when performing these estimations. By using well calibrated probabilities, we are able to estimate the sense priors effectively to achieve significant improvements in WSD accuracy.
[ { "id": "P06-1012.1", "char_start": 16, "char_end": 20 }, { "id": "P06-1012.2", "char_start": 42, "char_end": 49 }, { "id": "P06-1012.3", "char_start": 69, "char_end": 81 }, { "id": "P06-1012.4", "char_start": 116, "char_end": 122 }, { "id": "P06-1012.5", "char_start": 128, "char_end": 132 }, { "id": "P06-1012.6", "char_start": 172, "char_end": 211 }, { "id": "P06-1012.7", "char_start": 245, "char_end": 252 }, { "id": "P06-1012.8", "char_start": 299, "char_end": 311 }, { "id": "P06-1012.9", "char_start": 315, "char_end": 320 }, { "id": "P06-1012.10", "char_start": 338, "char_end": 344 }, { "id": "P06-1012.11", "char_start": 385, "char_end": 414 }, { "id": "P06-1012.12", "char_start": 437, "char_end": 448 }, { "id": "P06-1012.13", "char_start": 459, "char_end": 488 }, { "id": "P06-1012.14", "char_start": 518, "char_end": 530 }, { "id": "P06-1012.15", "char_start": 582, "char_end": 594 } ]
[ { "label": 3, "arg1": "P06-1012.1", "arg2": "P06-1012.3", "reverse": true }, { "label": 3, "arg1": "P06-1012.4", "arg2": "P06-1012.5", "reverse": false }, { "label": 1, "arg1": "P06-1012.6", "arg2": "P06-1012.7", "reverse": false }, { "label": 3, "arg1": "P06-1012.8", "arg2": "P06-1012.9", "reverse": false }, { "label": 1, "arg1": "P06-1012.11", "arg2": "P06-1012.12", "reverse": false }, { "label": 1, "arg1": "P06-1012.13", "arg2": "P06-1012.14", "reverse": false } ]
C86-1105
An Attempt To Automatic Thesaurus Construction From An Ordinary Japanese Language Dictionary
How to obtain hierarchical relations (e.g. superordinate-hyponym relation, synonym relation) is one of the most important problems for thesaurus construction. A pilot system for extracting these relations automatically from an ordinary Japanese language dictionary (Shinmeikai Kokugojiten, published by Sansei-do, in machine readable form) is given. The features of the definition sentences in the dictionary, the mechanical extraction of the hierarchical relations and the estimation of the results are discussed.
[ { "id": "C86-1105.1", "char_start": 15, "char_end": 37 }, { "id": "C86-1105.2", "char_start": 43, "char_end": 74 }, { "id": "C86-1105.3", "char_start": 76, "char_end": 92 }, { "id": "C86-1105.4", "char_start": 136, "char_end": 158 }, { "id": "C86-1105.5", "char_start": 196, "char_end": 205 }, { "id": "C86-1105.6", "char_start": 237, "char_end": 265 }, { "id": "C86-1105.7", "char_start": 371, "char_end": 391 }, { "id": "C86-1105.8", "char_start": 399, "char_end": 409 }, { "id": "C86-1105.9", "char_start": 444, "char_end": 466 } ]
[ { "label": 4, "arg1": "C86-1105.1", "arg2": "C86-1105.4", "reverse": false }, { "label": 4, "arg1": "C86-1105.5", "arg2": "C86-1105.6", "reverse": false }, { "label": 4, "arg1": "C86-1105.7", "arg2": "C86-1105.8", "reverse": false } ]
C86-1021
The Transfer Phase Of Mu Machine Translation System
The interlingual approach to MT has been repeatedly advocated by researchers originally interested in natural language understanding who take machine translation to be one possible application. However, not only the ambiguity but also the vagueness which every natural language inevitably has leads this approach into essential difficulties. In contrast, our project, the Mu-project, adopts the transfer approach as the basic framework of MT. This paper describes the detailed construction of the transfer phase of our system from Japanese to English, and gives some examples of problems which seem difficult to treat in the interlingual approach. The basic design principles of the transfer phase of our system have already been mentioned in (1) (2). Some of the principles which are relevant to the topic of this paper are: (a) Multiple Layer of Grammars (b) Multiple Layer Presentation (c) Lexicon Driven Processing (d) Form-Oriented Dictionary Description. This paper also shows how these principles are realized in the current system.
[ { "id": "C86-1021.1", "char_start": 5, "char_end": 32 }, { "id": "C86-1021.2", "char_start": 103, "char_end": 133 }, { "id": "C86-1021.3", "char_start": 143, "char_end": 162 }, { "id": "C86-1021.4", "char_start": 217, "char_end": 226 }, { "id": "C86-1021.5", "char_start": 262, "char_end": 278 }, { "id": "C86-1021.6", "char_start": 373, "char_end": 383 }, { "id": "C86-1021.7", "char_start": 396, "char_end": 413 }, { "id": "C86-1021.8", "char_start": 440, "char_end": 442 }, { "id": "C86-1021.9", "char_start": 498, "char_end": 512 }, { "id": "C86-1021.10", "char_start": 532, "char_end": 540 }, { "id": "C86-1021.11", "char_start": 544, "char_end": 551 }, { "id": "C86-1021.12", "char_start": 626, "char_end": 647 }, { "id": "C86-1021.13", "char_start": 684, "char_end": 698 }, { "id": "C86-1021.14", "char_start": 831, "char_end": 857 }, { "id": "C86-1021.15", "char_start": 862, "char_end": 889 }, { "id": "C86-1021.16", "char_start": 894, "char_end": 919 }, { "id": "C86-1021.17", "char_start": 924, "char_end": 960 } ]
[ { "label": 1, "arg1": "C86-1021.2", "arg2": "C86-1021.3", "reverse": false }, { "label": 3, "arg1": "C86-1021.4", "arg2": "C86-1021.5", "reverse": false }, { "label": 1, "arg1": "C86-1021.7", "arg2": "C86-1021.8", "reverse": false }, { "label": 6, "arg1": "C86-1021.9", "arg2": "C86-1021.12", "reverse": false } ]
C04-1058
Why Nitpicking Works: Evidence For Occam's Razor In Error Correctors
Empirical experience and observations have shown us when powerful and highly tunable classifiers such as maximum entropy classifiers, boosting and SVMs are applied to language processing tasks, it is possible to achieve high accuracies, but eventually their performances all tend to plateau out at around the same point. To further improve performance, various error correction mechanisms have been developed, but in practice, most of them cannot be relied on to predictably improve performance on unseen data; indeed, depending upon the test set, they are as likely to degrade accuracy as to improve it. This problem is especially severe if the base classifier has already been finely tuned. In recent work, we introduced N-fold Templated Piped Correction, or NTPC ("nitpick"), an intriguing error corrector that is designed to work in these extreme operating conditions. Despite its simplicity, it consistently and robustly improves the accuracy of existing highly accurate base models. This paper investigates some of the more surprising claims made by NTPC, and presents experiments supporting an Occam's Razor argument that more complex models are damaging or unnecessary in practice.
[ { "id": "C04-1058.1", "char_start": 86, "char_end": 97 }, { "id": "C04-1058.2", "char_start": 106, "char_end": 133 }, { "id": "C04-1058.3", "char_start": 135, "char_end": 143 }, { "id": "C04-1058.4", "char_start": 148, "char_end": 152 }, { "id": "C04-1058.5", "char_start": 168, "char_end": 193 }, { "id": "C04-1058.6", "char_start": 362, "char_end": 389 }, { "id": "C04-1058.7", "char_start": 499, "char_end": 510 }, { "id": "C04-1058.8", "char_start": 539, "char_end": 547 }, { "id": "C04-1058.9", "char_start": 647, "char_end": 662 }, { "id": "C04-1058.10", "char_start": 724, "char_end": 778 }, { "id": "C04-1058.11", "char_start": 794, "char_end": 809 }, { "id": "C04-1058.12", "char_start": 977, "char_end": 988 }, { "id": "C04-1058.13", "char_start": 1057, "char_end": 1061 }, { "id": "C04-1058.14", "char_start": 1102, "char_end": 1124 } ]
[ { "label": 1, "arg1": "C04-1058.4", "arg2": "C04-1058.5", "reverse": false }, { "label": 1, "arg1": "C04-1058.6", "arg2": "C04-1058.7", "reverse": false } ]
N06-1007
Acquisition Of Verb Entailment From Text
The study addresses the problem of automatic acquisition of entailment relations between verbs. While this task has much in common with paraphrases acquisition which aims to discover semantic equivalence between verbs, the main challenge of entailment acquisition is to capture asymmetric, or directional, relations. Motivated by the intuition that it often underlies the local structure of coherent text, we develop a method that discovers verb entailment using evidence about discourse relations between clauses available in a parsed corpus. In comparison with earlier work, the proposed method covers a much wider range of verb entailment types and learns the mapping between verbs with highly varied argument structures.
[ { "id": "N06-1007.1", "char_start": 36, "char_end": 57 }, { "id": "N06-1007.2", "char_start": 61, "char_end": 81 }, { "id": "N06-1007.3", "char_start": 90, "char_end": 95 }, { "id": "N06-1007.4", "char_start": 137, "char_end": 160 }, { "id": "N06-1007.5", "char_start": 184, "char_end": 204 }, { "id": "N06-1007.6", "char_start": 213, "char_end": 218 }, { "id": "N06-1007.7", "char_start": 242, "char_end": 264 }, { "id": "N06-1007.8", "char_start": 279, "char_end": 316 }, { "id": "N06-1007.9", "char_start": 373, "char_end": 388 }, { "id": "N06-1007.10", "char_start": 392, "char_end": 405 }, { "id": "N06-1007.11", "char_start": 442, "char_end": 457 }, { "id": "N06-1007.12", "char_start": 479, "char_end": 498 }, { "id": "N06-1007.13", "char_start": 507, "char_end": 514 }, { "id": "N06-1007.14", "char_start": 530, "char_end": 543 }, { "id": "N06-1007.15", "char_start": 627, "char_end": 648 }, { "id": "N06-1007.16", "char_start": 664, "char_end": 671 }, { "id": "N06-1007.17", "char_start": 680, "char_end": 685 }, { "id": "N06-1007.18", "char_start": 705, "char_end": 724 } ]
[ { "label": 3, "arg1": "N06-1007.2", "arg2": "N06-1007.3", "reverse": false }, { "label": 3, "arg1": "N06-1007.5", "arg2": "N06-1007.6", "reverse": false }, { "label": 5, "arg1": "N06-1007.7", "arg2": "N06-1007.8", "reverse": false }, { "label": 3, "arg1": "N06-1007.9", "arg2": "N06-1007.10", "reverse": false }, { "label": 3, "arg1": "N06-1007.12", "arg2": "N06-1007.13", "reverse": false }, { "label": 3, "arg1": "N06-1007.17", "arg2": "N06-1007.18", "reverse": false } ]
A00-2023
Forest-Based Statistical Sentence Generation
This paper presents a new approach to statistical sentence generation in which alternative phrases are represented as packed sets of trees, or forests, and then ranked statistically to choose the best one. This representation offers advantages in compactness and in the ability to represent syntactic information. It also facilitates more efficient statistical ranking than a previous approach to statistical generation. An efficient ranking algorithm is described, together with experimental results showing significant improvements over simple enumeration or a lattice-based approach.
[ { "id": "A00-2023.1", "char_start": 39, "char_end": 70 }, { "id": "A00-2023.2", "char_start": 92, "char_end": 99 }, { "id": "A00-2023.3", "char_start": 134, "char_end": 139 }, { "id": "A00-2023.4", "char_start": 144, "char_end": 151 }, { "id": "A00-2023.5", "char_start": 292, "char_end": 313 }, { "id": "A00-2023.6", "char_start": 350, "char_end": 369 }, { "id": "A00-2023.7", "char_start": 398, "char_end": 420 }, { "id": "A00-2023.8", "char_start": 435, "char_end": 452 }, { "id": "A00-2023.9", "char_start": 564, "char_end": 586 } ]
[ { "label": 3, "arg1": "A00-2023.2", "arg2": "A00-2023.3", "reverse": false }, { "label": 6, "arg1": "A00-2023.8", "arg2": "A00-2023.9", "reverse": false } ]
X98-1022
An NTU-Approach To Automatic Sentence Extraction For Summary Generation
Automatic summarization and information extraction are two important Internet services. MUC and SUMMAC play their appropriate roles in the next generation Internet. This paper focuses on the automatic summarization and proposes two different models to extract sentences for summary generation under two tasks initiated by SUMMAC-1. For categorization task, positive feature vectors and negative feature vectors are used cooperatively to construct generic, indicative summaries. For adhoc task, a text model based on relationship between nouns and verbs is used to filter out irrelevant discourse segment, to rank relevant sentences, and to generate the user-directed summaries. The result shows that the NormF of the best summary and that of the fixed summary for adhoc tasks are 0.456 and 0.447. The NormF of the best summary and that of the fixed summary for categorization task are 0.4090 and 0.4023. Our system outperforms the average system in categorization task but does only an average job in adhoc task.
[ { "id": "X98-1022.1", "char_start": 1, "char_end": 24 }, { "id": "X98-1022.2", "char_start": 29, "char_end": 51 }, { "id": "X98-1022.3", "char_start": 89, "char_end": 92 }, { "id": "X98-1022.4", "char_start": 97, "char_end": 103 }, { "id": "X98-1022.5", "char_start": 192, "char_end": 215 }, { "id": "X98-1022.6", "char_start": 261, "char_end": 270 }, { "id": "X98-1022.7", "char_start": 275, "char_end": 293 }, { "id": "X98-1022.8", "char_start": 323, "char_end": 331 }, { "id": "X98-1022.9", "char_start": 337, "char_end": 356 }, { "id": "X98-1022.10", "char_start": 358, "char_end": 382 }, { "id": "X98-1022.11", "char_start": 387, "char_end": 411 }, { "id": "X98-1022.12", "char_start": 468, "char_end": 477 }, { "id": "X98-1022.13", "char_start": 497, "char_end": 507 }, { "id": "X98-1022.14", "char_start": 538, "char_end": 543 }, { "id": "X98-1022.15", "char_start": 548, "char_end": 553 }, { "id": "X98-1022.16", "char_start": 587, "char_end": 604 }, { "id": "X98-1022.17", "char_start": 623, "char_end": 632 }, { "id": "X98-1022.18", "char_start": 654, "char_end": 677 }, { "id": "X98-1022.19", "char_start": 705, "char_end": 710 }, { "id": "X98-1022.20", "char_start": 803, "char_end": 808 }, { "id": "X98-1022.21", "char_start": 863, "char_end": 882 }, { "id": "X98-1022.22", "char_start": 951, "char_end": 970 } ]
[ { "label": 1, "arg1": "X98-1022.6", "arg2": "X98-1022.7", "reverse": false }, { "label": 1, "arg1": "X98-1022.9", "arg2": "X98-1022.10", "reverse": true }, { "label": 1, "arg1": "X98-1022.13", "arg2": "X98-1022.18", "reverse": false } ]
P98-2213
A Method for Relating Multiple Newspaper Articles by Using Graphs, and Its Application to Webcasting
This paper describes methods for relating (threading) multiple newspaper articles, and for visualizing various characteristics of them by using a directed graph. A set of articles is represented by a set of word vectors, and the similarity between the vectors is then calculated. The graph is constructed from the similarity matrix. By applying some constraints on the chronological ordering of articles, an efficient threading algorithm that runs in O(n) time (where n is the number of articles) is obtained. The constructed graph is visualized with words that represent the topics of the threads, and words that represent new information in each article. The threading technique is suitable for Webcasting (push) applications. A threading server determines relationships among articles from various news sources, and creates files containing their threading information. This information is represented in eXtended Markup Language (XML), and can be visualized on most Web browsers. The XML-based representation and a current prototype are described in this paper.
[ { "id": "P98-2213.1", "char_start": 147, "char_end": 161 }, { "id": "P98-2213.2", "char_start": 208, "char_end": 220 }, { "id": "P98-2213.3", "char_start": 230, "char_end": 240 }, { "id": "P98-2213.4", "char_start": 253, "char_end": 260 }, { "id": "P98-2213.5", "char_start": 285, "char_end": 290 }, { "id": "P98-2213.6", "char_start": 315, "char_end": 332 }, { "id": "P98-2213.7", "char_start": 351, "char_end": 362 }, { "id": "P98-2213.8", "char_start": 419, "char_end": 438 }, { "id": "P98-2213.9", "char_start": 452, "char_end": 461 }, { "id": "P98-2213.10", "char_start": 527, "char_end": 532 }, { "id": "P98-2213.11", "char_start": 552, "char_end": 557 }, { "id": "P98-2213.12", "char_start": 577, "char_end": 583 }, { "id": "P98-2213.13", "char_start": 591, "char_end": 598 }, { "id": "P98-2213.14", "char_start": 604, "char_end": 609 }, { "id": "P98-2213.15", "char_start": 629, "char_end": 640 }, { "id": "P98-2213.16", "char_start": 662, "char_end": 681 }, { "id": "P98-2213.17", "char_start": 732, "char_end": 748 }, { "id": "P98-2213.18", "char_start": 851, "char_end": 872 }, { "id": "P98-2213.19", "char_start": 909, "char_end": 939 }, { "id": "P98-2213.20", "char_start": 989, "char_end": 1013 } ]
[ { "label": 3, "arg1": "P98-2213.3", "arg2": "P98-2213.4", "reverse": false }, { "label": 3, "arg1": "P98-2213.8", "arg2": "P98-2213.9", "reverse": false }, { "label": 3, "arg1": "P98-2213.12", "arg2": "P98-2213.13", "reverse": false }, { "label": 3, "arg1": "P98-2213.14", "arg2": "P98-2213.15", "reverse": false } ]
P98-1113
A Flexible Example-Based Parser Based on the SSTC
In this paper we sketch an approach for Natural Language parsing. Our approach is an example-based approach, which relies mainly on examples that have already been parsed into their representation structures, and on the knowledge we can extract from these examples to parse a new input sentence. In our approach, examples are annotated with the Structured String Tree Correspondence (SSTC) annotation schema where each SSTC describes a sentence, a representation tree as well as the correspondence between substrings in the sentence and subtrees in the representation tree. In the process of parsing, we first try to build subtrees for phrases in the input sentence which have been successfully found in the example-base - a bottom up approach. These subtrees will then be combined together to form a single rooted representation tree based on an example with similar representation structure - a top down approach.
[ { "id": "P98-1113.1", "char_start": 42, "char_end": 66 }, { "id": "P98-1113.2", "char_start": 87, "char_end": 109 }, { "id": "P98-1113.3", "char_start": 172, "char_end": 196 }, { "id": "P98-1113.4", "char_start": 295, "char_end": 309 }, { "id": "P98-1113.5", "char_start": 360, "char_end": 422 }, { "id": "P98-1113.6", "char_start": 434, "char_end": 438 }, { "id": "P98-1113.7", "char_start": 451, "char_end": 459 }, { "id": "P98-1113.8", "char_start": 463, "char_end": 482 }, { "id": "P98-1113.9", "char_start": 521, "char_end": 531 }, { "id": "P98-1113.10", "char_start": 539, "char_end": 547 }, { "id": "P98-1113.11", "char_start": 552, "char_end": 560 }, { "id": "P98-1113.12", "char_start": 568, "char_end": 587 }, { "id": "P98-1113.13", "char_start": 607, "char_end": 614 }, { "id": "P98-1113.14", "char_start": 638, "char_end": 646 }, { "id": "P98-1113.15", "char_start": 651, "char_end": 658 }, { "id": "P98-1113.16", "char_start": 666, "char_end": 680 }, { "id": "P98-1113.17", "char_start": 723, "char_end": 735 }, { "id": "P98-1113.18", "char_start": 766, "char_end": 774 }, { "id": "P98-1113.19", "char_start": 816, "char_end": 849 }, { "id": "P98-1113.20", "char_start": 883, "char_end": 907 } ]
[ { "label": 3, "arg1": "P98-1113.6", "arg2": "P98-1113.7", "reverse": false }, { "label": 4, "arg1": "P98-1113.9", "arg2": "P98-1113.10", "reverse": false }, { "label": 4, "arg1": "P98-1113.11", "arg2": "P98-1113.12", "reverse": false }, { "label": 3, "arg1": "P98-1113.14", "arg2": "P98-1113.15", "reverse": false } ]
P02-1060
Named Entity Recognition Using An HMM-Based Chunk Tagger
This paper proposes a Hidden Markov Model (HMM) and an HMM-based chunk tagger, from which a named entity (NE) recognition (NER) system is built to recognize and classify names, times and numerical quantities. Through the HMM, our system is able to apply and integrate four types of internal and external evidences : 1) simple deterministic internal feature of the words, such as capitalization and digitalization ; 2) internal semantic feature of important triggers ; 3) internal gazetteer feature; 4) external macro context feature. In this way, the NER problem can be resolved effectively. Evaluation of our system on MUC-6 and MUC-7 English NE tasks achieves F-measures of 96.6% and 94.1% respectively. It shows that the performance is significantly better than reported by any other machine-learning system. Moreover, the performance is even consistently better than those based on handcrafted rules.
[ { "id": "P02-1060.1", "char_start": 23, "char_end": 48 }, { "id": "P02-1060.2", "char_start": 56, "char_end": 78 }, { "id": "P02-1060.3", "char_start": 93, "char_end": 135 }, { "id": "P02-1060.4", "char_start": 171, "char_end": 176 }, { "id": "P02-1060.5", "char_start": 178, "char_end": 208 }, { "id": "P02-1060.6", "char_start": 222, "char_end": 225 }, { "id": "P02-1060.7", "char_start": 365, "char_end": 370 }, { "id": "P02-1060.8", "char_start": 380, "char_end": 394 }, { "id": "P02-1060.9", "char_start": 419, "char_end": 444 }, { "id": "P02-1060.10", "char_start": 472, "char_end": 498 }, { "id": "P02-1060.11", "char_start": 503, "char_end": 533 }, { "id": "P02-1060.12", "char_start": 552, "char_end": 563 }, { "id": "P02-1060.13", "char_start": 611, "char_end": 617 }, { "id": "P02-1060.14", "char_start": 621, "char_end": 653 }, { "id": "P02-1060.15", "char_start": 663, "char_end": 673 }, { "id": "P02-1060.16", "char_start": 788, "char_end": 811 }, { "id": "P02-1060.17", "char_start": 827, "char_end": 838 }, { "id": "P02-1060.18", "char_start": 887, "char_end": 904 } ]
[ { "label": 1, "arg1": "P02-1060.2", "arg2": "P02-1060.3", "reverse": false }, { "label": 3, "arg1": "P02-1060.7", "arg2": "P02-1060.8", "reverse": true }, { "label": 2, "arg1": "P02-1060.13", "arg2": "P02-1060.15", "reverse": false } ]
C96-2213
Using A Hybrid System Of Corpus- And Knowledge-Based Techniques To Automate The Induction Of A Lexical Sublanguage Grammar
Porting a Natural Language Processing (NLP) system to a new domain remains one of the bottlenecks in syntactic parsing, because of the amount of effort required to fix gaps in the lexicon, and to attune the existing grammar to the idiosyncrasies of the new sublanguage. This paper shows how the process of fitting a lexicalized grammar to a domain can be automated to a great extent by using a hybrid system that combines traditional knowledge-based techniques with a corpus-based approach.
[ { "id": "C96-2213.1", "char_start": 11, "char_end": 51 }, { "id": "C96-2213.2", "char_start": 57, "char_end": 67 }, { "id": "C96-2213.3", "char_start": 102, "char_end": 119 }, { "id": "C96-2213.4", "char_start": 181, "char_end": 188 }, { "id": "C96-2213.5", "char_start": 208, "char_end": 224 }, { "id": "C96-2213.6", "char_start": 254, "char_end": 269 }, { "id": "C96-2213.7", "char_start": 317, "char_end": 336 }, { "id": "C96-2213.8", "char_start": 342, "char_end": 348 }, { "id": "C96-2213.9", "char_start": 395, "char_end": 408 }, { "id": "C96-2213.10", "char_start": 423, "char_end": 461 }, { "id": "C96-2213.11", "char_start": 469, "char_end": 490 } ]
[ { "label": 1, "arg1": "C96-2213.1", "arg2": "C96-2213.2", "reverse": true }, { "label": 1, "arg1": "C96-2213.5", "arg2": "C96-2213.6", "reverse": false }, { "label": 1, "arg1": "C96-2213.7", "arg2": "C96-2213.8", "reverse": false }, { "label": 6, "arg1": "C96-2213.10", "arg2": "C96-2213.11", "reverse": false } ]
C04-1102
Detecting Transliterated Orthographic Variants Via Two Similarity Metrics
We propose a detection method for orthographic variants caused by transliteration in a large corpus. The method employs two similarities. One is string similarity based on edit distance. The other is contextual similarity by a vector space model. Experimental results show that the method performed a 0.889 F-measure in an open test.
[ { "id": "C04-1102.1", "char_start": 14, "char_end": 30 }, { "id": "C04-1102.2", "char_start": 67, "char_end": 82 }, { "id": "C04-1102.3", "char_start": 94, "char_end": 100 }, { "id": "C04-1102.4", "char_start": 125, "char_end": 137 }, { "id": "C04-1102.5", "char_start": 146, "char_end": 163 }, { "id": "C04-1102.6", "char_start": 173, "char_end": 186 }, { "id": "C04-1102.7", "char_start": 201, "char_end": 222 }, { "id": "C04-1102.8", "char_start": 228, "char_end": 246 }, { "id": "C04-1102.9", "char_start": 308, "char_end": 317 } ]
[ { "label": 4, "arg1": "C04-1102.2", "arg2": "C04-1102.3", "reverse": false }, { "label": 1, "arg1": "C04-1102.5", "arg2": "C04-1102.6", "reverse": true }, { "label": 1, "arg1": "C04-1102.7", "arg2": "C04-1102.8", "reverse": true } ]
N06-1018
Understanding Temporal Expressions In Emails
Recent years have seen increasing research on extracting and using temporal information in natural language applications. However, most of the works found in the literature have focused on identifying and understanding temporal expressions in newswire texts. In this paper we report our work on anchoring temporal expressions in a novel genre, emails. The highly under-specified nature of these expressions fits well with our constraint-based representation of time, Time Calculus for Natural Language (TCNL). We have developed and evaluated a Temporal Expression Anchorer (TEA), and the result shows that it performs significantly better than the baseline, and compares favorably with some of the closely related work.
[ { "id": "N06-1018.1", "char_start": 92, "char_end": 121 }, { "id": "N06-1018.2", "char_start": 219, "char_end": 239 }, { "id": "N06-1018.3", "char_start": 243, "char_end": 257 }, { "id": "N06-1018.4", "char_start": 305, "char_end": 325 }, { "id": "N06-1018.5", "char_start": 337, "char_end": 342 }, { "id": "N06-1018.6", "char_start": 395, "char_end": 406 }, { "id": "N06-1018.7", "char_start": 426, "char_end": 457 }, { "id": "N06-1018.8", "char_start": 467, "char_end": 508 }, { "id": "N06-1018.9", "char_start": 544, "char_end": 578 }, { "id": "N06-1018.10", "char_start": 648, "char_end": 656 } ]
[ { "label": 4, "arg1": "N06-1018.2", "arg2": "N06-1018.3", "reverse": false }, { "label": 3, "arg1": "N06-1018.4", "arg2": "N06-1018.5", "reverse": true }, { "label": 6, "arg1": "N06-1018.9", "arg2": "N06-1018.10", "reverse": false } ]
H89-2066
Research And Development For Spoken Language Systems
The goal of this research is to develop a spoken language system that will demonstrate the usefulness of voice input for interactive problem solving. The system will accept continuous speech, and will handle multiple speakers without explicit speaker enrollment. Combining speech recognition and natural language processing to achieve speech understanding, the system will be demonstrated in an application domain relevant to the DoD. The objective of this project is to develop a robust and high-performance speech recognition system using a segment-based approach to phonetic recognition. The recognition system will eventually be integrated with natural language processing to achieve spoken language understanding.
[ { "id": "H89-2066.1", "char_start": 43, "char_end": 65 }, { "id": "H89-2066.2", "char_start": 106, "char_end": 117 }, { "id": "H89-2066.3", "char_start": 122, "char_end": 149 }, { "id": "H89-2066.4", "char_start": 174, "char_end": 191 }, { "id": "H89-2066.5", "char_start": 209, "char_end": 226 }, { "id": "H89-2066.6", "char_start": 235, "char_end": 262 }, { "id": "H89-2066.7", "char_start": 274, "char_end": 292 }, { "id": "H89-2066.8", "char_start": 297, "char_end": 324 }, { "id": "H89-2066.9", "char_start": 336, "char_end": 356 }, { "id": "H89-2066.10", "char_start": 396, "char_end": 414 }, { "id": "H89-2066.11", "char_start": 482, "char_end": 535 }, { "id": "H89-2066.12", "char_start": 544, "char_end": 566 }, { "id": "H89-2066.13", "char_start": 570, "char_end": 590 }, { "id": "H89-2066.14", "char_start": 596, "char_end": 614 }, { "id": "H89-2066.15", "char_start": 650, "char_end": 677 }, { "id": "H89-2066.16", "char_start": 689, "char_end": 718 } ]
[ { "label": 1, "arg1": "H89-2066.2", "arg2": "H89-2066.3", "reverse": false }, { "label": 1, "arg1": "H89-2066.8", "arg2": "H89-2066.9", "reverse": false }, { "label": 1, "arg1": "H89-2066.12", "arg2": "H89-2066.13", "reverse": false }, { "label": 1, "arg1": "H89-2066.15", "arg2": "H89-2066.16", "reverse": false } ]
J81-1002
Computer Generation Of Multiparagraph English Text
This paper reports recent research into methods for creating natural language text. A new processing paradigm called Fragment-and-Compose has been created and an experimental system implemented in it. The knowledge to be expressed in text is first divided into small propositional units, which are then composed into appropriate combinations and converted into text. KDS (Knowledge Delivery System), which embodies this paradigm, has distinct parts devoted to creation of the propositional units, to organization of the text, to prevention of excess redundancy, to creation of combinations of units, to evaluation of these combinations as potential sentences, to selection of the best among competing combinations, and to creation of the final text. The Fragment-and-Compose paradigm and the computational methods of KDS are described.
[ { "id": "J81-1002.1", "char_start": 53, "char_end": 83 }, { "id": "J81-1002.2", "char_start": 91, "char_end": 110 }, { "id": "J81-1002.3", "char_start": 118, "char_end": 138 }, { "id": "J81-1002.4", "char_start": 206, "char_end": 215 }, { "id": "J81-1002.5", "char_start": 235, "char_end": 239 }, { "id": "J81-1002.6", "char_start": 268, "char_end": 287 }, { "id": "J81-1002.7", "char_start": 362, "char_end": 366 }, { "id": "J81-1002.8", "char_start": 367, "char_end": 398 }, { "id": "J81-1002.9", "char_start": 476, "char_end": 495 }, { "id": "J81-1002.10", "char_start": 520, "char_end": 524 }, { "id": "J81-1002.11", "char_start": 543, "char_end": 560 }, { "id": "J81-1002.12", "char_start": 649, "char_end": 658 }, { "id": "J81-1002.13", "char_start": 738, "char_end": 748 }, { "id": "J81-1002.14", "char_start": 754, "char_end": 783 }, { "id": "J81-1002.15", "char_start": 792, "char_end": 813 }, { "id": "J81-1002.16", "char_start": 817, "char_end": 820 } ]
[ { "label": 4, "arg1": "J81-1002.4", "arg2": "J81-1002.5", "reverse": false }, { "label": 1, "arg1": "J81-1002.15", "arg2": "J81-1002.16", "reverse": false } ]
C90-1002
Design Of A Hybrid Deterministic Parser
A deterministic parser is under development which represents a departure from traditional deterministic parsers in that it combines both symbolic and connectionist components. The connectionist component is trained from patterns derived from the rules of a deterministic grammar. The development and evolution of such a hybrid architecture has led to a parser which is superior to any known deterministic parser. Experiments are described and powerful training techniques are demonstrated that permit decision-making by the connectionist component in the parsing process. This approach has permitted some simplifications to the rules of other deterministic parsers, including the elimination of rule packets and priorities. Furthermore, parsing is performed more robustly and with more tolerance for error. Data are presented which show how a connectionist (neural) network trained with linguistic rules can parse both expected (grammatical) sentences as well as some novel (ungrammatical or lexically ambiguous) sentences.
[ { "id": "C90-1002.1", "char_start": 1, "char_end": 23 }, { "id": "C90-1002.2", "char_start": 79, "char_end": 112 }, { "id": "C90-1002.3", "char_start": 138, "char_end": 175 }, { "id": "C90-1002.4", "char_start": 228, "char_end": 236 }, { "id": "C90-1002.5", "char_start": 254, "char_end": 259 }, { "id": "C90-1002.6", "char_start": 265, "char_end": 286 }, { "id": "C90-1002.7", "char_start": 328, "char_end": 347 }, { "id": "C90-1002.8", "char_start": 362, "char_end": 368 }, { "id": "C90-1002.9", "char_start": 394, "char_end": 420 }, { "id": "C90-1002.10", "char_start": 461, "char_end": 480 }, { "id": "C90-1002.11", "char_start": 510, "char_end": 525 }, { "id": "C90-1002.12", "char_start": 533, "char_end": 556 }, { "id": "C90-1002.13", "char_start": 564, "char_end": 579 }, { "id": "C90-1002.14", "char_start": 637, "char_end": 642 }, { "id": "C90-1002.15", "char_start": 652, "char_end": 673 }, { "id": "C90-1002.16", "char_start": 704, "char_end": 716 }, { "id": "C90-1002.17", "char_start": 746, "char_end": 753 }, { "id": "C90-1002.18", "char_start": 852, "char_end": 882 }, { "id": "C90-1002.19", "char_start": 896, "char_end": 912 }, { "id": "C90-1002.20", "char_start": 928, "char_end": 960 } ]
[ { "label": 6, "arg1": "C90-1002.1", "arg2": "C90-1002.2", "reverse": false }, { "label": 4, "arg1": "C90-1002.5", "arg2": "C90-1002.6", "reverse": false }, { "label": 6, "arg1": "C90-1002.8", "arg2": "C90-1002.9", "reverse": false }, { "label": 1, "arg1": "C90-1002.10", "arg2": "C90-1002.11", "reverse": false }, { "label": 4, "arg1": "C90-1002.12", "arg2": "C90-1002.13", "reverse": false }, { "label": 4, "arg1": "C90-1002.14", "arg2": "C90-1002.15", "reverse": false }, { "label": 1, "arg1": "C90-1002.18", "arg2": "C90-1002.20", "reverse": false } ]
P02-1059
Supervised Ranking In Open-Domain Text Summarization
The paper proposes and empirically motivates an integration of supervised learning with unsupervised learning to deal with human biases in summarization. In particular, we explore the use of probabilistic decision tree within the clustering framework to account for the variation as well as regularity in human created summaries. The corpus of human created extracts is built from a newspaper corpus and used as a test set. We build probabilistic decision trees of different flavors and integrate each of them with the clustering framework. Experiments with the corpus demonstrate that the mixture of the two paradigms generally gives a significant boost in performance compared to cases where either of the two is considered alone.
[ { "id": "P02-1059.1", "char_start": 64, "char_end": 83 }, { "id": "P02-1059.2", "char_start": 89, "char_end": 110 }, { "id": "P02-1059.3", "char_start": 140, "char_end": 153 }, { "id": "P02-1059.4", "char_start": 192, "char_end": 219 }, { "id": "P02-1059.5", "char_start": 306, "char_end": 329 }, { "id": "P02-1059.6", "char_start": 335, "char_end": 341 }, { "id": "P02-1059.7", "char_start": 386, "char_end": 402 }, { "id": "P02-1059.8", "char_start": 436, "char_end": 464 }, { "id": "P02-1059.9", "char_start": 565, "char_end": 571 } ]
[ { "label": 1, "arg1": "P02-1059.1", "arg2": "P02-1059.3", "reverse": false }, { "label": 4, "arg1": "P02-1059.6", "arg2": "P02-1059.7", "reverse": true } ]
P02-1002
Sequential Conditional Generalized Iterative Scaling
We describe a speedup for training conditional maximum entropy models. The algorithm is a simple variation on Generalized Iterative Scaling, but converges roughly an order of magnitude faster, depending on the number of constraints, and the way speed is measured. Rather than attempting to train all model parameters simultaneously, the algorithm trains them sequentially. The algorithm is easy to implement, typically uses only slightly more memory, and will lead to improvements for most maximum entropy problems.
[ { "id": "P02-1002.1", "char_start": 36, "char_end": 70 }, { "id": "P02-1002.2", "char_start": 111, "char_end": 140 }, { "id": "P02-1002.3", "char_start": 491, "char_end": 515 } ]
[]
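To make the baseline of P02-1002 concrete, a compact Generalized Iterative Scaling loop for a conditional maximum entropy model. The paper's contribution is a sequential variant that updates one constraint at a time with cached sums; this sketch does not reproduce that. Binary features are assumed, and C is taken as the maximum feature count (the slack feature is omitted for brevity).

```python
import math
from collections import defaultdict

def train_gis(data, classes, features, iters=100):
    """data: [(x, gold_y)]; features(x, y) -> list of active feature names."""
    C = max(len(features(x, y)) for x, _ in data for y in classes)
    lam = defaultdict(float)
    emp = defaultdict(float)                    # empirical expectations
    for x, y in data:
        for f in features(x, y):
            emp[f] += 1.0
    for _ in range(iters):
        exp = defaultdict(float)                # model expectations
        for x, _ in data:
            scores = {y: math.exp(sum(lam[f] for f in features(x, y)))
                      for y in classes}
            Z = sum(scores.values())
            for y in classes:
                p = scores[y] / Z
                for f in features(x, y):
                    exp[f] += p
        for f in emp:                           # the GIS update, all at once
            if exp[f] > 0:
                lam[f] += math.log(emp[f] / exp[f]) / C
    return lam
```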
C00-2123
Word Re-Ordering And DP-Based Search In Statistical Machine Translation
In this paper, we describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). Starting from a DP-based solution to the traveling salesman problem, we present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. A search restriction especially useful for the translation direction from German to English is presented. The experimental tests are carried out on the Verbmobil task (German-English, 8000-word vocabulary), which is a limited-domain spoken-language task.
[ { "id": "C00-2123.1", "char_start": 51, "char_end": 87 }, { "id": "C00-2123.2", "char_start": 97, "char_end": 121 }, { "id": "C00-2123.3", "char_start": 246, "char_end": 261 }, { "id": "C00-2123.4", "char_start": 270, "char_end": 296 }, { "id": "C00-2123.5", "char_start": 500, "char_end": 514 }, { "id": "C00-2123.6", "char_start": 566, "char_end": 601 } ]
[ { "label": 1, "arg1": "C00-2123.1", "arg2": "C00-2123.2", "reverse": true }, { "label": 3, "arg1": "C00-2123.5", "arg2": "C00-2123.6", "reverse": true } ]
C96-1055
Role Of Word Sense Disambiguation In Lexical Acquisition : Predicting Semantics From Syntactic Cues
This paper addresses the issue of word-sense ambiguity in extraction from machine-readable resources for the construction of large-scale knowledge sources. We describe two experiments: one which ignored word-sense distinctions, resulting in 6.3% accuracy for semantic classification of verbs based on (Levin, 1993); and one which exploited word-sense distinctions, resulting in 97.9% accuracy. These experiments had a dual purpose: (1) to validate the central thesis of the work of (Levin, 1993), i.e., that verb semantics and syntactic behavior are predictably related; (2) to demonstrate that a 15-fold improvement can be achieved in deriving semantic information from syntactic cues if we first divide the syntactic cues into distinct groupings that correlate with different word senses. Finally, we show that we can provide effective acquisition techniques for novel word senses using a combination of online sources.
[ { "id": "C96-1055.1", "char_start": 35, "char_end": 55 }, { "id": "C96-1055.2", "char_start": 75, "char_end": 101 }, { "id": "C96-1055.3", "char_start": 126, "char_end": 155 }, { "id": "C96-1055.4", "char_start": 204, "char_end": 227 }, { "id": "C96-1055.5", "char_start": 260, "char_end": 283 }, { "id": "C96-1055.6", "char_start": 287, "char_end": 292 }, { "id": "C96-1055.7", "char_start": 341, "char_end": 364 }, { "id": "C96-1055.8", "char_start": 508, "char_end": 522 }, { "id": "C96-1055.9", "char_start": 527, "char_end": 545 }, { "id": "C96-1055.10", "char_start": 645, "char_end": 665 }, { "id": "C96-1055.11", "char_start": 671, "char_end": 685 }, { "id": "C96-1055.12", "char_start": 709, "char_end": 723 }, { "id": "C96-1055.13", "char_start": 778, "char_end": 789 }, { "id": "C96-1055.14", "char_start": 871, "char_end": 882 } ]
[ { "label": 4, "arg1": "C96-1055.1", "arg2": "C96-1055.2", "reverse": false }, { "label": 6, "arg1": "C96-1055.8", "arg2": "C96-1055.9", "reverse": false }, { "label": 1, "arg1": "C96-1055.10", "arg2": "C96-1055.11", "reverse": true }, { "label": 6, "arg1": "C96-1055.12", "arg2": "C96-1055.13", "reverse": false } ]
M91-1029
PRC Inc: Description Of The PAKTUS System Used For MUC-3
The PRC Adaptive Knowledge-based Text Understanding System (PAKTUS) has been under development as an Independent Research and Development project at PRC since 1984. The objective is a generic system of tools, including a core English lexicon, grammar, and concept representations, for building natural language processing (NLP) systems for text understanding. Systems built with PAKTUS are intended to generate input to knowledge based systems or data base systems. Input to the NLP system is typically derived from an existing electronic message stream, such as a news wire. PAKTUS supports the adaptation of the generic core to a variety of domains: JINTACCS messages, RAINFORM messages, news reports about a specific type of event, such as financial transfers or terrorist acts, etc., by acquiring sublanguage and domain-specific grammar, words, conceptual mappings, and discourse patterns. The long-term goal is a system that can support the processing of relatively long discourses in domains that are fairly broad with a high rate of success.
[ { "id": "M91-1029.1", "char_start": 5, "char_end": 68 }, { "id": "M91-1029.2", "char_start": 222, "char_end": 242 }, { "id": "M91-1029.3", "char_start": 244, "char_end": 251 }, { "id": "M91-1029.4", "char_start": 295, "char_end": 336 }, { "id": "M91-1029.5", "char_start": 341, "char_end": 359 }, { "id": "M91-1029.6", "char_start": 380, "char_end": 386 }, { "id": "M91-1029.7", "char_start": 479, "char_end": 489 }, { "id": "M91-1029.8", "char_start": 528, "char_end": 553 }, { "id": "M91-1029.9", "char_start": 576, "char_end": 582 }, { "id": "M91-1029.10", "char_start": 652, "char_end": 669 }, { "id": "M91-1029.11", "char_start": 671, "char_end": 688 }, { "id": "M91-1029.12", "char_start": 690, "char_end": 702 }, { "id": "M91-1029.13", "char_start": 801, "char_end": 840 }, { "id": "M91-1029.14", "char_start": 842, "char_end": 868 }, { "id": "M91-1029.15", "char_start": 874, "char_end": 892 } ]
[ { "label": 1, "arg1": "M91-1029.4", "arg2": "M91-1029.5", "reverse": false }, { "label": 1, "arg1": "M91-1029.9", "arg2": "M91-1029.10", "reverse": false } ]
P98-1088
Memorisation for Glue Language Deduction and Categorial Parsing
The multiplicative fragment of linear logic has found a number of applications in computational linguistics: in the "glue language" approach to LFG semantics, and in the formulation and parsing of various categorial grammars. These applications call for efficient deduction methods. Although a number of deduction methods for multiplicative linear logic are known, none of them are tabular methods, which bring a substantial efficiency gain by avoiding redundant computation (cf. chart methods in CFG parsing): this paper presents such a method, and discusses its use in relation to the above applications.
[ { "id": "P98-1088.1", "char_start": 32, "char_end": 44 }, { "id": "P98-1088.2", "char_start": 83, "char_end": 108 }, { "id": "P98-1088.3", "char_start": 117, "char_end": 132 }, { "id": "P98-1088.4", "char_start": 145, "char_end": 158 }, { "id": "P98-1088.5", "char_start": 187, "char_end": 194 }, { "id": "P98-1088.6", "char_start": 206, "char_end": 225 }, { "id": "P98-1088.7", "char_start": 327, "char_end": 354 }, { "id": "P98-1088.8", "char_start": 498, "char_end": 509 } ]
[ { "label": 1, "arg1": "P98-1088.1", "arg2": "P98-1088.2", "reverse": false }, { "label": 1, "arg1": "P98-1088.3", "arg2": "P98-1088.4", "reverse": false } ]
E06-1045
Data-Driven Generation Of Emphatic Facial Displays
We describe an implementation of data-driven selection of emphatic facial displays for an embodied conversational agent in a dialogue system. A corpus of sentences in the domain of the target dialogue system was recorded, and the facial displays used by the speaker were annotated. The data from those recordings was used in a range of models for generating facial displays, each model making use of a different amount of context or choosing displays differently within a context. The models were evaluated in two ways: by cross-validation against the corpus, and by asking users to rate the output. The predictions of the cross-validation study differed from the actual user ratings. While the cross-validation gave the highest scores to models making a majority choice within a context, the user study showed a significant preference for models that produced more variation. This preference was especially strong among the female subjects.
[ { "id": "E06-1045.1", "char_start": 91, "char_end": 120 }, { "id": "E06-1045.2", "char_start": 126, "char_end": 141 }, { "id": "E06-1045.3", "char_start": 145, "char_end": 164 }, { "id": "E06-1045.4", "char_start": 186, "char_end": 208 }, { "id": "E06-1045.5", "char_start": 259, "char_end": 266 }, { "id": "E06-1045.6", "char_start": 423, "char_end": 430 }, { "id": "E06-1045.7", "char_start": 473, "char_end": 480 }, { "id": "E06-1045.8", "char_start": 524, "char_end": 540 }, { "id": "E06-1045.9", "char_start": 553, "char_end": 559 }, { "id": "E06-1045.10", "char_start": 624, "char_end": 640 }, { "id": "E06-1045.11", "char_start": 696, "char_end": 712 } ]
[ { "label": 4, "arg1": "E06-1045.1", "arg2": "E06-1045.2", "reverse": false } ]
P04-1030
Head-Driven Parsing For Word Lattices
We present the first application of the head-driven statistical parsing model of Collins (1999) as a simultaneous language model and parser for large-vocabulary speech recognition. The model is adapted to an online left to right chart-parser for word lattices, integrating acoustic, n-gram, and parser probabilities. The parser uses structural and lexical dependencies not considered by n-gram models, conditioning recognition on more linguistically-grounded relationships. Experiments on the Wall Street Journal treebank and lattice corpora show word error rates competitive with the standard n-gram language model while extracting additional structural information useful for speech understanding.
[ { "id": "P04-1030.1", "char_start": 41, "char_end": 78 }, { "id": "P04-1030.2", "char_start": 102, "char_end": 129 }, { "id": "P04-1030.3", "char_start": 134, "char_end": 140 }, { "id": "P04-1030.4", "char_start": 145, "char_end": 180 }, { "id": "P04-1030.5", "char_start": 209, "char_end": 242 }, { "id": "P04-1030.6", "char_start": 247, "char_end": 260 }, { "id": "P04-1030.7", "char_start": 322, "char_end": 328 }, { "id": "P04-1030.8", "char_start": 334, "char_end": 369 }, { "id": "P04-1030.9", "char_start": 388, "char_end": 401 }, { "id": "P04-1030.10", "char_start": 494, "char_end": 522 }, { "id": "P04-1030.11", "char_start": 548, "char_end": 564 }, { "id": "P04-1030.12", "char_start": 586, "char_end": 616 }, { "id": "P04-1030.13", "char_start": 645, "char_end": 667 }, { "id": "P04-1030.14", "char_start": 679, "char_end": 699 } ]
[ { "label": 1, "arg1": "P04-1030.3", "arg2": "P04-1030.4", "reverse": false }, { "label": 1, "arg1": "P04-1030.7", "arg2": "P04-1030.8", "reverse": true }, { "label": 1, "arg1": "P04-1030.13", "arg2": "P04-1030.14", "reverse": false } ]
P02-1023
Improving Language Model Size Reduction Using Better Pruning Criteria
Reducing language model (LM) size is a critical issue when applying a LM to realistic applications which have memory constraints. In this paper, three measures are studied for the purpose of LM pruning. They are probability, rank, and entropy. We evaluated the performance of the three pruning criteria in a real application of Chinese text input in terms of character error rate (CER). We first present an empirical comparison, showing that rank performs the best in most cases. We also show that the high-performance of rank lies in its strong correlation with error rate. We then present a novel method of combining two criteria in model pruning. Experimental results show that the combined criterion consistently leads to smaller models than the models pruned using either of the criteria separately, at the same CER.
[ { "id": "P02-1023.1", "char_start": 10, "char_end": 34 }, { "id": "P02-1023.2", "char_start": 71, "char_end": 73 }, { "id": "P02-1023.3", "char_start": 192, "char_end": 202 }, { "id": "P02-1023.4", "char_start": 226, "char_end": 230 }, { "id": "P02-1023.5", "char_start": 236, "char_end": 243 }, { "id": "P02-1023.6", "char_start": 287, "char_end": 303 }, { "id": "P02-1023.7", "char_start": 329, "char_end": 347 }, { "id": "P02-1023.8", "char_start": 360, "char_end": 386 }, { "id": "P02-1023.9", "char_start": 443, "char_end": 447 }, { "id": "P02-1023.10", "char_start": 523, "char_end": 527 }, { "id": "P02-1023.11", "char_start": 564, "char_end": 574 }, { "id": "P02-1023.12", "char_start": 636, "char_end": 649 }, { "id": "P02-1023.13", "char_start": 818, "char_end": 821 } ]
[ { "label": 6, "arg1": "P02-1023.4", "arg2": "P02-1023.5", "reverse": false }, { "label": 1, "arg1": "P02-1023.6", "arg2": "P02-1023.7", "reverse": false } ]
C00-1054
Finite-State Multimodal Parsing And Understanding
Multimodal interfaces require effective parsing and understanding of utterances whose content is distributed across multiple input modes. Johnston 1998 presents an approach in which strategies for multimodal integration are stated declaratively using a unification-based grammar that is used by a multidimensional chart parser to compose inputs. This approach is highly expressive and supports a broad class of interfaces, but offers only limited potential for mutual compensation among the input modes, is subject to significant concerns in terms of computational complexity, and complicates selection among alternative multimodal interpretations of the input. In this paper, we present an alternative approach in which multimodal parsing and understanding are achieved using a weighted finite-state device which takes speech and gesture streams as inputs and outputs their joint interpretation. This approach is significantly more efficient, enables tight-coupling of multimodal understanding with speech recognition, and provides a general probabilistic framework for multimodal ambiguity resolution.
[ { "id": "C00-1054.1", "char_start": 42, "char_end": 49 }, { "id": "C00-1054.2", "char_start": 71, "char_end": 81 }, { "id": "C00-1054.3", "char_start": 199, "char_end": 221 }, { "id": "C00-1054.4", "char_start": 255, "char_end": 280 }, { "id": "C00-1054.5", "char_start": 299, "char_end": 328 }, { "id": "C00-1054.6", "char_start": 413, "char_end": 423 }, { "id": "C00-1054.7", "char_start": 723, "char_end": 759 }, { "id": "C00-1054.8", "char_start": 781, "char_end": 809 }, { "id": "C00-1054.9", "char_start": 822, "char_end": 848 }, { "id": "C00-1054.10", "char_start": 1002, "char_end": 1020 }, { "id": "C00-1054.11", "char_start": 1073, "char_end": 1104 } ]
[ { "label": 1, "arg1": "C00-1054.4", "arg2": "C00-1054.5", "reverse": false }, { "label": 1, "arg1": "C00-1054.7", "arg2": "C00-1054.8", "reverse": true } ]
N06-1037
Exploring Syntactic Features For Relation Extraction Using A Convolution Tree Kernel
This paper proposes to use a convolution kernel over parse trees to model syntactic structure information for relation extraction. Our study reveals that the syntactic structure features embedded in a parse tree are very effective for relation extraction and these features can be well captured by the convolution tree kernel. Evaluation on the ACE 2003 corpus shows that the convolution kernel over parse trees can achieve comparable performance with the previous best-reported feature-based methods on the 24 ACE relation subtypes. It also shows that our method significantly outperforms the previous two dependency tree kernels on the 5 ACE relation major types.
[ { "id": "N06-1037.1", "char_start": 30, "char_end": 48 }, { "id": "N06-1037.2", "char_start": 54, "char_end": 65 }, { "id": "N06-1037.3", "char_start": 75, "char_end": 106 }, { "id": "N06-1037.4", "char_start": 111, "char_end": 130 }, { "id": "N06-1037.5", "char_start": 159, "char_end": 187 }, { "id": "N06-1037.6", "char_start": 202, "char_end": 212 }, { "id": "N06-1037.7", "char_start": 236, "char_end": 255 }, { "id": "N06-1037.8", "char_start": 303, "char_end": 326 }, { "id": "N06-1037.9", "char_start": 346, "char_end": 361 }, { "id": "N06-1037.10", "char_start": 377, "char_end": 395 }, { "id": "N06-1037.11", "char_start": 401, "char_end": 412 }, { "id": "N06-1037.12", "char_start": 512, "char_end": 533 }, { "id": "N06-1037.13", "char_start": 608, "char_end": 631 }, { "id": "N06-1037.14", "char_start": 641, "char_end": 665 } ]
[ { "label": 3, "arg1": "N06-1037.1", "arg2": "N06-1037.3", "reverse": false }, { "label": 4, "arg1": "N06-1037.5", "arg2": "N06-1037.6", "reverse": false } ]
H90-1011
Performing Integrated Syntactic And Semantic Parsing Using Classification
This paper describes a particular approach to parsing that utilizes recent advances in unification-based parsing and in classification-based knowledge representation. As unification-based grammatical frameworks are extended to handle richer descriptions of linguistic information, they begin to share many of the properties that have been developed in KL-ONE-like knowledge representation systems. This commonality suggests that some of the classification-based representation techniques can be applied to unification-based linguistic descriptions. This merging supports the integration of semantic and syntactic information into the same system, simultaneously subject to the same types of processes, in an efficient manner. The result is expected to be more efficient parsing due to the increased organization of knowledge. The use of a KL-ONE style representation for parsing and semantic interpretation was first explored in the PSI-KLONE system [2], in which parsing is characterized as an inference process called incremental description refinement.
[ { "id": "H90-1011.1", "char_start": 47, "char_end": 54 }, { "id": "H90-1011.2", "char_start": 88, "char_end": 113 }, { "id": "H90-1011.3", "char_start": 121, "char_end": 166 }, { "id": "H90-1011.4", "char_start": 171, "char_end": 211 }, { "id": "H90-1011.5", "char_start": 258, "char_end": 280 }, { "id": "H90-1011.6", "char_start": 353, "char_end": 397 }, { "id": "H90-1011.7", "char_start": 442, "char_end": 488 }, { "id": "H90-1011.8", "char_start": 507, "char_end": 548 }, { "id": "H90-1011.9", "char_start": 591, "char_end": 625 }, { "id": "H90-1011.10", "char_start": 761, "char_end": 778 }, { "id": "H90-1011.11", "char_start": 840, "char_end": 867 }, { "id": "H90-1011.12", "char_start": 872, "char_end": 879 }, { "id": "H90-1011.13", "char_start": 884, "char_end": 907 }, { "id": "H90-1011.14", "char_start": 934, "char_end": 950 }, { "id": "H90-1011.15", "char_start": 965, "char_end": 972 }, { "id": "H90-1011.16", "char_start": 1021, "char_end": 1055 } ]
[ { "label": 1, "arg1": "H90-1011.1", "arg2": "H90-1011.2", "reverse": true }, { "label": 6, "arg1": "H90-1011.4", "arg2": "H90-1011.6", "reverse": false }, { "label": 1, "arg1": "H90-1011.7", "arg2": "H90-1011.8", "reverse": false }, { "label": 1, "arg1": "H90-1011.11", "arg2": "H90-1011.12", "reverse": false }, { "label": 1, "arg1": "H90-1011.15", "arg2": "H90-1011.16", "reverse": true } ]
J97-1004
Developing And Empirically Evaluating Robust Explanation Generators : The KNIGHT Experiments
"To explain complex phenomena, an explanation system must be able to select information from a formal representation of domain knowledge, organize the selected information into multisentential discourse plans, and realize the discourse plans in text. Although recent years have witnessed significant progress in the development of sophisticated computational mechanisms for explanation, empirical results have been limited. This paper reports on a seven-year effort to empirically study explanation generation from semantically rich, large-scale knowledge bases. In particular, it describes a robust explanation system that constructs multisentential and multi-paragraph explanations from the a large-scale knowledge base in the domain of botanical anatomy, physiology, and development. We introduce the evaluation methodology and describe how performance was assessed with this methodology in the most extensive empirical evaluation conducted on an explanation system. In this evaluation, scored within ""half a grade"" of domain experts, and its performance exceeded that of one of the domain experts."
[ { "id": "J97-1004.1", "char_start": 35, "char_end": 53 }, { "id": "J97-1004.2", "char_start": 121, "char_end": 137 }, { "id": "J97-1004.3", "char_start": 178, "char_end": 209 }, { "id": "J97-1004.4", "char_start": 227, "char_end": 242 }, { "id": "J97-1004.5", "char_start": 375, "char_end": 386 }, { "id": "J97-1004.6", "char_start": 488, "char_end": 510 }, { "id": "J97-1004.7", "char_start": 516, "char_end": 562 }, { "id": "J97-1004.8", "char_start": 594, "char_end": 619 }, { "id": "J97-1004.9", "char_start": 636, "char_end": 684 }, { "id": "J97-1004.10", "char_start": 696, "char_end": 722 } ]
[ { "label": 1, "arg1": "J97-1004.1", "arg2": "J97-1004.2", "reverse": true }, { "label": 1, "arg1": "J97-1004.6", "arg2": "J97-1004.7", "reverse": true }, { "label": 1, "arg1": "J97-1004.8", "arg2": "J97-1004.10", "reverse": true } ]
P04-2005
Automatic Acquisition Of English Topic Signatures Based On A Second Language
We present a novel approach for automatically acquiring English topic signatures. Given a particular concept, or word sense, a topic signature is a set of words that tend to co-occur with it. Topic signatures can be useful in a number of Natural Language Processing (NLP) applications, such as Word Sense Disambiguation (WSD) and Text Summarisation. Our method takes advantage of the different way in which word senses are lexicalised in English and Chinese, and also exploits the large amount of Chinese text available in corpora and on the Web. We evaluated the topic signatures on a WSD task, where we trained a second-order vector cooccurrence algorithm on standard WSD datasets, with promising results.
[ { "id": "P04-2005.1", "char_start": 57, "char_end": 81 }, { "id": "P04-2005.2", "char_start": 102, "char_end": 109 }, { "id": "P04-2005.3", "char_start": 114, "char_end": 124 }, { "id": "P04-2005.4", "char_start": 128, "char_end": 143 }, { "id": "P04-2005.5", "char_start": 156, "char_end": 161 }, { "id": "P04-2005.6", "char_start": 193, "char_end": 209 }, { "id": "P04-2005.7", "char_start": 239, "char_end": 285 }, { "id": "P04-2005.8", "char_start": 295, "char_end": 326 }, { "id": "P04-2005.9", "char_start": 331, "char_end": 349 }, { "id": "P04-2005.10", "char_start": 408, "char_end": 419 }, { "id": "P04-2005.11", "char_start": 439, "char_end": 446 }, { "id": "P04-2005.12", "char_start": 451, "char_end": 458 }, { "id": "P04-2005.13", "char_start": 498, "char_end": 510 }, { "id": "P04-2005.14", "char_start": 524, "char_end": 531 }, { "id": "P04-2005.15", "char_start": 565, "char_end": 581 }, { "id": "P04-2005.16", "char_start": 587, "char_end": 595 }, { "id": "P04-2005.17", "char_start": 616, "char_end": 658 }, { "id": "P04-2005.18", "char_start": 662, "char_end": 683 } ]
[ { "label": 4, "arg1": "P04-2005.4", "arg2": "P04-2005.5", "reverse": true }, { "label": 1, "arg1": "P04-2005.6", "arg2": "P04-2005.7", "reverse": false }, { "label": 4, "arg1": "P04-2005.10", "arg2": "P04-2005.11", "reverse": false }, { "label": 4, "arg1": "P04-2005.13", "arg2": "P04-2005.14", "reverse": false }, { "label": 1, "arg1": "P04-2005.15", "arg2": "P04-2005.16", "reverse": false } ]
P04-2010
A Machine Learning Approach To German Pronoun Resolution
This paper presents a novel ensemble learning approach to resolving German pronouns. Boosting, the method in question, combines the moderately accurate hypotheses of several classifiers to form a highly accurate one. Experiments show that this approach is superior to a single decision-tree classifier. Furthermore, we present a standalone system that resolves pronouns in unannotated text by using a fully automatic sequence of preprocessing modules that mimics the manual annotation process. Although the system performs well within a limited textual domain, further research is needed to make it effective for open-domain question answering and text summarisation.
[ { "id": "P04-2010.1", "char_start": 29, "char_end": 55 }, { "id": "P04-2010.2", "char_start": 69, "char_end": 84 }, { "id": "P04-2010.3", "char_start": 86, "char_end": 94 }, { "id": "P04-2010.4", "char_start": 153, "char_end": 163 }, { "id": "P04-2010.5", "char_start": 175, "char_end": 186 }, { "id": "P04-2010.6", "char_start": 278, "char_end": 302 }, { "id": "P04-2010.7", "char_start": 330, "char_end": 347 }, { "id": "P04-2010.8", "char_start": 362, "char_end": 370 }, { "id": "P04-2010.9", "char_start": 374, "char_end": 390 }, { "id": "P04-2010.10", "char_start": 430, "char_end": 451 }, { "id": "P04-2010.11", "char_start": 468, "char_end": 493 }, { "id": "P04-2010.12", "char_start": 546, "char_end": 560 }, { "id": "P04-2010.13", "char_start": 614, "char_end": 644 }, { "id": "P04-2010.14", "char_start": 649, "char_end": 667 } ]
[ { "label": 1, "arg1": "P04-2010.1", "arg2": "P04-2010.2", "reverse": false }, { "label": 1, "arg1": "P04-2010.7", "arg2": "P04-2010.9", "reverse": false }, { "label": 6, "arg1": "P04-2010.12", "arg2": "P04-2010.13", "reverse": false } ]
C04-1035
Classifying Ellipsis In Dialogue : A Machine Learning Approach
This paper presents a machine learning approach to bare sluice disambiguation in dialogue. We extract a set of heuristic principles from a corpus-based sample and formulate them as probabilistic Horn clauses. We then use the predicates of such clauses to create a set of domain independent features to annotate an input dataset, and run two different machine learning algorithms : SLIPPER, a rule-based learning algorithm, and TiMBL, a memory-based system. Both learners perform well, yielding similar success rates of approximately 90%. The results show that the features in terms of which we formulate our heuristic principles have significant predictive power, and that rules that closely resemble our Horn clauses can be learnt automatically from these features.
[ { "id": "C04-1035.1", "char_start": 23, "char_end": 48 }, { "id": "C04-1035.2", "char_start": 52, "char_end": 77 }, { "id": "C04-1035.3", "char_start": 81, "char_end": 89 }, { "id": "C04-1035.4", "char_start": 111, "char_end": 131 }, { "id": "C04-1035.5", "char_start": 139, "char_end": 158 }, { "id": "C04-1035.6", "char_start": 181, "char_end": 207 }, { "id": "C04-1035.7", "char_start": 244, "char_end": 251 }, { "id": "C04-1035.8", "char_start": 271, "char_end": 298 }, { "id": "C04-1035.9", "char_start": 314, "char_end": 327 }, { "id": "C04-1035.10", "char_start": 351, "char_end": 378 }, { "id": "C04-1035.11", "char_start": 392, "char_end": 421 }, { "id": "C04-1035.12", "char_start": 436, "char_end": 455 }, { "id": "C04-1035.13", "char_start": 502, "char_end": 515 }, { "id": "C04-1035.14", "char_start": 557, "char_end": 565 }, { "id": "C04-1035.15", "char_start": 601, "char_end": 621 }, { "id": "C04-1035.16", "char_start": 666, "char_end": 671 }, { "id": "C04-1035.17", "char_start": 698, "char_end": 710 }, { "id": "C04-1035.18", "char_start": 750, "char_end": 758 } ]
[ { "label": 1, "arg1": "C04-1035.1", "arg2": "C04-1035.2", "reverse": false }, { "label": 3, "arg1": "C04-1035.4", "arg2": "C04-1035.6", "reverse": true }, { "label": 3, "arg1": "C04-1035.8", "arg2": "C04-1035.9", "reverse": false }, { "label": 6, "arg1": "C04-1035.11", "arg2": "C04-1035.12", "reverse": false }, { "label": 3, "arg1": "C04-1035.14", "arg2": "C04-1035.15", "reverse": false }, { "label": 6, "arg1": "C04-1035.16", "arg2": "C04-1035.17", "reverse": false } ]
C04-1036
Feature Vector Quality And Distributional Similarity
We suggest a new goal and evaluation criterion for word similarity measures. The new criterion – meaning-entailing substitutability – fits the needs of semantic-oriented NLP applications and can be evaluated directly (independent of an application) at a good level of human agreement. Motivated by this semantic criterion we analyze the empirical quality of distributional word feature vectors and its impact on word similarity results, proposing an objective measure for evaluating feature vector quality. Finally, a novel feature weighting and selection function is presented, which yields superior feature vectors and better word similarity performance.
[ { "id": "C04-1036.1", "char_start": 27, "char_end": 47 }, { "id": "C04-1036.2", "char_start": 52, "char_end": 76 }, { "id": "C04-1036.3", "char_start": 98, "char_end": 132 }, { "id": "C04-1036.4", "char_start": 153, "char_end": 187 }, { "id": "C04-1036.5", "char_start": 269, "char_end": 284 }, { "id": "C04-1036.6", "char_start": 304, "char_end": 322 }, { "id": "C04-1036.7", "char_start": 359, "char_end": 394 }, { "id": "C04-1036.8", "char_start": 413, "char_end": 436 }, { "id": "C04-1036.9", "char_start": 484, "char_end": 506 }, { "id": "C04-1036.10", "char_start": 525, "char_end": 565 }, { "id": "C04-1036.11", "char_start": 602, "char_end": 617 }, { "id": "C04-1036.12", "char_start": 629, "char_end": 656 } ]
[ { "label": 1, "arg1": "C04-1036.3", "arg2": "C04-1036.4", "reverse": false }, { "label": 3, "arg1": "C04-1036.6", "arg2": "C04-1036.7", "reverse": false }, { "label": 2, "arg1": "C04-1036.10", "arg2": "C04-1036.11", "reverse": false } ]
C04-1068
Filtering Speaker-Specific Words From Electronic Discussions
The work presented in this paper is the first step in a project which aims to cluster and summarise electronic discussions in the context of help-desk applications. The eventual objective of this project is to use these summaries to assist help-desk users and operators. In this paper, we identify features of electronic discussions that influence the clustering process, and offer a filtering mechanism that removes undesirable influences. We tested the clustering and filtering processes on electronic newsgroup discussions, and evaluated their performance by means of two experiments : coarse-level clustering and simple information retrieval.
[ { "id": "C04-1068.1", "char_start": 101, "char_end": 123 }, { "id": "C04-1068.2", "char_start": 142, "char_end": 164 }, { "id": "C04-1068.3", "char_start": 221, "char_end": 230 }, { "id": "C04-1068.4", "char_start": 299, "char_end": 307 }, { "id": "C04-1068.5", "char_start": 311, "char_end": 333 }, { "id": "C04-1068.6", "char_start": 353, "char_end": 371 }, { "id": "C04-1068.7", "char_start": 385, "char_end": 404 }, { "id": "C04-1068.8", "char_start": 430, "char_end": 440 }, { "id": "C04-1068.9", "char_start": 456, "char_end": 490 }, { "id": "C04-1068.10", "char_start": 494, "char_end": 526 }, { "id": "C04-1068.11", "char_start": 548, "char_end": 559 }, { "id": "C04-1068.12", "char_start": 590, "char_end": 613 }, { "id": "C04-1068.13", "char_start": 621, "char_end": 642 } ]
[ { "label": 3, "arg1": "C04-1068.4", "arg2": "C04-1068.5", "reverse": false }, { "label": 1, "arg1": "C04-1068.9", "arg2": "C04-1068.10", "reverse": false }, { "label": 2, "arg1": "C04-1068.11", "arg2": "C04-1068.12", "reverse": true } ]
C04-1080
Part-Of-Speech Tagging In Context
We present a new HMM tagger that exploits context on both sides of a word to be tagged, and evaluate it in both the unsupervised and supervised case. Along the way, we present the first comprehensive comparison of unsupervised methods for part-of-speech tagging, noting that published results to date have not been comparable across corpora or lexicons. Observing that the quality of the lexicon greatly impacts the accuracy that can be achieved by the algorithms, we present a method of HMM training that improves accuracy when training of lexical probabilities is unstable. Finally, we show how this new tagger achieves state-of-the-art results in a supervised, non-training intensive framework.
[ { "id": "C04-1080.1", "char_start": 43, "char_end": 50 }, { "id": "C04-1080.2", "char_start": 70, "char_end": 74 }, { "id": "C04-1080.3", "char_start": 117, "char_end": 149 }, { "id": "C04-1080.4", "char_start": 215, "char_end": 262 }, { "id": "C04-1080.5", "char_start": 334, "char_end": 341 }, { "id": "C04-1080.6", "char_start": 345, "char_end": 353 }, { "id": "C04-1080.7", "char_start": 374, "char_end": 381 }, { "id": "C04-1080.8", "char_start": 389, "char_end": 396 }, { "id": "C04-1080.9", "char_start": 417, "char_end": 425 }, { "id": "C04-1080.10", "char_start": 454, "char_end": 464 }, { "id": "C04-1080.11", "char_start": 489, "char_end": 501 }, { "id": "C04-1080.12", "char_start": 516, "char_end": 524 }, { "id": "C04-1080.13", "char_start": 530, "char_end": 538 }, { "id": "C04-1080.14", "char_start": 542, "char_end": 563 }, { "id": "C04-1080.15", "char_start": 653, "char_end": 697 } ]
[ { "label": 3, "arg1": "C04-1080.1", "arg2": "C04-1080.2", "reverse": false }, { "label": 2, "arg1": "C04-1080.7", "arg2": "C04-1080.9", "reverse": false }, { "label": 2, "arg1": "C04-1080.11", "arg2": "C04-1080.12", "reverse": false } ]
C04-1096
Generation Of Relative Referring Expressions Based On Perceptual Grouping
Past work on generating referring expressions mainly utilized attributes of objects and binary relations between objects. However, such an approach does not work well when there is no distinctive attribute among objects. To overcome this limitation, this paper proposes a method utilizing the perceptual groups of objects and n-ary relations among them. The key is to identify groups of objects that are naturally recognized by humans. We conducted psychological experiments with 42 subjects to collect referring expressions in such situations, and built a generation algorithm based on the results. The evaluation using another 23 subjects showed that the proposed method could effectively generate proper referring expressions.
[ { "id": "C04-1096.1", "char_start": 25, "char_end": 46 }, { "id": "C04-1096.2", "char_start": 77, "char_end": 84 }, { "id": "C04-1096.3", "char_start": 89, "char_end": 105 }, { "id": "C04-1096.4", "char_start": 114, "char_end": 121 }, { "id": "C04-1096.5", "char_start": 213, "char_end": 220 }, { "id": "C04-1096.6", "char_start": 315, "char_end": 322 }, { "id": "C04-1096.7", "char_start": 327, "char_end": 342 }, { "id": "C04-1096.8", "char_start": 388, "char_end": 395 }, { "id": "C04-1096.9", "char_start": 504, "char_end": 525 }, { "id": "C04-1096.10", "char_start": 558, "char_end": 578 } ]
[ { "label": 1, "arg1": "C04-1096.1", "arg2": "C04-1096.2", "reverse": false }, { "label": 3, "arg1": "C04-1096.3", "arg2": "C04-1096.4", "reverse": false }, { "label": 3, "arg1": "C04-1096.6", "arg2": "C04-1096.7", "reverse": true } ]
C04-1103
Direct Orthographical Mapping For Machine Transliteration
Machine transliteration/back-transliteration plays an important role in many multilingual speech and language applications. In this paper, a novel framework for machine transliteration/backtransliteration that allows us to carry out direct orthographical mapping (DOM) between two different languages is presented. Under this framework, a joint source-channel transliteration model, also called n-gram transliteration model (n-gram TM), is further proposed to model the transliteration process. We evaluate the proposed methods through several transliteration/backtransliteration experiments for English/Chinese and English/Japanese language pairs. Our study reveals that the proposed method not only reduces an extensive system development effort but also improves the transliteration accuracy significantly.
[ { "id": "C04-1103.1", "char_start": 1, "char_end": 45 }, { "id": "C04-1103.2", "char_start": 78, "char_end": 123 }, { "id": "C04-1103.3", "char_start": 162, "char_end": 205 }, { "id": "C04-1103.4", "char_start": 234, "char_end": 269 }, { "id": "C04-1103.5", "char_start": 292, "char_end": 301 }, { "id": "C04-1103.6", "char_start": 340, "char_end": 382 }, { "id": "C04-1103.7", "char_start": 396, "char_end": 436 }, { "id": "C04-1103.8", "char_start": 471, "char_end": 494 }, { "id": "C04-1103.9", "char_start": 545, "char_end": 592 }, { "id": "C04-1103.10", "char_start": 597, "char_end": 648 }, { "id": "C04-1103.11", "char_start": 723, "char_end": 748 }, { "id": "C04-1103.12", "char_start": 771, "char_end": 795 } ]
[ { "label": 4, "arg1": "C04-1103.1", "arg2": "C04-1103.2", "reverse": false }, { "label": 1, "arg1": "C04-1103.3", "arg2": "C04-1103.4", "reverse": false }, { "label": 3, "arg1": "C04-1103.7", "arg2": "C04-1103.8", "reverse": false }, { "label": 1, "arg1": "C04-1103.9", "arg2": "C04-1103.10", "reverse": false } ]
C04-1112
A Lemma-Based Approach To A Maximum Entropy Word Sense Disambiguation System For Dutch
In this paper, we present a corpus-based supervised word sense disambiguation (WSD) system for Dutch which combines statistical classification (maximum entropy) with linguistic information. Instead of building individual classifiers per ambiguous wordform, we introduce a lemma-based approach. The advantage of this novel method is that it clusters all inflected forms of an ambiguous word in one classifier, therefore augmenting the training material available to the algorithm. Testing the lemma-based model on the Dutch Senseval-2 test data, we achieve a significant increase in accuracy over the wordform model. Also, the WSD system based on lemmas is smaller and more robust.
[ { "id": "C04-1112.1", "char_start": 29, "char_end": 91 }, { "id": "C04-1112.2", "char_start": 96, "char_end": 101 }, { "id": "C04-1112.3", "char_start": 117, "char_end": 143 }, { "id": "C04-1112.4", "char_start": 145, "char_end": 160 }, { "id": "C04-1112.5", "char_start": 167, "char_end": 189 }, { "id": "C04-1112.6", "char_start": 222, "char_end": 233 }, { "id": "C04-1112.7", "char_start": 238, "char_end": 256 }, { "id": "C04-1112.8", "char_start": 273, "char_end": 293 }, { "id": "C04-1112.9", "char_start": 354, "char_end": 369 }, { "id": "C04-1112.10", "char_start": 376, "char_end": 390 }, { "id": "C04-1112.11", "char_start": 398, "char_end": 408 }, { "id": "C04-1112.12", "char_start": 435, "char_end": 452 }, { "id": "C04-1112.13", "char_start": 470, "char_end": 479 }, { "id": "C04-1112.14", "char_start": 493, "char_end": 510 }, { "id": "C04-1112.15", "char_start": 518, "char_end": 544 }, { "id": "C04-1112.16", "char_start": 583, "char_end": 591 }, { "id": "C04-1112.17", "char_start": 601, "char_end": 615 }, { "id": "C04-1112.18", "char_start": 627, "char_end": 653 } ]
[ { "label": 1, "arg1": "C04-1112.1", "arg2": "C04-1112.2", "reverse": false }, { "label": 6, "arg1": "C04-1112.6", "arg2": "C04-1112.8", "reverse": false }, { "label": 3, "arg1": "C04-1112.9", "arg2": "C04-1112.10", "reverse": false }, { "label": 1, "arg1": "C04-1112.12", "arg2": "C04-1112.13", "reverse": false }, { "label": 1, "arg1": "C04-1112.14", "arg2": "C04-1112.15", "reverse": false } ]
C04-1116
Term Aggregation : Mining Synonymous Expressions Using Personal Stylistic Variations
We present a text mining method for finding synonymous expressions based on the distributional hypothesis in a set of coherent corpora. This paper proposes a new methodology to improve the accuracy of a term aggregation system using each author's text as a coherent corpus. Our approach is based on the idea that one person tends to use one expression for one meaning. According to our assumption, most of the words with similar context features in each author's corpus tend not to be synonymous expressions. Our proposed method improves the accuracy of our term aggregation system, showing that our approach is successful.
[ { "id": "C04-1116.1", "char_start": 14, "char_end": 32 }, { "id": "C04-1116.2", "char_start": 45, "char_end": 67 }, { "id": "C04-1116.3", "char_start": 81, "char_end": 106 }, { "id": "C04-1116.4", "char_start": 128, "char_end": 135 }, { "id": "C04-1116.5", "char_start": 190, "char_end": 198 }, { "id": "C04-1116.6", "char_start": 204, "char_end": 227 }, { "id": "C04-1116.7", "char_start": 248, "char_end": 252 }, { "id": "C04-1116.8", "char_start": 267, "char_end": 273 }, { "id": "C04-1116.9", "char_start": 342, "char_end": 352 }, { "id": "C04-1116.10", "char_start": 361, "char_end": 368 }, { "id": "C04-1116.11", "char_start": 411, "char_end": 416 }, { "id": "C04-1116.12", "char_start": 422, "char_end": 446 }, { "id": "C04-1116.13", "char_start": 464, "char_end": 470 }, { "id": "C04-1116.14", "char_start": 486, "char_end": 508 }, { "id": "C04-1116.15", "char_start": 543, "char_end": 551 }, { "id": "C04-1116.16", "char_start": 559, "char_end": 582 } ]
[ { "label": 1, "arg1": "C04-1116.1", "arg2": "C04-1116.3", "reverse": true }, { "label": 2, "arg1": "C04-1116.5", "arg2": "C04-1116.6", "reverse": true }, { "label": 1, "arg1": "C04-1116.7", "arg2": "C04-1116.8", "reverse": false }, { "label": 3, "arg1": "C04-1116.9", "arg2": "C04-1116.10", "reverse": true }, { "label": 3, "arg1": "C04-1116.11", "arg2": "C04-1116.12", "reverse": true }, { "label": 2, "arg1": "C04-1116.15", "arg2": "C04-1116.16", "reverse": true } ]