id (stringlengths 8-8) | title (stringlengths 18-138) | abstract (stringlengths 177-1.96k) | entities (list) | relation (list)
---|---|---|---|---|
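Each row stores its abstract verbatim; entities are character-offset spans into that text, and relations are typed links between entity ids. As a minimal sketch (not part of any official tooling), assuming rows are loaded as Python dicts mirroring the columns above, the snippet below resolves offsets to surface strings. Judging from the records in this dump, offsets appear to be 1-based with an exclusive end, so Python slicing needs a shift of one; verify against your own copy of the data.

```python
# Minimal sketch: resolve entity offsets against an abstract.
# Assumption: rows are available as dicts mirroring the columns above,
# and offsets are 1-based with an exclusive end (consistent with the
# first record; verify before relying on it).

def entity_text(abstract: str, entity: dict) -> str:
    """Slice an entity's surface string out of its abstract."""
    return abstract[entity["char_start"] - 1 : entity["char_end"] - 1]

record = {  # hypothetical in-memory form of the first row
    "id": "P97-1072",
    "abstract": (
        "We present preliminary results concerning robust techniques for "
        "resolving bridging definite descriptions. ..."  # truncated
    ),
    "entities": [{"id": "P97-1072.1", "char_start": 65, "char_end": 105}],
}

for e in record["entities"]:
    print(e["id"], "->", entity_text(record["abstract"], e))
# P97-1072.1 -> resolving bridging definite descriptions
```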
P97-1072 | Towards resolution of bridging descriptions
|
We present preliminary results concerning robust techniques for resolving bridging definite descriptions. We report our analysis of a collection of 20 Wall Street Journal articles from the Penn Treebank Corpus and our experiments with WordNet to identify relations between bridging descriptions and their antecedents. | [
{
"id": "P97-1072.1",
"char_start": 65,
"char_end": 105
},
{
"id": "P97-1072.2",
"char_start": 152,
"char_end": 180
},
{
"id": "P97-1072.3",
"char_start": 190,
"char_end": 210
},
{
"id": "P97-1072.4",
"char_start": 236,
"char_end": 243
},
{
"id": "P97-1072.5",
"char_start": 274,
"char_end": 295
},
{
"id": "P97-1072.6",
"char_start": 306,
"char_end": 317
}
] | [
{
"label": 4,
"arg1": "P97-1072.2",
"arg2": "P97-1072.3",
"reverse": false
}
] |
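A relation references entity ids rather than offsets, so reading one off requires a join through the entity list. Below is a self-contained sketch for the P97-1072 row above, under the same 1-based, exclusive-end offset assumption; the integer `label` values are not glossed in this excerpt, so they are carried through unchanged.

```python
# Sketch: resolve one relation of the P97-1072 record to surface strings.
# The `label` codes are not glossed in this dump and are reported as-is.

ABSTRACT = (
    "We present preliminary results concerning robust techniques for "
    "resolving bridging definite descriptions. We report our analysis "
    "of a collection of 20 Wall Street Journal articles from the Penn "
    "Treebank Corpus and our experiments with WordNet to identify "
    "relations between bridging descriptions and their antecedents."
)
ENTITIES = {
    "P97-1072.2": {"char_start": 152, "char_end": 180},
    "P97-1072.3": {"char_start": 190, "char_end": 210},
}
RELATION = {"label": 4, "arg1": "P97-1072.2", "arg2": "P97-1072.3", "reverse": False}

def span(entity_id: str) -> str:
    e = ENTITIES[entity_id]
    return ABSTRACT[e["char_start"] - 1 : e["char_end"] - 1]

arg1, arg2 = span(RELATION["arg1"]), span(RELATION["arg2"])
if RELATION["reverse"]:  # a reversed relation reads arg2 -> arg1
    arg1, arg2 = arg2, arg1
print(f"label {RELATION['label']}: {arg1} -> {arg2}")
# label 4: Wall Street Journal articles -> Penn Treebank Corpus
```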
P99-1058 | A semantically-derived subset of English for hardware verification
|
To verify hardware designs by model checking, circuit specifications are commonly expressed in the temporal logic CTL. Automatic conversion of English to CTL requires the definition of an appropriately restricted subset of English. We show how the limited semantic expressibility of CTL can be exploited to derive a hierarchy of subsets. Our strategy avoids potential difficulties with approaches that take existing computational semantic analyses of English as their starting point--such as the need to ensure that all sentences in the subset possess a CTL translation. | [
{
"id": "P99-1058.1",
"char_start": 11,
"char_end": 27
},
{
"id": "P99-1058.2",
"char_start": 31,
"char_end": 45
},
{
"id": "P99-1058.3",
"char_start": 47,
"char_end": 69
},
{
"id": "P99-1058.4",
"char_start": 100,
"char_end": 118
},
{
"id": "P99-1058.5",
"char_start": 120,
"char_end": 158
},
{
"id": "P99-1058.6",
"char_start": 203,
"char_end": 220
},
{
"id": "P99-1058.7",
"char_start": 224,
"char_end": 231
},
{
"id": "P99-1058.8",
"char_start": 257,
"char_end": 280
},
{
"id": "P99-1058.9",
"char_start": 284,
"char_end": 287
},
{
"id": "P99-1058.10",
"char_start": 330,
"char_end": 337
},
{
"id": "P99-1058.11",
"char_start": 417,
"char_end": 448
},
{
"id": "P99-1058.12",
"char_start": 452,
"char_end": 459
},
{
"id": "P99-1058.13",
"char_start": 521,
"char_end": 530
},
{
"id": "P99-1058.14",
"char_start": 538,
"char_end": 544
},
{
"id": "P99-1058.15",
"char_start": 555,
"char_end": 570
}
] | [
{
"label": 1,
"arg1": "P99-1058.1",
"arg2": "P99-1058.2",
"reverse": true
},
{
"label": 3,
"arg1": "P99-1058.3",
"arg2": "P99-1058.4",
"reverse": true
},
{
"label": 4,
"arg1": "P99-1058.6",
"arg2": "P99-1058.7",
"reverse": false
},
{
"label": 3,
"arg1": "P99-1058.8",
"arg2": "P99-1058.9",
"reverse": false
},
{
"label": 5,
"arg1": "P99-1058.11",
"arg2": "P99-1058.12",
"reverse": false
},
{
"label": 4,
"arg1": "P99-1058.13",
"arg2": "P99-1058.14",
"reverse": false
}
] |
P05-1039 |
What To Do When Lexicalization Fails: Parsing German With Suffix Analysis And Smoothing |
In this paper, we present an unlexicalized parser for German which employs smoothing and suffix analysis to achieve a labelled bracket F-score of 76.2, higher than previously reported results on the NEGRA corpus. In addition to the high accuracy of the model, the use of smoothing in an unlexicalized parser allows us to better examine the interplay between smoothing and parsing results.
| [
{
"id": "P05-1039.1",
"char_start": 30,
"char_end": 50
},
{
"id": "P05-1039.2",
"char_start": 55,
"char_end": 61
},
{
"id": "P05-1039.3",
"char_start": 76,
"char_end": 85
},
{
"id": "P05-1039.4",
"char_start": 90,
"char_end": 105
},
{
"id": "P05-1039.5",
"char_start": 119,
"char_end": 143
},
{
"id": "P05-1039.6",
"char_start": 200,
"char_end": 212
},
{
"id": "P05-1039.7",
"char_start": 238,
"char_end": 246
},
{
"id": "P05-1039.8",
"char_start": 272,
"char_end": 281
},
{
"id": "P05-1039.9",
"char_start": 288,
"char_end": 308
},
{
"id": "P05-1039.10",
"char_start": 359,
"char_end": 368
},
{
"id": "P05-1039.11",
"char_start": 373,
"char_end": 380
}
] | [
{
"label": 1,
"arg1": "P05-1039.1",
"arg2": "P05-1039.2",
"reverse": false
},
{
"label": 2,
"arg1": "P05-1039.4",
"arg2": "P05-1039.5",
"reverse": false
},
{
"label": 1,
"arg1": "P05-1039.8",
"arg2": "P05-1039.9",
"reverse": false
}
] |
P05-1058 | Alignment Model Adaptation For Domain-Specific Word Alignment |
This paper proposes an alignment adaptation approach to improve domain-specific (in-domain) word alignment. The basic idea of alignment adaptation is to use out-of-domain corpus to improve in-domain word alignment results. In this paper, we first train two statistical word alignment models with the large-scale out-of-domain corpus and the small-scale in-domain corpus respectively, and then interpolate these two models to improve the domain-specific word alignment. Experimental results show that our approach improves domain-specific word alignment in terms of both precision and recall, achieving a relative error rate reduction of 6.56% as compared with the state-of-the-art technologies.
| [
{
"id": "P05-1058.1",
"char_start": 24,
"char_end": 53
},
{
"id": "P05-1058.2",
"char_start": 65,
"char_end": 107
},
{
"id": "P05-1058.3",
"char_start": 127,
"char_end": 147
},
{
"id": "P05-1058.4",
"char_start": 158,
"char_end": 178
},
{
"id": "P05-1058.5",
"char_start": 190,
"char_end": 214
},
{
"id": "P05-1058.6",
"char_start": 258,
"char_end": 291
},
{
"id": "P05-1058.7",
"char_start": 313,
"char_end": 333
},
{
"id": "P05-1058.8",
"char_start": 354,
"char_end": 370
},
{
"id": "P05-1058.9",
"char_start": 438,
"char_end": 468
},
{
"id": "P05-1058.10",
"char_start": 523,
"char_end": 553
},
{
"id": "P05-1058.11",
"char_start": 571,
"char_end": 580
},
{
"id": "P05-1058.12",
"char_start": 585,
"char_end": 591
},
{
"id": "P05-1058.13",
"char_start": 605,
"char_end": 634
}
] | [
{
"label": 1,
"arg1": "P05-1058.1",
"arg2": "P05-1058.2",
"reverse": false
},
{
"label": 1,
"arg1": "P05-1058.4",
"arg2": "P05-1058.5",
"reverse": false
},
{
"label": 6,
"arg1": "P05-1058.7",
"arg2": "P05-1058.8",
"reverse": false
},
{
"label": 2,
"arg1": "P05-1058.10",
"arg2": "P05-1058.13",
"reverse": false
}
] |
P05-3001 |
An Information-State Approach To Collaborative Reference |
We describe a dialogue system that works with its interlocutor to identify objects. Our contributions include a concise, modular architecture with reversible processes of understanding and generation, an information-state model of reference, and flexible links between semantics and collaborative problem solving.
| [
{
"id": "P05-3001.1",
"char_start": 15,
"char_end": 30
},
{
"id": "P05-3001.2",
"char_start": 122,
"char_end": 142
},
{
"id": "P05-3001.3",
"char_start": 172,
"char_end": 185
},
{
"id": "P05-3001.4",
"char_start": 190,
"char_end": 200
},
{
"id": "P05-3001.5",
"char_start": 205,
"char_end": 241
},
{
"id": "P05-3001.6",
"char_start": 270,
"char_end": 279
},
{
"id": "P05-3001.7",
"char_start": 284,
"char_end": 313
}
] | [
{
"label": 4,
"arg1": "P05-3001.2",
"arg2": "P05-3001.3",
"reverse": true
}
] |
E83-1021 | AN APPROACH TO NATURAL LANGUAGE IN THE SI-NETS PARADIGM
|
This article deals with the interpretation of conceptual operations underlying the communicative use of natural language (NL) within the Structured Inheritance Network (SI-Nets) paradigm. The operations are reduced to functions of a formal language, thus changing the level of abstraction of the operations to be performed on SI-Nets. In this sense, operations on SI-Nets are not merely isomorphic to single epistemological objects, but can be viewed as a simulation of processes on a different level, that pertaining to the conceptual system of NL. For this purpose, we have designed a version of KL-ONE which represents the epistemological level, while the new experimental language, KL-Conc, represents the conceptual level. KL-Conc would seem to be a more natural and intuitive way of interacting with SI-Nets.
| [
{
"id": "E83-1021.1",
"char_start": 29,
"char_end": 43
},
{
"id": "E83-1021.2",
"char_start": 47,
"char_end": 68
},
{
"id": "E83-1021.3",
"char_start": 105,
"char_end": 126
},
{
"id": "E83-1021.4",
"char_start": 138,
"char_end": 187
},
{
"id": "E83-1021.5",
"char_start": 219,
"char_end": 228
},
{
"id": "E83-1021.6",
"char_start": 234,
"char_end": 249
},
{
"id": "E83-1021.7",
"char_start": 327,
"char_end": 334
},
{
"id": "E83-1021.8",
"char_start": 365,
"char_end": 372
},
{
"id": "E83-1021.9",
"char_start": 526,
"char_end": 543
},
{
"id": "E83-1021.10",
"char_start": 547,
"char_end": 549
},
{
"id": "E83-1021.11",
"char_start": 599,
"char_end": 605
},
{
"id": "E83-1021.12",
"char_start": 627,
"char_end": 648
},
{
"id": "E83-1021.13",
"char_start": 687,
"char_end": 694
},
{
"id": "E83-1021.14",
"char_start": 711,
"char_end": 727
},
{
"id": "E83-1021.15",
"char_start": 807,
"char_end": 814
}
] | [
{
"label": 3,
"arg1": "E83-1021.1",
"arg2": "E83-1021.2",
"reverse": false
},
{
"label": 4,
"arg1": "E83-1021.5",
"arg2": "E83-1021.6",
"reverse": false
},
{
"label": 3,
"arg1": "E83-1021.9",
"arg2": "E83-1021.10",
"reverse": false
},
{
"label": 3,
"arg1": "E83-1021.11",
"arg2": "E83-1021.12",
"reverse": false
},
{
"label": 3,
"arg1": "E83-1021.13",
"arg2": "E83-1021.14",
"reverse": false
}
] |
E87-1043 | Iteration, Habituality And Verb Form Semantics |
The verb forms are often claimed to convey two kinds of information : 1. whether the event described in a sentence is present, past or future (= deictic information) 2. whether the event described in a sentence is presented as completed, going on, just starting or being finished (= aspectual information). It will be demonstrated in this paper that one has to add a third component to the analysis of verb form meanings, namely whether or not they express habituality. The framework of the analysis is model-theoretic semantics.
| [
{
"id": "E87-1043.1",
"char_start": 5,
"char_end": 15
},
{
"id": "E87-1043.2",
"char_start": 57,
"char_end": 68
},
{
"id": "E87-1043.3",
"char_start": 86,
"char_end": 91
},
{
"id": "E87-1043.4",
"char_start": 107,
"char_end": 115
},
{
"id": "E87-1043.5",
"char_start": 119,
"char_end": 126
},
{
"id": "E87-1043.6",
"char_start": 128,
"char_end": 132
},
{
"id": "E87-1043.7",
"char_start": 136,
"char_end": 142
},
{
"id": "E87-1043.8",
"char_start": 146,
"char_end": 165
},
{
"id": "E87-1043.9",
"char_start": 182,
"char_end": 187
},
{
"id": "E87-1043.10",
"char_start": 203,
"char_end": 211
},
{
"id": "E87-1043.11",
"char_start": 284,
"char_end": 305
},
{
"id": "E87-1043.12",
"char_start": 403,
"char_end": 421
},
{
"id": "E87-1043.13",
"char_start": 458,
"char_end": 469
},
{
"id": "E87-1043.14",
"char_start": 504,
"char_end": 529
}
] | [
{
"label": 3,
"arg1": "E87-1043.1",
"arg2": "E87-1043.2",
"reverse": true
},
{
"label": 4,
"arg1": "E87-1043.3",
"arg2": "E87-1043.4",
"reverse": false
},
{
"label": 4,
"arg1": "E87-1043.9",
"arg2": "E87-1043.10",
"reverse": false
},
{
"label": 3,
"arg1": "E87-1043.12",
"arg2": "E87-1043.13",
"reverse": true
}
] |
E91-1050 |
A Language For The Statement Of Binary Relations Over Feature Structures |
Unification is often the appropriate method for expressing relations between representations in the form of feature structures; however, there are circumstances in which a different approach is desirable. A declarative formalism is presented which permits direct mappings of one feature structure into another, and illustrative examples are given of its application to areas of current interest.
| [
{
"id": "E91-1050.1",
"char_start": 1,
"char_end": 12
},
{
"id": "E91-1050.2",
"char_start": 60,
"char_end": 69
},
{
"id": "E91-1050.3",
"char_start": 78,
"char_end": 93
},
{
"id": "E91-1050.4",
"char_start": 109,
"char_end": 127
},
{
"id": "E91-1050.5",
"char_start": 208,
"char_end": 229
},
{
"id": "E91-1050.6",
"char_start": 264,
"char_end": 272
},
{
"id": "E91-1050.7",
"char_start": 280,
"char_end": 297
}
] | [
{
"label": 3,
"arg1": "E91-1050.2",
"arg2": "E91-1050.3",
"reverse": false
},
{
"label": 1,
"arg1": "E91-1050.5",
"arg2": "E91-1050.6",
"reverse": false
}
] |
E93-1025 | A Discourse Copying Algorithm for Ellipsis and Anaphora Resolution
|
We give an analysis of ellipsis resolution in terms of a straightforward discourse copying algorithm that correctly predicts a wide range of phenomena. The treatment does not suffer from problems inherent in identity-of-relations analyses. Furthermore, in contrast to the approach of Dalrymple et al. [1991], the treatment directly encodes the intuitive distinction between full NPs and the referential elements that corefer with them through what we term role linking. The correct predictions for several problematic examples of ellipsis naturally result. Finally, the analysis extends directly to other discourse copying phenomena.
| [
{
"id": "E93-1025.1",
"char_start": 25,
"char_end": 44
},
{
"id": "E93-1025.2",
"char_start": 75,
"char_end": 102
},
{
"id": "E93-1025.3",
"char_start": 210,
"char_end": 240
},
{
"id": "E93-1025.4",
"char_start": 376,
"char_end": 384
},
{
"id": "E93-1025.5",
"char_start": 393,
"char_end": 413
},
{
"id": "E93-1025.6",
"char_start": 458,
"char_end": 470
},
{
"id": "E93-1025.7",
"char_start": 484,
"char_end": 495
},
{
"id": "E93-1025.8",
"char_start": 532,
"char_end": 540
},
{
"id": "E93-1025.9",
"char_start": 607,
"char_end": 634
}
] | [
{
"label": 3,
"arg1": "E93-1025.1",
"arg2": "E93-1025.2",
"reverse": true
},
{
"label": 6,
"arg1": "E93-1025.4",
"arg2": "E93-1025.5",
"reverse": false
},
{
"label": 3,
"arg1": "E93-1025.7",
"arg2": "E93-1025.8",
"reverse": false
}
] |
E99-1023 | Representing Text Chunks |
Dividing sentences in chunks of words is a useful preprocessing step for parsing, information extraction and information retrieval. (Ramshaw and Marcus, 1995) have introduced a "convenient" data representation for chunking by converting it to a tagging task. In this paper we will examine seven different data representations for the problem of recognizing noun phrase chunks. We will show that the data representation choice has a minor influence on chunking performance. However, equipped with the most suitable data representation, our memory-based learning chunker was able to improve the best published chunking results for a standard data set.
| [
{
"id": "E99-1023.1",
"char_start": 10,
"char_end": 19
},
{
"id": "E99-1023.2",
"char_start": 23,
"char_end": 38
},
{
"id": "E99-1023.3",
"char_start": 74,
"char_end": 81
},
{
"id": "E99-1023.4",
"char_start": 83,
"char_end": 105
},
{
"id": "E99-1023.5",
"char_start": 110,
"char_end": 131
},
{
"id": "E99-1023.6",
"char_start": 191,
"char_end": 210
},
{
"id": "E99-1023.7",
"char_start": 215,
"char_end": 223
},
{
"id": "E99-1023.8",
"char_start": 246,
"char_end": 258
},
{
"id": "E99-1023.9",
"char_start": 306,
"char_end": 326
},
{
"id": "E99-1023.10",
"char_start": 358,
"char_end": 376
},
{
"id": "E99-1023.11",
"char_start": 400,
"char_end": 426
},
{
"id": "E99-1023.12",
"char_start": 452,
"char_end": 472
},
{
"id": "E99-1023.13",
"char_start": 514,
"char_end": 533
},
{
"id": "E99-1023.14",
"char_start": 539,
"char_end": 568
},
{
"id": "E99-1023.15",
"char_start": 608,
"char_end": 624
},
{
"id": "E99-1023.16",
"char_start": 631,
"char_end": 648
}
] | [
{
"label": 4,
"arg1": "E99-1023.1",
"arg2": "E99-1023.2",
"reverse": true
},
{
"label": 1,
"arg1": "E99-1023.6",
"arg2": "E99-1023.7",
"reverse": false
},
{
"label": 3,
"arg1": "E99-1023.9",
"arg2": "E99-1023.10",
"reverse": false
},
{
"label": 2,
"arg1": "E99-1023.11",
"arg2": "E99-1023.12",
"reverse": false
},
{
"label": 2,
"arg1": "E99-1023.14",
"arg2": "E99-1023.15",
"reverse": false
}
] |
E95-1021 | Tagging French - Comparing A Statistical And A Constraint-Based Method |
In this paper we compare two competing approaches to part-of-speech tagging, statistical and constraint-based disambiguation, using French as our test language. We imposed a time limit on our experiment: the amount of time spent on the design of our constraint system was about the same as the time we used to train and test the easy-to-implement statistical model. We describe the two systems and compare the results. The accuracy of the statistical method is reasonably good, comparable to taggers for English. But the constraint-based tagger seems to be superior even with the limited time we allowed ourselves for rule development.
| [
{
"id": "E95-1021.1",
"char_start": 54,
"char_end": 76
},
{
"id": "E95-1021.2",
"char_start": 78,
"char_end": 125
},
{
"id": "E95-1021.3",
"char_start": 133,
"char_end": 139
},
{
"id": "E95-1021.4",
"char_start": 147,
"char_end": 160
},
{
"id": "E95-1021.5",
"char_start": 251,
"char_end": 268
},
{
"id": "E95-1021.6",
"char_start": 348,
"char_end": 365
},
{
"id": "E95-1021.7",
"char_start": 424,
"char_end": 432
},
{
"id": "E95-1021.8",
"char_start": 440,
"char_end": 458
},
{
"id": "E95-1021.9",
"char_start": 493,
"char_end": 500
},
{
"id": "E95-1021.10",
"char_start": 505,
"char_end": 512
},
{
"id": "E95-1021.11",
"char_start": 522,
"char_end": 545
},
{
"id": "E95-1021.12",
"char_start": 619,
"char_end": 635
}
] | [
{
"label": 1,
"arg1": "E95-1021.1",
"arg2": "E95-1021.2",
"reverse": true
},
{
"label": 1,
"arg1": "E95-1021.3",
"arg2": "E95-1021.4",
"reverse": false
},
{
"label": 6,
"arg1": "E95-1021.5",
"arg2": "E95-1021.6",
"reverse": false
},
{
"label": 6,
"arg1": "E95-1021.8",
"arg2": "E95-1021.9",
"reverse": false
}
] |
E99-1015 |
An Annotation Scheme For Discourse-Level Argumentation In Research Articles |
In order to build robust automatic abstracting systems, there is a need for better training resources than are currently available. In this paper, we introduce an annotation scheme for scientific articles which can be used to build such a resource in a consistent way. The seven categories of the scheme are based on rhetorical moves of argumentation. Our experimental results show that the scheme is stable, reproducible and intuitive to use.
| [
{
"id": "E99-1015.1",
"char_start": 26,
"char_end": 55
},
{
"id": "E99-1015.2",
"char_start": 84,
"char_end": 102
},
{
"id": "E99-1015.3",
"char_start": 164,
"char_end": 181
},
{
"id": "E99-1015.4",
"char_start": 240,
"char_end": 248
},
{
"id": "E99-1015.5",
"char_start": 298,
"char_end": 304
},
{
"id": "E99-1015.6",
"char_start": 318,
"char_end": 334
},
{
"id": "E99-1015.7",
"char_start": 338,
"char_end": 351
},
{
"id": "E99-1015.8",
"char_start": 392,
"char_end": 398
}
] | [
{
"label": 1,
"arg1": "E99-1015.1",
"arg2": "E99-1015.2",
"reverse": true
},
{
"label": 1,
"arg1": "E99-1015.3",
"arg2": "E99-1015.4",
"reverse": false
}
] |
H91-1067 | Automatic Acquisition Of Subcategorization Frames From Tagged Text |
This paper describes an implemented program that takes a tagged text corpus and generates a partial list of the subcategorization frames in which each verb occurs. The completeness of the output list increases monotonically with the total occurrences of each verb in the training corpus. False positive rates are one to three percent. Five subcategorization frames are currently detected and we foresee no impediment to detecting many more. Ultimately, we expect to provide a large subcategorization dictionary to the NLP community and to train dictionaries for specific corpora.
| [
{
"id": "H91-1067.1",
"char_start": 58,
"char_end": 76
},
{
"id": "H91-1067.2",
"char_start": 113,
"char_end": 137
},
{
"id": "H91-1067.3",
"char_start": 152,
"char_end": 156
},
{
"id": "H91-1067.4",
"char_start": 240,
"char_end": 251
},
{
"id": "H91-1067.5",
"char_start": 260,
"char_end": 264
},
{
"id": "H91-1067.6",
"char_start": 272,
"char_end": 287
},
{
"id": "H91-1067.7",
"char_start": 289,
"char_end": 309
},
{
"id": "H91-1067.8",
"char_start": 341,
"char_end": 365
},
{
"id": "H91-1067.9",
"char_start": 483,
"char_end": 511
},
{
"id": "H91-1067.10",
"char_start": 519,
"char_end": 532
},
{
"id": "H91-1067.11",
"char_start": 546,
"char_end": 558
},
{
"id": "H91-1067.12",
"char_start": 572,
"char_end": 579
}
] | [
{
"label": 3,
"arg1": "H91-1067.2",
"arg2": "H91-1067.3",
"reverse": false
},
{
"label": 4,
"arg1": "H91-1067.5",
"arg2": "H91-1067.6",
"reverse": false
}
] |
A97-1021 | Large-Scale Acquisition of LCS-Based Lexicons for Foreign Language Tutoring
|
We focus on the problem of building large repositories of lexical conceptual structure (LCS) representations for verbs in multiple languages. One of the main results of this work is the definition of a relation between broad semantic classes and LCS meaning components. Our acquisition program - LEXICALL - takes, as input, the result of previous work on verb classification and thematic grid tagging, and outputs LCS representations for different languages. These representations have been ported into English, Arabic and Spanish lexicons, each containing approximately 9000 verbs. We are currently using these lexicons in an operational foreign language tutoring and machine translation.
| [
{
"id": "A97-1021.1",
"char_start": 43,
"char_end": 55
},
{
"id": "A97-1021.2",
"char_start": 59,
"char_end": 109
},
{
"id": "A97-1021.3",
"char_start": 114,
"char_end": 119
},
{
"id": "A97-1021.4",
"char_start": 132,
"char_end": 141
},
{
"id": "A97-1021.5",
"char_start": 220,
"char_end": 242
},
{
"id": "A97-1021.6",
"char_start": 247,
"char_end": 269
},
{
"id": "A97-1021.7",
"char_start": 275,
"char_end": 307
},
{
"id": "A97-1021.8",
"char_start": 356,
"char_end": 375
},
{
"id": "A97-1021.9",
"char_start": 380,
"char_end": 401
},
{
"id": "A97-1021.10",
"char_start": 415,
"char_end": 434
},
{
"id": "A97-1021.11",
"char_start": 449,
"char_end": 458
},
{
"id": "A97-1021.12",
"char_start": 466,
"char_end": 481
},
{
"id": "A97-1021.13",
"char_start": 504,
"char_end": 540
},
{
"id": "A97-1021.14",
"char_start": 577,
"char_end": 582
},
{
"id": "A97-1021.15",
"char_start": 613,
"char_end": 621
},
{
"id": "A97-1021.16",
"char_start": 628,
"char_end": 665
},
{
"id": "A97-1021.17",
"char_start": 670,
"char_end": 689
}
] | [
{
"label": 3,
"arg1": "A97-1021.2",
"arg2": "A97-1021.3",
"reverse": false
},
{
"label": 1,
"arg1": "A97-1021.9",
"arg2": "A97-1021.10",
"reverse": false
},
{
"label": 1,
"arg1": "A97-1021.12",
"arg2": "A97-1021.13",
"reverse": false
},
{
"label": 1,
"arg1": "A97-1021.15",
"arg2": "A97-1021.16",
"reverse": false
}
] |
A97-1050 |
Semi-Automatic Acquisition Of Domain-Specific Translation Lexicons |
We investigate the utility of an algorithm for translation lexicon acquisition (SABLE), used previously on a very large corpus to acquire general translation lexicons, when that algorithm is applied to a much smaller corpus to produce candidates for domain-specific translation lexicons.
| [
{
"id": "A97-1050.1",
"char_start": 34,
"char_end": 87
},
{
"id": "A97-1050.2",
"char_start": 121,
"char_end": 127
},
{
"id": "A97-1050.3",
"char_start": 147,
"char_end": 167
},
{
"id": "A97-1050.4",
"char_start": 179,
"char_end": 188
},
{
"id": "A97-1050.5",
"char_start": 218,
"char_end": 224
},
{
"id": "A97-1050.6",
"char_start": 251,
"char_end": 287
}
] | [
{
"label": 1,
"arg1": "A97-1050.1",
"arg2": "A97-1050.3",
"reverse": false
},
{
"label": 1,
"arg1": "A97-1050.4",
"arg2": "A97-1050.6",
"reverse": false
}
] |
J87-1003 | SIMULTANEOUS-DISTRIBUTIVE COORDINATION AND CONTEXT-FREENESS
|
English is shown to be trans-context-free on the basis of coordinations of the respectively type that involve strictly syntactic cross-serial agreement. The agreement in question involves number in nouns and reflexive pronouns and is syntactic rather than semantic in nature because grammatical number in English, like grammatical gender in languages such as French, is partly arbitrary. The formal proof, which makes crucial use of the Interchange Lemma of Ogden et al., is so constructed as to be valid even if English is presumed to contain grammatical sentences in which respectively operates across a pair of coordinate phrases one of whose members has fewer conjuncts than the other; it thus goes through whatever the facts may be regarding constructions with unequal numbers of conjuncts in the scope of respectively, whereas other arguments have foundered on this problem.
| [
{
"id": "J87-1003.1",
"char_start": 1,
"char_end": 8
},
{
"id": "J87-1003.2",
"char_start": 59,
"char_end": 72
},
{
"id": "J87-1003.3",
"char_start": 111,
"char_end": 152
},
{
"id": "J87-1003.4",
"char_start": 158,
"char_end": 167
},
{
"id": "J87-1003.5",
"char_start": 189,
"char_end": 195
},
{
"id": "J87-1003.6",
"char_start": 199,
"char_end": 204
},
{
"id": "J87-1003.7",
"char_start": 209,
"char_end": 227
},
{
"id": "J87-1003.8",
"char_start": 284,
"char_end": 302
},
{
"id": "J87-1003.9",
"char_start": 306,
"char_end": 313
},
{
"id": "J87-1003.10",
"char_start": 320,
"char_end": 338
},
{
"id": "J87-1003.11",
"char_start": 342,
"char_end": 351
},
{
"id": "J87-1003.12",
"char_start": 360,
"char_end": 366
},
{
"id": "J87-1003.13",
"char_start": 438,
"char_end": 455
},
{
"id": "J87-1003.14",
"char_start": 514,
"char_end": 521
},
{
"id": "J87-1003.15",
"char_start": 545,
"char_end": 566
},
{
"id": "J87-1003.16",
"char_start": 615,
"char_end": 633
},
{
"id": "J87-1003.17",
"char_start": 665,
"char_end": 674
},
{
"id": "J87-1003.18",
"char_start": 748,
"char_end": 761
},
{
"id": "J87-1003.19",
"char_start": 786,
"char_end": 795
},
{
"id": "J87-1003.20",
"char_start": 803,
"char_end": 808
},
{
"id": "J87-1003.21",
"char_start": 840,
"char_end": 849
}
] | [
{
"label": 3,
"arg1": "J87-1003.2",
"arg2": "J87-1003.3",
"reverse": false
},
{
"label": 3,
"arg1": "J87-1003.5",
"arg2": "J87-1003.6",
"reverse": false
},
{
"label": 4,
"arg1": "J87-1003.8",
"arg2": "J87-1003.9",
"reverse": false
},
{
"label": 4,
"arg1": "J87-1003.10",
"arg2": "J87-1003.12",
"reverse": false
},
{
"label": 4,
"arg1": "J87-1003.14",
"arg2": "J87-1003.15",
"reverse": true
},
{
"label": 4,
"arg1": "J87-1003.16",
"arg2": "J87-1003.17",
"reverse": true
},
{
"label": 4,
"arg1": "J87-1003.18",
"arg2": "J87-1003.19",
"reverse": true
}
] |
I05-5004 | A Class-oriented Approach to Building a Paraphrase Corpus |
Towards deep analysis of compositional classes of paraphrases, we have examined a class-oriented framework for collecting paraphrase examples, in which sentential paraphrases are collected for each paraphrase class separately by means of automatic candidate generation and manual judgement. Our preliminary experiments on building a paraphrase corpus have so far been producing promising results, which we have evaluated according to cost-efficiency, exhaustiveness, and reliability.
| [
{
"id": "I05-5004.1",
"char_start": 26,
"char_end": 62
},
{
"id": "I05-5004.2",
"char_start": 83,
"char_end": 107
},
{
"id": "I05-5004.3",
"char_start": 123,
"char_end": 142
},
{
"id": "I05-5004.4",
"char_start": 153,
"char_end": 175
},
{
"id": "I05-5004.5",
"char_start": 199,
"char_end": 215
},
{
"id": "I05-5004.6",
"char_start": 239,
"char_end": 269
},
{
"id": "I05-5004.7",
"char_start": 274,
"char_end": 290
},
{
"id": "I05-5004.8",
"char_start": 334,
"char_end": 351
},
{
"id": "I05-5004.9",
"char_start": 435,
"char_end": 450
},
{
"id": "I05-5004.10",
"char_start": 452,
"char_end": 466
},
{
"id": "I05-5004.11",
"char_start": 472,
"char_end": 483
}
] | [
{
"label": 1,
"arg1": "I05-5004.2",
"arg2": "I05-5004.3",
"reverse": false
},
{
"label": 4,
"arg1": "I05-5004.4",
"arg2": "I05-5004.5",
"reverse": false
}
] |
P81-1033 | A Construction-Specific Approach to Focused Interaction in Flexible Parsing
|
A flexible parser can deal with input that deviates from its grammar, in addition to input that conforms to it. Ideally, such a parser will correct the deviant input: sometimes, it will be unable to correct it at all; at other times, correction will be possible, but only to within a range of ambiguous possibilities. This paper is concerned with such ambiguous situations, and with making it as easy as possible for the ambiguity to be resolved through consultation with the user of the parser - we presume interactive use. We show the importance of asking the user for clarification in as focused a way as possible. Focused interaction of this kind is facilitated by a construction-specific approach to flexible parsing, with specialized parsing techniques for each type of construction, and specialized ambiguity representations for each type of ambiguity that a particular construction can give rise to. A construction-specific approach also aids in task-specific language development by allowing a language definition that is natural in terms of the task domain to be interpreted directly without compilation into a uniform grammar formalism, thus greatly speeding the testing of changes to the language definition.
| [
{
"id": "P81-1033.1",
"char_start": 3,
"char_end": 18
},
{
"id": "P81-1033.2",
"char_start": 62,
"char_end": 69
},
{
"id": "P81-1033.3",
"char_start": 129,
"char_end": 135
},
{
"id": "P81-1033.4",
"char_start": 235,
"char_end": 245
},
{
"id": "P81-1033.5",
"char_start": 422,
"char_end": 431
},
{
"id": "P81-1033.6",
"char_start": 489,
"char_end": 495
},
{
"id": "P81-1033.7",
"char_start": 619,
"char_end": 638
},
{
"id": "P81-1033.8",
"char_start": 672,
"char_end": 702
},
{
"id": "P81-1033.9",
"char_start": 706,
"char_end": 722
},
{
"id": "P81-1033.10",
"char_start": 729,
"char_end": 759
},
{
"id": "P81-1033.11",
"char_start": 777,
"char_end": 789
},
{
"id": "P81-1033.12",
"char_start": 807,
"char_end": 832
},
{
"id": "P81-1033.13",
"char_start": 850,
"char_end": 859
},
{
"id": "P81-1033.14",
"char_start": 878,
"char_end": 890
},
{
"id": "P81-1033.15",
"char_start": 911,
"char_end": 941
},
{
"id": "P81-1033.16",
"char_start": 955,
"char_end": 989
},
{
"id": "P81-1033.17",
"char_start": 1004,
"char_end": 1023
},
{
"id": "P81-1033.18",
"char_start": 1056,
"char_end": 1067
},
{
"id": "P81-1033.19",
"char_start": 1122,
"char_end": 1147
},
{
"id": "P81-1033.20",
"char_start": 1175,
"char_end": 1182
},
{
"id": "P81-1033.21",
"char_start": 1201,
"char_end": 1220
}
] | [
{
"label": 4,
"arg1": "P81-1033.1",
"arg2": "P81-1033.2",
"reverse": true
},
{
"label": 1,
"arg1": "P81-1033.8",
"arg2": "P81-1033.9",
"reverse": false
},
{
"label": 1,
"arg1": "P81-1033.10",
"arg2": "P81-1033.11",
"reverse": false
},
{
"label": 3,
"arg1": "P81-1033.12",
"arg2": "P81-1033.13",
"reverse": false
},
{
"label": 1,
"arg1": "P81-1033.15",
"arg2": "P81-1033.16",
"reverse": false
}
] |
P85-1019 | Semantic Caseframe Parsing and Syntactic Generality
|
We have implemented a restricted domain parser called Plume. Building on previous work at Carnegie-Mellon University e.g. [4, 5, 8], Plume's approach to parsing is based on semantic caseframe instantiation. This has the advantages of efficiency on grammatical input, and robustness in the face of ungrammatical input. While Plume is well adapted to simple declarative and imperative utterances, it handles passives, relative clauses and interrogatives in an ad hoc manner leading to patchy syntactic coverage. This paper outlines Plume as it currently exists and describes our detailed design for extending Plume to handle passives, relative clauses, and interrogatives in a general manner.
| [
{
"id": "P85-1019.1",
"char_start": 23,
"char_end": 47
},
{
"id": "P85-1019.2",
"char_start": 55,
"char_end": 60
},
{
"id": "P85-1019.3",
"char_start": 134,
"char_end": 161
},
{
"id": "P85-1019.4",
"char_start": 174,
"char_end": 206
},
{
"id": "P85-1019.5",
"char_start": 235,
"char_end": 245
},
{
"id": "P85-1019.6",
"char_start": 249,
"char_end": 266
},
{
"id": "P85-1019.7",
"char_start": 272,
"char_end": 282
},
{
"id": "P85-1019.8",
"char_start": 298,
"char_end": 317
},
{
"id": "P85-1019.9",
"char_start": 325,
"char_end": 330
},
{
"id": "P85-1019.10",
"char_start": 357,
"char_end": 394
},
{
"id": "P85-1019.11",
"char_start": 407,
"char_end": 415
},
{
"id": "P85-1019.12",
"char_start": 417,
"char_end": 433
},
{
"id": "P85-1019.13",
"char_start": 438,
"char_end": 452
},
{
"id": "P85-1019.14",
"char_start": 491,
"char_end": 509
},
{
"id": "P85-1019.15",
"char_start": 531,
"char_end": 536
},
{
"id": "P85-1019.16",
"char_start": 608,
"char_end": 613
},
{
"id": "P85-1019.17",
"char_start": 624,
"char_end": 632
},
{
"id": "P85-1019.18",
"char_start": 634,
"char_end": 650
},
{
"id": "P85-1019.19",
"char_start": 656,
"char_end": 670
}
] | [
{
"label": 1,
"arg1": "P85-1019.3",
"arg2": "P85-1019.4",
"reverse": true
},
{
"label": 2,
"arg1": "P85-1019.9",
"arg2": "P85-1019.14",
"reverse": false
}
] |
P91-1025 | Resolving Translation Mismatches With Information Flow
|
Languages differ in the concepts and real-world entities for which they have words and grammatical constructs. Therefore translation must sometimes be a matter of approximating the meaning of a source language text rather than finding an exact counterpart in the target language. We propose a translation framework based on Situation Theory. The basic ingredients are an information lattice, a representation scheme for utterances embedded in contexts, and a mismatch resolution scheme defined in terms of information flow. We motivate our approach with examples of translation between English and Japanese.
| [
{
"id": "P91-1025.1",
"char_start": 1,
"char_end": 10
},
{
"id": "P91-1025.2",
"char_start": 25,
"char_end": 33
},
{
"id": "P91-1025.3",
"char_start": 38,
"char_end": 57
},
{
"id": "P91-1025.4",
"char_start": 78,
"char_end": 83
},
{
"id": "P91-1025.5",
"char_start": 88,
"char_end": 110
},
{
"id": "P91-1025.6",
"char_start": 122,
"char_end": 133
},
{
"id": "P91-1025.7",
"char_start": 182,
"char_end": 189
},
{
"id": "P91-1025.8",
"char_start": 195,
"char_end": 215
},
{
"id": "P91-1025.9",
"char_start": 264,
"char_end": 279
},
{
"id": "P91-1025.10",
"char_start": 294,
"char_end": 315
},
{
"id": "P91-1025.11",
"char_start": 325,
"char_end": 341
},
{
"id": "P91-1025.12",
"char_start": 372,
"char_end": 391
},
{
"id": "P91-1025.13",
"char_start": 395,
"char_end": 416
},
{
"id": "P91-1025.14",
"char_start": 421,
"char_end": 431
},
{
"id": "P91-1025.15",
"char_start": 444,
"char_end": 452
},
{
"id": "P91-1025.16",
"char_start": 460,
"char_end": 486
},
{
"id": "P91-1025.17",
"char_start": 507,
"char_end": 523
},
{
"id": "P91-1025.18",
"char_start": 567,
"char_end": 578
},
{
"id": "P91-1025.19",
"char_start": 587,
"char_end": 594
},
{
"id": "P91-1025.20",
"char_start": 599,
"char_end": 607
}
] | [
{
"label": 4,
"arg1": "P91-1025.1",
"arg2": "P91-1025.4",
"reverse": true
},
{
"label": 3,
"arg1": "P91-1025.7",
"arg2": "P91-1025.8",
"reverse": false
},
{
"label": 1,
"arg1": "P91-1025.10",
"arg2": "P91-1025.11",
"reverse": true
},
{
"label": 3,
"arg1": "P91-1025.13",
"arg2": "P91-1025.14",
"reverse": false
},
{
"label": 3,
"arg1": "P91-1025.16",
"arg2": "P91-1025.17",
"reverse": true
},
{
"label": 1,
"arg1": "P91-1025.18",
"arg2": "P91-1025.19",
"reverse": false
}
] |
P95-1034 | Two-Level, Many-Paths Generation
|
Large-scale natural language generation requires the integration of vast amounts of knowledge: lexical, grammatical, and conceptual. A robust generator must be able to operate well even when pieces of knowledge are missing. It must also be robust against incomplete or inaccurate inputs. To attack these problems, we have built a hybrid generator, in which gaps in symbolic knowledge are filled by statistical methods. We describe algorithms and show experimental results. We also discuss how the hybrid generation model can be used to simplify current generators and enhance their portability, even when perfect knowledge is in principle obtainable.
| [
{
"id": "P95-1034.1",
"char_start": 1,
"char_end": 40
},
{
"id": "P95-1034.2",
"char_start": 85,
"char_end": 94
},
{
"id": "P95-1034.3",
"char_start": 136,
"char_end": 152
},
{
"id": "P95-1034.4",
"char_start": 202,
"char_end": 211
},
{
"id": "P95-1034.5",
"char_start": 256,
"char_end": 287
},
{
"id": "P95-1034.6",
"char_start": 331,
"char_end": 347
},
{
"id": "P95-1034.7",
"char_start": 366,
"char_end": 384
},
{
"id": "P95-1034.8",
"char_start": 399,
"char_end": 418
},
{
"id": "P95-1034.9",
"char_start": 498,
"char_end": 521
},
{
"id": "P95-1034.10",
"char_start": 554,
"char_end": 564
},
{
"id": "P95-1034.11",
"char_start": 583,
"char_end": 594
},
{
"id": "P95-1034.12",
"char_start": 614,
"char_end": 623
}
] | [
{
"label": 1,
"arg1": "P95-1034.1",
"arg2": "P95-1034.2",
"reverse": true
},
{
"label": 1,
"arg1": "P95-1034.3",
"arg2": "P95-1034.4",
"reverse": true
},
{
"label": 1,
"arg1": "P95-1034.6",
"arg2": "P95-1034.8",
"reverse": true
},
{
"label": 3,
"arg1": "P95-1034.10",
"arg2": "P95-1034.11",
"reverse": true
}
] |
P97-1017 | Machine Transliteration
|
It is challenging to translate names and technical terms across languages with different alphabets and sound inventories. These items are commonly transliterated, i.e., replaced with approximate phonetic equivalents. For example, computer in English comes out as コンピューター (konpyuutaa) in Japanese. Translating such items from Japanese back to English is even more challenging, and of practical interest, as transliterated items make up the bulk of text phrases not found in bilingual dictionaries. We describe and evaluate a method for performing backwards transliterations by machine. This method uses a generative model, incorporating several distinct stages in the transliteration process.
| [
{
"id": "P97-1017.1",
"char_start": 32,
"char_end": 37
},
{
"id": "P97-1017.2",
"char_start": 42,
"char_end": 57
},
{
"id": "P97-1017.3",
"char_start": 65,
"char_end": 74
},
{
"id": "P97-1017.4",
"char_start": 90,
"char_end": 99
},
{
"id": "P97-1017.5",
"char_start": 104,
"char_end": 121
},
{
"id": "P97-1017.6",
"char_start": 196,
"char_end": 216
},
{
"id": "P97-1017.7",
"char_start": 243,
"char_end": 250
},
{
"id": "P97-1017.8",
"char_start": 296,
"char_end": 304
},
{
"id": "P97-1017.9",
"char_start": 334,
"char_end": 342
},
{
"id": "P97-1017.10",
"char_start": 351,
"char_end": 358
},
{
"id": "P97-1017.11",
"char_start": 456,
"char_end": 468
},
{
"id": "P97-1017.12",
"char_start": 482,
"char_end": 504
},
{
"id": "P97-1017.13",
"char_start": 555,
"char_end": 581
},
{
"id": "P97-1017.14",
"char_start": 585,
"char_end": 592
},
{
"id": "P97-1017.15",
"char_start": 613,
"char_end": 629
},
{
"id": "P97-1017.16",
"char_start": 676,
"char_end": 699
}
] | [
{
"label": 3,
"arg1": "P97-1017.3",
"arg2": "P97-1017.4",
"reverse": false
},
{
"label": 6,
"arg1": "P97-1017.7",
"arg2": "P97-1017.8",
"reverse": false
},
{
"label": 1,
"arg1": "P97-1017.13",
"arg2": "P97-1017.14",
"reverse": true
},
{
"label": 1,
"arg1": "P97-1017.15",
"arg2": "P97-1017.16",
"reverse": false
}
] |
P97-1058 | Approximating Context-Free Grammars with a Finite-State Calculus
|
Although adequate models of human language for syntactic analysis and semantic interpretation are of at least context-free complexity, for applications such as speech processing in which speed is important finite-state models are often preferred. These requirements may be reconciled by using the more complex grammar to automatically derive a finite-state approximation which can then be used as a filter to guide speech recognition or to reject many hypotheses at an early stage of processing. A method is presented here for calculating such finite-state approximations from context-free grammars. It is essentially different from the algorithm introduced by Pereira and Wright (1991; 1996), is faster in some cases, and has the advantage of being open-ended and adaptable.
| [
{
"id": "P97-1058.1",
"char_start": 29,
"char_end": 43
},
{
"id": "P97-1058.2",
"char_start": 48,
"char_end": 66
},
{
"id": "P97-1058.3",
"char_start": 71,
"char_end": 94
},
{
"id": "P97-1058.4",
"char_start": 111,
"char_end": 134
},
{
"id": "P97-1058.5",
"char_start": 161,
"char_end": 178
},
{
"id": "P97-1058.6",
"char_start": 207,
"char_end": 226
},
{
"id": "P97-1058.7",
"char_start": 311,
"char_end": 318
},
{
"id": "P97-1058.8",
"char_start": 345,
"char_end": 371
},
{
"id": "P97-1058.9",
"char_start": 416,
"char_end": 434
},
{
"id": "P97-1058.10",
"char_start": 545,
"char_end": 572
},
{
"id": "P97-1058.11",
"char_start": 578,
"char_end": 599
}
] | [
{
"label": 1,
"arg1": "P97-1058.5",
"arg2": "P97-1058.6",
"reverse": true
},
{
"label": 1,
"arg1": "P97-1058.8",
"arg2": "P97-1058.9",
"reverse": false
}
] |
P99-1036 | A Part of Speech Estimation Method for Japanese Unknown Words using a Statistical Model of Morphology and Context
|
We present a statistical model of Japanese unknown words consisting of a set of length and spelling models classified by the character types that constitute a word. The point is quite simple: different character sets should be treated differently and the changes between character types are very important because Japanese script has both ideograms like Chinese (kanji) and phonograms like English (katakana). Both word segmentation accuracy and part of speech tagging accuracy are improved by the proposed model. The model can achieve 96.6% tagging accuracy if unknown words are correctly segmented.
| [
{
"id": "P99-1036.1",
"char_start": 14,
"char_end": 31
},
{
"id": "P99-1036.2",
"char_start": 35,
"char_end": 57
},
{
"id": "P99-1036.3",
"char_start": 81,
"char_end": 107
},
{
"id": "P99-1036.4",
"char_start": 126,
"char_end": 141
},
{
"id": "P99-1036.5",
"char_start": 160,
"char_end": 164
},
{
"id": "P99-1036.6",
"char_start": 203,
"char_end": 217
},
{
"id": "P99-1036.7",
"char_start": 272,
"char_end": 287
},
{
"id": "P99-1036.8",
"char_start": 315,
"char_end": 330
},
{
"id": "P99-1036.9",
"char_start": 340,
"char_end": 349
},
{
"id": "P99-1036.10",
"char_start": 355,
"char_end": 362
},
{
"id": "P99-1036.11",
"char_start": 364,
"char_end": 369
},
{
"id": "P99-1036.12",
"char_start": 375,
"char_end": 385
},
{
"id": "P99-1036.13",
"char_start": 391,
"char_end": 398
},
{
"id": "P99-1036.14",
"char_start": 400,
"char_end": 408
},
{
"id": "P99-1036.15",
"char_start": 416,
"char_end": 442
},
{
"id": "P99-1036.16",
"char_start": 447,
"char_end": 478
},
{
"id": "P99-1036.17",
"char_start": 543,
"char_end": 559
},
{
"id": "P99-1036.18",
"char_start": 563,
"char_end": 576
}
] | [
{
"label": 3,
"arg1": "P99-1036.1",
"arg2": "P99-1036.2",
"reverse": false
},
{
"label": 4,
"arg1": "P99-1036.4",
"arg2": "P99-1036.5",
"reverse": false
},
{
"label": 4,
"arg1": "P99-1036.8",
"arg2": "P99-1036.9",
"reverse": true
}
] |
P99-1080 | A Pylonic Decision-Tree Language Model with Optimal Question Selection
|
This paper discusses a decision-tree approach to the problem of assigning probabilities to words following a given text. In contrast with previous decision-tree language model attempts, an algorithm for selecting nearly optimal questions is considered. The model is to be tested on a standard task, The Wall Street Journal, allowing a fair comparison with the well-known tri-gram model.
| [
{
"id": "P99-1080.1",
"char_start": 24,
"char_end": 46
},
{
"id": "P99-1080.2",
"char_start": 75,
"char_end": 88
},
{
"id": "P99-1080.3",
"char_start": 92,
"char_end": 97
},
{
"id": "P99-1080.4",
"char_start": 116,
"char_end": 120
},
{
"id": "P99-1080.5",
"char_start": 148,
"char_end": 185
},
{
"id": "P99-1080.6",
"char_start": 214,
"char_end": 238
},
{
"id": "P99-1080.7",
"char_start": 300,
"char_end": 323
},
{
"id": "P99-1080.8",
"char_start": 372,
"char_end": 386
}
] | [
{
"label": 1,
"arg1": "P99-1080.1",
"arg2": "P99-1080.2",
"reverse": false
},
{
"label": 4,
"arg1": "P99-1080.3",
"arg2": "P99-1080.4",
"reverse": false
}
] |
E93-1066 | Two-Level Description Of Turkish Morphology
|
This poster paper describes a full scale two-level morphological description (Karttunen, 1983; Koskenniemi, 1983) of Turkish word structures. The description has been implemented using the PC-KIMMO environment (Antworth, 1990) and is based on a root word lexicon of about 23,000 root words. Almost all the special cases of and exceptions to phonological and morphological rules have been implemented. Turkish is an agglutinative language with word structures formed by productive affixations of derivational and inflectional suffixes to root words. Turkish has finite-state but nevertheless rather complex morphotactics. Morphemes added to a root word or a stem can convert the word from a nominal to a verbal structure or vice-versa, or can create adverbial constructs. The surface realizations of morphological constructions are constrained and modified by a number of phonetic rules such as vowel harmony.
| [
{
"id": "E93-1066.1",
"char_start": 31,
"char_end": 77
},
{
"id": "E93-1066.2",
"char_start": 118,
"char_end": 141
},
{
"id": "E93-1066.3",
"char_start": 190,
"char_end": 210
},
{
"id": "E93-1066.4",
"char_start": 246,
"char_end": 263
},
{
"id": "E93-1066.5",
"char_start": 280,
"char_end": 291
},
{
"id": "E93-1066.6",
"char_start": 343,
"char_end": 379
},
{
"id": "E93-1066.7",
"char_start": 403,
"char_end": 410
},
{
"id": "E93-1066.8",
"char_start": 417,
"char_end": 439
},
{
"id": "E93-1066.9",
"char_start": 445,
"char_end": 460
},
{
"id": "E93-1066.10",
"char_start": 471,
"char_end": 535
},
{
"id": "E93-1066.11",
"char_start": 539,
"char_end": 549
},
{
"id": "E93-1066.12",
"char_start": 551,
"char_end": 558
},
{
"id": "E93-1066.13",
"char_start": 563,
"char_end": 575
},
{
"id": "E93-1066.14",
"char_start": 623,
"char_end": 632
},
{
"id": "E93-1066.15",
"char_start": 644,
"char_end": 653
},
{
"id": "E93-1066.16",
"char_start": 659,
"char_end": 663
},
{
"id": "E93-1066.17",
"char_start": 680,
"char_end": 684
},
{
"id": "E93-1066.18",
"char_start": 692,
"char_end": 699
},
{
"id": "E93-1066.19",
"char_start": 705,
"char_end": 721
},
{
"id": "E93-1066.20",
"char_start": 751,
"char_end": 771
},
{
"id": "E93-1066.21",
"char_start": 777,
"char_end": 797
},
{
"id": "E93-1066.22",
"char_start": 801,
"char_end": 828
},
{
"id": "E93-1066.23",
"char_start": 873,
"char_end": 887
},
{
"id": "E93-1066.24",
"char_start": 896,
"char_end": 909
}
] | [
{
"label": 3,
"arg1": "E93-1066.1",
"arg2": "E93-1066.2",
"reverse": false
},
{
"label": 4,
"arg1": "E93-1066.4",
"arg2": "E93-1066.5",
"reverse": true
},
{
"label": 3,
"arg1": "E93-1066.7",
"arg2": "E93-1066.8",
"reverse": false
},
{
"label": 3,
"arg1": "E93-1066.21",
"arg2": "E93-1066.22",
"reverse": false
}
] |
X96-1041 | TUIT : A Toolkit For Constructing Multilingual TIPSTER User Interfaces
|
The TIPSTER Architecture has been designed to enable a variety of different text applications to use a set of common text processing modules. Since user interfaces work best when customized for particular applications, it is appropriate that no particular user interface styles or conventions are described in the TIPSTER Architecture specification. However, the Computing Research Laboratory (CRL) has constructed several TIPSTER applications that use a common set of configurable Graphical User Interface (GUI) functions. These GUIs were constructed using CRL's TIPSTER User Interface Toolkit (TUIT). TUIT is a software library that can be used to construct multilingual TIPSTER user interfaces for a set of common user tasks. CRL developed TUIT to support their work to integrate TIPSTER modules for the 6 and 12 month TIPSTER II demonstrations as well as their Oleada and Temple demonstration projects. This paper briefly describes TUIT and its capabilities.
| [
{
"id": "X96-1041.1",
"char_start": 5,
"char_end": 25
},
{
"id": "X96-1041.2",
"char_start": 77,
"char_end": 94
},
{
"id": "X96-1041.3",
"char_start": 111,
"char_end": 141
},
{
"id": "X96-1041.4",
"char_start": 149,
"char_end": 164
},
{
"id": "X96-1041.5",
"char_start": 195,
"char_end": 218
},
{
"id": "X96-1041.6",
"char_start": 259,
"char_end": 295
},
{
"id": "X96-1041.7",
"char_start": 317,
"char_end": 351
},
{
"id": "X96-1041.8",
"char_start": 366,
"char_end": 401
},
{
"id": "X96-1041.9",
"char_start": 426,
"char_end": 446
},
{
"id": "X96-1041.10",
"char_start": 485,
"char_end": 525
},
{
"id": "X96-1041.11",
"char_start": 533,
"char_end": 537
},
{
"id": "X96-1041.12",
"char_start": 561,
"char_end": 604
},
{
"id": "X96-1041.13",
"char_start": 606,
"char_end": 610
},
{
"id": "X96-1041.14",
"char_start": 616,
"char_end": 632
},
{
"id": "X96-1041.15",
"char_start": 663,
"char_end": 699
},
{
"id": "X96-1041.16",
"char_start": 732,
"char_end": 735
},
{
"id": "X96-1041.17",
"char_start": 746,
"char_end": 750
},
{
"id": "X96-1041.18",
"char_start": 786,
"char_end": 801
},
{
"id": "X96-1041.19",
"char_start": 939,
"char_end": 943
}
] | [
{
"label": 1,
"arg1": "X96-1041.2",
"arg2": "X96-1041.3",
"reverse": true
},
{
"label": 5,
"arg1": "X96-1041.6",
"arg2": "X96-1041.7",
"reverse": true
},
{
"label": 1,
"arg1": "X96-1041.9",
"arg2": "X96-1041.10",
"reverse": true
},
{
"label": 1,
"arg1": "X96-1041.11",
"arg2": "X96-1041.12",
"reverse": true
},
{
"label": 1,
"arg1": "X96-1041.14",
"arg2": "X96-1041.15",
"reverse": false
}
] |
P02-1008 | Phonological Comprehension And The Compilation Of Optimality Theory
|
This paper ties up some loose ends in finite-state Optimality Theory. First, it discusses how to perform comprehension under Optimality Theory grammars consisting of finite-state constraints. Comprehension has not been much studied in OT; we show that unlike production, it does not always yield a regular set, making finite-state methods inapplicable. However, after giving a suitably flexible presentation of OT, we show carefully how to treat comprehension under recent variants of OT in which grammars can be compiled into finite-state transducers. We then unify these variants, showing that compilation is possible if all components of the grammar are regular relations, including the harmony ordering on scored candidates.
| [
{
"id": "P02-1008.1",
"char_start": 39,
"char_end": 69
},
{
"id": "P02-1008.2",
"char_start": 106,
"char_end": 119
},
{
"id": "P02-1008.3",
"char_start": 126,
"char_end": 152
},
{
"id": "P02-1008.4",
"char_start": 167,
"char_end": 191
},
{
"id": "P02-1008.5",
"char_start": 193,
"char_end": 206
},
{
"id": "P02-1008.6",
"char_start": 236,
"char_end": 238
},
{
"id": "P02-1008.7",
"char_start": 260,
"char_end": 270
},
{
"id": "P02-1008.8",
"char_start": 319,
"char_end": 339
},
{
"id": "P02-1008.9",
"char_start": 412,
"char_end": 414
},
{
"id": "P02-1008.10",
"char_start": 447,
"char_end": 460
},
{
"id": "P02-1008.11",
"char_start": 474,
"char_end": 488
},
{
"id": "P02-1008.12",
"char_start": 497,
"char_end": 505
},
{
"id": "P02-1008.13",
"char_start": 527,
"char_end": 551
},
{
"id": "P02-1008.14",
"char_start": 596,
"char_end": 607
},
{
"id": "P02-1008.15",
"char_start": 645,
"char_end": 652
},
{
"id": "P02-1008.16",
"char_start": 657,
"char_end": 674
},
{
"id": "P02-1008.17",
"char_start": 690,
"char_end": 706
},
{
"id": "P02-1008.18",
"char_start": 710,
"char_end": 727
}
] | [] |
P98-2176 | Learning Correlations between Linguistic Indicators and Semantic Constraints : Reuse of Context-Dependent Descriptions of Entities
|
This paper presents the results of a study on the semantic constraints imposed on lexical choice by certain contextual indicators. We show how such indicators are computed and how correlations between them and the choice of a noun phrase description of a named entity can be automatically established using supervised learning. Based on this correlation, we have developed a technique for automatic lexical choice of descriptions of entities in text generation. We discuss the underlying relationship between the pragmatics of choosing an appropriate description that serves a specific purpose in the automatically generated text and the semantics of the description itself. We present our work in the framework of the more general concept of reuse of linguistic structures that are automatically extracted from large corpora. We present a formal evaluation of our approach and we conclude with some thoughts on potential applications of our method.
| [
{
"id": "P98-2176.1",
"char_start": 51,
"char_end": 71
},
{
"id": "P98-2176.2",
"char_start": 83,
"char_end": 97
},
{
"id": "P98-2176.3",
"char_start": 109,
"char_end": 130
},
{
"id": "P98-2176.4",
"char_start": 149,
"char_end": 159
},
{
"id": "P98-2176.5",
"char_start": 181,
"char_end": 193
},
{
"id": "P98-2176.6",
"char_start": 215,
"char_end": 250
},
{
"id": "P98-2176.7",
"char_start": 256,
"char_end": 268
},
{
"id": "P98-2176.8",
"char_start": 308,
"char_end": 327
},
{
"id": "P98-2176.9",
"char_start": 343,
"char_end": 354
},
{
"id": "P98-2176.10",
"char_start": 390,
"char_end": 414
},
{
"id": "P98-2176.11",
"char_start": 418,
"char_end": 430
},
{
"id": "P98-2176.12",
"char_start": 434,
"char_end": 442
},
{
"id": "P98-2176.13",
"char_start": 446,
"char_end": 461
},
{
"id": "P98-2176.14",
"char_start": 514,
"char_end": 524
},
{
"id": "P98-2176.15",
"char_start": 552,
"char_end": 563
},
{
"id": "P98-2176.16",
"char_start": 602,
"char_end": 630
},
{
"id": "P98-2176.17",
"char_start": 639,
"char_end": 648
},
{
"id": "P98-2176.18",
"char_start": 656,
"char_end": 667
},
{
"id": "P98-2176.19",
"char_start": 753,
"char_end": 774
},
{
"id": "P98-2176.20",
"char_start": 813,
"char_end": 826
}
] | [] |
H94-1024 | Evaluation In The ARPA Machine Translation Program : 1993 Methodology
|
In the second year of evaluations of the ARPA HLT Machine Translation (MT) Initiative, methodologies developed and tested in 1992 were applied to the 1993 MT test runs. The current methodology optimizes the inherently subjective judgments on translation accuracy and quality by channeling the judgments of non-translators into many data points which reflect both the comparison of the performance of the research MT systems with production MT systems and against the performance of novice translators. This paper discusses the three evaluation methods used in the 1993 evaluation, the results of the evaluations, and preliminary characterizations of the Winter 1994 evaluation, now underway. The efforts under discussion focus on measuring the progress of core MT technology and increasing the sensitivity and portability of MT evaluation methodology.
| [
{
"id": "H94-1024.1",
"char_start": 23,
"char_end": 34
},
{
"id": "H94-1024.2",
"char_start": 42,
"char_end": 86
},
{
"id": "H94-1024.3",
"char_start": 151,
"char_end": 168
},
{
"id": "H94-1024.4",
"char_start": 219,
"char_end": 239
},
{
"id": "H94-1024.5",
"char_start": 243,
"char_end": 275
},
{
"id": "H94-1024.6",
"char_start": 294,
"char_end": 303
},
{
"id": "H94-1024.7",
"char_start": 307,
"char_end": 322
},
{
"id": "H94-1024.8",
"char_start": 333,
"char_end": 344
},
{
"id": "H94-1024.9",
"char_start": 386,
"char_end": 397
},
{
"id": "H94-1024.10",
"char_start": 405,
"char_end": 424
},
{
"id": "H94-1024.11",
"char_start": 430,
"char_end": 451
},
{
"id": "H94-1024.12",
"char_start": 468,
"char_end": 479
},
{
"id": "H94-1024.13",
"char_start": 483,
"char_end": 501
},
{
"id": "H94-1024.14",
"char_start": 534,
"char_end": 552
},
{
"id": "H94-1024.15",
"char_start": 565,
"char_end": 580
},
{
"id": "H94-1024.16",
"char_start": 656,
"char_end": 679
},
{
"id": "H94-1024.17",
"char_start": 759,
"char_end": 777
},
{
"id": "H94-1024.18",
"char_start": 813,
"char_end": 824
},
{
"id": "H94-1024.19",
"char_start": 828,
"char_end": 853
}
] | [] |
C02-1071 | Integrating Shallow Linguistic Processing Into A Unification-Based Spanish Grammar
|
This paper describes to what extent deep processing may benefit from shallow techniques and it presents an NLP system which integrates a linguistic PoS tagger and chunker as a preprocessing module of a broad coverage unification based grammar of Spanish. Experiments show that the efficiency of the overall analysis improves significantly and that our system also provides robustness to the linguistic processing while maintaining both the accuracy and the precision of the grammar.
| [
{
"id": "C02-1071.1",
"char_start": 37,
"char_end": 52
},
{
"id": "C02-1071.2",
"char_start": 70,
"char_end": 88
},
{
"id": "C02-1071.3",
"char_start": 107,
"char_end": 117
},
{
"id": "C02-1071.4",
"char_start": 137,
"char_end": 170
},
{
"id": "C02-1071.5",
"char_start": 202,
"char_end": 253
},
{
"id": "C02-1071.6",
"char_start": 281,
"char_end": 291
},
{
"id": "C02-1071.7",
"char_start": 373,
"char_end": 383
},
{
"id": "C02-1071.8",
"char_start": 391,
"char_end": 412
},
{
"id": "C02-1071.9",
"char_start": 440,
"char_end": 448
},
{
"id": "C02-1071.10",
"char_start": 457,
"char_end": 466
},
{
"id": "C02-1071.11",
"char_start": 474,
"char_end": 481
}
] | [
{
"label": 1,
"arg1": "C02-1071.1",
"arg2": "C02-1071.2",
"reverse": true
},
{
"label": 4,
"arg1": "C02-1071.3",
"arg2": "C02-1071.4",
"reverse": true
},
{
"label": 3,
"arg1": "C02-1071.7",
"arg2": "C02-1071.8",
"reverse": false
},
{
"label": 2,
"arg1": "C02-1071.10",
"arg2": "C02-1071.11",
"reverse": true
}
] |
P06-2067 | Parsing And Subcategorization Data
|
In this paper, we compare the performance of a state-of-the-art statistical parser (Bikel, 2004) in parsing written and spoken language and in generating sub-categorization cues from written and spoken language. Although Bikel's parser achieves a higher accuracy for parsing written language, it achieves a higher accuracy when extracting subcategorization cues from spoken language. Our experiments also show that current technology for extracting subcategorization frames initially designed for written texts works equally well for spoken language. Additionally, we explore the utility of punctuation in helping parsing and extraction of subcategorization cues. Our experiments show that punctuation is of little help in parsing spoken language and extracting subcategorization cues from spoken language. This indicates that there is no need to add punctuation in transcribing spoken corpora simply in order to help parsers.
| [
{
"id": "P06-2067.1",
"char_start": 65,
"char_end": 83
},
{
"id": "P06-2067.2",
"char_start": 109,
"char_end": 136
},
{
"id": "P06-2067.3",
"char_start": 155,
"char_end": 178
},
{
"id": "P06-2067.4",
"char_start": 184,
"char_end": 211
},
{
"id": "P06-2067.5",
"char_start": 222,
"char_end": 236
},
{
"id": "P06-2067.6",
"char_start": 255,
"char_end": 263
},
{
"id": "P06-2067.7",
"char_start": 276,
"char_end": 292
},
{
"id": "P06-2067.8",
"char_start": 315,
"char_end": 323
},
{
"id": "P06-2067.9",
"char_start": 340,
"char_end": 362
},
{
"id": "P06-2067.10",
"char_start": 368,
"char_end": 383
},
{
"id": "P06-2067.11",
"char_start": 439,
"char_end": 474
},
{
"id": "P06-2067.12",
"char_start": 498,
"char_end": 511
},
{
"id": "P06-2067.13",
"char_start": 535,
"char_end": 550
},
{
"id": "P06-2067.14",
"char_start": 592,
"char_end": 603
},
{
"id": "P06-2067.15",
"char_start": 615,
"char_end": 622
},
{
"id": "P06-2067.16",
"char_start": 627,
"char_end": 637
},
{
"id": "P06-2067.17",
"char_start": 641,
"char_end": 663
},
{
"id": "P06-2067.18",
"char_start": 691,
"char_end": 702
},
{
"id": "P06-2067.19",
"char_start": 732,
"char_end": 747
},
{
"id": "P06-2067.20",
"char_start": 763,
"char_end": 785
},
{
"id": "P06-2067.21",
"char_start": 791,
"char_end": 806
},
{
"id": "P06-2067.22",
"char_start": 852,
"char_end": 863
},
{
"id": "P06-2067.23",
"char_start": 880,
"char_end": 894
},
{
"id": "P06-2067.24",
"char_start": 919,
"char_end": 926
}
] | [
{
"label": 1,
"arg1": "P06-2067.1",
"arg2": "P06-2067.2",
"reverse": false
},
{
"label": 2,
"arg1": "P06-2067.5",
"arg2": "P06-2067.6",
"reverse": false
},
{
"label": 1,
"arg1": "P06-2067.11",
"arg2": "P06-2067.12",
"reverse": false
},
{
"label": 1,
"arg1": "P06-2067.14",
"arg2": "P06-2067.15",
"reverse": false
},
{
"label": 4,
"arg1": "P06-2067.20",
"arg2": "P06-2067.21",
"reverse": false
},
{
"label": 4,
"arg1": "P06-2067.22",
"arg2": "P06-2067.23",
"reverse": false
}
] |
C94-1088 | Character-Based Collocation For Mandarin Chinese
|
This paper describes a character-based Chinese collocation system and discusses its advantages over a traditional word-based system. Since word breaks are not conventionally marked in Chinese text corpora, a character-based collocation system has the dual advantages of avoiding pre-processing distortion and directly accessing sub-lexical information. Furthermore, word-based collocational properties can be obtained through an auxiliary module of automatic segmentation.
| [
{
"id": "C94-1088.1",
"char_start": 24,
"char_end": 67
},
{
"id": "C94-1088.2",
"char_start": 122,
"char_end": 139
},
{
"id": "C94-1088.3",
"char_start": 147,
"char_end": 157
},
{
"id": "C94-1088.4",
"char_start": 191,
"char_end": 211
},
{
"id": "C94-1088.5",
"char_start": 215,
"char_end": 249
},
{
"id": "C94-1088.6",
"char_start": 286,
"char_end": 311
},
{
"id": "C94-1088.7",
"char_start": 335,
"char_end": 358
},
{
"id": "C94-1088.8",
"char_start": 373,
"char_end": 408
},
{
"id": "C94-1088.9",
"char_start": 456,
"char_end": 478
}
] | [
{
"label": 6,
"arg1": "C94-1088.1",
"arg2": "C94-1088.2",
"reverse": false
},
{
"label": 4,
"arg1": "C94-1088.3",
"arg2": "C94-1088.4",
"reverse": false
},
{
"label": 1,
"arg1": "C94-1088.8",
"arg2": "C94-1088.9",
"reverse": true
}
] |
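The character-based approach above lends itself to a direct co-occurrence computation. A minimal sketch, assuming a toy unsegmented corpus and using pointwise mutual information as a stand-in for whatever association measure the system actually uses: character pairs within a fixed window are counted without any word segmentation step.

import math
from collections import Counter

# Character-based collocation sketch: since the text needs no word
# segmentation, co-occurrence is counted directly over character pairs
# within a fixed window, then ranked by pointwise mutual information.
# The miniature "corpus" below is an assumption for illustration.
def char_collocations(corpus, window=2):
    uni, pair = Counter(), Counter()
    for i, c in enumerate(corpus):
        uni[c] += 1
        for d in corpus[i + 1:i + 1 + window]:
            pair[(c, d)] += 1
    n = sum(uni.values())
    def pmi(p):
        a, b = p
        # constant normalisation differences do not affect the ranking
        return math.log((pair[p] / n) / ((uni[a] / n) * (uni[b] / n)))
    return sorted(pair, key=pmi, reverse=True)

corpus = "电脑软件电脑硬件电脑程序"
print(char_collocations(corpus)[:3])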
C04-1024 | Efficient Parsing Of Highly Ambiguous Context-Free Grammars With Bit Vectors
|
An efficient bit-vector-based CKY-style parser for context-free parsing is presented. The parser computes a compact parse forest representation of the complete set of possible analyses for large treebank grammars and long input sentences. The parser uses bit-vector operations to parallelise the basic parsing operations. The parser is particularly useful when all analyses are needed rather than just the most probable one.
| [
{
"id": "C04-1024.1",
"char_start": 14,
"char_end": 47
},
{
"id": "C04-1024.2",
"char_start": 52,
"char_end": 72
},
{
"id": "C04-1024.3",
"char_start": 91,
"char_end": 97
},
{
"id": "C04-1024.4",
"char_start": 117,
"char_end": 144
},
{
"id": "C04-1024.5",
"char_start": 177,
"char_end": 213
},
{
"id": "C04-1024.6",
"char_start": 223,
"char_end": 238
},
{
"id": "C04-1024.7",
"char_start": 244,
"char_end": 250
},
{
"id": "C04-1024.8",
"char_start": 256,
"char_end": 277
},
{
"id": "C04-1024.9",
"char_start": 297,
"char_end": 321
},
{
"id": "C04-1024.10",
"char_start": 327,
"char_end": 333
}
] | [
{
"label": 1,
"arg1": "C04-1024.1",
"arg2": "C04-1024.2",
"reverse": false
},
{
"label": 3,
"arg1": "C04-1024.4",
"arg2": "C04-1024.5",
"reverse": false
},
{
"label": 1,
"arg1": "C04-1024.7",
"arg2": "C04-1024.8",
"reverse": true
}
] |
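The core idea, chart cells as bit vectors so that membership tests become bitwise operations, can be sketched as follows. The toy grammar, lexicon, and recogniser below are illustrative assumptions, not the authors' implementation.

# Sketch of CKY recognition where each chart cell is an integer bitmask
# of admissible nonterminals; combining two cells tests rule children
# with bitwise AND instead of set membership.
RULES = [("S", "NP", "VP"), ("NP", "Det", "N"), ("VP", "V", "NP")]
LEX = {"the": "Det", "dog": "N", "saw": "V", "cat": "N"}
SYMS = sorted({s for r in RULES for s in r})
BIT = {s: 1 << i for i, s in enumerate(SYMS)}

def cky_recognise(words):
    n = len(words)
    chart = [[0] * (n + 1) for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = BIT[LEX[w]]
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            cell = 0
            for k in range(i + 1, j):
                left, right = chart[i][k], chart[k][j]
                for lhs, b, c in RULES:
                    # bitwise test replaces set membership for both children
                    if (left & BIT[b]) and (right & BIT[c]):
                        cell |= BIT[lhs]
            chart[i][j] = cell
    return bool(chart[0][n] & BIT["S"])

print(cky_recognise("the dog saw the cat".split()))  # True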
N04-1008 | Automatic Question Answering : Beyond The Factoid
|
In this paper we describe and evaluate a Question Answering system that goes beyond answering factoid questions. We focus on FAQ-like questions and answers, and build our system around a noisy-channel architecture which exploits both a language model for answers and a transformation model for answer/question terms, trained on a corpus of 1 million question/answer pairs collected from the Web.
| [
{
"id": "N04-1008.1",
"char_start": 42,
"char_end": 67
},
{
"id": "N04-1008.2",
"char_start": 126,
"char_end": 156
},
{
"id": "N04-1008.3",
"char_start": 189,
"char_end": 215
},
{
"id": "N04-1008.4",
"char_start": 238,
"char_end": 252
},
{
"id": "N04-1008.5",
"char_start": 257,
"char_end": 264
},
{
"id": "N04-1008.6",
"char_start": 271,
"char_end": 291
},
{
"id": "N04-1008.7",
"char_start": 296,
"char_end": 317
},
{
"id": "N04-1008.8",
"char_start": 332,
"char_end": 338
},
{
"id": "N04-1008.9",
"char_start": 352,
"char_end": 373
}
] | [
{
"label": 1,
"arg1": "N04-1008.4",
"arg2": "N04-1008.5",
"reverse": false
},
{
"label": 1,
"arg1": "N04-1008.6",
"arg2": "N04-1008.7",
"reverse": false
},
{
"label": 4,
"arg1": "N04-1008.8",
"arg2": "N04-1008.9",
"reverse": true
}
] |
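The noisy-channel setup amounts to ranking candidate answers a for a question q by P(a) * P(q | a): an answer language model times a question-given-answer transformation model. A minimal sketch with toy unigram models standing in for the trained components:

import math

# Noisy-channel answer ranking sketch: score(a | q) = log P(a) + log P(q | a).
# Toy unigram tables below are assumptions, not the paper's trained models.
def lm_logprob(answer, unigram):
    return sum(math.log(unigram.get(w, 1e-6)) for w in answer.split())

def channel_logprob(question, answer, xlate):
    # crude term-translation model: average translation mass over answer terms
    total = 0.0
    a_terms = answer.split()
    for q in question.split():
        p = sum(xlate.get((q, a), 1e-6) for a in a_terms) / len(a_terms)
        total += math.log(p)
    return total

def rank(question, candidates, unigram, xlate):
    scored = [(lm_logprob(a, unigram) + channel_logprob(question, a, xlate), a)
              for a in candidates]
    return max(scored)

unigram = {"paris": 0.02, "is": 0.1, "the": 0.12, "capital": 0.01,
           "of": 0.1, "france": 0.02, "berlin": 0.02}
xlate = {("capital", "capital"): 0.5, ("france", "france"): 0.5,
         ("what", "paris"): 0.05}
print(rank("what is the capital of france",
           ["paris is the capital of france", "berlin is the capital"],
           unigram, xlate))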
P05-1046 |
Unsupervised Learning Of Field Segmentation Models For Information Extraction |
The applicability of many current information extraction techniques is severely limited by the need for supervised training data.
We demonstrate that for certain field structured extraction tasks, such as classified advertisements and bibliographic citations, small amounts of prior knowledge can be used to learn effective models in a primarily unsupervised fashion. Although hidden Markov models (HMMs) provide a suitable generative model for field structured text, general unsupervised HMM learning fails to learn useful structure in either of our domains. However, one can dramatically improve the quality of the learned structure by exploiting simple prior knowledge of the desired solutions. In both domains, we found that unsupervised methods can attain accuracies with 400 unlabeled examples comparable to those attained by supervised methods on 50 labeled examples, and that semi-supervised methods can make good use of small amounts of labeled data.
| [
{
"id": "P05-1046.1",
"char_start": 35,
"char_end": 68
},
{
"id": "P05-1046.2",
"char_start": 105,
"char_end": 129
},
{
"id": "P05-1046.3",
"char_start": 164,
"char_end": 197
},
{
"id": "P05-1046.4",
"char_start": 279,
"char_end": 294
},
{
"id": "P05-1046.5",
"char_start": 379,
"char_end": 406
},
{
"id": "P05-1046.6",
"char_start": 426,
"char_end": 442
},
{
"id": "P05-1046.7",
"char_start": 447,
"char_end": 468
},
{
"id": "P05-1046.8",
"char_start": 478,
"char_end": 503
},
{
"id": "P05-1046.9",
"char_start": 658,
"char_end": 673
},
{
"id": "P05-1046.10",
"char_start": 731,
"char_end": 751
},
{
"id": "P05-1046.11",
"char_start": 763,
"char_end": 773
},
{
"id": "P05-1046.12",
"char_start": 783,
"char_end": 801
},
{
"id": "P05-1046.13",
"char_start": 834,
"char_end": 852
},
{
"id": "P05-1046.14",
"char_start": 859,
"char_end": 875
},
{
"id": "P05-1046.15",
"char_start": 886,
"char_end": 909
},
{
"id": "P05-1046.16",
"char_start": 948,
"char_end": 960
}
] | [
{
"label": 1,
"arg1": "P05-1046.1",
"arg2": "P05-1046.2",
"reverse": true
},
{
"label": 1,
"arg1": "P05-1046.3",
"arg2": "P05-1046.4",
"reverse": true
},
{
"label": 3,
"arg1": "P05-1046.6",
"arg2": "P05-1046.7",
"reverse": false
},
{
"label": 2,
"arg1": "P05-1046.10",
"arg2": "P05-1046.11",
"reverse": false
},
{
"label": 1,
"arg1": "P05-1046.15",
"arg2": "P05-1046.16",
"reverse": true
}
] |
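Field segmentation with an HMM reduces to decoding one field label per token. A minimal Viterbi sketch over an assumed toy citation domain (the states, transition and emission tables below are invented for illustration; the paper learns such parameters largely unsupervised):

import math

# Viterbi decoding sketch for HMM field segmentation: states are fields
# of a citation record, observations are coarse word classes.
STATES = ["AUTHOR", "TITLE", "YEAR"]
trans = {"AUTHOR": {"AUTHOR": 0.6, "TITLE": 0.35, "YEAR": 0.05},
         "TITLE":  {"AUTHOR": 0.05, "TITLE": 0.75, "YEAR": 0.2},
         "YEAR":   {"AUTHOR": 0.3, "TITLE": 0.3, "YEAR": 0.4}}
emit = {"AUTHOR": {"cap": 0.7, "word": 0.2, "num": 0.1},
        "TITLE":  {"cap": 0.3, "word": 0.6, "num": 0.1},
        "YEAR":   {"cap": 0.05, "word": 0.05, "num": 0.9}}

def word_class(tok):
    if tok.isdigit():
        return "num"
    return "cap" if tok[0].isupper() else "word"

def viterbi(tokens):
    obs = [word_class(t) for t in tokens]
    v = [{s: math.log(1.0 / len(STATES)) + math.log(emit[s][obs[0]])
          for s in STATES}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in STATES:
            best = max(STATES, key=lambda p: v[-1][p] + math.log(trans[p][s]))
            col[s] = v[-1][best] + math.log(trans[best][s]) + math.log(emit[s][o])
            ptr[s] = best
        v.append(col)
        back.append(ptr)
    path = [max(STATES, key=lambda s: v[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(zip(tokens, reversed(path)))

print(viterbi("Smith John Parsing with Trees 1999".split()))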
P05-1073 |
Joint Learning Improves Semantic Role Labeling |
Despite much recent progress on accurate semantic role labeling, previous work has largely used independent classifiers, possibly combined with separate label sequence models via Viterbi decoding. This stands in stark contrast to the linguistic observation that a core argument frame is a joint structure, with strong dependencies between arguments. We show how to build a joint model of argument frames, incorporating novel features that model these interactions into discriminative log-linear models. This system achieves an error reduction of 22% on all arguments and 32% on core arguments over a state-of-the-art independent classifier for gold-standard parse trees on PropBank.
| [
{
"id": "P05-1073.1",
"char_start": 42,
"char_end": 64
},
{
"id": "P05-1073.2",
"char_start": 97,
"char_end": 120
},
{
"id": "P05-1073.3",
"char_start": 154,
"char_end": 175
},
{
"id": "P05-1073.4",
"char_start": 180,
"char_end": 196
},
{
"id": "P05-1073.5",
"char_start": 265,
"char_end": 284
},
{
"id": "P05-1073.6",
"char_start": 319,
"char_end": 331
},
{
"id": "P05-1073.7",
"char_start": 340,
"char_end": 349
},
{
"id": "P05-1073.8",
"char_start": 374,
"char_end": 385
},
{
"id": "P05-1073.9",
"char_start": 389,
"char_end": 404
},
{
"id": "P05-1073.10",
"char_start": 426,
"char_end": 434
},
{
"id": "P05-1073.11",
"char_start": 470,
"char_end": 502
},
{
"id": "P05-1073.12",
"char_start": 528,
"char_end": 543
},
{
"id": "P05-1073.13",
"char_start": 558,
"char_end": 567
},
{
"id": "P05-1073.14",
"char_start": 579,
"char_end": 593
},
{
"id": "P05-1073.15",
"char_start": 630,
"char_end": 640
},
{
"id": "P05-1073.16",
"char_start": 645,
"char_end": 670
},
{
"id": "P05-1073.17",
"char_start": 674,
"char_end": 682
}
] | [
{
"label": 1,
"arg1": "P05-1073.1",
"arg2": "P05-1073.2",
"reverse": true
},
{
"label": 3,
"arg1": "P05-1073.6",
"arg2": "P05-1073.7",
"reverse": false
},
{
"label": 3,
"arg1": "P05-1073.8",
"arg2": "P05-1073.9",
"reverse": false
},
{
"label": 4,
"arg1": "P05-1073.10",
"arg2": "P05-1073.11",
"reverse": false
},
{
"label": 4,
"arg1": "P05-1073.16",
"arg2": "P05-1073.17",
"reverse": false
}
] |
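The joint idea can be sketched as a log-linear score over a whole label assignment, with pairwise features that see interactions between arguments (for example, penalising duplicate core arguments). Labels, feature templates, and weights below are toy assumptions, and exhaustive enumeration stands in for real inference:

import itertools

# Joint log-linear scoring sketch: instead of labeling each argument
# independently, score whole assignments so features can capture
# interactions such as "no duplicate ARG0 within one frame".
LABELS = ["ARG0", "ARG1", "ARGM-TMP", "NONE"]

def features(labels, local_scores):
    feats = {}
    for i, l in enumerate(labels):                  # local evidence
        feats[("local", i, l)] = local_scores[i][l]
    for a, b in itertools.combinations(labels, 2):  # joint interactions
        feats[("pair", a, b)] = 1.0
    return feats

def score(labels, local_scores, w):
    return sum(w.get(k, 0.0) * v
               for k, v in features(labels, local_scores).items())

def best_frame(local_scores, w):
    n = len(local_scores)
    return max(itertools.product(LABELS, repeat=n),
               key=lambda ls: score(ls, local_scores, w))

local = [{"ARG0": 2.0, "ARG1": 1.8, "ARGM-TMP": 0.1, "NONE": 0.0},
         {"ARG0": 0.9, "ARG1": 1.0, "ARGM-TMP": 0.2, "NONE": 0.0}]
w = {("local", 0, "ARG0"): 1.0, ("local", 0, "ARG1"): 1.0,
     ("local", 1, "ARG0"): 1.0, ("local", 1, "ARG1"): 1.0,
     ("pair", "ARG0", "ARG0"): -5.0}   # penalise duplicate core arguments
print(best_frame(local, w))            # ('ARG0', 'ARG1')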
P05-3030 |
Organizing English Reading Materials For Vocabulary Learning |
We propose a method of organizing reading materials for vocabulary learning. It enables us to select a concise set of reading texts (from a target corpus) that contains all the target vocabulary to be learned. We used a specialized vocabulary for an English certification test as the target vocabulary and used English Wikipedia, a free-content encyclopedia, as the target corpus. The organized reading materials would enable learners not only to study the target vocabulary efficiently but also to gain a variety of knowledge through reading. The reading materials are available on our web site.
| [
{
"id": "P05-3030.1",
"char_start": 57,
"char_end": 76
},
{
"id": "P05-3030.2",
"char_start": 127,
"char_end": 132
},
{
"id": "P05-3030.3",
"char_start": 141,
"char_end": 154
},
{
"id": "P05-3030.4",
"char_start": 178,
"char_end": 195
},
{
"id": "P05-3030.5",
"char_start": 233,
"char_end": 243
},
{
"id": "P05-3030.6",
"char_start": 285,
"char_end": 302
},
{
"id": "P05-3030.7",
"char_start": 312,
"char_end": 329
},
{
"id": "P05-3030.8",
"char_start": 367,
"char_end": 380
},
{
"id": "P05-3030.9",
"char_start": 458,
"char_end": 475
}
] | [
{
"label": 4,
"arg1": "P05-3030.2",
"arg2": "P05-3030.3",
"reverse": false
},
{
"label": 1,
"arg1": "P05-3030.5",
"arg2": "P05-3030.6",
"reverse": false
},
{
"label": 1,
"arg1": "P05-3030.7",
"arg2": "P05-3030.8",
"reverse": false
}
] |
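Selecting a concise set of texts that jointly contain all target vocabulary is an instance of set cover, for which the standard greedy approximation gives a reasonable sketch. The texts and target words below are toy assumptions:

# Greedy set-cover sketch for picking a concise set of reading texts
# that together contain every target vocabulary item.
def select_texts(texts, target_vocab):
    remaining = set(target_vocab)
    chosen = []
    while remaining:
        # pick the text covering the most still-uncovered target words
        best = max(texts, key=lambda t: len(remaining & set(t.split())))
        gain = remaining & set(best.split())
        if not gain:
            break  # some target words occur in no text
        chosen.append(best)
        remaining -= gain
    return chosen, remaining

texts = ["the treaty was ratified by parliament",
         "the volcano erupted and lava flowed",
         "parliament debated the volcano relief bill"]
target = {"treaty", "ratified", "volcano", "lava", "parliament"}
picked, uncovered = select_texts(texts, target)
print(picked, uncovered)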
E83-1029 | NATURAL LANGUAGE INPUT FOR SCENE GENERATION
|
In this paper, a system which understands and conceptualizes scene descriptions in natural language is presented. Specifically, the following components of the system are described: the syntactic analyzer, based on a Procedural Systemic Grammar, the semantic analyzer relying on the Conceptual Dependency Theory, and the dictionary.
| [
{
"id": "E83-1029.1",
"char_start": 61,
"char_end": 100
},
{
"id": "E83-1029.2",
"char_start": 187,
"char_end": 205
},
{
"id": "E83-1029.3",
"char_start": 218,
"char_end": 245
},
{
"id": "E83-1029.4",
"char_start": 251,
"char_end": 268
},
{
"id": "E83-1029.5",
"char_start": 284,
"char_end": 312
},
{
"id": "E83-1029.6",
"char_start": 322,
"char_end": 332
}
] | [
{
"label": 1,
"arg1": "E83-1029.2",
"arg2": "E83-1029.3",
"reverse": true
},
{
"label": 1,
"arg1": "E83-1029.4",
"arg2": "E83-1029.5",
"reverse": true
}
] |
E89-1006 | TENSES AS ANAPHORA
|
A proposal to deal with French tenses in the framework of Discourse Representation Theory is presented, as it has been implemented for a fragment at the IMS. It is based on the theory of tenses of H. Kamp and Ch. Rohrer. Instead of using operators to express the meaning of the tenses, the Reichenbachian point of view is adopted and refined such that the impact of the tenses with respect to the meaning of the text is understood as a contribution to the integration of the events of a sentence in the event structure of the preceding text. Thereby, a system of relevant times provided by the preceding text and by the temporal adverbials of the sentence being processed is used. This system consists of one or more reference times and temporal perspective times, the speech time and the location time. The special interest of our proposal is to establish a plausible choice of anchors for the new event out of the system of relevant times and to update this system of temporal coordinates correctly. The problem of choice is largely neglected in the literature. In opposition to the approach of Kamp and Rohrer, the exact meaning of the tenses is fixed by the resolution component and not in the process of syntactic analysis.
| [
{
"id": "E89-1006.1",
"char_start": 25,
"char_end": 38
},
{
"id": "E89-1006.2",
"char_start": 59,
"char_end": 90
},
{
"id": "E89-1006.3",
"char_start": 154,
"char_end": 157
},
{
"id": "E89-1006.4",
"char_start": 178,
"char_end": 194
},
{
"id": "E89-1006.5",
"char_start": 239,
"char_end": 248
},
{
"id": "E89-1006.6",
"char_start": 264,
"char_end": 271
},
{
"id": "E89-1006.7",
"char_start": 279,
"char_end": 285
},
{
"id": "E89-1006.8",
"char_start": 370,
"char_end": 376
},
{
"id": "E89-1006.9",
"char_start": 397,
"char_end": 404
},
{
"id": "E89-1006.10",
"char_start": 412,
"char_end": 416
},
{
"id": "E89-1006.11",
"char_start": 473,
"char_end": 479
},
{
"id": "E89-1006.12",
"char_start": 485,
"char_end": 493
},
{
"id": "E89-1006.13",
"char_start": 501,
"char_end": 516
},
{
"id": "E89-1006.14",
"char_start": 535,
"char_end": 539
},
{
"id": "E89-1006.15",
"char_start": 551,
"char_end": 575
},
{
"id": "E89-1006.16",
"char_start": 603,
"char_end": 607
},
{
"id": "E89-1006.17",
"char_start": 619,
"char_end": 638
},
{
"id": "E89-1006.18",
"char_start": 646,
"char_end": 654
},
{
"id": "E89-1006.19",
"char_start": 716,
"char_end": 731
},
{
"id": "E89-1006.20",
"char_start": 736,
"char_end": 762
},
{
"id": "E89-1006.21",
"char_start": 768,
"char_end": 779
},
{
"id": "E89-1006.22",
"char_start": 788,
"char_end": 801
},
{
"id": "E89-1006.23",
"char_start": 894,
"char_end": 903
},
{
"id": "E89-1006.24",
"char_start": 915,
"char_end": 939
},
{
"id": "E89-1006.25",
"char_start": 959,
"char_end": 989
},
{
"id": "E89-1006.26",
"char_start": 1122,
"char_end": 1129
},
{
"id": "E89-1006.27",
"char_start": 1137,
"char_end": 1143
},
{
"id": "E89-1006.28",
"char_start": 1160,
"char_end": 1180
},
{
"id": "E89-1006.29",
"char_start": 1207,
"char_end": 1225
}
] | [
{
"label": 3,
"arg1": "E89-1006.1",
"arg2": "E89-1006.2",
"reverse": true
},
{
"label": 3,
"arg1": "E89-1006.6",
"arg2": "E89-1006.7",
"reverse": false
},
{
"label": 3,
"arg1": "E89-1006.9",
"arg2": "E89-1006.10",
"reverse": false
},
{
"label": 4,
"arg1": "E89-1006.11",
"arg2": "E89-1006.12",
"reverse": false
},
{
"label": 3,
"arg1": "E89-1006.13",
"arg2": "E89-1006.14",
"reverse": false
},
{
"label": 4,
"arg1": "E89-1006.17",
"arg2": "E89-1006.18",
"reverse": false
},
{
"label": 3,
"arg1": "E89-1006.26",
"arg2": "E89-1006.27",
"reverse": false
},
{
"label": 6,
"arg1": "E89-1006.28",
"arg2": "E89-1006.29",
"reverse": false
}
] |
E93-1004 |
Talking About Trees |
In this paper we introduce a modal language LT for imposing constraints on trees, and an extension LT (LF) for imposing constraints on trees decorated with feature structures. The motivation for introducing these languages is to provide tools for formalising grammatical frameworks perspicuously, and the paper illustrates this by showing how the leading ideas of GPSG can be captured in LT (LF). In addition, the role of modal languages (and in particular, of what we have called constraint formalisms) for linguistic theorising is discussed in some detail.
| [
{
"id": "E93-1004.1",
"char_start": 30,
"char_end": 47
},
{
"id": "E93-1004.2",
"char_start": 60,
"char_end": 71
},
{
"id": "E93-1004.3",
"char_start": 75,
"char_end": 80
},
{
"id": "E93-1004.4",
"char_start": 99,
"char_end": 106
},
{
"id": "E93-1004.5",
"char_start": 120,
"char_end": 131
},
{
"id": "E93-1004.6",
"char_start": 135,
"char_end": 174
},
{
"id": "E93-1004.7",
"char_start": 213,
"char_end": 222
},
{
"id": "E93-1004.8",
"char_start": 259,
"char_end": 281
},
{
"id": "E93-1004.9",
"char_start": 364,
"char_end": 368
},
{
"id": "E93-1004.10",
"char_start": 388,
"char_end": 395
},
{
"id": "E93-1004.11",
"char_start": 422,
"char_end": 437
},
{
"id": "E93-1004.12",
"char_start": 481,
"char_end": 502
}
] | [
{
"label": 3,
"arg1": "E93-1004.7",
"arg2": "E93-1004.8",
"reverse": false
},
{
"label": 6,
"arg1": "E93-1004.9",
"arg2": "E93-1004.10",
"reverse": false
}
] |
E99-1029 | Parsing with an Extended Domain of Locality
|
One of the claimed benefits of Tree Adjoining Grammars is that they have an extended domain of locality (EDOL). We consider how this can be exploited to limit the need for feature structure unification during parsing. We compare two wide-coverage lexicalized grammars of English, LEXSYS and XTAG, finding that the two grammars exploit EDOL in different ways.
| [
{
"id": "E99-1029.1",
"char_start": 32,
"char_end": 55
},
{
"id": "E99-1029.2",
"char_start": 77,
"char_end": 111
},
{
"id": "E99-1029.3",
"char_start": 173,
"char_end": 202
},
{
"id": "E99-1029.4",
"char_start": 210,
"char_end": 217
},
{
"id": "E99-1029.5",
"char_start": 248,
"char_end": 279
},
{
"id": "E99-1029.6",
"char_start": 281,
"char_end": 287
},
{
"id": "E99-1029.7",
"char_start": 292,
"char_end": 296
},
{
"id": "E99-1029.8",
"char_start": 319,
"char_end": 327
},
{
"id": "E99-1029.9",
"char_start": 336,
"char_end": 340
}
] | [
{
"label": 3,
"arg1": "E99-1029.1",
"arg2": "E99-1029.2",
"reverse": false
},
{
"label": 1,
"arg1": "E99-1029.3",
"arg2": "E99-1029.4",
"reverse": false
},
{
"label": 6,
"arg1": "E99-1029.6",
"arg2": "E99-1029.7",
"reverse": false
},
{
"label": 1,
"arg1": "E99-1029.8",
"arg2": "E99-1029.9",
"reverse": true
}
] |
E95-1033 |
ParseTalk About Sentence- And Text-Level Anaphora |
We provide a unified account of sentence-level and text-level anaphora within the framework of a dependency-based grammar model. Criteria for anaphora resolution within sentence boundaries rephrase major concepts from GB's binding theory, while those for text-level anaphora incorporate an adapted version of a Grosz-Sidner-style focus model.
| [
{
"id": "E95-1033.1",
"char_start": 33,
"char_end": 71
},
{
"id": "E95-1033.2",
"char_start": 98,
"char_end": 128
},
{
"id": "E95-1033.3",
"char_start": 143,
"char_end": 162
},
{
"id": "E95-1033.4",
"char_start": 170,
"char_end": 189
},
{
"id": "E95-1033.5",
"char_start": 219,
"char_end": 238
},
{
"id": "E95-1033.6",
"char_start": 256,
"char_end": 275
},
{
"id": "E95-1033.7",
"char_start": 312,
"char_end": 342
}
] | [
{
"label": 3,
"arg1": "E95-1033.1",
"arg2": "E95-1033.2",
"reverse": true
},
{
"label": 3,
"arg1": "E95-1033.6",
"arg2": "E95-1033.7",
"reverse": true
}
] |
H89-1027 |
The MIT Summit Speech Recognition System : A Progress Report |
Recently, we initiated a project to develop a phonetically-based spoken language understanding system called SUMMIT. In contrast to many of the past efforts that make use of heuristic rules whose development requires intense knowledge engineering, our approach attempts to express the speech knowledge within a formal framework using well-defined mathematical tools. In our system, features and decision strategies are discovered and trained automatically, using a large body of speech data. This paper describes the system, and documents its current performance.
| [
{
"id": "H89-1027.1",
"char_start": 47,
"char_end": 102
},
{
"id": "H89-1027.2",
"char_start": 110,
"char_end": 116
},
{
"id": "H89-1027.3",
"char_start": 175,
"char_end": 190
},
{
"id": "H89-1027.4",
"char_start": 226,
"char_end": 247
},
{
"id": "H89-1027.5",
"char_start": 286,
"char_end": 302
},
{
"id": "H89-1027.6",
"char_start": 383,
"char_end": 391
},
{
"id": "H89-1027.7",
"char_start": 396,
"char_end": 415
},
{
"id": "H89-1027.8",
"char_start": 480,
"char_end": 491
}
] | [
{
"label": 1,
"arg1": "H89-1027.3",
"arg2": "H89-1027.4",
"reverse": true
},
{
"label": 1,
"arg1": "H89-1027.7",
"arg2": "H89-1027.8",
"reverse": true
}
] |
H91-1077 |
A Proposal For Lexical Disambiguation |
A method of sense resolution is proposed that is based on WordNet, an on-line lexical database that incorporates semantic relations (synonymy, antonymy, hyponymy, meronymy, causal and troponymic entailment) as labeled pointers between word senses. With WordNet, it is easy to retrieve sets of semantically related words, a facility that will be used for sense resolution during text processing, as follows. When a word with multiple senses is encountered, one of two procedures will be followed. Either, (1) words related in meaning to the alternative senses of the polysemous word will be retrieved; new strings will be derived by substituting these related words into the context of the polysemous word; a large textual corpus will then be searched for these derived strings; and that sense will be chosen that corresponds to the derived string that is found most often in the corpus. Or, (2) the context of the polysemous word will be used as a key to search a large corpus; all words found to occur in that context will be noted; WordNet will then be used to estimate the semantic distance from those words to the alternative senses of the polysemous word; and that sense will be chosen that is closest in meaning to other words occurring in the same context. If successful, this procedure could have practical applications to problems of information retrieval, mechanical translation, intelligent tutoring systems, and elsewhere.
| [
{
"id": "H91-1077.1",
"char_start": 13,
"char_end": 29
},
{
"id": "H91-1077.2",
"char_start": 59,
"char_end": 66
},
{
"id": "H91-1077.3",
"char_start": 79,
"char_end": 95
},
{
"id": "H91-1077.4",
"char_start": 114,
"char_end": 132
},
{
"id": "H91-1077.5",
"char_start": 134,
"char_end": 142
},
{
"id": "H91-1077.6",
"char_start": 144,
"char_end": 152
},
{
"id": "H91-1077.7",
"char_start": 154,
"char_end": 162
},
{
"id": "H91-1077.8",
"char_start": 164,
"char_end": 172
},
{
"id": "H91-1077.9",
"char_start": 174,
"char_end": 206
},
{
"id": "H91-1077.10",
"char_start": 211,
"char_end": 227
},
{
"id": "H91-1077.11",
"char_start": 236,
"char_end": 247
},
{
"id": "H91-1077.12",
"char_start": 254,
"char_end": 261
},
{
"id": "H91-1077.13",
"char_start": 294,
"char_end": 320
},
{
"id": "H91-1077.14",
"char_start": 355,
"char_end": 371
},
{
"id": "H91-1077.15",
"char_start": 379,
"char_end": 394
},
{
"id": "H91-1077.16",
"char_start": 415,
"char_end": 419
},
{
"id": "H91-1077.17",
"char_start": 434,
"char_end": 440
},
{
"id": "H91-1077.18",
"char_start": 509,
"char_end": 514
},
{
"id": "H91-1077.19",
"char_start": 526,
"char_end": 533
},
{
"id": "H91-1077.20",
"char_start": 541,
"char_end": 559
},
{
"id": "H91-1077.21",
"char_start": 567,
"char_end": 582
},
{
"id": "H91-1077.22",
"char_start": 606,
"char_end": 613
},
{
"id": "H91-1077.23",
"char_start": 660,
"char_end": 665
},
{
"id": "H91-1077.24",
"char_start": 675,
"char_end": 682
},
{
"id": "H91-1077.25",
"char_start": 690,
"char_end": 705
},
{
"id": "H91-1077.26",
"char_start": 715,
"char_end": 729
},
{
"id": "H91-1077.27",
"char_start": 762,
"char_end": 777
},
{
"id": "H91-1077.28",
"char_start": 788,
"char_end": 793
},
{
"id": "H91-1077.29",
"char_start": 833,
"char_end": 847
},
{
"id": "H91-1077.30",
"char_start": 880,
"char_end": 886
},
{
"id": "H91-1077.31",
"char_start": 900,
"char_end": 907
},
{
"id": "H91-1077.32",
"char_start": 915,
"char_end": 930
},
{
"id": "H91-1077.33",
"char_start": 971,
"char_end": 977
},
{
"id": "H91-1077.34",
"char_start": 983,
"char_end": 988
},
{
"id": "H91-1077.35",
"char_start": 1012,
"char_end": 1019
},
{
"id": "H91-1077.36",
"char_start": 1035,
"char_end": 1042
},
{
"id": "H91-1077.37",
"char_start": 1077,
"char_end": 1094
},
{
"id": "H91-1077.38",
"char_start": 1106,
"char_end": 1111
},
{
"id": "H91-1077.39",
"char_start": 1119,
"char_end": 1137
},
{
"id": "H91-1077.40",
"char_start": 1145,
"char_end": 1160
},
{
"id": "H91-1077.41",
"char_start": 1171,
"char_end": 1176
},
{
"id": "H91-1077.42",
"char_start": 1211,
"char_end": 1218
},
{
"id": "H91-1077.43",
"char_start": 1228,
"char_end": 1233
},
{
"id": "H91-1077.44",
"char_start": 1256,
"char_end": 1263
},
{
"id": "H91-1077.45",
"char_start": 1343,
"char_end": 1364
},
{
"id": "H91-1077.46",
"char_start": 1366,
"char_end": 1388
},
{
"id": "H91-1077.47",
"char_start": 1390,
"char_end": 1418
}
] | [
{
"label": 1,
"arg1": "H91-1077.1",
"arg2": "H91-1077.2",
"reverse": true
},
{
"label": 4,
"arg1": "H91-1077.3",
"arg2": "H91-1077.4",
"reverse": true
},
{
"label": 4,
"arg1": "H91-1077.12",
"arg2": "H91-1077.13",
"reverse": true
},
{
"label": 4,
"arg1": "H91-1077.14",
"arg2": "H91-1077.15",
"reverse": false
},
{
"label": 3,
"arg1": "H91-1077.16",
"arg2": "H91-1077.17",
"reverse": true
},
{
"label": 4,
"arg1": "H91-1077.26",
"arg2": "H91-1077.27",
"reverse": true
},
{
"label": 4,
"arg1": "H91-1077.29",
"arg2": "H91-1077.30",
"reverse": false
},
{
"label": 3,
"arg1": "H91-1077.31",
"arg2": "H91-1077.32",
"reverse": false
},
{
"label": 3,
"arg1": "H91-1077.34",
"arg2": "H91-1077.35",
"reverse": true
},
{
"label": 1,
"arg1": "H91-1077.36",
"arg2": "H91-1077.37",
"reverse": false
},
{
"label": 3,
"arg1": "H91-1077.43",
"arg2": "H91-1077.44",
"reverse": true
}
] |
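Procedure (2) above can be sketched directly: choose the sense whose associated words are closest to the words observed in context. Here a toy sense inventory stands in for WordNet, and simple word overlap stands in for a real semantic-distance estimate:

# Sketch of procedure (2): pick the sense of a polysemous word whose
# related words best match the words of the surrounding context.
# The sense inventory below is an illustrative assumption.
SENSES = {
    "bank": {
        "bank#1": {"money", "deposit", "loan", "account", "financial"},
        "bank#2": {"river", "shore", "water", "slope", "erosion"},
    }
}

def resolve(word, context_words):
    context = set(context_words)
    def overlap(sense):
        # larger overlap stands in for smaller semantic distance
        return len(SENSES[word][sense] & context)
    return max(SENSES[word], key=overlap)

print(resolve("bank", "they walked along the river to the water".split()))
# -> bank#2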
A97-1027 |
Dutch Sublanguage Semantic Tagging Combined With Mark-Up Technology |
In this paper, we want to show how the morphological component of an existing NLP-system for Dutch (Dutch Medical Language Processor - DMLP) has been extended in order to produce output that is compatible with the language-independent modules of the LSP-MLP system (Linguistic String Project - Medical Language Processor) of New York University. The former can take advantage of the language-independent developments of the latter, while focusing on idiosyncrasies for Dutch. This general strategy will be illustrated by a practical application, namely the highlighting of relevant information in a patient discharge summary (PDS) by means of modern HyperText Mark-Up Language (HTML) technology. Such an application can be of use for medical administrative purposes in a hospital environment.
| [
{
"id": "A97-1027.1",
"char_start": 40,
"char_end": 63
},
{
"id": "A97-1027.2",
"char_start": 79,
"char_end": 141
},
{
"id": "A97-1027.3",
"char_start": 215,
"char_end": 243
},
{
"id": "A97-1027.4",
"char_start": 251,
"char_end": 322
},
{
"id": "A97-1027.5",
"char_start": 388,
"char_end": 421
},
{
"id": "A97-1027.6",
"char_start": 455,
"char_end": 469
},
{
"id": "A97-1027.7",
"char_start": 474,
"char_end": 479
},
{
"id": "A97-1027.8",
"char_start": 604,
"char_end": 635
},
{
"id": "A97-1027.9",
"char_start": 655,
"char_end": 699
}
] | [
{
"label": 4,
"arg1": "A97-1027.1",
"arg2": "A97-1027.2",
"reverse": false
},
{
"label": 4,
"arg1": "A97-1027.3",
"arg2": "A97-1027.4",
"reverse": false
},
{
"label": 4,
"arg1": "A97-1027.6",
"arg2": "A97-1027.7",
"reverse": false
}
] |
A97-1052 | Automatic Extraction Of Subcategorization From Corpora
|
We describe a novel technique and implemented system for constructing a subcategorization dictionary from textual corpora. Each dictionary entry encodes the relative frequency of occurrence of a comprehensive set of subcategorization classes for English. An initial experiment, on a sample of 14 verbs which exhibit multiple complementation patterns, demonstrates that the technique achieves accuracy comparable to previous approaches, which are all limited to a highly restricted set of subcategorization classes. We also demonstrate that a subcategorization dictionary built with the system improves the accuracy of a parser by an appreciable amount.
| [
{
"id": "A97-1052.1",
"char_start": 73,
"char_end": 101
},
{
"id": "A97-1052.2",
"char_start": 107,
"char_end": 122
},
{
"id": "A97-1052.3",
"char_start": 129,
"char_end": 145
},
{
"id": "A97-1052.4",
"char_start": 158,
"char_end": 190
},
{
"id": "A97-1052.5",
"char_start": 217,
"char_end": 242
},
{
"id": "A97-1052.6",
"char_start": 247,
"char_end": 254
},
{
"id": "A97-1052.7",
"char_start": 297,
"char_end": 302
},
{
"id": "A97-1052.8",
"char_start": 317,
"char_end": 350
},
{
"id": "A97-1052.9",
"char_start": 393,
"char_end": 401
},
{
"id": "A97-1052.10",
"char_start": 489,
"char_end": 514
},
{
"id": "A97-1052.11",
"char_start": 543,
"char_end": 571
},
{
"id": "A97-1052.12",
"char_start": 607,
"char_end": 615
},
{
"id": "A97-1052.13",
"char_start": 621,
"char_end": 627
}
] | [
{
"label": 1,
"arg1": "A97-1052.1",
"arg2": "A97-1052.2",
"reverse": true
},
{
"label": 3,
"arg1": "A97-1052.4",
"arg2": "A97-1052.5",
"reverse": false
},
{
"label": 3,
"arg1": "A97-1052.7",
"arg2": "A97-1052.8",
"reverse": true
},
{
"label": 2,
"arg1": "A97-1052.11",
"arg2": "A97-1052.12",
"reverse": false
}
] |
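The dictionary entries described above reduce to relative frequencies of subcategorization classes per verb. A minimal sketch, with toy (verb, frame) observations standing in for frames hypothesised from parsed corpus data:

from collections import Counter, defaultdict

# Sketch of building subcategorization dictionary entries: the relative
# frequency of each subcat class per verb, from observed (verb, frame)
# pairs. The observations below are toy assumptions.
observations = [("give", "NP_NP"), ("give", "NP_PP_to"), ("give", "NP_NP"),
                ("believe", "SCOMP"), ("believe", "NP"), ("believe", "SCOMP")]

counts = defaultdict(Counter)
for verb, frame in observations:
    counts[verb][frame] += 1

dictionary = {verb: {frame: n / sum(frames.values())
                     for frame, n in frames.items()}
              for verb, frames in counts.items()}
print(dictionary["give"])   # {'NP_NP': 0.666..., 'NP_PP_to': 0.333...}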
J87-3001 | PROCESSING DICTIONARY DEFINITIONS WITH PHRASAL PATTERN HIERARCHIES
|
This paper shows how dictionary word sense definitions can be analysed by applying a hierarchy of phrasal patterns. An experimental system embodying this mechanism has been implemented for processing definitions from the Longman Dictionary of Contemporary English. A property of this dictionary, exploited by the system, is that it uses a restricted vocabulary in its word sense definitions. The structures generated by the experimental system are intended to be used for the classification of new word senses in terms of the senses of words in the restricted vocabulary. Examples illustrating the output generated are presented, and some qualitative performance results and problems that were encountered are discussed. The analysis process applies successively more specific phrasal analysis rules as determined by a hierarchy of patterns in which less specific patterns dominate more specific ones. This ensures that reasonable incomplete analyses of the definitions are produced when more complete analyses are not possible, resulting in a relatively robust analysis mechanism. Thus the work reported addresses two robustness problems faced by current experimental natural language processing systems: coping with an incomplete lexicon and with incomplete knowledge of phrasal constructions.
| [
{
"id": "J87-3001.1",
"char_start": 22,
"char_end": 55
},
{
"id": "J87-3001.2",
"char_start": 99,
"char_end": 115
},
{
"id": "J87-3001.3",
"char_start": 201,
"char_end": 212
},
{
"id": "J87-3001.4",
"char_start": 222,
"char_end": 264
},
{
"id": "J87-3001.5",
"char_start": 285,
"char_end": 295
},
{
"id": "J87-3001.6",
"char_start": 340,
"char_end": 361
},
{
"id": "J87-3001.7",
"char_start": 369,
"char_end": 391
},
{
"id": "J87-3001.8",
"char_start": 477,
"char_end": 491
},
{
"id": "J87-3001.9",
"char_start": 499,
"char_end": 510
},
{
"id": "J87-3001.10",
"char_start": 527,
"char_end": 533
},
{
"id": "J87-3001.11",
"char_start": 537,
"char_end": 542
},
{
"id": "J87-3001.12",
"char_start": 550,
"char_end": 571
},
{
"id": "J87-3001.13",
"char_start": 778,
"char_end": 800
},
{
"id": "J87-3001.14",
"char_start": 833,
"char_end": 841
},
{
"id": "J87-3001.15",
"char_start": 865,
"char_end": 873
},
{
"id": "J87-3001.16",
"char_start": 959,
"char_end": 970
},
{
"id": "J87-3001.17",
"char_start": 1063,
"char_end": 1081
},
{
"id": "J87-3001.18",
"char_start": 1120,
"char_end": 1139
},
{
"id": "J87-3001.19",
"char_start": 1170,
"char_end": 1205
},
{
"id": "J87-3001.20",
"char_start": 1233,
"char_end": 1240
},
{
"id": "J87-3001.21",
"char_start": 1261,
"char_end": 1270
},
{
"id": "J87-3001.22",
"char_start": 1274,
"char_end": 1295
}
] | [
{
"label": 1,
"arg1": "J87-3001.1",
"arg2": "J87-3001.2",
"reverse": true
},
{
"label": 4,
"arg1": "J87-3001.3",
"arg2": "J87-3001.4",
"reverse": false
},
{
"label": 1,
"arg1": "J87-3001.6",
"arg2": "J87-3001.7",
"reverse": false
},
{
"label": 3,
"arg1": "J87-3001.9",
"arg2": "J87-3001.10",
"reverse": true
},
{
"label": 4,
"arg1": "J87-3001.11",
"arg2": "J87-3001.12",
"reverse": false
},
{
"label": 5,
"arg1": "J87-3001.16",
"arg2": "J87-3001.17",
"reverse": true
},
{
"label": 3,
"arg1": "J87-3001.18",
"arg2": "J87-3001.19",
"reverse": false
}
] |
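The hierarchy of phrasal patterns can be sketched as an ordered pattern list in which more specific patterns are tried before the general ones they dominate, so a fallback analysis is still produced for incomplete matches. The regular expressions and class labels below are illustrative assumptions:

import re

# Sketch of matching definition text against a hierarchy of phrasal
# patterns: most specific first, with less specific patterns acting
# as robust fallbacks that still yield a partial analysis.
PATTERNS = [  # ordered most-specific first
    ("ISA+PURPOSE", re.compile(r"^a (?P<genus>\w+) (?:used )?for (?P<purpose>.+)$")),
    ("ISA+REL",     re.compile(r"^a (?P<genus>\w+) that (?P<rel>.+)$")),
    ("ISA",         re.compile(r"^a (?P<genus>\w+)\b.*$")),
]

def analyse(definition):
    for name, pat in PATTERNS:
        m = pat.match(definition)
        if m:
            return name, m.groupdict()
    return None

print(analyse("a tool used for cutting wood"))   # ISA+PURPOSE match
print(analyse("a person that repairs shoes"))    # ISA+REL match
print(analyse("a mammal of the cat family"))     # ISA fallback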
I05-5009 | Evaluating Contextual Dependency of Paraphrases using a Latent Variable Model |
This paper presents an evaluation method employing a latent variable model for paraphrases with their contexts. We assume that the context of a sentence is indicated by a latent variable of the model as a topic and that the likelihood of each variable can be inferred. A paraphrase is evaluated for whether its sentences are used in the same context. Experimental results showed that the proposed method achieves almost 60% accuracy and that there is not a large performance difference between the two models. The results also revealed an upper bound of accuracy of 77% with the method when using only topic information.
| [
{
"id": "I05-5009.1",
"char_start": 24,
"char_end": 41
},
{
"id": "I05-5009.2",
"char_start": 54,
"char_end": 75
},
{
"id": "I05-5009.3",
"char_start": 80,
"char_end": 91
},
{
"id": "I05-5009.4",
"char_start": 103,
"char_end": 111
},
{
"id": "I05-5009.5",
"char_start": 132,
"char_end": 139
},
{
"id": "I05-5009.6",
"char_start": 145,
"char_end": 153
},
{
"id": "I05-5009.7",
"char_start": 172,
"char_end": 187
},
{
"id": "I05-5009.8",
"char_start": 195,
"char_end": 200
},
{
"id": "I05-5009.9",
"char_start": 206,
"char_end": 211
},
{
"id": "I05-5009.10",
"char_start": 225,
"char_end": 235
},
{
"id": "I05-5009.11",
"char_start": 244,
"char_end": 252
},
{
"id": "I05-5009.12",
"char_start": 272,
"char_end": 282
},
{
"id": "I05-5009.13",
"char_start": 312,
"char_end": 321
},
{
"id": "I05-5009.14",
"char_start": 343,
"char_end": 350
},
{
"id": "I05-5009.15",
"char_start": 425,
"char_end": 433
},
{
"id": "I05-5009.16",
"char_start": 503,
"char_end": 509
},
{
"id": "I05-5009.17",
"char_start": 555,
"char_end": 563
},
{
"id": "I05-5009.18",
"char_start": 580,
"char_end": 586
},
{
"id": "I05-5009.19",
"char_start": 603,
"char_end": 620
}
] | [
{
"label": 1,
"arg1": "I05-5009.1",
"arg2": "I05-5009.2",
"reverse": true
},
{
"label": 3,
"arg1": "I05-5009.3",
"arg2": "I05-5009.4",
"reverse": true
},
{
"label": 3,
"arg1": "I05-5009.5",
"arg2": "I05-5009.6",
"reverse": false
},
{
"label": 4,
"arg1": "I05-5009.7",
"arg2": "I05-5009.8",
"reverse": false
},
{
"label": 3,
"arg1": "I05-5009.10",
"arg2": "I05-5009.11",
"reverse": false
},
{
"label": 3,
"arg1": "I05-5009.13",
"arg2": "I05-5009.14",
"reverse": false
},
{
"label": 2,
"arg1": "I05-5009.17",
"arg2": "I05-5009.19",
"reverse": true
}
] |
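The evaluation criterion can be sketched as: infer the most likely latent topic of each sentence's context and accept the paraphrase pair as same-context only when the topics agree. The topic-word probabilities below are toy assumptions standing in for the inferred latent variable model:

import math

# Sketch of the contextual check: a paraphrase pair counts as used in
# the same context when both contexts select the same latent topic.
TOPIC_WORD = {
    "finance": {"bank": 0.2, "loan": 0.2, "rate": 0.2, "market": 0.2},
    "sports":  {"team": 0.2, "match": 0.2, "score": 0.2, "league": 0.2},
}

def most_likely_topic(context_words, prior=0.5, floor=1e-4):
    def loglik(topic):
        return math.log(prior) + sum(
            math.log(TOPIC_WORD[topic].get(w, floor)) for w in context_words)
    return max(TOPIC_WORD, key=loglik)

def same_context(ctx_a, ctx_b):
    return most_likely_topic(ctx_a) == most_likely_topic(ctx_b)

print(same_context("the bank raised its loan rate".split(),
                   "the market reacted to the rate".split()))  # True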
P83-1003 | Crossed Serial Dependencies : A low-power parseable extension to GPSG
|
An extension to the GPSG grammatical formalism is proposed, allowing non-terminals to consist of finite sequences of category labels, and allowing schematic variables to range over such sequences. The extension is shown to be sufficient to provide a strongly adequate grammar for crossed serial dependencies, as found in e.g. Dutch subordinate clauses. The structures induced for such constructions are argued to be more appropriate to data involving conjunction than some previous proposals have been. The extension is shown to be parseable by a simple extension to an existing parsing method for GPSG.
| [
{
"id": "P83-1003.1",
"char_start": 21,
"char_end": 47
},
{
"id": "P83-1003.2",
"char_start": 70,
"char_end": 83
},
{
"id": "P83-1003.3",
"char_start": 118,
"char_end": 133
},
{
"id": "P83-1003.4",
"char_start": 148,
"char_end": 167
},
{
"id": "P83-1003.5",
"char_start": 269,
"char_end": 276
},
{
"id": "P83-1003.6",
"char_start": 281,
"char_end": 308
},
{
"id": "P83-1003.7",
"char_start": 327,
"char_end": 352
},
{
"id": "P83-1003.8",
"char_start": 386,
"char_end": 399
},
{
"id": "P83-1003.9",
"char_start": 452,
"char_end": 463
},
{
"id": "P83-1003.10",
"char_start": 580,
"char_end": 594
},
{
"id": "P83-1003.11",
"char_start": 599,
"char_end": 603
}
] | [
{
"label": 4,
"arg1": "P83-1003.2",
"arg2": "P83-1003.3",
"reverse": true
},
{
"label": 1,
"arg1": "P83-1003.5",
"arg2": "P83-1003.6",
"reverse": false
},
{
"label": 1,
"arg1": "P83-1003.10",
"arg2": "P83-1003.11",
"reverse": false
}
] |