id (string, length 8) | title (string, length 18-138) | abstract (string, length 177-1.96k) | entities (list) | relations (list) |
---|---|---|---|---|
C04-1128 | Detection Of Question-Answer Pairs In Email Conversations |
While sentence extraction as an approach to summarization has been shown to work in documents of certain genres, because of the conversational nature of email communication where utterances are made in relation to one made previously, sentence extraction may not capture the necessary segments of dialogue that would make a summary coherent. In this paper, we present our work on the detection of question-answer pairs in an email conversation for the task of email summarization. We show that various features based on the structure of email-threads can be used to improve upon lexical similarity of discourse segments for question-answer pairing.
| [
{
"id": "C04-1128.1",
"char_start": 7,
"char_end": 26
},
{
"id": "C04-1128.2",
"char_start": 45,
"char_end": 58
},
{
"id": "C04-1128.3",
"char_start": 85,
"char_end": 94
},
{
"id": "C04-1128.4",
"char_start": 106,
"char_end": 112
},
{
"id": "C04-1128.5",
"char_start": 154,
"char_end": 173
},
{
"id": "C04-1128.6",
"char_start": 180,
"char_end": 190
},
{
"id": "C04-1128.7",
"char_start": 236,
"char_end": 255
},
{
"id": "C04-1128.8",
"char_start": 286,
"char_end": 294
},
{
"id": "C04-1128.9",
"char_start": 298,
"char_end": 306
},
{
"id": "C04-1128.10",
"char_start": 325,
"char_end": 332
},
{
"id": "C04-1128.11",
"char_start": 398,
"char_end": 419
},
{
"id": "C04-1128.12",
"char_start": 426,
"char_end": 444
},
{
"id": "C04-1128.13",
"char_start": 461,
"char_end": 480
},
{
"id": "C04-1128.14",
"char_start": 503,
"char_end": 511
},
{
"id": "C04-1128.15",
"char_start": 580,
"char_end": 598
},
{
"id": "C04-1128.16",
"char_start": 602,
"char_end": 620
},
{
"id": "C04-1128.17",
"char_start": 625,
"char_end": 648
}
] | [
{
"label": 1,
"arg1": "C04-1128.1",
"arg2": "C04-1128.2",
"reverse": false
},
{
"label": 3,
"arg1": "C04-1128.3",
"arg2": "C04-1128.4",
"reverse": true
},
{
"label": 4,
"arg1": "C04-1128.5",
"arg2": "C04-1128.6",
"reverse": false
},
{
"label": 4,
"arg1": "C04-1128.8",
"arg2": "C04-1128.9",
"reverse": false
},
{
"label": 4,
"arg1": "C04-1128.11",
"arg2": "C04-1128.12",
"reverse": true
},
{
"label": 1,
"arg1": "C04-1128.14",
"arg2": "C04-1128.15",
"reverse": false
}
] |
C04-1147 | Fast Computation Of Lexical Affinity Models |
We present a framework for the fast computation of lexical affinity models. The framework is composed of a novel algorithm to efficiently compute the co-occurrence distribution between pairs of terms, an independence model, and a parametric affinity model. In comparison with previous models, which either use arbitrary windows to compute similarity between words or use lexical affinity to create sequential models, in this paper we focus on models intended to capture the co-occurrence patterns of any pair of words or phrases at any distance in the corpus. The framework is flexible, allowing fast adaptation to applications and it is scalable. We apply it in combination with a terabyte corpus to answer natural language tests, achieving encouraging results.
| [
{
"id": "C04-1147.1",
"char_start": 37,
"char_end": 48
},
{
"id": "C04-1147.2",
"char_start": 52,
"char_end": 75
},
{
"id": "C04-1147.3",
"char_start": 151,
"char_end": 177
},
{
"id": "C04-1147.4",
"char_start": 195,
"char_end": 200
},
{
"id": "C04-1147.5",
"char_start": 205,
"char_end": 223
},
{
"id": "C04-1147.6",
"char_start": 231,
"char_end": 256
},
{
"id": "C04-1147.7",
"char_start": 286,
"char_end": 292
},
{
"id": "C04-1147.8",
"char_start": 340,
"char_end": 350
},
{
"id": "C04-1147.9",
"char_start": 359,
"char_end": 364
},
{
"id": "C04-1147.10",
"char_start": 372,
"char_end": 388
},
{
"id": "C04-1147.11",
"char_start": 399,
"char_end": 416
},
{
"id": "C04-1147.12",
"char_start": 444,
"char_end": 450
},
{
"id": "C04-1147.13",
"char_start": 475,
"char_end": 497
},
{
"id": "C04-1147.14",
"char_start": 513,
"char_end": 518
},
{
"id": "C04-1147.15",
"char_start": 522,
"char_end": 529
},
{
"id": "C04-1147.16",
"char_start": 553,
"char_end": 559
},
{
"id": "C04-1147.17",
"char_start": 602,
"char_end": 612
},
{
"id": "C04-1147.18",
"char_start": 616,
"char_end": 628
},
{
"id": "C04-1147.19",
"char_start": 683,
"char_end": 698
},
{
"id": "C04-1147.20",
"char_start": 709,
"char_end": 731
}
] | [
{
"label": 3,
"arg1": "C04-1147.3",
"arg2": "C04-1147.4",
"reverse": false
},
{
"label": 3,
"arg1": "C04-1147.8",
"arg2": "C04-1147.9",
"reverse": false
},
{
"label": 1,
"arg1": "C04-1147.10",
"arg2": "C04-1147.11",
"reverse": false
},
{
"label": 3,
"arg1": "C04-1147.13",
"arg2": "C04-1147.14",
"reverse": false
}
] |
C04-1192 | Fine-Grained Word Sense Disambiguation Based On Parallel Corpora, Word Alignment, Word Clustering And Aligned Wordnets |
The paper presents a method for word sense disambiguation based on parallel corpora. The method exploits recent advances in word alignment and word clustering based on automatic extraction of translation equivalents and being supported by available aligned wordnets for the languages in the corpus. The wordnets are aligned to the Princeton Wordnet, according to the principles established by EuroWordNet. The evaluation of the WSD system, implementing the method described herein showed very encouraging results. The same system used in a validation mode, can be used to check and spot alignment errors in multilingually aligned wordnets as BalkaNet and EuroWordNet.
| [
{
"id": "C04-1192.1",
"char_start": 33,
"char_end": 58
},
{
"id": "C04-1192.2",
"char_start": 68,
"char_end": 84
},
{
"id": "C04-1192.3",
"char_start": 125,
"char_end": 139
},
{
"id": "C04-1192.4",
"char_start": 144,
"char_end": 159
},
{
"id": "C04-1192.5",
"char_start": 169,
"char_end": 189
},
{
"id": "C04-1192.6",
"char_start": 193,
"char_end": 216
},
{
"id": "C04-1192.7",
"char_start": 258,
"char_end": 266
},
{
"id": "C04-1192.8",
"char_start": 275,
"char_end": 284
},
{
"id": "C04-1192.9",
"char_start": 292,
"char_end": 298
},
{
"id": "C04-1192.10",
"char_start": 304,
"char_end": 312
},
{
"id": "C04-1192.11",
"char_start": 332,
"char_end": 349
},
{
"id": "C04-1192.12",
"char_start": 394,
"char_end": 405
},
{
"id": "C04-1192.13",
"char_start": 429,
"char_end": 439
},
{
"id": "C04-1192.14",
"char_start": 588,
"char_end": 604
},
{
"id": "C04-1192.15",
"char_start": 608,
"char_end": 639
},
{
"id": "C04-1192.16",
"char_start": 643,
"char_end": 651
},
{
"id": "C04-1192.17",
"char_start": 656,
"char_end": 667
}
] | [
{
"label": 1,
"arg1": "C04-1192.1",
"arg2": "C04-1192.2",
"reverse": true
},
{
"label": 1,
"arg1": "C04-1192.4",
"arg2": "C04-1192.5",
"reverse": true
},
{
"label": 4,
"arg1": "C04-1192.14",
"arg2": "C04-1192.15",
"reverse": false
}
] |
N04-1022 | Minimum Bayes-Risk Decoding For Statistical Machine Translation |
We present Minimum Bayes-Risk (MBR) decoding for statistical machine translation. This statistical approach aims to minimize expected loss of translation errors under loss functions that measure translation performance. We describe a hierarchy of loss functions that incorporate different levels of linguistic information from word strings, word-to-word alignments from an MT system, and syntactic structure from parse-trees of source and target language sentences. We report the performance of the MBR decoders on a Chinese-to-English translation task. Our results show that MBR decoding can be used to tune statistical MT performance for specific loss functions.
| [
{
"id": "N04-1022.1",
"char_start": 12,
"char_end": 45
},
{
"id": "N04-1022.2",
"char_start": 50,
"char_end": 81
},
{
"id": "N04-1022.3",
"char_start": 126,
"char_end": 139
},
{
"id": "N04-1022.4",
"char_start": 143,
"char_end": 161
},
{
"id": "N04-1022.5",
"char_start": 168,
"char_end": 182
},
{
"id": "N04-1022.6",
"char_start": 196,
"char_end": 219
},
{
"id": "N04-1022.7",
"char_start": 248,
"char_end": 262
},
{
"id": "N04-1022.8",
"char_start": 300,
"char_end": 322
},
{
"id": "N04-1022.9",
"char_start": 328,
"char_end": 340
},
{
"id": "N04-1022.10",
"char_start": 342,
"char_end": 365
},
{
"id": "N04-1022.11",
"char_start": 374,
"char_end": 383
},
{
"id": "N04-1022.12",
"char_start": 389,
"char_end": 408
},
{
"id": "N04-1022.13",
"char_start": 414,
"char_end": 425
},
{
"id": "N04-1022.14",
"char_start": 429,
"char_end": 465
},
{
"id": "N04-1022.15",
"char_start": 481,
"char_end": 492
},
{
"id": "N04-1022.16",
"char_start": 500,
"char_end": 512
},
{
"id": "N04-1022.17",
"char_start": 518,
"char_end": 553
},
{
"id": "N04-1022.18",
"char_start": 577,
"char_end": 589
},
{
"id": "N04-1022.19",
"char_start": 610,
"char_end": 636
},
{
"id": "N04-1022.20",
"char_start": 650,
"char_end": 664
}
] | [
{
"label": 1,
"arg1": "N04-1022.1",
"arg2": "N04-1022.2",
"reverse": false
},
{
"label": 3,
"arg1": "N04-1022.5",
"arg2": "N04-1022.6",
"reverse": false
},
{
"label": 3,
"arg1": "N04-1022.7",
"arg2": "N04-1022.8",
"reverse": false
},
{
"label": 4,
"arg1": "N04-1022.10",
"arg2": "N04-1022.11",
"reverse": true
},
{
"label": 3,
"arg1": "N04-1022.13",
"arg2": "N04-1022.14",
"reverse": false
},
{
"label": 2,
"arg1": "N04-1022.15",
"arg2": "N04-1022.16",
"reverse": false
},
{
"label": 1,
"arg1": "N04-1022.18",
"arg2": "N04-1022.19",
"reverse": false
}
] |
N04-4028 | Confidence Estimation For Information Extraction |
Information extraction techniques automatically create structured databases from unstructured data sources, such as the Web or newswire documents. Despite the successes of these systems, accuracy will always be imperfect. For many reasons, it is highly desirable to accurately estimate the confidence the system has in the correctness of each extracted field. The information extraction system we evaluate is based on a linear-chain conditional random field (CRF), a probabilistic model which has performed well on information extraction tasks because of its ability to capture arbitrary, overlapping features of the input in a Markov model. We implement several techniques to estimate the confidence of both extracted fields and entire multi-field records, obtaining an average precision of 98% for retrieving correct fields and 87% for multi-field records.
| [
{
"id": "N04-4028.1",
"char_start": 1,
"char_end": 34
},
{
"id": "N04-4028.2",
"char_start": 56,
"char_end": 76
},
{
"id": "N04-4028.3",
"char_start": 82,
"char_end": 107
},
{
"id": "N04-4028.4",
"char_start": 128,
"char_end": 146
},
{
"id": "N04-4028.5",
"char_start": 188,
"char_end": 196
},
{
"id": "N04-4028.6",
"char_start": 291,
"char_end": 301
},
{
"id": "N04-4028.7",
"char_start": 344,
"char_end": 359
},
{
"id": "N04-4028.8",
"char_start": 365,
"char_end": 394
},
{
"id": "N04-4028.9",
"char_start": 421,
"char_end": 464
},
{
"id": "N04-4028.10",
"char_start": 468,
"char_end": 487
},
{
"id": "N04-4028.11",
"char_start": 516,
"char_end": 544
},
{
"id": "N04-4028.12",
"char_start": 602,
"char_end": 610
},
{
"id": "N04-4028.13",
"char_start": 618,
"char_end": 623
},
{
"id": "N04-4028.14",
"char_start": 629,
"char_end": 642
},
{
"id": "N04-4028.15",
"char_start": 692,
"char_end": 702
},
{
"id": "N04-4028.16",
"char_start": 711,
"char_end": 727
},
{
"id": "N04-4028.17",
"char_start": 739,
"char_end": 758
},
{
"id": "N04-4028.18",
"char_start": 773,
"char_end": 790
},
{
"id": "N04-4028.19",
"char_start": 821,
"char_end": 827
}
] | [
{
"label": 2,
"arg1": "N04-4028.1",
"arg2": "N04-4028.2",
"reverse": false
},
{
"label": 3,
"arg1": "N04-4028.6",
"arg2": "N04-4028.7",
"reverse": false
},
{
"label": 1,
"arg1": "N04-4028.8",
"arg2": "N04-4028.9",
"reverse": true
},
{
"label": 1,
"arg1": "N04-4028.10",
"arg2": "N04-4028.11",
"reverse": false
},
{
"label": 3,
"arg1": "N04-4028.12",
"arg2": "N04-4028.14",
"reverse": true
},
{
"label": 3,
"arg1": "N04-4028.15",
"arg2": "N04-4028.16",
"reverse": false
}
] |
M92-1025 | GE NLTOOLSET: Description Of The System As Used For MUC-4 |
The GE NLToolset is a set of text interpretation tools designed to be easily adapted to new domains. This report summarizes the system and its performance on the MUC-4 task.
| [
{
"id": "M92-1025.1",
"char_start": 5,
"char_end": 17
},
{
"id": "M92-1025.2",
"char_start": 30,
"char_end": 55
},
{
"id": "M92-1025.3",
"char_start": 93,
"char_end": 100
},
{
"id": "M92-1025.4",
"char_start": 163,
"char_end": 173
}
] | [] |
P05-1028 | Exploring And Exploiting The Limited Utility Of Captions In Recognizing Intention In Information Graphics |
This paper presents a corpus study that explores the extent to which captions contribute to recognizing the intended message of an information graphic. It then presents an implemented graphic interpretation system that takes into account a variety of communicative signals, and an evaluation study showing that evidence obtained from shallow processing of the graphic's caption has a significant impact on the system's success. This work is part of a larger project whose goal is to provide sight-impaired users with effective access to information graphics.
| [
{
"id": "P05-1028.1",
"char_start": 23,
"char_end": 35
},
{
"id": "P05-1028.2",
"char_start": 132,
"char_end": 151
},
{
"id": "P05-1028.3",
"char_start": 185,
"char_end": 214
},
{
"id": "P05-1028.4",
"char_start": 252,
"char_end": 273
},
{
"id": "P05-1028.5",
"char_start": 335,
"char_end": 353
},
{
"id": "P05-1028.6",
"char_start": 492,
"char_end": 512
},
{
"id": "P05-1028.7",
"char_start": 538,
"char_end": 558
}
] | [
{
"label": 1,
"arg1": "P05-1028.3",
"arg2": "P05-1028.4",
"reverse": true
}
] |
P05-1057 | Log-Linear Models For Word Alignment |
We present a framework for word alignment based on log-linear models. All knowledge sources are treated as feature functions, which depend on the source langauge sentence, the target language sentence and possible additional variables. Log-linear models allow statistical alignment models to be easily extended by incorporating syntactic information. In this paper, we use IBM Model 3 alignment probabilities, POS correspondence, and bilingual dictionary coverage as features. Our experiments show that log-linear models significantly outperform IBM translation models.
| [
{
"id": "P05-1057.1",
"char_start": 28,
"char_end": 42
},
{
"id": "P05-1057.2",
"char_start": 52,
"char_end": 69
},
{
"id": "P05-1057.3",
"char_start": 75,
"char_end": 92
},
{
"id": "P05-1057.4",
"char_start": 108,
"char_end": 125
},
{
"id": "P05-1057.5",
"char_start": 147,
"char_end": 171
},
{
"id": "P05-1057.6",
"char_start": 177,
"char_end": 201
},
{
"id": "P05-1057.7",
"char_start": 237,
"char_end": 254
},
{
"id": "P05-1057.8",
"char_start": 261,
"char_end": 289
},
{
"id": "P05-1057.9",
"char_start": 329,
"char_end": 350
},
{
"id": "P05-1057.10",
"char_start": 374,
"char_end": 409
},
{
"id": "P05-1057.11",
"char_start": 411,
"char_end": 429
},
{
"id": "P05-1057.12",
"char_start": 435,
"char_end": 464
},
{
"id": "P05-1057.13",
"char_start": 468,
"char_end": 476
},
{
"id": "P05-1057.14",
"char_start": 504,
"char_end": 521
},
{
"id": "P05-1057.15",
"char_start": 547,
"char_end": 569
}
] | [
{
"label": 1,
"arg1": "P05-1057.1",
"arg2": "P05-1057.2",
"reverse": true
},
{
"label": 1,
"arg1": "P05-1057.3",
"arg2": "P05-1057.4",
"reverse": false
},
{
"label": 1,
"arg1": "P05-1057.7",
"arg2": "P05-1057.8",
"reverse": false
},
{
"label": 1,
"arg1": "P05-1057.12",
"arg2": "P05-1057.13",
"reverse": false
},
{
"label": 6,
"arg1": "P05-1057.14",
"arg2": "P05-1057.15",
"reverse": false
}
] |
P05-2013 | Automatic Induction Of A CCG Grammar For Turkish |
This paper presents the results of automatically inducing a Combinatory Categorial Grammar (CCG) lexicon from a Turkish dependency treebank. The fact that Turkish is an agglutinating free word order language presents a challenge for language theories. We explored possible ways to obtain a compact lexicon, consistent with CCG principles, from a treebank which is an order of magnitude smaller than Penn WSJ.
| [
{
"id": "P05-2013.1",
"char_start": 61,
"char_end": 105
},
{
"id": "P05-2013.2",
"char_start": 113,
"char_end": 140
},
{
"id": "P05-2013.3",
"char_start": 156,
"char_end": 163
},
{
"id": "P05-2013.4",
"char_start": 170,
"char_end": 208
},
{
"id": "P05-2013.5",
"char_start": 234,
"char_end": 251
},
{
"id": "P05-2013.6",
"char_start": 291,
"char_end": 306
},
{
"id": "P05-2013.7",
"char_start": 324,
"char_end": 338
},
{
"id": "P05-2013.8",
"char_start": 347,
"char_end": 355
},
{
"id": "P05-2013.9",
"char_start": 400,
"char_end": 408
}
] | [
{
"label": 1,
"arg1": "P05-2013.1",
"arg2": "P05-2013.2",
"reverse": true
},
{
"label": 3,
"arg1": "P05-2013.3",
"arg2": "P05-2013.4",
"reverse": true
},
{
"label": 6,
"arg1": "P05-2013.8",
"arg2": "P05-2013.9",
"reverse": false
}
] |
I05-2044 | Two-Phase Shift-Reduce Deterministic Dependency Parser of Chinese |
In the Chinese language, a verb may have its dependents on its left, right or on both sides. The ambiguity resolution of right-side dependencies is essential for dependency parsing of sentences with two or more verbs. Previous works on shift-reduce dependency parsers may not guarantee the connectivity of a dependency tree due to their weakness at resolving the right-side dependencies. This paper proposes a two-phase shift-reduce dependency parser based on SVM learning. The left-side dependents and right-side nominal dependents are detected in Phase I, and right-side verbal dependents are decided in Phase II. In experimental evaluation, our proposed method outperforms previous shift-reduce dependency parsers for the Chine language, showing improvement of dependency accuracy by 10.08%.
| [
{
"id": "I05-2044.1",
"char_start": 8,
"char_end": 24
},
{
"id": "I05-2044.2",
"char_start": 28,
"char_end": 32
},
{
"id": "I05-2044.3",
"char_start": 46,
"char_end": 56
},
{
"id": "I05-2044.4",
"char_start": 98,
"char_end": 118
},
{
"id": "I05-2044.5",
"char_start": 122,
"char_end": 145
},
{
"id": "I05-2044.6",
"char_start": 163,
"char_end": 181
},
{
"id": "I05-2044.7",
"char_start": 185,
"char_end": 194
},
{
"id": "I05-2044.8",
"char_start": 212,
"char_end": 217
},
{
"id": "I05-2044.9",
"char_start": 237,
"char_end": 268
},
{
"id": "I05-2044.10",
"char_start": 291,
"char_end": 303
},
{
"id": "I05-2044.11",
"char_start": 309,
"char_end": 324
},
{
"id": "I05-2044.12",
"char_start": 364,
"char_end": 387
},
{
"id": "I05-2044.13",
"char_start": 411,
"char_end": 451
},
{
"id": "I05-2044.14",
"char_start": 461,
"char_end": 473
},
{
"id": "I05-2044.15",
"char_start": 479,
"char_end": 499
},
{
"id": "I05-2044.16",
"char_start": 504,
"char_end": 533
},
{
"id": "I05-2044.17",
"char_start": 563,
"char_end": 591
},
{
"id": "I05-2044.18",
"char_start": 686,
"char_end": 717
},
{
"id": "I05-2044.19",
"char_start": 726,
"char_end": 740
},
{
"id": "I05-2044.20",
"char_start": 765,
"char_end": 784
}
] | [
{
"label": 4,
"arg1": "I05-2044.1",
"arg2": "I05-2044.2",
"reverse": true
},
{
"label": 4,
"arg1": "I05-2044.4",
"arg2": "I05-2044.6",
"reverse": false
},
{
"label": 4,
"arg1": "I05-2044.7",
"arg2": "I05-2044.8",
"reverse": true
},
{
"label": 3,
"arg1": "I05-2044.10",
"arg2": "I05-2044.11",
"reverse": false
},
{
"label": 1,
"arg1": "I05-2044.13",
"arg2": "I05-2044.14",
"reverse": true
},
{
"label": 1,
"arg1": "I05-2044.18",
"arg2": "I05-2044.19",
"reverse": false
}
] |
E99-1038 | Focusing On Focus: A Formalization |
We present an operable definition of focus which is argued to be of a cognito-pragmatic nature and explore how it is determined in discourse in a formalized manner. For this purpose, a file card model of discourse model and knowledge store is introduced enabling the decomposition and formal representation of its determination process as a programmable algorithm (FDA). Interdisciplinary evidence from social and cognitive psychology is cited and the prospect of the integration of focus via FDA as a discourse-level construct into speech synthesis systems, in particular, concept-to-speech systems, is also briefly discussed.
| [
{
"id": "E99-1038.1",
"char_start": 38,
"char_end": 43
},
{
"id": "E99-1038.2",
"char_start": 132,
"char_end": 141
},
{
"id": "E99-1038.3",
"char_start": 205,
"char_end": 220
},
{
"id": "E99-1038.4",
"char_start": 225,
"char_end": 240
},
{
"id": "E99-1038.5",
"char_start": 268,
"char_end": 281
},
{
"id": "E99-1038.6",
"char_start": 286,
"char_end": 307
},
{
"id": "E99-1038.7",
"char_start": 315,
"char_end": 336
},
{
"id": "E99-1038.8",
"char_start": 366,
"char_end": 369
},
{
"id": "E99-1038.9",
"char_start": 484,
"char_end": 489
},
{
"id": "E99-1038.10",
"char_start": 494,
"char_end": 497
},
{
"id": "E99-1038.11",
"char_start": 503,
"char_end": 528
},
{
"id": "E99-1038.12",
"char_start": 534,
"char_end": 558
},
{
"id": "E99-1038.13",
"char_start": 575,
"char_end": 600
}
] | [
{
"label": 4,
"arg1": "E99-1038.1",
"arg2": "E99-1038.2",
"reverse": false
},
{
"label": 1,
"arg1": "E99-1038.9",
"arg2": "E99-1038.12",
"reverse": false
}
] |
E87-1037 | A Comparison Of Rule-Invocation Strategies In Context-Free Chart Parsing |
Currently several grammatical formalisms converge towards being declarative and towards utilizing context-free phrase-structure grammar as a backbone, e.g. LFG and PATR-II. Typically the processing of these formalisms is organized within a chart-parsing framework. The declarative character of the formalisms makes it important to decide upon an overall optimal control strategy on the part of the processor. In particular, this brings the rule-invocation strategy into critical focus: to gain maximal processing efficiency, one has to determine the best way of putting the rules to use. The aim of this paper is to provide a survey and a practical comparison of fundamental rule-invocation strategies within context-free chart parsing.
| [
{
"id": "E87-1037.1",
"char_start": 19,
"char_end": 41
},
{
"id": "E87-1037.2",
"char_start": 99,
"char_end": 136
},
{
"id": "E87-1037.3",
"char_start": 157,
"char_end": 160
},
{
"id": "E87-1037.4",
"char_start": 165,
"char_end": 172
},
{
"id": "E87-1037.5",
"char_start": 241,
"char_end": 264
},
{
"id": "E87-1037.6",
"char_start": 299,
"char_end": 309
},
{
"id": "E87-1037.7",
"char_start": 355,
"char_end": 379
},
{
"id": "E87-1037.8",
"char_start": 441,
"char_end": 465
},
{
"id": "E87-1037.9",
"char_start": 503,
"char_end": 524
},
{
"id": "E87-1037.10",
"char_start": 575,
"char_end": 580
},
{
"id": "E87-1037.11",
"char_start": 676,
"char_end": 702
},
{
"id": "E87-1037.12",
"char_start": 710,
"char_end": 736
}
] | [
{
"label": 1,
"arg1": "E87-1037.1",
"arg2": "E87-1037.2",
"reverse": true
},
{
"label": 4,
"arg1": "E87-1037.11",
"arg2": "E87-1037.12",
"reverse": false
}
] |
E91-1043 | A Bidirectional Model For Natural Language Processing |
In this paper I will argue for a model of grammatical processing that is based on uniform processing and knowledge sources. The main feature of this model is to view parsing and generation as two strongly interleaved tasks performed by a single parametrized deduction process. It will be shown that this view supports flexible and efficient natural language processing.
| [
{
"id": "E91-1043.1",
"char_start": 34,
"char_end": 65
},
{
"id": "E91-1043.2",
"char_start": 83,
"char_end": 101
},
{
"id": "E91-1043.3",
"char_start": 106,
"char_end": 123
},
{
"id": "E91-1043.4",
"char_start": 134,
"char_end": 141
},
{
"id": "E91-1043.5",
"char_start": 167,
"char_end": 174
},
{
"id": "E91-1043.6",
"char_start": 179,
"char_end": 189
},
{
"id": "E91-1043.7",
"char_start": 246,
"char_end": 268
},
{
"id": "E91-1043.8",
"char_start": 342,
"char_end": 369
}
] | [
{
"label": 1,
"arg1": "E91-1043.1",
"arg2": "E91-1043.2",
"reverse": true
},
{
"label": 6,
"arg1": "E91-1043.5",
"arg2": "E91-1043.6",
"reverse": false
}
] |
E93-1023 | A Probabilistic Context-Free Grammar For Disambiguation In Morphological Parsing |
One of the major problems one is faced with when decomposing words into their constituent parts is ambiguity: the generation of multiple analyses for one input word, many of which are implausible. In order to deal with ambiguity, the MORphological PArser MORPA is provided with a probabilistic context-free grammar (PCFG), i.e. it combines a "conventional" context-free morphological grammar to filter out ungrammatical segmentations with a probability-based scoring function which determines the likelihood of each successful parse. Consequently, remaining analyses can be ordered along a scale of plausibility. Test performance data will show that a PCFG yields good results in morphological parsing. MORPA is a fully implemented parser developed for use in a text-to-speech conversion system.
| [
{
"id": "E93-1023.1",
"char_start": 62,
"char_end": 67
},
{
"id": "E93-1023.2",
"char_start": 79,
"char_end": 96
},
{
"id": "E93-1023.3",
"char_start": 100,
"char_end": 109
},
{
"id": "E93-1023.4",
"char_start": 115,
"char_end": 125
},
{
"id": "E93-1023.5",
"char_start": 138,
"char_end": 146
},
{
"id": "E93-1023.6",
"char_start": 155,
"char_end": 165
},
{
"id": "E93-1023.7",
"char_start": 220,
"char_end": 229
},
{
"id": "E93-1023.8",
"char_start": 235,
"char_end": 261
},
{
"id": "E93-1023.9",
"char_start": 281,
"char_end": 322
},
{
"id": "E93-1023.10",
"char_start": 343,
"char_end": 392
},
{
"id": "E93-1023.11",
"char_start": 407,
"char_end": 434
},
{
"id": "E93-1023.12",
"char_start": 442,
"char_end": 476
},
{
"id": "E93-1023.13",
"char_start": 528,
"char_end": 533
},
{
"id": "E93-1023.14",
"char_start": 559,
"char_end": 567
},
{
"id": "E93-1023.15",
"char_start": 653,
"char_end": 657
},
{
"id": "E93-1023.16",
"char_start": 681,
"char_end": 702
},
{
"id": "E93-1023.17",
"char_start": 704,
"char_end": 709
},
{
"id": "E93-1023.18",
"char_start": 733,
"char_end": 739
},
{
"id": "E93-1023.19",
"char_start": 763,
"char_end": 795
}
] | [
{
"label": 4,
"arg1": "E93-1023.1",
"arg2": "E93-1023.2",
"reverse": true
},
{
"label": 3,
"arg1": "E93-1023.5",
"arg2": "E93-1023.6",
"reverse": false
},
{
"label": 4,
"arg1": "E93-1023.8",
"arg2": "E93-1023.9",
"reverse": true
},
{
"label": 1,
"arg1": "E93-1023.10",
"arg2": "E93-1023.12",
"reverse": true
},
{
"label": 1,
"arg1": "E93-1023.18",
"arg2": "E93-1023.19",
"reverse": false
}
] |
I05-3022 | Chinese Word Segmentation in FTRD Beijing |
This paper presents a word segmentation system in France Telecom R&D Beijing, which uses a unified approach to word breaking and OOV identification. The output can be customized to meet different segmentation standards through the application of an ordered list of transformation. The system participated in all the tracks of the segmentation bakeoff -- PK-open, PK-closed, AS-open, AS-closed, HK-open, HK-closed, MSR-open and MSR- closed -- and achieved the state-of-the-art performance in MSR-open, MSR-close and PK-open tracks. Analysis of the results shows that each component of the system contributed to the scores.
| [
{
"id": "I05-3022.1",
"char_start": 23,
"char_end": 47
},
{
"id": "I05-3022.2",
"char_start": 116,
"char_end": 129
},
{
"id": "I05-3022.3",
"char_start": 134,
"char_end": 152
},
{
"id": "I05-3022.4",
"char_start": 158,
"char_end": 164
},
{
"id": "I05-3022.5",
"char_start": 201,
"char_end": 223
},
{
"id": "I05-3022.6",
"char_start": 290,
"char_end": 296
},
{
"id": "I05-3022.7",
"char_start": 335,
"char_end": 355
},
{
"id": "I05-3022.8",
"char_start": 359,
"char_end": 366
},
{
"id": "I05-3022.9",
"char_start": 368,
"char_end": 377
},
{
"id": "I05-3022.10",
"char_start": 379,
"char_end": 386
},
{
"id": "I05-3022.11",
"char_start": 388,
"char_end": 397
},
{
"id": "I05-3022.12",
"char_start": 399,
"char_end": 406
},
{
"id": "I05-3022.13",
"char_start": 408,
"char_end": 417
},
{
"id": "I05-3022.14",
"char_start": 419,
"char_end": 427
},
{
"id": "I05-3022.15",
"char_start": 432,
"char_end": 443
},
{
"id": "I05-3022.16",
"char_start": 464,
"char_end": 492
},
{
"id": "I05-3022.17",
"char_start": 496,
"char_end": 504
},
{
"id": "I05-3022.18",
"char_start": 506,
"char_end": 515
},
{
"id": "I05-3022.19",
"char_start": 520,
"char_end": 527
},
{
"id": "I05-3022.20",
"char_start": 619,
"char_end": 625
}
] | [
{
"label": 3,
"arg1": "I05-3022.4",
"arg2": "I05-3022.5",
"reverse": true
},
{
"label": 2,
"arg1": "I05-3022.6",
"arg2": "I05-3022.16",
"reverse": false
}
] |
E93-1043 | Coping With Derivation In A Morphological Component |
In this paper a morphological component with a limited capability to automatically interpret (and generate) derived words is presented. The system combines an extended two-level morphology [Trost, 1991a; Trost, 1991b] with a feature-based word grammar building on a hierarchical lexicon. Polymorphemic stems not explicitly stored in the lexicon are given a compositional interpretation. That way the system allows to minimize redundancy in the lexicon because derived words that are transparent need not to be stored explicitly. Also, words formed ad-hoc can be recognized correctly. The system is implemented in CommonLisp and has been tested on examples from German derivation.
| [
{
"id": "E93-1043.1",
"char_start": 17,
"char_end": 40
},
{
"id": "E93-1043.2",
"char_start": 109,
"char_end": 122
},
{
"id": "E93-1043.3",
"char_start": 169,
"char_end": 189
},
{
"id": "E93-1043.4",
"char_start": 226,
"char_end": 252
},
{
"id": "E93-1043.5",
"char_start": 267,
"char_end": 287
},
{
"id": "E93-1043.6",
"char_start": 289,
"char_end": 308
},
{
"id": "E93-1043.7",
"char_start": 338,
"char_end": 345
},
{
"id": "E93-1043.8",
"char_start": 358,
"char_end": 386
},
{
"id": "E93-1043.9",
"char_start": 445,
"char_end": 452
},
{
"id": "E93-1043.10",
"char_start": 461,
"char_end": 474
},
{
"id": "E93-1043.11",
"char_start": 536,
"char_end": 555
},
{
"id": "E93-1043.12",
"char_start": 662,
"char_end": 679
}
] | [
{
"label": 1,
"arg1": "E93-1043.1",
"arg2": "E93-1043.2",
"reverse": false
},
{
"label": 1,
"arg1": "E93-1043.4",
"arg2": "E93-1043.5",
"reverse": true
},
{
"label": 3,
"arg1": "E93-1043.6",
"arg2": "E93-1043.8",
"reverse": true
}
] |
E99-1014 | Full Text Parsing Using Cascades Of Rules: An Information Extraction Perspective |
This paper proposes an approach to full parsing suitable for Information Extraction from texts. Sequences of cascades of rules deterministically analyze the text, building unambiguous structures. Initially basic chunks are analyzed; then argumental relations are recognized; finally modifier attachment is performed and the global parse tree is built. The approach was proven to work for three languages and different domains. It was implemented in the IE module of FACILE, a EU project for multilingual text classification and IE.
| [
{
"id": "E99-1014.1",
"char_start": 36,
"char_end": 48
},
{
"id": "E99-1014.2",
"char_start": 62,
"char_end": 84
},
{
"id": "E99-1014.3",
"char_start": 90,
"char_end": 95
},
{
"id": "E99-1014.4",
"char_start": 122,
"char_end": 127
},
{
"id": "E99-1014.5",
"char_start": 158,
"char_end": 162
},
{
"id": "E99-1014.6",
"char_start": 173,
"char_end": 195
},
{
"id": "E99-1014.7",
"char_start": 213,
"char_end": 219
},
{
"id": "E99-1014.8",
"char_start": 239,
"char_end": 259
},
{
"id": "E99-1014.9",
"char_start": 284,
"char_end": 303
},
{
"id": "E99-1014.10",
"char_start": 325,
"char_end": 342
},
{
"id": "E99-1014.11",
"char_start": 395,
"char_end": 404
},
{
"id": "E99-1014.12",
"char_start": 419,
"char_end": 426
},
{
"id": "E99-1014.13",
"char_start": 454,
"char_end": 463
},
{
"id": "E99-1014.14",
"char_start": 467,
"char_end": 531
}
] | [
{
"label": 1,
"arg1": "E99-1014.1",
"arg2": "E99-1014.2",
"reverse": false
},
{
"label": 3,
"arg1": "E99-1014.5",
"arg2": "E99-1014.6",
"reverse": true
},
{
"label": 4,
"arg1": "E99-1014.13",
"arg2": "E99-1014.14",
"reverse": false
}
] |
H91-1010 | New Results With The Lincoln Tied-Mixture HMM CSR System |
The following describes recent work on the Lincoln CSR system. Some new variations in semiphone modeling have been tested. A very simple improved duration model has reduced the error rate by about 10% in both triphone and semiphone systems. A new training strategy has been tested which, by itself, did not provide useful improvements but suggests that improvements can be obtained by a related rapid adaptation technique. Finally, the recognizer has been modified to use bigram back-off language models. The system was then transferred from the RM task to the ATIS CSR task and a limited number of development tests performed. Evaluation test results are presented for both the RM and ATIS CSR tasks.
| [
{
"id": "H91-1010.1",
"char_start": 44,
"char_end": 62
},
{
"id": "H91-1010.2",
"char_start": 87,
"char_end": 105
},
{
"id": "H91-1010.3",
"char_start": 147,
"char_end": 161
},
{
"id": "H91-1010.4",
"char_start": 178,
"char_end": 188
},
{
"id": "H91-1010.5",
"char_start": 210,
"char_end": 240
},
{
"id": "H91-1010.6",
"char_start": 248,
"char_end": 265
},
{
"id": "H91-1010.7",
"char_start": 437,
"char_end": 447
},
{
"id": "H91-1010.8",
"char_start": 473,
"char_end": 504
},
{
"id": "H91-1010.9",
"char_start": 547,
"char_end": 554
},
{
"id": "H91-1010.10",
"char_start": 562,
"char_end": 575
},
{
"id": "H91-1010.11",
"char_start": 680,
"char_end": 701
}
] | [
{
"label": 2,
"arg1": "H91-1010.3",
"arg2": "H91-1010.4",
"reverse": false
},
{
"label": 1,
"arg1": "H91-1010.7",
"arg2": "H91-1010.8",
"reverse": true
}
] |
A97-1020 | Reading more into Foreign Languages |
GLOSSER is designed to support reading and learning to read in a foreign language. There are four language pairs currently supported by GLOSSER: English-Bulgarian, English-Estonian, English-Hungarian and French-Dutch. The program is operational on UNIX and Windows '95 platforms, and has undergone a pilot user-study. A demonstration (in UNIX) for Applied Natural Language Processing emphasizes components put to novel technical uses in intelligent computer-assisted morphological analysis (ICALL), including disambiguated morphological analysis and lemmatized indexing for an aligned bilingual corpus of word examples.
| [
{
"id": "A97-1020.1",
"char_start": 1,
"char_end": 8
},
{
"id": "A97-1020.2",
"char_start": 74,
"char_end": 82
},
{
"id": "A97-1020.3",
"char_start": 99,
"char_end": 113
},
{
"id": "A97-1020.4",
"char_start": 137,
"char_end": 144
},
{
"id": "A97-1020.5",
"char_start": 146,
"char_end": 163
},
{
"id": "A97-1020.6",
"char_start": 165,
"char_end": 181
},
{
"id": "A97-1020.7",
"char_start": 183,
"char_end": 200
},
{
"id": "A97-1020.8",
"char_start": 205,
"char_end": 217
},
{
"id": "A97-1020.9",
"char_start": 349,
"char_end": 384
},
{
"id": "A97-1020.10",
"char_start": 438,
"char_end": 498
},
{
"id": "A97-1020.11",
"char_start": 510,
"char_end": 546
},
{
"id": "A97-1020.12",
"char_start": 551,
"char_end": 570
},
{
"id": "A97-1020.13",
"char_start": 578,
"char_end": 602
},
{
"id": "A97-1020.14",
"char_start": 606,
"char_end": 619
}
] | [
{
"label": 3,
"arg1": "A97-1020.3",
"arg2": "A97-1020.4",
"reverse": false
},
{
"label": 4,
"arg1": "A97-1020.13",
"arg2": "A97-1020.14",
"reverse": true
}
] |
A97-1042 | Identifying Topics By Position |
This paper addresses the problem of identifying likely topics of texts by their position in the text. It describes the automated training and evaluation of an Optimal Position Policy, a method of locating the likely positions of topic-bearing sentences based on genre-specific regularities of discourse structure. This method can be used in applications such as information retrieval, routing, and text summarization.
| [
{
"id": "A97-1042.1",
"char_start": 56,
"char_end": 62
},
{
"id": "A97-1042.2",
"char_start": 66,
"char_end": 71
},
{
"id": "A97-1042.3",
"char_start": 97,
"char_end": 101
},
{
"id": "A97-1042.4",
"char_start": 130,
"char_end": 138
},
{
"id": "A97-1042.5",
"char_start": 160,
"char_end": 183
},
{
"id": "A97-1042.6",
"char_start": 230,
"char_end": 253
},
{
"id": "A97-1042.7",
"char_start": 263,
"char_end": 290
},
{
"id": "A97-1042.8",
"char_start": 294,
"char_end": 313
},
{
"id": "A97-1042.9",
"char_start": 363,
"char_end": 384
},
{
"id": "A97-1042.10",
"char_start": 386,
"char_end": 393
},
{
"id": "A97-1042.11",
"char_start": 399,
"char_end": 417
}
] | [
{
"label": 3,
"arg1": "A97-1042.1",
"arg2": "A97-1042.2",
"reverse": false
},
{
"label": 3,
"arg1": "A97-1042.7",
"arg2": "A97-1042.8",
"reverse": false
}
] |
H05-1064 | Hidden-Variable Models For Discriminative Reranking |
We describe a new method for the representation of NLP structures within reranking approaches. We make use of a conditional log-linear model, with hidden variables representing the assignment of lexical items to word clusters or word senses. The model learns to automatically make these assignments based on a discriminative training criterion. Training and decoding with the model requires summing over an exponential number of hidden-variable assignments: the required summations can be computed efficiently and exactly using dynamic programming. As a case study, we apply the model to parse reranking. The model gives an F-measure improvement of ~1.25% beyond the base parser, and an ~0.25% improvement beyond Collins (2000) reranker. Although our experiments are focused on parsing, the techniques described generalize naturally to NLP structures other than parse trees.
| [
{
"id": "H05-1064.1",
"char_start": 52,
"char_end": 66
},
{
"id": "H05-1064.2",
"char_start": 74,
"char_end": 94
},
{
"id": "H05-1064.3",
"char_start": 113,
"char_end": 141
},
{
"id": "H05-1064.4",
"char_start": 148,
"char_end": 164
},
{
"id": "H05-1064.5",
"char_start": 182,
"char_end": 192
},
{
"id": "H05-1064.6",
"char_start": 196,
"char_end": 209
},
{
"id": "H05-1064.7",
"char_start": 213,
"char_end": 226
},
{
"id": "H05-1064.8",
"char_start": 230,
"char_end": 241
},
{
"id": "H05-1064.9",
"char_start": 288,
"char_end": 299
},
{
"id": "H05-1064.10",
"char_start": 311,
"char_end": 344
},
{
"id": "H05-1064.11",
"char_start": 346,
"char_end": 354
},
{
"id": "H05-1064.12",
"char_start": 359,
"char_end": 367
},
{
"id": "H05-1064.13",
"char_start": 430,
"char_end": 457
},
{
"id": "H05-1064.14",
"char_start": 529,
"char_end": 548
},
{
"id": "H05-1064.15",
"char_start": 589,
"char_end": 604
},
{
"id": "H05-1064.16",
"char_start": 625,
"char_end": 646
},
{
"id": "H05-1064.17",
"char_start": 668,
"char_end": 679
},
{
"id": "H05-1064.18",
"char_start": 714,
"char_end": 737
},
{
"id": "H05-1064.19",
"char_start": 779,
"char_end": 786
},
{
"id": "H05-1064.20",
"char_start": 837,
"char_end": 851
},
{
"id": "H05-1064.21",
"char_start": 863,
"char_end": 874
}
] | [
{
"label": 3,
"arg1": "H05-1064.3",
"arg2": "H05-1064.4",
"reverse": true
},
{
"label": 3,
"arg1": "H05-1064.6",
"arg2": "H05-1064.7",
"reverse": true
},
{
"label": 1,
"arg1": "H05-1064.9",
"arg2": "H05-1064.10",
"reverse": true
},
{
"label": 1,
"arg1": "H05-1064.12",
"arg2": "H05-1064.14",
"reverse": true
},
{
"label": 6,
"arg1": "H05-1064.17",
"arg2": "H05-1064.18",
"reverse": false
}
] |
I05-4008 | Taiwan Child Language Corpus: Data Collection and Annotation |
Taiwan Child Language Corpus contains scripts transcribed from about 330 hours of recordings of fourteen young children from Southern Min Chinese speaking families in Taiwan. The format of the corpus adopts the Child Language Data Exchange System (CHILDES). The size of the corpus is about 1.6 million words. In this paper, we describe data collection, transcription, word segmentation, and part-of-speech annotation of this corpus. Applications of the corpus are also discussed.
| [
{
"id": "I05-4008.1",
"char_start": 1,
"char_end": 29
},
{
"id": "I05-4008.2",
"char_start": 39,
"char_end": 46
},
{
"id": "I05-4008.3",
"char_start": 83,
"char_end": 93
},
{
"id": "I05-4008.4",
"char_start": 126,
"char_end": 146
},
{
"id": "I05-4008.5",
"char_start": 194,
"char_end": 200
},
{
"id": "I05-4008.6",
"char_start": 212,
"char_end": 257
},
{
"id": "I05-4008.7",
"char_start": 275,
"char_end": 281
},
{
"id": "I05-4008.8",
"char_start": 303,
"char_end": 308
},
{
"id": "I05-4008.9",
"char_start": 337,
"char_end": 352
},
{
"id": "I05-4008.10",
"char_start": 354,
"char_end": 367
},
{
"id": "I05-4008.11",
"char_start": 369,
"char_end": 386
},
{
"id": "I05-4008.12",
"char_start": 392,
"char_end": 417
},
{
"id": "I05-4008.13",
"char_start": 426,
"char_end": 432
},
{
"id": "I05-4008.14",
"char_start": 454,
"char_end": 460
}
] | [
{
"label": 4,
"arg1": "I05-4008.1",
"arg2": "I05-4008.2",
"reverse": true
},
{
"label": 3,
"arg1": "I05-4008.3",
"arg2": "I05-4008.4",
"reverse": true
},
{
"label": 3,
"arg1": "I05-4008.5",
"arg2": "I05-4008.6",
"reverse": true
},
{
"label": 4,
"arg1": "I05-4008.7",
"arg2": "I05-4008.8",
"reverse": true
},
{
"label": 1,
"arg1": "I05-4008.12",
"arg2": "I05-4008.13",
"reverse": false
}
] |
P81-1032 | Dynamic Strategy Selection in Flexible Parsing |
Robust natural language interpretation requires strong semantic domain models, fail-soft recovery heuristics, and very flexible control structures. Although single-strategy parsers have met with a measure of success, a multi-strategy approach is shown to provide a much higher degree of flexibility, redundancy, and ability to bring task-specific domain knowledge (in addition to general linguistic knowledge) to bear on both grammatical and ungrammatical input. A parsing algorithm is presented that integrates several different parsing strategies, with case-frame instantiation dominating. Each of these parsing strategies exploits different types of knowledge; and their combination provides a strong framework in which to process conjunctions, fragmentary input, and ungrammatical structures, as well as less exotic, grammatically correct input. Several specific heuristics for handling ungrammatical input are presented within this multi-strategy framework.
| [
{
"id": "P81-1032.1",
"char_start": 8,
"char_end": 39
},
{
"id": "P81-1032.2",
"char_start": 56,
"char_end": 78
},
{
"id": "P81-1032.3",
"char_start": 80,
"char_end": 109
},
{
"id": "P81-1032.4",
"char_start": 129,
"char_end": 147
},
{
"id": "P81-1032.5",
"char_start": 158,
"char_end": 181
},
{
"id": "P81-1032.6",
"char_start": 220,
"char_end": 243
},
{
"id": "P81-1032.7",
"char_start": 334,
"char_end": 364
},
{
"id": "P81-1032.8",
"char_start": 381,
"char_end": 409
},
{
"id": "P81-1032.9",
"char_start": 427,
"char_end": 462
},
{
"id": "P81-1032.10",
"char_start": 466,
"char_end": 483
},
{
"id": "P81-1032.11",
"char_start": 531,
"char_end": 549
},
{
"id": "P81-1032.12",
"char_start": 556,
"char_end": 580
},
{
"id": "P81-1032.13",
"char_start": 607,
"char_end": 625
},
{
"id": "P81-1032.14",
"char_start": 645,
"char_end": 663
},
{
"id": "P81-1032.15",
"char_start": 735,
"char_end": 747
},
{
"id": "P81-1032.16",
"char_start": 749,
"char_end": 766
},
{
"id": "P81-1032.17",
"char_start": 772,
"char_end": 796
},
{
"id": "P81-1032.18",
"char_start": 822,
"char_end": 849
},
{
"id": "P81-1032.19",
"char_start": 859,
"char_end": 878
},
{
"id": "P81-1032.20",
"char_start": 892,
"char_end": 911
},
{
"id": "P81-1032.21",
"char_start": 938,
"char_end": 962
}
] | [
{
"label": 1,
"arg1": "P81-1032.1",
"arg2": "P81-1032.2",
"reverse": true
},
{
"label": 6,
"arg1": "P81-1032.5",
"arg2": "P81-1032.6",
"reverse": false
},
{
"label": 6,
"arg1": "P81-1032.7",
"arg2": "P81-1032.8",
"reverse": false
},
{
"label": 1,
"arg1": "P81-1032.10",
"arg2": "P81-1032.11",
"reverse": true
},
{
"label": 1,
"arg1": "P81-1032.13",
"arg2": "P81-1032.14",
"reverse": true
},
{
"label": 6,
"arg1": "P81-1032.17",
"arg2": "P81-1032.18",
"reverse": false
},
{
"label": 4,
"arg1": "P81-1032.19",
"arg2": "P81-1032.21",
"reverse": false
}
] |
P85-1015 | Parsing with Discontinuous Constituents |
By generalizing the notion of location of a constituent to allow discontinuous locations, one can describe the discontinuous constituents of non-configurational languages. These discontinuous constituents can be described by a variant of definite clause grammars, and these grammars can be used in conjunction with a proof procedure to create a parser for non-configurational languages.
| [
{
"id": "P85-1015.1",
"char_start": 31,
"char_end": 56
},
{
"id": "P85-1015.2",
"char_start": 66,
"char_end": 89
},
{
"id": "P85-1015.3",
"char_start": 112,
"char_end": 138
},
{
"id": "P85-1015.4",
"char_start": 142,
"char_end": 171
},
{
"id": "P85-1015.5",
"char_start": 179,
"char_end": 205
},
{
"id": "P85-1015.6",
"char_start": 239,
"char_end": 263
},
{
"id": "P85-1015.7",
"char_start": 275,
"char_end": 283
},
{
"id": "P85-1015.8",
"char_start": 318,
"char_end": 333
},
{
"id": "P85-1015.9",
"char_start": 346,
"char_end": 386
}
] | [
{
"label": 4,
"arg1": "P85-1015.3",
"arg2": "P85-1015.4",
"reverse": false
},
{
"label": 3,
"arg1": "P85-1015.5",
"arg2": "P85-1015.6",
"reverse": true
},
{
"label": 1,
"arg1": "P85-1015.7",
"arg2": "P85-1015.9",
"reverse": false
}
] |
P91-1016 | The Acquisition and Application of Context Sensitive Grammar for English |
A system is described for acquiring a context-sensitive, phrase structure grammar which is applied by a best-path, bottom-up, deterministic parser. The grammar was based on English news stories and a high degree of success in parsing is reported. Overall, this research concludes that CSG is a computationally and conceptually tractable approach to the construction of phrase structure grammar for news story text.
| [
{
"id": "P91-1016.1",
"char_start": 40,
"char_end": 83
},
{
"id": "P91-1016.2",
"char_start": 106,
"char_end": 148
},
{
"id": "P91-1016.3",
"char_start": 154,
"char_end": 161
},
{
"id": "P91-1016.4",
"char_start": 175,
"char_end": 195
},
{
"id": "P91-1016.5",
"char_start": 228,
"char_end": 235
},
{
"id": "P91-1016.6",
"char_start": 287,
"char_end": 290
},
{
"id": "P91-1016.7",
"char_start": 371,
"char_end": 395
},
{
"id": "P91-1016.8",
"char_start": 400,
"char_end": 415
}
] | [
{
"label": 1,
"arg1": "P91-1016.1",
"arg2": "P91-1016.2",
"reverse": false
},
{
"label": 1,
"arg1": "P91-1016.6",
"arg2": "P91-1016.7",
"reverse": false
}
] |
P95-1027 | A Quantitative Evaluation of Linguistic Tests for the Automatic Prediction of Semantic Markedness |
We present a corpus-based study of methods that have been proposed in the linguistics literature for selecting the semantically unmarked term out of a pair of antonymous adjectives. Solutions to this problem are applicable to the more general task of selecting the positive term from the pair. Using automatically collected data, the accuracy and applicability of each method is quantified, and a statistical analysis of the significance of the results is performed. We show that some simple methods are indeed good indicators for the answer to the problem while other proposed methods fail to perform better than would be attributable to chance. In addition, one of the simplest methods, text frequency, dominates all others. We also apply two generic statistical learning methods for combining the indications of the individual methods, and compare their performance to the simple methods. The most sophisticated complex learning method offers a small, but statistically significant, improvement over the original tests.
| [
{
"id": "P95-1027.1",
"char_start": 14,
"char_end": 32
},
{
"id": "P95-1027.2",
"char_start": 75,
"char_end": 97
},
{
"id": "P95-1027.3",
"char_start": 116,
"char_end": 142
},
{
"id": "P95-1027.4",
"char_start": 160,
"char_end": 181
},
{
"id": "P95-1027.5",
"char_start": 275,
"char_end": 279
},
{
"id": "P95-1027.6",
"char_start": 301,
"char_end": 329
},
{
"id": "P95-1027.7",
"char_start": 335,
"char_end": 343
},
{
"id": "P95-1027.8",
"char_start": 398,
"char_end": 418
},
{
"id": "P95-1027.9",
"char_start": 690,
"char_end": 704
},
{
"id": "P95-1027.10",
"char_start": 746,
"char_end": 782
},
{
"id": "P95-1027.11",
"char_start": 916,
"char_end": 939
}
] | [
{
"label": 4,
"arg1": "P95-1027.3",
"arg2": "P95-1027.4",
"reverse": false
}
] |
P97-1015 | Probing the lexicon in evaluating commercial MT systems |
In the past the evaluation of machine translation systems has focused on single system evaluations because there were only few systems available. But now there are several commercial systems for the same language pair. This requires new methods of comparative evaluation. In the paper we propose a black-box method for comparing the lexical coverage of MT systems. The method is based on lists of words from different frequency classes. It is shown how these word lists can be compiled and used for testing. We also present the results of using our method on 6 MT systems that translate between English and German.
| [
{
"id": "P97-1015.1",
"char_start": 32,
"char_end": 59
},
{
"id": "P97-1015.2",
"char_start": 206,
"char_end": 219
},
{
"id": "P97-1015.3",
"char_start": 300,
"char_end": 316
},
{
"id": "P97-1015.4",
"char_start": 335,
"char_end": 351
},
{
"id": "P97-1015.5",
"char_start": 355,
"char_end": 365
},
{
"id": "P97-1015.6",
"char_start": 399,
"char_end": 404
},
{
"id": "P97-1015.7",
"char_start": 420,
"char_end": 437
},
{
"id": "P97-1015.8",
"char_start": 461,
"char_end": 471
},
{
"id": "P97-1015.9",
"char_start": 563,
"char_end": 573
},
{
"id": "P97-1015.10",
"char_start": 597,
"char_end": 604
},
{
"id": "P97-1015.11",
"char_start": 609,
"char_end": 615
}
] | [
{
"label": 3,
"arg1": "P97-1015.4",
"arg2": "P97-1015.5",
"reverse": false
},
{
"label": 3,
"arg1": "P97-1015.6",
"arg2": "P97-1015.7",
"reverse": true
}
] |
P97-1052 | On Interpreting F-Structures as UDRSs |
We describe a method for interpreting abstract flat syntactic representations, LFG f-structures, as underspecified semantic representations, here Underspecified Discourse Representation Structures (UDRSs). The method establishes a one-to-one correspondence between subsets of the LFG and UDRS formalisms. It provides a model theoretic interpretation and an inferential component which operates directly on underspecified representations for f-structures through the translation images of f-structures as UDRSs.
| [
{
"id": "P97-1052.1",
"char_start": 39,
"char_end": 96
},
{
"id": "P97-1052.2",
"char_start": 101,
"char_end": 205
},
{
"id": "P97-1052.3",
"char_start": 232,
"char_end": 257
},
{
"id": "P97-1052.4",
"char_start": 281,
"char_end": 284
},
{
"id": "P97-1052.5",
"char_start": 289,
"char_end": 293
},
{
"id": "P97-1052.6",
"char_start": 320,
"char_end": 350
},
{
"id": "P97-1052.7",
"char_start": 358,
"char_end": 379
},
{
"id": "P97-1052.8",
"char_start": 407,
"char_end": 437
},
{
"id": "P97-1052.9",
"char_start": 442,
"char_end": 454
},
{
"id": "P97-1052.10",
"char_start": 467,
"char_end": 485
},
{
"id": "P97-1052.11",
"char_start": 489,
"char_end": 501
},
{
"id": "P97-1052.12",
"char_start": 505,
"char_end": 510
}
] | [
{
"label": 3,
"arg1": "P97-1052.1",
"arg2": "P97-1052.2",
"reverse": true
},
{
"label": 3,
"arg1": "P97-1052.6",
"arg2": "P97-1052.9",
"reverse": false
},
{
"label": 3,
"arg1": "P97-1052.10",
"arg2": "P97-1052.11",
"reverse": false
}
] |
P99-1025 | Construct Algebra: Analytical Dialog Management |
In this paper we describe a systematic approach for creating a dialog management system based on a Construct Algebra, a collection of relations and operations on a task representation. These relations and operations are analytical components for building higher level abstractions called dialog motivators. The dialog manager, consisting of a collection of dialog motivators, is entirely built using the Construct Algebra.
| [
{
"id": "P99-1025.1",
"char_start": 64,
"char_end": 88
},
{
"id": "P99-1025.2",
"char_start": 100,
"char_end": 117
},
{
"id": "P99-1025.3",
"char_start": 121,
"char_end": 159
},
{
"id": "P99-1025.4",
"char_start": 165,
"char_end": 184
},
{
"id": "P99-1025.5",
"char_start": 192,
"char_end": 216
},
{
"id": "P99-1025.6",
"char_start": 221,
"char_end": 242
},
{
"id": "P99-1025.7",
"char_start": 289,
"char_end": 306
},
{
"id": "P99-1025.8",
"char_start": 312,
"char_end": 326
},
{
"id": "P99-1025.9",
"char_start": 344,
"char_end": 375
},
{
"id": "P99-1025.10",
"char_start": 405,
"char_end": 422
}
] | [
{
"label": 1,
"arg1": "P99-1025.1",
"arg2": "P99-1025.2",
"reverse": true
},
{
"label": 3,
"arg1": "P99-1025.3",
"arg2": "P99-1025.4",
"reverse": false
},
{
"label": 4,
"arg1": "P99-1025.6",
"arg2": "P99-1025.7",
"reverse": false
},
{
"label": 4,
"arg1": "P99-1025.8",
"arg2": "P99-1025.9",
"reverse": true
}
] |
P99-1068 | Mining the Web for Bilingual Text |
STRAND (Resnik, 1998) is a language-independent system for automatic discovery of text in parallel translation on the World Wide Web. This paper extends the preliminary STRAND results by adding automatic language identification, scaling up by orders of magnitude, and formally evaluating performance. The most recent end-product is an automatically acquired parallel corpus comprising 2491 English-French document pairs, approximately 1.5 million words per language.
| [
{
"id": "P99-1068.1",
"char_start": 1,
"char_end": 7
},
{
"id": "P99-1068.2",
"char_start": 28,
"char_end": 55
},
{
"id": "P99-1068.3",
"char_start": 60,
"char_end": 87
},
{
"id": "P99-1068.4",
"char_start": 91,
"char_end": 111
},
{
"id": "P99-1068.5",
"char_start": 170,
"char_end": 176
},
{
"id": "P99-1068.6",
"char_start": 195,
"char_end": 228
},
{
"id": "P99-1068.7",
"char_start": 336,
"char_end": 374
},
{
"id": "P99-1068.8",
"char_start": 391,
"char_end": 420
},
{
"id": "P99-1068.9",
"char_start": 448,
"char_end": 453
},
{
"id": "P99-1068.10",
"char_start": 458,
"char_end": 466
}
] | [
{
"label": 1,
"arg1": "P99-1068.2",
"arg2": "P99-1068.3",
"reverse": false
},
{
"label": 4,
"arg1": "P99-1068.5",
"arg2": "P99-1068.6",
"reverse": true
},
{
"label": 4,
"arg1": "P99-1068.7",
"arg2": "P99-1068.8",
"reverse": true
}
] |
L08-1260 | Verb-Noun Collocation SyntLex Dictionary: Corpus-Based Approach |
The project presented here is a part of a long term research program aiming at a full lexicon grammar for Polish (SyntLex). The main of this project is computer-assisted acquisition and morpho-syntactic description of verb-noun collocations in Polish. We present methodology and resources obtained in three main project phases which are: dictionary-based acquisition of collocation lexicon, feasibility study for corpus-based lexicon enlargement phase, corpus-based lexicon enlargement and collocation description. In this paper we focus on the results of the third phase. The presented here corpus-based approach permitted us to triple the size the verb-noun collocation dictionary for Polish. In the paper we describe the SyntLex Dictionary of Collocations and announce some future research intended to be a separate project continuation.
| [
{
"id": "L08-1260.1",
"char_start": 87,
"char_end": 123
},
{
"id": "L08-1260.2",
"char_start": 153,
"char_end": 241
},
{
"id": "L08-1260.3",
"char_start": 245,
"char_end": 251
},
{
"id": "L08-1260.4",
"char_start": 339,
"char_end": 367
},
{
"id": "L08-1260.5",
"char_start": 371,
"char_end": 390
},
{
"id": "L08-1260.6",
"char_start": 414,
"char_end": 446
},
{
"id": "L08-1260.7",
"char_start": 454,
"char_end": 486
},
{
"id": "L08-1260.8",
"char_start": 491,
"char_end": 514
},
{
"id": "L08-1260.9",
"char_start": 593,
"char_end": 614
},
{
"id": "L08-1260.10",
"char_start": 651,
"char_end": 694
},
{
"id": "L08-1260.11",
"char_start": 725,
"char_end": 759
}
] | [
{
"label": 1,
"arg1": "L08-1260.2",
"arg2": "L08-1260.3",
"reverse": false
},
{
"label": 1,
"arg1": "L08-1260.9",
"arg2": "L08-1260.10",
"reverse": false
}
] |
L08-1540 | Czech MWE Database |
In this paper we deal with a recently developed large Czech MWE database containing at the moment 160 000 MWEs (treated as lexical units). It was compiled from various resources such as encyclopedias and dictionaries, public databases of proper names and toponyms, collocations obtained from Czech WordNet, lists of botanical and zoological terms and others. We describe the structure of the database and give basic types of MWEs according to domains they belong to. We compare the built MWEs database with the corpus data from Czech National Corpus (approx. 100 mil. tokens) and present results of this comparison in the paper. These MWEs have not been obtained from the corpus since their frequencies in it are rather low. To obtain a more complete list of MWEs we propose and use a technique exploiting the Word Sketch Engine, which allows us to work with statistical parameters such as frequency of MWEs and their components as well as with the salience for the whole MWEs. We also discuss exploitation of the database for working out a more adequate tagging and lemmatization. The final goal is to be able to recognize MWEs in corpus text and lemmatize them as complete lexical units, i. e. to make tagging and lemmatization more adequate.
| [
{
"id": "L08-1540.1",
"char_start": 49,
"char_end": 73
},
{
"id": "L08-1540.2",
"char_start": 107,
"char_end": 111
},
{
"id": "L08-1540.3",
"char_start": 124,
"char_end": 137
},
{
"id": "L08-1540.4",
"char_start": 187,
"char_end": 200
},
{
"id": "L08-1540.5",
"char_start": 205,
"char_end": 217
},
{
"id": "L08-1540.6",
"char_start": 226,
"char_end": 235
},
{
"id": "L08-1540.7",
"char_start": 239,
"char_end": 251
},
{
"id": "L08-1540.8",
"char_start": 256,
"char_end": 264
},
{
"id": "L08-1540.9",
"char_start": 266,
"char_end": 278
},
{
"id": "L08-1540.10",
"char_start": 293,
"char_end": 306
},
{
"id": "L08-1540.11",
"char_start": 317,
"char_end": 347
},
{
"id": "L08-1540.12",
"char_start": 393,
"char_end": 401
},
{
"id": "L08-1540.13",
"char_start": 426,
"char_end": 430
},
{
"id": "L08-1540.14",
"char_start": 489,
"char_end": 502
},
{
"id": "L08-1540.15",
"char_start": 512,
"char_end": 523
},
{
"id": "L08-1540.16",
"char_start": 529,
"char_end": 550
},
{
"id": "L08-1540.17",
"char_start": 636,
"char_end": 640
},
{
"id": "L08-1540.18",
"char_start": 673,
"char_end": 679
},
{
"id": "L08-1540.19",
"char_start": 760,
"char_end": 764
},
{
"id": "L08-1540.20",
"char_start": 811,
"char_end": 829
},
{
"id": "L08-1540.21",
"char_start": 860,
"char_end": 882
},
{
"id": "L08-1540.22",
"char_start": 904,
"char_end": 908
},
{
"id": "L08-1540.23",
"char_start": 950,
"char_end": 958
},
{
"id": "L08-1540.24",
"char_start": 973,
"char_end": 977
},
{
"id": "L08-1540.25",
"char_start": 1015,
"char_end": 1023
},
{
"id": "L08-1540.26",
"char_start": 1056,
"char_end": 1063
},
{
"id": "L08-1540.27",
"char_start": 1068,
"char_end": 1081
},
{
"id": "L08-1540.28",
"char_start": 1125,
"char_end": 1129
},
{
"id": "L08-1540.29",
"char_start": 1133,
"char_end": 1144
},
{
"id": "L08-1540.30",
"char_start": 1176,
"char_end": 1189
},
{
"id": "L08-1540.31",
"char_start": 1205,
"char_end": 1212
},
{
"id": "L08-1540.32",
"char_start": 1217,
"char_end": 1230
}
] | [
{
"label": 4,
"arg1": "L08-1540.1",
"arg2": "L08-1540.2",
"reverse": true
},
{
"label": 4,
"arg1": "L08-1540.6",
"arg2": "L08-1540.7",
"reverse": true
},
{
"label": 4,
"arg1": "L08-1540.9",
"arg2": "L08-1540.10",
"reverse": false
},
{
"label": 6,
"arg1": "L08-1540.14",
"arg2": "L08-1540.15",
"reverse": false
},
{
"label": 3,
"arg1": "L08-1540.20",
"arg2": "L08-1540.21",
"reverse": true
},
{
"label": 3,
"arg1": "L08-1540.23",
"arg2": "L08-1540.24",
"reverse": false
},
{
"label": 1,
"arg1": "L08-1540.25",
"arg2": "L08-1540.26",
"reverse": false
},
{
"label": 4,
"arg1": "L08-1540.28",
"arg2": "L08-1540.29",
"reverse": false
}
] |
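The abstract above ranks MWE candidates by their frequency and salience via the Word Sketch Engine. As a minimal sketch of what such a salience score can look like, here is logDice, the association measure commonly used in the Sketch Engine family; the corpus counts below are hypothetical.

```python
import math

def log_dice(f_xy: int, f_x: int, f_y: int) -> float:
    """logDice salience: 14 + log2(2 * f(x,y) / (f(x) + f(y))).
    Bounded above by 14; higher means a more salient collocation."""
    return 14 + math.log2(2 * f_xy / (f_x + f_y))

# Hypothetical corpus counts for a candidate MWE and its component words.
f_make, f_decision = 120_000, 8_000   # component-word frequencies
f_mwe = 3_500                         # co-occurrence count of the candidate MWE

print(f"logDice = {log_dice(f_mwe, f_make, f_decision):.2f}")
```

Unlike raw frequency, logDice does not grow with corpus size, so the resulting salience scores stay comparable across corpora of different sizes.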
L08-1110 | Using Log-linear Models for Tuning Machine Translation Output
|
We describe a set of experiments to explore statistical techniques for ranking and selecting the best translations in a graph of translation hypotheses. In a previous paper (Carl, 2007) we described how the hypotheses graph is generated through shallow mapping and permutation rules. We gave examples of its nodes consisting of vectors representing morpho-syntactic properties of words and phrases. This paper describes a number of methods for elaborating statistical feature functions from some of the vector components. The feature functions are trained off-line on different types of text, and their log-linear combination is then used to retrieve the best M translation paths in the graph. We compare two language modelling toolkits, the CMU and the SRI toolkit, and arrive at three results: 1) word-lemma-based feature function models produce better results than token-based models, 2) adding a PoS-tag feature function to the word-lemma model improves the output, and 3) weights for lexical translations are suitable if the training material is similar to the texts to be translated.
| [
{
"id": "L08-1110.1",
"char_start": 45,
"char_end": 67
},
{
"id": "L08-1110.2",
"char_start": 103,
"char_end": 115
},
{
"id": "L08-1110.3",
"char_start": 121,
"char_end": 126
},
{
"id": "L08-1110.4",
"char_start": 130,
"char_end": 152
},
{
"id": "L08-1110.5",
"char_start": 213,
"char_end": 229
},
{
"id": "L08-1110.6",
"char_start": 251,
"char_end": 266
},
{
"id": "L08-1110.7",
"char_start": 271,
"char_end": 288
},
{
"id": "L08-1110.8",
"char_start": 320,
"char_end": 325
},
{
"id": "L08-1110.9",
"char_start": 340,
"char_end": 388
},
{
"id": "L08-1110.10",
"char_start": 392,
"char_end": 397
},
{
"id": "L08-1110.11",
"char_start": 402,
"char_end": 409
},
{
"id": "L08-1110.12",
"char_start": 468,
"char_end": 497
},
{
"id": "L08-1110.13",
"char_start": 515,
"char_end": 532
},
{
"id": "L08-1110.14",
"char_start": 538,
"char_end": 555
},
{
"id": "L08-1110.15",
"char_start": 599,
"char_end": 603
},
{
"id": "L08-1110.16",
"char_start": 614,
"char_end": 636
},
{
"id": "L08-1110.17",
"char_start": 673,
"char_end": 690
},
{
"id": "L08-1110.18",
"char_start": 698,
"char_end": 703
},
{
"id": "L08-1110.19",
"char_start": 720,
"char_end": 747
},
{
"id": "L08-1110.20",
"char_start": 753,
"char_end": 756
},
{
"id": "L08-1110.21",
"char_start": 765,
"char_end": 776
},
{
"id": "L08-1110.22",
"char_start": 809,
"char_end": 849
},
{
"id": "L08-1110.23",
"char_start": 878,
"char_end": 896
},
{
"id": "L08-1110.24",
"char_start": 910,
"char_end": 934
},
{
"id": "L08-1110.25",
"char_start": 942,
"char_end": 958
},
{
"id": "L08-1110.26",
"char_start": 986,
"char_end": 993
},
{
"id": "L08-1110.27",
"char_start": 998,
"char_end": 1018
},
{
"id": "L08-1110.28",
"char_start": 1039,
"char_end": 1056
},
{
"id": "L08-1110.29",
"char_start": 1075,
"char_end": 1080
}
] | [
{
"label": 1,
"arg1": "L08-1110.1",
"arg2": "L08-1110.2",
"reverse": false
},
{
"label": 4,
"arg1": "L08-1110.3",
"arg2": "L08-1110.4",
"reverse": true
},
{
"label": 1,
"arg1": "L08-1110.5",
"arg2": "L08-1110.6",
"reverse": true
},
{
"label": 4,
"arg1": "L08-1110.8",
"arg2": "L08-1110.9",
"reverse": true
},
{
"label": 1,
"arg1": "L08-1110.16",
"arg2": "L08-1110.17",
"reverse": false
},
{
"label": 6,
"arg1": "L08-1110.20",
"arg2": "L08-1110.21",
"reverse": false
},
{
"label": 6,
"arg1": "L08-1110.22",
"arg2": "L08-1110.23",
"reverse": false
},
{
"label": 4,
"arg1": "L08-1110.24",
"arg2": "L08-1110.25",
"reverse": false
},
{
"label": 3,
"arg1": "L08-1110.26",
"arg2": "L08-1110.27",
"reverse": false
},
{
"label": 6,
"arg1": "L08-1110.28",
"arg2": "L08-1110.29",
"reverse": false
}
] |
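The log-linear combination described above can be sketched in a few lines. The feature names, weights, and candidate paths below are hypothetical; in the paper the feature functions are trained off-line and the combination is used to retrieve the best M paths from the hypothesis graph.

```python
import math
from typing import Dict, List, Tuple

# Hypothetical feature weights (in practice these would be tuned, e.g. by grid search).
WEIGHTS = {"lm": 0.6, "word_lemma": 0.3, "pos_tag": 0.1}

def log_linear_score(features: Dict[str, float]) -> float:
    """Log-linear model: score(path) = sum_i lambda_i * log h_i(path)."""
    return sum(WEIGHTS[name] * math.log(value) for name, value in features.items())

def best_m_paths(paths: List[Tuple[str, Dict[str, float]]], m: int):
    """Rank translation paths by log-linear score and keep the top M."""
    return sorted(paths, key=lambda p: log_linear_score(p[1]), reverse=True)[:m]

# Toy hypothesis graph flattened into candidate paths with feature values in (0, 1].
candidates = [
    ("translation A", {"lm": 0.020, "word_lemma": 0.30, "pos_tag": 0.50}),
    ("translation B", {"lm": 0.015, "word_lemma": 0.45, "pos_tag": 0.40}),
]
print(best_m_paths(candidates, m=1))
```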
L08-1154 | Chinese Term Extraction Based on Delimiters
|
Existing techniques extract term candidates by looking for internal and contextual information associated with domain-specific terms. The algorithms always face the dilemma that fewer features are not enough to distinguish terms from non-terms, whereas more features lead to more conflicts among selected features. This paper presents a novel approach for term extraction based on delimiters, which are much more stable and domain-independent. The proposed approach is not as sensitive to term frequency as that of previous works. This approach has no strict limit or hard rules, and thus it can deal with all kinds of terms. It also requires no prior domain knowledge and no additional training to adapt to new domains. Consequently, the proposed approach can be applied to different domains easily, and it is especially useful for resource-limited domains. Evaluations conducted on two different domains for Chinese term extraction show significant improvements over existing techniques, which verifies its efficiency and domain-independent nature. Experiments on new term extraction indicate that the proposed approach can also serve as an effective tool for domain lexicon expansion.
| [
{
"id": "L08-1154.1",
"char_start": 29,
"char_end": 44
},
{
"id": "L08-1154.2",
"char_start": 60,
"char_end": 95
},
{
"id": "L08-1154.3",
"char_start": 112,
"char_end": 133
},
{
"id": "L08-1154.4",
"char_start": 185,
"char_end": 193
},
{
"id": "L08-1154.5",
"char_start": 224,
"char_end": 229
},
{
"id": "L08-1154.6",
"char_start": 235,
"char_end": 244
},
{
"id": "L08-1154.7",
"char_start": 258,
"char_end": 266
},
{
"id": "L08-1154.8",
"char_start": 305,
"char_end": 313
},
{
"id": "L08-1154.9",
"char_start": 356,
"char_end": 371
},
{
"id": "L08-1154.10",
"char_start": 381,
"char_end": 391
},
{
"id": "L08-1154.11",
"char_start": 488,
"char_end": 502
},
{
"id": "L08-1154.12",
"char_start": 567,
"char_end": 577
},
{
"id": "L08-1154.13",
"char_start": 619,
"char_end": 624
},
{
"id": "L08-1154.14",
"char_start": 652,
"char_end": 668
},
{
"id": "L08-1154.15",
"char_start": 687,
"char_end": 695
},
{
"id": "L08-1154.16",
"char_start": 712,
"char_end": 719
},
{
"id": "L08-1154.17",
"char_start": 785,
"char_end": 792
},
{
"id": "L08-1154.18",
"char_start": 832,
"char_end": 856
},
{
"id": "L08-1154.19",
"char_start": 897,
"char_end": 904
},
{
"id": "L08-1154.20",
"char_start": 909,
"char_end": 932
},
{
"id": "L08-1154.21",
"char_start": 1064,
"char_end": 1083
},
{
"id": "L08-1154.22",
"char_start": 1160,
"char_end": 1184
}
] | [
{
"label": 3,
"arg1": "L08-1154.2",
"arg2": "L08-1154.3",
"reverse": false
},
{
"label": 3,
"arg1": "L08-1154.4",
"arg2": "L08-1154.5",
"reverse": false
},
{
"label": 1,
"arg1": "L08-1154.9",
"arg2": "L08-1154.10",
"reverse": true
}
] |
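The core idea above, extracting the stable words between delimiters rather than modelling the terms themselves, can be sketched as follows. The delimiter list is hypothetical and tiny; the paper derives delimiters from the corpus, and real input would of course be full Chinese text.

```python
# Hypothetical delimiter set: frequent function words that rarely occur inside terms.
DELIMITERS = {"的", "是", "在", "和", "了"}

def extract_term_candidates(tokens):
    """Split the token stream at delimiter tokens; the maximal runs of
    non-delimiter tokens in between are the term candidates."""
    candidates, current = [], []
    for tok in tokens:
        if tok in DELIMITERS:
            if current:
                candidates.append("".join(current))
                current = []
        else:
            current.append(tok)
    if current:
        candidates.append("".join(current))
    return candidates

# Toy pre-tokenized sentence (tokenization itself is out of scope here).
print(extract_term_candidates(["语言", "模型", "的", "训练", "数据"]))  # -> ['语言模型', '训练数据']
```

Because only the small, closed set of delimiters needs to be known, the same extractor covers unseen terms and new domains, which is the property the abstract emphasizes.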
L08-1050 | From Sentence to Discourse : Building an Annotation Scheme for Discourse Based on Prague Dependency Treebank
|
The present paper reports on preparatory research for building a language corpus annotation scenario capturing the discourse relations in Czech. We primarily focus on the description of the syntactically motivated relations in discourse, basing our findings on the theoretical background of the Prague Dependency Treebank 2.0 and the Penn Discourse Treebank 2. Our aim is to revisit the present-day syntactico-semantic (tectogrammatical) annotation in the Prague Dependency Treebank, extend it for the purposes of a sentence-boundary-crossing representation, and eventually to design a new discourse level of annotation. In this paper, we propose a feasible process for such a transfer, comparing the possibilities the Praguian dependency-based approach offers with those of the Penn discourse annotation, which is based primarily on the analysis and classification of discourse connectives.
| [
{
"id": "L08-1050.1",
"char_start": 68,
"char_end": 103
},
{
"id": "L08-1050.2",
"char_start": 118,
"char_end": 137
},
{
"id": "L08-1050.3",
"char_start": 141,
"char_end": 146
},
{
"id": "L08-1050.4",
"char_start": 193,
"char_end": 226
},
{
"id": "L08-1050.5",
"char_start": 230,
"char_end": 239
},
{
"id": "L08-1050.6",
"char_start": 298,
"char_end": 328
},
{
"id": "L08-1050.7",
"char_start": 337,
"char_end": 362
},
{
"id": "L08-1050.8",
"char_start": 402,
"char_end": 451
},
{
"id": "L08-1050.9",
"char_start": 459,
"char_end": 485
},
{
"id": "L08-1050.10",
"char_start": 519,
"char_end": 560
},
{
"id": "L08-1050.11",
"char_start": 593,
"char_end": 608
},
{
"id": "L08-1050.12",
"char_start": 612,
"char_end": 622
},
{
"id": "L08-1050.13",
"char_start": 721,
"char_end": 755
},
{
"id": "L08-1050.14",
"char_start": 772,
"char_end": 797
},
{
"id": "L08-1050.15",
"char_start": 852,
"char_end": 873
}
] | [
{
"label": 4,
"arg1": "L08-1050.2",
"arg2": "L08-1050.3",
"reverse": false
},
{
"label": 4,
"arg1": "L08-1050.4",
"arg2": "L08-1050.5",
"reverse": false
},
{
"label": 4,
"arg1": "L08-1050.8",
"arg2": "L08-1050.9",
"reverse": false
},
{
"label": 3,
"arg1": "L08-1050.11",
"arg2": "L08-1050.12",
"reverse": false
},
{
"label": 6,
"arg1": "L08-1050.13",
"arg2": "L08-1050.14",
"reverse": false
}
] |
L08-1097 | Unsupervised Acquisition of Verb Subcategorization Frames from Shallow-Parsed Corpora
|
In this paper, we report experiments on the unsupervised automatic acquisition of Italian and English verb subcategorization frames (SCFs) from general and domain corpora. The proposed technique operates on syntactically shallow-parsed corpora on the basis of a limited number of search heuristics not relying on any previous lexico-syntactic knowledge about SCFs. Although preliminary, the reported results are in line with state-of-the-art lexical acquisition systems. The issue of whether verbs sharing similar SCF distributions happen to share similar semantic properties as well was also explored by clustering verbs that share frames with the same distribution using the Minimum Description Length Principle (MDL). First experiments in this direction were carried out on Italian verbs with encouraging results.
| [
{
"id": "L08-1097.1",
"char_start": 43,
"char_end": 77
},
{
"id": "L08-1097.2",
"char_start": 81,
"char_end": 137
},
{
"id": "L08-1097.3",
"char_start": 143,
"char_end": 169
},
{
"id": "L08-1097.4",
"char_start": 206,
"char_end": 242
},
{
"id": "L08-1097.5",
"char_start": 279,
"char_end": 296
},
{
"id": "L08-1097.6",
"char_start": 325,
"char_end": 351
},
{
"id": "L08-1097.7",
"char_start": 358,
"char_end": 362
},
{
"id": "L08-1097.8",
"char_start": 420,
"char_end": 464
},
{
"id": "L08-1097.9",
"char_start": 487,
"char_end": 492
},
{
"id": "L08-1097.10",
"char_start": 509,
"char_end": 527
},
{
"id": "L08-1097.11",
"char_start": 544,
"char_end": 571
},
{
"id": "L08-1097.12",
"char_start": 612,
"char_end": 617
},
{
"id": "L08-1097.13",
"char_start": 629,
"char_end": 635
},
{
"id": "L08-1097.14",
"char_start": 650,
"char_end": 662
},
{
"id": "L08-1097.15",
"char_start": 673,
"char_end": 715
},
{
"id": "L08-1097.16",
"char_start": 773,
"char_end": 786
}
] | [
{
"label": 1,
"arg1": "L08-1097.1",
"arg2": "L08-1097.3",
"reverse": true
},
{
"label": 3,
"arg1": "L08-1097.6",
"arg2": "L08-1097.7",
"reverse": false
},
{
"label": 3,
"arg1": "L08-1097.9",
"arg2": "L08-1097.10",
"reverse": true
},
{
"label": 3,
"arg1": "L08-1097.12",
"arg2": "L08-1097.13",
"reverse": true
}
] |
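To make the clustering step above concrete, the sketch below compares verbs by the similarity of their SCF distributions using Jensen-Shannon divergence, a simple stand-in for the MDL-based criterion used in the paper. The verbs and their distributions are hypothetical.

```python
import math

# Hypothetical SCF distributions: verb -> {frame: probability}.
VERBS = {
    "mangiare": {"NP-V-NP": 0.70, "NP-V": 0.30},   # 'eat'
    "bere":     {"NP-V-NP": 0.65, "NP-V": 0.35},   # 'drink'
    "andare":   {"NP-V-PP": 0.80, "NP-V": 0.20},   # 'go'
}

def js_divergence(p, q):
    """Jensen-Shannon divergence between two SCF distributions."""
    keys = set(p) | set(q)
    m = {k: (p.get(k, 0.0) + q.get(k, 0.0)) / 2 for k in keys}
    def kl(a):
        return sum(v * math.log2(v / m[k]) for k, v in a.items() if v > 0)
    return (kl(p) + kl(q)) / 2

# Group verbs whose SCF distributions are close (threshold is hypothetical).
verbs = list(VERBS)
for i, v in enumerate(verbs):
    for w in verbs[i + 1:]:
        d = js_divergence(VERBS[v], VERBS[w])
        print(f"JSD({v}, {w}) = {d:.3f}" + ("  -> cluster together" if d < 0.05 else ""))
```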
N04-2005 | A Multi-Path Architecture For Machine Translation Of English Text Into American Sign Language Animation
|
The translation of English text into American Sign Language (ASL) animation tests the limits of traditional MT architectural designs. A new semantic representation is proposed that uses virtual reality 3D scene modeling software to produce spatially complex ASL phenomena called "classifier predicates." The model acts as an interlingua within a new multi-pathway MT architecture design that also incorporates transfer and direct approaches into a single system.
| [
{
"id": "N04-2005.1",
"char_start": 5,
"char_end": 16
},
{
"id": "N04-2005.2",
"char_start": 20,
"char_end": 32
},
{
"id": "N04-2005.3",
"char_start": 38,
"char_end": 76
},
{
"id": "N04-2005.4",
"char_start": 97,
"char_end": 133
},
{
"id": "N04-2005.5",
"char_start": 141,
"char_end": 164
},
{
"id": "N04-2005.6",
"char_start": 187,
"char_end": 229
},
{
"id": "N04-2005.7",
"char_start": 241,
"char_end": 272
},
{
"id": "N04-2005.8",
"char_start": 281,
"char_end": 302
},
{
"id": "N04-2005.9",
"char_start": 326,
"char_end": 337
},
{
"id": "N04-2005.10",
"char_start": 351,
"char_end": 387
},
{
"id": "N04-2005.11",
"char_start": 411,
"char_end": 419
},
{
"id": "N04-2005.12",
"char_start": 424,
"char_end": 441
}
] | [
{
"label": 1,
"arg1": "N04-2005.1",
"arg2": "N04-2005.4",
"reverse": true
},
{
"label": 1,
"arg1": "N04-2005.5",
"arg2": "N04-2005.6",
"reverse": false
},
{
"label": 4,
"arg1": "N04-2005.9",
"arg2": "N04-2005.10",
"reverse": false
}
] |
A92-1010 | Integrating Natural Language Components Into Graphical Discourse
|
In our current research into the design of cognitively well-motivated interfaces relying primarily on the display of graphical information, we have observed that graphical information alone does not provide sufficient support to users - particularly when situations arise that do not simply conform to the users' expectations. This can occur due to too much information being requested, too little, information of the wrong kind, etc. To solve this problem, we are working towards the integration of natural language generation to augment the interaction.
| [
{
"id": "A92-1010.1",
"char_start": 44,
"char_end": 81
},
{
"id": "A92-1010.2",
"char_start": 107,
"char_end": 139
},
{
"id": "A92-1010.3",
"char_start": 163,
"char_end": 184
},
{
"id": "A92-1010.4",
"char_start": 359,
"char_end": 370
},
{
"id": "A92-1010.5",
"char_start": 400,
"char_end": 411
},
{
"id": "A92-1010.6",
"char_start": 501,
"char_end": 528
},
{
"id": "A92-1010.7",
"char_start": 544,
"char_end": 555
}
] | [
{
"label": 1,
"arg1": "A92-1010.1",
"arg2": "A92-1010.2",
"reverse": true
},
{
"label": 1,
"arg1": "A92-1010.6",
"arg2": "A92-1010.7",
"reverse": false
}
] |
H94-1064 | The LIMSI Continuous Speech Dictation System
|
A major axis of research at LIMSI is directed at multilingual, speaker-independent, large vocabulary speech dictation. In this paper the LIMSI recognizer which was evaluated in the ARPA NOV93 CSR test is described, and experimental results on the WSJ and BREF corpora under closely matched conditions are reported. For both corpora word recognition experiments were carried out with vocabularies containing up to 20k words. The recognizer makes use of continuous density HMM with Gaussian mixture for acoustic modeling and n-gram statistics estimated on the newspaper texts for language modeling. The recognizer uses a time-synchronous graph-search strategy which is shown to still be viable with a 20k-word vocabulary when used with bigram back-off language models. A second forward pass, which makes use of a word graph generated with the bigram, incorporates a trigram language model. Acoustic modeling uses cepstrum-based features, context-dependent phone models (intra and interword), phone duration models, and sex-dependent models.
| [
{
"id": "H94-1064.1",
"char_start": 50,
"char_end": 118
},
{
"id": "H94-1064.2",
"char_start": 138,
"char_end": 154
},
{
"id": "H94-1064.3",
"char_start": 182,
"char_end": 201
},
{
"id": "H94-1064.4",
"char_start": 248,
"char_end": 268
},
{
"id": "H94-1064.5",
"char_start": 325,
"char_end": 332
},
{
"id": "H94-1064.6",
"char_start": 333,
"char_end": 361
},
{
"id": "H94-1064.7",
"char_start": 384,
"char_end": 396
},
{
"id": "H94-1064.8",
"char_start": 418,
"char_end": 423
},
{
"id": "H94-1064.9",
"char_start": 453,
"char_end": 475
},
{
"id": "H94-1064.10",
"char_start": 481,
"char_end": 497
},
{
"id": "H94-1064.11",
"char_start": 502,
"char_end": 519
},
{
"id": "H94-1064.12",
"char_start": 524,
"char_end": 541
},
{
"id": "H94-1064.13",
"char_start": 559,
"char_end": 574
},
{
"id": "H94-1064.14",
"char_start": 579,
"char_end": 596
},
{
"id": "H94-1064.15",
"char_start": 620,
"char_end": 658
},
{
"id": "H94-1064.16",
"char_start": 735,
"char_end": 766
},
{
"id": "H94-1064.17",
"char_start": 777,
"char_end": 789
},
{
"id": "H94-1064.18",
"char_start": 812,
"char_end": 822
},
{
"id": "H94-1064.19",
"char_start": 842,
"char_end": 848
},
{
"id": "H94-1064.20",
"char_start": 865,
"char_end": 887
},
{
"id": "H94-1064.21",
"char_start": 889,
"char_end": 906
},
{
"id": "H94-1064.22",
"char_start": 912,
"char_end": 935
},
{
"id": "H94-1064.23",
"char_start": 937,
"char_end": 989
},
{
"id": "H94-1064.24",
"char_start": 991,
"char_end": 1012
},
{
"id": "H94-1064.25",
"char_start": 1018,
"char_end": 1038
}
] | [
{
"label": 4,
"arg1": "H94-1064.7",
"arg2": "H94-1064.8",
"reverse": true
},
{
"label": 3,
"arg1": "H94-1064.9",
"arg2": "H94-1064.10",
"reverse": true
},
{
"label": 1,
"arg1": "H94-1064.12",
"arg2": "H94-1064.14",
"reverse": false
},
{
"label": 1,
"arg1": "H94-1064.17",
"arg2": "H94-1064.18",
"reverse": true
},
{
"label": 1,
"arg1": "H94-1064.21",
"arg2": "H94-1064.22",
"reverse": true
}
] |
A00-1024 |
Categorizing Unknown Words : Using Decision Trees To Identify Names And Misspellings
|
This paper introduces a system for categorizing unknown words. The system is based on a multi-component architecture where each component is responsible for identifying one class of unknown words. The focus of this paper is the components that identify names and spelling errors. Each component uses a decision tree architecture to combine multiple types of evidence about the unknown word. The system is evaluated using data from live closed captions - a genre replete with a wide variety of unknown words.
| [
{
"id": "A00-1024.1",
"char_start": 25,
"char_end": 62
},
{
"id": "A00-1024.2",
"char_start": 68,
"char_end": 74
},
{
"id": "A00-1024.3",
"char_start": 89,
"char_end": 117
},
{
"id": "A00-1024.4",
"char_start": 129,
"char_end": 138
},
{
"id": "A00-1024.5",
"char_start": 183,
"char_end": 196
},
{
"id": "A00-1024.6",
"char_start": 229,
"char_end": 239
},
{
"id": "A00-1024.7",
"char_start": 254,
"char_end": 259
},
{
"id": "A00-1024.8",
"char_start": 264,
"char_end": 279
},
{
"id": "A00-1024.9",
"char_start": 286,
"char_end": 295
},
{
"id": "A00-1024.10",
"char_start": 303,
"char_end": 329
},
{
"id": "A00-1024.11",
"char_start": 359,
"char_end": 367
},
{
"id": "A00-1024.12",
"char_start": 378,
"char_end": 390
},
{
"id": "A00-1024.13",
"char_start": 396,
"char_end": 402
},
{
"id": "A00-1024.14",
"char_start": 432,
"char_end": 452
},
{
"id": "A00-1024.15",
"char_start": 494,
"char_end": 507
}
] | [
{
"label": 1,
"arg1": "A00-1024.2",
"arg2": "A00-1024.3",
"reverse": true
},
{
"label": 1,
"arg1": "A00-1024.6",
"arg2": "A00-1024.7",
"reverse": false
},
{
"label": 1,
"arg1": "A00-1024.9",
"arg2": "A00-1024.10",
"reverse": true
},
{
"label": 3,
"arg1": "A00-1024.11",
"arg2": "A00-1024.12",
"reverse": false
},
{
"label": 4,
"arg1": "A00-1024.14",
"arg2": "A00-1024.15",
"reverse": true
}
] |
X96-1059 | NEC Corporation And University Of Sheffield: Description Of NEC/Sheffleld System Used For MET Japanese
|
Recognition of proper nouns in Japanese text has been studied as a part of the more general problem of morphological analysis in Japanese text processing ([1] [2]). It has also been studied in the framework of Japanese information extraction ([3]) in recent years. Our approach to the Multi-lingual Evaluation Task (MET) for Japanese text is to consider the given task as a morphological analysis problem in Japanese. Our morphological analyzer has done all the necessary work for the recognition and classification of proper names, numerical and temporal expressions, i.e. Named Entity (NE) items in the Japanese text. The analyzer is called "Amorph". Amorph recognizes NE items in two stages: dictionary lookup and rule application. First, it uses several kinds of dictionaries to segment and tag Japanese character strings. Second, based on the information resulting from the dictionary lookup stage, a set of rules is applied to the segmented strings in order to identify NE items. When a segment is found to be an NE item, this information is added to the segment and it is used to generate the final output.
| [
{
"id": "X96-1059.1",
"char_start": 1,
"char_end": 28
},
{
"id": "X96-1059.2",
"char_start": 32,
"char_end": 45
},
{
"id": "X96-1059.3",
"char_start": 104,
"char_end": 126
},
{
"id": "X96-1059.4",
"char_start": 130,
"char_end": 154
},
{
"id": "X96-1059.5",
"char_start": 211,
"char_end": 242
},
{
"id": "X96-1059.6",
"char_start": 326,
"char_end": 339
},
{
"id": "X96-1059.7",
"char_start": 375,
"char_end": 405
},
{
"id": "X96-1059.8",
"char_start": 409,
"char_end": 417
},
{
"id": "X96-1059.9",
"char_start": 423,
"char_end": 445
},
{
"id": "X96-1059.10",
"char_start": 486,
"char_end": 598
},
{
"id": "X96-1059.11",
"char_start": 606,
"char_end": 619
},
{
"id": "X96-1059.12",
"char_start": 625,
"char_end": 633
},
{
"id": "X96-1059.13",
"char_start": 672,
"char_end": 680
},
{
"id": "X96-1059.14",
"char_start": 696,
"char_end": 713
},
{
"id": "X96-1059.15",
"char_start": 718,
"char_end": 734
},
{
"id": "X96-1059.16",
"char_start": 768,
"char_end": 780
},
{
"id": "X96-1059.17",
"char_start": 800,
"char_end": 826
},
{
"id": "X96-1059.18",
"char_start": 880,
"char_end": 903
},
{
"id": "X96-1059.19",
"char_start": 914,
"char_end": 919
},
{
"id": "X96-1059.20",
"char_start": 938,
"char_end": 955
},
{
"id": "X96-1059.21",
"char_start": 977,
"char_end": 985
},
{
"id": "X96-1059.22",
"char_start": 994,
"char_end": 1001
},
{
"id": "X96-1059.23",
"char_start": 1020,
"char_end": 1027
},
{
"id": "X96-1059.24",
"char_start": 1062,
"char_end": 1069
}
] | [
{
"label": 1,
"arg1": "X96-1059.1",
"arg2": "X96-1059.2",
"reverse": false
},
{
"label": 4,
"arg1": "X96-1059.3",
"arg2": "X96-1059.4",
"reverse": false
},
{
"label": 1,
"arg1": "X96-1059.7",
"arg2": "X96-1059.8",
"reverse": false
},
{
"label": 1,
"arg1": "X96-1059.9",
"arg2": "X96-1059.10",
"reverse": false
},
{
"label": 1,
"arg1": "X96-1059.16",
"arg2": "X96-1059.17",
"reverse": false
},
{
"label": 1,
"arg1": "X96-1059.19",
"arg2": "X96-1059.20",
"reverse": false
}
] |
H05-1041 | A Practically Unsupervised Learning Method To Identify Single-Snippet Answers To Definition Questions On The Web
|
We present a practically unsupervised learning method to produce single-snippet answers to definition questions in question answering systems that supplement Web search engines. The method exploits on-line encyclopedias and dictionaries to generate automatically an arbitrarily large number of positive and negative definition examples, which are then used to train an SVM to separate the two classes. We show experimentally that the proposed method is viable, that it outperforms the alternative of training the system on questions and news articles from TREC, and that it helps the search engine handle definition questions significantly better.
| [
{
"id": "H05-1041.1",
"char_start": 14,
"char_end": 54
},
{
"id": "H05-1041.2",
"char_start": 66,
"char_end": 88
},
{
"id": "H05-1041.3",
"char_start": 92,
"char_end": 112
},
{
"id": "H05-1041.4",
"char_start": 116,
"char_end": 142
},
{
"id": "H05-1041.5",
"char_start": 159,
"char_end": 177
},
{
"id": "H05-1041.6",
"char_start": 199,
"char_end": 237
},
{
"id": "H05-1041.7",
"char_start": 295,
"char_end": 336
},
{
"id": "H05-1041.8",
"char_start": 370,
"char_end": 373
},
{
"id": "H05-1041.9",
"char_start": 514,
"char_end": 520
},
{
"id": "H05-1041.10",
"char_start": 524,
"char_end": 533
},
{
"id": "H05-1041.11",
"char_start": 538,
"char_end": 561
},
{
"id": "H05-1041.12",
"char_start": 585,
"char_end": 598
},
{
"id": "H05-1041.13",
"char_start": 606,
"char_end": 626
}
] | [
{
"label": 1,
"arg1": "H05-1041.1",
"arg2": "H05-1041.2",
"reverse": false
},
{
"label": 1,
"arg1": "H05-1041.6",
"arg2": "H05-1041.7",
"reverse": false
}
] |
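The training step described above, learning to separate definitional from non-definitional snippets with an SVM, can be sketched with scikit-learn. The snippets below are hypothetical stand-ins for the examples the paper generates from on-line encyclopedias and dictionaries, and the features here are plain word n-grams rather than the paper's feature set.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical training snippets: 1 = definitional, 0 = non-definitional.
snippets = [
    "A gene is a unit of heredity transferred from parent to offspring.",
    "Malaria is a disease caused by a parasite transmitted by mosquitoes.",
    "The committee will meet on Thursday to discuss the budget.",
    "He bought a new car last week and drove it to the coast.",
]
labels = [1, 1, 0, 0]

# Bag-of-n-grams features feeding a linear SVM that separates the two classes.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(snippets, labels)

print(clf.predict(["An atom is the smallest unit of ordinary matter."]))
```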
W02-1403 | Lexically-Based Terminology Structuring : Some Inherent Limits
|
Terminology structuring has been the subject of much work in the context of terms extracted from corpora: given a set of terms, obtained from an existing resource or extracted from a corpus, identifying hierarchical (or other types of) relations between these terms. The present paper focusses on terminology structuring by lexical methods, which match terms on the basis of their content words, taking morphological variants into account. Experiments are done on a 'flat' list of terms obtained from an originally hierarchically-structured terminology: the French version of the US National Library of Medicine MeSH thesaurus. We compare the lexically-induced relations with the original MeSH relations: after a quantitative evaluation of their congruence through recall and precision metrics, we perform a qualitative, human analysis of the 'new' relations not present in the MeSH. This analysis shows, on the one hand, the limits of the lexical structuring method. On the other hand, it also reveals some specific structuring choices and naming conventions made by the MeSH designers, and emphasizes ontological commitments that cannot be left to automatic structuring.
| [
{
"id": "W02-1403.1",
"char_start": 1,
"char_end": 24
},
{
"id": "W02-1403.2",
"char_start": 77,
"char_end": 82
},
{
"id": "W02-1403.3",
"char_start": 98,
"char_end": 105
},
{
"id": "W02-1403.4",
"char_start": 122,
"char_end": 127
},
{
"id": "W02-1403.5",
"char_start": 184,
"char_end": 190
},
{
"id": "W02-1403.6",
"char_start": 204,
"char_end": 246
},
{
"id": "W02-1403.7",
"char_start": 261,
"char_end": 266
},
{
"id": "W02-1403.8",
"char_start": 298,
"char_end": 321
},
{
"id": "W02-1403.9",
"char_start": 325,
"char_end": 340
},
{
"id": "W02-1403.10",
"char_start": 354,
"char_end": 359
},
{
"id": "W02-1403.11",
"char_start": 382,
"char_end": 395
},
{
"id": "W02-1403.12",
"char_start": 404,
"char_end": 426
},
{
"id": "W02-1403.13",
"char_start": 482,
"char_end": 487
},
{
"id": "W02-1403.14",
"char_start": 516,
"char_end": 553
},
{
"id": "W02-1403.15",
"char_start": 581,
"char_end": 627
},
{
"id": "W02-1403.16",
"char_start": 644,
"char_end": 671
},
{
"id": "W02-1403.17",
"char_start": 690,
"char_end": 704
},
{
"id": "W02-1403.18",
"char_start": 766,
"char_end": 794
},
{
"id": "W02-1403.19",
"char_start": 849,
"char_end": 858
},
{
"id": "W02-1403.20",
"char_start": 878,
"char_end": 882
},
{
"id": "W02-1403.21",
"char_start": 940,
"char_end": 966
},
{
"id": "W02-1403.22",
"char_start": 1041,
"char_end": 1059
},
{
"id": "W02-1403.23",
"char_start": 1072,
"char_end": 1076
},
{
"id": "W02-1403.24",
"char_start": 1150,
"char_end": 1171
}
] | [
{
"label": 4,
"arg1": "W02-1403.2",
"arg2": "W02-1403.3",
"reverse": false
},
{
"label": 4,
"arg1": "W02-1403.4",
"arg2": "W02-1403.5",
"reverse": false
},
{
"label": 3,
"arg1": "W02-1403.6",
"arg2": "W02-1403.7",
"reverse": false
},
{
"label": 1,
"arg1": "W02-1403.8",
"arg2": "W02-1403.9",
"reverse": true
},
{
"label": 4,
"arg1": "W02-1403.10",
"arg2": "W02-1403.11",
"reverse": true
},
{
"label": 4,
"arg1": "W02-1403.13",
"arg2": "W02-1403.14",
"reverse": false
},
{
"label": 6,
"arg1": "W02-1403.16",
"arg2": "W02-1403.17",
"reverse": false
},
{
"label": 4,
"arg1": "W02-1403.19",
"arg2": "W02-1403.20",
"reverse": false
},
{
"label": 3,
"arg1": "W02-1403.22",
"arg2": "W02-1403.23",
"reverse": false
}
] |
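The quantitative congruence evaluation mentioned above reduces to set-based recall and precision over relation pairs. A minimal sketch, with hypothetical (broader term, narrower term) pairs standing in for the induced and original MeSH relations:

```python
def precision_recall(induced: set, reference: set):
    """Congruence of an induced relation set against a reference set."""
    true_positives = induced & reference
    precision = len(true_positives) / len(induced) if induced else 0.0
    recall = len(true_positives) / len(reference) if reference else 0.0
    return precision, recall

# Hypothetical hierarchical relations as (broader term, narrower term) pairs.
lexically_induced = {("acid", "amino acid"), ("acid", "fatty acid"), ("virus", "viruses")}
mesh_reference = {("acid", "amino acid"), ("acid", "fatty acid"), ("acid", "nucleic acid")}

print(precision_recall(lexically_induced, mesh_reference))  # -> approx. (0.67, 0.67)
```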
W02-1404 | Alignment And Extraction Of Bilingual Legal Terminology From Context Profiles
|
In this study, we propose a knowledge-independent method for aligning terms and thus extracting translations from a small, domain-specific corpus consisting of parallel English and Chinese court judgments from Hong Kong. With a sentence-aligned corpus, translation equivalences are suggested by analysing the frequency profiles of parallel concordances. The method overcomes the limitations of conventional statistical methods, which require large corpora to be effective, and of lexical approaches, which depend on existing bilingual dictionaries. Pilot testing on a parallel corpus of about 113K Chinese words and 120K English words gives an encouraging 85% precision and 45% recall. Future work includes fine-tuning the algorithm based on an analysis of the errors, and acquiring a translation lexicon for legal terminology by filtering out general terms.
| [
{
"id": "W02-1404.1",
"char_start": 29,
"char_end": 57
},
{
"id": "W02-1404.2",
"char_start": 71,
"char_end": 76
},
{
"id": "W02-1404.3",
"char_start": 97,
"char_end": 109
},
{
"id": "W02-1404.4",
"char_start": 117,
"char_end": 146
},
{
"id": "W02-1404.5",
"char_start": 161,
"char_end": 205
},
{
"id": "W02-1404.6",
"char_start": 229,
"char_end": 252
},
{
"id": "W02-1404.7",
"char_start": 254,
"char_end": 278
},
{
"id": "W02-1404.8",
"char_start": 310,
"char_end": 328
},
{
"id": "W02-1404.9",
"char_start": 332,
"char_end": 353
},
{
"id": "W02-1404.10",
"char_start": 395,
"char_end": 427
},
{
"id": "W02-1404.11",
"char_start": 442,
"char_end": 455
},
{
"id": "W02-1404.12",
"char_start": 477,
"char_end": 495
},
{
"id": "W02-1404.13",
"char_start": 521,
"char_end": 543
},
{
"id": "W02-1404.14",
"char_start": 564,
"char_end": 579
},
{
"id": "W02-1404.15",
"char_start": 594,
"char_end": 607
},
{
"id": "W02-1404.16",
"char_start": 617,
"char_end": 630
},
{
"id": "W02-1404.17",
"char_start": 656,
"char_end": 665
},
{
"id": "W02-1404.18",
"char_start": 674,
"char_end": 680
},
{
"id": "W02-1404.19",
"char_start": 719,
"char_end": 728
},
{
"id": "W02-1404.20",
"char_start": 778,
"char_end": 797
},
{
"id": "W02-1404.21",
"char_start": 802,
"char_end": 819
},
{
"id": "W02-1404.22",
"char_start": 837,
"char_end": 850
}
] | [
{
"label": 4,
"arg1": "W02-1404.3",
"arg2": "W02-1404.4",
"reverse": false
},
{
"label": 3,
"arg1": "W02-1404.8",
"arg2": "W02-1404.9",
"reverse": false
},
{
"label": 1,
"arg1": "W02-1404.10",
"arg2": "W02-1404.11",
"reverse": true
},
{
"label": 1,
"arg1": "W02-1404.12",
"arg2": "W02-1404.13",
"reverse": true
},
{
"label": 4,
"arg1": "W02-1404.14",
"arg2": "W02-1404.15",
"reverse": true
},
{
"label": 4,
"arg1": "W02-1404.20",
"arg2": "W02-1404.21",
"reverse": true
}
] |
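The alignment idea above, suggesting translation equivalences from the frequency profiles of parallel concordances, can be sketched on a sentence-aligned corpus. The aligned fragments and candidate terms below are hypothetical.

```python
from collections import Counter

def translation_profile(source_term, aligned_pairs, target_candidates):
    """Count how often each target candidate appears in target sentences
    whose aligned source sentence contains the source term."""
    profile = Counter()
    for src, tgt in aligned_pairs:
        if source_term in src:
            for cand in target_candidates:
                if cand in tgt:
                    profile[cand] += 1
    return profile.most_common()

# Hypothetical sentence-aligned English/Chinese judgment fragments.
pairs = [
    ("the plaintiff filed an appeal", "原告 提出 上诉"),
    ("the plaintiff was absent", "原告 缺席"),
    ("the defendant filed an appeal", "被告 提出 上诉"),
]
print(translation_profile("plaintiff", pairs, ["原告", "被告", "上诉"]))
```

The candidate whose profile peaks across the concordances of the source term ("原告" here) is proposed as its translation equivalent.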
W02-1602 | Coedition To Share Text Revision Across Languages And Improve MT A Posteriori
|
Coedition of a natural language text and its representation in some interlingual form seems the best and simplest way to share text revision across languages. For various reasons, UNL graphs are the best candidates in this context. We are developing a prototype where, in the simplest sharing scenario, naive users interact directly with the text in their language (L0), and indirectly with the associated graph. The modified graph is then sent to the UNL-L0 deconverter and the result shown. If it is satisfactory, the errors were probably due to the graph, not to the deconverter, and the graph is sent to deconverters in other languages. Versions in some other languages known by the user may be displayed, so that improvement sharing is visible and encouraging. As new versions are added with appropriate tags and attributes in the original multilingual document, nothing is ever lost, and cooperative working on a document is rendered feasible. On the internal side, liaisons are established between elements of the text and the graph by using broadly available resources such as an L0-English or, better, an L0-UNL dictionary, a morphosyntactic parser of L0, and a canonical graph2tree transformation. Establishing a "best" correspondence between the "UNL-tree+L0" and the "MS-L0 structure", a lattice, may be done using the dictionary and trying to align the tree and the selected trajectory with as few crossing liaisons as possible. A central goal of this research is to merge approaches from pivot MT, interactive MT, and multilingual text authoring.
| [
{
"id": "W02-1602.1",
"char_start": 1,
"char_end": 10
},
{
"id": "W02-1602.2",
"char_start": 16,
"char_end": 37
},
{
"id": "W02-1602.3",
"char_start": 69,
"char_end": 86
},
{
"id": "W02-1602.4",
"char_start": 128,
"char_end": 141
},
{
"id": "W02-1602.5",
"char_start": 149,
"char_end": 158
},
{
"id": "W02-1602.6",
"char_start": 181,
"char_end": 191
},
{
"id": "W02-1602.7",
"char_start": 253,
"char_end": 262
},
{
"id": "W02-1602.8",
"char_start": 286,
"char_end": 302
},
{
"id": "W02-1602.9",
"char_start": 343,
"char_end": 347
},
{
"id": "W02-1602.10",
"char_start": 357,
"char_end": 370
},
{
"id": "W02-1602.11",
"char_start": 407,
"char_end": 412
},
{
"id": "W02-1602.12",
"char_start": 427,
"char_end": 432
},
{
"id": "W02-1602.13",
"char_start": 453,
"char_end": 471
},
{
"id": "W02-1602.14",
"char_start": 553,
"char_end": 558
},
{
"id": "W02-1602.15",
"char_start": 571,
"char_end": 582
},
{
"id": "W02-1602.16",
"char_start": 592,
"char_end": 597
},
{
"id": "W02-1602.17",
"char_start": 609,
"char_end": 621
},
{
"id": "W02-1602.18",
"char_start": 631,
"char_end": 640
},
{
"id": "W02-1602.19",
"char_start": 665,
"char_end": 674
},
{
"id": "W02-1602.20",
"char_start": 810,
"char_end": 814
},
{
"id": "W02-1602.21",
"char_start": 819,
"char_end": 829
},
{
"id": "W02-1602.22",
"char_start": 837,
"char_end": 867
},
{
"id": "W02-1602.23",
"char_start": 920,
"char_end": 928
},
{
"id": "W02-1602.24",
"char_start": 1022,
"char_end": 1026
},
{
"id": "W02-1602.25",
"char_start": 1035,
"char_end": 1040
},
{
"id": "W02-1602.26",
"char_start": 1088,
"char_end": 1128
},
{
"id": "W02-1602.27",
"char_start": 1132,
"char_end": 1160
},
{
"id": "W02-1602.28",
"char_start": 1168,
"char_end": 1203
},
{
"id": "W02-1602.29",
"char_start": 1255,
"char_end": 1266
},
{
"id": "W02-1602.30",
"char_start": 1277,
"char_end": 1292
},
{
"id": "W02-1602.31",
"char_start": 1297,
"char_end": 1304
},
{
"id": "W02-1602.32",
"char_start": 1328,
"char_end": 1338
},
{
"id": "W02-1602.33",
"char_start": 1363,
"char_end": 1367
},
{
"id": "W02-1602.34",
"char_start": 1385,
"char_end": 1395
},
{
"id": "W02-1602.35",
"char_start": 1408,
"char_end": 1425
},
{
"id": "W02-1602.36",
"char_start": 1499,
"char_end": 1507
},
{
"id": "W02-1602.37",
"char_start": 1509,
"char_end": 1523
},
{
"id": "W02-1602.38",
"char_start": 1529,
"char_end": 1556
}
] | [
{
"label": 1,
"arg1": "W02-1602.1",
"arg2": "W02-1602.2",
"reverse": false
},
{
"label": 3,
"arg1": "W02-1602.9",
"arg2": "W02-1602.10",
"reverse": true
},
{
"label": 3,
"arg1": "W02-1602.33",
"arg2": "W02-1602.35",
"reverse": true
}
] |
W03-0406 |
Unsupervised Learning Of Word Sense Disambiguation Rules By Estimating An Optimum Iteration Number In The EM Algorithm
|
In this paper, we improve an unsupervised learning method using the Expectation-Maximization (EM) algorithm proposed by Nigam et al. for text classification problems in order to apply it to word sense disambiguation (WSD) problems. The improved method stops the EM algorithm at the optimum iteration number. To estimate that number, we propose two methods. In experiments, we solved 50 noun WSD problems in the Japanese Dictionary Task in SENSEVAL2. The score of our method is a match for the best public score of this task. Furthermore, our methods were confirmed to be effective also for verb WSD problems.
| [
{
"id": "W03-0406.1",
"char_start": 30,
"char_end": 58
},
{
"id": "W03-0406.2",
"char_start": 69,
"char_end": 108
},
{
"id": "W03-0406.3",
"char_start": 138,
"char_end": 166
},
{
"id": "W03-0406.4",
"char_start": 191,
"char_end": 231
},
{
"id": "W03-0406.5",
"char_start": 263,
"char_end": 275
},
{
"id": "W03-0406.6",
"char_start": 283,
"char_end": 307
},
{
"id": "W03-0406.7",
"char_start": 387,
"char_end": 404
},
{
"id": "W03-0406.8",
"char_start": 412,
"char_end": 449
},
{
"id": "W03-0406.9",
"char_start": 591,
"char_end": 608
}
] | [
{
"label": 1,
"arg1": "W03-0406.2",
"arg2": "W03-0406.3",
"reverse": false
},
{
"label": 3,
"arg1": "W03-0406.5",
"arg2": "W03-0406.6",
"reverse": true
},
{
"label": 4,
"arg1": "W03-0406.7",
"arg2": "W03-0406.8",
"reverse": false
}
] |
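To illustrate the control flow of stopping EM at an optimum iteration number, here is a toy two-component Gaussian-mixture EM that remembers the iteration with the best held-out likelihood. Held-out likelihood is only a simple stand-in for the estimation methods proposed in the paper, and the data are synthetic.

```python
import math
import random

random.seed(0)

def gauss(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_step(data, params):
    """One EM iteration for a two-component 1-D Gaussian mixture (fixed, shared variance)."""
    w, mu0, mu1, var = params
    resp = [w * gauss(x, mu1, var) / (w * gauss(x, mu1, var) + (1 - w) * gauss(x, mu0, var))
            for x in data]
    n1 = sum(resp)
    mu1 = sum(r * x for r, x in zip(resp, data)) / n1
    mu0 = sum((1 - r) * x for r, x in zip(resp, data)) / (len(data) - n1)
    return n1 / len(data), mu0, mu1, var

def loglik(data, params):
    w, mu0, mu1, var = params
    return sum(math.log(w * gauss(x, mu1, var) + (1 - w) * gauss(x, mu0, var)) for x in data)

train = [random.gauss(0, 1) for _ in range(200)] + [random.gauss(4, 1) for _ in range(200)]
heldout = [random.gauss(0, 1) for _ in range(50)] + [random.gauss(4, 1) for _ in range(50)]

params = (0.5, -1.0, 1.0, 1.0)   # initial guess: weight, mu0, mu1, variance
best, best_iter = -float("inf"), 0
for it in range(1, 51):
    params = em_step(train, params)
    score = loglik(heldout, params)
    if score > best:             # track the optimum iteration number
        best, best_iter = score, it
print("estimated optimum iteration:", best_iter)
```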
W99-0408 | Modeling User Language Proficiency In A Writing Tutor For Deaf Learners Of English
|
In this paper we discuss a proposed user knowledge modeling architecture for the ICICLE system, a language tutoring application for deaf learners of written English. The model will represent the language proficiency of the user and is designed to be referenced during both writing analysis and feedback production. We motivate our model design by citing relevant research on second language and cognitive skill acquisition, and briefly discuss preliminary empirical evidence supporting the design. We conclude by showing how our design can provide a rich and robust information base to a language assessment / correction application by modeling user proficiency at a high level of granularity and specificity.
| [
{
"id": "W99-0408.1",
"char_start": 37,
"char_end": 73
},
{
"id": "W99-0408.2",
"char_start": 82,
"char_end": 95
},
{
"id": "W99-0408.3",
"char_start": 99,
"char_end": 128
},
{
"id": "W99-0408.4",
"char_start": 150,
"char_end": 165
},
{
"id": "W99-0408.5",
"char_start": 196,
"char_end": 216
},
{
"id": "W99-0408.6",
"char_start": 274,
"char_end": 290
},
{
"id": "W99-0408.7",
"char_start": 295,
"char_end": 314
},
{
"id": "W99-0408.8",
"char_start": 332,
"char_end": 344
},
{
"id": "W99-0408.9",
"char_start": 376,
"char_end": 423
},
{
"id": "W99-0408.10",
"char_start": 491,
"char_end": 497
},
{
"id": "W99-0408.11",
"char_start": 530,
"char_end": 536
},
{
"id": "W99-0408.12",
"char_start": 560,
"char_end": 583
},
{
"id": "W99-0408.13",
"char_start": 646,
"char_end": 662
}
] | [
{
"label": 4,
"arg1": "W99-0408.1",
"arg2": "W99-0408.2",
"reverse": false
},
{
"label": 1,
"arg1": "W99-0408.3",
"arg2": "W99-0408.4",
"reverse": false
},
{
"label": 1,
"arg1": "W99-0408.8",
"arg2": "W99-0408.9",
"reverse": true
}
] |
P98-1083 |
Using Decision Trees to Construct a Practical Parser
|
This paper describes novel and practical Japanese parsers that use decision trees. First, we construct a single decision tree to estimate modification probabilities: how one phrase tends to modify another. Next, we introduce a boosting algorithm in which several decision trees are constructed and then combined for probability estimation. The two constructed parsers are evaluated by using the EDR Japanese annotated corpus. The single-tree method outperforms the conventional Japanese stochastic methods by 4%. Moreover, the boosting version is shown to have significant advantages: 1) better parsing accuracy than its single-tree counterpart for any amount of training data and 2) no over-fitting to data for various iterations.
| [
{
"id": "P98-1083.1",
"char_start": 42,
"char_end": 58
},
{
"id": "P98-1083.2",
"char_start": 69,
"char_end": 83
},
{
"id": "P98-1083.3",
"char_start": 114,
"char_end": 127
},
{
"id": "P98-1083.4",
"char_start": 140,
"char_end": 166
},
{
"id": "P98-1083.5",
"char_start": 176,
"char_end": 182
},
{
"id": "P98-1083.6",
"char_start": 229,
"char_end": 247
},
{
"id": "P98-1083.7",
"char_start": 265,
"char_end": 279
},
{
"id": "P98-1083.8",
"char_start": 318,
"char_end": 340
},
{
"id": "P98-1083.9",
"char_start": 362,
"char_end": 369
},
{
"id": "P98-1083.10",
"char_start": 397,
"char_end": 426
},
{
"id": "P98-1083.11",
"char_start": 467,
"char_end": 507
},
{
"id": "P98-1083.12",
"char_start": 597,
"char_end": 613
},
{
"id": "P98-1083.13",
"char_start": 665,
"char_end": 678
},
{
"id": "P98-1083.14",
"char_start": 689,
"char_end": 709
},
{
"id": "P98-1083.15",
"char_start": 722,
"char_end": 732
}
] | [
{
"label": 1,
"arg1": "P98-1083.1",
"arg2": "P98-1083.2",
"reverse": true
},
{
"label": 3,
"arg1": "P98-1083.3",
"arg2": "P98-1083.4",
"reverse": false
},
{
"label": 1,
"arg1": "P98-1083.7",
"arg2": "P98-1083.8",
"reverse": false
}
] |
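The probability-estimation step above lends itself to a short sketch: train decision trees on features of candidate (modifier, head) phrase pairs and read off modification probabilities. The features and labels below are hypothetical, and the combination of several trees is reduced to simple probability averaging rather than the paper's boosting algorithm.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features for candidate (modifier, head) phrase pairs:
# [distance in phrases, modifier ends in comma, head is sentence-final]
X = np.array([[1, 0, 0], [2, 1, 0], [5, 1, 1], [1, 0, 1], [4, 0, 0], [2, 0, 1]])
y = np.array([1, 1, 0, 1, 0, 1])  # 1 = the modifier attaches to this head

# Several trees of different depths trained on the same data, combined by
# averaging predicted probabilities (a simplified stand-in for boosting).
trees = [DecisionTreeClassifier(max_depth=d, random_state=0).fit(X, y) for d in (1, 2, 3)]

candidate = np.array([[2, 1, 0]])
p = float(np.mean([t.predict_proba(candidate)[0][1] for t in trees]))
print(f"P(modify) = {p:.2f}")
```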
I08-1027 |
Automatic Estimation of Word Significance oriented for Speech-based Information Retrieval
|
Automatic estimation of word significance oriented for speech-based Information Retrieval (IR) is addressed. Since the significance of words differs in IR, automatic speech recognition (ASR) performance has been evaluated based on weighted word error rate (WWER), which gives a weight on errors from the viewpoint of IR, instead of word error rate (WER), which treats all words uniformly. A decoding strategy that minimizes WWER based on a Minimum Bayes-Risk framework has been shown, and the reduction of errors on both ASR and IR has been reported. In this paper, we propose an automatic estimation method for word significance (weights) based on its influence on IR. Specifically, weights are estimated so that evaluation measures of ASR and IR are equivalent. We apply the proposed method to a speech-based information retrieval system, which is a typical IR system, and show that the method works well.
| [
{
"id": "I08-1027.1",
"char_start": 1,
"char_end": 21
},
{
"id": "I08-1027.2",
"char_start": 25,
"char_end": 42
},
{
"id": "I08-1027.3",
"char_start": 56,
"char_end": 95
},
{
"id": "I08-1027.4",
"char_start": 120,
"char_end": 132
},
{
"id": "I08-1027.5",
"char_start": 136,
"char_end": 141
},
{
"id": "I08-1027.6",
"char_start": 153,
"char_end": 155
},
{
"id": "I08-1027.7",
"char_start": 157,
"char_end": 203
},
{
"id": "I08-1027.8",
"char_start": 232,
"char_end": 263
},
{
"id": "I08-1027.9",
"char_start": 279,
"char_end": 285
},
{
"id": "I08-1027.10",
"char_start": 318,
"char_end": 320
},
{
"id": "I08-1027.11",
"char_start": 333,
"char_end": 354
},
{
"id": "I08-1027.12",
"char_start": 373,
"char_end": 378
},
{
"id": "I08-1027.13",
"char_start": 392,
"char_end": 409
},
{
"id": "I08-1027.14",
"char_start": 425,
"char_end": 429
},
{
"id": "I08-1027.15",
"char_start": 441,
"char_end": 469
},
{
"id": "I08-1027.16",
"char_start": 522,
"char_end": 525
},
{
"id": "I08-1027.17",
"char_start": 530,
"char_end": 532
},
{
"id": "I08-1027.18",
"char_start": 581,
"char_end": 608
},
{
"id": "I08-1027.19",
"char_start": 613,
"char_end": 640
},
{
"id": "I08-1027.20",
"char_start": 667,
"char_end": 669
},
{
"id": "I08-1027.21",
"char_start": 685,
"char_end": 692
},
{
"id": "I08-1027.22",
"char_start": 715,
"char_end": 734
},
{
"id": "I08-1027.23",
"char_start": 738,
"char_end": 741
},
{
"id": "I08-1027.24",
"char_start": 746,
"char_end": 748
},
{
"id": "I08-1027.25",
"char_start": 799,
"char_end": 840
},
{
"id": "I08-1027.26",
"char_start": 861,
"char_end": 870
}
] | [
{
"label": 1,
"arg1": "I08-1027.2",
"arg2": "I08-1027.3",
"reverse": false
},
{
"label": 3,
"arg1": "I08-1027.4",
"arg2": "I08-1027.5",
"reverse": false
},
{
"label": 6,
"arg1": "I08-1027.9",
"arg2": "I08-1027.11",
"reverse": false
},
{
"label": 1,
"arg1": "I08-1027.13",
"arg2": "I08-1027.15",
"reverse": true
},
{
"label": 6,
"arg1": "I08-1027.16",
"arg2": "I08-1027.17",
"reverse": false
},
{
"label": 1,
"arg1": "I08-1027.18",
"arg2": "I08-1027.19",
"reverse": false
}
] |
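The weighted word error rate (WWER) described above can be sketched as a weighted edit distance, where each error costs the weight of the word involved and the total is normalised by the summed weights of the reference. This is one plausible formulation rather than necessarily the paper's exact definition, and the word weights are hypothetical.

```python
def wwer(ref, hyp, weight):
    """Weighted word error rate: weighted edit distance between reference
    and hypothesis, normalised by the total weight of the reference."""
    n, m = len(ref), len(hyp)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = d[i - 1][0] + weight(ref[i - 1])           # deletion
    for j in range(1, m + 1):
        d[0][j] = d[0][j - 1] + weight(hyp[j - 1])           # insertion
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0.0 if ref[i - 1] == hyp[j - 1] else weight(ref[i - 1])
            d[i][j] = min(d[i - 1][j] + weight(ref[i - 1]),  # deletion
                          d[i][j - 1] + weight(hyp[j - 1]),  # insertion
                          d[i - 1][j - 1] + sub)             # substitution / match
    return d[n][m] / sum(weight(word) for word in ref)

# Hypothetical weights: content words matter more for retrieval than stopwords.
weights = {"the": 0.1, "a": 0.1, "capital": 1.0, "of": 0.1, "japan": 1.0}
w = lambda word: weights.get(word, 0.5)

ref = "the capital of japan".split()
hyp = "a capital of spain".split()
print(f"WWER = {wwer(ref, hyp, w):.2f}")
```

With these weights the stopword error ("the" vs "a") barely moves the score, while the content-word error ("japan" vs "spain") dominates it, which is exactly the behaviour the abstract motivates for IR.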
I08-1043 |
Paraphrasing Depending on Bilingual Context Toward Generalization of Translation Knowledge
|
This study presents a method to automatically acquire paraphrases using bilingual corpora, which utilizes the bilingual dependency relations obtained by projecting a monolingual dependency parse onto the other language sentence based on statistical alignment techniques. Since the paraphrasing method is capable of clearly disambiguating the sense of an original phrase using the bilingual context of the dependency relation, it would be possible to obtain interchangeable paraphrases under a given context. Also, we provide an advanced method to acquire generalized translation knowledge using the extracted paraphrases. We applied the method to acquire generalized translation knowledge for Korean-English translation. Through experiments with parallel corpora of the Korean-English language pair, we show that our paraphrasing method effectively extracts paraphrases with high precision, 94.3% and 84.6% respectively for Korean and English, and that the translation knowledge extracted from the bilingual corpora could be generalized successfully using the paraphrases, with a 12.5% compression ratio.
| [
{
"id": "I08-1043.1",
"char_start": 23,
"char_end": 66
},
{
"id": "I08-1043.2",
"char_start": 73,
"char_end": 90
},
{
"id": "I08-1043.3",
"char_start": 111,
"char_end": 141
},
{
"id": "I08-1043.4",
"char_start": 167,
"char_end": 195
},
{
"id": "I08-1043.5",
"char_start": 238,
"char_end": 270
},
{
"id": "I08-1043.6",
"char_start": 282,
"char_end": 301
},
{
"id": "I08-1043.7",
"char_start": 343,
"char_end": 348
},
{
"id": "I08-1043.8",
"char_start": 364,
"char_end": 370
},
{
"id": "I08-1043.9",
"char_start": 381,
"char_end": 398
},
{
"id": "I08-1043.10",
"char_start": 402,
"char_end": 421
},
{
"id": "I08-1043.11",
"char_start": 470,
"char_end": 481
},
{
"id": "I08-1043.12",
"char_start": 496,
"char_end": 503
},
{
"id": "I08-1043.13",
"char_start": 552,
"char_end": 585
},
{
"id": "I08-1043.14",
"char_start": 606,
"char_end": 617
},
{
"id": "I08-1043.15",
"char_start": 656,
"char_end": 689
},
{
"id": "I08-1043.16",
"char_start": 694,
"char_end": 720
},
{
"id": "I08-1043.17",
"char_start": 747,
"char_end": 763
},
{
"id": "I08-1043.18",
"char_start": 769,
"char_end": 802
},
{
"id": "I08-1043.19",
"char_start": 821,
"char_end": 840
},
{
"id": "I08-1043.20",
"char_start": 862,
"char_end": 873
},
{
"id": "I08-1043.21",
"char_start": 884,
"char_end": 893
},
{
"id": "I08-1043.22",
"char_start": 928,
"char_end": 934
},
{
"id": "I08-1043.23",
"char_start": 939,
"char_end": 946
},
{
"id": "I08-1043.24",
"char_start": 956,
"char_end": 977
},
{
"id": "I08-1043.25",
"char_start": 997,
"char_end": 1014
},
{
"id": "I08-1043.26",
"char_start": 1059,
"char_end": 1070
},
{
"id": "I08-1043.27",
"char_start": 1086,
"char_end": 1103
}
] | [
{
"label": 1,
"arg1": "I08-1043.1",
"arg2": "I08-1043.2",
"reverse": true
},
{
"label": 1,
"arg1": "I08-1043.3",
"arg2": "I08-1043.5",
"reverse": true
},
{
"label": 3,
"arg1": "I08-1043.7",
"arg2": "I08-1043.8",
"reverse": false
},
{
"label": 3,
"arg1": "I08-1043.11",
"arg2": "I08-1043.12",
"reverse": true
},
{
"label": 1,
"arg1": "I08-1043.13",
"arg2": "I08-1043.14",
"reverse": true
},
{
"label": 3,
"arg1": "I08-1043.15",
"arg2": "I08-1043.16",
"reverse": false
},
{
"label": 4,
"arg1": "I08-1043.17",
"arg2": "I08-1043.18",
"reverse": true
},
{
"label": 2,
"arg1": "I08-1043.19",
"arg2": "I08-1043.21",
"reverse": false
},
{
"label": 4,
"arg1": "I08-1043.24",
"arg2": "I08-1043.25",
"reverse": false
},
{
"label": 3,
"arg1": "I08-1043.26",
"arg2": "I08-1043.27",
"reverse": true
}
] |
W04-1307 |
Statistics Learning And Universal Grammar : Modeling Word Segmentation
|
This paper describes a computational model of word segmentation and presents simulation results on realistic acquisition. In particular, we explore the capacity and limitations of statistical learning mechanisms that have recently gained prominence in cognitive psychology and linguistics.
| [
{
"id": "W04-1307.1",
"char_start": 24,
"char_end": 43
},
{
"id": "W04-1307.2",
"char_start": 47,
"char_end": 64
},
{
"id": "W04-1307.3",
"char_start": 100,
"char_end": 121
},
{
"id": "W04-1307.4",
"char_start": 181,
"char_end": 212
},
{
"id": "W04-1307.5",
"char_start": 253,
"char_end": 273
},
{
"id": "W04-1307.6",
"char_start": 278,
"char_end": 289
}
] | [
{
"label": 3,
"arg1": "W04-1307.1",
"arg2": "W04-1307.2",
"reverse": false
},
{
"label": 1,
"arg1": "W04-1307.4",
"arg2": "W04-1307.5",
"reverse": false
}
] |
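A classic statistical-learning mechanism in this literature segments speech at dips in the transitional probability between adjacent syllables. The sketch below implements that baseline on a toy syllable stream; it illustrates the class of mechanisms the paper models rather than the paper's own model, and the threshold is hypothetical.

```python
from collections import Counter

def transitional_probs(stream):
    """P(B | A) = count(A B) / count(A) over adjacent syllable pairs."""
    unigrams, bigrams = Counter(stream), Counter(zip(stream, stream[1:]))
    return {pair: c / unigrams[pair[0]] for pair, c in bigrams.items()}

def segment(stream, threshold=0.75):
    """Insert a word boundary wherever the transitional probability dips below threshold."""
    tp = transitional_probs(stream)
    words, current = [], [stream[0]]
    for a, b in zip(stream, stream[1:]):
        if tp[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Toy "utterance" stream built from two artificial words, 'baby' and 'doggy'.
stream = ["ba", "by", "dog", "gy", "ba", "by", "ba", "by", "dog", "gy"]
print(segment(stream))  # -> ['baby', 'doggy', 'baby', 'baby', 'doggy']
```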
W04-2204 |
Automatic Construction Of A Transfer Dictionary Considering Directionality
|
In this paper, we show how to construct a transfer dictionary automatically. Dictionary construction, one of the most difficult tasks in developing a machine translation system, is expensive. To avoid this problem, we investigate how to build a dictionary using existing linguistic resources. Our algorithm can be applied to any language pair, but for the present we focus on building a Korean-to-Japanese dictionary using English as a pivot. We attempt three ways of automatic construction to corroborate the effect of the directionality of dictionaries. First, we introduce the "one-time look-up" method using a Korean-to-English and a Japanese-to-English dictionary. Second, we show a method using an "overlapping constraint" with a Korean-to-English dictionary and an English-to-Japanese dictionary. Third, we consider another alternative method rarely used for building a dictionary: an English-to-Korean dictionary and an English-to-Japanese dictionary. We found that the first method is the most effective and that the best result can be obtained by combining the three methods.
| [
{
"id": "W04-2204.1",
"char_start": 43,
"char_end": 62
},
{
"id": "W04-2204.2",
"char_start": 78,
"char_end": 101
},
{
"id": "W04-2204.3",
"char_start": 151,
"char_end": 177
},
{
"id": "W04-2204.4",
"char_start": 246,
"char_end": 256
},
{
"id": "W04-2204.5",
"char_start": 272,
"char_end": 292
},
{
"id": "W04-2204.6",
"char_start": 298,
"char_end": 307
},
{
"id": "W04-2204.7",
"char_start": 330,
"char_end": 344
},
{
"id": "W04-2204.8",
"char_start": 389,
"char_end": 418
},
{
"id": "W04-2204.9",
"char_start": 425,
"char_end": 432
},
{
"id": "W04-2204.10",
"char_start": 438,
"char_end": 443
},
{
"id": "W04-2204.11",
"char_start": 470,
"char_end": 492
},
{
"id": "W04-2204.12",
"char_start": 526,
"char_end": 540
},
{
"id": "W04-2204.13",
"char_start": 544,
"char_end": 556
},
{
"id": "W04-2204.14",
"char_start": 578,
"char_end": 603
},
{
"id": "W04-2204.15",
"char_start": 612,
"char_end": 666
},
{
"id": "W04-2204.16",
"char_start": 699,
"char_end": 723
},
{
"id": "W04-2204.17",
"char_start": 731,
"char_end": 759
},
{
"id": "W04-2204.18",
"char_start": 767,
"char_end": 797
},
{
"id": "W04-2204.19",
"char_start": 872,
"char_end": 882
},
{
"id": "W04-2204.20",
"char_start": 887,
"char_end": 915
},
{
"id": "W04-2204.21",
"char_start": 920,
"char_end": 950
}
] | [
{
"label": 4,
"arg1": "W04-2204.2",
"arg2": "W04-2204.3",
"reverse": false
},
{
"label": 1,
"arg1": "W04-2204.4",
"arg2": "W04-2204.5",
"reverse": true
},
{
"label": 1,
"arg1": "W04-2204.6",
"arg2": "W04-2204.7",
"reverse": false
},
{
"label": 1,
"arg1": "W04-2204.8",
"arg2": "W04-2204.10",
"reverse": true
},
{
"label": 3,
"arg1": "W04-2204.12",
"arg2": "W04-2204.13",
"reverse": false
},
{
"label": 1,
"arg1": "W04-2204.14",
"arg2": "W04-2204.15",
"reverse": true
}
] |
W04-2703 |
Annotating Discourse Connectives And Their Arguments
|
This paper describes a new, large scale discourse-level annotation project - the Penn Discourse TreeBank (PDTB). We present an approach to annotating a level of discourse structure that is based on identifying discourse connectives and their arguments. The PDTB is being built directly on top of the Penn TreeBank and Propbank, thus supporting the extraction of useful syntactic and semantic features and providing a richer substrate for the development and evaluation of practical algorithms. We provide a detailed preliminary analysis of inter-annotator agreement - both the level of agreement and the types of inter-annotator variation.
| [
{
"id": "W04-2703.1",
"char_start": 29,
"char_end": 67
},
{
"id": "W04-2703.2",
"char_start": 82,
"char_end": 112
},
{
"id": "W04-2703.3",
"char_start": 162,
"char_end": 181
},
{
"id": "W04-2703.4",
"char_start": 211,
"char_end": 232
},
{
"id": "W04-2703.5",
"char_start": 243,
"char_end": 252
},
{
"id": "W04-2703.6",
"char_start": 258,
"char_end": 262
},
{
"id": "W04-2703.7",
"char_start": 301,
"char_end": 314
},
{
"id": "W04-2703.8",
"char_start": 319,
"char_end": 327
},
{
"id": "W04-2703.9",
"char_start": 370,
"char_end": 401
},
{
"id": "W04-2703.10",
"char_start": 473,
"char_end": 493
},
{
"id": "W04-2703.11",
"char_start": 541,
"char_end": 566
},
{
"id": "W04-2703.12",
"char_start": 578,
"char_end": 596
},
{
"id": "W04-2703.13",
"char_start": 614,
"char_end": 639
}
] | [
{
"label": 1,
"arg1": "W04-2703.3",
"arg2": "W04-2703.4",
"reverse": true
}
] |
W05-1308 |
INTEX : A Syntactic Role Driven Protein-Protein Interaction Extractor For Bio-Medical Text
|
In this paper, we present a fully automated extraction system, named IntEx, to identify gene and protein interactions in biomedical text. Our approach is based on first splitting complex sentences into simple clausal structures made up of syntactic roles, then tagging biological entities with the help of biomedical and linguistic ontologies, and finally extracting complete interactions by analyzing the matching contents of syntactic roles and their linguistically significant combinations. Our extraction system handles complex sentences and extracts multiple and nested interactions specified in a sentence. Experimental evaluations with two other state-of-the-art extraction systems indicate that the IntEx system achieves better performance without the labor-intensive pattern engineering requirement.
| [
{
"id": "W05-1308.1",
"char_start": 29,
"char_end": 62
},
{
"id": "W05-1308.2",
"char_start": 70,
"char_end": 75
},
{
"id": "W05-1308.3",
"char_start": 89,
"char_end": 118
},
{
"id": "W05-1308.4",
"char_start": 122,
"char_end": 137
},
{
"id": "W05-1308.5",
"char_start": 180,
"char_end": 197
},
{
"id": "W05-1308.6",
"char_start": 203,
"char_end": 228
},
{
"id": "W05-1308.7",
"char_start": 240,
"char_end": 255
},
{
"id": "W05-1308.8",
"char_start": 271,
"char_end": 290
},
{
"id": "W05-1308.9",
"char_start": 308,
"char_end": 344
},
{
"id": "W05-1308.10",
"char_start": 366,
"char_end": 387
},
{
"id": "W05-1308.11",
"char_start": 426,
"char_end": 441
},
{
"id": "W05-1308.12",
"char_start": 497,
"char_end": 514
},
{
"id": "W05-1308.13",
"char_start": 523,
"char_end": 540
},
{
"id": "W05-1308.14",
"char_start": 554,
"char_end": 586
},
{
"id": "W05-1308.15",
"char_start": 602,
"char_end": 610
},
{
"id": "W05-1308.16",
"char_start": 669,
"char_end": 687
},
{
"id": "W05-1308.17",
"char_start": 706,
"char_end": 718
},
{
"id": "W05-1308.18",
"char_start": 735,
"char_end": 746
},
{
"id": "W05-1308.19",
"char_start": 775,
"char_end": 806
}
] | [
{
"label": 4,
"arg1": "W05-1308.3",
"arg2": "W05-1308.4",
"reverse": false
},
{
"label": 4,
"arg1": "W05-1308.5",
"arg2": "W05-1308.6",
"reverse": true
},
{
"label": 3,
"arg1": "W05-1308.8",
"arg2": "W05-1308.9",
"reverse": true
},
{
"label": 1,
"arg1": "W05-1308.12",
"arg2": "W05-1308.13",
"reverse": false
},
{
"label": 4,
"arg1": "W05-1308.14",
"arg2": "W05-1308.15",
"reverse": false
},
{
"label": 2,
"arg1": "W05-1308.17",
"arg2": "W05-1308.18",
"reverse": false
}
] |
W06-1605 |
Distributional Measures Of Concept- Distance : A Task-Oriented Evaluation
|
We propose a framework to derive the distance between concepts from distributional measures of word co-occurrences. We use the categories in a published thesaurus as coarse-grained concepts, allowing all possible distance values to be stored in a concept-concept matrix roughly 0.01% the size of that created by existing measures. We show that the newly proposed concept-distance measures outperform traditional distributional word-distance measures in the tasks of (1) ranking word pairs in order of semantic distance, and (2) correcting real-word spelling errors. In the latter task, of all the WordNet-based measures, only that proposed by Jiang and Conrath outperforms the best distributional concept-distance measures.
| [
{
"id": "W06-1605.1",
"char_start": 38,
"char_end": 46
},
{
"id": "W06-1605.2",
"char_start": 55,
"char_end": 63
},
{
"id": "W06-1605.3",
"char_start": 69,
"char_end": 115
},
{
"id": "W06-1605.4",
"char_start": 128,
"char_end": 138
},
{
"id": "W06-1605.5",
"char_start": 154,
"char_end": 163
},
{
"id": "W06-1605.6",
"char_start": 167,
"char_end": 190
},
{
"id": "W06-1605.7",
"char_start": 214,
"char_end": 229
},
{
"id": "W06-1605.8",
"char_start": 248,
"char_end": 270
},
{
"id": "W06-1605.9",
"char_start": 362,
"char_end": 387
},
{
"id": "W06-1605.10",
"char_start": 399,
"char_end": 448
},
{
"id": "W06-1605.11",
"char_start": 477,
"char_end": 487
},
{
"id": "W06-1605.12",
"char_start": 500,
"char_end": 517
},
{
"id": "W06-1605.13",
"char_start": 538,
"char_end": 563
},
{
"id": "W06-1605.14",
"char_start": 596,
"char_end": 618
},
{
"id": "W06-1605.15",
"char_start": 681,
"char_end": 721
}
] | [
{
"label": 3,
"arg1": "W06-1605.1",
"arg2": "W06-1605.2",
"reverse": false
},
{
"label": 4,
"arg1": "W06-1605.4",
"arg2": "W06-1605.5",
"reverse": false
},
{
"label": 4,
"arg1": "W06-1605.6",
"arg2": "W06-1605.8",
"reverse": false
},
{
"label": 6,
"arg1": "W06-1605.9",
"arg2": "W06-1605.10",
"reverse": false
},
{
"label": 3,
"arg1": "W06-1605.11",
"arg2": "W06-1605.12",
"reverse": true
},
{
"label": 6,
"arg1": "W06-1605.14",
"arg2": "W06-1605.15",
"reverse": false
}
] |
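A minimal Python sketch of the concept-distance idea in the W06-1605 abstract above: word co-occurrence vectors are pooled into coarse concept vectors and compared by cosine distance. The toy corpus and the category mapping that stands in for published thesaurus classes are invented for illustration, not taken from the paper.

# Illustrative sketch (not the authors' implementation): distributional
# concept-distance from word co-occurrence counts.
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "the surgeon treated the patient in the hospital",
    "the doctor examined the patient and prescribed medicine",
    "the lawyer argued the case in the court",
    "the judge heard the case and ruled in court",
]

# Hypothetical coarse-grained categories, as a thesaurus might provide.
concept_of = {"surgeon": "MEDICINE", "doctor": "MEDICINE", "patient": "MEDICINE",
              "hospital": "MEDICINE", "medicine": "MEDICINE",
              "lawyer": "LAW", "judge": "LAW", "case": "LAW", "court": "LAW"}

# Word co-occurrence within a sentence window, aggregated per concept.
cooc = defaultdict(Counter)
for sent in corpus:
    words = sent.split()
    for i, w in enumerate(words):
        if w not in concept_of:
            continue
        for j, c in enumerate(words):
            if i != j:
                cooc[concept_of[w]][c] += 1

def cosine_distance(a, b):
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return 1.0 - dot / (na * nb)

print(cosine_distance(cooc["MEDICINE"], cooc["LAW"]))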
W07-0208 |
Learning to Transform Linguistic Graphs
|
We argue in favor of the use of labeled directed graphs to represent various types of linguistic structures, and illustrate how this allows one to view NLP tasks as graph transformations. We present a general method for learning such transformations from an annotated corpus and describe experiments with two applications of the method: identification of non-local dependencies (using Penn Treebank data) and semantic role labeling (using Proposition Bank data).
| [
{
"id": "W07-0208.1",
"char_start": 37,
"char_end": 59
},
{
"id": "W07-0208.2",
"char_start": 90,
"char_end": 111
},
{
"id": "W07-0208.3",
"char_start": 156,
"char_end": 165
},
{
"id": "W07-0208.4",
"char_start": 169,
"char_end": 190
},
{
"id": "W07-0208.5",
"char_start": 238,
"char_end": 253
},
{
"id": "W07-0208.6",
"char_start": 262,
"char_end": 278
},
{
"id": "W07-0208.7",
"char_start": 341,
"char_end": 380
},
{
"id": "W07-0208.8",
"char_start": 388,
"char_end": 406
},
{
"id": "W07-0208.9",
"char_start": 412,
"char_end": 434
},
{
"id": "W07-0208.10",
"char_start": 442,
"char_end": 463
}
] | [
{
"label": 3,
"arg1": "W07-0208.1",
"arg2": "W07-0208.2",
"reverse": false
},
{
"label": 3,
"arg1": "W07-0208.3",
"arg2": "W07-0208.4",
"reverse": true
},
{
"label": 3,
"arg1": "W07-0208.7",
"arg2": "W07-0208.8",
"reverse": false
},
{
"label": 3,
"arg1": "W07-0208.9",
"arg2": "W07-0208.10",
"reverse": false
}
] |
P08-1105 |
Credibility Improves Topical Blog Post Retrieval
|
Topical blog post retrieval is the task of ranking blog posts with respect to their relevance for a given topic. To improve topical blog post retrieval we incorporate textual credibility indicators in the retrieval process. We consider two groups of indicators: post level (determined using information about individual blog posts only) and blog level (determined using information from the underlying blogs). We describe how to estimate these indicators and how to integrate them into a retrieval approach based on language models. Experiments on the TREC Blog track test set show that both groups of credibility indicators significantly improve retrieval effectiveness; the best performance is achieved when combining them.
| [
{
"id": "P08-1105.1",
"char_start": 1,
"char_end": 28
},
{
"id": "P08-1105.2",
"char_start": 52,
"char_end": 62
},
{
"id": "P08-1105.3",
"char_start": 85,
"char_end": 94
},
{
"id": "P08-1105.4",
"char_start": 107,
"char_end": 112
},
{
"id": "P08-1105.5",
"char_start": 125,
"char_end": 152
},
{
"id": "P08-1105.6",
"char_start": 168,
"char_end": 198
},
{
"id": "P08-1105.7",
"char_start": 206,
"char_end": 223
},
{
"id": "P08-1105.8",
"char_start": 251,
"char_end": 261
},
{
"id": "P08-1105.9",
"char_start": 321,
"char_end": 331
},
{
"id": "P08-1105.10",
"char_start": 403,
"char_end": 408
},
{
"id": "P08-1105.11",
"char_start": 445,
"char_end": 455
},
{
"id": "P08-1105.12",
"char_start": 489,
"char_end": 507
},
{
"id": "P08-1105.13",
"char_start": 517,
"char_end": 532
},
{
"id": "P08-1105.14",
"char_start": 553,
"char_end": 577
},
{
"id": "P08-1105.15",
"char_start": 603,
"char_end": 625
},
{
"id": "P08-1105.16",
"char_start": 648,
"char_end": 671
}
] | [
{
"label": 3,
"arg1": "P08-1105.2",
"arg2": "P08-1105.3",
"reverse": true
},
{
"label": 1,
"arg1": "P08-1105.5",
"arg2": "P08-1105.6",
"reverse": true
},
{
"label": 1,
"arg1": "P08-1105.11",
"arg2": "P08-1105.12",
"reverse": false
},
{
"label": 2,
"arg1": "P08-1105.15",
"arg2": "P08-1105.16",
"reverse": false
}
] |
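A hedged sketch of the retrieval setup in the P08-1105 abstract above: query-likelihood scoring under a smoothed post language model, multiplied by a credibility prior. The two post-level indicators used here (post length, absence of shouting) are stand-ins for illustration; the paper's actual indicator set is richer.

from collections import Counter
from math import log

posts = {
    "p1": "the new camera has excellent battery life and a sharp lens",
    "p2": "CAMERA!!! buy now camera camera best best",
    "p3": "battery life on this camera is average but the lens is good",
}
query = ["camera", "battery", "life"]

col = Counter()                       # collection language model counts
for text in posts.values():
    col.update(text.lower().split())
col_total = sum(col.values())

def credibility_prior(text):
    toks = text.split()
    length_ok = min(len(toks) / 10.0, 1.0)           # very short posts penalized
    no_shouting = sum(1 for t in toks if not t.isupper()) / len(toks)
    return 0.5 * length_ok + 0.5 * no_shouting       # in (0, 1]

def score(text, lam=0.8):
    tf = Counter(text.lower().split())
    n = sum(tf.values())
    s = log(credibility_prior(text))                 # log P(post)
    for q in query:                                  # + sum log P(q | post)
        p = lam * tf[q] / n + (1 - lam) * col[q] / col_total
        s += log(p)
    return s

for pid, text in sorted(posts.items(), key=lambda kv: -score(kv[1])):
    print(pid, round(score(text), 3))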
P08-2034 |
Lyric-based Song Sentiment Classification with Sentiment Vector Space Model |
Lyric-based song sentiment classification seeks to assign songs appropriate sentiment labels such as light-hearted and heavy-hearted. Four problems render the vector space model (VSM)-based text classification approach ineffective: 1) Many words within song lyrics actually contribute little to sentiment; 2) Nouns and verbs used to express sentiment are ambiguous; 3) Negations and modifiers around the sentiment keywords make particular contributions to sentiment; 4) Song lyrics are usually very short. To address these problems, the sentiment vector space model (s-VSM) is proposed to represent song lyric documents. Preliminary experiments show that the s-VSM model outperforms the VSM model in the lyric-based song sentiment classification task.
| [
{
"id": "P08-2034.1",
"char_start": 1,
"char_end": 42
},
{
"id": "P08-2034.2",
"char_start": 77,
"char_end": 93
},
{
"id": "P08-2034.3",
"char_start": 152,
"char_end": 211
},
{
"id": "P08-2034.4",
"char_start": 233,
"char_end": 238
},
{
"id": "P08-2034.5",
"char_start": 246,
"char_end": 257
},
{
"id": "P08-2034.6",
"char_start": 288,
"char_end": 297
},
{
"id": "P08-2034.7",
"char_start": 302,
"char_end": 307
},
{
"id": "P08-2034.8",
"char_start": 312,
"char_end": 317
},
{
"id": "P08-2034.9",
"char_start": 334,
"char_end": 343
},
{
"id": "P08-2034.10",
"char_start": 362,
"char_end": 371
},
{
"id": "P08-2034.11",
"char_start": 376,
"char_end": 385
},
{
"id": "P08-2034.12",
"char_start": 397,
"char_end": 415
},
{
"id": "P08-2034.13",
"char_start": 449,
"char_end": 458
},
{
"id": "P08-2034.14",
"char_start": 463,
"char_end": 473
},
{
"id": "P08-2034.15",
"char_start": 528,
"char_end": 564
},
{
"id": "P08-2034.16",
"char_start": 590,
"char_end": 609
},
{
"id": "P08-2034.17",
"char_start": 654,
"char_end": 665
},
{
"id": "P08-2034.18",
"char_start": 682,
"char_end": 691
},
{
"id": "P08-2034.19",
"char_start": 699,
"char_end": 745
}
] | [
{
"label": 4,
"arg1": "P08-2034.4",
"arg2": "P08-2034.5",
"reverse": false
},
{
"label": 3,
"arg1": "P08-2034.15",
"arg2": "P08-2034.16",
"reverse": false
},
{
"label": 6,
"arg1": "P08-2034.17",
"arg2": "P08-2034.18",
"reverse": false
}
] |
C08-1118 |
Source Language Markers in EUROPARL Translations
|
This paper shows that it is very often possible to identify the source language of medium-length speeches in the EUROPARL corpus on the basis of frequency counts of word n-grams (87.2%-96.7% accuracy depending on classification method). The paper also examines in detail which positive markers are most powerful and identifies a number of linguistic aspects as well as culture- and domain-related ones.
| [
{
"id": "C08-1118.1",
"char_start": 65,
"char_end": 80
},
{
"id": "C08-1118.2",
"char_start": 114,
"char_end": 129
},
{
"id": "C08-1118.3",
"char_start": 146,
"char_end": 162
},
{
"id": "C08-1118.4",
"char_start": 166,
"char_end": 178
},
{
"id": "C08-1118.5",
"char_start": 192,
"char_end": 200
},
{
"id": "C08-1118.6",
"char_start": 214,
"char_end": 235
},
{
"id": "C08-1118.7",
"char_start": 278,
"char_end": 294
}
] | [
{
"label": 3,
"arg1": "C08-1118.1",
"arg2": "C08-1118.2",
"reverse": false
},
{
"label": 3,
"arg1": "C08-1118.3",
"arg2": "C08-1118.4",
"reverse": false
},
{
"label": 2,
"arg1": "C08-1118.5",
"arg2": "C08-1118.6",
"reverse": true
}
] |
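A sketch of source-language identification from word n-gram frequency counts, loosely following the C08-1118 abstract above. The tiny "translated EUROPARL" snippets are fabricated, and an add-one-smoothed bigram likelihood stands in for whatever classification method the paper actually used.

from collections import Counter
from math import log

train = {
    "DE": ["we must however also consider the proposal",
           "i would however like to thank the rapporteur"],
    "FR": ["it is necessary to underline that the proposal is important",
           "it is necessary to thank the rapporteur"],
}

def bigrams(text):
    w = text.split()
    return list(zip(w, w[1:]))

models = {lang: Counter(bg for s in sents for bg in bigrams(s))
          for lang, sents in train.items()}
vocab = set(bg for m in models.values() for bg in m)

def classify(sentence):
    best = None
    for lang, m in models.items():
        total = sum(m.values())
        # add-one smoothed bigram log-likelihood under each source language
        ll = sum(log((m[bg] + 1) / (total + len(vocab) + 1))
                 for bg in bigrams(sentence))
        if best is None or ll > best[0]:
            best = (ll, lang)
    return best[1]

print(classify("it is necessary to consider the proposal"))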
C08-1128 |
Bayesian Semi-Supervised Chinese Word Segmentation for Statistical Machine Translation
|
Words in Chinese text are not naturally separated by delimiters, which poses a challenge to standard machine translation (MT) systems. In MT, the widely used approach is to apply a Chinese word segmenter trained from manually annotated data, using a fixed lexicon. Such word segmentation is not necessarily optimal for translation. We propose a Bayesian semi-supervised Chinese word segmentation model which uses both monolingual and bilingual information to derive a segmentation suitable for MT. Experiments show that our method improves a state-of-the-art MT system in a small and a large data environment.
| [
{
"id": "C08-1128.1",
"char_start": 1,
"char_end": 6
},
{
"id": "C08-1128.2",
"char_start": 10,
"char_end": 22
},
{
"id": "C08-1128.3",
"char_start": 54,
"char_end": 64
},
{
"id": "C08-1128.4",
"char_start": 93,
"char_end": 134
},
{
"id": "C08-1128.5",
"char_start": 139,
"char_end": 141
},
{
"id": "C08-1128.6",
"char_start": 182,
"char_end": 204
},
{
"id": "C08-1128.7",
"char_start": 218,
"char_end": 241
},
{
"id": "C08-1128.8",
"char_start": 257,
"char_end": 264
},
{
"id": "C08-1128.9",
"char_start": 271,
"char_end": 288
},
{
"id": "C08-1128.10",
"char_start": 320,
"char_end": 331
},
{
"id": "C08-1128.11",
"char_start": 346,
"char_end": 402
},
{
"id": "C08-1128.12",
"char_start": 419,
"char_end": 456
},
{
"id": "C08-1128.13",
"char_start": 469,
"char_end": 481
},
{
"id": "C08-1128.14",
"char_start": 495,
"char_end": 497
},
{
"id": "C08-1128.15",
"char_start": 543,
"char_end": 569
},
{
"id": "C08-1128.16",
"char_start": 587,
"char_end": 609
}
] | [
{
"label": 4,
"arg1": "C08-1128.1",
"arg2": "C08-1128.2",
"reverse": false
},
{
"label": 1,
"arg1": "C08-1128.5",
"arg2": "C08-1128.6",
"reverse": true
},
{
"label": 1,
"arg1": "C08-1128.11",
"arg2": "C08-1128.12",
"reverse": true
},
{
"label": 1,
"arg1": "C08-1128.13",
"arg2": "C08-1128.14",
"reverse": false
}
] |
C08-2010 |
The Impact of Reference Quality on Automatic MT Evaluation
|
Language resource quality is crucial in NLP. Many of the resources used are derived from data created by human beings out of an NLP context, especially regarding MT and reference translations. Indeed, automatic evaluations need high-quality data that allow the comparison of both automatic and human translations. The validation of these resources is widely recommended before being used. This paper describes the impact of using different-quality references on evaluation. Surprisingly enough, similar scores are obtained in many cases regardless of the quality. Thus, the limitations of the automatic metrics used within MT are also discussed in this regard.
| [
{
"id": "C08-2010.1",
"char_start": 1,
"char_end": 26
},
{
"id": "C08-2010.2",
"char_start": 41,
"char_end": 44
},
{
"id": "C08-2010.3",
"char_start": 129,
"char_end": 132
},
{
"id": "C08-2010.4",
"char_start": 163,
"char_end": 165
},
{
"id": "C08-2010.5",
"char_start": 170,
"char_end": 192
},
{
"id": "C08-2010.6",
"char_start": 202,
"char_end": 223
},
{
"id": "C08-2010.7",
"char_start": 229,
"char_end": 246
},
{
"id": "C08-2010.8",
"char_start": 281,
"char_end": 313
},
{
"id": "C08-2010.9",
"char_start": 431,
"char_end": 459
},
{
"id": "C08-2010.10",
"char_start": 463,
"char_end": 473
},
{
"id": "C08-2010.11",
"char_start": 594,
"char_end": 611
},
{
"id": "C08-2010.12",
"char_start": 624,
"char_end": 626
}
] | [
{
"label": 1,
"arg1": "C08-2010.6",
"arg2": "C08-2010.7",
"reverse": true
},
{
"label": 1,
"arg1": "C08-2010.9",
"arg2": "C08-2010.10",
"reverse": false
},
{
"label": 1,
"arg1": "C08-2010.11",
"arg2": "C08-2010.12",
"reverse": false
}
] |
C08-3010 |
A Linguistic Knowledge Discovery Tool : Very Large Ngram Database Search with Arbitrary Wildcards
|
In this paper, we will describe a search tool for a huge set of ngrams. The tool supports queries with an arbitrary number of wildcards. It takes a fraction of a second for a search, and can provide the fillers of the wildcards. The system runs on a single Linux PC with a reasonable amount of memory (less than 4GB) and disk space (less than 400GB). This system can be a very useful tool for linguistic knowledge discovery and other NLP tasks.
| [
{
"id": "C08-3010.1",
"char_start": 35,
"char_end": 46
},
{
"id": "C08-3010.2",
"char_start": 65,
"char_end": 71
},
{
"id": "C08-3010.3",
"char_start": 91,
"char_end": 98
},
{
"id": "C08-3010.4",
"char_start": 127,
"char_end": 136
},
{
"id": "C08-3010.5",
"char_start": 204,
"char_end": 211
},
{
"id": "C08-3010.6",
"char_start": 219,
"char_end": 228
},
{
"id": "C08-3010.7",
"char_start": 288,
"char_end": 294
},
{
"id": "C08-3010.8",
"char_start": 315,
"char_end": 325
},
{
"id": "C08-3010.9",
"char_start": 387,
"char_end": 417
},
{
"id": "C08-3010.10",
"char_start": 428,
"char_end": 437
}
] | [
{
"label": 3,
"arg1": "C08-3010.3",
"arg2": "C08-3010.4",
"reverse": true
}
] |
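A naive illustration of the query interface described in the C08-3010 abstract above: wildcard n-gram search that returns the wildcard fillers. The real system uses a large indexed database to answer in a fraction of a second; this linear scan over a toy n-gram table only demonstrates the behavior.

ngram_counts = {
    ("strong", "cup", "of", "tea"): 120,
    ("strong", "cup", "of", "coffee"): 340,
    ("powerful", "cup", "of", "coffee"): 3,
    ("strong", "sense", "of", "duty"): 85,
}

def search(pattern):
    """pattern is a tuple of words where '*' matches any single word."""
    hits = []
    for ngram, count in ngram_counts.items():
        if len(ngram) != len(pattern):
            continue
        if all(p == "*" or p == w for p, w in zip(pattern, ngram)):
            fillers = tuple(w for p, w in zip(pattern, ngram) if p == "*")
            hits.append((fillers, count))
    return sorted(hits, key=lambda h: -h[1])

print(search(("strong", "*", "of", "*")))
# -> [(('cup', 'coffee'), 340), (('cup', 'tea'), 120), (('sense', 'duty'), 85)]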
W03-2907 |
Unsupervised Learning of Bulgarian POS Tags
|
This paper presents an approach to the unsupervised learning of parts of speech which uses both morphological and syntactic information. While the model is more complex than those which have been employed for unsupervised learning of POS tags in English, which use only syntactic information, the variety of languages in the world requires that we consider morphology as well. In many languages, morphology provides better clues to a word's category than word order. We present the computational model for POS learning, and present results for applying it to Bulgarian, a Slavic language with relatively free word order and rich morphology.
| [
{
"id": "W03-2907.1",
"char_start": 40,
"char_end": 61
},
{
"id": "W03-2907.2",
"char_start": 65,
"char_end": 80
},
{
"id": "W03-2907.3",
"char_start": 97,
"char_end": 136
},
{
"id": "W03-2907.4",
"char_start": 148,
"char_end": 153
},
{
"id": "W03-2907.5",
"char_start": 210,
"char_end": 231
},
{
"id": "W03-2907.6",
"char_start": 235,
"char_end": 254
},
{
"id": "W03-2907.7",
"char_start": 271,
"char_end": 292
},
{
"id": "W03-2907.8",
"char_start": 309,
"char_end": 318
},
{
"id": "W03-2907.9",
"char_start": 358,
"char_end": 368
},
{
"id": "W03-2907.10",
"char_start": 386,
"char_end": 395
},
{
"id": "W03-2907.11",
"char_start": 397,
"char_end": 407
},
{
"id": "W03-2907.12",
"char_start": 456,
"char_end": 466
},
{
"id": "W03-2907.13",
"char_start": 483,
"char_end": 502
},
{
"id": "W03-2907.14",
"char_start": 507,
"char_end": 519
},
{
"id": "W03-2907.15",
"char_start": 560,
"char_end": 569
},
{
"id": "W03-2907.16",
"char_start": 573,
"char_end": 588
},
{
"id": "W03-2907.17",
"char_start": 605,
"char_end": 620
},
{
"id": "W03-2907.18",
"char_start": 625,
"char_end": 640
}
] | [
{
"label": 1,
"arg1": "W03-2907.1",
"arg2": "W03-2907.3",
"reverse": true
},
{
"label": 1,
"arg1": "W03-2907.5",
"arg2": "W03-2907.7",
"reverse": true
},
{
"label": 4,
"arg1": "W03-2907.8",
"arg2": "W03-2907.9",
"reverse": true
},
{
"label": 6,
"arg1": "W03-2907.11",
"arg2": "W03-2907.12",
"reverse": false
},
{
"label": 1,
"arg1": "W03-2907.13",
"arg2": "W03-2907.14",
"reverse": false
},
{
"label": 3,
"arg1": "W03-2907.15",
"arg2": "W03-2907.17",
"reverse": false
}
] |
W08-2122 |
A Latent Variable Model of Synchronous Parsing for Syntactic and Semantic Dependencies
|
We propose a solution to the challenge of the CoNLL 2008 shared task that uses a generative history-based latent variable model to predict the most likely derivation of a synchronous dependency parser for both syntactic and semantic dependencies. The submitted model yields 79.1% macro-average F1 performance, for the joint task, 86.9% syntactic dependencies LAS and 71.0% semantic dependencies F1. A larger model trained after the deadline achieves 80.5% macro-average F1, 87.6% syntactic dependencies LAS, and 73.1% semantic dependencies F1.
| [
{
"id": "W08-2122.1",
"char_start": 47,
"char_end": 69
},
{
"id": "W08-2122.2",
"char_start": 82,
"char_end": 128
},
{
"id": "W08-2122.3",
"char_start": 156,
"char_end": 166
},
{
"id": "W08-2122.4",
"char_start": 172,
"char_end": 201
},
{
"id": "W08-2122.5",
"char_start": 211,
"char_end": 246
},
{
"id": "W08-2122.6",
"char_start": 262,
"char_end": 267
},
{
"id": "W08-2122.7",
"char_start": 281,
"char_end": 309
},
{
"id": "W08-2122.8",
"char_start": 337,
"char_end": 363
},
{
"id": "W08-2122.9",
"char_start": 374,
"char_end": 398
},
{
"id": "W08-2122.10",
"char_start": 409,
"char_end": 414
},
{
"id": "W08-2122.11",
"char_start": 457,
"char_end": 473
},
{
"id": "W08-2122.12",
"char_start": 481,
"char_end": 507
},
{
"id": "W08-2122.13",
"char_start": 519,
"char_end": 543
}
] | [
{
"label": 1,
"arg1": "W08-2122.1",
"arg2": "W08-2122.2",
"reverse": true
},
{
"label": 3,
"arg1": "W08-2122.3",
"arg2": "W08-2122.5",
"reverse": false
},
{
"label": 2,
"arg1": "W08-2122.6",
"arg2": "W08-2122.7",
"reverse": false
},
{
"label": 2,
"arg1": "W08-2122.10",
"arg2": "W08-2122.11",
"reverse": false
}
] |
P03-1034 |
Integrating Discourse Markers Into A Pipelined Natural Language Generation Architecture
|
Pipelined Natural Language Generation (NLG) systems have grown increasingly complex as architectural modules were added to support language functionalities such as referring expressions, lexical choice, and revision. This has given rise to discussions about the relative placement of these new modules in the overall architecture. Recent work on another aspect of multi-paragraph text, discourse markers, indicates it is time to consider where a discourse marker insertion algorithm fits in. We present examples which suggest that in a pipelined NLG architecture, the best approach is to strongly tie it to a revision component. Finally, we evaluate the approach in a working multi-page system.
| [
{
"id": "P03-1034.1",
"char_start": 1,
"char_end": 52
},
{
"id": "P03-1034.2",
"char_start": 88,
"char_end": 109
},
{
"id": "P03-1034.3",
"char_start": 132,
"char_end": 156
},
{
"id": "P03-1034.4",
"char_start": 165,
"char_end": 186
},
{
"id": "P03-1034.5",
"char_start": 188,
"char_end": 202
},
{
"id": "P03-1034.6",
"char_start": 208,
"char_end": 216
},
{
"id": "P03-1034.7",
"char_start": 295,
"char_end": 302
},
{
"id": "P03-1034.8",
"char_start": 318,
"char_end": 330
},
{
"id": "P03-1034.9",
"char_start": 365,
"char_end": 385
},
{
"id": "P03-1034.10",
"char_start": 387,
"char_end": 404
},
{
"id": "P03-1034.11",
"char_start": 447,
"char_end": 483
},
{
"id": "P03-1034.12",
"char_start": 537,
"char_end": 563
},
{
"id": "P03-1034.13",
"char_start": 610,
"char_end": 628
},
{
"id": "P03-1034.14",
"char_start": 677,
"char_end": 694
}
] | [
{
"label": 1,
"arg1": "P03-1034.2",
"arg2": "P03-1034.3",
"reverse": false
},
{
"label": 4,
"arg1": "P03-1034.7",
"arg2": "P03-1034.8",
"reverse": false
},
{
"label": 4,
"arg1": "P03-1034.12",
"arg2": "P03-1034.13",
"reverse": true
}
] |
P06-1088 |
Multi-Tagging For Lexicalized-Grammar Parsing
|
With performance above 97% accuracy for newspaper text, part of speech (pos) tagging might be considered a solved problem. Previous studies have shown that allowing the parser to resolve pos tag ambiguity does not improve performance. However, for grammar formalisms which use more fine-grained grammatical categories, for example tag and ccg, tagging accuracy is much lower. In fact, for these formalisms, premature ambiguity resolution makes parsing infeasible. We describe a multi-tagging approach which maintains a suitable level of lexical category ambiguity for accurate and efficient ccg parsing. We extend this multi-tagging approach to the pos level to overcome errors introduced by automatically assigned pos tags. Although pos tagging accuracy seems high, maintaining some pos tag ambiguity in the language processing pipeline results in more accurate ccg supertagging.
| [
{
"id": "P06-1088.1",
"char_start": 28,
"char_end": 36
},
{
"id": "P06-1088.2",
"char_start": 41,
"char_end": 55
},
{
"id": "P06-1088.3",
"char_start": 57,
"char_end": 85
},
{
"id": "P06-1088.4",
"char_start": 170,
"char_end": 176
},
{
"id": "P06-1088.5",
"char_start": 188,
"char_end": 205
},
{
"id": "P06-1088.6",
"char_start": 249,
"char_end": 267
},
{
"id": "P06-1088.7",
"char_start": 283,
"char_end": 318
},
{
"id": "P06-1088.8",
"char_start": 332,
"char_end": 335
},
{
"id": "P06-1088.9",
"char_start": 340,
"char_end": 343
},
{
"id": "P06-1088.10",
"char_start": 345,
"char_end": 361
},
{
"id": "P06-1088.11",
"char_start": 396,
"char_end": 406
},
{
"id": "P06-1088.12",
"char_start": 418,
"char_end": 438
},
{
"id": "P06-1088.13",
"char_start": 445,
"char_end": 452
},
{
"id": "P06-1088.14",
"char_start": 479,
"char_end": 501
},
{
"id": "P06-1088.15",
"char_start": 538,
"char_end": 564
},
{
"id": "P06-1088.16",
"char_start": 592,
"char_end": 603
},
{
"id": "P06-1088.17",
"char_start": 620,
"char_end": 642
},
{
"id": "P06-1088.18",
"char_start": 650,
"char_end": 659
},
{
"id": "P06-1088.19",
"char_start": 716,
"char_end": 724
},
{
"id": "P06-1088.20",
"char_start": 735,
"char_end": 755
},
{
"id": "P06-1088.21",
"char_start": 785,
"char_end": 802
},
{
"id": "P06-1088.22",
"char_start": 810,
"char_end": 838
},
{
"id": "P06-1088.23",
"char_start": 864,
"char_end": 880
}
] | [
{
"label": 2,
"arg1": "P06-1088.1",
"arg2": "P06-1088.3",
"reverse": true
},
{
"label": 1,
"arg1": "P06-1088.4",
"arg2": "P06-1088.5",
"reverse": false
},
{
"label": 1,
"arg1": "P06-1088.6",
"arg2": "P06-1088.7",
"reverse": true
},
{
"label": 1,
"arg1": "P06-1088.14",
"arg2": "P06-1088.16",
"reverse": false
},
{
"label": 2,
"arg1": "P06-1088.21",
"arg2": "P06-1088.23",
"reverse": false
}
] |
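A minimal sketch of the multi-tagging idea in the P06-1088 abstract above: instead of committing to one tag per word, keep every tag whose probability is within a factor beta of the best tag. The per-word tag distributions here are made up; a real supertagger or POS tagger would supply them.

def multi_tag(tag_probs, beta=0.1):
    kept = {}
    for word, probs in tag_probs.items():
        best = max(probs.values())
        kept[word] = [t for t, p in probs.items() if p >= beta * best]
    return kept

tag_probs = {
    "bank":  {"N": 0.7, "V": 0.25, "ADJ": 0.05},
    "loans": {"N": 0.9, "V": 0.1},
}
print(multi_tag(tag_probs, beta=0.1))
# 'bank' keeps N and V; ADJ falls below the 0.1 * 0.7 cutoff.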
P06-3008 |
Discursive Usage Of Six Chinese Punctuation Marks
|
Both rhetorical structure and punctuation have been helpful in discourse processing. Based on a corpus annotation project, this paper reports the discursive usage of 6 Chinese punctuation marks in news commentary texts: Colon, Dash, Ellipsis, Exclamation Mark, Question Mark, and Semicolon. The rhetorical patterns of these marks are compared against patterns around cue phrases in general. Results show that these Chinese punctuation marks, though fewer in number than cue phrases, are easy to identify, have strong correlation with certain relations, and can be used as distinctive indicators of nuclearity in Chinese texts.
| [
{
"id": "P06-3008.1",
"char_start": 6,
"char_end": 26
},
{
"id": "P06-3008.2",
"char_start": 31,
"char_end": 42
},
{
"id": "P06-3008.3",
"char_start": 64,
"char_end": 84
},
{
"id": "P06-3008.4",
"char_start": 97,
"char_end": 122
},
{
"id": "P06-3008.5",
"char_start": 147,
"char_end": 163
},
{
"id": "P06-3008.6",
"char_start": 169,
"char_end": 194
},
{
"id": "P06-3008.7",
"char_start": 198,
"char_end": 219
},
{
"id": "P06-3008.8",
"char_start": 221,
"char_end": 226
},
{
"id": "P06-3008.9",
"char_start": 228,
"char_end": 232
},
{
"id": "P06-3008.10",
"char_start": 234,
"char_end": 242
},
{
"id": "P06-3008.11",
"char_start": 244,
"char_end": 260
},
{
"id": "P06-3008.12",
"char_start": 262,
"char_end": 275
},
{
"id": "P06-3008.13",
"char_start": 281,
"char_end": 290
},
{
"id": "P06-3008.14",
"char_start": 296,
"char_end": 315
},
{
"id": "P06-3008.15",
"char_start": 352,
"char_end": 360
},
{
"id": "P06-3008.16",
"char_start": 368,
"char_end": 379
},
{
"id": "P06-3008.17",
"char_start": 416,
"char_end": 441
},
{
"id": "P06-3008.18",
"char_start": 471,
"char_end": 482
},
{
"id": "P06-3008.19",
"char_start": 613,
"char_end": 626
}
] | [
{
"label": 1,
"arg1": "P06-3008.2",
"arg2": "P06-3008.3",
"reverse": false
},
{
"label": 3,
"arg1": "P06-3008.5",
"arg2": "P06-3008.6",
"reverse": false
},
{
"label": 6,
"arg1": "P06-3008.14",
"arg2": "P06-3008.15",
"reverse": false
},
{
"label": 6,
"arg1": "P06-3008.17",
"arg2": "P06-3008.18",
"reverse": false
}
] |
C90-3007 | Partial Descriptions And Systemic Grammar
|
This paper examines the properties of feature-based partial descriptions built on top of Halliday's systemic networks. We show that the crucial operation of consistency checking for such descriptions is NP-complete, and therefore probably intractable, but proceed to develop algorithms which can sometimes alleviate the unpleasant consequences of this intractability.
| [
{
"id": "C90-3007.1",
"char_start": 39,
"char_end": 73
},
{
"id": "C90-3007.2",
"char_start": 90,
"char_end": 118
},
{
"id": "C90-3007.3",
"char_start": 158,
"char_end": 178
},
{
"id": "C90-3007.4",
"char_start": 276,
"char_end": 286
},
{
"id": "C90-3007.5",
"char_start": 353,
"char_end": 367
}
] | [
{
"label": 1,
"arg1": "C90-3007.4",
"arg2": "C90-3007.5",
"reverse": false
}
] |
C94-1091 | Classifier Assignment By Corpus-Based Approach
|
This paper presents an algorithm for selecting an appropriate classifier word for a noun. In the Thai language, it frequently happens that there is fluctuation in the choice of classifier for a given concrete noun, both from the point of view of the whole speech community and of individual speakers. Basically, there is no exact rule for classifier selection. The most we can do in the rule-based approach is to give a default rule to pick up a corresponding classifier for each noun. Registration of classifier for each noun is limited to the type of unit classifier because other types are open due to the meaning of representation. We propose a corpus-based method (Biber, 1993; Nagao, 1993; Smadja, 1993) which generates Noun Classifier Associations (NCA) to overcome the problems in classifier assignment and semantic construction of noun phrases. The NCA is created statistically from a large corpus and recomposed under concept hierarchy constraints and frequency of occurrences.
| [
{
"id": "C94-1091.1",
"char_start": 63,
"char_end": 78
},
{
"id": "C94-1091.2",
"char_start": 85,
"char_end": 89
},
{
"id": "C94-1091.3",
"char_start": 94,
"char_end": 107
},
{
"id": "C94-1091.4",
"char_start": 174,
"char_end": 184
},
{
"id": "C94-1091.5",
"char_start": 197,
"char_end": 210
},
{
"id": "C94-1091.6",
"char_start": 253,
"char_end": 269
},
{
"id": "C94-1091.7",
"char_start": 274,
"char_end": 293
},
{
"id": "C94-1091.8",
"char_start": 333,
"char_end": 353
},
{
"id": "C94-1091.9",
"char_start": 382,
"char_end": 401
},
{
"id": "C94-1091.10",
"char_start": 415,
"char_end": 427
},
{
"id": "C94-1091.11",
"char_start": 455,
"char_end": 465
},
{
"id": "C94-1091.12",
"char_start": 474,
"char_end": 478
},
{
"id": "C94-1091.13",
"char_start": 496,
"char_end": 506
},
{
"id": "C94-1091.14",
"char_start": 516,
"char_end": 520
},
{
"id": "C94-1091.15",
"char_start": 539,
"char_end": 562
},
{
"id": "C94-1091.16",
"char_start": 643,
"char_end": 662
},
{
"id": "C94-1091.17",
"char_start": 717,
"char_end": 751
},
{
"id": "C94-1091.18",
"char_start": 780,
"char_end": 801
},
{
"id": "C94-1091.19",
"char_start": 806,
"char_end": 842
},
{
"id": "C94-1091.20",
"char_start": 848,
"char_end": 851
},
{
"id": "C94-1091.21",
"char_start": 890,
"char_end": 896
},
{
"id": "C94-1091.22",
"char_start": 918,
"char_end": 947
},
{
"id": "C94-1091.23",
"char_start": 952,
"char_end": 976
}
] | [
{
"label": 3,
"arg1": "C94-1091.1",
"arg2": "C94-1091.2",
"reverse": false
},
{
"label": 3,
"arg1": "C94-1091.4",
"arg2": "C94-1091.5",
"reverse": false
},
{
"label": 3,
"arg1": "C94-1091.11",
"arg2": "C94-1091.12",
"reverse": false
},
{
"label": 3,
"arg1": "C94-1091.13",
"arg2": "C94-1091.14",
"reverse": false
},
{
"label": 1,
"arg1": "C94-1091.16",
"arg2": "C94-1091.18",
"reverse": false
}
] |
C02-1120 |
An Unsupervised Learning Method For Associative Relationships Between Verb Phrases
|
This paper describes an unsupervised learning method for associative relationships between verb phrases, which is important in developing reliable Q&A systems. Consider the situation that a user gives a query "How much petrol was imported to Japan from Saudi Arabia?" to a Q&A system, but the text given to the system includes only the description "X tonnes of petrol was conveyed to Japan from Saudi Arabia". We think that the description is a good clue to find the answer for our query, "X tonnes". But there is no large-scale database that provides the associative relationship between "imported" and "conveyed". Our aim is to develop an unsupervised learning method that can obtain such an associative relationship, which we call scenario consistency. The method we are currently working on uses an expectation-maximization (EM) based word-clustering algorithm, and we have evaluated the effectiveness of this method using Japanese verb phrases.
| [
{
"id": "C02-1120.1",
"char_start": 25,
"char_end": 53
},
{
"id": "C02-1120.2",
"char_start": 58,
"char_end": 104
},
{
"id": "C02-1120.3",
"char_start": 148,
"char_end": 163
},
{
"id": "C02-1120.4",
"char_start": 208,
"char_end": 213
},
{
"id": "C02-1120.5",
"char_start": 278,
"char_end": 292
},
{
"id": "C02-1120.6",
"char_start": 302,
"char_end": 306
},
{
"id": "C02-1120.7",
"char_start": 345,
"char_end": 356
},
{
"id": "C02-1120.8",
"char_start": 437,
"char_end": 448
},
{
"id": "C02-1120.9",
"char_start": 491,
"char_end": 496
},
{
"id": "C02-1120.10",
"char_start": 526,
"char_end": 546
},
{
"id": "C02-1120.11",
"char_start": 565,
"char_end": 589
},
{
"id": "C02-1120.12",
"char_start": 650,
"char_end": 678
},
{
"id": "C02-1120.13",
"char_start": 703,
"char_end": 727
},
{
"id": "C02-1120.14",
"char_start": 743,
"char_end": 763
},
{
"id": "C02-1120.15",
"char_start": 812,
"char_end": 873
},
{
"id": "C02-1120.16",
"char_start": 936,
"char_end": 957
}
] | [
{
"label": 1,
"arg1": "C02-1120.1",
"arg2": "C02-1120.2",
"reverse": false
},
{
"label": 1,
"arg1": "C02-1120.12",
"arg2": "C02-1120.13",
"reverse": false
},
{
"label": 1,
"arg1": "C02-1120.15",
"arg2": "C02-1120.16",
"reverse": false
}
] |
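A hedged sketch of EM-based word clustering as mentioned in the C02-1120 abstract above: verbs are softly clustered by the nouns they co-occur with, using EM for a small mixture of multinomials. The co-occurrence counts and the cluster count are toy choices for illustration, not the authors' setup.

import random
from math import log, exp

verb_noun = {          # co-occurrence counts: verb -> {object noun: count}
    "import":  {"petrol": 5, "goods": 3},
    "convey":  {"petrol": 4, "goods": 2},
    "sing":    {"song": 6, "anthem": 2},
    "perform": {"song": 3, "anthem": 4},
}
nouns = sorted({n for d in verb_noun.values() for n in d})
K = 2
random.seed(0)

# random soft initialization of P(cluster | verb)
resp = {v: [random.random() for _ in range(K)] for v in verb_noun}
for v in resp:
    s = sum(resp[v])
    resp[v] = [r / s for r in resp[v]]

for _ in range(30):
    # M-step: cluster priors and per-cluster noun distributions (add-one)
    prior = [sum(resp[v][k] for v in verb_noun) / len(verb_noun) for k in range(K)]
    emit = []
    for k in range(K):
        c = {n: 1.0 for n in nouns}
        for v, d in verb_noun.items():
            for n, cnt in d.items():
                c[n] += resp[v][k] * cnt
        tot = sum(c.values())
        emit.append({n: c[n] / tot for n in nouns})
    # E-step: recompute P(cluster | verb) from the current model
    for v, d in verb_noun.items():
        logs = [log(prior[k]) + sum(cnt * log(emit[k][n]) for n, cnt in d.items())
                for k in range(K)]
        m = max(logs)
        w = [exp(l - m) for l in logs]
        s = sum(w)
        resp[v] = [x / s for x in w]

for v in verb_noun:
    print(v, [round(r, 2) for r in resp[v]])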
C04-1022 | Automatic Learning Of Language Model Structure
|
Statistical language modeling remains a challenging task, in particular for morphologically rich languages. Recently, new approaches based on factored language models have been developed to address this problem. These models provide principled ways of including additional conditioning variables other than the preceding words, such as morphological or syntactic features. However, the number of possible choices for model parameters creates a large space of models that cannot be searched exhaustively. This paper presents an entirely data-driven model selection procedure based on genetic search, which is shown to outperform both knowledge-based and random selection procedures on two different language modeling tasks (Arabic and Turkish).
| [
{
"id": "C04-1022.1",
"char_start": 1,
"char_end": 30
},
{
"id": "C04-1022.2",
"char_start": 77,
"char_end": 107
},
{
"id": "C04-1022.3",
"char_start": 143,
"char_end": 167
},
{
"id": "C04-1022.4",
"char_start": 219,
"char_end": 225
},
{
"id": "C04-1022.5",
"char_start": 274,
"char_end": 296
},
{
"id": "C04-1022.6",
"char_start": 312,
"char_end": 327
},
{
"id": "C04-1022.7",
"char_start": 337,
"char_end": 372
},
{
"id": "C04-1022.8",
"char_start": 418,
"char_end": 434
},
{
"id": "C04-1022.9",
"char_start": 445,
"char_end": 466
},
{
"id": "C04-1022.10",
"char_start": 528,
"char_end": 574
},
{
"id": "C04-1022.11",
"char_start": 584,
"char_end": 598
},
{
"id": "C04-1022.12",
"char_start": 634,
"char_end": 681
},
{
"id": "C04-1022.13",
"char_start": 699,
"char_end": 722
},
{
"id": "C04-1022.14",
"char_start": 724,
"char_end": 730
},
{
"id": "C04-1022.15",
"char_start": 735,
"char_end": 742
}
] | [
{
"label": 6,
"arg1": "C04-1022.4",
"arg2": "C04-1022.6",
"reverse": false
},
{
"label": 1,
"arg1": "C04-1022.10",
"arg2": "C04-1022.11",
"reverse": true
},
{
"label": 1,
"arg1": "C04-1022.12",
"arg2": "C04-1022.13",
"reverse": false
}
] |
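A sketch of data-driven model-structure selection by genetic search, in the spirit of the C04-1022 abstract above. A genome is a bitstring choosing which conditioning factors a factored language model uses. The fitness function below is a synthetic stand-in so the example runs on its own; in the real setting it would be the held-out perplexity of the corresponding language model, and the factor names are illustrative.

import random

FACTORS = ["word-1", "word-2", "morph-1", "pos-1", "stem-1", "pos-2"]
random.seed(1)

def fitness(genome):
    # Stand-in objective: pretend factors 0, 2 and 3 help, extra factors cost.
    useful = sum(g for i, g in enumerate(genome) if i in (0, 2, 3))
    return useful - 0.3 * sum(genome)

def evolve(pop_size=20, generations=40, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in FACTORS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(FACTORS))      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print([f for f, g in zip(FACTORS, best) if g])   # selected conditioning factors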
E85-1004 |
Montagovian Definite Clause Grammar
|
This paper reports a completed stage of ongoing research at the University of York. Landsbergen's advocacy of analytical inverses for compositional syntax rules encourages the application of Definite Clause Grammar techniques to the construction of a parser returning Montague analysis trees. A parser MDCC is presented which implements an augmented Friedman-Warren algorithm permitting post referencing and interfaces with a language of intensional logic translator LILT so as to display the derivational history of corresponding reduced IL formulae. Some familiarity with Montague's PTQ and the basic DCG mechanism is assumed.
| [
{
"id": "E85-1004.1",
"char_start": 111,
"char_end": 130
},
{
"id": "E85-1004.2",
"char_start": 135,
"char_end": 161
},
{
"id": "E85-1004.3",
"char_start": 192,
"char_end": 226
},
{
"id": "E85-1004.4",
"char_start": 252,
"char_end": 258
},
{
"id": "E85-1004.5",
"char_start": 269,
"char_end": 293
},
{
"id": "E85-1004.6",
"char_start": 297,
"char_end": 308
},
{
"id": "E85-1004.7",
"char_start": 342,
"char_end": 379
},
{
"id": "E85-1004.8",
"char_start": 391,
"char_end": 407
},
{
"id": "E85-1004.9",
"char_start": 443,
"char_end": 476
},
{
"id": "E85-1004.10",
"char_start": 498,
"char_end": 518
},
{
"id": "E85-1004.11",
"char_start": 536,
"char_end": 555
},
{
"id": "E85-1004.12",
"char_start": 579,
"char_end": 593
},
{
"id": "E85-1004.13",
"char_start": 602,
"char_end": 621
}
] | [
{
"label": 3,
"arg1": "E85-1004.1",
"arg2": "E85-1004.2",
"reverse": false
},
{
"label": 1,
"arg1": "E85-1004.3",
"arg2": "E85-1004.4",
"reverse": false
},
{
"label": 1,
"arg1": "E85-1004.6",
"arg2": "E85-1004.7",
"reverse": true
},
{
"label": 3,
"arg1": "E85-1004.10",
"arg2": "E85-1004.11",
"reverse": false
}
] |
E89-1040 |
An Approach To Sentence-Level Anaphora In Machine Translation
|
Theoretical research in the area of machine translation usually involves the search for and creation of an appropriate formalism. An important issue in this respect is the way in which the compositionality of translation is to be defined. In this paper, we will introduce the anaphoric component of the Mimo formalism. It makes the definition and translation of anaphoric relations possible, relations which are usually problematic for systems that adhere to strict compositionality. In Mimo, the translation of anaphoric relations is compositional. The anaphoric component is used to define linguistic phenomena such as wh-movement, the passive and the binding of reflexives and pronouns mono-lingually. The actual working of the component will be shown in this paper by means of a detailed discussion of wh-movement.
| [
{
"id": "E89-1040.1",
"char_start": 37,
"char_end": 56
},
{
"id": "E89-1040.2",
"char_start": 120,
"char_end": 129
},
{
"id": "E89-1040.3",
"char_start": 190,
"char_end": 206
},
{
"id": "E89-1040.4",
"char_start": 210,
"char_end": 221
},
{
"id": "E89-1040.5",
"char_start": 277,
"char_end": 296
},
{
"id": "E89-1040.6",
"char_start": 304,
"char_end": 318
},
{
"id": "E89-1040.7",
"char_start": 348,
"char_end": 359
},
{
"id": "E89-1040.8",
"char_start": 363,
"char_end": 382
},
{
"id": "E89-1040.9",
"char_start": 393,
"char_end": 402
},
{
"id": "E89-1040.10",
"char_start": 460,
"char_end": 483
},
{
"id": "E89-1040.11",
"char_start": 488,
"char_end": 492
},
{
"id": "E89-1040.12",
"char_start": 498,
"char_end": 509
},
{
"id": "E89-1040.13",
"char_start": 513,
"char_end": 532
},
{
"id": "E89-1040.14",
"char_start": 555,
"char_end": 574
},
{
"id": "E89-1040.15",
"char_start": 593,
"char_end": 613
},
{
"id": "E89-1040.16",
"char_start": 622,
"char_end": 633
},
{
"id": "E89-1040.17",
"char_start": 639,
"char_end": 646
},
{
"id": "E89-1040.18",
"char_start": 655,
"char_end": 689
},
{
"id": "E89-1040.19",
"char_start": 807,
"char_end": 818
}
] | [
{
"label": 1,
"arg1": "E89-1040.1",
"arg2": "E89-1040.2",
"reverse": true
},
{
"label": 3,
"arg1": "E89-1040.3",
"arg2": "E89-1040.4",
"reverse": false
},
{
"label": 4,
"arg1": "E89-1040.5",
"arg2": "E89-1040.6",
"reverse": false
},
{
"label": 1,
"arg1": "E89-1040.11",
"arg2": "E89-1040.12",
"reverse": false
},
{
"label": 1,
"arg1": "E89-1040.14",
"arg2": "E89-1040.15",
"reverse": false
}
] |
C96-1062 | Interpretation Of Nominal Compounds : Combining Domain-Independent And Domain-Specific Information
|
A domain independent model is proposed for the automated interpretation of nominal compounds in English. This model is meant to account for productive rules of interpretation which are inferred from the morpho-syntactic and semantic characteristics of the nominal constituents. In particular, we make extensive use of Pustejovsky's principles concerning the predicative information associated with nominals. We argue that it is necessary to draw a line between generalizable semantic principles and domain-specific semantic information. We explain this distinction and we show how this model may be applied to the interpretation of compounds in real texts, provided that complementary semantic information is retrieved.
| [
{
"id": "C96-1062.1",
"char_start": 3,
"char_end": 27
},
{
"id": "C96-1062.2",
"char_start": 48,
"char_end": 72
},
{
"id": "C96-1062.3",
"char_start": 76,
"char_end": 93
},
{
"id": "C96-1062.4",
"char_start": 97,
"char_end": 104
},
{
"id": "C96-1062.5",
"char_start": 111,
"char_end": 116
},
{
"id": "C96-1062.6",
"char_start": 141,
"char_end": 175
},
{
"id": "C96-1062.7",
"char_start": 204,
"char_end": 249
},
{
"id": "C96-1062.8",
"char_start": 257,
"char_end": 277
},
{
"id": "C96-1062.9",
"char_start": 359,
"char_end": 382
},
{
"id": "C96-1062.10",
"char_start": 399,
"char_end": 407
},
{
"id": "C96-1062.11",
"char_start": 462,
"char_end": 495
},
{
"id": "C96-1062.12",
"char_start": 500,
"char_end": 536
},
{
"id": "C96-1062.13",
"char_start": 615,
"char_end": 629
},
{
"id": "C96-1062.14",
"char_start": 633,
"char_end": 642
},
{
"id": "C96-1062.15",
"char_start": 646,
"char_end": 656
},
{
"id": "C96-1062.16",
"char_start": 686,
"char_end": 706
}
] | [
{
"label": 1,
"arg1": "C96-1062.1",
"arg2": "C96-1062.2",
"reverse": false
},
{
"label": 4,
"arg1": "C96-1062.3",
"arg2": "C96-1062.4",
"reverse": false
},
{
"label": 3,
"arg1": "C96-1062.7",
"arg2": "C96-1062.8",
"reverse": false
},
{
"label": 3,
"arg1": "C96-1062.9",
"arg2": "C96-1062.10",
"reverse": false
},
{
"label": 6,
"arg1": "C96-1062.11",
"arg2": "C96-1062.12",
"reverse": false
},
{
"label": 3,
"arg1": "C96-1062.13",
"arg2": "C96-1062.14",
"reverse": false
}
] |
P05-1018 | Modeling Local Coherence: An Entity-Based Approach
|
This paper considers the problem of automatic assessment of local coherence. We present a novel entity-based representation of discourse which is inspired by Centering Theory and can be computed automatically from raw text. We view coherence assessment as a ranking learning problem and show that the proposed discourse representation supports the effective learning of a ranking function. Our experiments demonstrate that the induced model achieves significantly higher accuracy than a state-of-the-art coherence model.
| [
{
"id": "P05-1018.1",
"char_start": 61,
"char_end": 76
},
{
"id": "P05-1018.2",
"char_start": 97,
"char_end": 124
},
{
"id": "P05-1018.3",
"char_start": 128,
"char_end": 137
},
{
"id": "P05-1018.4",
"char_start": 159,
"char_end": 175
},
{
"id": "P05-1018.5",
"char_start": 215,
"char_end": 223
},
{
"id": "P05-1018.6",
"char_start": 233,
"char_end": 253
},
{
"id": "P05-1018.7",
"char_start": 259,
"char_end": 283
},
{
"id": "P05-1018.8",
"char_start": 311,
"char_end": 335
},
{
"id": "P05-1018.9",
"char_start": 373,
"char_end": 389
},
{
"id": "P05-1018.10",
"char_start": 428,
"char_end": 441
},
{
"id": "P05-1018.11",
"char_start": 472,
"char_end": 480
},
{
"id": "P05-1018.12",
"char_start": 488,
"char_end": 520
}
] | [
{
"label": 3,
"arg1": "P05-1018.2",
"arg2": "P05-1018.3",
"reverse": false
},
{
"label": 3,
"arg1": "P05-1018.6",
"arg2": "P05-1018.7",
"reverse": true
},
{
"label": 6,
"arg1": "P05-1018.10",
"arg2": "P05-1018.12",
"reverse": false
}
] |
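A minimal sketch of the entity-based discourse representation in the P05-1018 abstract above: a discourse becomes a grid of entity roles per sentence (S = subject, O = object, X = other, '-' = absent), and local coherence features are the distribution of role transitions down each column. The parsed roles below are supplied by hand; a real system would compute them automatically from raw text.

from collections import Counter
from itertools import product

# One (entity -> role) mapping per sentence of a short toy discourse.
sentences = [
    {"Microsoft": "S", "suit": "O"},
    {"Microsoft": "S", "evidence": "O"},
    {"suit": "S", "Microsoft": "X"},
]
entities = sorted({e for s in sentences for e in s})
grid = {e: [s.get(e, "-") for s in sentences] for e in entities}

transitions = Counter()
for e in entities:
    col = grid[e]
    for a, b in zip(col, col[1:]):
        transitions[(a, b)] += 1
total = sum(transitions.values())

# Probability of each possible role transition: the coherence feature vector.
for pair in product("SOX-", repeat=2):
    print(pair, transitions[pair] / total)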
P05-1056 |
Using Conditional Random Fields For Sentence Boundary Detection In Speech
|
Sentence boundary detection in speech is important for enriching speech recognition output, making it easier for humans to read and downstream modules to process. In previous work, we have developed hidden Markov model (HMM) and maximum entropy (Maxent) classifiers that integrate textual and prosodic knowledge sources for detecting sentence boundaries. In this paper, we evaluate the use of a conditional random field (CRF) for this task and relate results with this model to our prior work. We evaluate across two corpora (conversational telephone speech and broadcast news speech) on both human transcriptions and speech recognition output. In general, our CRF model yields a lower error rate than the HMM and Max-ent models on the NIST sentence boundary detection task in speech, although it is interesting to note that the best results are achieved by three-way voting among the classifiers. This probably occurs because each model has different strengths and weaknesses for modeling the knowledge sources.
| [
{
"id": "P05-1056.1",
"char_start": 1,
"char_end": 28
},
{
"id": "P05-1056.2",
"char_start": 32,
"char_end": 38
},
{
"id": "P05-1056.3",
"char_start": 66,
"char_end": 84
},
{
"id": "P05-1056.4",
"char_start": 200,
"char_end": 266
},
{
"id": "P05-1056.5",
"char_start": 303,
"char_end": 320
},
{
"id": "P05-1056.6",
"char_start": 335,
"char_end": 354
},
{
"id": "P05-1056.7",
"char_start": 396,
"char_end": 426
},
{
"id": "P05-1056.8",
"char_start": 542,
"char_end": 558
},
{
"id": "P05-1056.9",
"char_start": 563,
"char_end": 584
},
{
"id": "P05-1056.10",
"char_start": 594,
"char_end": 614
},
{
"id": "P05-1056.11",
"char_start": 619,
"char_end": 637
},
{
"id": "P05-1056.12",
"char_start": 662,
"char_end": 665
},
{
"id": "P05-1056.13",
"char_start": 707,
"char_end": 729
},
{
"id": "P05-1056.14",
"char_start": 737,
"char_end": 774
},
{
"id": "P05-1056.15",
"char_start": 778,
"char_end": 784
},
{
"id": "P05-1056.16",
"char_start": 859,
"char_end": 875
},
{
"id": "P05-1056.17",
"char_start": 886,
"char_end": 897
},
{
"id": "P05-1056.18",
"char_start": 933,
"char_end": 938
},
{
"id": "P05-1056.19",
"char_start": 995,
"char_end": 1012
}
] | [
{
"label": 1,
"arg1": "P05-1056.1",
"arg2": "P05-1056.3",
"reverse": false
},
{
"label": 1,
"arg1": "P05-1056.4",
"arg2": "P05-1056.5",
"reverse": true
},
{
"label": 6,
"arg1": "P05-1056.10",
"arg2": "P05-1056.11",
"reverse": false
},
{
"label": 6,
"arg1": "P05-1056.12",
"arg2": "P05-1056.13",
"reverse": false
}
] |
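A tiny sketch of the three-way voting mentioned at the end of the P05-1056 abstract above: for each word position, take the majority decision of the HMM, Maxent and CRF classifiers. The per-classifier outputs here are dummies; real systems would derive them from textual and prosodic features.

from collections import Counter

hmm    = [0, 1, 0, 0, 1, 0]   # 1 = sentence boundary after this word
maxent = [0, 1, 0, 1, 1, 0]
crf    = [0, 1, 0, 0, 1, 1]

def vote(*predictions):
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

print(vote(hmm, maxent, crf))   # -> [0, 1, 0, 0, 1, 0]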
P05-2008 |
Using Emoticons To Reduce Dependency In Machine Learning Techniques For Sentiment Classification
|
Sentiment Classification seeks to identify a piece of text according to its author's general feeling toward their subject, be it positive or negative. Traditional machine learning techniques have been applied to this problem with reasonable success, but they have been shown to work well only when there is a good match between the training and test data with respect to topic. This paper demonstrates that match with respect to domain and time is also important, and presents preliminary experiments with training data labeled with emoticons, which has the potential of being independent of domain, topic and time.
| [
{
"id": "P05-2008.1",
"char_start": 1,
"char_end": 25
},
{
"id": "P05-2008.2",
"char_start": 55,
"char_end": 59
},
{
"id": "P05-2008.3",
"char_start": 115,
"char_end": 122
},
{
"id": "P05-2008.4",
"char_start": 164,
"char_end": 191
},
{
"id": "P05-2008.5",
"char_start": 333,
"char_end": 355
},
{
"id": "P05-2008.6",
"char_start": 372,
"char_end": 377
},
{
"id": "P05-2008.7",
"char_start": 507,
"char_end": 520
},
{
"id": "P05-2008.8",
"char_start": 534,
"char_end": 543
},
{
"id": "P05-2008.9",
"char_start": 593,
"char_end": 599
},
{
"id": "P05-2008.10",
"char_start": 601,
"char_end": 606
}
] | [
{
"label": 3,
"arg1": "P05-2008.5",
"arg2": "P05-2008.6",
"reverse": true
},
{
"label": 3,
"arg1": "P05-2008.7",
"arg2": "P05-2008.8",
"reverse": true
}
] |
I05-2043 | Trend Survey on Japanese Natural Language Processing Studies over the Last Decade
|
Using natural language processing, we carried out a trend survey on Japanese natural language processing studies that have been done over the last ten years. We determined the changes in the number of papers published for each research organization and on each research area as well as the relationship between research organizations and research areas. This paper is useful for both recognizing trends in Japanese NLP and constructing a method of supporting trend surveys using NLP.
| [
{
"id": "I05-2043.2",
"char_start": 7,
"char_end": 34
},
{
"id": "I05-2043.3",
"char_start": 69,
"char_end": 113
},
{
"id": "I05-2043.4",
"char_start": 407,
"char_end": 419
},
{
"id": "I05-2043.5",
"char_start": 480,
"char_end": 483
},
{
"id": "I05-2043.1",
"char_start": 16,
"char_end": 52
}
] | [
{
"label": 1,
"arg1": "I05-2043.2",
"arg2": "I05-2043.3",
"reverse": false
}
] |
E99-1034 |
Finding Content-Bearing Terms Using Term Similarities
|
This paper explores the issue of using different co-occurrence similarities between terms for separating query terms that are useful for retrieval from those that are harmful. The hypothesis under examination is that useful terms tend to be more similar to each other than to other query terms. Preliminary experiments with similarities computed using first-order and second-order co-occurrence seem to confirm the hypothesis. Term similarities could then be used for determining which query terms are useful and best reflect the user's information need. A possible application would be to use this source of evidence for tuning the weights of the query terms.
| [
{
"id": "E99-1034.1",
"char_start": 50,
"char_end": 76
},
{
"id": "E99-1034.2",
"char_start": 85,
"char_end": 90
},
{
"id": "E99-1034.3",
"char_start": 106,
"char_end": 117
},
{
"id": "E99-1034.4",
"char_start": 138,
"char_end": 147
},
{
"id": "E99-1034.5",
"char_start": 218,
"char_end": 230
},
{
"id": "E99-1034.6",
"char_start": 283,
"char_end": 294
},
{
"id": "E99-1034.7",
"char_start": 353,
"char_end": 395
},
{
"id": "E99-1034.8",
"char_start": 428,
"char_end": 445
},
{
"id": "E99-1034.9",
"char_start": 487,
"char_end": 498
},
{
"id": "E99-1034.10",
"char_start": 634,
"char_end": 641
},
{
"id": "E99-1034.11",
"char_start": 649,
"char_end": 660
}
] | [
{
"label": 3,
"arg1": "E99-1034.1",
"arg2": "E99-1034.2",
"reverse": false
},
{
"label": 1,
"arg1": "E99-1034.3",
"arg2": "E99-1034.4",
"reverse": false
},
{
"label": 6,
"arg1": "E99-1034.5",
"arg2": "E99-1034.6",
"reverse": false
},
{
"label": 3,
"arg1": "E99-1034.8",
"arg2": "E99-1034.9",
"reverse": false
},
{
"label": 3,
"arg1": "E99-1034.10",
"arg2": "E99-1034.11",
"reverse": false
}
] |
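A sketch of the first-order versus second-order co-occurrence similarities discussed in the E99-1034 abstract above. First-order similarity counts how often two terms co-occur directly; second-order similarity is the cosine of their co-occurrence vectors. The four toy documents are invented for illustration.

from collections import Counter, defaultdict
from math import sqrt

docs = [
    "kidney stones treatment hospital",
    "kidney dialysis hospital treatment",
    "guitar lessons music teacher",
    "music theory lessons",
]

vec = defaultdict(Counter)
pair = Counter()
for d in docs:
    words = d.split()
    for i, w in enumerate(words):
        for j, v in enumerate(words):
            if i != j:
                vec[w][v] += 1
                pair[(w, v)] += 1

def first_order(a, b):
    return pair[(a, b)]

def second_order(a, b):
    keys = set(vec[a]) | set(vec[b])
    dot = sum(vec[a][k] * vec[b][k] for k in keys)
    na = sqrt(sum(v * v for v in vec[a].values()))
    nb = sqrt(sum(v * v for v in vec[b].values()))
    return dot / (na * nb) if na and nb else 0.0

print(first_order("kidney", "dialysis"), round(second_order("kidney", "dialysis"), 2))
print(first_order("kidney", "guitar"), round(second_order("kidney", "guitar"), 2))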
E85-1041 |
The Structure Of Communicative Context Of Dialogue Interaction
|
We propose a draft scheme of the model formalizing the structure of communicative context in dialogue interaction. The relationships between the interacting partners are considered as a system of three automata representing the partners of the dialogue and the environment.
| [
{
"id": "E85-1041.1",
"char_start": 34,
"char_end": 39
},
{
"id": "E85-1041.2",
"char_start": 56,
"char_end": 90
},
{
"id": "E85-1041.3",
"char_start": 94,
"char_end": 114
},
{
"id": "E85-1041.4",
"char_start": 243,
"char_end": 251
}
] | [
{
"label": 3,
"arg1": "E85-1041.2",
"arg2": "E85-1041.3",
"reverse": false
}
] |
E91-1012 |
Non-Deterministic Recursive Ascent Parsing
|
A purely functional implementation of LR-parsers is given, together with a simple correctness proof. It is presented as a generalization of the recursive descent parser. For non-LR grammars the time-complexity of our parser is cubic if the functions that constitute the parser are implemented as memo-functions, i.e. functions that memorize the results of previous invocations. Memo-functions also facilitate a simple way to construct a very compact representation of the parse forest. For LR(0) grammars, our algorithm is closely related to the recursive ascent parsers recently discovered by Kruseman Aretz [1] and Roberts [2]. Extended CF grammars (grammars with regular expressions on the right-hand side) can be parsed with a simple modification of the LR-parser for normal CF grammars.
| [
{
"id": "E91-1012.1",
"char_start": 39,
"char_end": 49
},
{
"id": "E91-1012.2",
"char_start": 83,
"char_end": 100
},
{
"id": "E91-1012.3",
"char_start": 145,
"char_end": 169
},
{
"id": "E91-1012.4",
"char_start": 175,
"char_end": 190
},
{
"id": "E91-1012.5",
"char_start": 218,
"char_end": 224
},
{
"id": "E91-1012.6",
"char_start": 271,
"char_end": 277
},
{
"id": "E91-1012.7",
"char_start": 297,
"char_end": 311
},
{
"id": "E91-1012.8",
"char_start": 379,
"char_end": 393
},
{
"id": "E91-1012.9",
"char_start": 473,
"char_end": 485
},
{
"id": "E91-1012.10",
"char_start": 491,
"char_end": 505
},
{
"id": "E91-1012.11",
"char_start": 547,
"char_end": 571
},
{
"id": "E91-1012.12",
"char_start": 632,
"char_end": 652
},
{
"id": "E91-1012.13",
"char_start": 654,
"char_end": 662
},
{
"id": "E91-1012.14",
"char_start": 668,
"char_end": 687
},
{
"id": "E91-1012.15",
"char_start": 760,
"char_end": 769
},
{
"id": "E91-1012.16",
"char_start": 781,
"char_end": 792
}
] | [
{
"label": 1,
"arg1": "E91-1012.8",
"arg2": "E91-1012.9",
"reverse": false
},
{
"label": 4,
"arg1": "E91-1012.13",
"arg2": "E91-1012.14",
"reverse": true
},
{
"label": 1,
"arg1": "E91-1012.15",
"arg2": "E91-1012.16",
"reverse": false
}
] |
P05-1010 |
Probabilistic CFG With Latent Annotations
|
This paper defines a generative probabilistic model of parse trees, which we call PCFG-LA. This model is an extension of PCFG in which non-terminal symbols are augmented with latent variables. Fine-grained CFG rules are automatically induced from a parsed corpus by training a PCFG-LA model using an EM-algorithm. Because exact parsing with a PCFG-LA is NP-hard, several approximations are described and empirically compared. In experiments using the Penn WSJ corpus, our automatically trained model gave a performance of 86.6% (F1, sentences < 40 words), which is comparable to that of an unlexicalized PCFG parser created using extensive manual feature selection.
| [
{
"id": "P05-1010.1",
"char_start": 22,
"char_end": 52
},
{
"id": "P05-1010.2",
"char_start": 56,
"char_end": 67
},
{
"id": "P05-1010.3",
"char_start": 83,
"char_end": 90
},
{
"id": "P05-1010.4",
"char_start": 97,
"char_end": 102
},
{
"id": "P05-1010.5",
"char_start": 122,
"char_end": 126
},
{
"id": "P05-1010.6",
"char_start": 136,
"char_end": 156
},
{
"id": "P05-1010.7",
"char_start": 176,
"char_end": 192
},
{
"id": "P05-1010.8",
"char_start": 206,
"char_end": 215
},
{
"id": "P05-1010.9",
"char_start": 249,
"char_end": 262
},
{
"id": "P05-1010.10",
"char_start": 266,
"char_end": 274
},
{
"id": "P05-1010.11",
"char_start": 277,
"char_end": 290
},
{
"id": "P05-1010.12",
"char_start": 300,
"char_end": 312
},
{
"id": "P05-1010.13",
"char_start": 328,
"char_end": 335
},
{
"id": "P05-1010.14",
"char_start": 343,
"char_end": 350
},
{
"id": "P05-1010.15",
"char_start": 354,
"char_end": 361
},
{
"id": "P05-1010.16",
"char_start": 371,
"char_end": 385
},
{
"id": "P05-1010.17",
"char_start": 451,
"char_end": 466
},
{
"id": "P05-1010.18",
"char_start": 494,
"char_end": 499
},
{
"id": "P05-1010.19",
"char_start": 507,
"char_end": 518
},
{
"id": "P05-1010.20",
"char_start": 533,
"char_end": 542
},
{
"id": "P05-1010.21",
"char_start": 551,
"char_end": 556
},
{
"id": "P05-1010.22",
"char_start": 593,
"char_end": 618
},
{
"id": "P05-1010.23",
"char_start": 643,
"char_end": 667
}
] | [
{
"label": 3,
"arg1": "P05-1010.1",
"arg2": "P05-1010.2",
"reverse": false
},
{
"label": 1,
"arg1": "P05-1010.8",
"arg2": "P05-1010.9",
"reverse": true
},
{
"label": 1,
"arg1": "P05-1010.10",
"arg2": "P05-1010.12",
"reverse": true
},
{
"label": 1,
"arg1": "P05-1010.13",
"arg2": "P05-1010.14",
"reverse": true
},
{
"label": 2,
"arg1": "P05-1010.18",
"arg2": "P05-1010.19",
"reverse": false
},
{
"label": 4,
"arg1": "P05-1010.20",
"arg2": "P05-1010.21",
"reverse": true
},
{
"label": 1,
"arg1": "P05-1010.22",
"arg2": "P05-1010.23",
"reverse": true
}
] |
P05-1053 |
Exploring Various Knowledge In Relation Extraction
|
Extracting semantic relationships between entities is challenging. This paper investigates the incorporation of diverse lexical, syntactic and semantic knowledge in feature-based relation extraction using SVM. Our study illustrates that the base phrase chunking information is very effective for relation extraction and contributes to most of the performance improvement from the syntactic aspect, while additional information from full parsing gives limited further enhancement. This suggests that most of the useful information in full parse trees for relation extraction is shallow and can be captured by chunking. We also demonstrate how semantic information such as WordNet and Name List can be used in feature-based relation extraction to further improve the performance. Evaluation on the ACE corpus shows that effective incorporation of diverse features enables our system to outperform the previously best-reported systems on the 24 ACE relation subtypes and to significantly outperform tree kernel-based systems by over 20 in F-measure on the 5 ACE relation types.
| [
{
"id": "P05-1053.1",
"char_start": 1,
"char_end": 51
},
{
"id": "P05-1053.2",
"char_start": 121,
"char_end": 162
},
{
"id": "P05-1053.3",
"char_start": 166,
"char_end": 199
},
{
"id": "P05-1053.4",
"char_start": 206,
"char_end": 209
},
{
"id": "P05-1053.5",
"char_start": 247,
"char_end": 262
},
{
"id": "P05-1053.6",
"char_start": 297,
"char_end": 316
},
{
"id": "P05-1053.7",
"char_start": 348,
"char_end": 371
},
{
"id": "P05-1053.8",
"char_start": 377,
"char_end": 386
},
{
"id": "P05-1053.9",
"char_start": 428,
"char_end": 440
},
{
"id": "P05-1053.10",
"char_start": 525,
"char_end": 541
},
{
"id": "P05-1053.11",
"char_start": 546,
"char_end": 565
},
{
"id": "P05-1053.12",
"char_start": 600,
"char_end": 608
},
{
"id": "P05-1053.13",
"char_start": 634,
"char_end": 654
},
{
"id": "P05-1053.14",
"char_start": 663,
"char_end": 670
},
{
"id": "P05-1053.15",
"char_start": 675,
"char_end": 684
},
{
"id": "P05-1053.16",
"char_start": 701,
"char_end": 734
},
{
"id": "P05-1053.17",
"char_start": 758,
"char_end": 769
},
{
"id": "P05-1053.18",
"char_start": 771,
"char_end": 781
},
{
"id": "P05-1053.19",
"char_start": 789,
"char_end": 799
},
{
"id": "P05-1053.20",
"char_start": 846,
"char_end": 854
},
{
"id": "P05-1053.21",
"char_start": 867,
"char_end": 873
},
{
"id": "P05-1053.22",
"char_start": 910,
"char_end": 917
},
{
"id": "P05-1053.23",
"char_start": 925,
"char_end": 949
},
{
"id": "P05-1053.24",
"char_start": 980,
"char_end": 1005
},
{
"id": "P05-1053.25",
"char_start": 1020,
"char_end": 1029
},
{
"id": "P05-1053.26",
"char_start": 1037,
"char_end": 1057
}
] | [
{
"label": 1,
"arg1": "P05-1053.3",
"arg2": "P05-1053.4",
"reverse": true
},
{
"label": 2,
"arg1": "P05-1053.5",
"arg2": "P05-1053.7",
"reverse": false
},
{
"label": 2,
"arg1": "P05-1053.13",
"arg2": "P05-1053.17",
"reverse": false
},
{
"label": 6,
"arg1": "P05-1053.21",
"arg2": "P05-1053.22",
"reverse": false
}
] |
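A toy-scale sketch of feature-based relation extraction with an SVM, echoing the P05-1053 abstract above. scikit-learn's LinearSVC stands in for the paper's SVM package, and the hand-built examples replace ACE data; the features shown (head words, entity-type pair, chunk path) only follow the abstract's emphasis on shallow chunking information.

from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

examples = [
    ({"head1": "chairman", "head2": "company", "types": "PER-ORG",
      "chunk_path": "NP-PP-NP"}, "EMP-ORG"),
    ({"head1": "office", "head2": "city", "types": "FAC-GPE",
      "chunk_path": "NP-PP-NP"}, "PHYS"),
    ({"head1": "spokesman", "head2": "agency", "types": "PER-ORG",
      "chunk_path": "NP-OF-NP"}, "EMP-ORG"),
    ({"head1": "plant", "head2": "town", "types": "FAC-GPE",
      "chunk_path": "NP-IN-NP"}, "PHYS"),
]

vec = DictVectorizer()
X = vec.fit_transform([feats for feats, _ in examples])
y = [label for _, label in examples]
clf = LinearSVC().fit(X, y)

test = {"head1": "director", "head2": "institute", "types": "PER-ORG",
        "chunk_path": "NP-OF-NP"}
print(clf.predict(vec.transform(test)))   # likely output: ['EMP-ORG']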
P05-1076 | Automatic Acquisition Of Adjectival Subcategorization From Corpora
|
This paper describes a novel system for acquiring adjectival subcategorization frames (scfs) and associated frequency information from English corpus data. The system incorporates a decision-tree classifier for 30 scf types which tests for the presence of grammatical relations (grs) in the output of a robust statistical parser. It uses a powerful pattern-matching language to classify grs into frames hierarchically in a way that mirrors inheritance-based lexica. The experiments show that the system is able to detect scf types with 70% precision and a 66% recall rate. A new tool for linguistic annotation of scfs in corpus data is also introduced, which can considerably ease the process of obtaining training and test data for subcategorization acquisition.
| [
{
"id": "P05-1076.1",
"char_start": 30,
"char_end": 36
},
{
"id": "P05-1076.2",
"char_start": 41,
"char_end": 86
},
{
"id": "P05-1076.3",
"char_start": 88,
"char_end": 92
},
{
"id": "P05-1076.4",
"char_start": 136,
"char_end": 143
},
{
"id": "P05-1076.5",
"char_start": 144,
"char_end": 155
},
{
"id": "P05-1076.6",
"char_start": 161,
"char_end": 167
},
{
"id": "P05-1076.7",
"char_start": 183,
"char_end": 207
},
{
"id": "P05-1076.8",
"char_start": 215,
"char_end": 224
},
{
"id": "P05-1076.9",
"char_start": 257,
"char_end": 278
},
{
"id": "P05-1076.10",
"char_start": 280,
"char_end": 283
},
{
"id": "P05-1076.11",
"char_start": 292,
"char_end": 298
},
{
"id": "P05-1076.12",
"char_start": 311,
"char_end": 329
},
{
"id": "P05-1076.13",
"char_start": 350,
"char_end": 375
},
{
"id": "P05-1076.14",
"char_start": 388,
"char_end": 391
},
{
"id": "P05-1076.15",
"char_start": 397,
"char_end": 403
},
{
"id": "P05-1076.16",
"char_start": 441,
"char_end": 465
},
{
"id": "P05-1076.17",
"char_start": 471,
"char_end": 482
},
{
"id": "P05-1076.18",
"char_start": 497,
"char_end": 503
},
{
"id": "P05-1076.19",
"char_start": 522,
"char_end": 531
},
{
"id": "P05-1076.20",
"char_start": 537,
"char_end": 550
},
{
"id": "P05-1076.21",
"char_start": 555,
"char_end": 570
},
{
"id": "P05-1076.22",
"char_start": 578,
"char_end": 582
},
{
"id": "P05-1076.23",
"char_start": 587,
"char_end": 608
},
{
"id": "P05-1076.24",
"char_start": 612,
"char_end": 616
},
{
"id": "P05-1076.25",
"char_start": 620,
"char_end": 631
},
{
"id": "P05-1076.26",
"char_start": 709,
"char_end": 731
},
{
"id": "P05-1076.27",
"char_start": 736,
"char_end": 765
}
] | [
{
"label": 1,
"arg1": "P05-1076.1",
"arg2": "P05-1076.2",
"reverse": false
},
{
"label": 3,
"arg1": "P05-1076.4",
"arg2": "P05-1076.5",
"reverse": false
},
{
"label": 4,
"arg1": "P05-1076.6",
"arg2": "P05-1076.7",
"reverse": true
},
{
"label": 4,
"arg1": "P05-1076.9",
"arg2": "P05-1076.11",
"reverse": false
},
{
"label": 3,
"arg1": "P05-1076.14",
"arg2": "P05-1076.15",
"reverse": true
},
{
"label": 2,
"arg1": "P05-1076.18",
"arg2": "P05-1076.20",
"reverse": false
},
{
"label": 1,
"arg1": "P05-1076.22",
"arg2": "P05-1076.23",
"reverse": false
},
{
"label": 4,
"arg1": "P05-1076.24",
"arg2": "P05-1076.25",
"reverse": false
},
{
"label": 1,
"arg1": "P05-1076.26",
"arg2": "P05-1076.27",
"reverse": false
}
] |
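The char_start / char_end values key each entity into its abstract. Throughout the dump char_end minus char_start equals the span length (entity P05-1076.1 above, 30 to 36, covers the six characters of "system"), which suggests a 1-indexed start with an exclusive end; a few records index title text from 0 instead (see the H93-1076 spans later in the file), so the slice below is an inference to verify, not a stated convention.

def entity_text(abstract: str, ent: dict) -> str:
    # Inferred convention: 1-indexed char_start, exclusive char_end,
    # so the Python slice shifts both offsets down by one. Spot-check
    # against known spans before trusting it on the whole dump.
    return abstract[ent["char_start"] - 1 : ent["char_end"] - 1]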
I05-2013 | Automatic recognition of French expletive pronoun occurrences
|
We present a tool, called ILIMP, which takes as input a raw text in French and produces as output the same text in which every occurrence of the pronoun il is tagged either with tag [ANA] for anaphoric or [IMP] for impersonal or expletive. This tool is therefore designed to distinguish between the anaphoric occurrences of il, for which an anaphora resolution system has to look for an antecedent, and the expletive occurrences of this pronoun, for which it does not make sense to look for an antecedent. The precision rate for ILIMP is 97.5%. The few errors are analyzed in detail. Other tasks using the method developed for ILIMP are described briefly, as well as the use of ILIMP in a modular syntactic analysis system.
| [
{
"id": "I05-2013.1",
"char_start": 14,
"char_end": 18
},
{
"id": "I05-2013.2",
"char_start": 27,
"char_end": 32
},
{
"id": "I05-2013.3",
"char_start": 57,
"char_end": 65
},
{
"id": "I05-2013.4",
"char_start": 69,
"char_end": 75
},
{
"id": "I05-2013.5",
"char_start": 108,
"char_end": 112
},
{
"id": "I05-2013.6",
"char_start": 146,
"char_end": 156
},
{
"id": "I05-2013.7",
"char_start": 183,
"char_end": 188
},
{
"id": "I05-2013.8",
"char_start": 193,
"char_end": 202
},
{
"id": "I05-2013.9",
"char_start": 206,
"char_end": 211
},
{
"id": "I05-2013.10",
"char_start": 216,
"char_end": 226
},
{
"id": "I05-2013.11",
"char_start": 230,
"char_end": 239
},
{
"id": "I05-2013.12",
"char_start": 246,
"char_end": 250
},
{
"id": "I05-2013.13",
"char_start": 300,
"char_end": 327
},
{
"id": "I05-2013.14",
"char_start": 342,
"char_end": 368
},
{
"id": "I05-2013.15",
"char_start": 408,
"char_end": 429
},
{
"id": "I05-2013.16",
"char_start": 438,
"char_end": 445
},
{
"id": "I05-2013.17",
"char_start": 511,
"char_end": 525
},
{
"id": "I05-2013.18",
"char_start": 530,
"char_end": 535
},
{
"id": "I05-2013.19",
"char_start": 554,
"char_end": 560
},
{
"id": "I05-2013.20",
"char_start": 591,
"char_end": 596
},
{
"id": "I05-2013.21",
"char_start": 607,
"char_end": 613
},
{
"id": "I05-2013.22",
"char_start": 628,
"char_end": 633
},
{
"id": "I05-2013.23",
"char_start": 679,
"char_end": 684
},
{
"id": "I05-2013.24",
"char_start": 698,
"char_end": 723
}
] | [
{
"label": 3,
"arg1": "I05-2013.3",
"arg2": "I05-2013.4",
"reverse": true
},
{
"label": 3,
"arg1": "I05-2013.6",
"arg2": "I05-2013.7",
"reverse": true
},
{
"label": 3,
"arg1": "I05-2013.15",
"arg2": "I05-2013.16",
"reverse": false
},
{
"label": 2,
"arg1": "I05-2013.17",
"arg2": "I05-2013.18",
"reverse": true
},
{
"label": 1,
"arg1": "I05-2013.20",
"arg2": "I05-2013.21",
"reverse": true
},
{
"label": 1,
"arg1": "I05-2013.23",
"arg2": "I05-2013.24",
"reverse": false
}
] |
E85-1037 | A PROBLEM SOLVING APPROACH TO GENERATING TEXT FROM SYSTEMIC GRAMMARS
|
Systemic grammar has been used for AI text generation work in the past, but the implementations have tended to be ad hoc or inefficient. This paper presents an approach to systemic text generation where AI problem solving techniques are applied directly to an unadulterated systemic grammar. This approach is made possible by a special relationship between systemic grammar and problem solving: both are organized primarily as choosing from alternatives. The result is simple, efficient text generation firmly based in a linguistic theory.
| [
{
"id": "E85-1037.1",
"char_start": 1,
"char_end": 17
},
{
"id": "E85-1037.2",
"char_start": 36,
"char_end": 54
},
{
"id": "E85-1037.3",
"char_start": 81,
"char_end": 96
},
{
"id": "E85-1037.4",
"char_start": 179,
"char_end": 194
},
{
"id": "E85-1037.5",
"char_start": 201,
"char_end": 230
},
{
"id": "E85-1037.6",
"char_start": 272,
"char_end": 288
},
{
"id": "E85-1037.7",
"char_start": 295,
"char_end": 303
},
{
"id": "E85-1037.8",
"char_start": 355,
"char_end": 371
},
{
"id": "E85-1037.9",
"char_start": 376,
"char_end": 391
},
{
"id": "E85-1037.10",
"char_start": 485,
"char_end": 500
},
{
"id": "E85-1037.11",
"char_start": 519,
"char_end": 536
}
] | [
{
"label": 1,
"arg1": "E85-1037.1",
"arg2": "E85-1037.2",
"reverse": false
},
{
"label": 1,
"arg1": "E85-1037.5",
"arg2": "E85-1037.6",
"reverse": false
},
{
"label": 1,
"arg1": "E85-1037.10",
"arg2": "E85-1037.11",
"reverse": true
}
] |
E89-1016 | User Studies And The Design Of Natural Language Systems
|
This paper presents a critical discussion of the various approaches that have been used in the evaluation of Natural Language systems. We conclude that previous approaches have neglected to evaluate systems in the context of their use, e.g. solving a task requiring data retrieval. This raises questions about the validity of such approaches. In the second half of the paper, we report a laboratory study using the Wizard of Oz technique to identify NL requirements for carrying out this task. We evaluate the demands that task dialogues collected using this technique place upon a prototype Natural Language system. We identify three important requirements which arose from the task that we gave our subjects: operators specific to the task of database access, complex contextual reference and reference to the structure of the information source. We discuss how these might be satisfied by future Natural Language systems.
| [
{
"id": "E89-1016.1",
"char_start": 23,
"char_end": 42
},
{
"id": "E89-1016.2",
"char_start": 58,
"char_end": 68
},
{
"id": "E89-1016.3",
"char_start": 96,
"char_end": 134
},
{
"id": "E89-1016.4",
"char_start": 162,
"char_end": 172
},
{
"id": "E89-1016.5",
"char_start": 200,
"char_end": 207
},
{
"id": "E89-1016.6",
"char_start": 252,
"char_end": 256
},
{
"id": "E89-1016.7",
"char_start": 267,
"char_end": 281
},
{
"id": "E89-1016.8",
"char_start": 332,
"char_end": 342
},
{
"id": "E89-1016.9",
"char_start": 389,
"char_end": 405
},
{
"id": "E89-1016.10",
"char_start": 416,
"char_end": 438
},
{
"id": "E89-1016.11",
"char_start": 451,
"char_end": 466
},
{
"id": "E89-1016.12",
"char_start": 489,
"char_end": 493
},
{
"id": "E89-1016.13",
"char_start": 524,
"char_end": 538
},
{
"id": "E89-1016.14",
"char_start": 560,
"char_end": 569
},
{
"id": "E89-1016.15",
"char_start": 584,
"char_end": 617
},
{
"id": "E89-1016.16",
"char_start": 681,
"char_end": 685
},
{
"id": "E89-1016.17",
"char_start": 747,
"char_end": 762
},
{
"id": "E89-1016.18",
"char_start": 772,
"char_end": 792
},
{
"id": "E89-1016.19",
"char_start": 814,
"char_end": 823
},
{
"id": "E89-1016.20",
"char_start": 831,
"char_end": 849
},
{
"id": "E89-1016.21",
"char_start": 901,
"char_end": 925
}
] | [
{
"label": 5,
"arg1": "E89-1016.1",
"arg2": "E89-1016.3",
"reverse": false
},
{
"label": 1,
"arg1": "E89-1016.6",
"arg2": "E89-1016.7",
"reverse": true
},
{
"label": 1,
"arg1": "E89-1016.9",
"arg2": "E89-1016.10",
"reverse": true
},
{
"label": 3,
"arg1": "E89-1016.19",
"arg2": "E89-1016.20",
"reverse": false
}
] |
E93-1013 | LFG Semantics Via Constraints
|
Semantic theories of natural language associate meanings with utterances by providing meanings for lexical items and rules for determining the meaning of larger units given the meanings of their parts. Traditionally, meanings are combined via function composition, which works well when constituent structure trees are used to guide semantic composition. More recently, the functional structure of LFG has been used to provide the syntactic information necessary for constraining derivations of meaning in a cross-linguistically uniform format. It has been difficult, however, to reconcile this approach with the combination of meanings by function composition. In contrast to compositional approaches, we present a deductive approach to assembling meanings, based on reasoning with constraints, which meshes well with the unordered nature of information in the functional structure. Our use of linear logic as a 'glue' for assembling meanings also allows for a coherent treatment of modification as well as of the LFG requirements of completeness and coherence.
| [
{
"id": "E93-1013.1",
"char_start": 1,
"char_end": 18
},
{
"id": "E93-1013.2",
"char_start": 22,
"char_end": 38
},
{
"id": "E93-1013.3",
"char_start": 49,
"char_end": 57
},
{
"id": "E93-1013.4",
"char_start": 63,
"char_end": 73
},
{
"id": "E93-1013.5",
"char_start": 87,
"char_end": 95
},
{
"id": "E93-1013.6",
"char_start": 100,
"char_end": 113
},
{
"id": "E93-1013.7",
"char_start": 118,
"char_end": 123
},
{
"id": "E93-1013.8",
"char_start": 144,
"char_end": 151
},
{
"id": "E93-1013.9",
"char_start": 162,
"char_end": 167
},
{
"id": "E93-1013.10",
"char_start": 178,
"char_end": 186
},
{
"id": "E93-1013.11",
"char_start": 218,
"char_end": 226
},
{
"id": "E93-1013.12",
"char_start": 244,
"char_end": 264
},
{
"id": "E93-1013.13",
"char_start": 288,
"char_end": 315
},
{
"id": "E93-1013.14",
"char_start": 334,
"char_end": 354
},
{
"id": "E93-1013.15",
"char_start": 375,
"char_end": 395
},
{
"id": "E93-1013.16",
"char_start": 399,
"char_end": 402
},
{
"id": "E93-1013.17",
"char_start": 432,
"char_end": 453
},
{
"id": "E93-1013.18",
"char_start": 481,
"char_end": 492
},
{
"id": "E93-1013.19",
"char_start": 496,
"char_end": 503
},
{
"id": "E93-1013.20",
"char_start": 509,
"char_end": 544
},
{
"id": "E93-1013.21",
"char_start": 596,
"char_end": 604
},
{
"id": "E93-1013.22",
"char_start": 629,
"char_end": 637
},
{
"id": "E93-1013.23",
"char_start": 641,
"char_end": 661
},
{
"id": "E93-1013.24",
"char_start": 679,
"char_end": 703
},
{
"id": "E93-1013.25",
"char_start": 718,
"char_end": 736
},
{
"id": "E93-1013.26",
"char_start": 751,
"char_end": 759
},
{
"id": "E93-1013.27",
"char_start": 770,
"char_end": 796
},
{
"id": "E93-1013.28",
"char_start": 845,
"char_end": 856
},
{
"id": "E93-1013.29",
"char_start": 864,
"char_end": 884
},
{
"id": "E93-1013.30",
"char_start": 897,
"char_end": 909
},
{
"id": "E93-1013.31",
"char_start": 937,
"char_end": 945
},
{
"id": "E93-1013.32",
"char_start": 986,
"char_end": 998
},
{
"id": "E93-1013.33",
"char_start": 1017,
"char_end": 1020
},
{
"id": "E93-1013.34",
"char_start": 1037,
"char_end": 1049
},
{
"id": "E93-1013.35",
"char_start": 1054,
"char_end": 1063
}
] | [
{
"label": 5,
"arg1": "E93-1013.1",
"arg2": "E93-1013.2",
"reverse": false
},
{
"label": 3,
"arg1": "E93-1013.3",
"arg2": "E93-1013.4",
"reverse": false
},
{
"label": 3,
"arg1": "E93-1013.5",
"arg2": "E93-1013.6",
"reverse": false
},
{
"label": 3,
"arg1": "E93-1013.8",
"arg2": "E93-1013.9",
"reverse": false
},
{
"label": 1,
"arg1": "E93-1013.13",
"arg2": "E93-1013.14",
"reverse": false
},
{
"label": 4,
"arg1": "E93-1013.15",
"arg2": "E93-1013.16",
"reverse": false
},
{
"label": 1,
"arg1": "E93-1013.17",
"arg2": "E93-1013.18",
"reverse": false
},
{
"label": 6,
"arg1": "E93-1013.24",
"arg2": "E93-1013.25",
"reverse": false
},
{
"label": 4,
"arg1": "E93-1013.28",
"arg2": "E93-1013.29",
"reverse": false
},
{
"label": 3,
"arg1": "E93-1013.33",
"arg2": "E93-1013.34",
"reverse": true
}
] |
E95-1036 | Splitting The Reference Time: Temporal Anaphora And Quantification In DRT
|
This paper presents an analysis of temporal anaphora in sentences which contain quantification over events, within the framework of Discourse Representation Theory. The analysis in (Partee, 1984) of quantified sentences, introduced by a temporal connective, gives the wrong truth-conditions when the temporal connective in the subordinate clause is before or after. This problem has been previously analyzed in (de Swart, 1991) as an instance of the proportion problem and given a solution from a Generalized Quantifier approach. By using a careful distinction between the different notions of reference time based on (Kamp and Reyle, 1993), we propose a solution to this problem, within the framework of DRT. We show some applications of this solution to additional temporal anaphora phenomena in quantified sentences.
| [
{
"id": "E95-1036.1",
"char_start": 36,
"char_end": 53
},
{
"id": "E95-1036.2",
"char_start": 57,
"char_end": 66
},
{
"id": "E95-1036.3",
"char_start": 81,
"char_end": 107
},
{
"id": "E95-1036.4",
"char_start": 133,
"char_end": 164
},
{
"id": "E95-1036.5",
"char_start": 200,
"char_end": 220
},
{
"id": "E95-1036.6",
"char_start": 238,
"char_end": 257
},
{
"id": "E95-1036.7",
"char_start": 275,
"char_end": 291
},
{
"id": "E95-1036.8",
"char_start": 301,
"char_end": 320
},
{
"id": "E95-1036.9",
"char_start": 328,
"char_end": 346
},
{
"id": "E95-1036.10",
"char_start": 372,
"char_end": 379
},
{
"id": "E95-1036.11",
"char_start": 451,
"char_end": 469
},
{
"id": "E95-1036.12",
"char_start": 498,
"char_end": 529
},
{
"id": "E95-1036.13",
"char_start": 595,
"char_end": 609
},
{
"id": "E95-1036.14",
"char_start": 673,
"char_end": 680
},
{
"id": "E95-1036.15",
"char_start": 706,
"char_end": 709
},
{
"id": "E95-1036.16",
"char_start": 745,
"char_end": 753
},
{
"id": "E95-1036.17",
"char_start": 768,
"char_end": 795
},
{
"id": "E95-1036.18",
"char_start": 799,
"char_end": 819
}
] | [
{
"label": 4,
"arg1": "E95-1036.2",
"arg2": "E95-1036.3",
"reverse": true
},
{
"label": 4,
"arg1": "E95-1036.5",
"arg2": "E95-1036.6",
"reverse": true
},
{
"label": 4,
"arg1": "E95-1036.8",
"arg2": "E95-1036.9",
"reverse": false
},
{
"label": 5,
"arg1": "E95-1036.11",
"arg2": "E95-1036.12",
"reverse": true
},
{
"label": 1,
"arg1": "E95-1036.14",
"arg2": "E95-1036.15",
"reverse": true
},
{
"label": 1,
"arg1": "E95-1036.16",
"arg2": "E95-1036.17",
"reverse": false
}
] |
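Relations refer to entities by id, carry a numeric label whose mapping to relation names is not given anywhere in this dump, and set reverse to true when the relation holds with the arguments swapped. Reusing the two sketches above, resolving a relation back to surface strings can look like this (the helper name is illustrative):

def resolve_relation(rec: dict, rel: dict):
    # Look both arguments up in the record's own entity list.
    by_id = {e["id"]: e for e in rec["entities"]}
    a = entity_text(rec["abstract"], by_id[rel["arg1"]])
    b = entity_text(rec["abstract"], by_id[rel["arg2"]])
    if rel["reverse"]:          # swapped argument order
        a, b = b, a
    return rel["label"], a, b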
H89-2019 | A Proposal For SLS Evaluation
|
This paper proposes an automatic, essentially domain-independent means of evaluating Spoken Language Systems (SLS) which combines software we have developed for that purpose (the "Comparator") and a set of specifications for answer expressions (the "Common Answer Specification", or CAS). The Comparator checks whether the answer provided by a SLS accords with a canonical answer, returning either true or false. The Common Answer Specification determines the syntax of answer expressions, the minimal content that must be included in them, the data to be included in and excluded from test corpora, and the procedures used by the Comparator. Though some details of the CAS are particular to individual domains, the Comparator software is domain-independent, as is the CAS approach.
| [
{
"id": "H89-2019.1",
"char_start": 47,
"char_end": 115
},
{
"id": "H89-2019.2",
"char_start": 131,
"char_end": 139
},
{
"id": "H89-2019.3",
"char_start": 181,
"char_end": 191
},
{
"id": "H89-2019.4",
"char_start": 207,
"char_end": 221
},
{
"id": "H89-2019.5",
"char_start": 226,
"char_end": 244
},
{
"id": "H89-2019.6",
"char_start": 251,
"char_end": 278
},
{
"id": "H89-2019.7",
"char_start": 284,
"char_end": 287
},
{
"id": "H89-2019.8",
"char_start": 294,
"char_end": 304
},
{
"id": "H89-2019.9",
"char_start": 345,
"char_end": 348
},
{
"id": "H89-2019.10",
"char_start": 364,
"char_end": 380
},
{
"id": "H89-2019.11",
"char_start": 418,
"char_end": 445
},
{
"id": "H89-2019.12",
"char_start": 461,
"char_end": 467
},
{
"id": "H89-2019.13",
"char_start": 471,
"char_end": 489
},
{
"id": "H89-2019.14",
"char_start": 503,
"char_end": 510
},
{
"id": "H89-2019.15",
"char_start": 546,
"char_end": 550
},
{
"id": "H89-2019.16",
"char_start": 587,
"char_end": 599
},
{
"id": "H89-2019.17",
"char_start": 609,
"char_end": 619
},
{
"id": "H89-2019.18",
"char_start": 632,
"char_end": 642
},
{
"id": "H89-2019.19",
"char_start": 671,
"char_end": 674
},
{
"id": "H89-2019.20",
"char_start": 704,
"char_end": 711
},
{
"id": "H89-2019.21",
"char_start": 717,
"char_end": 736
},
{
"id": "H89-2019.22",
"char_start": 740,
"char_end": 758
},
{
"id": "H89-2019.23",
"char_start": 770,
"char_end": 782
}
] | [
{
"label": 1,
"arg1": "H89-2019.1",
"arg2": "H89-2019.2",
"reverse": true
},
{
"label": 3,
"arg1": "H89-2019.4",
"arg2": "H89-2019.5",
"reverse": false
},
{
"label": 3,
"arg1": "H89-2019.12",
"arg2": "H89-2019.13",
"reverse": false
},
{
"label": 4,
"arg1": "H89-2019.15",
"arg2": "H89-2019.16",
"reverse": false
},
{
"label": 1,
"arg1": "H89-2019.17",
"arg2": "H89-2019.18",
"reverse": false
},
{
"label": 3,
"arg1": "H89-2019.19",
"arg2": "H89-2019.20",
"reverse": true
},
{
"label": 3,
"arg1": "H89-2019.21",
"arg2": "H89-2019.22",
"reverse": true
}
] |
H93-1076 | Speech and Text-Image Processing in Documents
|
Two themes have evolved in speech and text image processing work at Xerox PARC that expand and redefine the role of recognition technology in document-oriented applications. One is the development of systems that provide functionality similar to that of text processors but operate directly on audio and scanned image data. A second, related theme is the use of speech and text-image recognition to retrieve arbitrary, user-specified information from documents with signal content. This paper discusses three research initiatives at PARC that exemplify these themes: a text-image editor[1], a wordspotter for voice editing and indexing[12], and a decoding framework for scanned-document content retrieval[4]. The discussion focuses on key concepts embodied in the research that enable novel signal-based document processing functionality.
| [
{
"id": "H93-1076.3",
"char_start": 28,
"char_end": 60
},
{
"id": "H93-1076.4",
"char_start": 69,
"char_end": 79
},
{
"id": "H93-1076.5",
"char_start": 117,
"char_end": 139
},
{
"id": "H93-1076.6",
"char_start": 143,
"char_end": 173
},
{
"id": "H93-1076.7",
"char_start": 255,
"char_end": 270
},
{
"id": "H93-1076.8",
"char_start": 295,
"char_end": 323
},
{
"id": "H93-1076.9",
"char_start": 363,
"char_end": 396
},
{
"id": "H93-1076.10",
"char_start": 452,
"char_end": 481
},
{
"id": "H93-1076.11",
"char_start": 534,
"char_end": 538
},
{
"id": "H93-1076.12",
"char_start": 570,
"char_end": 587
},
{
"id": "H93-1076.13",
"char_start": 594,
"char_end": 605
},
{
"id": "H93-1076.14",
"char_start": 610,
"char_end": 636
},
{
"id": "H93-1076.15",
"char_start": 648,
"char_end": 666
},
{
"id": "H93-1076.16",
"char_start": 671,
"char_end": 705
},
{
"id": "H93-1076.17",
"char_start": 792,
"char_end": 838
},
{
"id": "H93-1076.1",
"char_start": 0,
"char_end": 32
},
{
"id": "H93-1076.2",
"char_start": 36,
"char_end": 45
}
] | [
{
"label": 5,
"arg1": "H93-1076.3",
"arg2": "H93-1076.4",
"reverse": true
},
{
"label": 4,
"arg1": "H93-1076.5",
"arg2": "H93-1076.6",
"reverse": false
},
{
"label": 6,
"arg1": "H93-1076.7",
"arg2": "H93-1076.8",
"reverse": false
},
{
"label": 1,
"arg1": "H93-1076.9",
"arg2": "H93-1076.10",
"reverse": false
}
] |
A97-1028 | A Statistical Profile Of The Named Entity Task
|
In this paper we present a statistical profile of the Named Entity task, a specific information extraction task for which corpora in several languages are available. Using the results of the statistical analysis, we propose an algorithm for lower bound estimation for Named Entity corpora and discuss the significance of the cross-lingual comparisons provided by the analysis.
| [
{
"id": "A97-1028.1",
"char_start": 28,
"char_end": 47
},
{
"id": "A97-1028.2",
"char_start": 55,
"char_end": 72
},
{
"id": "A97-1028.3",
"char_start": 85,
"char_end": 112
},
{
"id": "A97-1028.4",
"char_start": 123,
"char_end": 130
},
{
"id": "A97-1028.5",
"char_start": 142,
"char_end": 151
},
{
"id": "A97-1028.6",
"char_start": 177,
"char_end": 184
},
{
"id": "A97-1028.7",
"char_start": 192,
"char_end": 212
},
{
"id": "A97-1028.8",
"char_start": 228,
"char_end": 237
},
{
"id": "A97-1028.9",
"char_start": 242,
"char_end": 264
},
{
"id": "A97-1028.10",
"char_start": 269,
"char_end": 289
},
{
"id": "A97-1028.11",
"char_start": 326,
"char_end": 351
},
{
"id": "A97-1028.12",
"char_start": 368,
"char_end": 376
}
] | [
{
"label": 3,
"arg1": "A97-1028.1",
"arg2": "A97-1028.2",
"reverse": false
},
{
"label": 3,
"arg1": "A97-1028.4",
"arg2": "A97-1028.5",
"reverse": true
},
{
"label": 2,
"arg1": "A97-1028.6",
"arg2": "A97-1028.7",
"reverse": true
},
{
"label": 1,
"arg1": "A97-1028.8",
"arg2": "A97-1028.9",
"reverse": false
},
{
"label": 5,
"arg1": "A97-1028.11",
"arg2": "A97-1028.12",
"reverse": true
}
] |
H05-1115 | Using Random Walks For Question-Focused Sentence Retrieval
|
We consider the problem of question-focused sentence retrieval from complex news articles describing multi-event stories published over time. Annotators generated a list of questions central to understanding each story in our corpus. Because of the dynamic nature of the stories, many questions are time-sensitive (e.g. "How many victims have been found?"). Judges found sentences providing an answer to each question. To address the sentence retrieval problem, we apply a stochastic, graph-based method for comparing the relative importance of the textual units, which was previously used successfully for generic summarization. Currently, we present a topic-sensitive version of our method and hypothesize that it can outperform a competitive baseline, which compares the similarity of each sentence to the input question via IDF-weighted word overlap. In our experiments, the method achieves a TRDR score that is significantly higher than that of the baseline.
| [
{
"id": "H05-1115.1",
"char_start": 28,
"char_end": 63
},
{
"id": "H05-1115.2",
"char_start": 77,
"char_end": 90
},
{
"id": "H05-1115.3",
"char_start": 102,
"char_end": 141
},
{
"id": "H05-1115.4",
"char_start": 143,
"char_end": 153
},
{
"id": "H05-1115.5",
"char_start": 174,
"char_end": 183
},
{
"id": "H05-1115.6",
"char_start": 214,
"char_end": 219
},
{
"id": "H05-1115.7",
"char_start": 227,
"char_end": 233
},
{
"id": "H05-1115.8",
"char_start": 272,
"char_end": 279
},
{
"id": "H05-1115.9",
"char_start": 286,
"char_end": 295
},
{
"id": "H05-1115.10",
"char_start": 359,
"char_end": 365
},
{
"id": "H05-1115.11",
"char_start": 372,
"char_end": 381
},
{
"id": "H05-1115.12",
"char_start": 395,
"char_end": 401
},
{
"id": "H05-1115.13",
"char_start": 410,
"char_end": 418
},
{
"id": "H05-1115.14",
"char_start": 435,
"char_end": 461
},
{
"id": "H05-1115.15",
"char_start": 474,
"char_end": 504
},
{
"id": "H05-1115.16",
"char_start": 550,
"char_end": 563
},
{
"id": "H05-1115.17",
"char_start": 608,
"char_end": 629
},
{
"id": "H05-1115.18",
"char_start": 686,
"char_end": 692
},
{
"id": "H05-1115.19",
"char_start": 746,
"char_end": 754
},
{
"id": "H05-1115.20",
"char_start": 775,
"char_end": 785
},
{
"id": "H05-1115.21",
"char_start": 794,
"char_end": 802
},
{
"id": "H05-1115.22",
"char_start": 816,
"char_end": 824
},
{
"id": "H05-1115.23",
"char_start": 829,
"char_end": 854
},
{
"id": "H05-1115.24",
"char_start": 880,
"char_end": 886
},
{
"id": "H05-1115.25",
"char_start": 898,
"char_end": 908
},
{
"id": "H05-1115.26",
"char_start": 955,
"char_end": 963
}
] | [
{
"label": 1,
"arg1": "H05-1115.1",
"arg2": "H05-1115.2",
"reverse": false
},
{
"label": 4,
"arg1": "H05-1115.6",
"arg2": "H05-1115.7",
"reverse": false
},
{
"label": 4,
"arg1": "H05-1115.11",
"arg2": "H05-1115.12",
"reverse": true
},
{
"label": 1,
"arg1": "H05-1115.14",
"arg2": "H05-1115.15",
"reverse": true
},
{
"label": 6,
"arg1": "H05-1115.18",
"arg2": "H05-1115.19",
"reverse": false
},
{
"label": 3,
"arg1": "H05-1115.20",
"arg2": "H05-1115.21",
"reverse": false
},
{
"label": 2,
"arg1": "H05-1115.24",
"arg2": "H05-1115.25",
"reverse": false
}
] |
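The baseline named in the H05-1115 abstract scores each sentence against the question by IDF-weighted word overlap. The sketch below is one common reading of that phrase, summing IDF weights over shared word types; it is an illustration, not the authors' implementation, and the tokenizer and idf table are placeholders.

import re

def idf_weighted_overlap(question: str, sentence: str, idf: dict) -> float:
    # Sum the IDF weights of word types shared by question and sentence;
    # words missing from the idf table default to zero weight.
    tokens = lambda s: set(re.findall(r"[a-z0-9]+", s.lower()))
    return sum(idf.get(w, 0.0) for w in tokens(question) & tokens(sentence))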
J89-4003 | A Formal Model For Context-Free Languages Augmented With Reduplication
|
A model is presented to characterize the class of languages obtained by adding reduplication to context-free languages. The model is a pushdown automaton augmented with the ability to check reduplication by using the stack in a new way. The class of languages generated is shown to lie strictly between the context-free languages and the indexed languages. The model appears capable of accommodating the sort of reduplications that have been observed to occur in natural languages, but it excludes many of the unnatural constructions that other formal models have permitted.
| [
{
"id": "J89-4003.1",
"char_start": 3,
"char_end": 8
},
{
"id": "J89-4003.2",
"char_start": 42,
"char_end": 60
},
{
"id": "J89-4003.3",
"char_start": 80,
"char_end": 93
},
{
"id": "J89-4003.4",
"char_start": 97,
"char_end": 119
},
{
"id": "J89-4003.5",
"char_start": 125,
"char_end": 130
},
{
"id": "J89-4003.6",
"char_start": 136,
"char_end": 154
},
{
"id": "J89-4003.7",
"char_start": 191,
"char_end": 204
},
{
"id": "J89-4003.8",
"char_start": 218,
"char_end": 223
},
{
"id": "J89-4003.9",
"char_start": 242,
"char_end": 260
},
{
"id": "J89-4003.10",
"char_start": 308,
"char_end": 330
},
{
"id": "J89-4003.11",
"char_start": 339,
"char_end": 356
},
{
"id": "J89-4003.12",
"char_start": 362,
"char_end": 367
},
{
"id": "J89-4003.13",
"char_start": 413,
"char_end": 427
},
{
"id": "J89-4003.14",
"char_start": 464,
"char_end": 481
},
{
"id": "J89-4003.15",
"char_start": 521,
"char_end": 534
},
{
"id": "J89-4003.16",
"char_start": 546,
"char_end": 559
}
] | [
{
"label": 3,
"arg1": "J89-4003.1",
"arg2": "J89-4003.2",
"reverse": false
},
{
"label": 4,
"arg1": "J89-4003.3",
"arg2": "J89-4003.4",
"reverse": false
},
{
"label": 1,
"arg1": "J89-4003.6",
"arg2": "J89-4003.8",
"reverse": true
},
{
"label": 4,
"arg1": "J89-4003.13",
"arg2": "J89-4003.14",
"reverse": false
}
] |
I05-6010 | Some remarks on the Annotation of Quantifying Noun Groups in Treebanks
|
This article is devoted to the problem of quantifying noun groups in German. After a thorough description of the phenomena, the results of corpus-based investigations are described. Moreover, some examples are given that underline the necessity of integrating some kind of information other than grammar sensu stricto into the treebank. We argue that a more sophisticated and fine-grained annotation in the tree-bank would have very positive effects on stochastic parsers trained on the tree-bank and on grammars induced from the treebank, and it would make the treebank more valuable as a source of data for theoretical linguistic investigations. The information gained from corpus research and the analyses that are proposed are realized in the framework of SILVA, a parsing and extraction tool for German text corpora.
| [
{
"id": "I05-6010.1",
"char_start": 43,
"char_end": 66
},
{
"id": "I05-6010.2",
"char_start": 70,
"char_end": 76
},
{
"id": "I05-6010.3",
"char_start": 140,
"char_end": 167
},
{
"id": "I05-6010.4",
"char_start": 297,
"char_end": 318
},
{
"id": "I05-6010.5",
"char_start": 328,
"char_end": 336
},
{
"id": "I05-6010.6",
"char_start": 390,
"char_end": 400
},
{
"id": "I05-6010.7",
"char_start": 408,
"char_end": 417
},
{
"id": "I05-6010.8",
"char_start": 453,
"char_end": 471
},
{
"id": "I05-6010.9",
"char_start": 487,
"char_end": 496
},
{
"id": "I05-6010.10",
"char_start": 504,
"char_end": 512
},
{
"id": "I05-6010.11",
"char_start": 530,
"char_end": 538
},
{
"id": "I05-6010.12",
"char_start": 562,
"char_end": 570
},
{
"id": "I05-6010.13",
"char_start": 590,
"char_end": 604
},
{
"id": "I05-6010.14",
"char_start": 609,
"char_end": 646
},
{
"id": "I05-6010.15",
"char_start": 676,
"char_end": 691
},
{
"id": "I05-6010.16",
"char_start": 760,
"char_end": 765
},
{
"id": "I05-6010.17",
"char_start": 769,
"char_end": 776
},
{
"id": "I05-6010.18",
"char_start": 781,
"char_end": 796
},
{
"id": "I05-6010.19",
"char_start": 801,
"char_end": 820
}
] | [
{
"label": 4,
"arg1": "I05-6010.1",
"arg2": "I05-6010.2",
"reverse": false
},
{
"label": 3,
"arg1": "I05-6010.4",
"arg2": "I05-6010.5",
"reverse": true
},
{
"label": 4,
"arg1": "I05-6010.6",
"arg2": "I05-6010.7",
"reverse": false
},
{
"label": 1,
"arg1": "I05-6010.12",
"arg2": "I05-6010.14",
"reverse": false
},
{
"label": 1,
"arg1": "I05-6010.18",
"arg2": "I05-6010.19",
"reverse": false
}
] |
P83-1004 | Formal Constraints on Metarules
|
Metagrammatical formalisms that combine context-free phrase structure rules and metarules (MPS grammars) allow concise statement of generalizations about the syntax of natural languages. Unconstrained MPS grammars, unfortunately, are not computationally safe. We evaluate several proposals for constraining them, basing our assessment on computational tractability and explanatory adequacy. We show that none of them satisfies both criteria, and suggest new directions for research on alternative metagrammatical formalisms.
| [
{
"id": "P83-1004.1",
"char_start": 1,
"char_end": 27
},
{
"id": "P83-1004.2",
"char_start": 41,
"char_end": 76
},
{
"id": "P83-1004.3",
"char_start": 81,
"char_end": 105
},
{
"id": "P83-1004.4",
"char_start": 159,
"char_end": 165
},
{
"id": "P83-1004.5",
"char_start": 169,
"char_end": 186
},
{
"id": "P83-1004.6",
"char_start": 188,
"char_end": 214
},
{
"id": "P83-1004.7",
"char_start": 339,
"char_end": 390
},
{
"id": "P83-1004.8",
"char_start": 498,
"char_end": 524
}
] | [
{
"label": 4,
"arg1": "P83-1004.1",
"arg2": "P83-1004.2",
"reverse": true
},
{
"label": 4,
"arg1": "P83-1004.4",
"arg2": "P83-1004.5",
"reverse": false
}
] |
P87-1022 | A CENTERING APPROACH TO PRONOUNS
|
In this paper we present a formalization of the centering approach to modeling attentional structure in discourse and use it as the basis for an algorithm to track discourse context and bind pronouns. As described in [GJW86], the process of centering attention on entities in the discourse gives rise to the intersentential transitional states of continuing, retaining and shifting. We propose an extension to these states which handles some additional cases of multiple ambiguous pronouns. The algorithm has been implemented in an HPSG natural language system which serves as the interface to a database query application.
| [
{
"id": "P87-1022.1",
"char_start": 28,
"char_end": 41
},
{
"id": "P87-1022.2",
"char_start": 49,
"char_end": 67
},
{
"id": "P87-1022.3",
"char_start": 80,
"char_end": 114
},
{
"id": "P87-1022.4",
"char_start": 146,
"char_end": 155
},
{
"id": "P87-1022.5",
"char_start": 165,
"char_end": 182
},
{
"id": "P87-1022.6",
"char_start": 192,
"char_end": 200
},
{
"id": "P87-1022.7",
"char_start": 242,
"char_end": 290
},
{
"id": "P87-1022.8",
"char_start": 309,
"char_end": 382
},
{
"id": "P87-1022.9",
"char_start": 417,
"char_end": 423
},
{
"id": "P87-1022.10",
"char_start": 472,
"char_end": 490
},
{
"id": "P87-1022.11",
"char_start": 496,
"char_end": 505
},
{
"id": "P87-1022.12",
"char_start": 533,
"char_end": 561
},
{
"id": "P87-1022.13",
"char_start": 597,
"char_end": 623
}
] | [
{
"label": 3,
"arg1": "P87-1022.1",
"arg2": "P87-1022.3",
"reverse": false
},
{
"label": 1,
"arg1": "P87-1022.4",
"arg2": "P87-1022.5",
"reverse": false
},
{
"label": 1,
"arg1": "P87-1022.12",
"arg2": "P87-1022.13",
"reverse": false
}
] |
P95-1013 | Compilation of HPSG to TAG
|
We present an implemented compilation algorithm that translates HPSG into lexicalized feature-based TAG, relating concepts of the two theories. While HPSG has a more elaborated principle-based theory of possible phrase structures, TAG provides the means to represent lexicalized structures more explicitly. Our objectives are met by giving clear definitions that determine the projection of structures from the lexicon, and identify maximal projections, auxiliary trees and foot nodes.
| [
{
"id": "P95-1013.1",
"char_start": 27,
"char_end": 48
},
{
"id": "P95-1013.2",
"char_start": 65,
"char_end": 69
},
{
"id": "P95-1013.3",
"char_start": 75,
"char_end": 104
},
{
"id": "P95-1013.4",
"char_start": 135,
"char_end": 143
},
{
"id": "P95-1013.5",
"char_start": 151,
"char_end": 155
},
{
"id": "P95-1013.6",
"char_start": 178,
"char_end": 200
},
{
"id": "P95-1013.7",
"char_start": 213,
"char_end": 230
},
{
"id": "P95-1013.8",
"char_start": 232,
"char_end": 235
},
{
"id": "P95-1013.9",
"char_start": 268,
"char_end": 290
},
{
"id": "P95-1013.10",
"char_start": 378,
"char_end": 402
},
{
"id": "P95-1013.11",
"char_start": 412,
"char_end": 419
},
{
"id": "P95-1013.12",
"char_start": 434,
"char_end": 453
},
{
"id": "P95-1013.13",
"char_start": 455,
"char_end": 470
},
{
"id": "P95-1013.14",
"char_start": 475,
"char_end": 485
}
] | [
{
"label": 4,
"arg1": "P95-1013.5",
"arg2": "P95-1013.6",
"reverse": true
},
{
"label": 1,
"arg1": "P95-1013.8",
"arg2": "P95-1013.9",
"reverse": false
},
{
"label": 4,
"arg1": "P95-1013.10",
"arg2": "P95-1013.11",
"reverse": false
}
] |
P97-1002 | Fast Context-Free Parsing Requires Fast Boolean Matrix Multiplication
|
Valiant showed that Boolean matrix multiplication (BMM) can be used for CFG parsing. We prove a dual result: CFG parsers running in time O(|G||w|^{3-e}) on a grammar G and a string w can be used to multiply m x m Boolean matrices in time O(m^{3-e/3}). In the process we also provide a formal definition of parsing motivated by an informal notion due to Lang. Our result establishes one of the first limitations on general CFG parsing: a fast, practical CFG parser would yield a fast, practical BMM algorithm, which is not believed to exist.
| [
{
"id": "P97-1002.1",
"char_start": 21,
"char_end": 56
},
{
"id": "P97-1002.2",
"char_start": 73,
"char_end": 84
},
{
"id": "P97-1002.3",
"char_start": 110,
"char_end": 121
},
{
"id": "P97-1002.4",
"char_start": 133,
"char_end": 150
},
{
"id": "P97-1002.5",
"char_start": 156,
"char_end": 165
},
{
"id": "P97-1002.6",
"char_start": 172,
"char_end": 180
},
{
"id": "P97-1002.7",
"char_start": 205,
"char_end": 227
},
{
"id": "P97-1002.8",
"char_start": 231,
"char_end": 245
},
{
"id": "P97-1002.9",
"char_start": 280,
"char_end": 297
},
{
"id": "P97-1002.10",
"char_start": 301,
"char_end": 308
},
{
"id": "P97-1002.11",
"char_start": 417,
"char_end": 428
},
{
"id": "P97-1002.12",
"char_start": 448,
"char_end": 458
},
{
"id": "P97-1002.13",
"char_start": 489,
"char_end": 502
}
] | [
{
"label": 1,
"arg1": "P97-1002.1",
"arg2": "P97-1002.2",
"reverse": false
},
{
"label": 3,
"arg1": "P97-1002.3",
"arg2": "P97-1002.4",
"reverse": true
},
{
"label": 3,
"arg1": "P97-1002.9",
"arg2": "P97-1002.10",
"reverse": false
},
{
"label": 2,
"arg1": "P97-1002.12",
"arg2": "P97-1002.13",
"reverse": false
}
] |
P97-1040 | Efficient Generation in Primitive Optimality Theory
|
This paper introduces primitive Optimality Theory (OTP), a linguistically motivated formalization of OT. OTP specifies the class of autosegmental representations, the universal generator Gen, and the two simple families of permissible constraints. In contrast to less restricted theories using Generalized Alignment, OTP's optimal surface forms can be generated with finite-state methods adapted from (Ellison, 1994). Unfortunately, these methods take time exponential in the size of the grammar. Indeed the generation problem is shown NP-complete in this sense. However, techniques are discussed for making Ellison's approach fast in the typical case, including a simple trick that alone provides a 100-fold speedup on a grammar fragment of moderate size. One avenue for future improvements is a new finite-state notion, factored automata, where regular languages are represented compactly via formal intersections of FSAs.
| [
{
"id": "P97-1040.1",
"char_start": 23,
"char_end": 56
},
{
"id": "P97-1040.2",
"char_start": 102,
"char_end": 104
},
{
"id": "P97-1040.3",
"char_start": 106,
"char_end": 109
},
{
"id": "P97-1040.4",
"char_start": 124,
"char_end": 162
},
{
"id": "P97-1040.5",
"char_start": 168,
"char_end": 191
},
{
"id": "P97-1040.6",
"char_start": 224,
"char_end": 247
},
{
"id": "P97-1040.7",
"char_start": 280,
"char_end": 288
},
{
"id": "P97-1040.8",
"char_start": 295,
"char_end": 316
},
{
"id": "P97-1040.9",
"char_start": 318,
"char_end": 321
},
{
"id": "P97-1040.10",
"char_start": 332,
"char_end": 345
},
{
"id": "P97-1040.11",
"char_start": 368,
"char_end": 388
},
{
"id": "P97-1040.12",
"char_start": 439,
"char_end": 446
},
{
"id": "P97-1040.13",
"char_start": 452,
"char_end": 495
},
{
"id": "P97-1040.14",
"char_start": 508,
"char_end": 526
},
{
"id": "P97-1040.15",
"char_start": 536,
"char_end": 547
},
{
"id": "P97-1040.16",
"char_start": 608,
"char_end": 626
},
{
"id": "P97-1040.17",
"char_start": 722,
"char_end": 729
},
{
"id": "P97-1040.18",
"char_start": 801,
"char_end": 820
},
{
"id": "P97-1040.19",
"char_start": 822,
"char_end": 839
},
{
"id": "P97-1040.20",
"char_start": 847,
"char_end": 864
},
{
"id": "P97-1040.21",
"char_start": 895,
"char_end": 923
}
] | [
{
"label": 3,
"arg1": "P97-1040.1",
"arg2": "P97-1040.2",
"reverse": false
},
{
"label": 1,
"arg1": "P97-1040.7",
"arg2": "P97-1040.8",
"reverse": true
},
{
"label": 1,
"arg1": "P97-1040.10",
"arg2": "P97-1040.11",
"reverse": true
},
{
"label": 3,
"arg1": "P97-1040.12",
"arg2": "P97-1040.13",
"reverse": true
},
{
"label": 3,
"arg1": "P97-1040.14",
"arg2": "P97-1040.15",
"reverse": true
},
{
"label": 3,
"arg1": "P97-1040.20",
"arg2": "P97-1040.21",
"reverse": true
}
] |
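A dump in this shape is straightforward to sanity-check mechanically. Building on the parse_record sketch above, the check below flags the two likeliest corruptions after hand editing: spans whose end does not exceed their start, and relations whose arguments name a missing entity id. The list of checks is illustrative rather than a fixed schema.

def validate_record(rec: dict) -> list:
    problems = []
    known_ids = {e["id"] for e in rec["entities"]}
    for e in rec["entities"]:
        if e["char_end"] <= e["char_start"]:
            problems.append(f"{e['id']}: empty or inverted span")
    for r in rec["relations"]:
        for arg in (r["arg1"], r["arg2"]):
            if arg not in known_ids:
                problems.append(f"label {r['label']}: unknown entity {arg}")
    return problems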