id
stringlengths
8
8
title
stringlengths
18
138
abstract
stringlengths
177
1.96k
entities
list
relations
list
H01-1001
Activity detection for information access to oral communication
Oral communication is ubiquitous and carries important information yet it is also time consuming to document. Given the development of storage media and networks one could just record and store a conversation for documentation. The question is, however, how an interesting information piece would be found in a large database . Traditional information retrieval techniques use a histogram of keywords as the document representation but oral communication may offer additional indices such as the time and place of the rejoinder and the attendance. An alternative index could be the activity such as discussing, planning, informing, story-telling, etc. This paper addresses the problem of the automatic detection of those activities in meeting situation and everyday rejoinders. Several extensions of this basic idea are being discussed and/or evaluated: Similar to activities one can define subsets of larger database and detect those automatically which is shown on a large database of TV shows . Emotions and other indices such as the dominance distribution of speakers might be available on the surface and could be used directly. Despite the small size of the databases used some results about the effectiveness of these indices can be obtained.
[ { "id": "H01-1001.1", "char_start": 1, "char_end": 19 }, { "id": "H01-1001.2", "char_start": 136, "char_end": 162 }, { "id": "H01-1001.3", "char_start": 197, "char_end": 209 }, { "id": "H01-1001.4", "char_start": 312, "char_end": 326 }, { "id": "H01-1001.5", "char_start": 341, "char_end": 373 }, { "id": "H01-1001.6", "char_start": 380, "char_end": 389 }, { "id": "H01-1001.7", "char_start": 393, "char_end": 401 }, { "id": "H01-1001.8", "char_start": 409, "char_end": 432 }, { "id": "H01-1001.9", "char_start": 437, "char_end": 455 }, { "id": "H01-1001.10", "char_start": 477, "char_end": 484 }, { "id": "H01-1001.11", "char_start": 564, "char_end": 569 }, { "id": "H01-1001.12", "char_start": 693, "char_end": 712 }, { "id": "H01-1001.13", "char_start": 910, "char_end": 918 }, { "id": "H01-1001.14", "char_start": 976, "char_end": 984 }, { "id": "H01-1001.15", "char_start": 988, "char_end": 996 }, { "id": "H01-1001.16", "char_start": 999, "char_end": 1007 }, { "id": "H01-1001.17", "char_start": 1018, "char_end": 1025 }, { "id": "H01-1001.18", "char_start": 1038, "char_end": 1072 }, { "id": "H01-1001.19", "char_start": 1099, "char_end": 1106 }, { "id": "H01-1001.20", "char_start": 1165, "char_end": 1174 }, { "id": "H01-1001.21", "char_start": 1226, "char_end": 1233 } ]
[ { "label": 1, "arg1": "H01-1001.5", "arg2": "H01-1001.7", "reverse": true }, { "label": 1, "arg1": "H01-1001.9", "arg2": "H01-1001.10", "reverse": false }, { "label": 4, "arg1": "H01-1001.14", "arg2": "H01-1001.15", "reverse": true } ]
H01-1017
Dialogue Interaction with the DARPA Communicator Infrastructure: The Development of Useful Software
To support engaging human users in robust, mixed-initiative speech dialogue interactions which reach beyond current capabilities in dialogue systems , the DARPA Communicator program [1] is funding the development of a distributed message-passing infrastructure for dialogue systems which all Communicator participants are using. In this presentation, we describe the features of and requirements for a genuinely useful software infrastructure for this purpose.
[ { "id": "H01-1017.1", "char_start": 44, "char_end": 89 }, { "id": "H01-1017.2", "char_start": 133, "char_end": 149 }, { "id": "H01-1017.3", "char_start": 156, "char_end": 182 }, { "id": "H01-1017.4", "char_start": 219, "char_end": 261 }, { "id": "H01-1017.5", "char_start": 266, "char_end": 282 }, { "id": "H01-1017.6", "char_start": 293, "char_end": 305 }, { "id": "H01-1017.7", "char_start": 384, "char_end": 396 }, { "id": "H01-1017.8", "char_start": 420, "char_end": 443 } ]
[ { "label": 3, "arg1": "H01-1017.4", "arg2": "H01-1017.5", "reverse": false } ]
H01-1041
Interlingua-Based Broad-Coverage Korean-to-English Translation in CCLING
At MIT Lincoln Laboratory, we have been developing a Korean-to-English machine translation system CCLINC (Common Coalition Language System at Lincoln Laboratory) . The CCLINC Korean-to-English translation system consists of two core modules , language understanding and generation modules mediated by a language neutral meaning representation called a semantic frame . The key features of the system include: (i) Robust efficient parsing of Korean (a verb final language with overt case markers , relatively free word order , and frequent omissions of arguments ). (ii) High quality translation via word sense disambiguation and accurate word order generation of the target language . (iii) Rapid system development and porting to new domains via knowledge-based automated acquisition of grammars . Having been trained on Korean newspaper articles on missiles and chemical biological warfare, the system produces the translation output sufficient for content understanding of the original document .
[ { "id": "H01-1041.1", "char_start": 54, "char_end": 98 }, { "id": "H01-1041.2", "char_start": 99, "char_end": 162 }, { "id": "H01-1041.3", "char_start": 169, "char_end": 212 }, { "id": "H01-1041.4", "char_start": 229, "char_end": 241 }, { "id": "H01-1041.5", "char_start": 244, "char_end": 289 }, { "id": "H01-1041.6", "char_start": 304, "char_end": 343 }, { "id": "H01-1041.7", "char_start": 353, "char_end": 367 }, { "id": "H01-1041.8", "char_start": 431, "char_end": 438 }, { "id": "H01-1041.9", "char_start": 442, "char_end": 448 }, { "id": "H01-1041.10", "char_start": 452, "char_end": 471 }, { "id": "H01-1041.11", "char_start": 477, "char_end": 495 }, { "id": "H01-1041.12", "char_start": 509, "char_end": 524 }, { "id": "H01-1041.13", "char_start": 553, "char_end": 562 }, { "id": "H01-1041.14", "char_start": 584, "char_end": 595 }, { "id": "H01-1041.15", "char_start": 600, "char_end": 625 }, { "id": "H01-1041.16", "char_start": 639, "char_end": 660 }, { "id": "H01-1041.17", "char_start": 668, "char_end": 683 }, { "id": "H01-1041.18", "char_start": 692, "char_end": 716 }, { "id": "H01-1041.19", "char_start": 736, "char_end": 743 }, { "id": "H01-1041.20", "char_start": 748, "char_end": 797 }, { "id": "H01-1041.21", "char_start": 823, "char_end": 848 }, { "id": "H01-1041.22", "char_start": 918, "char_end": 936 }, { "id": "H01-1041.23", "char_start": 981, "char_end": 998 } ]
[ { "label": 4, "arg1": "H01-1041.3", "arg2": "H01-1041.4", "reverse": true }, { "label": 1, "arg1": "H01-1041.8", "arg2": "H01-1041.9", "reverse": false }, { "label": 3, "arg1": "H01-1041.10", "arg2": "H01-1041.11", "reverse": true }, { "label": 1, "arg1": "H01-1041.14", "arg2": "H01-1041.15", "reverse": true } ]
H01-1042
Is That Your Final Answer?
The purpose of this research is to test the efficacy of applying automated evaluation techniques , originally devised for the evaluation of human language learners , to the output of machine translation (MT) systems . We believe that these evaluation techniques will provide information about both the human language learning process , the translation process and the development of machine translation systems . This, the first experiment in a series of experiments, looks at the intelligibility of MT output . A language learning experiment showed that assessors can differentiate native from non-native language essays in less than 100 words . Even more illuminating was the factors on which the assessors made their decisions. We tested this to see if similar criteria could be elicited from duplicating the experiment using machine translation output . Subjects were given a set of up to six extracts of translated newswire text . Some of the extracts were expert human translations , others were machine translation outputs . The subjects were given three minutes per extract to determine whether they believed the sample output to be an expert human translation or a machine translation . Additionally, they were asked to mark the word at which they made this decision. The results of this experiment, along with a preliminary analysis of the factors involved in the decision making process will be presented here.
[ { "id": "H01-1042.1", "char_start": 66, "char_end": 97 }, { "id": "H01-1042.2", "char_start": 141, "char_end": 164 }, { "id": "H01-1042.3", "char_start": 174, "char_end": 180 }, { "id": "H01-1042.4", "char_start": 184, "char_end": 216 }, { "id": "H01-1042.5", "char_start": 241, "char_end": 262 }, { "id": "H01-1042.6", "char_start": 303, "char_end": 334 }, { "id": "H01-1042.7", "char_start": 341, "char_end": 360 }, { "id": "H01-1042.8", "char_start": 369, "char_end": 380 }, { "id": "H01-1042.9", "char_start": 384, "char_end": 411 }, { "id": "H01-1042.10", "char_start": 482, "char_end": 497 }, { "id": "H01-1042.11", "char_start": 501, "char_end": 510 }, { "id": "H01-1042.12", "char_start": 515, "char_end": 543 }, { "id": "H01-1042.13", "char_start": 556, "char_end": 565 }, { "id": "H01-1042.14", "char_start": 584, "char_end": 622 }, { "id": "H01-1042.15", "char_start": 640, "char_end": 645 }, { "id": "H01-1042.16", "char_start": 700, "char_end": 709 }, { "id": "H01-1042.17", "char_start": 830, "char_end": 856 }, { "id": "H01-1042.18", "char_start": 910, "char_end": 934 }, { "id": "H01-1042.19", "char_start": 963, "char_end": 988 }, { "id": "H01-1042.20", "char_start": 1003, "char_end": 1030 }, { "id": "H01-1042.21", "char_start": 1145, "char_end": 1169 }, { "id": "H01-1042.22", "char_start": 1175, "char_end": 1194 }, { "id": "H01-1042.23", "char_start": 1239, "char_end": 1243 } ]
[ { "label": 1, "arg1": "H01-1042.1", "arg2": "H01-1042.3", "reverse": false }, { "label": 3, "arg1": "H01-1042.10", "arg2": "H01-1042.11", "reverse": false } ]
H01-1049
Listen-Communicate-Show (LCS): Spoken Language Command of Agent-based Remote Information Access
Listen-Communicate-Show (LCS) is a new paradigm for human interaction with data sources . We integrate a spoken language understanding system with intelligent mobile agents that mediate between users and information sources . We have built and will demonstrate an application of this approach called LCS-Marine . Using LCS-Marine , tactical personnel can converse with their logistics system to place a supply or information request. The request is passed to a mobile, intelligent agent for execution at the appropriate database . Requestors can also instruct the system to notify them when the status of a request changes or when a request is complete. We have demonstrated this capability in several field exercises with the Marines and are currently developing applications of this technology in new domains .
[ { "id": "H01-1049.1", "char_start": 1, "char_end": 30 }, { "id": "H01-1049.2", "char_start": 53, "char_end": 88 }, { "id": "H01-1049.3", "char_start": 106, "char_end": 142 }, { "id": "H01-1049.4", "char_start": 148, "char_end": 173 }, { "id": "H01-1049.5", "char_start": 195, "char_end": 200 }, { "id": "H01-1049.6", "char_start": 205, "char_end": 224 }, { "id": "H01-1049.7", "char_start": 301, "char_end": 311 }, { "id": "H01-1049.8", "char_start": 320, "char_end": 330 }, { "id": "H01-1049.9", "char_start": 462, "char_end": 487 }, { "id": "H01-1049.10", "char_start": 521, "char_end": 529 }, { "id": "H01-1049.11", "char_start": 532, "char_end": 542 }, { "id": "H01-1049.12", "char_start": 608, "char_end": 615 }, { "id": "H01-1049.13", "char_start": 634, "char_end": 641 }, { "id": "H01-1049.14", "char_start": 800, "char_end": 811 } ]
[ { "label": 4, "arg1": "H01-1049.3", "arg2": "H01-1049.4", "reverse": true } ]
H01-1058
On Combining Language Models : Oracle Approach
In this paper, we address the problem of combining several language models (LMs) . We find that simple interpolation methods , like log-linear and linear interpolation , improve the performance but fall short of the performance of an oracle . The oracle knows the reference word string and selects the word string with the best performance (typically, word or semantic error rate ) from a list of word strings , where each word string has been obtained by using a different LM . Actually, the oracle acts like a dynamic combiner with hard decisions using the reference . We provide experimental results that clearly show the need for a dynamic language model combination to improve the performance further . We suggest a method that mimics the behavior of the oracle using a neural network or a decision tree . The method amounts to tagging LMs with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence .
[ { "id": "H01-1058.1", "char_start": 60, "char_end": 81 }, { "id": "H01-1058.2", "char_start": 104, "char_end": 125 }, { "id": "H01-1058.3", "char_start": 133, "char_end": 168 }, { "id": "H01-1058.4", "char_start": 183, "char_end": 194 }, { "id": "H01-1058.5", "char_start": 217, "char_end": 228 }, { "id": "H01-1058.6", "char_start": 235, "char_end": 241 }, { "id": "H01-1058.7", "char_start": 248, "char_end": 254 }, { "id": "H01-1058.8", "char_start": 265, "char_end": 286 }, { "id": "H01-1058.9", "char_start": 303, "char_end": 314 }, { "id": "H01-1058.10", "char_start": 329, "char_end": 340 }, { "id": "H01-1058.11", "char_start": 353, "char_end": 380 }, { "id": "H01-1058.12", "char_start": 398, "char_end": 410 }, { "id": "H01-1058.13", "char_start": 424, "char_end": 435 }, { "id": "H01-1058.14", "char_start": 475, "char_end": 477 }, { "id": "H01-1058.15", "char_start": 494, "char_end": 500 }, { "id": "H01-1058.16", "char_start": 513, "char_end": 529 }, { "id": "H01-1058.17", "char_start": 535, "char_end": 549 }, { "id": "H01-1058.18", "char_start": 560, "char_end": 569 }, { "id": "H01-1058.19", "char_start": 637, "char_end": 671 }, { "id": "H01-1058.20", "char_start": 687, "char_end": 698 }, { "id": "H01-1058.21", "char_start": 761, "char_end": 767 }, { "id": "H01-1058.22", "char_start": 776, "char_end": 790 }, { "id": "H01-1058.23", "char_start": 796, "char_end": 809 }, { "id": "H01-1058.24", "char_start": 842, "char_end": 845 }, { "id": "H01-1058.25", "char_start": 851, "char_end": 870 }, { "id": "H01-1058.26", "char_start": 892, "char_end": 902 }, { "id": "H01-1058.27", "char_start": 924, "char_end": 926 }, { "id": "H01-1058.28", "char_start": 941, "char_end": 951 } ]
[ { "label": 2, "arg1": "H01-1058.2", "arg2": "H01-1058.4", "reverse": false }, { "label": 2, "arg1": "H01-1058.9", "arg2": "H01-1058.10", "reverse": false }, { "label": 2, "arg1": "H01-1058.13", "arg2": "H01-1058.14", "reverse": true }, { "label": 1, "arg1": "H01-1058.16", "arg2": "H01-1058.18", "reverse": true }, { "label": 2, "arg1": "H01-1058.19", "arg2": "H01-1058.20", "reverse": false }, { "label": 3, "arg1": "H01-1058.24", "arg2": "H01-1058.25", "reverse": true }, { "label": 3, "arg1": "H01-1058.27", "arg2": "H01-1058.28", "reverse": true } ]
H01-1070
Towards an Intelligent Multilingual Keyboard System
This paper proposes a practical approach employing n-gram models and error-correction rules for Thai key prediction and Thai-English language identification . The paper also proposes rule-reduction algorithm applying mutual information to reduce the error-correction rules . Our algorithm reported more than 99% accuracy in both language identification and key prediction .
[ { "id": "H01-1070.1", "char_start": 52, "char_end": 65 }, { "id": "H01-1070.2", "char_start": 70, "char_end": 92 }, { "id": "H01-1070.3", "char_start": 97, "char_end": 116 }, { "id": "H01-1070.4", "char_start": 121, "char_end": 157 }, { "id": "H01-1070.5", "char_start": 184, "char_end": 208 }, { "id": "H01-1070.6", "char_start": 218, "char_end": 236 }, { "id": "H01-1070.7", "char_start": 251, "char_end": 273 }, { "id": "H01-1070.8", "char_start": 313, "char_end": 321 }, { "id": "H01-1070.9", "char_start": 330, "char_end": 353 }, { "id": "H01-1070.10", "char_start": 358, "char_end": 372 } ]
[ { "label": 1, "arg1": "H01-1070.2", "arg2": "H01-1070.3", "reverse": false }, { "label": 1, "arg1": "H01-1070.6", "arg2": "H01-1070.7", "reverse": false }, { "label": 2, "arg1": "H01-1070.8", "arg2": "H01-1070.9", "reverse": true } ]
N01-1003
SPoT: A Trainable Sentence Planner
Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping , i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences . In this paper, we present SPoT , a sentence planner , and a new methodology for automatically training SPoT on the basis of feedback provided by human judges . We reconceptualize the task into two distinct phases. First, a very simple, randomized sentence-plan-generator (SPG) generates a potentially large list of possible sentence plans for a given text-plan input . Second, the sentence-plan-ranker (SPR) ranks the list of output sentence plans , and then selects the top-ranked plan . The SPR uses ranking rules automatically learned from training data . We show that the trained SPR learns to select a sentence plan whose rating on average is only 5% worse than the top human-ranked sentence plan .
[ { "id": "N01-1003.1", "char_start": 1, "char_end": 18 }, { "id": "N01-1003.2", "char_start": 81, "char_end": 97 }, { "id": "N01-1003.3", "char_start": 119, "char_end": 138 }, { "id": "N01-1003.4", "char_start": 154, "char_end": 165 }, { "id": "N01-1003.5", "char_start": 223, "char_end": 232 }, { "id": "N01-1003.6", "char_start": 261, "char_end": 265 }, { "id": "N01-1003.7", "char_start": 270, "char_end": 286 }, { "id": "N01-1003.8", "char_start": 338, "char_end": 342 }, { "id": "N01-1003.9", "char_start": 359, "char_end": 367 }, { "id": "N01-1003.10", "char_start": 380, "char_end": 392 }, { "id": "N01-1003.11", "char_start": 471, "char_end": 511 }, { "id": "N01-1003.12", "char_start": 559, "char_end": 573 }, { "id": "N01-1003.13", "char_start": 586, "char_end": 601 }, { "id": "N01-1003.14", "char_start": 616, "char_end": 642 }, { "id": "N01-1003.15", "char_start": 668, "char_end": 682 }, { "id": "N01-1003.16", "char_start": 717, "char_end": 721 }, { "id": "N01-1003.17", "char_start": 728, "char_end": 731 }, { "id": "N01-1003.18", "char_start": 737, "char_end": 750 }, { "id": "N01-1003.19", "char_start": 778, "char_end": 791 }, { "id": "N01-1003.20", "char_start": 819, "char_end": 822 }, { "id": "N01-1003.21", "char_start": 842, "char_end": 855 }, { "id": "N01-1003.22", "char_start": 906, "char_end": 936 } ]
[ { "label": 3, "arg1": "N01-1003.12", "arg2": "N01-1003.13", "reverse": false }, { "label": 1, "arg1": "N01-1003.18", "arg2": "N01-1003.19", "reverse": true }, { "label": 6, "arg1": "N01-1003.21", "arg2": "N01-1003.22", "reverse": false } ]
P01-1004
Low-cost, High-performance Translation Retrieval: Dumber is Better
In this paper, we compare the relative effects of segment order , segmentation and segment contiguity on the retrieval performance of a translation memory system . We take a selection of both bag-of-words and segment order-sensitive string comparison methods , and run each over both character- and word-segmented data , in combination with a range of local segment contiguity models (in the form of N-grams ). Over two distinct datasets , we find that indexing according to simple character bigrams produces a retrieval accuracy superior to any of the tested word N-gram models . Further,in their optimum configuration , bag-of-words methods are shown to be equivalent to segment order-sensitive methods in terms of retrieval accuracy , but much faster. We also provide evidence that our findings are scalable.
[ { "id": "P01-1004.1", "char_start": 51, "char_end": 64 }, { "id": "P01-1004.2", "char_start": 67, "char_end": 79 }, { "id": "P01-1004.3", "char_start": 84, "char_end": 102 }, { "id": "P01-1004.4", "char_start": 110, "char_end": 131 }, { "id": "P01-1004.5", "char_start": 137, "char_end": 162 }, { "id": "P01-1004.6", "char_start": 193, "char_end": 259 }, { "id": "P01-1004.7", "char_start": 285, "char_end": 319 }, { "id": "P01-1004.8", "char_start": 353, "char_end": 384 }, { "id": "P01-1004.9", "char_start": 401, "char_end": 408 }, { "id": "P01-1004.10", "char_start": 430, "char_end": 438 }, { "id": "P01-1004.11", "char_start": 454, "char_end": 462 }, { "id": "P01-1004.12", "char_start": 483, "char_end": 500 }, { "id": "P01-1004.13", "char_start": 512, "char_end": 530 }, { "id": "P01-1004.14", "char_start": 561, "char_end": 579 }, { "id": "P01-1004.15", "char_start": 607, "char_end": 620 }, { "id": "P01-1004.16", "char_start": 623, "char_end": 643 }, { "id": "P01-1004.17", "char_start": 674, "char_end": 705 }, { "id": "P01-1004.18", "char_start": 718, "char_end": 736 } ]
[ { "label": 2, "arg1": "P01-1004.4", "arg2": "P01-1004.5", "reverse": true }, { "label": 1, "arg1": "P01-1004.6", "arg2": "P01-1004.7", "reverse": false }, { "label": 1, "arg1": "P01-1004.11", "arg2": "P01-1004.12", "reverse": true }, { "label": 6, "arg1": "P01-1004.16", "arg2": "P01-1004.17", "reverse": false } ]
P01-1007
Guided Parsing of Range Concatenation Languages
The theoretical study of the range concatenation grammar [RCG] formalism has revealed many attractive properties which may be used in NLP . In particular, range concatenation languages [RCL] can be parsed in polynomial time and many classical grammatical formalisms can be translated into equivalent RCGs without increasing their worst-case parsing time complexity . For example, after translation into an equivalent RCG , any tree adjoining grammar can be parsed in O(n6) time . In this paper, we study a parsing technique whose purpose is to improve the practical efficiency of RCL parsers . The non-deterministic parsing choices of the main parser for a language L are directed by a guide which uses the shared derivation forest output by a prior RCL parser for a suitable superset of L . The results of a practical evaluation of this method on a wide coverage English grammar are given.
[ { "id": "P01-1007.1", "char_start": 30, "char_end": 73 }, { "id": "P01-1007.2", "char_start": 135, "char_end": 138 }, { "id": "P01-1007.3", "char_start": 156, "char_end": 191 }, { "id": "P01-1007.4", "char_start": 209, "char_end": 224 }, { "id": "P01-1007.5", "char_start": 244, "char_end": 266 }, { "id": "P01-1007.6", "char_start": 301, "char_end": 305 }, { "id": "P01-1007.7", "char_start": 331, "char_end": 365 }, { "id": "P01-1007.8", "char_start": 387, "char_end": 398 }, { "id": "P01-1007.9", "char_start": 418, "char_end": 421 }, { "id": "P01-1007.10", "char_start": 428, "char_end": 450 }, { "id": "P01-1007.11", "char_start": 468, "char_end": 478 }, { "id": "P01-1007.12", "char_start": 507, "char_end": 524 }, { "id": "P01-1007.13", "char_start": 581, "char_end": 592 }, { "id": "P01-1007.14", "char_start": 599, "char_end": 632 }, { "id": "P01-1007.15", "char_start": 640, "char_end": 651 }, { "id": "P01-1007.16", "char_start": 658, "char_end": 668 }, { "id": "P01-1007.17", "char_start": 687, "char_end": 692 }, { "id": "P01-1007.18", "char_start": 708, "char_end": 732 }, { "id": "P01-1007.19", "char_start": 751, "char_end": 761 }, { "id": "P01-1007.20", "char_start": 777, "char_end": 790 }, { "id": "P01-1007.21", "char_start": 851, "char_end": 880 } ]
[ { "label": 1, "arg1": "P01-1007.1", "arg2": "P01-1007.2", "reverse": false }, { "label": 3, "arg1": "P01-1007.3", "arg2": "P01-1007.4", "reverse": true }, { "label": 3, "arg1": "P01-1007.10", "arg2": "P01-1007.11", "reverse": true }, { "label": 1, "arg1": "P01-1007.12", "arg2": "P01-1007.13", "reverse": false }, { "label": 1, "arg1": "P01-1007.17", "arg2": "P01-1007.18", "reverse": true }, { "label": 1, "arg1": "P01-1007.19", "arg2": "P01-1007.20", "reverse": false } ]
P01-1008
Extracting Paraphrases from a Parallel Corpus
While paraphrasing is critical both for interpretation and generation of natural language , current systems use manual or semi-automatic methods to collect paraphrases . We present an unsupervised learning algorithm for identification of paraphrases from a corpus of multiple English translations of the same source text . Our approach yields phrasal and single word lexical paraphrases as well as syntactic paraphrases .
[ { "id": "P01-1008.1", "char_start": 7, "char_end": 19 }, { "id": "P01-1008.2", "char_start": 41, "char_end": 90 }, { "id": "P01-1008.3", "char_start": 157, "char_end": 168 }, { "id": "P01-1008.4", "char_start": 185, "char_end": 216 }, { "id": "P01-1008.5", "char_start": 221, "char_end": 250 }, { "id": "P01-1008.6", "char_start": 258, "char_end": 297 }, { "id": "P01-1008.7", "char_start": 310, "char_end": 321 }, { "id": "P01-1008.8", "char_start": 344, "char_end": 387 }, { "id": "P01-1008.9", "char_start": 399, "char_end": 420 } ]
[ { "label": 1, "arg1": "P01-1008.1", "arg2": "P01-1008.2", "reverse": false }, { "label": 1, "arg1": "P01-1008.4", "arg2": "P01-1008.5", "reverse": false } ]
P01-1009
Alternative Phrases and Natural Language Information Retrieval
This paper presents a formal analysis for a large class of words called alternative markers , which includes other (than) , such (as) , and besides . These words appear frequently enough in dialog to warrant serious attention , yet present natural language search engines perform poorly on queries containing them. I show that the performance of a search engine can be improved dramatically by incorporating an approximation of the formal analysis that is compatible with the search engine 's operational semantics . The value of this approach is that as the operational semantics of natural language applications improve, even larger improvements are possible.
[ { "id": "P01-1009.1", "char_start": 23, "char_end": 38 }, { "id": "P01-1009.2", "char_start": 60, "char_end": 65 }, { "id": "P01-1009.3", "char_start": 73, "char_end": 92 }, { "id": "P01-1009.4", "char_start": 110, "char_end": 122 }, { "id": "P01-1009.5", "char_start": 125, "char_end": 134 }, { "id": "P01-1009.6", "char_start": 141, "char_end": 148 }, { "id": "P01-1009.7", "char_start": 157, "char_end": 162 }, { "id": "P01-1009.8", "char_start": 191, "char_end": 197 }, { "id": "P01-1009.9", "char_start": 217, "char_end": 226 }, { "id": "P01-1009.10", "char_start": 241, "char_end": 272 }, { "id": "P01-1009.11", "char_start": 291, "char_end": 298 }, { "id": "P01-1009.12", "char_start": 332, "char_end": 343 }, { "id": "P01-1009.13", "char_start": 349, "char_end": 362 }, { "id": "P01-1009.14", "char_start": 433, "char_end": 448 }, { "id": "P01-1009.15", "char_start": 477, "char_end": 490 }, { "id": "P01-1009.16", "char_start": 494, "char_end": 515 }, { "id": "P01-1009.17", "char_start": 560, "char_end": 581 }, { "id": "P01-1009.18", "char_start": 585, "char_end": 614 } ]
[ { "label": 5, "arg1": "P01-1009.1", "arg2": "P01-1009.3", "reverse": false }, { "label": 2, "arg1": "P01-1009.12", "arg2": "P01-1009.14", "reverse": true }, { "label": 4, "arg1": "P01-1009.17", "arg2": "P01-1009.18", "reverse": false } ]
P01-1047
Extending Lambek grammars: a logical account of minimalist grammars
We provide a logical definition of Minimalist grammars , that are Stabler's formalization of Chomsky's minimalist program . Our logical definition leads to a neat relation to categorial grammar , (yielding a treatment of Montague semantics ), a parsing-as-deduction in a resource sensitive logic , and a learning algorithm from structured data (based on a typing-algorithm and type-unification ). Here we emphasize the connection to Montague semantics which can be viewed as a formal computation of the logical form .
[ { "id": "P01-1047.1", "char_start": 14, "char_end": 32 }, { "id": "P01-1047.2", "char_start": 36, "char_end": 55 }, { "id": "P01-1047.3", "char_start": 67, "char_end": 90 }, { "id": "P01-1047.4", "char_start": 94, "char_end": 122 }, { "id": "P01-1047.5", "char_start": 129, "char_end": 147 }, { "id": "P01-1047.6", "char_start": 176, "char_end": 194 }, { "id": "P01-1047.7", "char_start": 222, "char_end": 240 }, { "id": "P01-1047.8", "char_start": 246, "char_end": 266 }, { "id": "P01-1047.9", "char_start": 272, "char_end": 296 }, { "id": "P01-1047.10", "char_start": 305, "char_end": 323 }, { "id": "P01-1047.11", "char_start": 329, "char_end": 344 }, { "id": "P01-1047.12", "char_start": 357, "char_end": 373 }, { "id": "P01-1047.13", "char_start": 378, "char_end": 394 }, { "id": "P01-1047.14", "char_start": 434, "char_end": 452 }, { "id": "P01-1047.15", "char_start": 478, "char_end": 496 }, { "id": "P01-1047.16", "char_start": 504, "char_end": 516 } ]
[ { "label": 1, "arg1": "P01-1047.10", "arg2": "P01-1047.11", "reverse": true } ]
P01-1056
Evaluating a Trainable Sentence Planner for a Spoken Dialogue System
Techniques for automatically training modules of a natural language generator have recently been proposed, but a fundamental concern is whether the quality of utterances produced with trainable components can compete with hand-crafted template-based or rule-based approaches . In this paper We experimentally evaluate a trainable sentence planner for a spoken dialogue system by eliciting subjective human judgments . In order to perform an exhaustive comparison, we also evaluate a hand-crafted template-based generation component , two rule-based sentence planners , and two baseline sentence planners . We show that the trainable sentence planner performs better than the rule-based systems and the baselines , and as well as the hand-crafted system .
[ { "id": "P01-1056.1", "char_start": 1, "char_end": 38 }, { "id": "P01-1056.2", "char_start": 52, "char_end": 78 }, { "id": "P01-1056.3", "char_start": 149, "char_end": 156 }, { "id": "P01-1056.4", "char_start": 160, "char_end": 170 }, { "id": "P01-1056.5", "char_start": 185, "char_end": 205 }, { "id": "P01-1056.6", "char_start": 223, "char_end": 275 }, { "id": "P01-1056.7", "char_start": 321, "char_end": 347 }, { "id": "P01-1056.8", "char_start": 354, "char_end": 376 }, { "id": "P01-1056.9", "char_start": 390, "char_end": 416 }, { "id": "P01-1056.10", "char_start": 484, "char_end": 532 }, { "id": "P01-1056.11", "char_start": 539, "char_end": 567 }, { "id": "P01-1056.12", "char_start": 578, "char_end": 604 }, { "id": "P01-1056.13", "char_start": 624, "char_end": 650 }, { "id": "P01-1056.14", "char_start": 676, "char_end": 694 }, { "id": "P01-1056.15", "char_start": 703, "char_end": 712 }, { "id": "P01-1056.16", "char_start": 734, "char_end": 753 } ]
[ { "label": 3, "arg1": "P01-1056.3", "arg2": "P01-1056.4", "reverse": false }, { "label": 6, "arg1": "P01-1056.5", "arg2": "P01-1056.6", "reverse": false }, { "label": 1, "arg1": "P01-1056.7", "arg2": "P01-1056.8", "reverse": false }, { "label": 6, "arg1": "P01-1056.13", "arg2": "P01-1056.14", "reverse": false } ]
P01-1070
Using Machine Learning Techniques to Interpret WH-questions
We describe a set of supervised machine learning experiments centering on the construction of statistical models of WH-questions . These models , which are built from shallow linguistic features of questions , are employed to predict target variables which represent a user's informational goals . We report on different aspects of the predictive performance of our models , including the influence of various training and testing factors on predictive performance , and examine the relationships among the target variables.
[ { "id": "P01-1070.1", "char_start": 22, "char_end": 49 }, { "id": "P01-1070.2", "char_start": 95, "char_end": 113 }, { "id": "P01-1070.3", "char_start": 117, "char_end": 129 }, { "id": "P01-1070.4", "char_start": 138, "char_end": 144 }, { "id": "P01-1070.5", "char_start": 168, "char_end": 195 }, { "id": "P01-1070.6", "char_start": 199, "char_end": 208 }, { "id": "P01-1070.7", "char_start": 270, "char_end": 296 }, { "id": "P01-1070.8", "char_start": 337, "char_end": 359 }, { "id": "P01-1070.9", "char_start": 367, "char_end": 373 }, { "id": "P01-1070.10", "char_start": 411, "char_end": 439 }, { "id": "P01-1070.11", "char_start": 443, "char_end": 465 } ]
[ { "label": 3, "arg1": "P01-1070.2", "arg2": "P01-1070.3", "reverse": false }, { "label": 3, "arg1": "P01-1070.5", "arg2": "P01-1070.6", "reverse": false }, { "label": 2, "arg1": "P01-1070.8", "arg2": "P01-1070.9", "reverse": true }, { "label": 2, "arg1": "P01-1070.10", "arg2": "P01-1070.11", "reverse": false } ]
N03-1001
Effective Utterance Classification with Unsupervised Phonotactic Models
This paper describes a method for utterance classification that does not require manual transcription of training data . The method combines domain independent acoustic models with off-the-shelf classifiers to give utterance classification performance that is surprisingly close to what can be achieved using conventional word-trigram recognition requiring manual transcription . In our method, unsupervised training is first used to train a phone n-gram model for a particular domain ; the output of recognition with this model is then passed to a phone-string classifier . The classification accuracy of the method is evaluated on three different spoken language system domains .
[ { "id": "N03-1001.1", "char_start": 35, "char_end": 59 }, { "id": "N03-1001.2", "char_start": 82, "char_end": 102 }, { "id": "N03-1001.3", "char_start": 106, "char_end": 119 }, { "id": "N03-1001.4", "char_start": 142, "char_end": 176 }, { "id": "N03-1001.5", "char_start": 196, "char_end": 207 }, { "id": "N03-1001.6", "char_start": 216, "char_end": 252 }, { "id": "N03-1001.7", "char_start": 323, "char_end": 347 }, { "id": "N03-1001.8", "char_start": 358, "char_end": 378 }, { "id": "N03-1001.9", "char_start": 396, "char_end": 417 }, { "id": "N03-1001.10", "char_start": 443, "char_end": 461 }, { "id": "N03-1001.11", "char_start": 479, "char_end": 485 }, { "id": "N03-1001.12", "char_start": 492, "char_end": 498 }, { "id": "N03-1001.13", "char_start": 502, "char_end": 513 }, { "id": "N03-1001.14", "char_start": 524, "char_end": 529 }, { "id": "N03-1001.15", "char_start": 550, "char_end": 573 }, { "id": "N03-1001.16", "char_start": 580, "char_end": 603 }, { "id": "N03-1001.17", "char_start": 650, "char_end": 680 } ]
[ { "label": 1, "arg1": "N03-1001.1", "arg2": "N03-1001.2", "reverse": true }, { "label": 1, "arg1": "N03-1001.7", "arg2": "N03-1001.8", "reverse": true }, { "label": 1, "arg1": "N03-1001.9", "arg2": "N03-1001.10", "reverse": false }, { "label": 1, "arg1": "N03-1001.12", "arg2": "N03-1001.15", "reverse": true } ]
N03-1004
In Question Answering, Two Heads Are Better Than One
Motivated by the success of ensemble methods in machine learning and other areas of natural language processing , we developed a multi-strategy and multi-source approach to question answering which is based on combining the results from different answering agents searching for answers in multiple corpora . The answering agents adopt fundamentally different strategies, one utilizing primarily knowledge-based mechanisms and the other adopting statistical techniques . We present our multi-level answer resolution algorithm that combines results from the answering agents at the question, passage, and/or answer levels . Experiments evaluating the effectiveness of our answer resolution algorithm show a 35.0% relative improvement over our baseline system in the number of questions correctly answered , and a 32.8% improvement according to the average precision metric .
[ { "id": "N03-1004.1", "char_start": 29, "char_end": 45 }, { "id": "N03-1004.2", "char_start": 49, "char_end": 65 }, { "id": "N03-1004.3", "char_start": 85, "char_end": 112 }, { "id": "N03-1004.4", "char_start": 130, "char_end": 192 }, { "id": "N03-1004.5", "char_start": 248, "char_end": 264 }, { "id": "N03-1004.6", "char_start": 279, "char_end": 286 }, { "id": "N03-1004.7", "char_start": 299, "char_end": 306 }, { "id": "N03-1004.8", "char_start": 313, "char_end": 329 }, { "id": "N03-1004.9", "char_start": 396, "char_end": 422 }, { "id": "N03-1004.10", "char_start": 446, "char_end": 468 }, { "id": "N03-1004.11", "char_start": 486, "char_end": 525 }, { "id": "N03-1004.12", "char_start": 557, "char_end": 573 }, { "id": "N03-1004.13", "char_start": 581, "char_end": 620 }, { "id": "N03-1004.14", "char_start": 671, "char_end": 698 }, { "id": "N03-1004.15", "char_start": 742, "char_end": 757 }, { "id": "N03-1004.16", "char_start": 775, "char_end": 803 }, { "id": "N03-1004.17", "char_start": 847, "char_end": 871 } ]
[ { "label": 1, "arg1": "N03-1004.4", "arg2": "N03-1004.5", "reverse": true }, { "label": 4, "arg1": "N03-1004.6", "arg2": "N03-1004.7", "reverse": false }, { "label": 1, "arg1": "N03-1004.8", "arg2": "N03-1004.9", "reverse": true }, { "label": 6, "arg1": "N03-1004.14", "arg2": "N03-1004.15", "reverse": false } ]
N03-1012
Semantic Coherence Scoring Using an Ontology
In this paper we present ONTOSCORE , a system for scoring sets of concepts on the basis of an ontology . We apply our system to the task of scoring alternative speech recognition hypotheses (SRH) in terms of their semantic coherence . We conducted an annotation experiment and showed that human annotators can reliably differentiate between semantically coherent and incoherent speech recognition hypotheses . An evaluation of our system against the annotated data shows that, it successfully classifies 73.2% in a German corpus of 2.284 SRHs as either coherent or incoherent (given a baseline of 54.55%).
[ { "id": "N03-1012.1", "char_start": 26, "char_end": 35 }, { "id": "N03-1012.2", "char_start": 67, "char_end": 75 }, { "id": "N03-1012.3", "char_start": 95, "char_end": 103 }, { "id": "N03-1012.4", "char_start": 141, "char_end": 148 }, { "id": "N03-1012.5", "char_start": 161, "char_end": 196 }, { "id": "N03-1012.6", "char_start": 215, "char_end": 233 }, { "id": "N03-1012.7", "char_start": 252, "char_end": 273 }, { "id": "N03-1012.8", "char_start": 290, "char_end": 306 }, { "id": "N03-1012.9", "char_start": 379, "char_end": 408 }, { "id": "N03-1012.10", "char_start": 451, "char_end": 465 }, { "id": "N03-1012.11", "char_start": 516, "char_end": 529 }, { "id": "N03-1012.12", "char_start": 539, "char_end": 543 }, { "id": "N03-1012.13", "char_start": 586, "char_end": 594 } ]
[ { "label": 1, "arg1": "N03-1012.4", "arg2": "N03-1012.5", "reverse": false }, { "label": 4, "arg1": "N03-1012.11", "arg2": "N03-1012.12", "reverse": true } ]
N03-1017
Statistical Phrase-Based Translation
We propose a new phrase-based translation model and decoding algorithm that enables us to evaluate and compare several, previously proposed phrase-based translation models . Within our framework, we carry out a large number of experiments to understand better and explain why phrase-based models outperform word-based models . Our empirical results, which hold for all examined language pairs , suggest that the highest levels of performance can be obtained through relatively simple means: heuristic learning of phrase translations from word-based alignments and lexical weighting of phrase translations . Surprisingly, learning phrases longer than three words and learning phrases from high-accuracy word-level alignment models does not have a strong impact on performance. Learning only syntactically motivated phrases degrades the performance of our systems.
[ { "id": "N03-1017.1", "char_start": 18, "char_end": 48 }, { "id": "N03-1017.2", "char_start": 53, "char_end": 71 }, { "id": "N03-1017.3", "char_start": 141, "char_end": 172 }, { "id": "N03-1017.4", "char_start": 277, "char_end": 296 }, { "id": "N03-1017.5", "char_start": 308, "char_end": 325 }, { "id": "N03-1017.6", "char_start": 379, "char_end": 393 }, { "id": "N03-1017.7", "char_start": 492, "char_end": 510 }, { "id": "N03-1017.8", "char_start": 514, "char_end": 533 }, { "id": "N03-1017.9", "char_start": 539, "char_end": 560 }, { "id": "N03-1017.10", "char_start": 565, "char_end": 582 }, { "id": "N03-1017.11", "char_start": 586, "char_end": 605 }, { "id": "N03-1017.12", "char_start": 631, "char_end": 638 }, { "id": "N03-1017.13", "char_start": 657, "char_end": 662 }, { "id": "N03-1017.14", "char_start": 676, "char_end": 683 }, { "id": "N03-1017.15", "char_start": 689, "char_end": 730 }, { "id": "N03-1017.16", "char_start": 791, "char_end": 822 } ]
[ { "label": 6, "arg1": "N03-1017.4", "arg2": "N03-1017.5", "reverse": false }, { "label": 1, "arg1": "N03-1017.7", "arg2": "N03-1017.9", "reverse": true }, { "label": 3, "arg1": "N03-1017.10", "arg2": "N03-1017.11", "reverse": false }, { "label": 4, "arg1": "N03-1017.14", "arg2": "N03-1017.15", "reverse": false } ]
N03-1018
A Generative Probabilistic OCR Model for NLP Applications
In this paper, we introduce a generative probabilistic optical character recognition (OCR) model that describes an end-to-end process in the noisy channel framework , progressing from generation of true text through its transformation into the noisy output of an OCR system . The model is designed for use in error correction , with a focus on post-processing the output of black-box OCR systems in order to make it more useful for NLP tasks . We present an implementation of the model based on finite-state models , demonstrate the model 's ability to significantly reduce character and word error rate , and provide evaluation results involving automatic extraction of translation lexicons from printed text .
[ { "id": "N03-1018.1", "char_start": 31, "char_end": 97 }, { "id": "N03-1018.2", "char_start": 142, "char_end": 165 }, { "id": "N03-1018.3", "char_start": 199, "char_end": 208 }, { "id": "N03-1018.4", "char_start": 245, "char_end": 257 }, { "id": "N03-1018.5", "char_start": 264, "char_end": 274 }, { "id": "N03-1018.6", "char_start": 281, "char_end": 286 }, { "id": "N03-1018.7", "char_start": 310, "char_end": 326 }, { "id": "N03-1018.8", "char_start": 345, "char_end": 360 }, { "id": "N03-1018.9", "char_start": 365, "char_end": 371 }, { "id": "N03-1018.10", "char_start": 385, "char_end": 396 }, { "id": "N03-1018.11", "char_start": 433, "char_end": 442 }, { "id": "N03-1018.12", "char_start": 481, "char_end": 486 }, { "id": "N03-1018.13", "char_start": 496, "char_end": 515 }, { "id": "N03-1018.14", "char_start": 534, "char_end": 539 }, { "id": "N03-1018.15", "char_start": 575, "char_end": 604 }, { "id": "N03-1018.16", "char_start": 648, "char_end": 668 }, { "id": "N03-1018.17", "char_start": 672, "char_end": 692 }, { "id": "N03-1018.18", "char_start": 698, "char_end": 710 } ]
[ { "label": 1, "arg1": "N03-1018.6", "arg2": "N03-1018.7", "reverse": false }, { "label": 1, "arg1": "N03-1018.8", "arg2": "N03-1018.9", "reverse": false }, { "label": 1, "arg1": "N03-1018.12", "arg2": "N03-1018.13", "reverse": true }, { "label": 2, "arg1": "N03-1018.14", "arg2": "N03-1018.15", "reverse": false }, { "label": 1, "arg1": "N03-1018.16", "arg2": "N03-1018.18", "reverse": false } ]
N03-1026
Statistical Sentence Condensation using Ambiguity Packing and Stochastic Disambiguation Methods for Lexical-Functional Grammar
We present an application of ambiguity packing and stochastic disambiguation techniques for Lexical-Functional Grammars (LFG) to the domain of sentence condensation . Our system incorporates a linguistic parser/generator for LFG , a transfer component for parse reduction operating on packed parse forests , and a maximum-entropy model for stochastic output selection . Furthermore, we propose the use of standard parser evaluation methods for automatically evaluating the summarization quality of sentence condensation systems . An experimental evaluation of summarization quality shows a close correlation between the automatic parse-based evaluation and a manual evaluation of generated strings . Overall summarization quality of the proposed system is state-of-the-art, with guaranteed grammaticality of the system output due to the use of a constraint-based parser/generator .
[ { "id": "N03-1026.1", "char_start": 30, "char_end": 88 }, { "id": "N03-1026.2", "char_start": 93, "char_end": 126 }, { "id": "N03-1026.3", "char_start": 144, "char_end": 165 }, { "id": "N03-1026.4", "char_start": 194, "char_end": 221 }, { "id": "N03-1026.5", "char_start": 226, "char_end": 229 }, { "id": "N03-1026.6", "char_start": 234, "char_end": 252 }, { "id": "N03-1026.7", "char_start": 257, "char_end": 272 }, { "id": "N03-1026.8", "char_start": 286, "char_end": 306 }, { "id": "N03-1026.9", "char_start": 315, "char_end": 336 }, { "id": "N03-1026.10", "char_start": 341, "char_end": 368 }, { "id": "N03-1026.11", "char_start": 415, "char_end": 440 }, { "id": "N03-1026.12", "char_start": 474, "char_end": 487 }, { "id": "N03-1026.13", "char_start": 499, "char_end": 528 }, { "id": "N03-1026.14", "char_start": 534, "char_end": 557 }, { "id": "N03-1026.15", "char_start": 561, "char_end": 574 }, { "id": "N03-1026.16", "char_start": 621, "char_end": 653 }, { "id": "N03-1026.17", "char_start": 660, "char_end": 677 }, { "id": "N03-1026.18", "char_start": 691, "char_end": 698 }, { "id": "N03-1026.19", "char_start": 709, "char_end": 722 }, { "id": "N03-1026.20", "char_start": 791, "char_end": 805 }, { "id": "N03-1026.21", "char_start": 813, "char_end": 826 }, { "id": "N03-1026.22", "char_start": 847, "char_end": 880 } ]
[ { "label": 1, "arg1": "N03-1026.1", "arg2": "N03-1026.2", "reverse": false }, { "label": 1, "arg1": "N03-1026.4", "arg2": "N03-1026.5", "reverse": false }, { "label": 1, "arg1": "N03-1026.6", "arg2": "N03-1026.7", "reverse": false }, { "label": 1, "arg1": "N03-1026.9", "arg2": "N03-1026.10", "reverse": true }, { "label": 5, "arg1": "N03-1026.14", "arg2": "N03-1026.15", "reverse": false }, { "label": 3, "arg1": "N03-1026.20", "arg2": "N03-1026.21", "reverse": false } ]
N03-1033
Feature-Rich Part-of-Speech Tagging with a Cyclic Dependency Network
We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation , (ii) broad use of lexical features , including jointly conditioning on multiple consecutive words , (iii) effective use of priors in conditional loglinear models , and (iv) fine-grained modeling of unknown word features . Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ , an error reduction of 4.4% on the best previous single automatically learned tagging result.
[ { "id": "N03-1033.1", "char_start": 18, "char_end": 39 }, { "id": "N03-1033.2", "char_start": 128, "char_end": 140 }, { "id": "N03-1033.3", "char_start": 147, "char_end": 180 }, { "id": "N03-1033.4", "char_start": 201, "char_end": 217 }, { "id": "N03-1033.5", "char_start": 230, "char_end": 280 }, { "id": "N03-1033.6", "char_start": 306, "char_end": 312 }, { "id": "N03-1033.7", "char_start": 316, "char_end": 344 }, { "id": "N03-1033.8", "char_start": 381, "char_end": 402 }, { "id": "N03-1033.9", "char_start": 447, "char_end": 453 }, { "id": "N03-1033.10", "char_start": 469, "char_end": 477 }, { "id": "N03-1033.11", "char_start": 485, "char_end": 502 }, { "id": "N03-1033.12", "char_start": 508, "char_end": 523 }, { "id": "N03-1033.13", "char_start": 582, "char_end": 589 } ]
[ { "label": 1, "arg1": "N03-1033.6", "arg2": "N03-1033.7", "reverse": false }, { "label": 2, "arg1": "N03-1033.9", "arg2": "N03-1033.10", "reverse": false } ]
N03-2003
Getting More Mileage from Web Text Sources for Conversational Speech Language Modeling using Class-Dependent Mixtures
Sources of training data suitable for language modeling of conversational speech are limited. In this paper, we show how training data can be supplemented with text from the web filtered to match the style and/or topic of the target recognition task , but also that it is possible to get bigger performance gains from the data by using class-dependent interpolation of N-grams .
[ { "id": "N03-2003.1", "char_start": 12, "char_end": 25 }, { "id": "N03-2003.2", "char_start": 39, "char_end": 56 }, { "id": "N03-2003.3", "char_start": 60, "char_end": 81 }, { "id": "N03-2003.4", "char_start": 122, "char_end": 135 }, { "id": "N03-2003.5", "char_start": 161, "char_end": 165 }, { "id": "N03-2003.6", "char_start": 175, "char_end": 178 }, { "id": "N03-2003.7", "char_start": 201, "char_end": 206 }, { "id": "N03-2003.8", "char_start": 214, "char_end": 219 }, { "id": "N03-2003.9", "char_start": 234, "char_end": 250 }, { "id": "N03-2003.10", "char_start": 323, "char_end": 327 }, { "id": "N03-2003.11", "char_start": 337, "char_end": 366 }, { "id": "N03-2003.12", "char_start": 370, "char_end": 377 } ]
[ { "label": 1, "arg1": "N03-2003.1", "arg2": "N03-2003.2", "reverse": false }, { "label": 4, "arg1": "N03-2003.5", "arg2": "N03-2003.6", "reverse": false } ]
N03-2006
Adaptation Using Out-of-Domain Corpus within EBMT
In order to boost the translation quality of EBMT based on a small-sized bilingual corpus , we use an out-of-domain bilingual corpus and, in addition, the language model of an in-domain monolingual corpus . We conducted experiments with an EBMT system . The two evaluation measures of the BLEU score and the NIST score demonstrated the effect of using an out-of-domain bilingual corpus and the possibility of using the language model .
[ { "id": "N03-2006.1", "char_start": 23, "char_end": 42 }, { "id": "N03-2006.2", "char_start": 46, "char_end": 50 }, { "id": "N03-2006.3", "char_start": 74, "char_end": 90 }, { "id": "N03-2006.4", "char_start": 117, "char_end": 133 }, { "id": "N03-2006.5", "char_start": 156, "char_end": 170 }, { "id": "N03-2006.6", "char_start": 187, "char_end": 205 }, { "id": "N03-2006.7", "char_start": 241, "char_end": 252 }, { "id": "N03-2006.8", "char_start": 263, "char_end": 282 }, { "id": "N03-2006.9", "char_start": 290, "char_end": 300 }, { "id": "N03-2006.10", "char_start": 309, "char_end": 319 }, { "id": "N03-2006.11", "char_start": 370, "char_end": 386 }, { "id": "N03-2006.12", "char_start": 420, "char_end": 434 } ]
[ { "label": 2, "arg1": "N03-2006.1", "arg2": "N03-2006.2", "reverse": true } ]
N03-2015
Unsupervised Learning of Morphology for English and Inuktitut
We describe a simple unsupervised technique for learning morphology by identifying hubs in an automaton . For our purposes, a hub is a node in a graph with in-degree greater than one and out-degree greater than one. We create a word-trie , transform it into a minimal DFA , then identify hubs . Those hubs mark the boundary between root and suffix , achieving similar performance to more complex mixtures of techniques.
[ { "id": "N03-2015.1", "char_start": 22, "char_end": 44 }, { "id": "N03-2015.2", "char_start": 58, "char_end": 68 }, { "id": "N03-2015.3", "char_start": 84, "char_end": 88 }, { "id": "N03-2015.4", "char_start": 95, "char_end": 104 }, { "id": "N03-2015.5", "char_start": 127, "char_end": 130 }, { "id": "N03-2015.6", "char_start": 136, "char_end": 140 }, { "id": "N03-2015.7", "char_start": 146, "char_end": 151 }, { "id": "N03-2015.8", "char_start": 157, "char_end": 166 }, { "id": "N03-2015.9", "char_start": 188, "char_end": 198 }, { "id": "N03-2015.10", "char_start": 229, "char_end": 238 }, { "id": "N03-2015.11", "char_start": 261, "char_end": 272 }, { "id": "N03-2015.12", "char_start": 289, "char_end": 293 }, { "id": "N03-2015.13", "char_start": 302, "char_end": 306 }, { "id": "N03-2015.14", "char_start": 333, "char_end": 337 }, { "id": "N03-2015.15", "char_start": 342, "char_end": 348 }, { "id": "N03-2015.16", "char_start": 369, "char_end": 380 } ]
[ { "label": 4, "arg1": "N03-2015.3", "arg2": "N03-2015.4", "reverse": false } ]
N03-2017
Word Alignment with Cohesion Constraint
We present a syntax-based constraint for word alignment , known as the cohesion constraint . It requires disjoint English phrases to be mapped to non-overlapping intervals in the French sentence . We evaluate the utility of this constraint in two different algorithms. The results show that it can provide a significant improvement in alignment quality .
[ { "id": "N03-2017.1", "char_start": 14, "char_end": 37 }, { "id": "N03-2017.2", "char_start": 42, "char_end": 56 }, { "id": "N03-2017.3", "char_start": 72, "char_end": 91 }, { "id": "N03-2017.4", "char_start": 115, "char_end": 130 }, { "id": "N03-2017.5", "char_start": 180, "char_end": 195 }, { "id": "N03-2017.6", "char_start": 230, "char_end": 240 }, { "id": "N03-2017.7", "char_start": 336, "char_end": 353 } ]
[ { "label": 1, "arg1": "N03-2017.1", "arg2": "N03-2017.2", "reverse": false } ]
N03-2025
Bootstrapping for Named Entity Tagging Using Concept-based Seeds
A novel bootstrapping approach to Named Entity (NE) tagging using concept-based seeds and successive learners is presented. This approach only requires a few common noun or pronoun seeds that correspond to the concept for the targeted NE , e.g. he/she/man/woman for PERSON NE . The bootstrapping procedure is implemented as training two successive learners . First, decision list is used to learn the parsing-based NE rules . Then, a Hidden Markov Model is trained on a corpus automatically tagged by the first learner . The resulting NE system approaches supervised NE performance for some NE types .
[ { "id": "N03-2025.1", "char_start": 9, "char_end": 31 }, { "id": "N03-2025.2", "char_start": 35, "char_end": 60 }, { "id": "N03-2025.3", "char_start": 67, "char_end": 86 }, { "id": "N03-2025.4", "char_start": 91, "char_end": 110 }, { "id": "N03-2025.5", "char_start": 159, "char_end": 170 }, { "id": "N03-2025.6", "char_start": 174, "char_end": 181 }, { "id": "N03-2025.7", "char_start": 182, "char_end": 187 }, { "id": "N03-2025.8", "char_start": 211, "char_end": 218 }, { "id": "N03-2025.9", "char_start": 236, "char_end": 238 }, { "id": "N03-2025.10", "char_start": 267, "char_end": 276 }, { "id": "N03-2025.11", "char_start": 283, "char_end": 306 }, { "id": "N03-2025.12", "char_start": 338, "char_end": 357 }, { "id": "N03-2025.13", "char_start": 367, "char_end": 380 }, { "id": "N03-2025.14", "char_start": 402, "char_end": 424 }, { "id": "N03-2025.15", "char_start": 435, "char_end": 454 }, { "id": "N03-2025.16", "char_start": 471, "char_end": 477 }, { "id": "N03-2025.17", "char_start": 512, "char_end": 519 }, { "id": "N03-2025.18", "char_start": 536, "char_end": 545 }, { "id": "N03-2025.19", "char_start": 557, "char_end": 570 }, { "id": "N03-2025.20", "char_start": 592, "char_end": 600 } ]
[ { "label": 1, "arg1": "N03-2025.1", "arg2": "N03-2025.2", "reverse": false }, { "label": 3, "arg1": "N03-2025.7", "arg2": "N03-2025.8", "reverse": false }, { "label": 1, "arg1": "N03-2025.13", "arg2": "N03-2025.14", "reverse": true }, { "label": 1, "arg1": "N03-2025.15", "arg2": "N03-2025.16", "reverse": false } ]
N03-2036
A Phrase-Based Unigram Model for Statistical Machine Translation
In this paper, we describe a phrase-based unigram model for statistical machine translation that uses a much simpler set of model parameters than similar phrase-based models . The units of translation are blocks - pairs of phrases . During decoding , we use a block unigram model and a word-based trigram language model . During training , the blocks are learned from source interval projections using an underlying word alignment . We show experimental results on block selection criteria based on unigram counts and phrase length.
[ { "id": "N03-2036.1", "char_start": 30, "char_end": 56 }, { "id": "N03-2036.2", "char_start": 61, "char_end": 92 }, { "id": "N03-2036.3", "char_start": 125, "char_end": 141 }, { "id": "N03-2036.4", "char_start": 155, "char_end": 174 }, { "id": "N03-2036.5", "char_start": 181, "char_end": 201 }, { "id": "N03-2036.6", "char_start": 206, "char_end": 212 }, { "id": "N03-2036.7", "char_start": 224, "char_end": 231 }, { "id": "N03-2036.8", "char_start": 241, "char_end": 249 }, { "id": "N03-2036.9", "char_start": 261, "char_end": 280 }, { "id": "N03-2036.10", "char_start": 287, "char_end": 320 }, { "id": "N03-2036.11", "char_start": 330, "char_end": 338 }, { "id": "N03-2036.12", "char_start": 345, "char_end": 351 }, { "id": "N03-2036.13", "char_start": 369, "char_end": 396 }, { "id": "N03-2036.14", "char_start": 417, "char_end": 431 }, { "id": "N03-2036.15", "char_start": 466, "char_end": 490 }, { "id": "N03-2036.16", "char_start": 500, "char_end": 507 }, { "id": "N03-2036.17", "char_start": 519, "char_end": 525 } ]
[ { "label": 1, "arg1": "N03-2036.1", "arg2": "N03-2036.2", "reverse": false }, { "label": 4, "arg1": "N03-2036.12", "arg2": "N03-2036.13", "reverse": false }, { "label": 1, "arg1": "N03-2036.15", "arg2": "N03-2036.16", "reverse": true } ]
N03-3010
Cooperative Model Based Language Understanding in Dialogue
In this paper, we propose a novel Cooperative Model for natural language understanding in a dialogue system . We build this based on both Finite State Model (FSM) and Statistical Learning Model (SLM) . FSM provides two strategies for language understanding and have a high accuracy but little robustness and flexibility. Statistical approach is much more robust but less accurate. Cooperative Model incorporates all the three strategies together and thus can suppress all the shortcomings of different strategies and has all the advantages of the three strategies.
[ { "id": "N03-3010.1", "char_start": 35, "char_end": 52 }, { "id": "N03-3010.2", "char_start": 57, "char_end": 87 }, { "id": "N03-3010.3", "char_start": 93, "char_end": 108 }, { "id": "N03-3010.4", "char_start": 139, "char_end": 163 }, { "id": "N03-3010.5", "char_start": 168, "char_end": 200 }, { "id": "N03-3010.6", "char_start": 203, "char_end": 206 }, { "id": "N03-3010.7", "char_start": 235, "char_end": 257 }, { "id": "N03-3010.8", "char_start": 322, "char_end": 342 }, { "id": "N03-3010.9", "char_start": 382, "char_end": 399 } ]
[ { "label": 1, "arg1": "N03-3010.1", "arg2": "N03-3010.2", "reverse": false } ]
N03-4010
JAVELIN: A Flexible, Planner-Based Architecture for Question Answering
The JAVELIN system integrates a flexible, planning-based architecture with a variety of language processing modules to provide an open-domain question answering capability on free text . The demonstration will focus on how JAVELIN processes questions and retrieves the most likely answer candidates from the given text corpus . The operation of the system will be explained in depth through browsing the repository of data objects created by the system during each question answering session .
[ { "id": "N03-4010.1", "char_start": 5, "char_end": 19 }, { "id": "N03-4010.2", "char_start": 43, "char_end": 70 }, { "id": "N03-4010.3", "char_start": 89, "char_end": 116 }, { "id": "N03-4010.4", "char_start": 131, "char_end": 172 }, { "id": "N03-4010.5", "char_start": 176, "char_end": 185 }, { "id": "N03-4010.6", "char_start": 224, "char_end": 231 }, { "id": "N03-4010.7", "char_start": 242, "char_end": 251 }, { "id": "N03-4010.8", "char_start": 282, "char_end": 299 }, { "id": "N03-4010.9", "char_start": 315, "char_end": 326 }, { "id": "N03-4010.10", "char_start": 405, "char_end": 415 }, { "id": "N03-4010.11", "char_start": 419, "char_end": 431 }, { "id": "N03-4010.12", "char_start": 466, "char_end": 492 } ]
[ { "label": 4, "arg1": "N03-4010.1", "arg2": "N03-4010.2", "reverse": true }, { "label": 1, "arg1": "N03-4010.6", "arg2": "N03-4010.7", "reverse": false }, { "label": 4, "arg1": "N03-4010.8", "arg2": "N03-4010.9", "reverse": false }, { "label": 4, "arg1": "N03-4010.10", "arg2": "N03-4010.11", "reverse": true } ]
P03-1002
Using Predicate-Argument Structures for Information Extraction
In this paper we present a novel, customizable : IE paradigm that takes advantage of predicate-argument structures . We also introduce a new way of automatically identifying predicate argument structures , which is central to our IE paradigm . It is based on: (1) an extended set of features ; and (2) inductive decision tree learning . The experimental results prove our claim that accurate predicate-argument structures enable high quality IE results.
[ { "id": "P03-1002.1", "char_start": 50, "char_end": 61 }, { "id": "P03-1002.2", "char_start": 86, "char_end": 115 }, { "id": "P03-1002.3", "char_start": 175, "char_end": 204 }, { "id": "P03-1002.4", "char_start": 231, "char_end": 242 }, { "id": "P03-1002.5", "char_start": 284, "char_end": 292 }, { "id": "P03-1002.6", "char_start": 303, "char_end": 335 }, { "id": "P03-1002.7", "char_start": 393, "char_end": 422 }, { "id": "P03-1002.8", "char_start": 443, "char_end": 445 } ]
[ { "label": 1, "arg1": "P03-1002.1", "arg2": "P03-1002.2", "reverse": true } ]
P03-1005
Hierarchical Directed Acyclic Graph Kernel: Methods for Structured Natural Language Data
This paper proposes the Hierarchical Directed Acyclic Graph (HDAG) Kernel for structured natural language data . The HDAG Kernel directly accepts several levels of both chunks and their relations , and then efficiently computes the weighed sum of the number of common attribute sequences of the HDAGs . We applied the proposed method to question classification and sentence alignment tasks to evaluate its performance as a similarity measure and a kernel function . The results of the experiments demonstrate that the HDAG Kernel is superior to other kernel functions and baseline methods .
[ { "id": "P03-1005.1", "char_start": 25, "char_end": 74 }, { "id": "P03-1005.2", "char_start": 79, "char_end": 111 }, { "id": "P03-1005.3", "char_start": 118, "char_end": 129 }, { "id": "P03-1005.4", "char_start": 170, "char_end": 176 }, { "id": "P03-1005.5", "char_start": 187, "char_end": 196 }, { "id": "P03-1005.6", "char_start": 233, "char_end": 244 }, { "id": "P03-1005.7", "char_start": 269, "char_end": 288 }, { "id": "P03-1005.8", "char_start": 296, "char_end": 301 }, { "id": "P03-1005.9", "char_start": 338, "char_end": 361 }, { "id": "P03-1005.10", "char_start": 366, "char_end": 390 }, { "id": "P03-1005.11", "char_start": 424, "char_end": 442 }, { "id": "P03-1005.12", "char_start": 449, "char_end": 464 }, { "id": "P03-1005.13", "char_start": 519, "char_end": 530 }, { "id": "P03-1005.14", "char_start": 552, "char_end": 568 }, { "id": "P03-1005.15", "char_start": 573, "char_end": 589 } ]
[ { "label": 1, "arg1": "P03-1005.1", "arg2": "P03-1005.2", "reverse": false }, { "label": 3, "arg1": "P03-1005.7", "arg2": "P03-1005.8", "reverse": false }, { "label": 6, "arg1": "P03-1005.13", "arg2": "P03-1005.14", "reverse": false } ]
P03-1009
Clustering Polysemic Subcategorization Frame Distributions Semantically
Previous research has demonstrated the utility of clustering in inducing semantic verb classes from undisambiguated corpus data . We describe a new approach which involves clustering subcategorization frame (SCF) distributions using the Information Bottleneck and nearest neighbour methods. In contrast to previous work, we particularly focus on clustering polysemic verbs . A novel evaluation scheme is proposed which accounts for the effect of polysemy on the clusters , offering us a good insight into the potential and limitations of semantically classifying undisambiguated SCF data .
[ { "id": "P03-1009.1", "char_start": 51, "char_end": 61 }, { "id": "P03-1009.2", "char_start": 74, "char_end": 95 }, { "id": "P03-1009.3", "char_start": 117, "char_end": 128 }, { "id": "P03-1009.4", "char_start": 184, "char_end": 213 }, { "id": "P03-1009.5", "char_start": 238, "char_end": 260 }, { "id": "P03-1009.6", "char_start": 265, "char_end": 282 }, { "id": "P03-1009.7", "char_start": 358, "char_end": 373 }, { "id": "P03-1009.8", "char_start": 384, "char_end": 401 }, { "id": "P03-1009.9", "char_start": 447, "char_end": 455 }, { "id": "P03-1009.10", "char_start": 463, "char_end": 471 }, { "id": "P03-1009.11", "char_start": 539, "char_end": 563 }, { "id": "P03-1009.12", "char_start": 564, "char_end": 588 } ]
[ { "label": 3, "arg1": "P03-1009.2", "arg2": "P03-1009.3", "reverse": false }, { "label": 1, "arg1": "P03-1009.4", "arg2": "P03-1009.5", "reverse": true }, { "label": 3, "arg1": "P03-1009.9", "arg2": "P03-1009.10", "reverse": false } ]
P03-1022
A Machine Learning Approach to Pronoun Resolution in Spoken Dialogue
We apply a decision tree based approach to pronoun resolution in spoken dialogue . Our system deals with pronouns with NP- and non-NP-antecedents . We present a set of features designed for pronoun resolution in spoken dialogue and determine the most promising features . We evaluate the system on twenty Switchboard dialogues and show that it compares well to Byron's (2002) manually tuned system .
[ { "id": "P03-1022.1", "char_start": 12, "char_end": 40 }, { "id": "P03-1022.2", "char_start": 44, "char_end": 62 }, { "id": "P03-1022.3", "char_start": 66, "char_end": 81 }, { "id": "P03-1022.4", "char_start": 106, "char_end": 114 }, { "id": "P03-1022.5", "char_start": 120, "char_end": 146 }, { "id": "P03-1022.6", "char_start": 169, "char_end": 177 }, { "id": "P03-1022.7", "char_start": 191, "char_end": 209 }, { "id": "P03-1022.8", "char_start": 213, "char_end": 228 }, { "id": "P03-1022.9", "char_start": 262, "char_end": 270 }, { "id": "P03-1022.10", "char_start": 306, "char_end": 327 }, { "id": "P03-1022.11", "char_start": 362, "char_end": 398 } ]
[ { "label": 1, "arg1": "P03-1022.1", "arg2": "P03-1022.2", "reverse": false }, { "label": 3, "arg1": "P03-1022.4", "arg2": "P03-1022.5", "reverse": true }, { "label": 1, "arg1": "P03-1022.7", "arg2": "P03-1022.8", "reverse": false } ]
P03-1030
Optimizing Story Link Detection is not Equivalent to Optimizing New Event Detection
Link detection has been regarded as a core technology for the Topic Detection and Tracking tasks of new event detection . In this paper we formulate story link detection and new event detection as information retrieval task and hypothesize on the impact of precision and recall on both systems. Motivated by these arguments, we introduce a number of new performance enhancing techniques including part of speech tagging , new similarity measures and expanded stop lists . Experimental results validate our hypothesis.
[ { "id": "P03-1030.1", "char_start": 1, "char_end": 15 }, { "id": "P03-1030.2", "char_start": 63, "char_end": 97 }, { "id": "P03-1030.3", "char_start": 101, "char_end": 120 }, { "id": "P03-1030.4", "char_start": 150, "char_end": 170 }, { "id": "P03-1030.5", "char_start": 175, "char_end": 194 }, { "id": "P03-1030.6", "char_start": 198, "char_end": 224 }, { "id": "P03-1030.7", "char_start": 258, "char_end": 267 }, { "id": "P03-1030.8", "char_start": 272, "char_end": 278 }, { "id": "P03-1030.9", "char_start": 398, "char_end": 420 }, { "id": "P03-1030.10", "char_start": 427, "char_end": 446 }, { "id": "P03-1030.11", "char_start": 460, "char_end": 470 } ]
[ { "label": 4, "arg1": "P03-1030.2", "arg2": "P03-1030.3", "reverse": false } ]
P03-1031
Corpus-based Discourse Understanding in Spoken Dialogue Systems
This paper concerns the discourse understanding process in spoken dialogue systems . This process enables the system to understand user utterances based on the context of a dialogue . Since multiple candidates for the understanding result can be obtained for a user utterance due to the ambiguity of speech understanding , it is not appropriate to decide on a single understanding result after each user utterance . By holding multiple candidates for understanding results and resolving the ambiguity as the dialogue progresses, the discourse understanding accuracy can be improved. This paper proposes a method for resolving this ambiguity based on statistical information obtained from dialogue corpora . Unlike conventional methods that use hand-crafted rules , the proposed method enables easy design of the discourse understanding process . Experiment results have shown that a system that exploits the proposed method performs sufficiently and that holding multiple candidates for understanding results is effective.
[ { "id": "P03-1031.1", "char_start": 25, "char_end": 56 }, { "id": "P03-1031.2", "char_start": 60, "char_end": 83 }, { "id": "P03-1031.3", "char_start": 132, "char_end": 147 }, { "id": "P03-1031.4", "char_start": 161, "char_end": 168 }, { "id": "P03-1031.5", "char_start": 174, "char_end": 182 }, { "id": "P03-1031.6", "char_start": 200, "char_end": 210 }, { "id": "P03-1031.7", "char_start": 219, "char_end": 232 }, { "id": "P03-1031.8", "char_start": 262, "char_end": 276 }, { "id": "P03-1031.9", "char_start": 288, "char_end": 297 }, { "id": "P03-1031.10", "char_start": 301, "char_end": 321 }, { "id": "P03-1031.11", "char_start": 368, "char_end": 381 }, { "id": "P03-1031.12", "char_start": 400, "char_end": 414 }, { "id": "P03-1031.13", "char_start": 437, "char_end": 447 }, { "id": "P03-1031.14", "char_start": 452, "char_end": 465 }, { "id": "P03-1031.15", "char_start": 492, "char_end": 501 }, { "id": "P03-1031.16", "char_start": 509, "char_end": 517 }, { "id": "P03-1031.17", "char_start": 534, "char_end": 566 }, { "id": "P03-1031.18", "char_start": 632, "char_end": 641 }, { "id": "P03-1031.19", "char_start": 651, "char_end": 674 }, { "id": "P03-1031.20", "char_start": 689, "char_end": 705 }, { "id": "P03-1031.21", "char_start": 745, "char_end": 763 }, { "id": "P03-1031.22", "char_start": 813, "char_end": 844 }, { "id": "P03-1031.23", "char_start": 973, "char_end": 983 }, { "id": "P03-1031.24", "char_start": 988, "char_end": 1001 } ]
[ { "label": 1, "arg1": "P03-1031.1", "arg2": "P03-1031.2", "reverse": false }, { "label": 3, "arg1": "P03-1031.4", "arg2": "P03-1031.5", "reverse": false }, { "label": 1, "arg1": "P03-1031.6", "arg2": "P03-1031.7", "reverse": false }, { "label": 3, "arg1": "P03-1031.9", "arg2": "P03-1031.10", "reverse": false }, { "label": 1, "arg1": "P03-1031.13", "arg2": "P03-1031.14", "reverse": false }, { "label": 3, "arg1": "P03-1031.19", "arg2": "P03-1031.20", "reverse": false }, { "label": 1, "arg1": "P03-1031.23", "arg2": "P03-1031.24", "reverse": false } ]
P03-1033
Flexible Guidance Generation using User Model in Spoken Dialogue Systems
We address appropriate user modeling in order to generate cooperative responses to each user in spoken dialogue systems . Unlike previous studies that focus on user 's knowledge or typical kinds of users , the user model we propose is more comprehensive. Specifically, we set up three dimensions of user models : skill level to the system, knowledge level on the target domain and the degree of hastiness . Moreover, the models are automatically derived by decision tree learning using real dialogue data collected by the system. We obtained reasonable classification accuracy for all dimensions. Dialogue strategies based on the user modeling are implemented in Kyoto city bus information system that has been developed at our laboratory. Experimental evaluation shows that the cooperative responses adaptive to individual users serve as good guidance for novice users without increasing the dialogue duration for skilled users .
[ { "id": "P03-1033.1", "char_start": 24, "char_end": 37 }, { "id": "P03-1033.2", "char_start": 59, "char_end": 80 }, { "id": "P03-1033.3", "char_start": 89, "char_end": 93 }, { "id": "P03-1033.4", "char_start": 97, "char_end": 120 }, { "id": "P03-1033.5", "char_start": 161, "char_end": 165 }, { "id": "P03-1033.6", "char_start": 169, "char_end": 178 }, { "id": "P03-1033.7", "char_start": 199, "char_end": 204 }, { "id": "P03-1033.8", "char_start": 211, "char_end": 221 }, { "id": "P03-1033.9", "char_start": 300, "char_end": 311 }, { "id": "P03-1033.10", "char_start": 314, "char_end": 325 }, { "id": "P03-1033.11", "char_start": 341, "char_end": 356 }, { "id": "P03-1033.12", "char_start": 364, "char_end": 377 }, { "id": "P03-1033.13", "char_start": 396, "char_end": 405 }, { "id": "P03-1033.14", "char_start": 422, "char_end": 428 }, { "id": "P03-1033.15", "char_start": 458, "char_end": 480 }, { "id": "P03-1033.16", "char_start": 492, "char_end": 505 }, { "id": "P03-1033.17", "char_start": 554, "char_end": 577 }, { "id": "P03-1033.18", "char_start": 598, "char_end": 617 }, { "id": "P03-1033.19", "char_start": 631, "char_end": 644 }, { "id": "P03-1033.20", "char_start": 664, "char_end": 697 }, { "id": "P03-1033.21", "char_start": 780, "char_end": 801 }, { "id": "P03-1033.22", "char_start": 814, "char_end": 830 }, { "id": "P03-1033.23", "char_start": 858, "char_end": 870 }, { "id": "P03-1033.24", "char_start": 894, "char_end": 911 }, { "id": "P03-1033.25", "char_start": 916, "char_end": 929 } ]
[ { "label": 1, "arg1": "P03-1033.1", "arg2": "P03-1033.2", "reverse": false }, { "label": 3, "arg1": "P03-1033.14", "arg2": "P03-1033.15", "reverse": true }, { "label": 1, "arg1": "P03-1033.18", "arg2": "P03-1033.19", "reverse": true } ]
P03-1050
Unsupervised Learning of Arabic Stemming using a Parallel Corpus
This paper presents an unsupervised learning approach to building a non-English (Arabic) stemmer . The stemming model is based on statistical machine translation and it uses an English stemmer and a small (10K sentences) parallel corpus as its sole training resources . No parallel text is needed after the training phase . Monolingual, unannotated text can be used to further improve the stemmer by allowing it to adapt to a desired domain or genre . Examples and results will be given for Arabic , but the approach is applicable to any language that needs affix removal . Our resource-frugal approach results in 87.5% agreement with a state of the art, proprietary Arabic stemmer built using rules , affix lists , and human annotated text , in addition to an unsupervised component . Task-based evaluation using Arabic information retrieval indicates an improvement of 22-38% in average precision over unstemmed text , and 96% of the performance of the proprietary stemmer above.
[ { "id": "P03-1050.1", "char_start": 24, "char_end": 54 }, { "id": "P03-1050.2", "char_start": 69, "char_end": 97 }, { "id": "P03-1050.3", "char_start": 104, "char_end": 118 }, { "id": "P03-1050.4", "char_start": 131, "char_end": 162 }, { "id": "P03-1050.5", "char_start": 178, "char_end": 193 }, { "id": "P03-1050.6", "char_start": 222, "char_end": 237 }, { "id": "P03-1050.7", "char_start": 250, "char_end": 268 }, { "id": "P03-1050.8", "char_start": 274, "char_end": 287 }, { "id": "P03-1050.9", "char_start": 308, "char_end": 322 }, { "id": "P03-1050.10", "char_start": 325, "char_end": 354 }, { "id": "P03-1050.11", "char_start": 390, "char_end": 397 }, { "id": "P03-1050.12", "char_start": 435, "char_end": 441 }, { "id": "P03-1050.13", "char_start": 445, "char_end": 450 }, { "id": "P03-1050.14", "char_start": 492, "char_end": 498 }, { "id": "P03-1050.15", "char_start": 539, "char_end": 547 }, { "id": "P03-1050.16", "char_start": 559, "char_end": 572 }, { "id": "P03-1050.17", "char_start": 579, "char_end": 603 }, { "id": "P03-1050.18", "char_start": 621, "char_end": 630 }, { "id": "P03-1050.19", "char_start": 668, "char_end": 682 }, { "id": "P03-1050.20", "char_start": 695, "char_end": 700 }, { "id": "P03-1050.21", "char_start": 703, "char_end": 714 }, { "id": "P03-1050.22", "char_start": 721, "char_end": 741 }, { "id": "P03-1050.23", "char_start": 762, "char_end": 784 }, { "id": "P03-1050.24", "char_start": 787, "char_end": 808 }, { "id": "P03-1050.25", "char_start": 815, "char_end": 843 }, { "id": "P03-1050.26", "char_start": 882, "char_end": 899 }, { "id": "P03-1050.27", "char_start": 905, "char_end": 919 }, { "id": "P03-1050.28", "char_start": 968, "char_end": 975 } ]
[ { "label": 1, "arg1": "P03-1050.1", "arg2": "P03-1050.2", "reverse": false }, { "label": 1, "arg1": "P03-1050.3", "arg2": "P03-1050.4", "reverse": true }, { "label": 3, "arg1": "P03-1050.15", "arg2": "P03-1050.16", "reverse": true }, { "label": 2, "arg1": "P03-1050.17", "arg2": "P03-1050.18", "reverse": false }, { "label": 1, "arg1": "P03-1050.24", "arg2": "P03-1050.25", "reverse": true } ]
P03-1051
Language Model Based Arabic Word Segmentation
We approximate Arabic's rich morphology by a model that a word consists of a sequence of morphemes in the pattern prefix*-stem-suffix* (* denotes zero or more occurrences of a morpheme ). Our method is seeded by a small manually segmented Arabic corpus and uses it to bootstrap an unsupervised algorithm to build the Arabic word segmenter from a large unsegmented Arabic corpus . The algorithm uses a trigram language model to determine the most probable morpheme sequence for a given input . The language model is initially estimated from a small manually segmented corpus of about 110,000 words . To improve the segmentation accuracy , we use an unsupervised algorithm for automatically acquiring new stems from a 155 million word unsegmented corpus , and re-estimate the model parameters with the expanded vocabulary and training corpus . The resulting Arabic word segmentation system achieves around 97% exact match accuracy on a test corpus containing 28,449 word tokens . We believe this is a state-of-the-art performance and the algorithm can be used for many highly inflected languages provided that one can create a small manually segmented corpus of the language of interest.
[ { "id": "P03-1051.1", "char_start": 16, "char_end": 40 }, { "id": "P03-1051.2", "char_start": 46, "char_end": 51 }, { "id": "P03-1051.3", "char_start": 59, "char_end": 63 }, { "id": "P03-1051.4", "char_start": 90, "char_end": 99 }, { "id": "P03-1051.5", "char_start": 107, "char_end": 114 }, { "id": "P03-1051.6", "char_start": 115, "char_end": 135 }, { "id": "P03-1051.7", "char_start": 177, "char_end": 185 }, { "id": "P03-1051.8", "char_start": 221, "char_end": 253 }, { "id": "P03-1051.9", "char_start": 282, "char_end": 304 }, { "id": "P03-1051.10", "char_start": 318, "char_end": 339 }, { "id": "P03-1051.11", "char_start": 353, "char_end": 378 }, { "id": "P03-1051.12", "char_start": 402, "char_end": 424 }, { "id": "P03-1051.13", "char_start": 456, "char_end": 473 }, { "id": "P03-1051.14", "char_start": 486, "char_end": 491 }, { "id": "P03-1051.15", "char_start": 498, "char_end": 512 }, { "id": "P03-1051.16", "char_start": 549, "char_end": 574 }, { "id": "P03-1051.17", "char_start": 592, "char_end": 597 }, { "id": "P03-1051.18", "char_start": 615, "char_end": 627 }, { "id": "P03-1051.19", "char_start": 628, "char_end": 636 }, { "id": "P03-1051.20", "char_start": 649, "char_end": 671 }, { "id": "P03-1051.21", "char_start": 704, "char_end": 709 }, { "id": "P03-1051.22", "char_start": 729, "char_end": 733 }, { "id": "P03-1051.23", "char_start": 734, "char_end": 752 }, { "id": "P03-1051.24", "char_start": 775, "char_end": 791 }, { "id": "P03-1051.25", "char_start": 810, "char_end": 820 }, { "id": "P03-1051.26", "char_start": 825, "char_end": 840 }, { "id": "P03-1051.27", "char_start": 857, "char_end": 888 }, { "id": "P03-1051.28", "char_start": 909, "char_end": 929 }, { "id": "P03-1051.29", "char_start": 935, "char_end": 946 }, { "id": "P03-1051.30", "char_start": 965, "char_end": 976 }, { "id": "P03-1051.31", "char_start": 1068, "char_end": 1094 }, { "id": "P03-1051.32", "char_start": 1132, "char_end": 1157 }, { "id": "P03-1051.33", "char_start": 1165, "char_end": 1173 } ]
[ { "label": 1, "arg1": "P03-1051.8", "arg2": "P03-1051.9", "reverse": false }, { "label": 1, "arg1": "P03-1051.10", "arg2": "P03-1051.11", "reverse": true }, { "label": 3, "arg1": "P03-1051.13", "arg2": "P03-1051.14", "reverse": false }, { "label": 3, "arg1": "P03-1051.15", "arg2": "P03-1051.16", "reverse": false }, { "label": 4, "arg1": "P03-1051.21", "arg2": "P03-1051.23", "reverse": false }, { "label": 1, "arg1": "P03-1051.24", "arg2": "P03-1051.25", "reverse": false }, { "label": 2, "arg1": "P03-1051.27", "arg2": "P03-1051.28", "reverse": false }, { "label": 4, "arg1": "P03-1051.29", "arg2": "P03-1051.30", "reverse": true }, { "label": 3, "arg1": "P03-1051.32", "arg2": "P03-1051.33", "reverse": false } ]
P03-1058
Exploiting Parallel Texts for Word Sense Disambiguation: An Empirical Study
A central problem of word sense disambiguation (WSD) is the lack of manually sense-tagged data required for supervised learning . In this paper, we evaluate an approach to automatically acquire sense-tagged training data from English-Chinese parallel corpora , which are then used for disambiguating the nouns in the SENSEVAL-2 English lexical sample task . Our investigation reveals that this method of acquiring sense-tagged data is promising. On a subset of the most difficult SENSEVAL-2 nouns , the accuracy difference between the two approaches is only 14.0%, and the difference could narrow further to 6.5% if we disregard the advantage that manually sense-tagged data have in their sense coverage . Our analysis also highlights the importance of the issue of domain dependence in evaluating WSD programs .
[ { "id": "P03-1058.1", "char_start": 23, "char_end": 54 }, { "id": "P03-1058.2", "char_start": 70, "char_end": 96 }, { "id": "P03-1058.3", "char_start": 110, "char_end": 129 }, { "id": "P03-1058.4", "char_start": 196, "char_end": 222 }, { "id": "P03-1058.5", "char_start": 228, "char_end": 260 }, { "id": "P03-1058.6", "char_start": 306, "char_end": 311 }, { "id": "P03-1058.7", "char_start": 319, "char_end": 357 }, { "id": "P03-1058.8", "char_start": 396, "char_end": 433 }, { "id": "P03-1058.9", "char_start": 482, "char_end": 498 }, { "id": "P03-1058.10", "char_start": 505, "char_end": 513 }, { "id": "P03-1058.11", "char_start": 650, "char_end": 676 }, { "id": "P03-1058.12", "char_start": 691, "char_end": 705 }, { "id": "P03-1058.13", "char_start": 768, "char_end": 785 }, { "id": "P03-1058.14", "char_start": 800, "char_end": 812 } ]
[ { "label": 1, "arg1": "P03-1058.1", "arg2": "P03-1058.2", "reverse": true }, { "label": 3, "arg1": "P03-1058.4", "arg2": "P03-1058.5", "reverse": false }, { "label": 3, "arg1": "P03-1058.13", "arg2": "P03-1058.14", "reverse": false } ]
P03-1068
Towards a Resource for Lexical Semantics: A Large German Corpus with Extensive Semantic Annotation
We describe the ongoing construction of a large, semantically annotated corpus resource as reliable basis for the large-scale acquisition of word-semantic information , e.g. the construction of domain-independent lexica . The backbone of the annotation are semantic roles in the frame semantics paradigm . We report experiences and evaluate the annotated data from the first project stage. On this basis, we discuss the problems of vagueness and ambiguity in semantic annotation .
[ { "id": "P03-1068.1", "char_start": 50, "char_end": 79 }, { "id": "P03-1068.2", "char_start": 127, "char_end": 167 }, { "id": "P03-1068.3", "char_start": 195, "char_end": 220 }, { "id": "P03-1068.4", "char_start": 243, "char_end": 253 }, { "id": "P03-1068.5", "char_start": 258, "char_end": 272 }, { "id": "P03-1068.6", "char_start": 280, "char_end": 304 }, { "id": "P03-1068.7", "char_start": 346, "char_end": 360 }, { "id": "P03-1068.8", "char_start": 433, "char_end": 442 }, { "id": "P03-1068.9", "char_start": 447, "char_end": 456 }, { "id": "P03-1068.10", "char_start": 460, "char_end": 479 } ]
[ { "label": 1, "arg1": "P03-1068.1", "arg2": "P03-1068.2", "reverse": false }, { "label": 3, "arg1": "P03-1068.9", "arg2": "P03-1068.10", "reverse": false } ]
P03-1070
Towards a Model of Face-to-Face Grounding
We investigate the verbal and nonverbal means for grounding , and propose a design for embodied conversational agents that relies on both kinds of signals to establish common ground in human-computer interaction . We analyzed eye gaze , head nods and attentional focus in the context of a direction-giving task . The distribution of nonverbal behaviors differed depending on the type of dialogue move being grounded, and the overall pattern reflected a monitoring of lack of negative feedback . Based on these results, we present an ECA that uses verbal and nonverbal grounding acts to update dialogue state .
[ { "id": "P03-1070.1", "char_start": 20, "char_end": 46 }, { "id": "P03-1070.2", "char_start": 51, "char_end": 60 }, { "id": "P03-1070.3", "char_start": 88, "char_end": 118 }, { "id": "P03-1070.4", "char_start": 148, "char_end": 155 }, { "id": "P03-1070.5", "char_start": 169, "char_end": 182 }, { "id": "P03-1070.6", "char_start": 186, "char_end": 212 }, { "id": "P03-1070.7", "char_start": 227, "char_end": 235 }, { "id": "P03-1070.8", "char_start": 238, "char_end": 247 }, { "id": "P03-1070.9", "char_start": 252, "char_end": 269 }, { "id": "P03-1070.10", "char_start": 290, "char_end": 311 }, { "id": "P03-1070.11", "char_start": 334, "char_end": 353 }, { "id": "P03-1070.12", "char_start": 388, "char_end": 401 }, { "id": "P03-1070.13", "char_start": 476, "char_end": 493 }, { "id": "P03-1070.14", "char_start": 534, "char_end": 537 }, { "id": "P03-1070.15", "char_start": 548, "char_end": 583 }, { "id": "P03-1070.16", "char_start": 594, "char_end": 608 } ]
[ { "label": 3, "arg1": "P03-1070.9", "arg2": "P03-1070.10", "reverse": false }, { "label": 1, "arg1": "P03-1070.14", "arg2": "P03-1070.15", "reverse": true } ]
P03-2036
Comparison between CFG filtering techniques for LTAG and HPSG
An empirical comparison of CFG filtering techniques for LTAG and HPSG is presented. We demonstrate that an approximation of HPSG produces a more effective CFG filter than that of LTAG . We also investigate the reason for that difference.
[ { "id": "P03-2036.1", "char_start": 28, "char_end": 52 }, { "id": "P03-2036.2", "char_start": 57, "char_end": 61 }, { "id": "P03-2036.3", "char_start": 66, "char_end": 70 }, { "id": "P03-2036.4", "char_start": 125, "char_end": 129 }, { "id": "P03-2036.5", "char_start": 156, "char_end": 166 }, { "id": "P03-2036.6", "char_start": 180, "char_end": 184 } ]
[ { "label": 6, "arg1": "P03-2036.4", "arg2": "P03-2036.6", "reverse": false } ]
C04-1106
Lower and higher estimates of the number of "true analogies" between sentences contained in a large multilingual corpus
The reality of analogies between words is refuted by noone (e.g., I walked is to to walk as I laughed is to to laugh, noted I walked : to walk :: I laughed : to laugh). But computational linguists seem to be quite dubious about analogies between sentences : they would not be enough numerous to be of any use. We report experiments conducted on a multilingual corpus to estimate the number of analogies among the sentences that it contains. We give two estimates, a lower one and a higher one. As an analogy must be valid on the level of form as well as on the level of meaning , we relied on the idea that translation should preserve meaning to test for similar meanings .
[ { "id": "C04-1106.1", "char_start": 16, "char_end": 39 }, { "id": "C04-1106.2", "char_start": 174, "char_end": 197 }, { "id": "C04-1106.3", "char_start": 229, "char_end": 256 }, { "id": "C04-1106.4", "char_start": 348, "char_end": 367 }, { "id": "C04-1106.5", "char_start": 394, "char_end": 403 }, { "id": "C04-1106.6", "char_start": 414, "char_end": 423 }, { "id": "C04-1106.7", "char_start": 501, "char_end": 508 }, { "id": "C04-1106.8", "char_start": 539, "char_end": 543 }, { "id": "C04-1106.9", "char_start": 571, "char_end": 578 }, { "id": "C04-1106.10", "char_start": 608, "char_end": 619 }, { "id": "C04-1106.11", "char_start": 636, "char_end": 643 }, { "id": "C04-1106.12", "char_start": 664, "char_end": 672 } ]
[ { "label": 3, "arg1": "C04-1106.5", "arg2": "C04-1106.6", "reverse": false } ]
N04-1024
Evaluating Multiple Aspects of Coherence in Student Essays
CriterionSM Online Essay Evaluation Service includes a capability that labels sentences in student writing with essay-based discourse elements (e.g., thesis statements ). We describe a new system that enhances Criterion 's capability, by evaluating multiple aspects of coherence in essays . This system identifies features of sentences based on semantic similarity measures and discourse structure . A support vector machine uses these features to capture breakdowns in coherence due to relatedness to the essay question and relatedness between discourse elements . Intra-sentential quality is evaluated with rule-based heuristics . Results indicate that the system yields higher performance than a baseline on all three aspects.
[ { "id": "N04-1024.1", "char_start": 1, "char_end": 44 }, { "id": "N04-1024.2", "char_start": 79, "char_end": 88 }, { "id": "N04-1024.3", "char_start": 100, "char_end": 107 }, { "id": "N04-1024.4", "char_start": 113, "char_end": 143 }, { "id": "N04-1024.5", "char_start": 151, "char_end": 168 }, { "id": "N04-1024.6", "char_start": 211, "char_end": 220 }, { "id": "N04-1024.7", "char_start": 270, "char_end": 279 }, { "id": "N04-1024.8", "char_start": 283, "char_end": 289 }, { "id": "N04-1024.9", "char_start": 315, "char_end": 323 }, { "id": "N04-1024.10", "char_start": 327, "char_end": 336 }, { "id": "N04-1024.11", "char_start": 346, "char_end": 374 }, { "id": "N04-1024.12", "char_start": 379, "char_end": 398 }, { "id": "N04-1024.13", "char_start": 403, "char_end": 425 }, { "id": "N04-1024.14", "char_start": 437, "char_end": 445 }, { "id": "N04-1024.15", "char_start": 457, "char_end": 480 }, { "id": "N04-1024.16", "char_start": 507, "char_end": 521 }, { "id": "N04-1024.17", "char_start": 546, "char_end": 564 }, { "id": "N04-1024.18", "char_start": 567, "char_end": 591 }, { "id": "N04-1024.19", "char_start": 610, "char_end": 631 }, { "id": "N04-1024.20", "char_start": 700, "char_end": 708 } ]
[ { "label": 1, "arg1": "N04-1024.1", "arg2": "N04-1024.3", "reverse": false }, { "label": 3, "arg1": "N04-1024.7", "arg2": "N04-1024.8", "reverse": false }, { "label": 3, "arg1": "N04-1024.9", "arg2": "N04-1024.10", "reverse": false }, { "label": 1, "arg1": "N04-1024.13", "arg2": "N04-1024.14", "reverse": true }, { "label": 5, "arg1": "N04-1024.18", "arg2": "N04-1024.19", "reverse": true } ]
H05-1005
Improving Multilingual Summarization: Using Redundancy in the Input to Correct MT errors
In this paper, we use the information redundancy in multilingual input to correct errors in machine translation and thus improve the quality of multilingual summaries . We consider the case of multi-document summarization , where the input documents are in Arabic , and the output summary is in English . Typically, information that makes it to a summary appears in many different lexical-syntactic forms in the input documents . Further, the use of multiple machine translation systems provides yet more redundancy , yielding different ways to realize that information in English . We demonstrate how errors in the machine translations of the input Arabic documents can be corrected by identifying and generating from such redundancy , focusing on noun phrases .
[ { "id": "H05-1005.1", "char_start": 27, "char_end": 49 }, { "id": "H05-1005.2", "char_start": 53, "char_end": 71 }, { "id": "H05-1005.3", "char_start": 93, "char_end": 112 }, { "id": "H05-1005.4", "char_start": 145, "char_end": 167 }, { "id": "H05-1005.5", "char_start": 194, "char_end": 222 }, { "id": "H05-1005.6", "char_start": 241, "char_end": 250 }, { "id": "H05-1005.7", "char_start": 258, "char_end": 264 }, { "id": "H05-1005.8", "char_start": 282, "char_end": 289 }, { "id": "H05-1005.9", "char_start": 296, "char_end": 303 }, { "id": "H05-1005.10", "char_start": 348, "char_end": 355 }, { "id": "H05-1005.11", "char_start": 382, "char_end": 405 }, { "id": "H05-1005.12", "char_start": 419, "char_end": 428 }, { "id": "H05-1005.13", "char_start": 460, "char_end": 487 }, { "id": "H05-1005.14", "char_start": 506, "char_end": 516 }, { "id": "H05-1005.15", "char_start": 559, "char_end": 570 }, { "id": "H05-1005.16", "char_start": 574, "char_end": 581 }, { "id": "H05-1005.17", "char_start": 617, "char_end": 637 }, { "id": "H05-1005.18", "char_start": 651, "char_end": 667 }, { "id": "H05-1005.19", "char_start": 725, "char_end": 735 }, { "id": "H05-1005.20", "char_start": 750, "char_end": 762 } ]
[ { "label": 3, "arg1": "H05-1005.1", "arg2": "H05-1005.2", "reverse": false }, { "label": 3, "arg1": "H05-1005.6", "arg2": "H05-1005.7", "reverse": true }, { "label": 3, "arg1": "H05-1005.8", "arg2": "H05-1005.9", "reverse": true }, { "label": 3, "arg1": "H05-1005.15", "arg2": "H05-1005.16", "reverse": true } ]
H05-1012
A Maximum Entropy Word Aligner for Arabic-English Machine Translation
This paper presents a maximum entropy word alignment algorithm for Arabic-English based on supervised training data . We demonstrate that it is feasible to create training material for problems in machine translation and that a mixture of supervised and unsupervised methods yields superior performance . The probabilistic model used in the alignment directly models the link decisions . Significant improvement over traditional word alignment techniques is shown as well as improvement on several machine translation tests . Performance of the algorithm is contrasted with human annotation performance .
[ { "id": "H05-1012.1", "char_start": 23, "char_end": 63 }, { "id": "H05-1012.2", "char_start": 68, "char_end": 82 }, { "id": "H05-1012.3", "char_start": 92, "char_end": 116 }, { "id": "H05-1012.4", "char_start": 164, "char_end": 181 }, { "id": "H05-1012.5", "char_start": 198, "char_end": 217 }, { "id": "H05-1012.6", "char_start": 240, "char_end": 275 }, { "id": "H05-1012.7", "char_start": 292, "char_end": 303 }, { "id": "H05-1012.8", "char_start": 310, "char_end": 329 }, { "id": "H05-1012.9", "char_start": 342, "char_end": 351 }, { "id": "H05-1012.10", "char_start": 372, "char_end": 386 }, { "id": "H05-1012.11", "char_start": 430, "char_end": 455 }, { "id": "H05-1012.12", "char_start": 499, "char_end": 524 }, { "id": "H05-1012.13", "char_start": 575, "char_end": 603 } ]
[ { "label": 1, "arg1": "H05-1012.1", "arg2": "H05-1012.3", "reverse": true }, { "label": 2, "arg1": "H05-1012.6", "arg2": "H05-1012.7", "reverse": false }, { "label": 1, "arg1": "H05-1012.8", "arg2": "H05-1012.9", "reverse": false } ]
H05-1095
Translating with non-contiguous phrases
This paper presents a phrase-based statistical machine translation method , based on non-contiguous phrases , i.e. phrases with gaps. A method for producing such phrases from a word-aligned corpora is proposed. A statistical translation model is also presented that deals such phrases , as well as a training method based on the maximization of translation accuracy , as measured with the NIST evaluation metric . Translations are produced by means of a beam-search decoder . Experimental results are presented, that demonstrate how the proposed method allows to better generalize from the training data .
[ { "id": "H05-1095.1", "char_start": 23, "char_end": 74 }, { "id": "H05-1095.2", "char_start": 86, "char_end": 108 }, { "id": "H05-1095.3", "char_start": 116, "char_end": 123 }, { "id": "H05-1095.4", "char_start": 163, "char_end": 170 }, { "id": "H05-1095.5", "char_start": 178, "char_end": 198 }, { "id": "H05-1095.6", "char_start": 214, "char_end": 243 }, { "id": "H05-1095.7", "char_start": 278, "char_end": 285 }, { "id": "H05-1095.8", "char_start": 301, "char_end": 316 }, { "id": "H05-1095.9", "char_start": 346, "char_end": 366 }, { "id": "H05-1095.10", "char_start": 390, "char_end": 412 }, { "id": "H05-1095.11", "char_start": 415, "char_end": 427 }, { "id": "H05-1095.12", "char_start": 455, "char_end": 474 }, { "id": "H05-1095.13", "char_start": 591, "char_end": 604 } ]
[ { "label": 1, "arg1": "H05-1095.1", "arg2": "H05-1095.2", "reverse": true }, { "label": 4, "arg1": "H05-1095.4", "arg2": "H05-1095.5", "reverse": false }, { "label": 1, "arg1": "H05-1095.6", "arg2": "H05-1095.7", "reverse": false }, { "label": 1, "arg1": "H05-1095.8", "arg2": "H05-1095.9", "reverse": true }, { "label": 1, "arg1": "H05-1095.11", "arg2": "H05-1095.12", "reverse": true } ]
H05-1117
Automatically Evaluating Answers to Definition Questions
Following recent developments in the automatic evaluation of machine translation and document summarization , we present a similar approach, implemented in a measure called POURPRE , for automatically evaluating answers to definition questions . Until now, the only way to assess the correctness of answers to such questions involves manual determination of whether an information nugget appears in a system's response. The lack of automatic methods for scoring system output is an impediment to progress in the field, which we address with this work. Experiments with the TREC 2003 and TREC 2004 QA tracks indicate that rankings produced by our metric correlate highly with official rankings , and that POURPRE outperforms direct application of existing metrics.
[ { "id": "H05-1117.1", "char_start": 38, "char_end": 58 }, { "id": "H05-1117.2", "char_start": 62, "char_end": 81 }, { "id": "H05-1117.3", "char_start": 86, "char_end": 108 }, { "id": "H05-1117.4", "char_start": 174, "char_end": 181 }, { "id": "H05-1117.5", "char_start": 188, "char_end": 244 }, { "id": "H05-1117.6", "char_start": 455, "char_end": 476 }, { "id": "H05-1117.7", "char_start": 574, "char_end": 607 }, { "id": "H05-1117.8", "char_start": 622, "char_end": 630 }, { "id": "H05-1117.9", "char_start": 676, "char_end": 693 }, { "id": "H05-1117.10", "char_start": 705, "char_end": 712 } ]
[ { "label": 1, "arg1": "H05-1117.1", "arg2": "H05-1117.2", "reverse": false }, { "label": 6, "arg1": "H05-1117.8", "arg2": "H05-1117.9", "reverse": false } ]
H05-2007
Pattern Visualization for Machine Translation Output
We describe a method for identifying systematic patterns in translation data using part-of-speech tag sequences . We incorporate this analysis into a diagnostic tool intended for developers of machine translation systems , and demonstrate how our application can be used by developers to explore patterns in machine translation output .
[ { "id": "H05-2007.1", "char_start": 49, "char_end": 57 }, { "id": "H05-2007.2", "char_start": 61, "char_end": 77 }, { "id": "H05-2007.3", "char_start": 84, "char_end": 112 }, { "id": "H05-2007.4", "char_start": 151, "char_end": 166 }, { "id": "H05-2007.5", "char_start": 180, "char_end": 190 }, { "id": "H05-2007.6", "char_start": 194, "char_end": 221 }, { "id": "H05-2007.7", "char_start": 275, "char_end": 285 }, { "id": "H05-2007.8", "char_start": 297, "char_end": 305 }, { "id": "H05-2007.9", "char_start": 309, "char_end": 335 } ]
[ { "label": 4, "arg1": "H05-2007.1", "arg2": "H05-2007.2", "reverse": false }, { "label": 4, "arg1": "H05-2007.8", "arg2": "H05-2007.9", "reverse": false } ]
I05-2021
Evaluating the Word Sense Disambiguation Performance of Statistical Machine Translation
We present the first known empirical test of an increasingly common speculative claim, by evaluating a representative Chinese-to-English SMT model directly on word sense disambiguation performance , using standard WSD evaluation methodology and datasets from the Senseval-3 Chinese lexical sample task . Much effort has been put in designing and evaluating dedicated word sense disambiguation (WSD) models , in particular with the Senseval series of workshops. At the same time, the recent improvements in the BLEU scores of statistical machine translation (SMT) suggests that SMT models are good at predicting the right translation of the words in source language sentences . Surprisingly however, the WSD accuracy of SMT models has never been evaluated and compared with that of the dedicated WSD models . We present controlled experiments showing the WSD accuracy of current typical SMT models to be significantly lower than that of all the dedicated WSD models considered. This tends to support the view that despite recent speculative claims to the contrary, current SMT models do have limitations in comparison with dedicated WSD models , and that SMT should benefit from the better predictions made by the WSD models .
[ { "id": "I05-2021.1", "char_start": 28, "char_end": 42 }, { "id": "I05-2021.2", "char_start": 119, "char_end": 147 }, { "id": "I05-2021.3", "char_start": 160, "char_end": 197 }, { "id": "I05-2021.4", "char_start": 215, "char_end": 241 }, { "id": "I05-2021.5", "char_start": 246, "char_end": 254 }, { "id": "I05-2021.6", "char_start": 264, "char_end": 302 }, { "id": "I05-2021.7", "char_start": 368, "char_end": 406 }, { "id": "I05-2021.8", "char_start": 432, "char_end": 440 }, { "id": "I05-2021.9", "char_start": 511, "char_end": 522 }, { "id": "I05-2021.10", "char_start": 526, "char_end": 563 }, { "id": "I05-2021.11", "char_start": 578, "char_end": 588 }, { "id": "I05-2021.12", "char_start": 622, "char_end": 633 }, { "id": "I05-2021.13", "char_start": 641, "char_end": 646 }, { "id": "I05-2021.14", "char_start": 650, "char_end": 675 }, { "id": "I05-2021.15", "char_start": 704, "char_end": 707 }, { "id": "I05-2021.16", "char_start": 708, "char_end": 716 }, { "id": "I05-2021.17", "char_start": 720, "char_end": 730 }, { "id": "I05-2021.18", "char_start": 796, "char_end": 806 }, { "id": "I05-2021.19", "char_start": 855, "char_end": 858 }, { "id": "I05-2021.20", "char_start": 859, "char_end": 867 }, { "id": "I05-2021.21", "char_start": 887, "char_end": 897 }, { "id": "I05-2021.22", "char_start": 955, "char_end": 965 }, { "id": "I05-2021.23", "char_start": 1073, "char_end": 1083 }, { "id": "I05-2021.24", "char_start": 1133, "char_end": 1143 }, { "id": "I05-2021.25", "char_start": 1155, "char_end": 1158 }, { "id": "I05-2021.26", "char_start": 1214, "char_end": 1224 } ]
[ { "label": 3, "arg1": "I05-2021.12", "arg2": "I05-2021.13", "reverse": false }, { "label": 2, "arg1": "I05-2021.16", "arg2": "I05-2021.17", "reverse": true }, { "label": 6, "arg1": "I05-2021.23", "arg2": "I05-2021.24", "reverse": false } ]
I05-2048
Statistical Machine Translation Part I: Hands-On Introduction
Statistical machine translation (SMT) is currently one of the hot spots in natural language processing . Over the last few years dramatic improvements have been made, and a number of comparative evaluations have shown, that SMT gives competitive results to rule-based translation systems , requiring significantly less development time. This is particularly important when building translation systems for new language pairs or new domains . This workshop is intended to give an introduction to statistical machine translation with a focus on practical considerations. Participants should be able, after attending this workshop, to set out building an SMT system themselves and achieving good baseline results in a short time. The tutorial will cover the basics of SMT : Theory will be put into practice. STTK , a statistical machine translation tool kit , will be introduced and used to build a working translation system . STTK has been developed by the presenter and co-workers over a number of years and is currently used as the basis of CMU's SMT system . It has also successfully been coupled with rule-based and example based machine translation modules to build a multi engine machine translation system . The source code of the tool kit will be made available.
[ { "id": "I05-2048.1", "char_start": 1, "char_end": 38 }, { "id": "I05-2048.2", "char_start": 76, "char_end": 103 }, { "id": "I05-2048.3", "char_start": 225, "char_end": 228 }, { "id": "I05-2048.4", "char_start": 258, "char_end": 288 }, { "id": "I05-2048.5", "char_start": 383, "char_end": 402 }, { "id": "I05-2048.6", "char_start": 411, "char_end": 425 }, { "id": "I05-2048.7", "char_start": 433, "char_end": 440 }, { "id": "I05-2048.8", "char_start": 496, "char_end": 527 }, { "id": "I05-2048.9", "char_start": 653, "char_end": 663 }, { "id": "I05-2048.10", "char_start": 694, "char_end": 710 }, { "id": "I05-2048.11", "char_start": 766, "char_end": 769 }, { "id": "I05-2048.12", "char_start": 806, "char_end": 810 }, { "id": "I05-2048.13", "char_start": 815, "char_end": 855 }, { "id": "I05-2048.14", "char_start": 905, "char_end": 923 }, { "id": "I05-2048.15", "char_start": 926, "char_end": 930 }, { "id": "I05-2048.16", "char_start": 1043, "char_end": 1059 }, { "id": "I05-2048.17", "char_start": 1105, "char_end": 1161 }, { "id": "I05-2048.18", "char_start": 1173, "char_end": 1212 }, { "id": "I05-2048.19", "char_start": 1219, "char_end": 1230 }, { "id": "I05-2048.20", "char_start": 1238, "char_end": 1246 } ]
[ { "label": 6, "arg1": "I05-2048.3", "arg2": "I05-2048.4", "reverse": false } ]
I05-4010
Harvesting the Bitexts of the Laws of Hong Kong From the Web
In this paper we present our recent work on harvesting English-Chinese bitexts of the laws of Hong Kong from the Web and aligning them to the subparagraph level via utilizing the numbering system in the legal text hierarchy . Basic methodology and practical techniques are reported in detail. The resultant bilingual corpus , 10.4M English words and 18.3M Chinese characters , is an authoritative and comprehensive text collection covering the specific and special domain of HK laws. It is particularly valuable to empirical MT research . This piece of work has also laid a foundation for exploring and harvesting English-Chinese bitexts in a larger volume from the Web .
[ { "id": "I05-4010.1", "char_start": 56, "char_end": 79 }, { "id": "I05-4010.2", "char_start": 114, "char_end": 117 }, { "id": "I05-4010.3", "char_start": 143, "char_end": 155 }, { "id": "I05-4010.4", "char_start": 180, "char_end": 196 }, { "id": "I05-4010.5", "char_start": 204, "char_end": 224 }, { "id": "I05-4010.6", "char_start": 308, "char_end": 324 }, { "id": "I05-4010.7", "char_start": 333, "char_end": 346 }, { "id": "I05-4010.8", "char_start": 357, "char_end": 375 }, { "id": "I05-4010.9", "char_start": 416, "char_end": 431 }, { "id": "I05-4010.10", "char_start": 516, "char_end": 537 }, { "id": "I05-4010.11", "char_start": 615, "char_end": 638 }, { "id": "I05-4010.12", "char_start": 667, "char_end": 670 } ]
[ { "label": 4, "arg1": "I05-4010.1", "arg2": "I05-4010.2", "reverse": false }, { "label": 4, "arg1": "I05-4010.11", "arg2": "I05-4010.12", "reverse": false } ]
I05-5003
Using Machine Translation Evaluation Techniques to Determine Sentence-level Semantic Equivalence
The task of machine translation (MT) evaluation is closely related to the task of sentence-level semantic equivalence classification . This paper investigates the utility of applying standard MT evaluation methods (BLEU, NIST, WER and PER) to building classifiers to predict semantic equivalence and entailment . We also introduce a novel classification method based on PER which leverages part of speech information of the words contributing to the word matches and non-matches in the sentence . Our results show that MT evaluation techniques are able to produce useful features for paraphrase classification and to a lesser extent entailment . Our technique gives a substantial improvement in paraphrase classification accuracy over all of the other models used in the experiments.
[ { "id": "I05-5003.1", "char_start": 13, "char_end": 48 }, { "id": "I05-5003.2", "char_start": 83, "char_end": 133 }, { "id": "I05-5003.3", "char_start": 193, "char_end": 240 }, { "id": "I05-5003.4", "char_start": 253, "char_end": 264 }, { "id": "I05-5003.5", "char_start": 276, "char_end": 296 }, { "id": "I05-5003.6", "char_start": 301, "char_end": 311 }, { "id": "I05-5003.7", "char_start": 340, "char_end": 361 }, { "id": "I05-5003.8", "char_start": 371, "char_end": 374 }, { "id": "I05-5003.9", "char_start": 391, "char_end": 417 }, { "id": "I05-5003.10", "char_start": 425, "char_end": 430 }, { "id": "I05-5003.11", "char_start": 451, "char_end": 479 }, { "id": "I05-5003.12", "char_start": 487, "char_end": 495 }, { "id": "I05-5003.13", "char_start": 520, "char_end": 544 }, { "id": "I05-5003.14", "char_start": 572, "char_end": 580 }, { "id": "I05-5003.15", "char_start": 585, "char_end": 610 }, { "id": "I05-5003.16", "char_start": 634, "char_end": 644 }, { "id": "I05-5003.17", "char_start": 651, "char_end": 660 }, { "id": "I05-5003.18", "char_start": 696, "char_end": 730 }, { "id": "I05-5003.19", "char_start": 753, "char_end": 759 } ]
[ { "label": 6, "arg1": "I05-5003.1", "arg2": "I05-5003.2", "reverse": false }, { "label": 1, "arg1": "I05-5003.3", "arg2": "I05-5003.4", "reverse": false }, { "label": 1, "arg1": "I05-5003.7", "arg2": "I05-5003.8", "reverse": true }, { "label": 3, "arg1": "I05-5003.9", "arg2": "I05-5003.10", "reverse": false }, { "label": 2, "arg1": "I05-5003.13", "arg2": "I05-5003.14", "reverse": false }, { "label": 2, "arg1": "I05-5003.17", "arg2": "I05-5003.18", "reverse": false } ]
I05-5008
Automatic generation of paraphrases to be used as translation references in objective evaluation measures of machine translation
We propose a method that automatically generates paraphrase sets from seed sentences to be used as reference sets in objective machine translation evaluation measures like BLEU and NIST . We measured the quality of the paraphrases produced in an experiment, i.e., (i) their grammaticality : at least 99% correct sentences ; (ii) their equivalence in meaning : at least 96% correct paraphrases either by meaning equivalence or entailment ; and, (iii) the amount of internal lexical and syntactical variation in a set of paraphrases : slightly superior to that of hand-produced sets . The paraphrase sets produced by this method thus seem adequate as reference sets to be used for MT evaluation .
[ { "id": "I05-5008.1", "char_start": 50, "char_end": 60 }, { "id": "I05-5008.2", "char_start": 71, "char_end": 85 }, { "id": "I05-5008.3", "char_start": 100, "char_end": 114 }, { "id": "I05-5008.4", "char_start": 128, "char_end": 167 }, { "id": "I05-5008.5", "char_start": 173, "char_end": 177 }, { "id": "I05-5008.6", "char_start": 182, "char_end": 186 }, { "id": "I05-5008.7", "char_start": 220, "char_end": 231 }, { "id": "I05-5008.8", "char_start": 275, "char_end": 289 }, { "id": "I05-5008.9", "char_start": 313, "char_end": 322 }, { "id": "I05-5008.10", "char_start": 336, "char_end": 358 }, { "id": "I05-5008.11", "char_start": 382, "char_end": 393 }, { "id": "I05-5008.12", "char_start": 404, "char_end": 423 }, { "id": "I05-5008.13", "char_start": 427, "char_end": 437 }, { "id": "I05-5008.14", "char_start": 474, "char_end": 507 }, { "id": "I05-5008.15", "char_start": 520, "char_end": 531 }, { "id": "I05-5008.16", "char_start": 563, "char_end": 581 }, { "id": "I05-5008.17", "char_start": 588, "char_end": 598 }, { "id": "I05-5008.18", "char_start": 650, "char_end": 664 }, { "id": "I05-5008.19", "char_start": 680, "char_end": 693 } ]
[ { "label": 4, "arg1": "I05-5008.1", "arg2": "I05-5008.2", "reverse": false }, { "label": 3, "arg1": "I05-5008.14", "arg2": "I05-5008.15", "reverse": false }, { "label": 1, "arg1": "I05-5008.18", "arg2": "I05-5008.19", "reverse": false } ]
I05-6011
Annotating Honorifics Denoting Social Ranking of Referents
This paper proposes an annotating scheme that encodes honorifics (respectful words). Honorifics are used extensively in Japanese , reflecting the social relationship (e.g. social ranks and age) of the referents . This referential information is vital for resolving zero pronouns and improving machine translation outputs . Annotating honorifics is a complex task that involves identifying a predicate with honorifics , assigning ranks to referents of the predicate , calibrating the ranks , and connecting referents with their predicates .
[ { "id": "I05-6011.1", "char_start": 24, "char_end": 41 }, { "id": "I05-6011.2", "char_start": 55, "char_end": 65 }, { "id": "I05-6011.3", "char_start": 86, "char_end": 96 }, { "id": "I05-6011.4", "char_start": 121, "char_end": 129 }, { "id": "I05-6011.5", "char_start": 202, "char_end": 211 }, { "id": "I05-6011.6", "char_start": 219, "char_end": 242 }, { "id": "I05-6011.7", "char_start": 266, "char_end": 279 }, { "id": "I05-6011.8", "char_start": 294, "char_end": 321 }, { "id": "I05-6011.9", "char_start": 335, "char_end": 345 }, { "id": "I05-6011.10", "char_start": 392, "char_end": 401 }, { "id": "I05-6011.11", "char_start": 407, "char_end": 417 }, { "id": "I05-6011.12", "char_start": 430, "char_end": 435 }, { "id": "I05-6011.13", "char_start": 439, "char_end": 448 }, { "id": "I05-6011.14", "char_start": 456, "char_end": 465 }, { "id": "I05-6011.15", "char_start": 484, "char_end": 489 }, { "id": "I05-6011.16", "char_start": 507, "char_end": 516 }, { "id": "I05-6011.17", "char_start": 528, "char_end": 538 } ]
[ { "label": 3, "arg1": "I05-6011.1", "arg2": "I05-6011.2", "reverse": false }, { "label": 2, "arg1": "I05-6011.6", "arg2": "I05-6011.8", "reverse": false }, { "label": 3, "arg1": "I05-6011.12", "arg2": "I05-6011.13", "reverse": false } ]
J05-1003
Discriminative Reranking for Natural Language Parsing
This article considers approaches which rerank the output of an existing probabilistic parser . The base parser produces a set of candidate parses for each input sentence , with associated probabilities that define an initial ranking of these parses . A second model then attempts to improve upon this initial ranking , using additional features of the tree as evidence. The strength of our approach is that it allows a tree to be represented as an arbitrary set of features , without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account . We introduce a new method for the reranking task , based on the boosting approach to ranking problems described in Freund et al. (1998). We apply the boosting method to parsing the Wall Street Journal treebank . The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model . The new model achieved 89.75% F-measure , a 13% relative decrease in F-measure error over the baseline model's score of 88.2%. The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data . Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach . We argue that the method is an appealing alternative - in terms of both simplicity and efficiency - to work on feature selection methods within log-linear (maximum-entropy) models . Although the experiments in this article are on natural language parsing (NLP) , the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks , for example, speech recognition , machine translation , or natural language generation .
[ { "id": "J05-1003.1", "char_start": 74, "char_end": 94 }, { "id": "J05-1003.2", "char_start": 106, "char_end": 112 }, { "id": "J05-1003.3", "char_start": 131, "char_end": 147 }, { "id": "J05-1003.4", "char_start": 163, "char_end": 171 }, { "id": "J05-1003.5", "char_start": 190, "char_end": 203 }, { "id": "J05-1003.6", "char_start": 227, "char_end": 234 }, { "id": "J05-1003.7", "char_start": 244, "char_end": 250 }, { "id": "J05-1003.8", "char_start": 262, "char_end": 267 }, { "id": "J05-1003.9", "char_start": 311, "char_end": 318 }, { "id": "J05-1003.10", "char_start": 338, "char_end": 346 }, { "id": "J05-1003.11", "char_start": 354, "char_end": 358 }, { "id": "J05-1003.12", "char_start": 421, "char_end": 425 }, { "id": "J05-1003.13", "char_start": 467, "char_end": 475 }, { "id": "J05-1003.14", "char_start": 511, "char_end": 519 }, { "id": "J05-1003.15", "char_start": 573, "char_end": 583 }, { "id": "J05-1003.16", "char_start": 589, "char_end": 605 }, { "id": "J05-1003.17", "char_start": 624, "char_end": 632 }, { "id": "J05-1003.18", "char_start": 682, "char_end": 696 }, { "id": "J05-1003.19", "char_start": 712, "char_end": 729 }, { "id": "J05-1003.20", "char_start": 733, "char_end": 749 }, { "id": "J05-1003.21", "char_start": 798, "char_end": 813 }, { "id": "J05-1003.22", "char_start": 817, "char_end": 824 }, { "id": "J05-1003.23", "char_start": 829, "char_end": 857 }, { "id": "J05-1003.24", "char_start": 884, "char_end": 898 }, { "id": "J05-1003.25", "char_start": 907, "char_end": 921 }, { "id": "J05-1003.26", "char_start": 988, "char_end": 996 }, { "id": "J05-1003.27", "char_start": 1002, "char_end": 1013 }, { "id": "J05-1003.28", "char_start": 1053, "char_end": 1058 }, { "id": "J05-1003.29", "char_start": 1069, "char_end": 1074 }, { "id": "J05-1003.30", "char_start": 1091, "char_end": 1100 }, { "id": "J05-1003.31", "char_start": 1130, "char_end": 1139 }, { "id": "J05-1003.32", "char_start": 1155, "char_end": 1177 }, { "id": "J05-1003.33", "char_start": 1240, "char_end": 1257 }, { "id": "J05-1003.34", "char_start": 1287, "char_end": 1316 }, { "id": "J05-1003.35", "char_start": 1324, "char_end": 1336 }, { "id": "J05-1003.36", "char_start": 1424, "char_end": 1438 }, { "id": "J05-1003.37", "char_start": 1446, "char_end": 1463 }, { "id": "J05-1003.38", "char_start": 1577, "char_end": 1602 }, { "id": "J05-1003.39", "char_start": 1610, "char_end": 1645 }, { "id": "J05-1003.40", "char_start": 1696, "char_end": 1726 }, { "id": "J05-1003.41", "char_start": 1777, "char_end": 1789 }, { "id": "J05-1003.42", "char_start": 1820, "char_end": 1833 }, { "id": "J05-1003.43", "char_start": 1849, "char_end": 1867 }, { "id": "J05-1003.44", "char_start": 1870, "char_end": 1889 }, { "id": "J05-1003.45", "char_start": 1895, "char_end": 1922 } ]
[ { "label": 3, "arg1": "J05-1003.3", "arg2": "J05-1003.4", "reverse": false }, { "label": 3, "arg1": "J05-1003.10", "arg2": "J05-1003.11", "reverse": false }, { "label": 1, "arg1": "J05-1003.16", "arg2": "J05-1003.17", "reverse": true }, { "label": 1, "arg1": "J05-1003.19", "arg2": "J05-1003.20", "reverse": false }, { "label": 1, "arg1": "J05-1003.22", "arg2": "J05-1003.23", "reverse": false }, { "label": 2, "arg1": "J05-1003.29", "arg2": "J05-1003.30", "reverse": false }, { "label": 3, "arg1": "J05-1003.34", "arg2": "J05-1003.35", "reverse": false }, { "label": 4, "arg1": "J05-1003.38", "arg2": "J05-1003.39", "reverse": false } ]
J05-4003
Improving Machine Translation Performance by Exploiting Non-Parallel Corpora
We present a novel method for discovering parallel sentences in comparable, non-parallel corpora . We train a maximum entropy classifier that, given a pair of sentences , can reliably determine whether or not they are translations of each other. Using this approach, we extract parallel data from large Chinese, Arabic, and English non-parallel newspaper corpora . We evaluate the quality of the extracted data by showing that it improves the performance of a state-of-the-art statistical machine translation system . We also show that a good-quality MT system can be built from scratch by starting with a very small parallel corpus (100,000 words ) and exploiting a large non-parallel corpus . Thus, our method can be applied with great benefit to language pairs for which only scarce resources are available.
[ { "id": "J05-4003.1", "char_start": 32, "char_end": 62 }, { "id": "J05-4003.2", "char_start": 66, "char_end": 98 }, { "id": "J05-4003.3", "char_start": 112, "char_end": 138 }, { "id": "J05-4003.4", "char_start": 161, "char_end": 170 }, { "id": "J05-4003.5", "char_start": 220, "char_end": 232 }, { "id": "J05-4003.6", "char_start": 280, "char_end": 293 }, { "id": "J05-4003.7", "char_start": 305, "char_end": 364 }, { "id": "J05-4003.8", "char_start": 383, "char_end": 412 }, { "id": "J05-4003.9", "char_start": 479, "char_end": 517 }, { "id": "J05-4003.10", "char_start": 553, "char_end": 562 }, { "id": "J05-4003.11", "char_start": 619, "char_end": 634 }, { "id": "J05-4003.12", "char_start": 644, "char_end": 649 }, { "id": "J05-4003.13", "char_start": 675, "char_end": 694 }, { "id": "J05-4003.14", "char_start": 751, "char_end": 765 }, { "id": "J05-4003.15", "char_start": 788, "char_end": 797 } ]
[ { "label": 1, "arg1": "J05-4003.1", "arg2": "J05-4003.2", "reverse": false }, { "label": 4, "arg1": "J05-4003.6", "arg2": "J05-4003.7", "reverse": false }, { "label": 1, "arg1": "J05-4003.10", "arg2": "J05-4003.11", "reverse": true } ]
P05-1032
Scaling Phrase-Based Statistical Machine Translation to Larger Corpora and Longer Phrases
In this paper we describe a novel data structure for phrase-based statistical machine translation which allows for the retrieval of arbitrarily long phrases while simultaneously using less memory than is required by current decoder implementations. We detail the computational complexity and average retrieval times for looking up phrase translations in our suffix array-based data structure . We show how sampling can be used to reduce the retrieval time by orders of magnitude with no loss in translation quality .
[ { "id": "P05-1032.1", "char_start": 35, "char_end": 49 }, { "id": "P05-1032.2", "char_start": 54, "char_end": 98 }, { "id": "P05-1032.3", "char_start": 120, "char_end": 129 }, { "id": "P05-1032.4", "char_start": 150, "char_end": 157 }, { "id": "P05-1032.5", "char_start": 190, "char_end": 196 }, { "id": "P05-1032.6", "char_start": 225, "char_end": 232 }, { "id": "P05-1032.7", "char_start": 264, "char_end": 288 }, { "id": "P05-1032.8", "char_start": 293, "char_end": 316 }, { "id": "P05-1032.9", "char_start": 332, "char_end": 351 }, { "id": "P05-1032.10", "char_start": 359, "char_end": 392 }, { "id": "P05-1032.11", "char_start": 407, "char_end": 415 }, { "id": "P05-1032.12", "char_start": 442, "char_end": 456 }, { "id": "P05-1032.13", "char_start": 496, "char_end": 515 } ]
[ { "label": 4, "arg1": "P05-1032.9", "arg2": "P05-1032.10", "reverse": false } ]
P05-1034
Dependency Treelet Translation: Syntactically Informed Phrasal SMT
We describe a novel approach to statistical machine translation that combines syntactic information in the source language with recent advances in phrasal translation . This method requires a source-language dependency parser , target language word segmentation and an unsupervised word alignment component . We align a parallel corpus , project the source dependency parse onto the target sentence , extract dependency treelet translation pairs , and train a tree-based ordering model . We describe an efficient decoder and show that using these tree-based models in combination with conventional SMT models provides a promising approach that incorporates the power of phrasal SMT with the linguistic generality available in a parser .
[ { "id": "P05-1034.1", "char_start": 33, "char_end": 64 }, { "id": "P05-1034.2", "char_start": 79, "char_end": 100 }, { "id": "P05-1034.3", "char_start": 108, "char_end": 123 }, { "id": "P05-1034.4", "char_start": 148, "char_end": 167 }, { "id": "P05-1034.5", "char_start": 193, "char_end": 208 }, { "id": "P05-1034.6", "char_start": 209, "char_end": 226 }, { "id": "P05-1034.7", "char_start": 229, "char_end": 244 }, { "id": "P05-1034.8", "char_start": 245, "char_end": 262 }, { "id": "P05-1034.9", "char_start": 270, "char_end": 307 }, { "id": "P05-1034.10", "char_start": 321, "char_end": 336 }, { "id": "P05-1034.11", "char_start": 351, "char_end": 374 }, { "id": "P05-1034.12", "char_start": 391, "char_end": 399 }, { "id": "P05-1034.13", "char_start": 410, "char_end": 446 }, { "id": "P05-1034.14", "char_start": 461, "char_end": 486 }, { "id": "P05-1034.15", "char_start": 514, "char_end": 521 }, { "id": "P05-1034.16", "char_start": 548, "char_end": 565 }, { "id": "P05-1034.17", "char_start": 599, "char_end": 609 }, { "id": "P05-1034.18", "char_start": 671, "char_end": 682 }, { "id": "P05-1034.19", "char_start": 729, "char_end": 735 } ]
[ { "label": 3, "arg1": "P05-1034.2", "arg2": "P05-1034.3", "reverse": false }, { "label": 3, "arg1": "P05-1034.11", "arg2": "P05-1034.12", "reverse": false } ]
P05-1048
Word Sense Disambiguation vs. Statistical Machine Translation
We directly investigate a subject of much recent debate: do word sense disambigation models help statistical machine translation quality ? We present empirical results casting doubt on this common, but unproved, assumption. Using a state-of-the-art Chinese word sense disambiguation model to choose translation candidates for a typical IBM statistical MT system , we find that word sense disambiguation does not yield significantly better translation quality than the statistical machine translation system alone. Error analysis suggests several key factors behind this surprising finding, including inherent limitations of current statistical MT architectures .
[ { "id": "P05-1048.1", "char_start": 61, "char_end": 92 }, { "id": "P05-1048.2", "char_start": 98, "char_end": 129 }, { "id": "P05-1048.3", "char_start": 130, "char_end": 137 }, { "id": "P05-1048.4", "char_start": 250, "char_end": 289 }, { "id": "P05-1048.5", "char_start": 300, "char_end": 322 }, { "id": "P05-1048.6", "char_start": 337, "char_end": 362 }, { "id": "P05-1048.7", "char_start": 378, "char_end": 403 }, { "id": "P05-1048.8", "char_start": 440, "char_end": 459 }, { "id": "P05-1048.9", "char_start": 469, "char_end": 507 }, { "id": "P05-1048.10", "char_start": 515, "char_end": 529 }, { "id": "P05-1048.11", "char_start": 633, "char_end": 661 } ]
[ { "label": 1, "arg1": "P05-1048.4", "arg2": "P05-1048.5", "reverse": false }, { "label": 2, "arg1": "P05-1048.7", "arg2": "P05-1048.8", "reverse": false } ]
P05-1067
Machine Translation Using Probabilistic Synchronous Dependency Insertion Grammars
Syntax-based statistical machine translation (MT) aims at applying statistical models to structured data . In this paper, we present a syntax-based statistical machine translation system based on a probabilistic synchronous dependency insertion grammar . Synchronous dependency insertion grammars are a version of synchronous grammars defined on dependency trees . We first introduce our approach to inducing such a grammar from parallel corpora . Second, we describe the graphical model for the machine translation task , which can also be viewed as a stochastic tree-to-tree transducer . We introduce a polynomial time decoding algorithm for the model . We evaluate the outputs of our MT system using the NIST and Bleu automatic MT evaluation software . The result shows that our system outperforms the baseline system based on the IBM models in both translation speed and quality .
[ { "id": "P05-1067.1", "char_start": 1, "char_end": 50 }, { "id": "P05-1067.2", "char_start": 68, "char_end": 86 }, { "id": "P05-1067.3", "char_start": 90, "char_end": 105 }, { "id": "P05-1067.4", "char_start": 136, "char_end": 187 }, { "id": "P05-1067.5", "char_start": 199, "char_end": 253 }, { "id": "P05-1067.6", "char_start": 256, "char_end": 297 }, { "id": "P05-1067.7", "char_start": 315, "char_end": 335 }, { "id": "P05-1067.8", "char_start": 347, "char_end": 363 }, { "id": "P05-1067.9", "char_start": 417, "char_end": 424 }, { "id": "P05-1067.10", "char_start": 430, "char_end": 446 }, { "id": "P05-1067.11", "char_start": 473, "char_end": 488 }, { "id": "P05-1067.12", "char_start": 497, "char_end": 521 }, { "id": "P05-1067.13", "char_start": 554, "char_end": 588 }, { "id": "P05-1067.14", "char_start": 606, "char_end": 640 }, { "id": "P05-1067.15", "char_start": 649, "char_end": 654 }, { "id": "P05-1067.16", "char_start": 688, "char_end": 697 }, { "id": "P05-1067.17", "char_start": 708, "char_end": 754 }, { "id": "P05-1067.18", "char_start": 806, "char_end": 821 }, { "id": "P05-1067.19", "char_start": 835, "char_end": 845 }, { "id": "P05-1067.20", "char_start": 854, "char_end": 883 } ]
[ { "label": 1, "arg1": "P05-1067.2", "arg2": "P05-1067.3", "reverse": false }, { "label": 1, "arg1": "P05-1067.4", "arg2": "P05-1067.5", "reverse": true }, { "label": 4, "arg1": "P05-1067.9", "arg2": "P05-1067.10", "reverse": false }, { "label": 1, "arg1": "P05-1067.11", "arg2": "P05-1067.12", "reverse": false }, { "label": 1, "arg1": "P05-1067.18", "arg2": "P05-1067.19", "reverse": true } ]
P05-1069
A Localized Prediction Model for Statistical Machine Translation
In this paper, we present a novel training method for a localized phrase-based prediction model for statistical machine translation (SMT) . The model predicts blocks with orientation to handle local phrase re-ordering . We use a maximum likelihood criterion to train a log-linear block bigram model which uses real-valued features (e.g. a language model score ) as well as binary features based on the block identities themselves, e.g. block bigram features. Our training algorithm can easily handle millions of features . The best system obtains a 18.6% improvement over the baseline on a standard Arabic-English translation task .
[ { "id": "P05-1069.1", "char_start": 35, "char_end": 50 }, { "id": "P05-1069.2", "char_start": 57, "char_end": 96 }, { "id": "P05-1069.3", "char_start": 101, "char_end": 138 }, { "id": "P05-1069.4", "char_start": 145, "char_end": 150 }, { "id": "P05-1069.5", "char_start": 160, "char_end": 166 }, { "id": "P05-1069.6", "char_start": 194, "char_end": 218 }, { "id": "P05-1069.7", "char_start": 230, "char_end": 258 }, { "id": "P05-1069.8", "char_start": 270, "char_end": 299 }, { "id": "P05-1069.9", "char_start": 311, "char_end": 331 }, { "id": "P05-1069.10", "char_start": 340, "char_end": 360 }, { "id": "P05-1069.11", "char_start": 374, "char_end": 389 }, { "id": "P05-1069.12", "char_start": 403, "char_end": 408 }, { "id": "P05-1069.13", "char_start": 464, "char_end": 482 }, { "id": "P05-1069.14", "char_start": 513, "char_end": 521 }, { "id": "P05-1069.15", "char_start": 577, "char_end": 585 }, { "id": "P05-1069.16", "char_start": 600, "char_end": 631 } ]
[ { "label": 1, "arg1": "P05-1069.2", "arg2": "P05-1069.3", "reverse": false }, { "label": 1, "arg1": "P05-1069.8", "arg2": "P05-1069.9", "reverse": true } ]
P05-1074
Paraphrasing with Bilingual Parallel Corpora
Previous work has used monolingual parallel corpora to extract and generate paraphrases . We show that this task can be done using bilingual parallel corpora , a much more commonly available resource . Using alignment techniques from phrase-based statistical machine translation , we show how paraphrases in one language can be identified using a phrase in another language as a pivot. We define a paraphrase probability that allows paraphrases extracted from a bilingual parallel corpus to be ranked using translation probabilities , and show how it can be refined to take contextual information into account. We evaluate our paraphrase extraction and ranking methods using a set of manual word alignments , and contrast the quality with paraphrases extracted from automatic alignments .
[ { "id": "P05-1074.1", "char_start": 24, "char_end": 52 }, { "id": "P05-1074.2", "char_start": 77, "char_end": 88 }, { "id": "P05-1074.3", "char_start": 132, "char_end": 158 }, { "id": "P05-1074.4", "char_start": 192, "char_end": 200 }, { "id": "P05-1074.5", "char_start": 209, "char_end": 229 }, { "id": "P05-1074.6", "char_start": 235, "char_end": 279 }, { "id": "P05-1074.7", "char_start": 294, "char_end": 305 }, { "id": "P05-1074.8", "char_start": 313, "char_end": 321 }, { "id": "P05-1074.9", "char_start": 348, "char_end": 354 }, { "id": "P05-1074.10", "char_start": 399, "char_end": 421 }, { "id": "P05-1074.11", "char_start": 434, "char_end": 445 }, { "id": "P05-1074.12", "char_start": 463, "char_end": 488 }, { "id": "P05-1074.13", "char_start": 508, "char_end": 533 }, { "id": "P05-1074.14", "char_start": 575, "char_end": 597 }, { "id": "P05-1074.15", "char_start": 628, "char_end": 669 }, { "id": "P05-1074.16", "char_start": 685, "char_end": 707 }, { "id": "P05-1074.17", "char_start": 727, "char_end": 734 }, { "id": "P05-1074.18", "char_start": 740, "char_end": 751 }, { "id": "P05-1074.19", "char_start": 767, "char_end": 787 } ]
[ { "label": 4, "arg1": "P05-1074.1", "arg2": "P05-1074.2", "reverse": true }, { "label": 4, "arg1": "P05-1074.11", "arg2": "P05-1074.12", "reverse": false }, { "label": 4, "arg1": "P05-1074.18", "arg2": "P05-1074.19", "reverse": false } ]
P05-2016
Dependency-Based Statistical Machine Translation
We present a Czech-English statistical machine translation system which performs tree-to-tree translation of dependency structures . The only bilingual resource required is a sentence-aligned parallel corpus . All other resources are monolingual . We also refer to an evaluation method and plan to compare our system's output with a benchmark system .
[ { "id": "P05-2016.1", "char_start": 14, "char_end": 66 }, { "id": "P05-2016.2", "char_start": 82, "char_end": 106 }, { "id": "P05-2016.3", "char_start": 110, "char_end": 131 }, { "id": "P05-2016.4", "char_start": 143, "char_end": 161 }, { "id": "P05-2016.5", "char_start": 176, "char_end": 208 }, { "id": "P05-2016.6", "char_start": 221, "char_end": 230 }, { "id": "P05-2016.7", "char_start": 235, "char_end": 246 }, { "id": "P05-2016.8", "char_start": 269, "char_end": 286 }, { "id": "P05-2016.9", "char_start": 311, "char_end": 326 }, { "id": "P05-2016.10", "char_start": 334, "char_end": 350 } ]
[ { "label": 1, "arg1": "P05-2016.1", "arg2": "P05-2016.2", "reverse": false }, { "label": 6, "arg1": "P05-2016.9", "arg2": "P05-2016.10", "reverse": false } ]
E06-1018
Word Sense Induction: Triplet-Based Clustering and Automatic Evaluation
In this paper a novel solution to automatic and unsupervised word sense induction (WSI) is introduced. It represents an instantiation of the one sense per collocation observation (Gale et al., 1992). Like most existing approaches it utilizes clustering of word co-occurrences . This approach differs from other approaches to WSI in that it enhances the effect of the one sense per collocation observation by using triplets of words instead of pairs. The combination with a two-step clustering process using sentence co-occurrences as features allows for accurate results. Additionally, a novel and likewise automatic and unsupervised evaluation method inspired by Schutze's (1992) idea of evaluation of word sense disambiguation algorithms is employed. Offering advantages like reproducability and independency of a given biased gold standard it also enables automatic parameter optimization of the WSI algorithm .
[ { "id": "E06-1018.1", "char_start": 49, "char_end": 88 }, { "id": "E06-1018.2", "char_start": 142, "char_end": 179 }, { "id": "E06-1018.3", "char_start": 243, "char_end": 276 }, { "id": "E06-1018.4", "char_start": 326, "char_end": 329 }, { "id": "E06-1018.5", "char_start": 368, "char_end": 405 }, { "id": "E06-1018.6", "char_start": 427, "char_end": 432 }, { "id": "E06-1018.7", "char_start": 474, "char_end": 501 }, { "id": "E06-1018.8", "char_start": 508, "char_end": 531 }, { "id": "E06-1018.9", "char_start": 535, "char_end": 543 }, { "id": "E06-1018.10", "char_start": 622, "char_end": 652 }, { "id": "E06-1018.11", "char_start": 704, "char_end": 740 }, { "id": "E06-1018.12", "char_start": 830, "char_end": 843 }, { "id": "E06-1018.13", "char_start": 860, "char_end": 892 }, { "id": "E06-1018.14", "char_start": 900, "char_end": 913 } ]
[ { "label": 1, "arg1": "E06-1018.7", "arg2": "E06-1018.8", "reverse": true } ]
E06-1022
Addressee Identification in Face-to-Face Meetings
We present results on addressee identification in four-participants face-to-face meetings using Bayesian Network and Naive Bayes classifiers . First, we investigate how well the addressee of a dialogue act can be predicted based on gaze , utterance and conversational context features . Then, we explore whether information about meeting context can aid classifiers ' performances . Both classifiers perform the best when conversational context and utterance features are combined with speaker's gaze information . The classifiers show little gain from information about meeting context .
[ { "id": "E06-1022.1", "char_start": 23, "char_end": 47 }, { "id": "E06-1022.2", "char_start": 51, "char_end": 90 }, { "id": "E06-1022.3", "char_start": 97, "char_end": 113 }, { "id": "E06-1022.4", "char_start": 118, "char_end": 141 }, { "id": "E06-1022.5", "char_start": 179, "char_end": 188 }, { "id": "E06-1022.6", "char_start": 194, "char_end": 206 }, { "id": "E06-1022.7", "char_start": 233, "char_end": 237 }, { "id": "E06-1022.8", "char_start": 240, "char_end": 249 }, { "id": "E06-1022.9", "char_start": 254, "char_end": 285 }, { "id": "E06-1022.10", "char_start": 331, "char_end": 346 }, { "id": "E06-1022.11", "char_start": 355, "char_end": 366 }, { "id": "E06-1022.12", "char_start": 369, "char_end": 381 }, { "id": "E06-1022.13", "char_start": 389, "char_end": 400 }, { "id": "E06-1022.14", "char_start": 423, "char_end": 445 }, { "id": "E06-1022.15", "char_start": 450, "char_end": 468 }, { "id": "E06-1022.16", "char_start": 487, "char_end": 513 }, { "id": "E06-1022.17", "char_start": 520, "char_end": 531 }, { "id": "E06-1022.18", "char_start": 544, "char_end": 548 }, { "id": "E06-1022.19", "char_start": 572, "char_end": 587 } ]
[ { "label": 1, "arg1": "E06-1022.1", "arg2": "E06-1022.2", "reverse": false }, { "label": 2, "arg1": "E06-1022.17", "arg2": "E06-1022.18", "reverse": false } ]
E06-1031
CDER: Efficient MT Evaluation Using Block Movements
Most state-of-the-art evaluation measures for machine translation assign high costs to movements of word blocks. In many cases though such movements still result in correct or almost correct sentences . In this paper, we will present a new evaluation measure which explicitly models block reordering as an edit operation . Our measure can be exactly calculated in quadratic time . Furthermore, we will show how some evaluation measures can be improved by the introduction of word-dependent substitution costs . The correlation of the new measure with human judgment has been investigated systematically on two different language pairs . The experimental results will show that it significantly outperforms state-of-the-art approaches in sentence-level correlation . Results from experiments with word dependent substitution costs will demonstrate an additional increase of correlation between automatic evaluation measures and human judgment .
[ { "id": "E06-1031.1", "char_start": 23, "char_end": 42 }, { "id": "E06-1031.2", "char_start": 47, "char_end": 66 }, { "id": "E06-1031.3", "char_start": 79, "char_end": 84 }, { "id": "E06-1031.4", "char_start": 101, "char_end": 105 }, { "id": "E06-1031.5", "char_start": 192, "char_end": 201 }, { "id": "E06-1031.6", "char_start": 241, "char_end": 259 }, { "id": "E06-1031.7", "char_start": 284, "char_end": 300 }, { "id": "E06-1031.8", "char_start": 307, "char_end": 321 }, { "id": "E06-1031.9", "char_start": 328, "char_end": 335 }, { "id": "E06-1031.10", "char_start": 365, "char_end": 379 }, { "id": "E06-1031.11", "char_start": 417, "char_end": 436 }, { "id": "E06-1031.12", "char_start": 476, "char_end": 509 }, { "id": "E06-1031.13", "char_start": 539, "char_end": 546 }, { "id": "E06-1031.14", "char_start": 552, "char_end": 566 }, { "id": "E06-1031.15", "char_start": 621, "char_end": 635 }, { "id": "E06-1031.16", "char_start": 738, "char_end": 764 }, { "id": "E06-1031.17", "char_start": 797, "char_end": 830 }, { "id": "E06-1031.18", "char_start": 894, "char_end": 923 }, { "id": "E06-1031.19", "char_start": 928, "char_end": 942 } ]
[ { "label": 1, "arg1": "E06-1031.1", "arg2": "E06-1031.2", "reverse": false }, { "label": 1, "arg1": "E06-1031.6", "arg2": "E06-1031.7", "reverse": false }, { "label": 2, "arg1": "E06-1031.11", "arg2": "E06-1031.12", "reverse": true }, { "label": 6, "arg1": "E06-1031.13", "arg2": "E06-1031.14", "reverse": false }, { "label": 6, "arg1": "E06-1031.18", "arg2": "E06-1031.19", "reverse": false } ]
E06-1035
Automatic Segmentation of Multiparty Dialogue
In this paper, we investigate the problem of automatically predicting segment boundaries in spoken multiparty dialogue . We extend prior work in two ways. We first apply approaches that have been proposed for predicting top-level topic shifts to the problem of identifying subtopic boundaries . We then explore the impact on performance of using ASR output as opposed to human transcription . Examination of the effect of features shows that predicting top-level and predicting subtopic boundaries are two distinct tasks: (1) for predicting subtopic boundaries , the lexical cohesion-based approach alone can achieve competitive results, (2) for predicting top-level boundaries , the machine learning approach that combines lexical-cohesion and conversational features performs best, and (3) conversational cues , such as cue phrases and overlapping speech , are better indicators for the top-level prediction task. We also find that the transcription errors inevitable in ASR output have a negative impact on models that combine lexical-cohesion and conversational features , but do not change the general preference of approach for the two tasks.
[ { "id": "E06-1035.1", "char_start": 71, "char_end": 89 }, { "id": "E06-1035.2", "char_start": 93, "char_end": 119 }, { "id": "E06-1035.3", "char_start": 210, "char_end": 243 }, { "id": "E06-1035.4", "char_start": 262, "char_end": 293 }, { "id": "E06-1035.5", "char_start": 326, "char_end": 337 }, { "id": "E06-1035.6", "char_start": 347, "char_end": 357 }, { "id": "E06-1035.7", "char_start": 372, "char_end": 391 }, { "id": "E06-1035.8", "char_start": 423, "char_end": 431 }, { "id": "E06-1035.9", "char_start": 443, "char_end": 498 }, { "id": "E06-1035.10", "char_start": 542, "char_end": 561 }, { "id": "E06-1035.11", "char_start": 568, "char_end": 599 }, { "id": "E06-1035.12", "char_start": 647, "char_end": 678 }, { "id": "E06-1035.13", "char_start": 685, "char_end": 710 }, { "id": "E06-1035.14", "char_start": 725, "char_end": 769 }, { "id": "E06-1035.15", "char_start": 793, "char_end": 812 }, { "id": "E06-1035.16", "char_start": 823, "char_end": 834 }, { "id": "E06-1035.17", "char_start": 839, "char_end": 857 }, { "id": "E06-1035.18", "char_start": 939, "char_end": 959 }, { "id": "E06-1035.19", "char_start": 974, "char_end": 984 }, { "id": "E06-1035.20", "char_start": 1031, "char_end": 1075 } ]
[ { "label": 4, "arg1": "E06-1035.1", "arg2": "E06-1035.2", "reverse": false }, { "label": 6, "arg1": "E06-1035.6", "arg2": "E06-1035.7", "reverse": false }, { "label": 1, "arg1": "E06-1035.12", "arg2": "E06-1035.13", "reverse": true }, { "label": 3, "arg1": "E06-1035.18", "arg2": "E06-1035.19", "reverse": false } ]
P06-1013
Ensemble Methods for Unsupervised WSD
Combination methods are an effective way of improving system performance . This paper examines the benefits of system combination for unsupervised WSD . We investigate several voting- and arbiter-based combination strategies over a diverse pool of unsupervised WSD systems . Our combination methods rely on predominant senses which are derived automatically from raw text . Experiments using the SemCor and Senseval-3 data sets demonstrate that our ensembles yield significantly better results when compared with state-of-the-art.
[ { "id": "P06-1013.1", "char_start": 1, "char_end": 20 }, { "id": "P06-1013.2", "char_start": 55, "char_end": 73 }, { "id": "P06-1013.3", "char_start": 112, "char_end": 130 }, { "id": "P06-1013.4", "char_start": 135, "char_end": 151 }, { "id": "P06-1013.5", "char_start": 177, "char_end": 225 }, { "id": "P06-1013.6", "char_start": 249, "char_end": 273 }, { "id": "P06-1013.7", "char_start": 280, "char_end": 299 }, { "id": "P06-1013.8", "char_start": 308, "char_end": 326 }, { "id": "P06-1013.9", "char_start": 364, "char_end": 372 }, { "id": "P06-1013.10", "char_start": 397, "char_end": 403 }, { "id": "P06-1013.11", "char_start": 408, "char_end": 428 } ]
[ { "label": 2, "arg1": "P06-1013.1", "arg2": "P06-1013.2", "reverse": false }, { "label": 1, "arg1": "P06-1013.7", "arg2": "P06-1013.8", "reverse": true } ]
P06-1052
An Improved Redundancy Elimination Algorithm for Underspecified Representations
We present an efficient algorithm for the redundancy elimination problem : Given an underspecified semantic representation (USR) of a scope ambiguity , compute an USR with fewer mutually equivalent readings . The algorithm operates on underspecified chart representations which are derived from dominance graphs ; it can be applied to the USRs computed by large-scale grammars . We evaluate the algorithm on a corpus , and show that it reduces the degree of ambiguity significantly while taking negligible runtime.
[ { "id": "P06-1052.1", "char_start": 43, "char_end": 73 }, { "id": "P06-1052.2", "char_start": 85, "char_end": 129 }, { "id": "P06-1052.3", "char_start": 135, "char_end": 150 }, { "id": "P06-1052.4", "char_start": 164, "char_end": 167 }, { "id": "P06-1052.5", "char_start": 188, "char_end": 207 }, { "id": "P06-1052.6", "char_start": 236, "char_end": 272 }, { "id": "P06-1052.7", "char_start": 296, "char_end": 312 }, { "id": "P06-1052.8", "char_start": 340, "char_end": 344 }, { "id": "P06-1052.9", "char_start": 357, "char_end": 377 }, { "id": "P06-1052.10", "char_start": 411, "char_end": 417 }, { "id": "P06-1052.11", "char_start": 459, "char_end": 468 } ]
[ { "label": 3, "arg1": "P06-1052.2", "arg2": "P06-1052.3", "reverse": false } ]
P06-2001
Using Machine Learning Techniques to Build a Comma Checker for Basque
In this paper, we describe the research using machine learning techniques to build a comma checker to be integrated in a grammar checker for Basque . After several experiments, and trained with a little corpus of 100,000 words , the system guesses correctly not placing commas with a precision of 96% and a recall of 98%. It also gets a precision of 70% and a recall of 49% in the task of placing commas . Finally, we have shown that these results can be improved using a bigger and a more homogeneous corpus to train, that is, a bigger corpus written by one unique author .
[ { "id": "P06-2001.1", "char_start": 47, "char_end": 74 }, { "id": "P06-2001.2", "char_start": 86, "char_end": 99 }, { "id": "P06-2001.3", "char_start": 122, "char_end": 137 }, { "id": "P06-2001.4", "char_start": 142, "char_end": 148 }, { "id": "P06-2001.5", "char_start": 204, "char_end": 210 }, { "id": "P06-2001.6", "char_start": 222, "char_end": 227 }, { "id": "P06-2001.7", "char_start": 271, "char_end": 277 }, { "id": "P06-2001.8", "char_start": 285, "char_end": 294 }, { "id": "P06-2001.9", "char_start": 308, "char_end": 314 }, { "id": "P06-2001.10", "char_start": 338, "char_end": 347 }, { "id": "P06-2001.11", "char_start": 361, "char_end": 367 }, { "id": "P06-2001.12", "char_start": 398, "char_end": 404 }, { "id": "P06-2001.13", "char_start": 503, "char_end": 509 }, { "id": "P06-2001.14", "char_start": 538, "char_end": 544 }, { "id": "P06-2001.15", "char_start": 567, "char_end": 573 } ]
[ { "label": 1, "arg1": "P06-2001.1", "arg2": "P06-2001.2", "reverse": false }, { "label": 1, "arg1": "P06-2001.3", "arg2": "P06-2001.4", "reverse": false }, { "label": 4, "arg1": "P06-2001.5", "arg2": "P06-2001.6", "reverse": true }, { "label": 3, "arg1": "P06-2001.14", "arg2": "P06-2001.15", "reverse": true } ]
P06-2012
Unsupervised Relation Disambiguation Using Spectral Clustering
This paper presents an unsupervised learning approach to disambiguate various relations between named entities by use of various lexical and syntactic features from the contexts . It works by calculating eigenvectors of an adjacency graph 's Laplacian to recover a submanifold of data from a high dimensionality space and then performing cluster number estimation on the eigenvectors . Experiment results on ACE corpora show that this spectral clustering based approach outperforms the other clustering methods .
[ { "id": "P06-2012.1", "char_start": 24, "char_end": 54 }, { "id": "P06-2012.2", "char_start": 97, "char_end": 111 }, { "id": "P06-2012.3", "char_start": 130, "char_end": 160 }, { "id": "P06-2012.4", "char_start": 170, "char_end": 178 }, { "id": "P06-2012.5", "char_start": 205, "char_end": 217 }, { "id": "P06-2012.6", "char_start": 224, "char_end": 239 }, { "id": "P06-2012.7", "char_start": 243, "char_end": 252 }, { "id": "P06-2012.8", "char_start": 266, "char_end": 277 }, { "id": "P06-2012.9", "char_start": 293, "char_end": 318 }, { "id": "P06-2012.10", "char_start": 339, "char_end": 364 }, { "id": "P06-2012.11", "char_start": 372, "char_end": 384 }, { "id": "P06-2012.12", "char_start": 409, "char_end": 420 }, { "id": "P06-2012.13", "char_start": 436, "char_end": 470 }, { "id": "P06-2012.14", "char_start": 493, "char_end": 511 } ]
[ { "label": 1, "arg1": "P06-2012.1", "arg2": "P06-2012.3", "reverse": true }, { "label": 6, "arg1": "P06-2012.13", "arg2": "P06-2012.14", "reverse": false } ]
P06-2059
Automatic Construction of Polarity-tagged Corpus from HTML Documents
This paper proposes a novel method of building polarity-tagged corpus from HTML documents . The characteristics of this method is that it is fully automatic and can be applied to arbitrary HTML documents . The idea behind our method is to utilize certain layout structures and linguistic pattern . By using them, we can automatically extract such sentences that express opinion. In our experiment, the method could construct a corpus consisting of 126,610 sentences .
[ { "id": "P06-2059.1", "char_start": 48, "char_end": 70 }, { "id": "P06-2059.2", "char_start": 76, "char_end": 90 }, { "id": "P06-2059.3", "char_start": 190, "char_end": 204 }, { "id": "P06-2059.4", "char_start": 256, "char_end": 273 }, { "id": "P06-2059.5", "char_start": 278, "char_end": 296 }, { "id": "P06-2059.6", "char_start": 348, "char_end": 357 }, { "id": "P06-2059.7", "char_start": 428, "char_end": 434 }, { "id": "P06-2059.8", "char_start": 457, "char_end": 466 } ]
[ { "label": 4, "arg1": "P06-2059.1", "arg2": "P06-2059.2", "reverse": false }, { "label": 4, "arg1": "P06-2059.7", "arg2": "P06-2059.8", "reverse": true } ]
H01-1040
Intelligent Access to Text: Integrating Information Extraction Technology into Text Browsers
In this paper we show how two standard outputs from information extraction (IE) systems - named entity annotations and scenario templates - can be used to enhance access to text collections via a standard text browser . We describe how this information is used in a prototype system designed to support information workers ' access to a pharmaceutical news archive as part of their industry watch function. We also report results of a preliminary, qualitative user evaluation of the system, which while broadly positive indicates further work needs to be done on the interface to make users aware of the increased potential of IE-enhanced text browsers .
[ { "id": "H01-1040.1", "char_start": 55, "char_end": 90 }, { "id": "H01-1040.2", "char_start": 93, "char_end": 117 }, { "id": "H01-1040.3", "char_start": 122, "char_end": 140 }, { "id": "H01-1040.4", "char_start": 176, "char_end": 192 }, { "id": "H01-1040.5", "char_start": 208, "char_end": 220 }, { "id": "H01-1040.6", "char_start": 269, "char_end": 285 }, { "id": "H01-1040.7", "char_start": 306, "char_end": 325 }, { "id": "H01-1040.8", "char_start": 340, "char_end": 367 }, { "id": "H01-1040.9", "char_start": 385, "char_end": 399 }, { "id": "H01-1040.10", "char_start": 451, "char_end": 478 }, { "id": "H01-1040.11", "char_start": 570, "char_end": 579 }, { "id": "H01-1040.12", "char_start": 588, "char_end": 593 }, { "id": "H01-1040.13", "char_start": 630, "char_end": 655 } ]
[ { "label": 1, "arg1": "H01-1040.3", "arg2": "H01-1040.5", "reverse": false }, { "label": 1, "arg1": "H01-1040.6", "arg2": "H01-1040.9", "reverse": false }, { "label": 4, "arg1": "H01-1040.11", "arg2": "H01-1040.13", "reverse": false } ]
H01-1055
Natural Language Generation in Dialog Systems
Recent advances in Automatic Speech Recognition technology have put the goal of naturally sounding dialog systems within reach. However, the improved speech recognition has brought to light a new problem: as dialog systems understand more of what the user tells them, they need to be more sophisticated at responding to the user . The issue of system response to users has been extensively studied by the natural language generation community , though rarely in the context of dialog systems . We show how research in generation can be adapted to dialog systems , and how the high cost of hand-crafting knowledge-based generation systems can be overcome by employing machine learning techniques .
[ { "id": "H01-1055.1", "char_start": 22, "char_end": 61 }, { "id": "H01-1055.2", "char_start": 102, "char_end": 116 }, { "id": "H01-1055.3", "char_start": 153, "char_end": 171 }, { "id": "H01-1055.4", "char_start": 211, "char_end": 225 }, { "id": "H01-1055.5", "char_start": 254, "char_end": 258 }, { "id": "H01-1055.6", "char_start": 327, "char_end": 331 }, { "id": "H01-1055.7", "char_start": 347, "char_end": 362 }, { "id": "H01-1055.8", "char_start": 366, "char_end": 371 }, { "id": "H01-1055.9", "char_start": 408, "char_end": 445 }, { "id": "H01-1055.10", "char_start": 480, "char_end": 494 }, { "id": "H01-1055.11", "char_start": 521, "char_end": 531 }, { "id": "H01-1055.12", "char_start": 550, "char_end": 564 }, { "id": "H01-1055.13", "char_start": 606, "char_end": 640 }, { "id": "H01-1055.14", "char_start": 670, "char_end": 697 } ]
[ { "label": 1, "arg1": "H01-1055.1", "arg2": "H01-1055.2", "reverse": false }, { "label": 5, "arg1": "H01-1055.7", "arg2": "H01-1055.9", "reverse": true }, { "label": 1, "arg1": "H01-1055.11", "arg2": "H01-1055.12", "reverse": false }, { "label": 1, "arg1": "H01-1055.13", "arg2": "H01-1055.14", "reverse": true } ]
H01-1068
A Three-Tiered Evaluation Approach for Interactive Spoken Dialogue Systems
We describe a three-tiered approach for evaluation of spoken dialogue systems . The three tiers measure user satisfaction , system support of mission success and component performance . We describe our use of this approach in numerous fielded user studies conducted with the U.S. military.
[ { "id": "H01-1068.1", "char_start": 57, "char_end": 80 }, { "id": "H01-1068.2", "char_start": 107, "char_end": 124 }, { "id": "H01-1068.3", "char_start": 127, "char_end": 160 }, { "id": "H01-1068.4", "char_start": 165, "char_end": 186 }, { "id": "H01-1068.5", "char_start": 246, "char_end": 258 } ]
[]
N03-4004
TAP-XL: An Automated Analyst's Assistant
The TAP-XL Automated Analyst's Assistant is an application designed to help an English -speaking analyst write a topical report , culling information from a large inflow of multilingual, multimedia data . It gives users the ability to spend their time finding more data relevant to their task, and gives them translingual reach into other languages by leveraging human language technology .
[ { "id": "N03-4004.1", "char_start": 7, "char_end": 43 }, { "id": "N03-4004.2", "char_start": 82, "char_end": 89 }, { "id": "N03-4004.3", "char_start": 116, "char_end": 130 }, { "id": "N03-4004.4", "char_start": 176, "char_end": 205 }, { "id": "N03-4004.5", "char_start": 342, "char_end": 351 }, { "id": "N03-4004.6", "char_start": 366, "char_end": 391 } ]
[ { "label": 1, "arg1": "N03-4004.1", "arg2": "N03-4004.4", "reverse": true } ]
H05-1101
Some Computational Complexity Results for Synchronous Context-Free Grammars
This paper investigates some computational problems associated with probabilistic translation models that have recently been adopted in the literature on machine translation . These models can be viewed as pairs of probabilistic context-free grammars working in a 'synchronous' way. Two hardness results for the class NP are reported, along with an exponential time lower-bound for certain classes of algorithms that are currently used in the literature.
[ { "id": "H05-1101.1", "char_start": 32, "char_end": 54 }, { "id": "H05-1101.2", "char_start": 71, "char_end": 103 }, { "id": "H05-1101.3", "char_start": 157, "char_end": 176 }, { "id": "H05-1101.4", "char_start": 185, "char_end": 191 }, { "id": "H05-1101.5", "char_start": 218, "char_end": 253 }, { "id": "H05-1101.6", "char_start": 290, "char_end": 298 }, { "id": "H05-1101.7", "char_start": 321, "char_end": 323 }, { "id": "H05-1101.8", "char_start": 352, "char_end": 380 } ]
[ { "label": 3, "arg1": "H05-1101.1", "arg2": "H05-1101.2", "reverse": false }, { "label": 3, "arg1": "H05-1101.4", "arg2": "H05-1101.5", "reverse": true }, { "label": 3, "arg1": "H05-1101.6", "arg2": "H05-1101.7", "reverse": true } ]
I05-2014
BLEU in characters: towards automatic MT evaluation in languages without word delimiters
Automatic evaluation metrics for Machine Translation (MT) systems , such as BLEU or NIST , are now well established. Yet, they are scarcely used for the assessment of language pairs like English-Chinese or English-Japanese , because of the word segmentation problem . This study establishes the equivalence between the standard use of BLEU in word n-grams and its application at the character level. The use of BLEU at the character level eliminates the word segmentation problem : it makes it possible to directly compare commercial systems outputting unsegmented texts with, for instance, statistical MT systems which usually segment their outputs .
[ { "id": "I05-2014.1", "char_start": 13, "char_end": 31 }, { "id": "I05-2014.2", "char_start": 36, "char_end": 68 }, { "id": "I05-2014.3", "char_start": 79, "char_end": 83 }, { "id": "I05-2014.4", "char_start": 87, "char_end": 91 }, { "id": "I05-2014.5", "char_start": 170, "char_end": 184 }, { "id": "I05-2014.6", "char_start": 190, "char_end": 205 }, { "id": "I05-2014.7", "char_start": 209, "char_end": 225 }, { "id": "I05-2014.8", "char_start": 243, "char_end": 268 }, { "id": "I05-2014.9", "char_start": 338, "char_end": 342 }, { "id": "I05-2014.10", "char_start": 346, "char_end": 358 }, { "id": "I05-2014.11", "char_start": 386, "char_end": 395 }, { "id": "I05-2014.12", "char_start": 414, "char_end": 418 }, { "id": "I05-2014.13", "char_start": 426, "char_end": 435 }, { "id": "I05-2014.14", "char_start": 457, "char_end": 482 }, { "id": "I05-2014.15", "char_start": 556, "char_end": 573 }, { "id": "I05-2014.16", "char_start": 594, "char_end": 616 }, { "id": "I05-2014.17", "char_start": 645, "char_end": 652 } ]
[ { "label": 1, "arg1": "I05-2014.1", "arg2": "I05-2014.2", "reverse": false }, { "label": 3, "arg1": "I05-2014.7", "arg2": "I05-2014.8", "reverse": true }, { "label": 1, "arg1": "I05-2014.9", "arg2": "I05-2014.10", "reverse": false }, { "label": 1, "arg1": "I05-2014.12", "arg2": "I05-2014.13", "reverse": false } ]
P05-3025
Interactively Exploring a Machine Translation Model
This paper describes a method of interactively visualizing and directing the process of translating a sentence . The method allows a user to explore a model of syntax-based statistical machine translation (MT) , to understand the model 's strengths and weaknesses, and to compare it to other MT systems . Using this visualization method , we can find and address conceptual and practical problems in an MT system . In our demonstration at ACL , new users of our tool will drive a syntax-based decoder for themselves.
[ { "id": "P05-3025.1", "char_start": 36, "char_end": 87 }, { "id": "P05-3025.2", "char_start": 91, "char_end": 113 }, { "id": "P05-3025.3", "char_start": 136, "char_end": 140 }, { "id": "P05-3025.4", "char_start": 154, "char_end": 159 }, { "id": "P05-3025.5", "char_start": 163, "char_end": 212 }, { "id": "P05-3025.6", "char_start": 233, "char_end": 238 }, { "id": "P05-3025.7", "char_start": 295, "char_end": 305 }, { "id": "P05-3025.8", "char_start": 319, "char_end": 339 }, { "id": "P05-3025.9", "char_start": 406, "char_end": 415 }, { "id": "P05-3025.10", "char_start": 442, "char_end": 445 }, { "id": "P05-3025.11", "char_start": 452, "char_end": 457 }, { "id": "P05-3025.12", "char_start": 483, "char_end": 503 } ]
[ { "label": 6, "arg1": "P05-3025.6", "arg2": "P05-3025.7", "reverse": false }, { "label": 1, "arg1": "P05-3025.8", "arg2": "P05-3025.9", "reverse": false } ]
E06-1004
Computational Complexity of Statistical Machine Translation
In this paper we study a set of problems that are of considerable importance to Statistical Machine Translation (SMT) but which have not been addressed satisfactorily by the SMT research community . Over the last decade, a variety of SMT algorithms have been built and empirically tested whereas little is known about the computational complexity of some of the fundamental problems of SMT . Our work aims at providing useful insights into the the computational complexity of those problems. We prove that while IBM Models 1-2 are conceptually and computationally simple, computations involving the higher (and more useful) models are hard . Since it is unlikely that there exists a polynomial time solution for any of these hard problems (unless P = NP and P#P = P ), our results highlight and justify the need for developing polynomial time approximations for these computations. We also discuss some practical ways of dealing with complexity .
[ { "id": "E06-1004.1", "char_start": 83, "char_end": 120 }, { "id": "E06-1004.2", "char_start": 177, "char_end": 199 }, { "id": "E06-1004.3", "char_start": 237, "char_end": 251 }, { "id": "E06-1004.4", "char_start": 325, "char_end": 349 }, { "id": "E06-1004.5", "char_start": 389, "char_end": 392 }, { "id": "E06-1004.6", "char_start": 451, "char_end": 475 }, { "id": "E06-1004.7", "char_start": 515, "char_end": 529 }, { "id": "E06-1004.8", "char_start": 627, "char_end": 633 }, { "id": "E06-1004.9", "char_start": 638, "char_end": 642 }, { "id": "E06-1004.10", "char_start": 686, "char_end": 710 }, { "id": "E06-1004.11", "char_start": 728, "char_end": 741 }, { "id": "E06-1004.12", "char_start": 750, "char_end": 756 }, { "id": "E06-1004.13", "char_start": 761, "char_end": 768 }, { "id": "E06-1004.14", "char_start": 830, "char_end": 860 }, { "id": "E06-1004.15", "char_start": 937, "char_end": 947 } ]
[ { "label": 5, "arg1": "E06-1004.1", "arg2": "E06-1004.2", "reverse": true }, { "label": 3, "arg1": "E06-1004.4", "arg2": "E06-1004.5", "reverse": false }, { "label": 1, "arg1": "E06-1004.10", "arg2": "E06-1004.11", "reverse": false } ]
E06-1041
Structuring Knowledge for Reference Generation: A Clustering Algorithm
This paper discusses two problems that arise in the Generation of Referring Expressions : (a) numeric-valued attributes , such as size or location; (b) perspective-taking in reference . Both problems, it is argued, can be resolved if some structure is imposed on the available knowledge prior to content determination . We describe a clustering algorithm which is sufficiently general to be applied to these diverse problems, discuss its application, and evaluate its performance.
[ { "id": "E06-1041.1", "char_start": 55, "char_end": 65 }, { "id": "E06-1041.2", "char_start": 69, "char_end": 90 }, { "id": "E06-1041.3", "char_start": 97, "char_end": 122 }, { "id": "E06-1041.4", "char_start": 155, "char_end": 173 }, { "id": "E06-1041.5", "char_start": 177, "char_end": 186 }, { "id": "E06-1041.6", "char_start": 299, "char_end": 320 }, { "id": "E06-1041.7", "char_start": 337, "char_end": 357 } ]
[]
N06-2009
Answering the Question You Wish They Had Asked: The Impact of Paraphrasing for Question Answering
State-of-the-art Question Answering (QA) systems are very sensitive to variations in the phrasing of an information need . Finding the preferred language for such a need is a valuable task. We investigate that claim by adopting a simple MT-based paraphrasing technique and evaluating QA system performance on paraphrased questions . We found a potential increase of 35% in MRR with respect to the original question .
[ { "id": "N06-2009.1", "char_start": 20, "char_end": 51 }, { "id": "N06-2009.2", "char_start": 107, "char_end": 123 }, { "id": "N06-2009.3", "char_start": 148, "char_end": 156 }, { "id": "N06-2009.4", "char_start": 168, "char_end": 172 }, { "id": "N06-2009.5", "char_start": 240, "char_end": 271 }, { "id": "N06-2009.6", "char_start": 287, "char_end": 296 }, { "id": "N06-2009.7", "char_start": 312, "char_end": 333 }, { "id": "N06-2009.8", "char_start": 376, "char_end": 379 }, { "id": "N06-2009.9", "char_start": 409, "char_end": 417 } ]
[ { "label": 1, "arg1": "N06-2009.5", "arg2": "N06-2009.6", "reverse": false } ]
N06-2038
A Comparison of Tagging Strategies for Statistical Information Extraction
There are several approaches that model information extraction as a token classification task , using various tagging strategies to combine multiple tokens . We describe the tagging strategies that can be found in the literature and evaluate their relative performances. We also introduce a new strategy, called Begin/After tagging or BIA , and show that it is competitive to the best other strategies.
[ { "id": "N06-2038.1", "char_start": 43, "char_end": 65 }, { "id": "N06-2038.2", "char_start": 71, "char_end": 96 }, { "id": "N06-2038.3", "char_start": 113, "char_end": 131 }, { "id": "N06-2038.4", "char_start": 152, "char_end": 158 }, { "id": "N06-2038.5", "char_start": 177, "char_end": 195 }, { "id": "N06-2038.6", "char_start": 315, "char_end": 334 }, { "id": "N06-2038.7", "char_start": 338, "char_end": 341 } ]
[ { "label": 1, "arg1": "N06-2038.2", "arg2": "N06-2038.3", "reverse": true } ]
N06-4001
InfoMagnets: Making Sense of Corpus Data
We introduce a new interactive corpus exploration tool called InfoMagnets . InfoMagnets aims at making exploratory corpus analysis accessible to researchers who are not experts in text mining . As evidence of its usefulness and usability, it has been used successfully in a research context to uncover relationships between language and behavioral patterns in two distinct domains: tutorial dialogue (Kumar et al., submitted) and on-line communities (Arguello et al., 2006). As an educational tool , it has been used as part of a unit on protocol analysis in an Educational Research Methods course .
[ { "id": "N06-4001.1", "char_start": 22, "char_end": 57 }, { "id": "N06-4001.2", "char_start": 65, "char_end": 76 }, { "id": "N06-4001.3", "char_start": 79, "char_end": 90 }, { "id": "N06-4001.4", "char_start": 106, "char_end": 133 }, { "id": "N06-4001.5", "char_start": 183, "char_end": 194 }, { "id": "N06-4001.6", "char_start": 327, "char_end": 335 }, { "id": "N06-4001.7", "char_start": 340, "char_end": 359 }, { "id": "N06-4001.8", "char_start": 385, "char_end": 403 }, { "id": "N06-4001.9", "char_start": 433, "char_end": 452 }, { "id": "N06-4001.10", "char_start": 484, "char_end": 500 }, { "id": "N06-4001.11", "char_start": 541, "char_end": 558 }, { "id": "N06-4001.12", "char_start": 565, "char_end": 600 } ]
[ { "label": 1, "arg1": "N06-4001.3", "arg2": "N06-4001.4", "reverse": false }, { "label": 4, "arg1": "N06-4001.7", "arg2": "N06-4001.8", "reverse": false }, { "label": 1, "arg1": "N06-4001.10", "arg2": "N06-4001.11", "reverse": false } ]
P06-1018
Polarized Unification Grammars
This paper proposes a generic mathematical formalism for the combination of various structures : strings , trees , dags , graphs , and products of them. The polarization of the objects of the elementary structures controls the saturation of the final structure . This formalism is both elementary and powerful enough to strongly simulate many grammar formalisms , such as rewriting systems , dependency grammars , TAG , HPSG and LFG .
[ { "id": "P06-1018.1", "char_start": 33, "char_end": 55 }, { "id": "P06-1018.2", "char_start": 87, "char_end": 97 }, { "id": "P06-1018.3", "char_start": 100, "char_end": 107 }, { "id": "P06-1018.4", "char_start": 110, "char_end": 115 }, { "id": "P06-1018.5", "char_start": 118, "char_end": 122 }, { "id": "P06-1018.6", "char_start": 125, "char_end": 131 }, { "id": "P06-1018.7", "char_start": 160, "char_end": 172 }, { "id": "P06-1018.8", "char_start": 195, "char_end": 216 }, { "id": "P06-1018.9", "char_start": 230, "char_end": 240 }, { "id": "P06-1018.10", "char_start": 254, "char_end": 263 }, { "id": "P06-1018.11", "char_start": 346, "char_end": 364 }, { "id": "P06-1018.12", "char_start": 375, "char_end": 392 }, { "id": "P06-1018.13", "char_start": 395, "char_end": 414 }, { "id": "P06-1018.14", "char_start": 417, "char_end": 420 }, { "id": "P06-1018.15", "char_start": 423, "char_end": 427 }, { "id": "P06-1018.16", "char_start": 432, "char_end": 435 } ]
[ { "label": 1, "arg1": "P06-1018.7", "arg2": "P06-1018.8", "reverse": false }, { "label": 3, "arg1": "P06-1018.9", "arg2": "P06-1018.10", "reverse": false } ]
P06-2110
Word Vectors and Two Kinds of Similarity
This paper examines what kind of similarity between words can be represented by what kind of word vectors in the vector space model . Through two experiments, three methods for constructing word vectors , i.e., LSA-based, cooccurrence-based and dictionary-based methods , were compared in terms of the ability to represent two kinds of similarity , i.e., taxonomic similarity and associative similarity . The result of the comparison was that the dictionary-based word vectors better reflect taxonomic similarity , while the LSA-based and the cooccurrence-based word vectors better reflect associative similarity .
[ { "id": "P06-2110.1", "char_start": 36, "char_end": 46 }, { "id": "P06-2110.2", "char_start": 55, "char_end": 60 }, { "id": "P06-2110.3", "char_start": 96, "char_end": 108 }, { "id": "P06-2110.4", "char_start": 116, "char_end": 134 }, { "id": "P06-2110.5", "char_start": 168, "char_end": 205 }, { "id": "P06-2110.6", "char_start": 214, "char_end": 272 }, { "id": "P06-2110.7", "char_start": 339, "char_end": 349 }, { "id": "P06-2110.8", "char_start": 358, "char_end": 378 }, { "id": "P06-2110.9", "char_start": 383, "char_end": 405 }, { "id": "P06-2110.10", "char_start": 450, "char_end": 479 }, { "id": "P06-2110.11", "char_start": 495, "char_end": 515 }, { "id": "P06-2110.12", "char_start": 528, "char_end": 577 }, { "id": "P06-2110.13", "char_start": 593, "char_end": 615 } ]
[ { "label": 3, "arg1": "P06-2110.1", "arg2": "P06-2110.3", "reverse": true }, { "label": 1, "arg1": "P06-2110.6", "arg2": "P06-2110.7", "reverse": false }, { "label": 1, "arg1": "P06-2110.10", "arg2": "P06-2110.11", "reverse": false }, { "label": 1, "arg1": "P06-2110.12", "arg2": "P06-2110.13", "reverse": false } ]
P06-3007
Investigations on Event-Based Summarization
We investigate independent and relevant event-based extractive mutli-document summarization approaches . In this paper, events are defined as event terms and associated event elements . With independent approach, we identify important contents by frequency of events . With relevant approach, we identify important contents by PageRank algorithm on the event map constructed from documents . Experimental results are encouraging.
[ { "id": "P06-3007.1", "char_start": 66, "char_end": 105 }, { "id": "P06-3007.2", "char_start": 123, "char_end": 129 }, { "id": "P06-3007.3", "char_start": 145, "char_end": 156 }, { "id": "P06-3007.4", "char_start": 161, "char_end": 186 }, { "id": "P06-3007.5", "char_start": 238, "char_end": 246 }, { "id": "P06-3007.6", "char_start": 263, "char_end": 269 }, { "id": "P06-3007.7", "char_start": 330, "char_end": 348 }, { "id": "P06-3007.8", "char_start": 356, "char_end": 365 }, { "id": "P06-3007.9", "char_start": 383, "char_end": 392 } ]
[ { "label": 3, "arg1": "P06-3007.2", "arg2": "P06-3007.3", "reverse": true }, { "label": 1, "arg1": "P06-3007.8", "arg2": "P06-3007.9", "reverse": true } ]
P06-4007
FERRET: Interactive Question-Answering for Real-World Environments
This paper describes FERRET , an interactive question-answering (Q/A) system designed to address the challenges of integrating automatic Q/A applications into real-world environments. FERRET utilizes a novel approach to Q/A known as predictive questioning which attempts to identify the questions (and answers ) that users need by analyzing how a user interacts with a system while gathering information related to a particular scenario.
[ { "id": "P06-4007.1", "char_start": 24, "char_end": 30 }, { "id": "P06-4007.2", "char_start": 36, "char_end": 79 }, { "id": "P06-4007.3", "char_start": 130, "char_end": 143 }, { "id": "P06-4007.4", "char_start": 187, "char_end": 193 }, { "id": "P06-4007.5", "char_start": 223, "char_end": 226 }, { "id": "P06-4007.6", "char_start": 236, "char_end": 258 }, { "id": "P06-4007.7", "char_start": 290, "char_end": 299 }, { "id": "P06-4007.8", "char_start": 305, "char_end": 312 }, { "id": "P06-4007.9", "char_start": 320, "char_end": 325 }, { "id": "P06-4007.10", "char_start": 350, "char_end": 354 } ]
[ { "label": 1, "arg1": "P06-4007.2", "arg2": "P06-4007.3", "reverse": false }, { "label": 1, "arg1": "P06-4007.5", "arg2": "P06-4007.6", "reverse": true } ]
P06-4011
Computational Analysis of Move Structures in Academic Abstracts
This paper introduces a method for computational analysis of move structures in abstracts of research articles . In our approach, sentences in a given abstract are analyzed and labeled with a specific move in light of various rhetorical functions . The method involves automatically gathering a large number of abstracts from the Web and building a language model of abstract moves . We also present a prototype concordancer , CARE , which exploits the move-tagged abstracts for digital learning . This system provides a promising approach to Web-based computer-assisted academic writing .
[ { "id": "P06-4011.1", "char_start": 38, "char_end": 79 }, { "id": "P06-4011.2", "char_start": 83, "char_end": 92 }, { "id": "P06-4011.3", "char_start": 96, "char_end": 113 }, { "id": "P06-4011.4", "char_start": 133, "char_end": 142 }, { "id": "P06-4011.5", "char_start": 154, "char_end": 162 }, { "id": "P06-4011.6", "char_start": 204, "char_end": 208 }, { "id": "P06-4011.7", "char_start": 229, "char_end": 249 }, { "id": "P06-4011.8", "char_start": 314, "char_end": 323 }, { "id": "P06-4011.9", "char_start": 333, "char_end": 336 }, { "id": "P06-4011.10", "char_start": 352, "char_end": 366 }, { "id": "P06-4011.11", "char_start": 370, "char_end": 384 }, { "id": "P06-4011.12", "char_start": 415, "char_end": 427 }, { "id": "P06-4011.13", "char_start": 430, "char_end": 434 }, { "id": "P06-4011.14", "char_start": 456, "char_end": 477 }, { "id": "P06-4011.15", "char_start": 482, "char_end": 498 }, { "id": "P06-4011.16", "char_start": 546, "char_end": 590 } ]
[ { "label": 4, "arg1": "P06-4011.2", "arg2": "P06-4011.3", "reverse": false }, { "label": 4, "arg1": "P06-4011.4", "arg2": "P06-4011.5", "reverse": false }, { "label": 4, "arg1": "P06-4011.8", "arg2": "P06-4011.9", "reverse": false }, { "label": 3, "arg1": "P06-4011.10", "arg2": "P06-4011.11", "reverse": false }, { "label": 1, "arg1": "P06-4011.14", "arg2": "P06-4011.15", "reverse": false } ]
P06-4014
Re-Usable Tools for Precision Machine Translation
The LOGON MT demonstrator assembles independently valuable general-purpose NLP components into a machine translation pipeline that capitalizes on output quality . The demonstrator embodies an interesting combination of hand-built, symbolic resources and stochastic processes .
[ { "id": "P06-4014.1", "char_start": 7, "char_end": 28 }, { "id": "P06-4014.2", "char_start": 62, "char_end": 92 }, { "id": "P06-4014.3", "char_start": 100, "char_end": 128 }, { "id": "P06-4014.4", "char_start": 149, "char_end": 163 }, { "id": "P06-4014.5", "char_start": 222, "char_end": 252 }, { "id": "P06-4014.6", "char_start": 257, "char_end": 277 } ]
[ { "label": 4, "arg1": "P06-4014.2", "arg2": "P06-4014.3", "reverse": false } ]
T78-1001
Testing The Psychological Reality of a Representational Model
A research program is described in which a particular representational format for meaning is tested as broadly as possible. In this format, developed by the LNR research group at The University of California at San Diego, verbs are represented as interconnected sets of subpredicates . These subpredicates may be thought of as the almost inevitable inferences that a listener makes when a verb is used in a sentence . They confer a meaning structure on the sentence in which the verb is used.
[ { "id": "T78-1001.1", "char_start": 57, "char_end": 92 }, { "id": "T78-1001.2", "char_start": 225, "char_end": 230 }, { "id": "T78-1001.3", "char_start": 273, "char_end": 286 }, { "id": "T78-1001.4", "char_start": 295, "char_end": 308 }, { "id": "T78-1001.5", "char_start": 352, "char_end": 362 }, { "id": "T78-1001.6", "char_start": 370, "char_end": 378 }, { "id": "T78-1001.7", "char_start": 392, "char_end": 396 }, { "id": "T78-1001.8", "char_start": 410, "char_end": 418 }, { "id": "T78-1001.9", "char_start": 435, "char_end": 452 }, { "id": "T78-1001.10", "char_start": 460, "char_end": 468 }, { "id": "T78-1001.11", "char_start": 482, "char_end": 486 } ]
[ { "label": 3, "arg1": "T78-1001.2", "arg2": "T78-1001.3", "reverse": true }, { "label": 4, "arg1": "T78-1001.7", "arg2": "T78-1001.8", "reverse": false }, { "label": 4, "arg1": "T78-1001.10", "arg2": "T78-1001.11", "reverse": true } ]
T78-1028
Fragments of a Theory of Human Plausible Reasoning
The paper outlines a computational theory of human plausible reasoning constructed from analysis of people's answers to everyday questions. Like logic , the theory is expressed in a content-independent formalism . Unlike logic , the theory specifies how different information in memory affects the certainty of the conclusions drawn. The theory consists of a dimensionalized space of different inference types and their certainty conditions , including a variety of meta-inference types where the inference depends on the person's knowledge about his own knowledge. The protocols from people's answers to questions are analyzed in terms of the different inference types . The paper also discusses how memory is structured in multiple ways to support the different inference types , and how the information found in memory determines which inference types are triggered.
[ { "id": "T78-1028.1", "char_start": 24, "char_end": 44 }, { "id": "T78-1028.2", "char_start": 48, "char_end": 73 }, { "id": "T78-1028.3", "char_start": 148, "char_end": 153 }, { "id": "T78-1028.4", "char_start": 160, "char_end": 166 }, { "id": "T78-1028.5", "char_start": 185, "char_end": 214 }, { "id": "T78-1028.6", "char_start": 224, "char_end": 229 }, { "id": "T78-1028.7", "char_start": 236, "char_end": 242 }, { "id": "T78-1028.8", "char_start": 282, "char_end": 288 }, { "id": "T78-1028.9", "char_start": 341, "char_end": 347 }, { "id": "T78-1028.10", "char_start": 362, "char_end": 383 }, { "id": "T78-1028.11", "char_start": 397, "char_end": 412 }, { "id": "T78-1028.12", "char_start": 423, "char_end": 443 }, { "id": "T78-1028.13", "char_start": 469, "char_end": 489 }, { "id": "T78-1028.14", "char_start": 500, "char_end": 509 }, { "id": "T78-1028.15", "char_start": 657, "char_end": 672 }, { "id": "T78-1028.16", "char_start": 704, "char_end": 710 }, { "id": "T78-1028.17", "char_start": 767, "char_end": 782 }, { "id": "T78-1028.18", "char_start": 818, "char_end": 824 }, { "id": "T78-1028.19", "char_start": 842, "char_end": 857 } ]
[ { "label": 3, "arg1": "T78-1028.1", "arg2": "T78-1028.2", "reverse": false }, { "label": 3, "arg1": "T78-1028.4", "arg2": "T78-1028.5", "reverse": true }, { "label": 6, "arg1": "T78-1028.6", "arg2": "T78-1028.7", "reverse": false }, { "label": 4, "arg1": "T78-1028.9", "arg2": "T78-1028.10", "reverse": true }, { "label": 3, "arg1": "T78-1028.11", "arg2": "T78-1028.12", "reverse": true }, { "label": 3, "arg1": "T78-1028.13", "arg2": "T78-1028.14", "reverse": false } ]
T78-1031
PATH-BASED AND NODE-BASED INFERENCE IN SEMANTIC NETWORKS
Two styles of performing inference in semantic networks are presented and compared. Path-based inference allows an arc or a path of arcs between two given nodes to be inferred from the existence of another specified path between the same two nodes . Path-based inference rules may be written using a binary relational calculus notation . Node-based inference allows a structure of nodes to be inferred from the existence of an instance of a pattern of node structures . Node-based inference rules can be constructed in a semantic network using a variant of a predicate calculus notation . Path-based inference is more efficient, while node-based inference is more general. A method is described of combining the two styles in a single system in order to take advantage of the strengths of each. Applications of path-based inference rules to the representation of the extensional equivalence of intensional concepts , and to the explication of inheritance in hierarchies are sketched.
[ { "id": "T78-1031.1", "char_start": 28, "char_end": 37 }, { "id": "T78-1031.2", "char_start": 41, "char_end": 58 }, { "id": "T78-1031.3", "char_start": 87, "char_end": 107 }, { "id": "T78-1031.4", "char_start": 118, "char_end": 121 }, { "id": "T78-1031.5", "char_start": 127, "char_end": 139 }, { "id": "T78-1031.6", "char_start": 158, "char_end": 163 }, { "id": "T78-1031.7", "char_start": 219, "char_end": 223 }, { "id": "T78-1031.8", "char_start": 245, "char_end": 250 }, { "id": "T78-1031.9", "char_start": 253, "char_end": 279 }, { "id": "T78-1031.10", "char_start": 303, "char_end": 338 }, { "id": "T78-1031.11", "char_start": 341, "char_end": 361 }, { "id": "T78-1031.12", "char_start": 371, "char_end": 380 }, { "id": "T78-1031.13", "char_start": 384, "char_end": 389 }, { "id": "T78-1031.14", "char_start": 455, "char_end": 470 }, { "id": "T78-1031.15", "char_start": 473, "char_end": 499 }, { "id": "T78-1031.16", "char_start": 524, "char_end": 540 }, { "id": "T78-1031.17", "char_start": 562, "char_end": 589 }, { "id": "T78-1031.18", "char_start": 592, "char_end": 612 }, { "id": "T78-1031.19", "char_start": 638, "char_end": 658 }, { "id": "T78-1031.20", "char_start": 814, "char_end": 840 }, { "id": "T78-1031.21", "char_start": 870, "char_end": 893 }, { "id": "T78-1031.22", "char_start": 897, "char_end": 917 }, { "id": "T78-1031.23", "char_start": 931, "char_end": 942 }, { "id": "T78-1031.24", "char_start": 946, "char_end": 957 }, { "id": "T78-1031.25", "char_start": 961, "char_end": 972 } ]
[ { "label": 1, "arg1": "T78-1031.9", "arg2": "T78-1031.10", "reverse": true }, { "label": 1, "arg1": "T78-1031.11", "arg2": "T78-1031.14", "reverse": true }, { "label": 1, "arg1": "T78-1031.15", "arg2": "T78-1031.17", "reverse": true }, { "label": 6, "arg1": "T78-1031.18", "arg2": "T78-1031.19", "reverse": false }, { "label": 3, "arg1": "T78-1031.24", "arg2": "T78-1031.25", "reverse": false } ]
C80-1039
ON FROFF: A TEXT PROCESSING SYSTEM FOR ENGLISH TEXTS AND FIGURES
In order to meet the needs of a publication of papers in English, many systems to run off texts have been developed. In this paper, we report a system FROFF which can make a fair copy of not only texts but also graphs and tables indispensable to our papers. Its selection of fonts , specification of character size are dynamically changeable, and the typing location can be also changed in lateral or longitudinal directions. Each character has its own width and a line length is counted by the sum of each character . By using commands or rules which are defined to facilitate the construction of format expected or some mathematical expressions , elaborate and pretty documents can be successfully obtained.
[ { "id": "C80-1039.1", "char_start": 154, "char_end": 159 }, { "id": "C80-1039.2", "char_start": 278, "char_end": 283 }, { "id": "C80-1039.3", "char_start": 303, "char_end": 312 }, { "id": "C80-1039.4", "char_start": 354, "char_end": 369 }, { "id": "C80-1039.5", "char_start": 434, "char_end": 443 }, { "id": "C80-1039.6", "char_start": 510, "char_end": 519 }, { "id": "C80-1039.7", "char_start": 543, "char_end": 548 }, { "id": "C80-1039.8", "char_start": 625, "char_end": 649 } ]
[ { "label": 1, "arg1": "C80-1039.7", "arg2": "C80-1039.8", "reverse": false } ]
C80-1073
ATNS USED AS A PROCEDURAL DIALOG MODEL
An attempt has been made to use an Augmented Transition Network as a procedural dialog model . The development of such a model appears to be important in several respects: as a device to represent and to use different dialog schemata proposed in empirical conversation analysis ; as a device to represent and to use models of verbal interaction ; as a device combining knowledge about dialog schemata and about verbal interaction with knowledge about task-oriented and goal-directed dialogs . A standard ATN should be further developed in order to account for the verbal interactions of task-oriented dialogs .
[ { "id": "C80-1073.1", "char_start": 38, "char_end": 66 }, { "id": "C80-1073.2", "char_start": 83, "char_end": 95 }, { "id": "C80-1073.3", "char_start": 124, "char_end": 129 }, { "id": "C80-1073.4", "char_start": 221, "char_end": 236 }, { "id": "C80-1073.5", "char_start": 259, "char_end": 280 }, { "id": "C80-1073.6", "char_start": 319, "char_end": 347 }, { "id": "C80-1073.7", "char_start": 388, "char_end": 403 }, { "id": "C80-1073.8", "char_start": 414, "char_end": 432 }, { "id": "C80-1073.9", "char_start": 454, "char_end": 493 }, { "id": "C80-1073.10", "char_start": 507, "char_end": 510 }, { "id": "C80-1073.11", "char_start": 567, "char_end": 586 }, { "id": "C80-1073.12", "char_start": 590, "char_end": 611 } ]
[ { "label": 1, "arg1": "C80-1073.1", "arg2": "C80-1073.2", "reverse": false }, { "label": 5, "arg1": "C80-1073.4", "arg2": "C80-1073.5", "reverse": true }, { "label": 4, "arg1": "C80-1073.11", "arg2": "C80-1073.12", "reverse": false } ]
P80-1004
Metaphor - A Key to Extensible Semantic Analysis
Interpreting metaphors is an integral and inescapable process in human understanding of natural language . This paper discusses a method of analyzing metaphors based on the existence of a small number of generalized metaphor mappings . Each generalized metaphor contains a recognition network , a basic mapping , additional transfer mappings , and an implicit intention component . It is argued that the method reduces metaphor interpretation from a reconstruction to a recognition task . Implications towards automating certain aspects of language learning are also discussed.
[ { "id": "P80-1004.1", "char_start": 16, "char_end": 25 }, { "id": "P80-1004.2", "char_start": 68, "char_end": 107 }, { "id": "P80-1004.3", "char_start": 133, "char_end": 162 }, { "id": "P80-1004.4", "char_start": 207, "char_end": 236 }, { "id": "P80-1004.5", "char_start": 244, "char_end": 264 }, { "id": "P80-1004.6", "char_start": 276, "char_end": 295 }, { "id": "P80-1004.7", "char_start": 300, "char_end": 313 }, { "id": "P80-1004.8", "char_start": 327, "char_end": 344 }, { "id": "P80-1004.9", "char_start": 354, "char_end": 382 }, { "id": "P80-1004.10", "char_start": 422, "char_end": 445 }, { "id": "P80-1004.11", "char_start": 453, "char_end": 467 }, { "id": "P80-1004.12", "char_start": 473, "char_end": 489 }, { "id": "P80-1004.13", "char_start": 543, "char_end": 560 } ]
[ { "label": 4, "arg1": "P80-1004.1", "arg2": "P80-1004.2", "reverse": false }, { "label": 1, "arg1": "P80-1004.3", "arg2": "P80-1004.4", "reverse": true }, { "label": 4, "arg1": "P80-1004.5", "arg2": "P80-1004.6", "reverse": true } ]
P80-1019
Expanding the Horizons of Natural Language Interfaces
Current natural language interfaces have concentrated largely on determining the literal meaning of input from their users . While such decoding is an essential underpinning, much recent work suggests that natural language interfaces will never appear cooperative or graceful unless they also incorporate numerous non-literal aspects of communication , such as robust communication procedures . This paper defends that view, but claims that direct imitation of human performance is not the best way to implement many of these non-literal aspects of communication ; that the new technology of powerful personal computers with integral graphics displays offers techniques superior to those of humans for these aspects, while still satisfying human communication needs . The paper proposes interfaces based on a judicious mixture of these techniques and the still valuable methods of more traditional natural language interfaces.
[ { "id": "P80-1019.1", "char_start": 11, "char_end": 38 }, { "id": "P80-1019.2", "char_start": 92, "char_end": 99 }, { "id": "P80-1019.3", "char_start": 103, "char_end": 108 }, { "id": "P80-1019.4", "char_start": 120, "char_end": 125 }, { "id": "P80-1019.5", "char_start": 139, "char_end": 147 }, { "id": "P80-1019.6", "char_start": 209, "char_end": 236 }, { "id": "P80-1019.7", "char_start": 317, "char_end": 353 }, { "id": "P80-1019.8", "char_start": 371, "char_end": 395 }, { "id": "P80-1019.9", "char_start": 529, "char_end": 565 }, { "id": "P80-1019.10", "char_start": 604, "char_end": 622 }, { "id": "P80-1019.11", "char_start": 637, "char_end": 654 }, { "id": "P80-1019.12", "char_start": 743, "char_end": 768 }, { "id": "P80-1019.13", "char_start": 790, "char_end": 800 }, { "id": "P80-1019.14", "char_start": 901, "char_end": 928 } ]
[ { "label": 3, "arg1": "P80-1019.2", "arg2": "P80-1019.3", "reverse": false }, { "label": 1, "arg1": "P80-1019.5", "arg2": "P80-1019.7", "reverse": true }, { "label": 4, "arg1": "P80-1019.10", "arg2": "P80-1019.11", "reverse": true }, { "label": 6, "arg1": "P80-1019.13", "arg2": "P80-1019.14", "reverse": false } ]
P80-1026
Flexiable Parsing
When people use natural language in natural settings, they often use it ungrammatically, missing out or repeating words, breaking-off and restarting, speaking in fragments, etc.. Their human listeners are usually able to cope with these deviations with little difficulty. If a computer system wishes to accept natural language input from its users on a routine basis, it must display a similar indifference. In this paper, we outline a set of parsing flexibilities that such a system should provide. We go, on to describe FlexP , a bottom-up pattern-matching parser that we have designed and implemented to provide these flexibilities for restricted natural language input to a limited-domain computer system.
[ { "id": "P80-1026.1", "char_start": 18, "char_end": 34 }, { "id": "P80-1026.2", "char_start": 187, "char_end": 202 }, { "id": "P80-1026.3", "char_start": 279, "char_end": 294 }, { "id": "P80-1026.4", "char_start": 312, "char_end": 334 }, { "id": "P80-1026.5", "char_start": 344, "char_end": 349 }, { "id": "P80-1026.6", "char_start": 445, "char_end": 466 }, { "id": "P80-1026.7", "char_start": 524, "char_end": 529 }, { "id": "P80-1026.8", "char_start": 534, "char_end": 567 }, { "id": "P80-1026.9", "char_start": 641, "char_end": 668 } ]
[ { "label": 1, "arg1": "P80-1026.8", "arg2": "P80-1026.9", "reverse": false } ]