{ "paper_id": "W01-0706", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T06:00:18.636845Z" }, "title": "Exploring Evidence for Shallow Parsing", "authors": [ { "first": "Xin", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Illinois at Urbana-Champaign Urbana", "location": { "postCode": "61801", "region": "IL" } }, "email": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Illinois at Urbana-Champaign Urbana", "location": { "postCode": "61801", "region": "IL" } }, "email": "danr@cs.uiuc.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Significant amount of work has been devoted recently to develop learning techniques that can be used to generate partial (shallow) analysis of natural language sentences rather than a full parse. In this work we set out to evaluate whether this direction is worthwhile by comparing a learned shallow parser to one of the best learned full parsers on tasks both can perform-identifying phrases in sentences. We conclude that directly learning to perform these tasks as shallow parsers do is advantageous over full parsers both in terms of performance and robustness to new and lower quality texts.", "pdf_parse": { "paper_id": "W01-0706", "_pdf_hash": "", "abstract": [ { "text": "Significant amount of work has been devoted recently to develop learning techniques that can be used to generate partial (shallow) analysis of natural language sentences rather than a full parse. In this work we set out to evaluate whether this direction is worthwhile by comparing a learned shallow parser to one of the best learned full parsers on tasks both can perform-identifying phrases in sentences. We conclude that directly learning to perform these tasks as shallow parsers do is advantageous over full parsers both in terms of performance and robustness to new and lower quality texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Shallow parsing is studied as an alternative to full-sentence parsing. Rather than producing a complete analysis of sentences, the alternative is to perform only partial analysis of the syntactic structures in a text (Harris, 1957; Abney, 1991; Greffenstette, 1993) . A lot of recent work on shallow parsing has been influenced by Abney's work (Abney, 1991) , who has suggested to \"chunk\" sentences to base level phrases. For example, the sentence \"He reckons the current account deficit will narrow to only $ 1.8 billion in September .\" would be chunked as follows (Tjong Kim Sang and Buchholz, 2000) While earlier work in this direction concentrated on manual construction of rules, most of the recent work has been motivated by the observation that shallow syntactic information can be extracted using local information -by examining the pattern itself, its nearby context and the local part-of-speech information. 
Thus, over the past few years, along with advances in the use of learning and statistical methods for acquisition of full parsers (Collins, 1997; Charniak, 1997a; Charniak, 1997b; Ratnaparkhi, 1997) , significant progress has been made on the use of statistical learning methods to recognize shallow parsing patterns -syntactic phrases or words that participate in a syntactic relationship (Church, 1988; Ramshaw and Marcus, 1995; Argamon et al., 1998; Cardie and Pierce, 1998; Munoz et al., 1999; Punyakanok and Roth, 2001; Buchholz et al., 1999; Tjong Kim Sang and Buchholz, 2000) .", "cite_spans": [ { "start": 217, "end": 231, "text": "(Harris, 1957;", "ref_id": "BIBREF14" }, { "start": 232, "end": 244, "text": "Abney, 1991;", "ref_id": "BIBREF0" }, { "start": 245, "end": 265, "text": "Greffenstette, 1993)", "ref_id": "BIBREF12" }, { "start": 344, "end": 357, "text": "(Abney, 1991)", "ref_id": "BIBREF0" }, { "start": 586, "end": 601, "text": "Buchholz, 2000)", "ref_id": "BIBREF15" }, { "start": 1048, "end": 1063, "text": "(Collins, 1997;", "ref_id": "BIBREF10" }, { "start": 1064, "end": 1080, "text": "Charniak, 1997a;", "ref_id": "BIBREF6" }, { "start": 1081, "end": 1097, "text": "Charniak, 1997b;", "ref_id": "BIBREF7" }, { "start": 1098, "end": 1116, "text": "Ratnaparkhi, 1997)", "ref_id": "BIBREF20" }, { "start": 1308, "end": 1322, "text": "(Church, 1988;", "ref_id": "BIBREF8" }, { "start": 1323, "end": 1348, "text": "Ramshaw and Marcus, 1995;", "ref_id": "BIBREF19" }, { "start": 1349, "end": 1370, "text": "Argamon et al., 1998;", "ref_id": "BIBREF2" }, { "start": 1371, "end": 1395, "text": "Cardie and Pierce, 1998;", "ref_id": "BIBREF4" }, { "start": 1396, "end": 1415, "text": "Munoz et al., 1999;", "ref_id": "BIBREF17" }, { "start": 1416, "end": 1442, "text": "Punyakanok and Roth, 2001;", "ref_id": "BIBREF18" }, { "start": 1443, "end": 1465, "text": "Buchholz et al., 1999;", "ref_id": "BIBREF3" }, { "start": 1466, "end": 1500, "text": "Tjong Kim Sang and Buchholz, 2000)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Research on shallow parsing was inspired by psycholinguistics arguments (Gee and Grosjean, 1983 ) that suggest that in many scenarios (e.g., conversational) full parsing is not a realistic strategy for sentence processing and analysis, and was further motivated by several arguments from a natural language engineering viewpoint.", "cite_spans": [ { "start": 72, "end": 95, "text": "(Gee and Grosjean, 1983", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "First, it has been noted that in many natural language applications it is sufficient to use shallow parsing information; information such as noun phrases (NPs) and other syntactic sequences have been found useful in many large-scale language processing applications including information extraction and text summarization (Grishman, 1995; Appelt et al., 1993) . Second, while training a full parser requires a collection of fully parsed sentences as training corpus, it is possible to train a shallow parser incrementally. If all that is available is a collection of sentences annotated for NPs, it can be used to produce this level of analysis. This can be augmented later if more information is available. 
Finally, the hope behind this research direction was that this incremental and modular processing might result in more robust parsing decisions, especially in cases of spoken language or other cases in which the quality of the natural language inputs is low -sentences which may have repeated words, missing words, or any other lexical and syntactic mistakes.", "cite_spans": [ { "start": 322, "end": 338, "text": "(Grishman, 1995;", "ref_id": "BIBREF13" }, { "start": 339, "end": 359, "text": "Appelt et al., 1993)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Overall, the driving force behind the work on learning shallow parsers was the desire to get better performance and higher reliability. However, since work in this direction started, significant progress has also been made in the research on statistical learning of full parsers, both in terms of accuracy and processing time (Charniak, 1997b; Charniak, 1997a; Collins, 1997; Ratnaparkhi, 1997) .", "cite_spans": [ { "start": 332, "end": 349, "text": "(Charniak, 1997b;", "ref_id": "BIBREF7" }, { "start": 350, "end": 366, "text": "Charniak, 1997a;", "ref_id": "BIBREF6" }, { "start": 367, "end": 381, "text": "Collins, 1997;", "ref_id": "BIBREF10" }, { "start": 382, "end": 400, "text": "Ratnaparkhi, 1997)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper investigates the question of whether work on shallow parsing is worthwhile. That is, we attempt to evaluate quantitatively the intuitions described above -that learning to perform shallow parsing could be more accurate and more robust than learning to generate full parses. We do that by concentrating on the task of identifying the phrase structure of sentences -a byproduct of full parsers that can also be produced by shallow parsers. We investigate two instantiations of this task, \"chunking\" and identifying atomic phrases. And, to study robustness, we run our experiments both on standard Penn Treebank data (part of which is used for training the parsers) and on lower quality data -the Switchboard data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our conclusions are quite clear. Indeed, shallow parsers that are specifically trained to perform the tasks of identifying the phrase structure of a sentence are more accurate and more robust than full parsers. We believe that this finding not only justifies work in this direction, but may even suggest that it would be worthwhile to use this methodology incrementally, to learn a more complete parser, if needed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In order to run a fair comparison between full parsers and shallow parsers -which could produce quite different outputs -we have chosen the task of identifying the phrase structure of a sentence. This structure can be easily extracted from the outcome of a full parser, and a shallow parser can be trained specifically on this task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "2" }, { "text": "There is no agreement on how to define phrases in sentences. The definition could depend on downstream applications and could range from simple syntactic patterns to message units people use in conversations. For the purpose of this study, we chose to use two different definitions. 
Both can be formally defined and they reflect different levels of shallow parsing patterns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "2" }, { "text": "The first is the one used in the chunking competition in CoNLL-2000 (Tjong Kim Sang and Buchholz, 2000) . In this case, a full parse tree is represented in a flat form, producing a representation as in the example above. The goal in this case is therefore to accurately predict a collection of 11 different types of phrases. The chunk types are based on the syntactic category part of the bracket label in the Treebank. Roughly, a chunk contains everything to the left of and including the syntactic head of the constituent of the same name. The phrases are: adjective phrase (ADJP), adverb phrase (ADVP), conjunction phrase (CONJP), interjection phrase (INTJ), list marker (LST), noun phrase (NP), preposition phrase (PP), particle (PRT), subordinated clause (SBAR), unlike coordinated phrase (UCP), verb phrase (VP). (See details in (Tjong Kim Sang and Buchholz, 2000) .)", "cite_spans": [ { "start": 57, "end": 87, "text": "CoNLL-2000 (Tjong Kim Sang and", "ref_id": null }, { "start": 88, "end": 103, "text": "Buchholz, 2000)", "ref_id": "BIBREF15" }, { "start": 858, "end": 873, "text": "Buchholz, 2000)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "2" }, { "text": "The second definition used is that of atomic phrases. An atomic phrase represents the most basic phrase with no nested sub-phrases. For example, in the parse tree, ( (S (NP (NP Pierre Vinken) , (ADJP (NP 61 years) old) ,) (VP will (VP join (NP the board) (PP as (NP a nonexecutive director)) (NP Nov. 29))) .)) Pierre Vinken, 61 years, the board, a nonexecutive director and Nov. 29 are atomic phrases while other higher-level phrases are not. That is, an atomic phrase denotes a tightly coupled message unit which is just above the level of single words. (A short illustrative sketch of this extraction is given below.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "2" }, { "text": "We perform our comparison using two state-of-the-art parsers. For the full parser, we use the one developed by Michael Collins (Collins, 1996; Collins, 1997 ) -one of the most accurate full parsers around. It represents a full parse tree as a set of basic phrases and a set of dependency relationships between them. Statistical learning techniques are used to compute the probabilities of these phrases and of candidate dependency relations occurring in that sentence. The parser then chooses the candidate parse tree with the highest probability as its output. The experiments use the version that was trained (by Collins) on sections 02-21 of the Penn Treebank. The reported results for the full parse tree (on section 23) are recall/precision of 88.1/87.5 (Collins, 1997) .", "cite_spans": [ { "start": 126, "end": 141, "text": "(Collins, 1996;", "ref_id": "BIBREF9" }, { "start": 142, "end": 155, "text": "Collins, 1997", "ref_id": "BIBREF10" }, { "start": 757, "end": 772, "text": "(Collins, 1997)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Parsers", "sec_num": "2.1" }, { "text": "The shallow parser used is the SNoW-based CSCL parser (Punyakanok and Roth, 2001; Munoz et al., 1999) . 
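Before describing this shallow parser in more detail, the following minimal sketch (ours, added only for illustration; the paper's own data preparation relies on the Treebank annotations and the Chunklink script mentioned in Section 2.2) makes the atomic-phrase definition of the previous subsection concrete: it reads a simplified bracketed Treebank-style parse and returns the constituents that contain no nested sub-phrases. The function names and the simplified tree format are assumptions made for the example.

    # Illustrative sketch (assumed, simplified): list the atomic phrases of a
    # bracketed parse, i.e. the constituents whose children are all single words.

    def parse(tokens):
        label = tokens.pop(0)                    # node label following an opening bracket
        children = []
        while tokens[0] != ')':
            if tokens[0] == '(':
                tokens.pop(0)
                children.append(parse(tokens))
            else:
                children.append(tokens.pop(0))   # a word (leaf)
        tokens.pop(0)                            # consume the closing bracket
        return (label, children)

    def atomic_phrases(node, out):
        label, children = node
        subtrees = [c for c in children if isinstance(c, tuple)]
        if not subtrees:                         # no nested sub-phrases: an atomic phrase
            out.append((label, children))
        for sub in subtrees:
            atomic_phrases(sub, out)
        return out

    tree_text = ('(S (NP (NP Pierre Vinken) , (ADJP (NP 61 years) old) ,) '
                 '(VP will (VP join (NP the board) (PP as (NP a nonexecutive director)) '
                 '(NP Nov. 29))) .)')
    tokens = tree_text.replace('(', ' ( ').replace(')', ' ) ').split()
    tokens.pop(0)                                # drop the outermost opening bracket
    print(atomic_phrases(parse(tokens), []))
    # [('NP', ['Pierre', 'Vinken']), ('NP', ['61', 'years']), ('NP', ['the', 'board']),
    #  ('NP', ['a', 'nonexecutive', 'director']), ('NP', ['Nov.', '29'])]

On the Pierre Vinken example this yields exactly the five atomic phrases listed above.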
SNoW (Carleson et al., 1999; Roth, 1998 ) is a multi-class classifier that is specifically tailored for learning in domains in which the potential number of information sources (features) taking part in decisions is very large, of which NLP is a principal example. It works by learning a sparse network of linear functions over a pre-defined or incrementally learned feature space. Typically, SNoW is used as a classifier, and predicts using a winner-take-all mechanism over the activation value of the target classes. However, in addition to the prediction, it provides a reliable confidence level in the prediction, which enables its use in an inference algorithm that combines predictors to produce a coherent inference. Indeed, in CSCL (constraint satisfaction with classifiers), SNoW is used to learn several different classifiers -each detects the beginning or end of a phrase of some type (noun phrase, verb phrase, etc.). The outcomes of these classifiers are then combined in a way that satisfies some constraints -non-overlapping constraints in this case -using an efficient constraint satisfaction mechanism that makes use of the confidence in the classifier's outcomes.", "cite_spans": [ { "start": 54, "end": 81, "text": "(Punyakanok and Roth, 2001;", "ref_id": "BIBREF18" }, { "start": 82, "end": 101, "text": "Munoz et al., 1999)", "ref_id": "BIBREF17" }, { "start": 109, "end": 132, "text": "(Carleson et al., 1999;", "ref_id": "BIBREF5" }, { "start": 133, "end": 143, "text": "Roth, 1998", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Parsers", "sec_num": "2.1" }, { "text": "Since earlier versions of the SNoW based CSCL were used only to identify single phrases (Punyakanok and Roth, 2001; Munoz et al., 1999) and never to identify a collection of several phrases at the same time, as we do here, we also trained and tested it under the exact conditions of CoNLL-2000 (Tjong Kim Sang and Buchholz, 2000) to compare it to other shallow parsers. Table 1 shows that it ranks among the top shallow parsers evaluated there 1 . ", "cite_spans": [ { "start": 88, "end": 115, "text": "(Punyakanok and Roth, 2001;", "ref_id": "BIBREF18" }, { "start": 116, "end": 135, "text": "Munoz et al., 1999)", "ref_id": "BIBREF17" }, { "start": 283, "end": 313, "text": "CoNLL-2000 (Tjong Kim Sang and", "ref_id": null }, { "start": 314, "end": 329, "text": "Buchholz, 2000)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 370, "end": 377, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Parsers", "sec_num": "2.1" }, { "text": "Training was done on the Penn Treebank (Marcus et al., 1993) Wall Street Journal data, sections 02-21. To train the CSCL shallow parser we had first to convert the WSJ data to a flat format that directly provides the phrase annotations. This is done using the \"Chunklink\" program provided for CoNLL-2000 (Tjong Kim Sang and Buchholz, 2000) . Testing was done on two types of data. For the first experiment, we used the WSJ section 00 (which contains about 45,000 tokens and 23,500 phrases). The goal here was simply to evaluate the full parser and the shallow parser on text that is similar to the one they were trained on. 1 We note that some of the variations in the results are due to variations in experimental methodology rather than parser's quality. For example, in [KM00], rather than learning a classifier for each of the different phrases, a discriminator is learned for each of the phrase pairs which, statistically, yields better results. 
[Hal00] also uses different parsers and reports the results of some voting mechanism on top of these.", "cite_spans": [ { "start": 324, "end": 339, "text": "Buchholz, 2000)", "ref_id": "BIBREF15" }, { "start": 624, "end": 625, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2.2" }, { "text": "Our robustness test (section 3.2) makes use of section 4 in the Switchboard (SWB) data (which contains about 57,000 tokens and 17,000 phrases), taken from Treebank 3. The Switchboard data contains conversation records transcribed from phone calls. The goal here was twofold. First, to evaluate the parsers on a data source that is different from the training source. More importantly, the goal was to evaluate the parsers on low quality data and observe the absolute performance as well as the relative degradation in performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2.2" }, { "text": "The following sentence is a typical example of the SWB data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2.2" }, { "text": "Huh/UH ,/, well/UH ,/, um/UH ,/, you/PRP know/VBP ,/, I/PRP guess/VBP it/PRP 's/BES pretty/RB deep/JJ feelings/NNS ,/, uh/UH ,/, I/PRP just/RB ,/, uh/UH ,/, went/VBD back/RB and/CC rented/VBD ,/, uh/UH ,/, the/DT movie/NN ,/, what/WP is/VBZ it/PRP ,/, GOOD/JJ MORNING/NN VIET/NNP NAM/NNP ./.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2.2" }, { "text": "The fact that it has some missing words, repeated words and frequent interruptions makes it suitable data for testing the robustness of parsers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2.2" }, { "text": "We had to do some work in order to unify the input and output representations for both parsers. Both parsers take sentences annotated with POS tags as their input. We used the POS tags in the WSJ and converted both the WSJ and the SWB data into the parsers' slightly different input formats. We also had to convert the outcomes of the parsers in order to evaluate them in a fair way. We chose CoNLL-2000's chunking format as our standard output format and converted Collins' parser outcome into this format.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representation", "sec_num": "2.3" }, { "text": "The results are reported in terms of precision, recall, and F_β=1, where F_β = (β^2 + 1) * precision * recall / (β^2 * precision + recall); with β = 1, precision and recall are weighted equally.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance Measure", "sec_num": "2.4" }, { "text": "We have used the evaluation procedure of CoNLL-2000 to produce the results below. Although we do not report significance results here, note that all experiments were done on tens of thousands of instances and clearly all differences and ratios measured are statistically significant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance Measure", "sec_num": "2.4" }, { "text": "We start by reporting the results in which we compare the full parser and the shallow parser on the \"clean\" WSJ data. Table 2 shows the results on identifying all phrases -chunking in CoNLL-2000 (Tjong Kim Sang and Buchholz, 2000) terminology. The results show that for the task of identifying phrases, learning it directly, as the shallow parser does, outperforms extracting the phrases from the output of the full parser. 
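(The scores just mentioned, and all scores below, are the phrase-level precision, recall and F_β=1 of Section 2.4. As a concrete reference, the following minimal sketch, ours and not the official CoNLL-2000 evaluation script, shows how such scores can be computed from gold and predicted phrase spans; the function name and the span encoding are assumptions made for the example.)

    # Illustrative sketch (assumed): phrase-level precision, recall and F_beta=1.
    # A predicted phrase counts as correct only if its span and its type both
    # match a phrase in the gold annotation.

    def prf(gold, pred, beta=1.0):
        # gold, pred: sets of (sentence_id, start, end, phrase_type) tuples
        correct = len(gold & pred)
        precision = correct / len(pred) if pred else 0.0
        recall = correct / len(gold) if gold else 0.0
        if precision + recall == 0.0:
            return precision, recall, 0.0
        f = (beta * beta + 1) * precision * recall / (beta * beta * precision + recall)
        return precision, recall, f

    gold = {(0, 0, 1, 'NP'), (0, 1, 2, 'VP'), (0, 2, 6, 'NP')}
    pred = {(0, 0, 1, 'NP'), (0, 1, 2, 'VP'), (0, 2, 4, 'NP')}
    print(prf(gold, pred))   # two of the three phrases match exactly, so P = R = F = 2/3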
Next, we compared the performance of the parsers on the task of identifying atomic phrases 2 . Here, again, the shallow parser exhibits significantly better performance. Table 3 shows the results of extracting atomic phrases.", "cite_spans": [ { "start": 184, "end": 214, "text": "CoNLL-2000 (Tjong Kim Sang and", "ref_id": null }, { "start": 215, "end": 230, "text": "Buchholz, 2000)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 118, "end": 125, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 569, "end": 576, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Performance", "sec_num": "3.1" }, { "text": "Next, we present the results of evaluating the robustness of the parsers on lower quality data. Table 4 describes the results of evaluating the same parsers as above (both trained as before on the same WSJ sections) on the SWB data. It is evident that on this data the difference between the performance of the two parsers is even more significant. This is shown more clearly in Table 5 , which compares the relative degradation in performance each of the parsers suffers when moving from the WSJ to the SWB data (Table 2 vs. Table 4 ); for each parser, the figure shown is the ratio between its result on the \"noisy\" SWB data and its result on the \"clean\" WSJ data. While the performance of both parsers goes down when they are tested on the SWB, relative to the WSJ performance, it is clear that the shallow parser's performance degrades more gracefully. These results clearly indicate the higher level of robustness of the shallow parser.", "cite_spans": [], "ref_spans": [ { "start": 379, "end": 386, "text": "Table 5", "ref_id": "TABREF6" }, { "start": 511, "end": 531, "text": "(Table 2 vs. Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Robustness", "sec_num": "3.2" }, { "text": "Analyzing the results shown above is outside the scope of this short abstract. We will only provide one example that might shed some light on the reasons for the more significant degradation in the results of the full parser. Table 6 exhibits the results of chunking as given by Collins' parser. The four columns are the original words, the POS tags, and the phrases, encoded using the BIO scheme (B: beginning of phrase; I: inside the phrase; O: outside the phrase), given both as the true annotation and as Collins' annotation.", "cite_spans": [], "ref_spans": [ { "start": 226, "end": 233, "text": "Table 6", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Discussion", "sec_num": "3.3" }, { "text": "The mistakes in the phrase identification (e.g., in \"word processing applications\") seem to be a result of assuming, perhaps due to the \"um\" and additional punctuation marks, that this is a separate sentence, rather than a phrase. Under this assumption, the full parser tries to make it a complete sentence and decides that \"processing\" is a \"verb\" in the parsing result. This seems to be a typical example of the mistakes made by the full parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3.3" }, { "text": "Full parsing and shallow parsing are two different strategies for parsing natural languages. While full parsing provides more complete information about a sentence, shallow parsing is more flexible, easier to train and could be targeted for specific, limited subtasks. 
Several arguments have been used in the past to argue, on an intuitive basis, that (1) shallow parsing is sufficient for a wide range of applications and that (2) shallow parsing could be more reliable than full parsing in handling ill-formed real-world sentences, such as sentences that occur in conversational situations. While the former is an experimental issue that is still open, this paper has tried to evaluate experimentally the latter argument. Although the experiments reported here only compare the performance of one full parser and one shallow parser, we believe that these state-of-the-art parsers represent their class quite well. Our results show that on the specific tasks for which we have trained the shallow parser -identifying several kinds of phrases -the shallow parser performs more accurately and more robustly than the full parser. In some sense, these results validate the research in this direction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "Clearly, there are several directions for future work that this preliminary work suggests. First, in our experiments, the Collins' parser is trained on the Treebank data and tested on the lower quality data. It would be interesting to see what are the results if lower quality data is also used for training. Second, our decision to run the experiments on two different ways of decomposing a sentence into phrases was somewhat arbitrary (although we believe that selecting phrases in a different way would not affect the results). It does reflect, however, the fact that it is not completely clear what kinds of shallow parsing information should one try to extract in real applications. Making progress in the direction of a formal definition of phrases and experimenting with these along the lines of the current study would also be useful. Finally, an experimental comparison on several other shallow parsing tasks such as various attachments and relations detection is also an important direction that will enhance this work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "As a side note -the fact that the same program could be trained to recognize patterns of different level in such an easy way, only by changing the annotations of the training data, could also be viewed as an advantage of the shallow parsing paradigm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We are grateful to Vasin Punyakanok for his advice in this project and for his help in using the CSCL parser. We also thank Michael Collins for making his parser available to us.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": "5" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Parsing by chunks", "authors": [ { "first": "S", "middle": [ "P" ], "last": "Abney", "suffix": "" } ], "year": 1991, "venue": "Principlebased parsing: Computation and Psycholinguistics", "volume": "", "issue": "", "pages": "257--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. P. Abney. 1991. Parsing by chunks. In S. P. Ab- ney R. C. Berwick and C. Tenny, editors, Principle- based parsing: Computation and Psycholinguistics, pages 257-278. 
Kluwer, Dordrecht.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "FASTUS: A finite-state processor for information extraction from real-world text", "authors": [ { "first": "D", "middle": [], "last": "Appelt", "suffix": "" }, { "first": "J", "middle": [], "last": "Hobbs", "suffix": "" }, { "first": "J", "middle": [], "last": "Bear", "suffix": "" }, { "first": "D", "middle": [], "last": "Israel", "suffix": "" }, { "first": "M", "middle": [], "last": "Tyson", "suffix": "" } ], "year": 1993, "venue": "Proc. International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Appelt, J. Hobbs, J. Bear, D. Israel, and M. Tyson. 1993. FASTUS: A finite-state processor for infor- mation extraction from real-world text. In Proc. International Joint Conference on Artificial Intelli- gence.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A memory-based approach to learning shallow natural language patterns", "authors": [ { "first": "S", "middle": [], "last": "Argamon", "suffix": "" }, { "first": "I", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Y", "middle": [], "last": "Krymolowski", "suffix": "" } ], "year": 1998, "venue": "The 17th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Argamon, I. Dagan, and Y. Krymolowski. 1998. A memory-based approach to learning shallow nat- ural language patterns. In COLING-ACL 98, The 17th International Conference on Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Cascaded grammatical relation assignment", "authors": [ { "first": "S", "middle": [], "last": "Buchholz", "suffix": "" }, { "first": "J", "middle": [], "last": "Veenstra", "suffix": "" }, { "first": "W", "middle": [], "last": "Daelemans", "suffix": "" } ], "year": 1999, "venue": "EMNLP-VLC'99, the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Buchholz, J. Veenstra, and W. Daelemans. 1999. Cascaded grammatical relation assignment. In EMNLP-VLC'99, the Joint SIGDAT Conference on Empirical Methods in Natural Language Process- ing and Very Large Corpora, June.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Error-driven pruning of Treebanks grammars for base noun phrase identification", "authors": [ { "first": "C", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "D", "middle": [], "last": "Pierce", "suffix": "" } ], "year": 1998, "venue": "Proceedings of ACL-98", "volume": "", "issue": "", "pages": "218--224", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Cardie and D. Pierce. 1998. Error-driven pruning of Treebanks grammars for base noun phrase iden- tification. In Proceedings of ACL-98, pages 218- 224.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The SNoW learning architecture", "authors": [ { "first": "A", "middle": [], "last": "Carleson", "suffix": "" }, { "first": "C", "middle": [], "last": "Cumby", "suffix": "" }, { "first": "J", "middle": [], "last": "Rosen", "suffix": "" }, { "first": "D", "middle": [], "last": "Roth", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Carleson, C. Cumby, J. Rosen, and D. Roth. 1999. The SNoW learning architecture. 
Technical Re- port UIUCDCS-R-99-2101, UIUC Computer Sci- ence Department, May.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Statistical parsing with a contextfree grammar and word statistics", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 1997, "venue": "Proc. National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Charniak. 1997a. Statistical parsing with a context- free grammar and word statistics. In Proc. National Conference on Artificial Intelligence.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Statistical techniques for natural language parsing. The AI Magazine", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Charniak. 1997b. Statistical techniques for natural language parsing. The AI Magazine.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A stochastic parts program and noun phrase parser for unrestricted text", "authors": [ { "first": "Kenneth", "middle": [ "W" ], "last": "Church", "suffix": "" } ], "year": 1988, "venue": "Proc. of ACL Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth W. Church. 1988. A stochastic parts program and noun phrase parser for unrestricted text. In Proc. of ACL Conference on Applied Natural Lan- guage Processing.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A new statistical parser based on bigram lexical dependencies", "authors": [ { "first": "M", "middle": [], "last": "Collins", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "184--191", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Collins. 1996. A new statistical parser based on bigram lexical dependencies. In Proceedings of the 34th Annual Meeting of the Association for Compu- tational Linguistics, pages 184-191.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Three generative, lexicalised models for statistical parsing", "authors": [ { "first": "M", "middle": [], "last": "Collins", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Collins. 1997. Three generative, lexicalised mod- els for statistical parsing. In Proceedings of the 35th Annual Meeting of the Association for Com- putational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Performance structures:a psycholinguistic and linguistic appraisal", "authors": [ { "first": "J", "middle": [ "P" ], "last": "Gee", "suffix": "" }, { "first": "F", "middle": [], "last": "Grosjean", "suffix": "" } ], "year": 1983, "venue": "Cognitive Psychology", "volume": "15", "issue": "", "pages": "411--458", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. P. Gee and F. Grosjean. 1983. Performance struc- tures:a psycholinguistic and linguistic appraisal. 
Cognitive Psychology, 15:411-458.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Evaluation techniques for automatic semantic extraction: comparing semantic and window based approaches", "authors": [ { "first": "G", "middle": [], "last": "Greffenstette", "suffix": "" } ], "year": 1993, "venue": "ACL'93 workshop on the Acquisition of Lexical Knowledge from Text", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Greffenstette. 1993. Evaluation techniques for au- tomatic semantic extraction: comparing semantic and window based approaches. In ACL'93 work- shop on the Acquisition of Lexical Knowledge from Text.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The NYU system for MUC-6 or where's syntax?", "authors": [ { "first": "R", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the Sixth Message Understanding Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Grishman. 1995. The NYU system for MUC-6 or where's syntax? In B. Sundheim, editor, Proceed- ings of the Sixth Message Understanding Confer- ence. Morgan Kaufmann Publishers.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Co-occurrence and transformation in linguistic structure", "authors": [ { "first": "Z", "middle": [ "S" ], "last": "Harris", "suffix": "" } ], "year": 1957, "venue": "Language", "volume": "33", "issue": "3", "pages": "283--340", "other_ids": {}, "num": null, "urls": [], "raw_text": "Z. S. Harris. 1957. Co-occurrence and transformation in linguistic structure. Language, 33(3):283-340.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Introduction to the CoNLL-2000 shared task: Chunking", "authors": [ { "first": "E", "middle": [ "F" ], "last": "Tjong Kim Sang", "suffix": "" }, { "first": "S", "middle": [], "last": "Buchholz", "suffix": "" } ], "year": 2000, "venue": "Proceedings of CoNLL-2000 and LLL-2000", "volume": "", "issue": "", "pages": "127--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. F. Tjong Kim Sang and S. Buchholz. 2000. In- troduction to the CoNLL-2000 shared task: Chunk- ing. In Proceedings of CoNLL-2000 and LLL-2000, pages 127-132.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Building a large annotated corpus of English: The Penn Treebank", "authors": [ { "first": "M", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "B", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "M", "middle": [], "last": "Marcinkiewicz", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. P. Marcus, B. Santorini, and M. Marcinkiewicz. 1993. Building a large annotated corpus of En- glish: The Penn Treebank. 
Computational Linguis- tics, 19(2):313-330, June.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A learning approach to shallow parsing", "authors": [ { "first": "M", "middle": [], "last": "Munoz", "suffix": "" }, { "first": "V", "middle": [], "last": "Punyakanok", "suffix": "" }, { "first": "D", "middle": [], "last": "Roth", "suffix": "" }, { "first": "D", "middle": [], "last": "Zimak", "suffix": "" } ], "year": 1999, "venue": "EMNLP-VLC'99, the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Munoz, V. Punyakanok, D. Roth, and D. Zimak. 1999. A learning approach to shallow parsing. In EMNLP-VLC'99, the Joint SIGDAT Conference on Empirical Methods in Natural Language Process- ing and Very Large Corpora, June.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The use of classifiers in sequential inference", "authors": [ { "first": "V", "middle": [], "last": "Punyakanok", "suffix": "" }, { "first": "D", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2001, "venue": "NIPS-13; The 2000 Conference on Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "V. Punyakanok and D. Roth. 2001. The use of clas- sifiers in sequential inference. In NIPS-13; The 2000 Conference on Advances in Neural Informa- tion Processing Systems.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Text chunking using transformation-based learning", "authors": [ { "first": "L", "middle": [ "A" ], "last": "Ramshaw", "suffix": "" }, { "first": "M", "middle": [ "P" ], "last": "Marcus", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the Third Annual Workshop on Very Large Corpora", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. A. Ramshaw and M. P. Marcus. 1995. Text chunk- ing using transformation-based learning. In Pro- ceedings of the Third Annual Workshop on Very Large Corpora.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A linear observed time statistical parser based on maximum entropy models", "authors": [ { "first": "A", "middle": [], "last": "Ratnaparkhi", "suffix": "" } ], "year": 1997, "venue": "EMNLP-97, The Second Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Ratnaparkhi. 1997. A linear observed time statis- tical parser based on maximum entropy models. In EMNLP-97, The Second Conference on Empirical Methods in Natural Language Processing, pages 1- 10.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Learning to resolve natural language ambiguities: A unified approach", "authors": [ { "first": "D", "middle": [], "last": "Roth", "suffix": "" } ], "year": 1998, "venue": "Proc. National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "806--813", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Roth. 1998. Learning to resolve natural language ambiguities: A unified approach. In Proc. National Conference on Artificial Intelligence, pages 806- 813.", "links": null } }, "ref_entries": { "TABREF0": { "content": "", "text": ": [NP He ] [VP reckons ] [NP the current account deficit ] [VP will narrow ] [PP \u00a1 This research is supported by NSF grants IIS-9801638, ITR-IIS-0085836 and an ONR MURI Award. 
to ] [NP only $ 1.8 billion ] [PP in ] [NP September] .", "html": null, "num": null, "type_str": "table" }, "TABREF1": { "content": "
Parsers     Precision(%)   Recall(%)   F_β=1(%)
KM00        93.45          93.51       93.48
Hal00       93.13          93.51       93.32
CSCL *      93.41          92.64       93.02
TKS00       94.04          91.00       92.50
ZST00       91.99          92.25       92.12
Dej00       91.87          92.31       92.09
Koe00       92.08          91.86       91.97
Osb00       91.65          92.23       91.94
VB00        91.05          92.03       91.54
PMP00       90.63          89.65       90.14
Joh00       86.24          88.25       87.23
VD00        88.82          82.91       85.76
Baseline    72.58          82.14       77.07
", "text": "See (Tjong Kim Sang and Buchholz, 2000) for details.", "html": null, "num": null, "type_str": "table" }, "TABREF3": { "content": "
        Full Parser                    Shallow Parser
        P       R       F_β=1          P       R       F_β=1
Avr     91.71   92.21   91.96          93.85   95.45   94.64
NP      93.10   92.05   92.57          93.83   95.92   94.87
VP      86.00   90.42   88.15          95.50   95.05   95.28
", "text": "the full and the shallow parser on the WSJ data. Results are shown for an (weighted) average of 11 types of phrases as well as for two of the most common phrases, NP and VP.", "html": null, "num": null, "type_str": "table" }, "TABREF4": { "content": "
        Full Parser                    Shallow Parser
        P       R       F_β=1          P       R       F_β=1
Avr     88.68   90.45   89.56          92.02   93.61   92.81
NP      91.86   92.16   92.01          93.54   95.88   94.70
", "text": "the WSJ data. Results are shown for an (weighted) average of 11 types of phrases as well as for the most common phrase, NP. VP occurs very infrequently as an atomic phrase.", "html": null, "num": null, "type_str": "table" }, "TABREF5": { "content": "
data: Precision & Recall for phrase identification (chunking) on the Switchboard data. Results are shown for a (weighted) average of 11 types of phrases as well as for two of the most common phrases, NP and VP.
        Full Parser                    Shallow Parser
        P       R       F_β=1          P       R       F_β=1
Avr     81.54   83.79   82.65          86.50   90.54   88.47
NP      88.29   88.96   88.62          90.50   92.59   91.54
VP      70.61   83.53   76.52          85.30   89.76   87.47
", "text": "Switchboard", "html": null, "num": null, "type_str": "table" }, "TABREF6": { "content": "", "text": "Robustness: Relative degradation in \u00a5 \u00a6", "html": null, "num": null, "type_str": "table" }, "TABREF8": { "content": "
WORD           POS     TRUE      Collins
Um             UH      B-INTJ    B-INTJ
COMMA          COMMA   O         I-INTJ
Mostly         RB      O         I-INTJ
COMMA          COMMA   O         O
um             UH      B-INTJ    B-INTJ
COMMA          COMMA   O         O
word           NN      B-NP      B-NP
processing     NN      I-NP      B-VP
applications   NNS     I-NP      B-NP
and            CC      O         O
COMMA          COMMA   O         O
uh             UH      B-INTJ    B-INTJ
COMMA          COMMA   O         O
just           RB      B-ADVP    B-PP
as             IN      B-PP      I-PP
a              DT      B-NP      B-NP
dumb           JJ      I-NP      I-NP
terminal       NN      I-NP      I-NP
...                    O         O
", "text": "", "html": null, "num": null, "type_str": "table" } } } }