{
"paper_id": "W01-0521",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:00:53.753586Z"
},
"title": "Corpus Variation and Parser Performance",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"settlement": "Berkeley"
}
},
"email": "gildea@cs.berkeley.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Most work in statistical parsing has focused on a single corpus: the Wall Street Journal portion of the Penn Treebank. While this has allowed for quantitative comparison of parsing techniques, it has left open the question of how other types of text might a ect parser performance, and how portable parsing models are across corpora. We examine these questions by comparing results for the Brown and WSJ corpora, and also consider which parts of the parser's probability model are particularly tuned to the corpus on which it was trained. This leads us to a technique for pruning parameters to reduce the size of the parsing model.",
"pdf_parse": {
"paper_id": "W01-0521",
"_pdf_hash": "",
"abstract": [
{
"text": "Most work in statistical parsing has focused on a single corpus: the Wall Street Journal portion of the Penn Treebank. While this has allowed for quantitative comparison of parsing techniques, it has left open the question of how other types of text might a ect parser performance, and how portable parsing models are across corpora. We examine these questions by comparing results for the Brown and WSJ corpora, and also consider which parts of the parser's probability model are particularly tuned to the corpus on which it was trained. This leads us to a technique for pruning parameters to reduce the size of the parsing model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The past several years have seen great progress in the eld of natural language parsing, through the use of statistical methods trained using large corpora of hand-parsed training data. The techniques of Charniak 1997 , Collins 1997 , and Ratnaparkhi 1997 achieved roughly comparable results using the same sets of training and test data. In each case, the corpus used was the Penn Treebank's hand-annotated parses of Wall Street Journal articles. Relatively few quantitative parsing results have been reported on other corpora though see Stolcke e t al. 1996 for results on Switchboard, as well as Collins et al. 1999 for results on Czech and Hwa 1999 for bootstrapping from WSJ to ATIS. The inclusion of parses for the Brown corpus in the Penn Treebank allows us to compare parser performance across corpora. In this paper we examine the following questions:",
"cite_spans": [
{
"start": 203,
"end": 216,
"text": "Charniak 1997",
"ref_id": "BIBREF1"
},
{
"start": 217,
"end": 231,
"text": ", Collins 1997",
"ref_id": "BIBREF5"
},
{
"start": 232,
"end": 254,
"text": ", and Ratnaparkhi 1997",
"ref_id": "BIBREF11"
},
{
"start": 538,
"end": 558,
"text": "Stolcke e t al. 1996",
"ref_id": null
},
{
"start": 598,
"end": 617,
"text": "Collins et al. 1999",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To what extent is the performance of statistical parsers on the WSJ task due to its relatively uniform style, and how m i g h t s u c h parsers fare on the more varied Brown corpus? Can training data from one corpus be applied to parsing another? What aspects of the parser's probability m o d e l are particularly tuned to one corpus, and which are more general?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our investigation of these questions leads us to a surprising result about parsing the WSJ corpus: over a third of the model's parameters can be eliminated with little impact on performance. Aside from cross-corpus considerations, this is an important nding if a lightweight parser is desired or memory usage is a consideration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A great deal of work has been done outside of the parsing community analyzing the variations between corpora and di erent genres of text. Biber 1993 investigated variation in a number syntactic features over genres, or registers, of language. Of particular importance to statistical parsers is the investigation of frequencies for verb subcategorizations such a s R o l a n d a n d Jurafsky 1998. Roland et al. 2000 nd that subcategorization frequencies for certain verbs vary signi cantly between the Wall Street Journal corpus and the mixed-genre Brown corpus, but that they vary less so between genre-balanced British and American corpora. Argument structure is essentially the task that automatic parsers attempt to solve, and the frequencies of various structures in training data are re ected in a statistical parser's probability model. The variation in verb argument structure found by previous research caused us to wonder to what extent a model trained on one corpus would be useful in parsing another. The probability models of modern parsers include not only the number and syntactic type of a word's arguments, but lexical information about their llers. Although we a r e n o t a ware of previous comparisons of the frequencies of argument llers, we can only assume that they vary at least as much as the syntactic subcategorization frames.",
"cite_spans": [
{
"start": 397,
"end": 415,
"text": "Roland et al. 2000",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Comparisons of Corpora",
"sec_num": "2"
},
{
"text": "We take as our baseline parser the statistical model of Model 1 of Collins 1997. The model is a historybased, generative model, in which the probability for a parse tree is found by expanding each node in the tree in turn into its child nodes, and multiplying the probabilities for each action in the derivation. It can be thought o f a s a v ariety of lexicalized probabilistic context-free grammar, with the rule probabilities factored into three distributions. The rst distribution gives probability of the syntactic category H of the head child of a parent node with category P , head word H h wwith the head tag the part of speech tag of the head word H h t : P h HjP; Hht; Hhw The head word and head tag of the new node H are de ned to be the same as those of its parent. The remaining two distributions generate the non-head children one after the other. A special STOP symbol is generated to terminate the sequence of children for a given parent. Each child is generated in two steps: rst its syntactic category C and head tag C h tare chosen given the parent's and head child's features and a function representing the distance from the head child: P c C;ChtjP;H;Hht;Hhw; Then the new child's head word C h wis chosen: P cw C h w jP;H;Hht;Hhw;; C ; C h t For each of the three distributions, the empirical distribution of the training data is interpolated with less speci c backo distributions, as we will see in Section 5. Further details of the model, including the distance features used and special handling of punctuation, conjunctions, and base noun phrases, are described in Collins 1999. The fundamental features of used in the probability distributions are the lexical heads and head tags of each constituent, the co-occurrences of parent nodes and their head children, and the cooccurrences of child nodes with their head siblings and parents. The probability models of Charniak 1997 , Magerman 1995 and Ratnaparkhi 1997 di er in their details but are based on similar features. Models 2 and 3 of Collins 1997 add some slightly more elaborate features to the probability model, as do the additions of Charniak 2000 to the model of Charniak 1997.",
"cite_spans": [
{
"start": 1591,
"end": 1604,
"text": "Collins 1999.",
"ref_id": "BIBREF6"
},
{
"start": 1889,
"end": 1902,
"text": "Charniak 1997",
"ref_id": "BIBREF1"
},
{
"start": 1903,
"end": 1918,
"text": ", Magerman 1995",
"ref_id": "BIBREF9"
},
{
"start": 1919,
"end": 1939,
"text": "and Ratnaparkhi 1997",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Parsing Model",
"sec_num": "3"
},
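To make the factorization above concrete, here is a minimal sketch (Python, not Collins' actual implementation) of how a tree's probability decomposes into P_h, P_c and P_cw terms, with a STOP symbol ending each parent's sequence of non-head children. The node interface (label, head_word, head_tag, head_child, non_head_children) and the omission of the distance function are assumptions made for illustration.

```python
def tree_prob(node, P_h, P_c, P_cw):
    """Probability of the subtree rooted at `node` under the factored model.
    `node` is assumed to expose: label, head_word, head_tag, head_child,
    and non_head_children (in generation order). The distance feature and
    other refinements of the full model are omitted in this sketch."""
    if node.head_child is None:  # preterminal: its word was generated by its parent
        return 1.0
    P, H = node.label, node.head_child.label
    ht, hw = node.head_tag, node.head_word
    prob = P_h(H, P, ht, hw)                      # choose the head child's category
    for child in node.non_head_children:          # generate non-head children in turn
        prob *= P_c(child.label, child.head_tag, P, H, ht, hw)
        prob *= P_cw(child.head_word, P, H, ht, hw, child.label, child.head_tag)
        prob *= tree_prob(child, P_h, P_c, P_cw)
    prob *= P_c("STOP", None, P, H, ht, hw)       # terminate the child sequence
    prob *= tree_prob(node.head_child, P_h, P_c, P_cw)
    return prob
```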
{
"text": "Our implementation of Collins' Model 1 performs at 86 precision and recall of labeled parse constituents on the standard Wall Street Journal training and test sets. While this does not re ect the state-of-the-art performance on the WSJ task achieved by the more the complex models of Charniak 2000 and Collins 2000, we regard it as a reasonable baseline for the investigation of corpus e ects on statistical parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Parsing Model",
"sec_num": "3"
},
{
"text": "We conducted separate experiments using WSJ data, Brown data, and a combination of the two as training material. For the WSJ data, we observed the standard division into training sections 2 through 21 of the treebank and test section 23 sets. For the Brown data, we reserved every tenth sentence in the corpus as test data, using the other nine for training. This may underestimate the difculty of the Brown corpus by including sentences from the same documents in training and test sets. However, because of the variation within the Brown corpus, we felt that a single contiguous test section might not be representative. Only the subset of the Brown corpus available in the Treebank II bracketing format was used. This subset consists primarily of various ction genres. Corpus sizes are shown in Table 1 Table 2 . The basic mismatch between the two corpora is shown in the signi cantly lower performance of the WSJtrained model on Brown data than on WSJ data rows 1 and 2. A model trained on Brown data only does signi cantly better, despite the smaller size of the training set. Combining the WSJ and Brown training data in one model improves performance further, but by less than 0.5 absolute. Similarly, adding the Brown data to the WSJ model increased performance on WSJ by less than 0.5. Thus, even a large amount of additional data seems to have relatively little impact if it is not matched to the test material.",
"cite_spans": [],
"ref_spans": [
{
"start": 798,
"end": 805,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 806,
"end": 813,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parsing Results on the Brown Corpus",
"sec_num": "4"
},
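As a concrete illustration of the held-out scheme described above, the following sketch (Python; `sentences` is an assumed list of parsed Brown sentences) reserves every tenth sentence for testing and uses the rest for training. Whether the first or the tenth sentence of each block of ten is held out is an arbitrary choice made here for illustration.

```python
def split_brown(sentences):
    """Hold out every tenth sentence as test data; train on the rest."""
    test = [s for i, s in enumerate(sentences) if i % 10 == 0]
    train = [s for i, s in enumerate(sentences) if i % 10 != 0]
    return train, test
```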
{
"text": "The more varied nature of the Brown corpus also seems to impact results, as all the results on Brown are lower than the WSJ result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Results on the Brown Corpus",
"sec_num": "4"
},
{
"text": "The parsers cited above all use some variety of lexical dependency feature to capture statistics on the co-occurrence of pairs of words being found in parentchild relations within the parse tree. These word pair relations, also called lexical bigrams Collins, 1996 , are reminiscent of dependency grammars such as Me l cuk 1988 and the link grammar of Sleator and Temperley 1993. In Collins' Model 1, the word pair statistics occur in the distribution P cw C h w jP;H;Hht;Hhw;; C ; C h t where H h w represent the head word of a parent n o d e in the tree and C h wthe head word of its non-head child. The head word of a parent is the same as the head word of its head child. Because this is the only part of the model that involves pairs of words, it is also where the bulk of the parameters are found. The large number of possible pairs of words in the vocabulary make the training data necessarily sparse. In order to avoid assigning zero probability to unseen events, it is necessary to smooth the training data. The Collins model uses linear interpolation to estimate probabilities from empirical distributions of varying speci cities:",
"cite_spans": [
{
"start": 251,
"end": 264,
"text": "Collins, 1996",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The E ect of Lexical Dependencies",
"sec_num": "5"
},
{
"text": "P cw C h w jP;H;Hht;Hhw;; C ; C h t = 1P C h w jP;H;Hht;Hhw;; C ; C h t + 1 , 1 2P C h w jP; H; Hht; ; C ; C h t + 1 , 2 P C h w jC h t 1 whereP represents the empirical distribution derived directly from the counts in the training data. The interpolation weights 1 , 2 are chosen as a function of the number of examples seen for the conditioning events and the number of unique values seen for the predicted variable. Only the rst distribution in this interpolation scheme involves pairs of words, and the third component is simply the probability o f a w ord given its part of speech. Because the word pair feature is the most speci c in the model, it is likely to be the most corpusspeci c. The vocabularies used in corpora vary, as do the word frequencies. It is reasonable to expect word co-occurrences to vary as well. In order to test this hypothesis, we removed the distribu-tionPC h w jP;H;Hht;Hhw;C;Cht from the parsing model entirely, relying on the interpolation of the two less speci c distributions in the parser: P cw2 C h w jP;H;Hht;; C ; C h t = 2P C h w jP; H; Hht; ; C ; C h t + 1 , 2 P C h w jC h t 2 We performed cross-corpus experiments as before to determine whether the simpler parsing model might be more robust to corpus e ects. Results are shown in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 1276,
"end": 1283,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "The E ect of Lexical Dependencies",
"sec_num": "5"
},
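The interpolation in equations (1) and (2) can be sketched as follows (Python). The emp_full, emp_no_hw and emp_tag arguments stand for lookups into the three empirical distributions P̃, and ctx bundles the conditioning events (P, H, Hht, Hhw, Δ, C, Cht); in the actual model the weights λ1 and λ2 are computed from counts and diversity statistics rather than passed in as constants, so treat this purely as an illustration of the backoff structure.

```python
def p_cw(chw, ctx, lam1, lam2, emp_full, emp_no_hw, emp_tag):
    """Equation (1): interpolate the word-pair distribution with its backoffs."""
    backoff = lam2 * emp_no_hw(chw, ctx) + (1 - lam2) * emp_tag(chw, ctx)
    return lam1 * emp_full(chw, ctx) + (1 - lam1) * backoff

def p_cw2(chw, ctx, lam2, emp_no_hw, emp_tag):
    """Equation (2): the reduced model with the lexical-bigram distribution removed."""
    return lam2 * emp_no_hw(chw, ctx) + (1 - lam2) * emp_tag(chw, ctx)
```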
{
"text": "Perhaps the most striking result is just how little the elimination of lexical bigrams a ects the baseline system: performance on the WSJ corpus decreases by less than 0.5 absolute. Moreover, the performance of a WSJ-trained system without lexical bigrams on Brown test data is identical to the WSJtrained system with lexical bigrams. Lexical cooccurrence statistics seem to be of no bene t when attempting to generalize to a new corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The E ect of Lexical Dependencies",
"sec_num": "5"
},
{
"text": "The relatively high performance of a parsing model with no lexical bigram statistics on the WSJ task led us to explore whether it might be possible to signi cantly reduce the size of the parsing model by selectively removing parameters without sacricing performance. Such a technique reduces the parser's memory requirements as well as the overhead of loading and storing the model, which could be desirable for an application where limited computing resources are available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning Parser Parameters",
"sec_num": "6"
},
{
"text": "Signi cant e ort has gone into developing techniques for pruning statistical language models for speech recognition, and we borrow from this work, using the weighted di erence technique of Seymore and Rosenfeld 1996. This technique applies to any statistical model which estimates probabilities by backing o , that is, using probabilities from a less speci c distribution when no data are available are available for the full distribution, as the following equations show for the general case: P ejh =P 1 ejh if e 6 2 BOh = hP 2 ejh 0 if e 2 BOh Here e is the event to be predicted, h is the set of conditioning events or history, is a backo weight, and h 0 is the subset of conditioning events used for the less speci c backo distribution. BOis the backo s e t o f e v ents for which no data are present in the speci c distribution P 1 . In the case of n-gram language modeling, e is the next word to be predicted, and the conditioning events are the n , 1 preceding words. In our case the speci c distribution P 1 of the backo model is P cw of equation 1, itself a linear interpolation of three empirical distributions from the training data. The less speci c distribution P 2 of the backo model is P cw2 of equation 2, an interpolation of two empirical distributions. The backo weight is simply 1 , 1 in our linear interpolation model. The Seymore Rosenfeld pruning technique can be used to prune backo probability models regardless of whether the backo weights are derived from linear interpolation weights or discounting techniques such as Good-Turing. In order to ensure that the model's probabilities still sum to one, the backo Table 3 : Parsing results by training and test corpus weight must be adjusted whenever a parameter is removed from the model. In the Seymore Rosenfeld approach, parameters are pruned according to the following criterion:",
"cite_spans": [],
"ref_spans": [
{
"start": 1636,
"end": 1643,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pruning Parser Parameters",
"sec_num": "6"
},
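The general backed-off estimate described above can be sketched as follows (Python); the dictionaries p1, p2 and alpha and the reduce_history function are assumed placeholders for illustration, not part of the paper's implementation.

```python
class BackoffModel:
    """P(e|h) = P1(e|h) if (e, h) is in the specific distribution,
    otherwise alpha(h) * P2(e|h'), where h' is a reduced history."""

    def __init__(self, p1, p2, alpha, reduce_history):
        self.p1 = p1                        # dict: (e, h) -> specific probability
        self.p2 = p2                        # dict: (e, h') -> backoff probability
        self.alpha = alpha                  # dict: h -> backoff weight
        self.reduce_history = reduce_history

    def prob(self, e, h):
        if (e, h) in self.p1:
            return self.p1[(e, h)]
        return self.alpha[h] * self.p2[(e, self.reduce_history(h))]
```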
{
"text": "Ne; hlog pejh , log p 0 ejh 0 3 where p 0 ejh 0 represents the new backed o probability estimate after removing pejh from the model and adjusting the backo weight, and Ne; h is the count in the training data. This criterion aims to prune probabilities that are similar to their backo estimates, and that are not frequently used. As shown by S t o l c ke 1998, this criterion is an approximation of the relative e n tropy b e t ween the original and pruned distributions, but does not take i n to account the e ect of changing the backo weight on other events' probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning Parser Parameters",
"sec_num": "6"
},
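A hedged sketch of the weighted-difference criterion in equation (3): the score for a parameter is its training count times the log-probability change its removal would cause. The argument names are illustrative assumptions; in the parser, p_backed_off would be the estimate obtained from the less specific distribution after re-adjusting the backoff weight.

```python
import math

def pruning_score(count_eh, p_full, p_backed_off):
    """Equation (3): N(e,h) * (log p(e|h) - log p'(e|h'))."""
    return count_eh * (math.log(p_full) - math.log(p_backed_off))

# Parameters whose score falls below a chosen threshold are removed from the
# specific distribution; the backoff weight is then re-adjusted so that the
# remaining probabilities still sum to one.
```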
{
"text": "Adjusting the threshold below which parameters are pruned allows us to successively remove more and more parameters. Results for di erent v alues of are shown in Table 4 . The complete parsing model derived from the WSJ training set has 735,850 parameters in a total of nine distributions: three levels of backo for each of the three distributions P h , P c and P cw . The lexical bigrams are contained in the most speci c distribution for P cw . Removing all these parameters reduces the total model size by 43. The results show a gradual degradation as more parameters are pruned.",
"cite_spans": [],
"ref_spans": [
{
"start": 162,
"end": 169,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pruning Parser Parameters",
"sec_num": "6"
},
{
"text": "The ten lexical bigrams with the highest scores for the pruning metric are shown in Table 5 for WSJ and Table 6 . The pruning metric of equation 3 has been normalized by corpus size to allow comparison between WSJ and Brown. The only overlap between the two sets is for pairs of unknown word tokens. The WSJ bigrams are almost all speci c to nance, are all word pairs that are likely to appear immediately adjacent to one another, and are all children of the base NP syntactic category. The Brown bigrams, which have lower correlation values by our metric, include verb subject and preposition object relations and seem more broadly applicable as a model of English. However, the pairs are not strongly related semantically, no doubt because the rst term of the pruning criterion favors the most frequent w ords, such as forms of the verbs be\" and have\". ",
"cite_spans": [],
"ref_spans": [
{
"start": 84,
"end": 91,
"text": "Table 5",
"ref_id": "TABREF4"
},
{
"start": 104,
"end": 111,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Pruning Parser Parameters",
"sec_num": "6"
},
{
"text": "Our results show strong corpus e ects for statistical parsing models: a small amount o f matched training data appears to be more useful than a large amount of unmatched data. The standard WSJ task seems to be simpli ed by its homogenous style. Adding training data from from an unmatched corpus doesn't hurt, but doesn't help a great deal either. In particular, lexical bigram statistics appear to be corpus-speci c, and our results show that they Table 4 : Parsing results with pruned probability models. The complete parsing model contains 736K parameters in nine distributions. Removing all lexical bigram parameters reducing the size of the model by 43.",
"cite_spans": [],
"ref_spans": [
{
"start": 449,
"end": 456,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "are of no use when attempting to generalize to new training data. In fact, they are of surprisingly little bene t even for matched training and test data | removing them from the model entirely reduces performance by less than 0.5 on the standard WSJ parsing task. Our selective pruning technique allows for a more ne grained tuning of parser model size, and would be particularly applicable to cases where large amounts of training data are available but memory usage is a consideration. In our implementation, pruning allowed models to run within 256MB that, unpruned, required larger machines. The parsing models of Charniak 2000 and Collins 2000 add more complex features to the parsing model that we use as our baseline. An area for future work is investigation of the degree to which s u c h features apply across corpora, or, on the other hand, further tune the parser to the peculiarities of the Wall Street Journal. Of particular interest are the automatic clusterings of lexical co-occurrences used in Charniak 1997 and Magerman 1995 . Cross-corpus experiments could reveal whether these clusters uncover generally applicable semantic categories for the parser's use.",
"cite_spans": [
{
"start": 1012,
"end": 1029,
"text": "Charniak 1997 and",
"ref_id": "BIBREF1"
},
{
"start": 1030,
"end": 1043,
"text": "Magerman 1995",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
}
],
"back_matter": [
{
"text": "Acknowledgments This work was undertaken as part of the FrameNet project at ICSI, with funding from National Science Foundation grant ITR HCI 0086132.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Using register-diversi ed corpora for general language studies",
"authors": [
{
"first": "Douglas",
"middle": [],
"last": "Biber",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "192",
"issue": "",
"pages": "219--241",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglas Biber. 1993. Using register-diversi ed cor- pora for general language studies. Computational Linguistics, 192:219 241, June.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Statistical parsing with a context-free grammar and word statistics",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 1997,
"venue": "AAAI97",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak. 1997. Statistical parsing with a context-free grammar and word statistics. In AAAI97, Brown University, Providence, Rhode Island, August.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A maximum-entropyinspired parser",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 1st Annual Meeting of the North American Chapter of the ACL NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak. 2000. A maximum-entropy- inspired parser. In Proceedings of the 1st Annual Meeting of the North American Chapter of the ACL NAACL, Seattle, Washington.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A statistical parser for czech",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hajic",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Tillmann",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 37th Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins, Jan Hajic, Lance Ramshaw, and Christoph Tillmann. 1999. A statistical parser for czech. In Proceedings of the 37th Annual Meeting of the ACL, College Park, Maryland.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A new statistical parser based on bigram lexical dependencies",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 34th Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 1996. A new statistical parser based on bigram lexical dependencies. In Proceed- ings of the 34th Annual Meeting of the ACL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Three generative, lexicalised models for statistical parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 35th Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of the 35th Annual Meeting of the ACL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Head-Driven Statistical Models for Natural Language Parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 1999. Head-Driven Statistical Mod- els for Natural Language Parsing. Ph.D. thesis, University o f P ennsylvania, Philadelphia.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Discriminative reranking for natural language parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2000. Discriminative reranking for natural language parsing. In Proceedings of the ICML.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Supervised grammar induction using training data with limited constituent information",
"authors": [
{
"first": "Rebecca",
"middle": [],
"last": "Hwa",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 37th Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rebecca Hwa. 1999. Supervised grammar induction using training data with limited constituent infor- mation. In Proceedings of the 37th Annual Meet- ing of the ACL, College Park, Maryland.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Statistical decision-tree models for parsing",
"authors": [
{
"first": "David",
"middle": [],
"last": "Magerman",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 33rd A nnual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Magerman. 1995. Statistical decision-tree models for parsing. In Proceedings of the 33rd A n- nual Meeting of the ACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Dependency Syntax: Theory and Practice",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ivan",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan A. Me l cuk. 1988. Dependency Syntax: Theory and Practice. State University of New York Press.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A linear observed time statistical parser based on maximum entropy models",
"authors": [
{
"first": "Adwait",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Second Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adwait Ratnaparkhi. 1997. A linear observed time statistical parser based on maximum entropy models. In Proceedings of the Second Conference on Empirical Methods in Natural Language Pro- cessing.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "How verb subcategorization frequencies are a ected by corpus choice",
"authors": [
{
"first": "Douglas",
"middle": [],
"last": "Roland",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of COLING ACL",
"volume": "1122",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglas Roland and Daniel Jurafsky. 1998. How verb subcategorization frequencies are a ected by corpus choice. In Proceedings of COLING ACL, pages 1122 1128.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Verb subcategorization frequency di erences between business-news and balanced corpora: the role of verb sense",
"authors": [
{
"first": "Douglas",
"middle": [],
"last": "Roland",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Lise",
"middle": [],
"last": "Menn",
"suffix": ""
},
{
"first": "Susanne",
"middle": [],
"last": "Gahl",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Elder",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Riddoch",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Association for Computational Linguistics ACL-2000 Workshop on Comparing Corpora",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglas Roland, Daniel Jurafsky, Lise Menn, Su- sanne Gahl, Elizabeth Elder, and Chris Riddoch. 2000. Verb subcategorization frequency di er- ences between business-news and balanced cor- pora: the role of verb sense. In Proceedings of the Association for Computational Linguistics ACL- 2000 Workshop on Comparing Corpora.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Scalable backo language models",
"authors": [
{
"first": "Kristie",
"middle": [],
"last": "Seymore",
"suffix": ""
},
{
"first": "Roni",
"middle": [],
"last": "Rosenfeld",
"suffix": ""
}
],
"year": 1996,
"venue": "ICSLP-96",
"volume": "",
"issue": "",
"pages": "232--235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristie Seymore and Roni Rosenfeld. 1996. Scalable backo language models. In ICSLP-96, v olume 1, pages 232 235, Philadelphia.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Parsing english with a link grammar",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Sleator",
"suffix": ""
},
{
"first": "Davy",
"middle": [],
"last": "Temperley",
"suffix": ""
}
],
"year": 1993,
"venue": "Third International Workshop on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Sleator and Davy Temperley. 1993. Pars- ing english with a link grammar. In Third Inter- national Workshop on Parsing Technologies, Au- gust.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Dependency language modeling",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Chelba",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Engle",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Jimenez",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Mangu",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Printz",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Ristad",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Rosenfeld",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 1996,
"venue": "Summer Workshop Final Report",
"volume": "24",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Stolcke, C. Chelba, D. Engle, V. Jimenez, L. Mangu, H. Printz, E. Ristad, R. Rosenfeld, D. Wu, F. Jelinek, and S. Khudanpur. 1996. De- pendency language modeling. Summer Workshop Final Report 24, Center for Language and Speech Processing, Johns Hopkins University, Baltimore, April.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Entropy-based pruning of backo language models",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. DARPA Broadcast News Transcription and Understanding Workshop",
"volume": "",
"issue": "",
"pages": "270--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke. 1998. Entropy-based pruning of backo language models. In Proc. DARPA Broadcast News Transcription and Understanding Workshop, pages 270 274, Lansdowne, Va.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"content": "<table><tr><td>Training Data Test Set Recall Prec. WSJ WSJ 86.1 86.6 WSJ Brown 80.3 81.0 Brown Brown 83.6 84.6 WSJ+Brown Brown 83.9 84.8 WSJ+Brown WSJ 86.3 86.9</td></tr><tr><td>Table 2: Parsing results by training and test corpus</td></tr><tr><td>Results for the Brown corpus, along with WSJ results for comparison, are shown in</td></tr></table>",
"html": null,
"type_str": "table",
"num": null,
"text": "Corpus sizes. Both test sets were restricted to sentences of 40 words or less. The Brown test set's average sentence was shorter despite the length restriction."
},
"TABREF4": {
"content": "<table><tr><td>Child word Head word Parent Pruning Chw Hhw P Metric It was S .0174 it was S .0169 unk of PP .0156 unk in PP .0097 course Of PP .0090 been had VP .0088 unk unk NPB .0079 they were S .0077 I 'm S .0073 time at PP .0073</td></tr></table>",
"html": null,
"type_str": "table",
"num": null,
"text": "Ten most signi cant lexical bigrams from WSJ, with parent category other syntactic context variables not shown and pruning metric . NPB is Collins' base NP\" category."
},
"TABREF5": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Ten most signi cant lexical bigrams from Brown"
}
}
}
}