{
"paper_id": "W01-0714",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:01:30.869633Z"
},
"title": "Distributional Phrase Structure Induction",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University Stanford",
"location": {
"postCode": "94305-9040",
"region": "CA"
}
},
"email": "klein@cs.stanford.edu"
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University Stanford",
"location": {
"postCode": "94305-9040",
"region": "CA"
}
},
"email": "manning\u00a1@cs.stanford.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Unsupervised grammar induction systems commonly judge potential constituents on the basis of their effects on the likelihood of the data. Linguistic justifications of constituency, on the other hand, rely on notions such as substitutability and varying external contexts. We describe two systems for distributional grammar induction which operate on such principles, using part-of-speech tags as the contextual features. The advantages and disadvantages of these systems are examined, including precision/recall trade-offs, error analysis, and extensibility.",
"pdf_parse": {
"paper_id": "W01-0714",
"_pdf_hash": "",
"abstract": [
{
"text": "Unsupervised grammar induction systems commonly judge potential constituents on the basis of their effects on the likelihood of the data. Linguistic justifications of constituency, on the other hand, rely on notions such as substitutability and varying external contexts. We describe two systems for distributional grammar induction which operate on such principles, using part-of-speech tags as the contextual features. The advantages and disadvantages of these systems are examined, including precision/recall trade-offs, error analysis, and extensibility.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "While early work showed that small, artificial context-free grammars could be induced with the EM algorithm (Lari and Young, 1990) or with chunk-merge systems (Stolcke and Omohundro, 1994) , studies with large natural language grammars have shown that these methods of completely unsupervised acquisition are generally ineffective. For instance, Charniak (1993) describes experiments running the EM algorithm from random starting points, which produced widely varying grammars of extremely poor quality. Because of these kinds of results, the vast majority of statistical parsing work has focused on parsing as a supervised learning problem (Collins, 1997; Charniak, 2000) .",
"cite_spans": [
{
"start": 108,
"end": 130,
"text": "(Lari and Young, 1990)",
"ref_id": "BIBREF11"
},
{
"start": 159,
"end": 188,
"text": "(Stolcke and Omohundro, 1994)",
"ref_id": "BIBREF16"
},
{
"start": 346,
"end": 361,
"text": "Charniak (1993)",
"ref_id": "BIBREF4"
},
{
"start": 641,
"end": 656,
"text": "(Collins, 1997;",
"ref_id": "BIBREF7"
},
{
"start": 657,
"end": 672,
"text": "Charniak, 2000)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "1"
},
{
"text": "It remains an open problem whether an entirely unsupervised method can either produce linguistically sensible grammars or accurately parse free text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "1"
},
{
"text": "However, there are compelling motivations for unsupervised grammar induction. Building supervised training data requires considerable resources, including time and linguistic expertise. Furthermore, investigating unsupervised methods can shed light on linguistic phenomena which are implicitly captured within a supervised parser's supervisory information, and, therefore, often not explicitly modeled in such systems. For example, our system and others have difficulty correctly attaching subjects to verbs above objects. For a supervised CFG parser, this ordering is implicit in the given structure of VP and S constituents, however, it seems likely that to learn attachment order reliably, an unsupervised system will have to model it explicitly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "1"
},
{
"text": "Our goal in this work is the induction of highquality, linguistically sensible grammars, not parsing accuracy. We present two systems, one which does not do disambiguation well and one which does not do it at all. Both take tagged but unparsed Penn treebank sentences as input. 1 To whatever degree our systems parse well, it can be taken as evidence that their grammars are sensible, but no effort was taken to improve parsing accuracy directly.",
"cite_spans": [
{
"start": 278,
"end": 279,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "1"
},
{
"text": "There is no claim that human language acquisition is in any way modeled by the systems described here. However, any success of these methods is evidence of substantial cues present in the data, which could potentially be exploited by humans as well. Furthermore, mistakes made by these systems could indicate points where human acquisition is likely not being driven by these kinds of statistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "1"
},
{
"text": "At the heart of any iterative grammar induction system is a method, implicit or explicit, for deciding how to update the grammar. Two linguistic criteria for constituency in natural language grammars form the basis of this work (Radford, 1988 ):",
"cite_spans": [
{
"start": 228,
"end": 242,
"text": "(Radford, 1988",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "2"
},
{
"text": "1. External distribution: A constituent is a sequence of words which appears in various structural positions within larger constituents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "2"
},
{
"text": "2. Substitutability: A constituent is a sequence of words with (simple) variants which can be substituted for that sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "2"
},
{
"text": "To make use of these intuitions, we use a distributional notion of context. Let \u00a2 be a part-of-speech tag sequence. Every occurence of \u00a2 will be in some context",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "2"
},
{
"text": ", where \u00a3 and \u00a5 are the adjacent tags or sentence boundaries. The distribution over contexts in which \u00a2 occurs is called its signature, which we denote by \u00a7 \u00a9 \u00a2 . Criterion 1 regards constituency itself. Consider the tag sequences IN DT NN and IN DT. The former is a canonical example of a constituent (of category PP), while the later, though strictly more common, is, in general, not a constituent. Frequency alone does not distinguish these two sequences, but Criterion 1 points to a distributional fact which does. In particular, IN DT NN occurs in many environments. It can follow a verb, begin a sentence, end a sentence, and so on. On the other hand, IN DT is generally followed by some kind of a noun or adjective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a3 \u00a4 \u00a2 \u00a6 \u00a5",
"sec_num": null
},
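{
"text": "To make the notion of a signature concrete, the following minimal Python sketch (an editorial illustration, not the authors' code; the corpus layout, the max_len cutoff, and the '#' boundary marker are assumptions) collects the context distribution of every short tag sequence:\nfrom collections import Counter, defaultdict\n\ndef signatures(sentences, max_len=4):\n    # sentences: a list of POS-tag lists; '#' marks a sentence boundary.\n    sigs = defaultdict(Counter)  # tag sequence -> counts of (left, right) contexts\n    for tags in sentences:\n        padded = ['#'] + tags + ['#']\n        for i in range(1, len(padded) - 1):\n            for j in range(i + 1, min(i + 1 + max_len, len(padded))):\n                sigs[tuple(padded[i:j])][(padded[i - 1], padded[j])] += 1\n    return sigs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": null
},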
{
"text": "This example suggests that a sequence's constituency might be roughly indicated by the entropy of its signature, \u00a7 \u00a9 \u00a2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a3 \u00a4 \u00a2 \u00a6 \u00a5",
"sec_num": null
},
{
"text": ". This turns out to be somewhat true, given a few qualifications. Figure 1 shows the actual most frequent constituents along with their rankings by several other measures. Tag entropy by itself gives a list that is not particularly impressive. There are two primary causes for this. One is that uncommon but possible contexts have little impact on the tag entropy value. Given the skewed distribution of short sentences in the treebank, this is somewhat of a problem. To correct for this, let \u00a7 \u00a8 \u00a2 be the uniform distribution over the observed contexts for \u00a2 . Using ! \u00a7 \" \u00a2 would have the obvious effect of boosting rare contexts, and the more subtle effect of biasing the rankings slightly towards more common sequences. However, while ! \u00a7 \u00a9 \u00a2 presumably converges to some sensible limit given infinite data, \u00a7 \" \u00a2 will not, as noise eventually makes all or most counts non-zero. Let # be the uniform distribution over all contexts. The scaled entropy",
"cite_spans": [],
"ref_spans": [
{
"start": 66,
"end": 74,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "\u00a3 \u00a4 \u00a2 \u00a6 \u00a5",
"sec_num": null
},
{
"text": "% $ & \u00a7 \u00a9 \u00a2 ( ' ) \u00a7 \u00a9 \u00a2 1 0 ! \u00a7 2 \u00a2 3 4 ! 5 # 6 8 7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a3 \u00a4 \u00a2 \u00a6 \u00a5",
"sec_num": null
},
{
"text": "turned out to be a useful quantity in practice. Multiplying entropies is not theoretically meaningful, but this quantity does converge to \u00a7 \u00a9 \u00a2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a3 \u00a4 \u00a2 \u00a6 \u00a5",
"sec_num": null
},
{
"text": "given infinite (noisy) data. The list for scaled entropy still has notable flaws, mainly relatively low ranks for common NPs, which does not hurt system perfor- -9 10 JJ NN 7 3 -7 6 3 CD NN 8 -----IN NN 9 --9 10 -IN DT NN 10 -----NN NNS --5 6 3 7 NN NN -8 -10 7 5 TO VB --1 mance, and overly high ranks for short subject-verb sequences, which does. The other fundamental problem with these entropy-based rankings stems from the context features themselves. The entropy values will change dramatically if, for example, all noun tags are collapsed, or if functional tags are split. This dependence on the tagset for constituent identification is very undesirable. One appealing way to remove this dependence is to distinguish only two tags: one for the sentence boundary (#) and another for words. Scaling entropies by the entropy of this reduced signature produces the improved list labeled \"Boundary.\" This quantity was not used in practice because, although it is an excellent indicator of NP, PP, and intransitive S constituents, it gives too strong a bias against other constituents. However, neither system is driven exclusively by the entropy measure used, and duplicating the above rankings more accurately did not always lead to better end results.",
"cite_spans": [],
"ref_spans": [
{
"start": 161,
"end": 300,
"text": "-9 10 JJ NN 7 3 -7 6 3 CD NN 8 -----IN NN 9 --9 10 -IN DT NN 10 -----NN NNS --5 6 3 7 NN NN -8 -10 7 5 TO VB --1",
"ref_id": null
}
],
"eq_spans": [],
"section": "\u00a3 \u00a4 \u00a2 \u00a6 \u00a5",
"sec_num": null
},
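{
"text": "As an illustration of the scaled entropy just defined (a sketch under the reconstruction above, not the authors' code; signatures are assumed stored as context-count dictionaries, and the size of the full context inventory is assumed known):\nimport math\n\ndef entropy(dist):\n    # dist: mapping from outcomes to probabilities\n    return -sum(p * math.log(p, 2) for p in dist.values() if p > 0)\n\ndef scaled_entropy(context_counts, num_all_contexts):\n    total = sum(context_counts.values())\n    sig = {c: n / total for c, n in context_counts.items()}\n    observed_uniform = {c: 1.0 / len(context_counts) for c in context_counts}\n    # H_s = H(signature) * H(uniform over observed contexts) / H(uniform over all contexts)\n    return entropy(sig) * entropy(observed_uniform) / math.log(num_all_contexts, 2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": null
},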
{
"text": "Criterion 2 regards the similarity of sequences. Assume the data were truly generated by a categorically unambiguous PCFG (i.e., whenever a token of a sequence is a constituent, its label is determined) and that we were given infinite data. If so, then two sequences, restricted to those occurrences where they are constituents, would have the same signatures. In practice, the data is finite, not statistically context-free, and even short sequences can be categorically ambiguous. However, it remains true that similar raw signatures indicate similar syntactic behavior. For example, DT JJ NN and DT NN have extremely similar signatures, and both are common NPs. Also, NN IN and NN NN IN have very similar signatures, and both are primarily non-constituents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a3 \u00a4 \u00a2 \u00a6 \u00a5",
"sec_num": null
},
{
"text": "For our experiments, the metric of similarity between sequences was the Jensen-Shannon divergence of the sequences' signatures:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a3 \u00a4 \u00a2 \u00a6 \u00a5",
"sec_num": null
},
{
"text": "9 JS\u00a8 \u00a7 A @ 1 B C \u00a7 \" D & \u00a9 ' @ D 0 9 KL\u00a8 \u00a7 6 @ 4 E 5 F 4 G I H 2 F Q P D 4 R 9 KL\u00a8 \u00a7 D S E 5 F 4 G T H 2 F Q P D 8 7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a3 \u00a4 \u00a2 \u00a6 \u00a5",
"sec_num": null
},
{
"text": "Where 9 KL is the Kullback-Leibler divergence between probability distributions. Of course, just as various notions of context are possible, so are various metrics between signatures. The issues of tagset dependence and data skew did not seem to matter for the similarity measure, and unaltered Jensen-Shannon divergence was used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a3 \u00a4 \u00a2 \u00a6 \u00a5",
"sec_num": null
},
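{
"text": "A direct transcription of this metric (illustrative only; signatures are plain dictionaries mapping contexts to probabilities):\nimport math\n\ndef kl(p, q):\n    # Kullback-Leibler divergence D_KL(p || q); q must cover the support of p.\n    return sum(pi * math.log(pi / q[k], 2) for k, pi in p.items() if pi > 0)\n\ndef js(p, q):\n    # Jensen-Shannon divergence between two signatures.\n    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in set(p) | set(q)}\n    return 0.5 * kl(p, m) + 0.5 * kl(q, m)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": null
},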
{
"text": "Given these ideas, section 4.1 discusses a system whose grammar induction steps are guided by sequence entropy and interchangeability, and section 4.2 discusses a maximum likelihood system where the objective being maximized is the quality of the constituent/non-constituent distinction, rather than the likelihood of the sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a3 \u00a4 \u00a2 \u00a6 \u00a5",
"sec_num": null
},
{
"text": "Viewing grammar induction as a search problem, there are three principal ways in which one can induce a \"bad\" grammar: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problems with ML/MDL",
"sec_num": "2.1"
},
{
"text": "Be too sensitive to initial conditions. Our current systems primarily attempt to address the first two points. Common objective functions include maximum likelihood (ML) which asserts that a good grammar is one which best encodes or compresses the given data. This is potentially undesirable for two reasons. First, it is strongly data-dependent. The grammar",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "U",
"sec_num": null
},
{
"text": "V which maximizes \u1e849 E X V Y depends on the corpus 9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "U",
"sec_num": null
},
{
"text": ", which, in some sense, the core of a given language's phrase structure should not. Second, and more importantly, in an ML approach, there is pressure for the symbols and rules in a PCFG to align in ways which maximize the truth of the conditional independence assumptions embodied by that PCFG. The symbols and rules of a natural language grammar, on the other hand, represent syntactically and semantically coherent units, for which a host of linguistic arguments have been made (Radford, 1988) . None of these arguments have anything to do with conditional independence; traditional linguistic con-stituency reflects only grammatical possibilty of expansion. Indeed, there are expected to be strong connections across phrases (such as are captured by argument dependencies). For example, in the treebank data used, CD CD is a common object of a verb, but a very rare subject. However, a linguist would take this as a selectional characteristic of the data set, not an indication that CD CD is not an NP. Of course, it could be that the ML and linguistic criteria align, but in practice they do not always seem to, and one should not expect that, by maximizing the former, one will also maximize the latter.",
"cite_spans": [
{
"start": 481,
"end": 496,
"text": "(Radford, 1988)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "U",
"sec_num": null
},
{
"text": "Another common objective function is minimum description length (MDL), which asserts that a good analysis is a short one, in that the joint encoding of the grammar and the data is compact. The \"compact grammar\" aspect of MDL is perhaps closer to some traditional linguistic argumentation which at times has argued for minimal grammars on grounds of analytical (Harris, 1951) or cognitive (Chomsky and Halle, 1968) economy. However, some CFGs which might possibly be seen as the acquisition goal are anything but compact; take the Penn treebank covering grammar for an extreme example. Another serious issue with MDL is that the target grammar is presumably bounded in size, while adding more and more data will on average cause MDL methods to choose ever larger grammars.",
"cite_spans": [
{
"start": 360,
"end": 374,
"text": "(Harris, 1951)",
"ref_id": "BIBREF10"
},
{
"start": 388,
"end": 413,
"text": "(Chomsky and Halle, 1968)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "U",
"sec_num": null
},
{
"text": "In addition to optimizing questionable objective functions, many systems begin their search procedure from an extremely unfavorable region of the grammar space. For example, the randomly weighted grammars in Carroll and Charniak (1992) rarely converged to remotely sensible grammars. As they point out, and quite independently of whether ML is a good objective function, the EM algorithm is only locally optimal, and it seems that the space of PCFGs is riddled with numerous local maxima. Of course, the issue of initialization is somewhat tricky in terms of the bias given to the system; for example, Brill (1994) begins with a uniformly rightbranching structure. For English, right-branching structure happens to be astonishingly good both as an initial point for grammar learning and even as a baseline parsing model. However, it would be unlikely to perform nearly as well for a VOS language like Malagasy or VSO languages like Hebrew.",
"cite_spans": [
{
"start": 208,
"end": 235,
"text": "Carroll and Charniak (1992)",
"ref_id": "BIBREF3"
},
{
"start": 602,
"end": 614,
"text": "Brill (1994)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "U",
"sec_num": null
},
{
"text": "Whether grammar induction is viewed as a search problem or a clustering problem is a matter of per-spective, and the two views are certainly not mutually exclusive. The search view focuses on the recursive relationships between the non-terminals in the grammar. The clustering view, which is perhaps more applicable to the present work, focuses on membership of (terminal) sequences to classes represented by the non-terminals. For example, the non-terminal symbol NP can be thought of as a cluster of (terminal) sequences which can be generated starting from NP. This clustering is then inherently soft clustering, since sequences can be ambiguous.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search vs. Clustering",
"sec_num": "3"
},
{
"text": "Unlike standard clustering tasks, though, a sequence token in a given sentence need not be a constituent at all. For example, DT NN is an extremely common NP, and when it occurs, it is a constituent around 82% of the time in the data. However, when it occurs as a subsequence of DT NN NN it is usually not a constituent. In fact, the difficult decisions for a supervised parser, such as attachment level or coordination scope, are decisions as to which sequences are constituents, not what their tags would be if they were. For example, DT NN IN DT NN is virtually always an NP when it is a constituent, but it is only a constituent 66% of the time, mostly because the PP, IN DT NN, is attached elsewhere.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search vs. Clustering",
"sec_num": "3"
},
{
"text": "One way to deal with this issue is to have an explicit class for \"not a constituent\" (see section 4.2). There are difficulties in modeling such a class, mainly stemming from the differences between this class and the constituent classes. In particular, this class will not be distributionally cohesive. Also, for example, DT NN and DT JJ NN being generally of category NP seems to be a highly distributional fact, while DT NN not being a constituent in the context DT NN NN seems more properly modeled by the competing productions of the grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search vs. Clustering",
"sec_num": "3"
},
{
"text": "Another approach is to model the nonconstituents either implicitly or independently of the clustering model (see section 4.1). The drawback to insufficiently modeling non-constituency is that for acquisition systems which essentially work bottom-up, non-constituent chunks such as NN IN or IN DT are hard to rule out locally.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search vs. Clustering",
"sec_num": "3"
},
{
"text": "We present two systems. The first, GREEDY-MERGE, learns symbolic CFGs for partial parsing. ity that a constituent is realized as that sequence (see figure 1 ). It produces full binary parses.",
"cite_spans": [],
"ref_spans": [
{
"start": 148,
"end": 156,
"text": "figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Systems",
"sec_num": "4"
},
{
"text": "GREEDY-MERGE is a precision-oriented system which, to a first approximation, can be seen as an agglomerative clustering process over sequences. For each pair of sequences, a normalized divergence is calculated as follows:\u00a8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GREEDY-MERGE",
"sec_num": "4.1"
},
{
"text": "\u00a2 ( B a b ' c d f e h gF g p i S q rF g p s t q u q v x w gF g p i S q u qH v y w gF g s q q",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GREEDY-MERGE",
"sec_num": "4.1"
},
{
"text": "The pair with the least divergence is merged. 2 Merging two sequences involves the creation of a single new non-terminal category which rewrites as either sequence. Once there are non-terminal categories, the definitions of sequences and contexts become slightly more complex. The input sentences are parsed with the previous grammar state, using a shallow parser which ties all parentless nodes together under a TOP root node. Sequences are then the ordered sets of adjacent sisters in this parse, and the context of a sequence can either be the preceding and following tags or a higher node in the tree. Merging a sequence and a single non-terminal results in a rule which rewrites the non-terminal as the sequence (i.e., that sequence is added to that nonterminal's class), and merging two non-terminals involves collapsing the two symbols in the grammar (i.e., those classes are merged). After the merge, re-analysis of the grammar rule RHSs is necessary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GREEDY-MERGE",
"sec_num": "4.1"
},
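{
"text": "A sketch of a single greedy step under these definitions (illustrative; it reuses the entropy and js helpers sketched earlier, takes the candidate set and signature table as given, and omits all grammar bookkeeping, reparsing, and RHS re-analysis):\ndef best_merge(candidates, sig):\n    # Normalized divergence as reconstructed above: the Jensen-Shannon\n    # divergence of the two signatures, scaled by the signatures' entropies.\n    def div(a, b):\n        return js(sig[a], sig[b]) / (entropy(sig[a]) + entropy(sig[b]))\n    pairs = [(a, b) for a in candidates for b in candidates if a < b]\n    return min(pairs, key=lambda ab: div(*ab))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GREEDY-MERGE",
"sec_num": "4.1"
},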
{
"text": "An important point about GREEDY-MERGE is that stopping the system at the correct point is critical. Since our greedy criterion is not a measure over entire grammar states, we have no way to detect the optimal point beyond heuristics (the same category appears in several merges in a row, for example) or by using a small supervision set to detect a parse performance drop. The figures shown are from stopping the system manually just before the first significant drop in parsing accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GREEDY-MERGE",
"sec_num": "4.1"
},
{
"text": "The grammar rules produced by the system are a strict subset of general CFG rules in several ways. First, no unary rewriting is learned. Second, no nonterminals which have only a single rewrite are ever proposed, though this situation can occur as a result of later merges. The effect of these restrictions is discussed below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GREEDY-MERGE",
"sec_num": "4.1"
},
{
"text": "The second system, CONSTITUENCY-PARSER, is recall-oriented. Unlike GREEDY-MERGE, this system always produces a full, binary parse of each input sentence. However, its parsing behavior is secondary. It is primarily a clustering system which views the data as the entire set of (sequence, context) pairs\u00a2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONSTITUENCY-PARSER",
"sec_num": "4.2"
},
{
"text": "that occurred in the sentences. Each pair token comes from some specific sentence and is classified with a binary judgement of that token's constituency in that sentence. We assume that these pairs are generated by the following model:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a9 B \u00a3 6",
"sec_num": null
},
{
"text": "\u1e84 \u00a2 \u00a9 B \u00a3 6 ( ' I r \u1e84 \u00a2 E Q \u1e84 5 \u00a3 E Q \u1e84 Q",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a9 B \u00a3 6",
"sec_num": null
},
{
"text": "We use EM to maximize the likelihood of these pairs given the hidden judgements , subject to the constraints that the judgements for the pairs from a given sentence must form a valid binary parse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a9 B \u00a3 6",
"sec_num": null
},
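{
"text": "Written out as code, the pair model is a two-component mixture over the hidden judgement c (a sketch; the parameter dictionaries are assumptions, and the EM re-estimation under the binary-parse constraint is not shown):\ndef pair_prob(seq, ctx, p_seq, p_ctx, p_c):\n    # P(alpha, (x, y)) = sum over c in {True, False} of P(alpha | c) P((x, y) | c) P(c)\n    return sum(p_seq[c].get(seq, 0.0) * p_ctx[c].get(ctx, 0.0) * p_c[c]\n               for c in (True, False))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONSTITUENCY-PARSER",
"sec_num": null
},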
{
"text": "Initialization was either done by giving initial seeds for the probabilities above or by forcing a certain set of parses on the first round. To do the reestimation, we must have some method of deciding which binary bracketing to prefer. As we are considering each pair independently from the rest of the parse, this model does not correspond to a generative model of the kind standardly associated with PCFGs, but can be seen as a random field over the possible parses, with the features being the sequences and contexts (see (Abney, 1997) ). However, note that we were primarily interested in the clustering behavior, not the parsing behavior, and that the random field parameters have not been fit to any distribution over trees. The parsing model is very crude, primarily serving to eliminate systematically mutually incompatible analyses.",
"cite_spans": [
{
"start": 526,
"end": 539,
"text": "(Abney, 1997)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a9 B \u00a3 6",
"sec_num": null
},
{
"text": "Since this system does not postulate any nonterminal symbols, but works directly with terminal sequences, sparsity will be extremely severe for any reasonably long sequences. Substantial smoothing was done to all terms; for the \u1e84 E\u00a2 estimates we interpolated the previous counts equally with a uniform \u1e84 Q",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sparsity",
"sec_num": "4.2.1"
},
{
"text": ", otherwise most sequences would remain locked in their initial behaviors. This heavy smoothing made rare sequences behave primarily according to their contexts, removed the initial invariance problem, and, after a few rounds of re-estimation, had little effect on parser performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sparsity",
"sec_num": "4.2.1"
},
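{
"text": "The smoothing described above amounts to equal-weight interpolation with a uniform distribution; a minimal sketch (the dictionary layout and the explicit vocabulary argument are assumptions):\ndef smoothed(counts, vocab):\n    # Mix re-estimated relative frequencies equally with a uniform\n    # distribution over vocab, so rare events are not locked at zero.\n    total = sum(counts.values()) or 1\n    return {v: 0.5 * counts.get(v, 0) / total + 0.5 / len(vocab) for v in vocab}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sparsity",
"sec_num": "4.2.1"
},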
{
"text": "CONSTITUENCY-PARSER's behavior is determined by the initialization it is given, either by initial parameter estimates, or fixed first-round parses. We used four methods: RANDOM, ENTROPY, RIGHT-BRANCH, and GREEDY.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters",
"sec_num": "4.2.2"
},
{
"text": "For RANDOM, we initially parsed randomly. For ENTROPY, we weighted",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters",
"sec_num": "4.2.2"
},
{
"text": "\u1e84 S E\u00a2 proportionally to % $ Q \u00a7 \u00a9 \u00a2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters",
"sec_num": "4.2.2"
},
{
"text": ". For RIGHTBRANCH, we forced rightbranching structures (thereby introducing a bias towards English structure). Finally, GREEDY used the output from GREEDY-MERGE (using the grammar state in figure 3) to parse initially.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters",
"sec_num": "4.2.2"
},
{
"text": "Two kinds of results are presented.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "First, we discuss the grammars learned by GREEDY-MERGE and the constituent distributions learned by CONSTITUENCY-PARSER. Then we apply both systems to parsing free text from the WSJ section of the Penn treebank. Figure 3 shows a grammar learned at one stage of a run of GREEDY-MERGE on the sentences in the WSJ section of up to 10 words after the removal of punctuation (q 7500 sentences). The non-terminal categories proposed by the systems are internally given arbitrary designations, but we have relabeled them to indicate the best recall match for each.",
"cite_spans": [],
"ref_spans": [
{
"start": 212,
"end": 220,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Categories corresponding to NP, VP, PP, and S are learned, although some are split into sub-categories (transitive and intransitive VPs, proper NPs and two Figure 4 : A learned grammar (with verbs split).",
"cite_spans": [],
"ref_spans": [
{
"start": 156,
"end": 164,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Grammars learned by GREEDY-MERGE",
"sec_num": "5.1"
},
{
"text": "grammar where X t X X and X t (any terminal). However, very incorrect merges are sometimes made relatively early on (such as merging VPs with PPs, or merging the sequences IN NNP IN and IN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammars learned by GREEDY-MERGE",
"sec_num": "5.1"
},
{
"text": "The CONSTITUENCY-PARSER's state is not a symbolic grammar, but estimates of constituency for terminal sequences. These distributions, while less compelling a representation for syntactic knowledge than CFGs, clearly have significant facts about language embedded in them, and accurately learning them can be seen as a kind of acquisiton. Figure 5 shows the sequences whose constituency counts are most incorrect for the GREEDY-RE setting. An interesting analysis given by the system is the constituency of NNP POS NN sequences as NNP (POS NN) which is standard in linguistic analyses (Radford, 1988) , as opposed to the treebank's systematic (NNP POS) NN. Other common errors, like the overcount of JJ NN or JJ NNS are partially due to parsing inside NPs which are flat in the treebank (see section 5.3).",
"cite_spans": [
{
"start": 584,
"end": 599,
"text": "(Radford, 1988)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 338,
"end": 346,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "CONSTITUENCY-PARSER's Distributions",
"sec_num": "5.2"
},
{
"text": "It is informative to see how re-estimation with CONSTITUENCY-PARSER improves and worsens the GREEDY-MERGE initial parses. Coverage is improved; for example NPs and PPs involving the CD tag are consistently parsed as constituents while GREEDY-MERGE did not include them in parses at all. On the other hand, the GREEDY-MERGE sys- identified as constituents by CONSTITUENCY-PARSER using GREEDY-RE (ENTROPY-RE is similar). \"Total\" is the frequency of the sequence in the flat data. \"True\" is the frequency as a constituent in the treebank's parses. \"Estimated\" is the frequency as a constituent in the system's parses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONSTITUENCY-PARSER's Distributions",
"sec_num": "5.2"
},
{
"text": "tem had learned the standard subject-verb-object attachment order, though this has disappeared, as can be seen in the undercounts of VP sequences. Since many VPs did not fit the conservative VP grammar in figure 3, subjects and verbs were often grouped together frequently even on the initial parses, and the CONSTITUENCY-PARSER has a further bias towards over-identifying frequent constituents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONSTITUENCY-PARSER's Distributions",
"sec_num": "5.2"
},
{
"text": "Some issues impact the way the results of parsing treebank sentences should be interpreted. Both systems, but especially the CONSTITUENCY-PARSER, tend to form verb groups and often attach the subject below the object for transitive verbs. \") , an arbitrary inconsistency which is unlikely to be learned automatically. The treebank is also, somewhat purposefully, very flat. For example, there is no analysis of the inside of many short noun phrases. The GREEDY-MERGE grammars above, however, give a (correct) analysis of the insides of NPs like DT JJ NN NN for which it will be penalized in terms of unlabeled precision (though not crossing brackets) when compared to the treebank. An issue with GREEDY-MERGE is that the grammar learned is symbolic, not probabilistic. Any disambiguation is done arbitrarily. Therefore, even adding a linguistically valid rule can degrade numerical performance (sometimes dramatically) by introducing ambiguity to a greater degree than it improves coverage.",
"cite_spans": [],
"ref_spans": [
{
"start": 239,
"end": 241,
"text": "\")",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parsing results",
"sec_num": "5.3"
},
{
"text": "In figure 6, we report summary results for each system on the w 10-word sentences of the WSJ section. GREEDY is the above snapshot of the GREEDY-MERGE system. RANDOM, EN-TROPY, and RIGHTBRANCH are the behaviors of the random-parse baseline, the right-branching baseline, and the entropy-scored initialization for CONSTITUENCY-PARSER. The -RE settings are the result of context-based re-estimation from the respective baselines using CONSTITUENCY-PARSER. 6 NCB precision is the percentage of pro- 5 The RIGHTBRANCH baseline is in the opposite situation. Its high overall figures are in a large part due to extremely high VP accuracy, while NP and PP accuracy (which is more important for tasks such as information extraction) is very low.",
"cite_spans": [
{
"start": 496,
"end": 497,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing results",
"sec_num": "5.3"
},
{
"text": "6 RIGHTBRANCH was invariant under re-estimation, and RIGHTBRANCH-RE is therefore omitted. posed brackets which do not cross a correct bracket. Recall is also shown separately for VPs and NPs to illustrate the VP effect noted above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing results",
"sec_num": "5.3"
},
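{
"text": "For reference, the NCB precision used here can be computed in a few lines (a sketch; brackets are assumed to be (start, end) spans over the same sentence):\ndef crosses(a, b):\n    (s1, e1), (s2, e2) = a, b\n    return s1 < s2 < e1 < e2 or s2 < s1 < e2 < e1\n\ndef ncb_precision(proposed, gold):\n    # Fraction of proposed brackets that do not cross any gold bracket.\n    good = [b for b in proposed if not any(crosses(b, g) for g in gold)]\n    return len(good) / len(proposed) if proposed else 0.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing results",
"sec_num": "5.3"
},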
{
"text": "The general results are encouraging. GREEDY is, as expected, higher precision than the other settings. Re-estimation from that initial point improves recall at the expense of precision. In general, reestimation improves parse accuracy, despite the indirect relationship between the criterion being maximized (constituency cluster fit) and parse quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing results",
"sec_num": "5.3"
},
{
"text": "This study presents preliminary investigations and has several significant limitations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations of this study",
"sec_num": "6"
},
{
"text": "A possible criticism of this work is that it relies on part-of-speech tagged data as input. In particular, while there has been work on acquiring parts-ofspeech distributionally (Finch et al., 1995; Sch\u00fctze, 1995) , it is clear that manually constructed tag sets and taggings embody linguistic facts which are not generally detected by a distributional learner. For example, transitive and intransitive verbs are identically tagged yet distributionally dissimilar.",
"cite_spans": [
{
"start": 178,
"end": 198,
"text": "(Finch et al., 1995;",
"ref_id": "BIBREF8"
},
{
"start": 199,
"end": 213,
"text": "Sch\u00fctze, 1995)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tagged Data",
"sec_num": "6.1"
},
{
"text": "In principle, an acquisition system could be designed to exploit non-distributionality in the tags. For example, verb subcategorization or selection could be induced from the ways in which a given lexical verb's distribution differs from the average, as in (Resnik, 1993) . However, rather than being exploited by the systems here, the distributional nonunity of these tags appears to actually degrade performance. As an example, the systems more reliably group verbs and their objects together (rather than verbs and their subjects) when transitive and intransitive verbs are given separate tags.",
"cite_spans": [
{
"start": 257,
"end": 271,
"text": "(Resnik, 1993)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tagged Data",
"sec_num": "6.1"
},
{
"text": "Future experiments will investigate the impact of distributional tagging, but, despite the degradation in tag quality that one would expect, it is also possible that some current mistakes will be corrected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tagged Data",
"sec_num": "6.1"
},
{
"text": "For GREEDY-MERGE, the primary limitations are that there is no clear halting condition, there is no ability to un-merge or to stop merging existing classes while still increasing coverage, and the system is potentially very sensitive to the tagset used. For CONSTITUENCY-PARSER, the primary limitations are that no labels or recursive grammars are learned, and that the behavior is highly dependent on initialization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Individual system limitations",
"sec_num": "6.2"
},
{
"text": "We present two unsupervised grammar induction systems, one of which is capable of producing declarative, linguistically plausible grammars and another which is capable of reliably identifying frequent constituents. Both parse free text with accuracy rivaling that of weakly supervised systems. Ongoing work includes lexicalization, incorporating unary rules, enriching the models learned, and addressing the limitations of the systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "The Penn tag and category sets used in examples in this paper are documented in Manning andSch\u00fctze (1999, 413).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We required that the candidates be among the 250 most frequent sequences. The exact threshold was not important, but without some threshold, long singleton sequences with zero divergence are always chosen. This suggests that we need a greater bias towards quantity of evidence in our basic method.3 An option which was not tried would be to consider a nonterminal as a distribution over the tags of the right or left corners of the sequences belonging to that non-terminal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "kinds of common NPs, and so on). 4 Provided one is willing to accept a verb-group analysis, this grammar seems sensible, though quite a few constructions, such as relative clauses, are missing entirely. Figure 4 shows a grammar learned at one stage of a run when verbs were split by transitivity. This grammar is similar, but includes analyses of sentencial coordination and adverbials, and subordinate clauses. The only rule in this grammar which seems overly suspect is ZVP t IN ZS which analyzes complementized subordinate clauses as VPs.In general, the major mistakes the GREEDY-MERGE system makes are of three sorts: U Mistakes of omission. Even though the grammar shown has correct, recursive analyses of many categories, no rule can non-trivially incorporate a number (CD). There is also no analysis for many common constructions.",
"cite_spans": [],
"ref_spans": [
{
"start": 203,
"end": 211,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
{
"text": "Alternate analyses. The system almost invariably forms verb groups, merging MD VB sequences with single main verbs to form verb group constituents (argued for at times by some linguists (Halliday, 1994) ). Also, PPs are sometimes attached to NPs below determiners (which is in fact a standard linguistic analysis (Abney, 1987)). It is not always clear whether these analyses should be considered mistakes. U Over-merging. These errors are the most serious. Since at every step two sequences are merged, the process will eventually learn the 4 Splits often occur because unary rewrites are not learned in the current system. ",
"cite_spans": [
{
"start": 186,
"end": 202,
"text": "(Halliday, 1994)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "U",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The English Noun Phrase in its Sentential Aspect",
"authors": [
{
"first": "Stephen",
"middle": [
"P"
],
"last": "Abney",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen P. Abney. 1987. The English Noun Phrase in its Sen- tential Aspect. Ph.D. thesis, MIT.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Stochastic attribute-value grammars",
"authors": [
{
"first": "Steven",
"middle": [
"P"
],
"last": "Abney",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "4",
"pages": "597--618",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven P. Abney. 1997. Stochastic attribute-value grammars. Computational Linguistics, 23(4):597-618.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic grammar induction and parsing free text: A transformation-based approach",
"authors": [
{
"first": "E",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 1994,
"venue": "Proc. ARPA Human Language Technology Workshop '93",
"volume": "",
"issue": "",
"pages": "237--242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Brill. 1994. Automatic grammar induction and parsing free text: A transformation-based approach. In Proc. ARPA Hu- man Language Technology Workshop '93, pages 237-242, Princeton, NJ.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Two experiments on learning probabilistic dependency grammars from corpora",
"authors": [
{
"first": "Glenn",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 1992,
"venue": "Working Notes of the Workshop Statistically-Based NLP Techniques",
"volume": "",
"issue": "",
"pages": "1--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Glenn Carroll and Eugene Charniak. 1992. Two experiments on learning probabilistic dependency grammars from cor- pora. In Carl Weir, Stephen Abney, Ralph Grishman, and Ralph Weischedel, editors, Working Notes of the Workshop Statistically-Based NLP Techniques, pages 1-13. AAAI Press, Menlo Park, CA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Statistical Language Learning",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak. 1993. Statistical Language Learning. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A maximum-entropy-inspired parser",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2000,
"venue": "NAACL 1",
"volume": "",
"issue": "",
"pages": "132--139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak. 2000. A maximum-entropy-inspired parser. In NAACL 1, pages 132-139.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The Sound Pattern of English",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Chomsky",
"suffix": ""
},
{
"first": "Morris",
"middle": [],
"last": "Halle",
"suffix": ""
}
],
"year": 1968,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noam Chomsky and Morris Halle. 1968. The Sound Pattern of English. Harper & Row, New York.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Three generative, lexicalised models for statistical parsing",
"authors": [
{
"first": "Michael",
"middle": [
"John"
],
"last": "Collins",
"suffix": ""
}
],
"year": 1997,
"venue": "ACL 35/EACL 8",
"volume": "",
"issue": "",
"pages": "16--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael John Collins. 1997. Three generative, lexicalised mod- els for statistical parsing. In ACL 35/EACL 8, pages 16-23.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Acquiring syntactic information from distributional statistics",
"authors": [
{
"first": "Steven",
"middle": [
"P"
],
"last": "Finch",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Chater",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Redington",
"suffix": ""
}
],
"year": 1995,
"venue": "Connectionist models of memory and language",
"volume": "",
"issue": "",
"pages": "229--242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven P. Finch, Nick Chater, and Martin Redington. 1995. Ac- quiring syntactic information from distributional statistics. In J. Levy, D. Bairaktaris, J. A. Bullinaria, and P. Cairns, ed- itors, Connectionist models of memory and language, pages 229-242. UCL Press, London.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "An introduction to functional grammar",
"authors": [
{
"first": "M",
"middle": [
"A K"
],
"last": "Halliday",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. A. K. Halliday. 1994. An introduction to functional gram- mar. Edward Arnold, London, 2nd edition.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Methods in Structural Linguistics. University of",
"authors": [
{
"first": "Zellig",
"middle": [],
"last": "Harris",
"suffix": ""
}
],
"year": 1951,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zellig Harris. 1951. Methods in Structural Linguistics. Uni- versity of Chicago Press, Chicago.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The estimation of stochastic context-free grammars using the inside-outside algorithm",
"authors": [
{
"first": "K",
"middle": [],
"last": "Lari",
"suffix": ""
},
{
"first": "S",
"middle": [
"J"
],
"last": "Young",
"suffix": ""
}
],
"year": 1990,
"venue": "Computer Speech and Language",
"volume": "4",
"issue": "",
"pages": "35--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Lari and S. J. Young. 1990. The estimation of stochastic context-free grammars using the inside-outside algorithm. Computer Speech and Language, 4:35-56.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Foundations of Statistical Natural Language Processing",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning and Hinrich Sch\u00fctze. 1999. Foun- dations of Statistical Natural Language Processing. MIT Press, Boston, MA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Transformational Grammar. Cambridge University Press",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Radford",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Radford. 1988. Transformational Grammar. Cam- bridge University Press, Cambridge.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Selection and Information: A Class-Based Approach to Lexical Relationships",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Stuart Resnik",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Stuart Resnik. 1993. Selection and Information: A Class-Based Approach to Lexical Relationships. Ph.D. the- sis, University of Pennsylvania.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Distributional part-of-speech tagging",
"authors": [
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1995,
"venue": "EACL 7",
"volume": "",
"issue": "",
"pages": "141--148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hinrich Sch\u00fctze. 1995. Distributional part-of-speech tagging. In EACL 7, pages 141-148.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Inducing probabilistic grammars by Bayesian model merging",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"M"
],
"last": "Omohundro",
"suffix": ""
}
],
"year": 1994,
"venue": "Grammatical Inference and Applications: Proceedings of the Second International Colloquium on Grammatical Inference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke and Stephen M. Omohundro. 1994. Induc- ing probabilistic grammars by Bayesian model merging. In Grammatical Inference and Applications: Proceedings of the Second International Colloquium on Grammatical In- ference. Springer Verlag.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "Optimize the wrong objective function.",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "Choose bad initial conditions.",
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"uris": null,
"text": "The rules it learns are of high quality (see figures 3 and 4), but parsing coverage is relatively shallow. The second, CONSTITUENCY-PARSER, learns distributions over sequences representing the probabil-The possible contexts of a sequence.",
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"uris": null,
"text": "Unlabeled precision (left) and recall (right) values for various settings.",
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"uris": null,
"text": "Sequences most commonly over-and under-",
"type_str": "figure"
}
}
}
}