|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T01:08:36.350318Z" |
|
}, |
|
"title": "Latent Tree Learning with Ordered Neurons: What Parses Does It Produce?", |
|
"authors": [ |
|
{ |
|
"first": "Yian", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "New York University", |
|
"location": {} |
|
}, |
|
"email": "yian.zhang@nyu.edu" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Recent latent tree learning models can learn constituency parsing without any exposure to human-annotated tree structures. One such model is ON-LSTM (Shen et al., 2019), which is trained on language modelling and has nearstate-of-the-art performance on unsupervised parsing. In order to better understand the performance and consistency of the model as well as how the parses it generates are different from gold-standard PTB parses, we replicate the model with different restarts and examine their parses. We find that (1) the model has reasonably consistent parsing behaviors across different restarts, (2) the model struggles with the internal structures of complex noun phrases, (3) the model has a tendency to overestimate the height of the split points right before verbs. We speculate that both problems could potentially be solved by adopting a different training task other than unidirectional language modelling.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Recent latent tree learning models can learn constituency parsing without any exposure to human-annotated tree structures. One such model is ON-LSTM (Shen et al., 2019), which is trained on language modelling and has nearstate-of-the-art performance on unsupervised parsing. In order to better understand the performance and consistency of the model as well as how the parses it generates are different from gold-standard PTB parses, we replicate the model with different restarts and examine their parses. We find that (1) the model has reasonably consistent parsing behaviors across different restarts, (2) the model struggles with the internal structures of complex noun phrases, (3) the model has a tendency to overestimate the height of the split points right before verbs. We speculate that both problems could potentially be solved by adopting a different training task other than unidirectional language modelling.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Grammar induction is the task of learning the grammar of a target corpus without exposure to the parsing ground truth or any expert-labeled tree structures (Charniak and Carroll, 1992; Klein and Manning, 2002) . Recently emerging latent tree learning models provide a new approach to this problem Maillard et al., 2017; Choi et al., 2018; Shen et al., 2018; Kim et al., 2019) . They learn syntactic parsing under only indirect supervision from their main training tasks such as language modelling and natural language inference.", |
|
"cite_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 184, |
|
"text": "(Charniak and Carroll, 1992;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 185, |
|
"end": 209, |
|
"text": "Klein and Manning, 2002)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 297, |
|
"end": 319, |
|
"text": "Maillard et al., 2017;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 320, |
|
"end": 338, |
|
"text": "Choi et al., 2018;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 339, |
|
"end": 357, |
|
"text": "Shen et al., 2018;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 358, |
|
"end": 375, |
|
"text": "Kim et al., 2019)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this study, we analyze ON-LSTM (Shen et al., 2019) , a new latent tree learning model that set the state of the art on unsupervised constituency parsing on WSJ test (Marcus et al., 1993) when it was published at ICLR 2019. The model is trained on language modelling and can generate binary constituency parsing trees of input sentences like the one in Figure 1 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 53, |
|
"text": "(Shen et al., 2019)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 168, |
|
"end": 189, |
|
"text": "(Marcus et al., 1993)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 355, |
|
"end": 363, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "As far as we know, though there is an excellent theoretical analysis paper of the ON-LSTM model that focuses on the model's architecture and its parsing algorithm, there is no systematic analysis of the parses the model generates. There are no in-depth investigations of (i) whether the model's parsing behavior is consistent among different restarts or (ii) how the parses it produces are different from PTB gold standards. Answering these questions is crucial for a better understanding of the capability of the model and may bring insights into how to build more advanced latent tree learning models in the future.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Therefore, we replicate the model with 5 random restarts and look into the parses it generates. We find that (1) ON-LSTM has fairly consistent parsing behaviors across different restarts, achieving a self F1 of 65.7 on WSJ test. (2) The model struggles to correctly parse the internal structures of complex noun phrases. (3) The model has a consistent tendency to overestimate the height of the split points right before verbs or auxiliary verbs, leading to a major difference between its parses and the Penn Treebank gold-standard parses. We speculate that both problems can be explained by the training task, unidirectional language modelling, and thus we hypothesize that training a bidirectional model on a more syntax-related task like acceptability judgement might be a good choice for future latent tree learning models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "ST-Gumbel (Choi et al., 2018) and RL-SPINN (Yogatama et al., 2017) are two earlier latent tree learning models. These models are designed to learn to parse input sentences in order to help solve a downstream sentence understanding task such as natural language inference. Since they are not designed to approximate PTB grammar (Marcus et al., 1993) , their unsupervised parsing F1's on WSJ test are relatively low (20.1 and 25.0).", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 29, |
|
"text": "(Choi et al., 2018)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 34, |
|
"end": 66, |
|
"text": "RL-SPINN (Yogatama et al., 2017)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 327, |
|
"end": 348, |
|
"text": "(Marcus et al., 1993)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "PRPN (Shen et al., 2017) and URNNG (Kim et al., 2019) are two of the stronger latent tree learning models that have comparable unsupervised parsing performance (F1=42.8 and 52.4) with ON-LSTM (F1=49.4). URNNG is based on Recurrent Neural Network Grammar (Dyer et al., 2016) , a probablitic generative model; PRPN is a neural language model that implicitly models syntax using a structured attention mechanism. Williams et al. (2018) analyze ST-Gumbel and RL-SPINN. They find that though the two models perform well on sentence understanding, neither of the models induces consistent and non-trivial grammars.", |
|
"cite_spans": [ |
|
{ |
|
"start": 5, |
|
"end": 24, |
|
"text": "(Shen et al., 2017)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 35, |
|
"end": 53, |
|
"text": "(Kim et al., 2019)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 254, |
|
"end": 273, |
|
"text": "(Dyer et al., 2016)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 410, |
|
"end": 432, |
|
"text": "Williams et al. (2018)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Prior to this work, also analyze ON-LSTM. They raise doubts on the necessity of the model's novel gates and mathematically prove that it is impossible for the parsing algorithm used by Shen et al. (2019) to correctly parse a certain class of structures. In comparison, this study takes a more empirical approach that is similar to that of Williams et al. (2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 185, |
|
"end": 203, |
|
"text": "Shen et al. (2019)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 339, |
|
"end": 361, |
|
"text": "Williams et al. (2018)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "WSJ Dataset WSJ is the Wall Street Journal Section of PTB (Marcus et al., 1993) , which is the most commonly used dataset for training and evaluating parsers including latent tree learning models (Williams et al., 2018; Htut et al., 2018) . It is also the dataset ON-LSTM is originally trained on. We follow the traditional split of WSJ: sections 0-21 as WSJ train, section 22 as WSJ dev, and section 23 as WSJ test. We also use WSJ 10, a subset of WSJ that includes all sentences with length < 10.", |
|
"cite_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 79, |
|
"text": "(Marcus et al., 1993)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 196, |
|
"end": 219, |
|
"text": "(Williams et al., 2018;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 220, |
|
"end": 238, |
|
"text": "Htut et al., 2018)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and Model", |
|
"sec_num": "3" |
|
}, |
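
{

"text": "To make the split concrete, the following is a minimal Python sketch of the data configuration described above; the variable and function names are our own illustration, not code from the released implementation:\n\nWSJ_TRAIN_SECTIONS = list(range(0, 22))  # sections 0-21\nWSJ_DEV_SECTIONS = [22]  # section 22\nWSJ_TEST_SECTIONS = [23]  # section 23\n\ndef is_wsj10(sentence_tokens):\n    # WSJ 10: the subset of WSJ sentences with length < 10, as stated above.\n    return len(sentence_tokens) < 10",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Data and Model",

"sec_num": "3"

},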
|
{ |
|
"text": "In the experiments, the model is always trained on WSJ train on language modelling, and evaluated on WSJ test, WSJ dev and/or WSJ 10 on constituency parsing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Models ON-LSTM is an LSTM model (Hochreiter and Schmidhuber, 1997) plus a novel activation function, which causes the model to learn to store long-term information in high-ranking dimensions, implicitly encoding a constituency parse. The model is equipped with a master forget gate f t and a master input gate i t . At each timestep, f t is multiplied element-wise to the previous cell state c t\u22121 and thus controls to what extent the value of each dimension in the previous cell state can be forgotten; i t is multiplied element-wise to the candidate update values\u0109 t and thus controls how much new information can be written to each dimension in the cell state. The values of f t and i t are computed at each timestep based on the input token and the cell state. The model uses cumax() as the activation function of the master gates, where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "cumax( * ) := cumsum(sof tmax( * )) cumsum( a) := [a 1 , a 1 + a 2 , ..., k i=1 a i , ..., n i=1 a i ]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and Model", |
|
"sec_num": "3" |
|
}, |
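
{

"text": "As an illustration of this activation, here is a minimal PyTorch sketch of cumax(); it is our own rendering of the definition above, not code from Shen et al. (2019):\n\nimport torch\nimport torch.nn.functional as F\n\ndef cumax(x, dim=-1):\n    # softmax turns the logits into a distribution over dimensions; the cumulative\n    # sum then yields values that rise monotonically from near 0 to near 1.\n    return torch.cumsum(F.softmax(x, dim=dim), dim=dim)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Data and Model",

"sec_num": "3"

},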
|
{ |
|
"text": "Therefore, the values in f t are always monotonically increasing from 0 to 1, and the values in i t are always monotonically decreasing from 1 to 0. As a result, when a dimension is updated/erased, all of the dimensions whose ranks are lower than it are also updated/erased. Intuitively, in an extreme and simplified example where f t = (0, ..., 0, 1, ..., 1), the model is just picking a dimension d, erases all the dimensions from 1 to d \u2212 1 in c t\u22121 , and keeps dimensions > d unchanged.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and Model", |
|
"sec_num": "3" |
|
}, |
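
{

"text": "A simplified sketch of the resulting cell-state update, reusing the cumax() sketch above, is given below. This only illustrates the intuition in this section: the full ON-LSTM also combines the master gates with the standard LSTM gates, so this is not the exact update rule of Shen et al. (2019).\n\ndef simplified_onlstm_update(c_prev, c_hat, forget_logits, input_logits):\n    # Master forget gate: values rise monotonically from near 0 to near 1, so\n    # low-ranking dimensions are erased while high-ranking ones are kept.\n    f_master = cumax(forget_logits)\n    # Master input gate: the mirror image, writing new content to low-ranking dimensions.\n    i_master = 1.0 - cumax(input_logits)\n    return f_master * c_prev + i_master * c_hat",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Data and Model",

"sec_num": "3"

},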
|
{ |
|
"text": "As a result of this novel updating rule, the model will tend to store long-term information in highranking dimensions and short-term information in low-ranking dimensions so that when the model frequently erases and updates low-ranking dimensions of the cell state, the long-term information stored in high-ranking dimensions will stay unaffected. When the model is trained to perform language modelling, since a higher-level constituent always spans more words than its children, its related information will continuously be useful for word prediction in a longer term and will thus be stored in higher-ranking dimensions (see Fig 2 for an example). Therefore, intuitively, if a high-ranking dimension is erased/updated, it probably means that the currently processed input token is the start of a new high-level constituent. Figure 2 : An example of the correspondences between a constituency parse tree and the hidden states of ON-LSTM. Intuitively, when performing language modelling, the information related to the highest-level constituent \"S\" is useful when predicting both token x 2 and token x 3 , while the information related to the first \"N\" is only useful in predicting x 2 , and can be erased after the prediction of x 2 . In order to avoid removing \"S\" information when removing \"N\" information, the model will store \"S\" information in higher dimensions, and information of \"N\" in lower dimensions. Image source: Shen et al. 2019Based on this intuition, the master forget gates can be used to perform binary constituency parsing. In binary parsing, a constituent (which is initially the whole sentence) is recursively split into two constituents until each constituent contains only one word. Therefore, each space between each pair of adjacent words is a split point the parsing algorithm will use at some point to make a split, and the order in which these split points are used decides what the resultant parsing tree will be like. In the case of ON-LSTM, the parsing algorithm uses the split points in the decreasing order of their \"height\", where the height of a split point between x t\u22121 and x t is usually defined asd f t 1 , an estimate of the transition point 2 in f t from the 0-segment 3 where values are small and close to 0 to the 1-segment where values are large and close to 1. Intuitively, the more information is forgotten at a timestep, the higher the split point before the token.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 628, |
|
"end": 637, |
|
"text": "Fig 2 for", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 827, |
|
"end": 835, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data and Model", |
|
"sec_num": "3" |
|
}, |
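
{

"text": "The greedy top-down procedure described above can be sketched as follows; this is our own illustration, assuming the split-point heights have already been estimated from the master forget gates, and the function name is hypothetical:\n\ndef parse_from_heights(tokens, heights):\n    # heights[i] is the estimated height of the split point between tokens[i] and tokens[i+1].\n    if len(tokens) == 1:\n        return tokens[0]\n    # Split at the highest remaining split point inside this span.\n    k = max(range(len(tokens) - 1), key=lambda i: heights[i])\n    left = parse_from_heights(tokens[:k + 1], heights[:k])\n    right = parse_from_heights(tokens[k + 1:], heights[k + 1:])\n    return (left, right)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Data and Model",

"sec_num": "3"

},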
|
{ |
|
"text": "We train ON-LSTM with 5 different random seeds on WSJ train using hyperparameters shared by Shen et al. (2019) . Note that the training objective is language modelling, so we only use the sentences from WSJ train and the model never has access to the parsing trees in the dataset or any other tree structures. On WSJ test, our models achieve an average perplexity of 56.33 (\u00b10.06) on language modelling and average F1 score of 46.43 (\u00b11.79) on unsupervised parsing, while the original paper reports 56.17 (\u00b10.12) and 47.7 (\u00b11.5). This shows that we roughly reproduce their work 4 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 110, |
|
"text": "Shen et al. (2019)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 578, |
|
"end": 579, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "To analyze the model's consistency, we use the 5 models we train to parse WSJ test and WSJ 10, calculate the self F1 and standard deviation on each dataset, and compare them to that of the random baseline. Self F1 is the average of unlabeled binary F1 scores between every pairing of the five parses, each produced by one model. It shows to what extent each model agrees with the parsing decisions of the other four.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
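
{

"text": "A minimal sketch of how self F1 can be computed is shown below; whether trivial spans are excluded and whether scores are averaged per sentence or micro-averaged over the corpus are evaluation details we gloss over here, so this is an illustration rather than the exact scoring script:\n\nfrom itertools import combinations\n\ndef spans(tree, start=0):\n    # Collect (start, end) spans of all constituents in a binary tree given as nested tuples of tokens.\n    if not isinstance(tree, tuple):\n        return start + 1, set()\n    end, out = start, set()\n    for child in tree:\n        end, child_spans = spans(child, end)\n        out |= child_spans\n    out.add((start, end))\n    return end, out\n\ndef unlabeled_f1(tree_a, tree_b):\n    # Assumes sentences with at least two tokens, so both span sets are non-empty.\n    _, a = spans(tree_a)\n    _, b = spans(tree_b)\n    p = len(a & b) / len(a)\n    r = len(a & b) / len(b)\n    return 2 * p * r / (p + r)\n\ndef self_f1(parses_per_restart):\n    # parses_per_restart[r][s] is restart r's parse of sentence s.\n    scores = []\n    for a, b in combinations(parses_per_restart, 2):\n        pair = [unlabeled_f1(ta, tb) for ta, tb in zip(a, b)]\n        scores.append(sum(pair) / len(pair))\n    return sum(scores) / len(scores)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiments",

"sec_num": "4"

},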
|
{ |
|
"text": "To take a closer look at the parses generated by the model, we then use our 5 models to parse WSJ dev, and report the average of the models' parsing accuracies on each constituent type. We use the constituent-level accuracy as a guide to analyze how the parses ON-LSTM produces are different from PTB gold standards.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In the experiments, we include a simple random baseline that produces parses by recursively and randomly splitting the sequences to two halves. This is the same with ON-LSTM's parsing algorithm except that the baseline model chooses split points in a random order. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
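
{

"text": "The baseline can be expressed by reusing the parse_from_heights sketch above with random heights; again, this is an illustration of the description in this section, not the exact baseline implementation:\n\nimport random\n\ndef random_parse(tokens):\n    # Same top-down procedure as ON-LSTM's parsing algorithm, except that the\n    # split points are effectively chosen in a random order.\n    heights = [random.random() for _ in range(len(tokens) - 1)]\n    return parse_from_heights(tokens, heights)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiments",

"sec_num": "4"

},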
|
{ |
|
"text": "The self F1's of ON-LSTM are shown in Table 1 . On both datasets, all three layers of ON-LSTM show much higher self F1 than the random baseline. This shows the model produces fairly consistent parses across different restarts. The 2nd layer, the layer with the highest parsing F1, is also the most consistent layer according to its self F1 and standard deviation. Its self F1 scores on both WSJ test and WSJ 10 are \u223c 41 higher than that of the random baseline.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 46, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Does the model learn consistent grammars?", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "In this experiment, we focus on layer 2 of the model. For each model restart, we compute its parsing accuracy of every non-unary constituent type that occurs > 5 times in WSJ dev (sentence-level occurrences aside, explained below). We average the accuracies of the 5 restarts and list the results in Table 2 . The constituent types are listed in decreasing order based on the difference between ON-LSTM accuracy and the random baseline accuracy. Different from the previous works (Williams et al., 2018; Shen et al., 2019 ; Htut et al., 2018), we do not take into account any constituent that spans over an entire sentence, because any parser has 100% accuracy on these constituents. The way we compute the accuracy better reveals the model's command of each constituent type, and makes comparisons across constituent types more fair, since some types are more likely to appear as full sentences. We follow the clues in the accuracies to look into the parses generated by the models and find two cases where the models struggle, as we discuss in the following sections. Complex Noun Phrases As shown in table 2, the model has a poor parsing performance on NX (\u2206acc=9.7) and NAC (\u2206acc=\u221215.6), in contrast to the good performance on NP (\u2206acc=34.3). NX and NAC are marker constituents that split an NP into smaller chunks. NX marks individual conjuncts in an NP, e.g. (NP the (NX (NX white shirt) and (NX blue jeans))). NAC shows the scope of a modifier within an NP, e.g. (NP (NAC Secretary (of (State))) James Baker). This contrast suggests that the model is able to identify noun phrases in a sentence, but fails to understand their internal structures. We inspect the model's parses of noun phrases that contain NX and NAC and find that the way the model splits these phrases is very random. We do not identify any pattern.", |
|
"cite_spans": [ |
|
{ |
|
"start": 480, |
|
"end": 503, |
|
"text": "(Williams et al., 2018;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 504, |
|
"end": 521, |
|
"text": "Shen et al., 2019", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 300, |
|
"end": 307, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "How are ON-LSTM's parses different from PTB parses?", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "One can possibly attribute this failure to the use of language modelling as the training task. Whether ON-LSTM makes a split at a token depends on how much information the model chooses to forget at this timestep. Since the model is trained for unidirectional language modelling, it decides whether to forget certain information based on whether the information will be helpful for word predictions in the future. However, constituents inside the same complex noun phrase are sometimes closely related, and cross-constituent hints can be helpful to word predictions. In the NX example we give, \"white shirt\" gives important hints for the model to predict the tokens \"blue jeans\", as it suggests that the tokens after \"and\" might be a color followed by a is the gold-standard parsing tree; (b) is the binary tree produced by ON-LSTM. 1 , 2 , and 3 mark the order/height of the split points in each parse.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "How are ON-LSTM's parses different from PTB parses?", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "type of clothing. This may be why the model might choose not to forget much information after \"white shirt and\", leading to a missing split between \"and\" and \"blue\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "How are ON-LSTM's parses different from PTB parses?", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Split Points Right Before Verbs As shown in Table 2 , the model's parsing performances on SQ and VP are the best (\u2206acc=62.2 and 43.0, ranking the first and second among all constituents), while it does not parse SBAR (subordinate clauses) in a way that is similar to PTB parses (\u2206acc=10.5, Acc=20.5). Based on this clue, we look into the parses and find the model has a consistent and strong tendency to overestimate the height of the split point right before a verb. We inspect the model's parses of sentences that contain subordinate clauses, and find that a common mistake made by ON-LSTM is to assign a higher height to the split point right before the main verb of the clause than to the split points right before/after the start/end of the clause. Since ON-LSTM parses a sentence by recursively splitting the sentence at the highest split point, this means the subordinate clause will show up separately in two different constituents rather than a complete single constituent in the parse generated by ON-LSTM. For example, as shown in figure 3, split point 1 of a gold-standard parser is right before the token \"before\" and it splits the upper constituents into two parts: \"...\" and SBAR, where SBAR contains exactly three words: \"before\", \"prices\", and \"stabilize\". In contrast, ON-LSTM chooses the split point right before the verb \"stabilize\" as split point 1 and thus in its parse there is no constituent that contains exactly these three words. According to our observations, this behavior is not incidental. We randomly sample 30 SBARs from WSJ dev. For each SBAR, we observe whether the first (highest) split point inside the clause (border tokens included) chosen by each model is (1) right before the main verb/auxiliary verb, (2) right before the first token or right after the last token of the clause, or (3) the other tokens in the clause. For example, Figure 3 (a) is of case (2) and Figure 3 (b) is of case (1). We compute the percentages of case (1) and case (2) for each ON-LSTM model and show them in Table 3 . We find that all 5 models have a much stronger tendency than the gold-standard parser to choose the split point right before the verb as the highest split point inside a subordinate clause. On each row, the two numbers add to nearly 100, which means when the model makes a mistake on SBAR, it is almost always because it makes the highest split right before the verb.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 44, |
|
"end": 51, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 1873, |
|
"end": 1881, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 1905, |
|
"end": 1913, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 2026, |
|
"end": 2033, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "How are ON-LSTM's parses different from PTB parses?", |
|
"sec_num": "5.2" |
|
}, |
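
{

"text": "A sketch of the classification we apply to each sampled SBAR is given below; the token indexing and the assumption that the clause's main verb position is known are our own simplifications:\n\ndef classify_highest_split(heights, clause_start, clause_end, verb_index):\n    # heights[i] is the height of the split point between tokens i and i+1;\n    # clause_start/clause_end are the clause's first/last token indices (inclusive).\n    # Assumes the clause does not start the sentence (clause_start >= 1).\n    candidates = range(clause_start - 1, clause_end + 1)  # border split points included\n    k = max(candidates, key=lambda i: heights[i])\n    if k == verb_index - 1:\n        return 1  # case (1): right before the main verb or auxiliary verb\n    if k == clause_start - 1 or k == clause_end:\n        return 2  # case (2): right before the first token or right after the last token\n    return 3  # case (3): before one of the other tokens in the clause",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "How are ON-LSTM's parses different from PTB parses?",

"sec_num": "5.2"

},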
|
{ |
|
"text": "This tendency also explains why the model's parsing accuracy is the highest on VP and SQ, two constituents which almost always start with a verb. As discussed earlier in this section, a constituent will be correctly parsed if and only if no split point inside it is higher than the split points right before/after the start/end token. Therefore, constituents starting with a verb are naturally easier for ON-LSTM because of this tendency.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "How are ON-LSTM's parses different from PTB parses?", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "A possible reason of this tendency is that since the model is trained on unidirectional language modelling, when it predicts the height of a split point before a token, it only has access to the current token and all the tokens before it. However, when the current input token is a beginning word of a subordinate clause such as \"as\", \"which\", \"after\", it is usually impossible to tell whether it is the start of a subordinate clause. Counterexamples are \"as soon as possible\", \"which to choose\", \"after 2 hours\", etc. Meanwhile, the model probably learns that the appearance of a verb almost always means the start of a high-level constituent VP. As a result, it assigns high heights to split points right before verbs and ignores higher-level constituents including SBAR. If this is true, then a natural and direct fix of this problem is to adopt a bidirectional task such as masked language modelling instead.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "How are ON-LSTM's parses different from PTB parses?", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In summary, the model shows basic selfconsistency on the task of constituency parsing, and it is consistently able to correctly identify certain constituents (SQ, VP, NP). All these results show that the unique design of the model brings us closer to developing consistently powerful unsupervised parsing models. However, the experiments show that it (a) struggles with the internal structures of complex NPs, and (b) often overestimates the height of the split points right before verbs. Based on our analysis, we hypothesize that both of the failures can be at least partially attributed to the use of unidirectional language modelling as the training task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "There are two potential problems with this training task. First, the motivation of language modelling generally does not perfectly match the target task constituency parsing, since cross-constituent hints are sometimes helpful, as revealed by (a). Second, it is very hard for a unidirectional model to correctly identify some high-level constituents, as revealed by (b). Therefore, we believe a promising research direction is to build latent tree learning models based on bidirectional model architectures like transformer (Vaswani et al., 2017) and the task of acceptability judgement with a dataset like CoLA (Warstadt et al., 2018) , which is a more syntax-related sentence-level task that requires the model to predict whether an input sentence is grammatically acceptable. Another option to consider is masked language modelling because it is also a bidirectional task and is much easier to scale up compared to acceptability judgement since it is a self-supervised task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 524, |
|
"end": 546, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 612, |
|
"end": 635, |
|
"text": "(Warstadt et al., 2018)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "As an exception, the height of the split point between x1 and x2 is defined as max(d f 1 ,d f 2 ). 2Shen et al. (2019) use the term \"split point\". We use a different term to avoid confusion with the more frequently used \"split point\" concept in this paper, which means the space between two words.3 The master forget gate computed using the cumax() function is an expectation of a binary gate g=(0, ..., 0, 1, ..., 1), and the rank of the first \"1\" indicates to what extent the currently processed input word contains high-level information. For formal mathematical expressions of the model architecture, we encourage you to read the original model paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The 5 ONLSTM models we train and their parses can be found at https://github.com/YianZhang/ ONLSTM-analysis", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We appreciate Sam Bowman for giving valuable overall project feedbacks and suggestions; we appreciate Phu Mon Htut for patiently sharing and explaining the code and experiment details of her study; we appreciate Yikang Shen for making their code public and granting us the right to reuse the figures in their paper. We would also like to thank Alex Warstadt and Daniel Chin for their great writing suggestions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Two experiments on learning probabilistic dependency grammars from corpora", |
|
"authors": [ |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Glen", |
|
"middle": [], |
|
"last": "Carroll", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of the AAAI Workshop on Statistically-Based NLP Techniques", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eugene Charniak and Glen Carroll. 1992. Two exper- iments on learning probabilistic dependency gram- mars from corpora. Proceedings of the AAAI Work- shop on Statistically-Based NLP Techniques, page 113.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Learning to compose task-specific tree structures", |
|
"authors": [ |
|
{ |
|
"first": "Jihun", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sang", |
|
"middle": [], |
|
"last": "Kang Min Yoo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Thirty-Second Association for the Advancement of Artificial Intelligence Conference on Artificial Intelligence (AAAI-18)", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jihun Choi, Kang Min Yoo, and Sang goo Lee. 2018. Learning to compose task-specific tree structures. In Proceedings of the Thirty-Second Association for the Advancement of Artificial Intelligence Conference on Artificial Intelligence (AAAI-18), volume 2.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A critical analysis of biased parsers in unsupervised parsing", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G\u00e1bor", |
|
"middle": [], |
|
"last": "Melis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "CoRR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Dyer, G\u00e1bor Melis, and Phil Blunsom. 2019. A critical analysis of biased parsers in unsupervised parsing. CoRR, abs/1909.09428.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Long short-term memory", |
|
"authors": [ |
|
{ |
|
"first": "Sepp", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural Computation", |
|
"volume": "9", |
|
"issue": "8", |
|
"pages": "1735--1780", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Grammar induction with neural language models: An unusual replication", |
|
"authors": [ |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Phu Mon Htut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bowman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "371--373", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-5452" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Phu Mon Htut, Kyunghyun Cho, and Samuel Bowman. 2018. Grammar induction with neural language models: An unusual replication. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: An- alyzing and Interpreting Neural Networks for NLP, pages 371-373, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Unsupervised recurrent neural network grammars", |
|
"authors": [ |
|
{ |
|
"first": "Yoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Rush", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lei", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adhiguna", |
|
"middle": [], |
|
"last": "Kuncoro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G\u00e1bor", |
|
"middle": [], |
|
"last": "Melis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1105--1117", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1114" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoon Kim, Alexander Rush, Lei Yu, Adhiguna Kun- coro, Chris Dyer, and G\u00e1bor Melis. 2019. Unsuper- vised recurrent neural network grammars. In Pro- ceedings of the 2019 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 1 (Long and Short Papers), pages 1105-1117, Minneapolis, Minnesota. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A generative constituent-context model for improved grammar induction", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Christopher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "128--135", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1073083.1073106" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Klein and Christopher D. Manning. 2002. A generative constituent-context model for improved grammar induction. In Proceedings of the 40th An- nual Meeting of the Association for Computational Linguistics, pages 128-135, Philadelphia, Pennsyl- vania, USA. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Jointly learning sentence embeddings and syntax with unsupervised tree-lstms", |
|
"authors": [ |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Maillard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dani", |
|
"middle": [], |
|
"last": "Yogatama", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jean Maillard, Stephen Clark, and Dani Yogatama. 2017. Jointly learning sentence embeddings and syntax with unsupervised tree-lstms. CoRR, abs/1705.09189.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Building a large annotated corpus of English: The Penn Treebank", |
|
"authors": [ |
|
{ |
|
"first": "Mitchell", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beatrice", |
|
"middle": [], |
|
"last": "Santorini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mary", |
|
"middle": [ |
|
"Ann" |
|
], |
|
"last": "Marcinkiewicz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "313--330", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computa- tional Linguistics, 19(2):313-330.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Neural language modeling by jointly learning syntax and lexicon", |
|
"authors": [ |
|
{ |
|
"first": "Yikang", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhouhan", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chin", |
|
"middle": [], |
|
"last": "Wei Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Courville", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yikang Shen, Zhouhan Lin, Chin wei Huang, and Aaron Courville. 2018. Neural language modeling by jointly learning syntax and lexicon. In Interna- tional Conference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Neural language modeling by jointly learning syntax and lexicon", |
|
"authors": [ |
|
{ |
|
"first": "Yikang", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhouhan", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chin-Wei", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Courville", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yikang Shen, Zhouhan Lin, Chin-Wei Huang, and Aaron C. Courville. 2017. Neural language model- ing by jointly learning syntax and lexicon. CoRR, abs/1711.02013.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Ordered neurons: Integrating tree structures into recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Yikang", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shawn", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Sordoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Courville", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron Courville. 2019. Ordered neurons: Integrat- ing tree structures into recurrent neural networks. In International Conference on Learning Representa- tions.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Bowman", |
|
"authors": [ |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Warstadt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amanpreet", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Neural network acceptability judgments. CoRR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alex Warstadt, Amanpreet Singh, and Samuel R. Bow- man. 2018. Neural network acceptability judgments. CoRR, abs/1805.12471.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Do latent tree learning models identify meaningful structure in sentences? Transactions of the Association for Computational Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Adina", |
|
"middle": [], |
|
"last": "Williams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Drozdov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Bowman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "253--267", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00019" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adina Williams, Andrew Drozdov, and Samuel R. Bowman. 2018. Do latent tree learning models iden- tify meaningful structure in sentences? Transac- tions of the Association for Computational Linguis- tics, 6:253-267.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Learning to compose words into sentences with reinforcement learning", |
|
"authors": [ |
|
{ |
|
"first": "Dani", |
|
"middle": [], |
|
"last": "Yogatama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Grefenstette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wang", |
|
"middle": [], |
|
"last": "Ling", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "5th International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dani Yogatama, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Wang Ling. 2017. Learning to compose words into sentences with reinforcement learning. In 5th International Conference on Learn- ing Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "The interest-only securities were priced at N NN to yield N.N % The interest-only securities were priced at N NN to yield N.N %Figure 1: An example of an ON-LSTM's parse (top) disagreeing with a binary parse tree converted from a PTB gold-standard parse." |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "An example of ON-LSTM overestimating the height of the split point right before the verb. (a)" |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"content": "<table><tr><td>: ON-LSTM (layer 2)'s average parsing accu-</td></tr><tr><td>racies of non-unary constituents in WSJ dev across 5</td></tr><tr><td>restarts. The last column is the difference between the</td></tr><tr><td>second and the fourth column.</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |