|
{ |
|
"paper_id": "D07-1033", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:19:11.289886Z" |
|
}, |
|
"title": "A New Perceptron Algorithm for Sequence Labeling with Non-local Features", |
|
"authors": [ |
|
{ |
|
"first": "Jun", |
|
"middle": [ |
|
"'" |
|
], |
|
"last": "Ichi Kazama", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Kentaro", |
|
"middle": [], |
|
"last": "Torisawa", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "torisawa@jaist.ac.jp" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We cannot use non-local features with current major methods of sequence labeling such as CRFs due to concerns about complexity. We propose a new perceptron algorithm that can use non-local features. Our algorithm allows the use of all types of non-local features whose values are determined from the sequence and the labels. The weights of local and non-local features are learned together in the training process with guaranteed convergence. We present experimental results from the CoNLL 2003 named entity recognition (NER) task to demonstrate the performance of the proposed algorithm.", |
|
"pdf_parse": { |
|
"paper_id": "D07-1033", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We cannot use non-local features with current major methods of sequence labeling such as CRFs due to concerns about complexity. We propose a new perceptron algorithm that can use non-local features. Our algorithm allows the use of all types of non-local features whose values are determined from the sequence and the labels. The weights of local and non-local features are learned together in the training process with guaranteed convergence. We present experimental results from the CoNLL 2003 named entity recognition (NER) task to demonstrate the performance of the proposed algorithm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Many NLP tasks such as POS tagging and named entity recognition have recently been solved as sequence labeling. Discriminative methods such as Conditional Random Fields (CRFs) (Lafferty et al., 2001) , Semi-Markov Random Fields (Sarawagi and Cohen, 2004) , and perceptrons (Collins, 2002a) have been popular approaches for sequence labeling because of their excellent performance, which is mainly due to their ability to incorporate many kinds of overlapping and non-independent features.", |
|
"cite_spans": [ |
|
{ |
|
"start": 176, |
|
"end": 199, |
|
"text": "(Lafferty et al., 2001)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 228, |
|
"end": 254, |
|
"text": "(Sarawagi and Cohen, 2004)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 273, |
|
"end": 289, |
|
"text": "(Collins, 2002a)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "However, the common limitation of these methods is that the features are limited to \"local\" features, which only depend on a very small number of labels (usually two: the previous and the current). Although this limitation makes training and inference tractable, it also excludes the use of possibly useful \"non-local\" features that are accessible after all labels are determined. For example, non-local features such as \"same phrases in a document do not have different entity classes\" were shown to be useful in named entity recognition (Sutton and McCallum, 2004; Bunescu and Mooney, 2004; Finkel et al., 2005; Krishnan and Manning, 2006) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 539, |
|
"end": 566, |
|
"text": "(Sutton and McCallum, 2004;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 567, |
|
"end": 592, |
|
"text": "Bunescu and Mooney, 2004;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 593, |
|
"end": 613, |
|
"text": "Finkel et al., 2005;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 614, |
|
"end": 641, |
|
"text": "Krishnan and Manning, 2006)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We propose a new perceptron algorithm in this paper that can use non-local features along with local features. Although several methods have already been proposed to incorporate non-local features (Sutton and McCallum, 2004; Bunescu and Mooney, 2004; Finkel et al., 2005; Roth and Yih, 2005; Krishnan and Manning, 2006; Nakagawa and Matsumoto, 2006) , these present a problem that the types of non-local features are somewhat constrained. For example, Finkel et al. (2005) enabled the use of non-local features by using Gibbs sampling. However, it is unclear how to apply their method of determining the parameters of a non-local model to other types of non-local features, which they did not used. Roth and Yih (2005) enabled the use of hard constraints on labels by using integer linear programming. However, this is equivalent to only allowing non-local features whose weights are fixed to negative infinity. Krishnan and Manning (2006) divided the model into two CRFs, where the second model uses the output of the first as a kind of non-local information. However, it is not possible to use non-local features that depend on the labels of the very candidate to be scored. Nakagawa and Matsumoto (2006) used a Bolzmann distribution to model the correlation of the POS of words having the same lexical form in a document. However, their method can only be applied when there are convenient links such as the same lexical form.", |
|
"cite_spans": [ |
|
{ |
|
"start": 197, |
|
"end": 224, |
|
"text": "(Sutton and McCallum, 2004;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 225, |
|
"end": 250, |
|
"text": "Bunescu and Mooney, 2004;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 251, |
|
"end": 271, |
|
"text": "Finkel et al., 2005;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 272, |
|
"end": 291, |
|
"text": "Roth and Yih, 2005;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 292, |
|
"end": 319, |
|
"text": "Krishnan and Manning, 2006;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 320, |
|
"end": 349, |
|
"text": "Nakagawa and Matsumoto, 2006)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 452, |
|
"end": 472, |
|
"text": "Finkel et al. (2005)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 699, |
|
"end": 718, |
|
"text": "Roth and Yih (2005)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 912, |
|
"end": 939, |
|
"text": "Krishnan and Manning (2006)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 1177, |
|
"end": 1206, |
|
"text": "Nakagawa and Matsumoto (2006)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Since non-local features have not yet been extensively investigated, it is possible for us to find new useful non-local features. Therefore, our objective in this study was to establish a framework, where all types of non-local features are allowed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "With non-local features, we cannot use efficient procedures such as forward-backward procedures and the Viterbi algorithm that are required in training CRFs (Lafferty et al., 2001 ) and perceptrons (Collins, 2002a) . Recently, several methods (Collins and Roark, 2004; Daum\u00e9 III and Marcu, 2005; Mc-Donald and Pereira, 2006) have been proposed with similar motivation to ours. These methods alleviate this problem by using some approximation in perceptron-type learning.", |
|
"cite_spans": [ |
|
{ |
|
"start": 157, |
|
"end": 179, |
|
"text": "(Lafferty et al., 2001", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 198, |
|
"end": 214, |
|
"text": "(Collins, 2002a)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 243, |
|
"end": 268, |
|
"text": "(Collins and Roark, 2004;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 269, |
|
"end": 295, |
|
"text": "Daum\u00e9 III and Marcu, 2005;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 296, |
|
"end": 324, |
|
"text": "Mc-Donald and Pereira, 2006)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we follow this line of research and try to solve the problem by extending Collins' perceptron algorithm (Collins, 2002a) . We exploited the not-so-familiar fact that we can design a perceptron algorithm with guaranteed convergence if we can find at least one wrong labeling candidate even if we cannot perform exact inference. We first ran the A* search only using local features to generate n-best candidates (this can be efficiently performed), and then we only calculated the true score with non-local features for these candidates to find a wrong labeling candidate. The second key idea was to update the weights of local features during training if this was necessary to generate sufficiently good candidates. The proposed algorithm combined these ideas to achieve guaranteed convergence and effective learning with non-local features.", |
|
"cite_spans": [ |
|
{ |
|
"start": 119, |
|
"end": 135, |
|
"text": "(Collins, 2002a)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The remainder of the paper is organized as follows. Section 2 introduces the Collins' perceptron algorithm. Although this algorithm is the starting point for our algorithm, its baseline performance is not outstanding. Therefore, we present a margin extension to the Collins' perceptron in Section 3. This margin perceptron became the direct basis of our algorithm. We then explain our algorithm for nonlocal features in Section 4. We report the experimental results using the CoNLL 2003 shared task dataset in Section 6.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "2 Perceptron Algorithm for Sequence Labeling Collins (2002a) proposed an extension of the perceptron algorithm (Rosenblatt, 1958) to sequence labeling. Our aim in sequence labeling is to assign label y i \u2208 Y to each word x i \u2208 X in a sequence. We denote sequence x 1 , . . . , x T as x and the corresponding labels as y. We assume weight vector \u03b1 \u2208 R d and feature mapping \u03a6 that maps each (x, y) to feature vector", |
|
"cite_spans": [ |
|
{ |
|
"start": 45, |
|
"end": 60, |
|
"text": "Collins (2002a)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 111, |
|
"end": 129, |
|
"text": "(Rosenblatt, 1958)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u03a6(x, y) = (\u03a6 1 (x, y), \u2022 \u2022 \u2022 , \u03a6 d (x, y)) \u2208 R d .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The model determines the labels by:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "y \u2032 = argmax y\u2208Y |x| \u03a6(x, y) \u2022 \u03b1,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "where \u2022 denotes the inner product. The aim of the learning algorithm is to obtain an appropriate weight vector, \u03b1, given training set", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "{(x 1 , y * 1 ), \u2022 \u2022 \u2022 , (x L , y * L )}.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The learning algorithm, which is illustrated in Collins (2002a) , proceeds as follows. The weight vector is initialized to zero. The algorithm passes over the training examples, and each sequence is decoded using the current weights. If y \u2032 is not the correct answer y * , the weights are updated according to the following rule.", |
|
"cite_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 63, |
|
"text": "Collins (2002a)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u03b1 new = \u03b1 + \u03a6(x, y * ) \u2212 \u03a6(x, y \u2032 ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
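
{

"text": "As a minimal illustration (an editor's sketch, not code from the paper), this training loop can be written in Python; 'phi' stands for the feature mapping Φ returning a sparse dict, and 'decode' for an exact argmax decoder such as Viterbi, both assumed to be supplied by the caller:\n\nfrom collections import defaultdict\n\ndef train_perceptron(data, phi, decode, epochs=10):\n    # data: list of (x, y_star) pairs; phi(x, y) returns a dict of\n    # feature counts; decode(x, alpha) returns the argmax labeling\n    # under the current weights alpha.\n    alpha = defaultdict(float)\n    for _ in range(epochs):\n        updated = False\n        for x, y_star in data:\n            y_pred = decode(x, alpha)\n            if y_pred != y_star:\n                # alpha_new = alpha + phi(x, y*) - phi(x, y')\n                for f, v in phi(x, y_star).items():\n                    alpha[f] += v\n                for f, v in phi(x, y_pred).items():\n                    alpha[f] -= v\n                updated = True\n        if not updated:  # converged: no more updates\n            break\n    return alpha\n\nThe sparse-dict representation matches the fact that Φ(x, y) is typically very high-dimensional but mostly zero.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Perceptron Algorithm for Sequence Labeling",

"sec_num": "2"

},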
|
{ |
|
"text": "This algorithm is proved to converge (i.e., there are no more updates) in the separable case (Collins, 2002a) . 1 That is, if there exist weight vector U (with ||U || = 1), \u03b4 (> 0), and R (> 0) that satisfy:", |
|
"cite_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 109, |
|
"text": "(Collins, 2002a)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2200i, \u2200y \u2208 Y |x i | \u03a6(x i , y i * ) \u2022 U \u2212 \u03a6(x i , y) \u2022 U \u2265 \u03b4, \u2200i, \u2200y \u2208 Y |x i | ||\u03a6(x i , y i * ) \u2212 \u03a6(x i , y)|| \u2264 R,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "the number of updates is at most R 2 /\u03b4 2 . The perceptron algorithm only requires one candidate y \u2032 for each sequence x i , unlike the training of CRFs where all possible candidates need to be considered. This inherent property is the key to training with non-local features. However, note that the tractability of learning and inference relies on how efficiently y \u2032 can be found. In practice, we can find y \u2032 efficiently using a Viterbi-type algorithm only when the features are all local, i.e., \u03a6 s (x, y) can be written as the sum of (two label) local features \u03c6 s as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u03a6 s (x, y) = \u2211 T i \u03c6 s (x, y i\u22121 , y i )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": ". This locality constraint is also required to make the training of CRFs tractable (Lafferty et al., 2001) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 83, |
|
"end": 106, |
|
"text": "(Lafferty et al., 2001)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "One problem with the perceptron algorithm described so far is that it offers no treatment for overfitting. Thus, Collins (2002a) also proposed an averaged perceptron, where the final weight vector is Algorithm 3.1: Perceptron with margin for sequence labeling (parameters: C)", |
|
"cite_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 128, |
|
"text": "Collins (2002a)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u03b1 \u2190 0 until no more updates do for i \u2190 1 to L do 8 > > > > > < > > > > > :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "y \u2032 = argmax y \u03a6(xi, y) \u2022 \u03b1 y \u2032\u2032 = 2nd-besty \u03a6(xi, y) \u2022 \u03b1 if y \u2032 \u0338 = y * i then \u03b1 = \u03b1 + \u03a6(xi, y * i ) \u2212 \u03a6(xi, y \u2032 ) else if \u03a6(xi, y * i ) \u2022 \u03b1 \u2212 \u03a6(xi, y \u2032\u2032 ) \u2022 \u03b1 \u2264 C then \u03b1 = \u03b1 + \u03a6(xi, y * i ) \u2212 \u03a6(xi, y \u2032\u2032 )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "the average of all weight vectors during training. Howerver, we found in our experiments that the averaged perceptron performed poorly in our setting. We therefore tried to make the perceptron algorithm more robust to overfitting. We will describe our extension to the perceptron algorithm in the next section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We extended a perceptron with a margin (Krauth and M\u00e9zard, 1987) to sequence labeling in this study, as Collins (2002a) extended the perceptron algorithm to sequence labeling. In the case of sequence labeling, the margin is defined as:", |
|
"cite_spans": [ |
|
{ |
|
"start": 39, |
|
"end": 64, |
|
"text": "(Krauth and M\u00e9zard, 1987)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 104, |
|
"end": 119, |
|
"text": "Collins (2002a)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Margin Perceptron Algorithm for Sequence Labeling", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u03b3(\u03b1) = min x i min y\u0338 =y * i \u03a6(x i , y i * ) \u2022 \u03b1 \u2212 \u03a6(x i , y) \u2022 \u03b1 ||\u03b1||", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Margin Perceptron Algorithm for Sequence Labeling", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Assuming that the best candidate, y \u2032 , equals the correct answer, y * , the margin can be re-written as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Margin Perceptron Algorithm for Sequence Labeling", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "= min x i \u03a6(x i , y i * ) \u2022 \u03b1 \u2212 \u03a6(x i , y \u2032\u2032 ) \u2022 \u03b1 ||\u03b1|| ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Margin Perceptron Algorithm for Sequence Labeling", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Margin Perceptron Algorithm for Sequence Labeling", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "y \u2032\u2032 = 2nd-best y \u03a6(x i , y) \u2022 \u03b1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Margin Perceptron Algorithm for Sequence Labeling", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Using this relation, the resulting algorithm becomes Algorithm 3.1. The algorithm tries to enlarge the margin as much as possible, as well as make the best scoring candidate equal the correct answer. Constant C in Algorithm 3.1 is a tunable parameter, which controls the trade-off between the margin and convergence time. Based on the proofs in Collins (2002a) and Li et al. (2002) , we can prove that the algorithm converges within (2C + R 2 )/\u03b4 2 updates and that \u03b3(\u03b1) \u2265 \u03b4C/(2C + R 2 ) = (\u03b4/2)(1 \u2212 (R 2 /(2C + R 2 ))) after training. As can be seen, the margin approaches at least half of true margin \u03b4 (at the cost of infinite training time), as C \u2192 \u221e.", |
|
"cite_spans": [ |
|
{ |
|
"start": 345, |
|
"end": 360, |
|
"text": "Collins (2002a)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 365, |
|
"end": 381, |
|
"text": "Li et al. (2002)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Margin Perceptron Algorithm for Sequence Labeling", |
|
"sec_num": "3" |
|
}, |
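
{

"text": "A compact Python sketch of Algorithm 3.1 (illustrative only; it assumes a user-supplied 'two_best' decoder that returns the best and second-best labelings, e.g., Viterbi plus an A* 2-best search, and a feature map 'phi' returning sparse dicts):\n\ndef dot(vec, alpha):\n    # inner product of a sparse feature dict with the weight dict\n    return sum(alpha.get(f, 0.0) * v for f, v in vec.items())\n\ndef margin_perceptron(data, phi, two_best, C, epochs=50):\n    alpha = {}\n    def update(x, y_star, y_bad):\n        for f, v in phi(x, y_star).items():\n            alpha[f] = alpha.get(f, 0.0) + v\n        for f, v in phi(x, y_bad).items():\n            alpha[f] = alpha.get(f, 0.0) - v\n    for _ in range(epochs):\n        updated = False\n        for x, y_star in data:\n            y1, y2 = two_best(x, alpha)\n            if y1 != y_star:\n                update(x, y_star, y1)\n                updated = True\n            elif dot(phi(x, y_star), alpha) - dot(phi(x, y2), alpha) <= C:\n                # the best candidate is correct, but the margin to the\n                # runner-up is still at most C, so keep enlarging it\n                update(x, y_star, y2)\n                updated = True\n        if not updated:\n            break\n    return alpha",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Margin Perceptron Algorithm for Sequence Labeling",

"sec_num": "3"

},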
|
{ |
|
"text": "Note that if the features are all local, the secondbest candidate (generally n-best candidates) can also be found efficiently by using an A* search that uses the best scores calculated during a Viterbi search as the heuristic estimation (Soong and Huang, 1991) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 237, |
|
"end": 260, |
|
"text": "(Soong and Huang, 1991)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Margin Perceptron Algorithm for Sequence Labeling", |
|
"sec_num": "3" |
|
}, |
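
{

"text": "The paper obtains the n-best list with an A* search guided by Viterbi scores; an equivalent way to obtain exact k-best sequences, shown here only as an editor's sketch with an assumed 'score' function, is dynamic programming that keeps the k best partial paths per ending label:\n\nimport heapq\n\ndef kbest_viterbi(T, labels, score, k):\n    # score(i, y_prev, y): local (edge + node) score at position i,\n    # with y_prev = None at i == 0. Keeping the k best partial paths\n    # per ending label is exact, because future scores depend only on\n    # the last label of a partial path.\n    beams = {y: [(score(0, None, y), (y,))] for y in labels}\n    for i in range(1, T):\n        new_beams = {}\n        for y in labels:\n            cands = [(s + score(i, seq[-1], y), seq + (y,))\n                     for prev in beams.values() for s, seq in prev]\n            new_beams[y] = heapq.nlargest(k, cands)\n        beams = new_beams\n    return heapq.nlargest(k, (p for b in beams.values() for p in b))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Margin Perceptron Algorithm for Sequence Labeling",

"sec_num": "3"

},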
|
{ |
|
"text": "There are other methods for improving robustness by making margin larger for the structural output problem. Such methods include ALMA (Gentile, 2001 ) used in (Daum\u00e9 III and Marcu, 2005) 2 , MIRA (Crammer et al., 2006 ) used in (McDonald et al., 2005 , and Max-Margin Markov Networks (Taskar et al., 2003) . However, to the best of our knowledge, there has been no prior work that has applied a perceptron with a margin (Krauth and M\u00e9zard, 1987) to structured output. 3 Our method described in this section is one of the easiest to implement, while guaranteeing a large margin. We found in the experiments that our method outperformed the Collins' averaged perceptron by a large margin.", |
|
"cite_spans": [ |
|
{ |
|
"start": 134, |
|
"end": 148, |
|
"text": "(Gentile, 2001", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 159, |
|
"end": 186, |
|
"text": "(Daum\u00e9 III and Marcu, 2005)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 196, |
|
"end": 217, |
|
"text": "(Crammer et al., 2006", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 218, |
|
"end": 250, |
|
"text": ") used in (McDonald et al., 2005", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 284, |
|
"end": 305, |
|
"text": "(Taskar et al., 2003)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 420, |
|
"end": 445, |
|
"text": "(Krauth and M\u00e9zard, 1987)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 468, |
|
"end": 469, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Margin Perceptron Algorithm for Sequence Labeling", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Having described the basic perceptron algorithms, we will know explain our algorithm that learns the weights of local and non-local features in a unified way.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Basic Idea", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Assume that we have local features and nonlocal features. We use the superscript, l, for local features as \u03a6 l i (x, y) and g for non-local features as \u03a6 g i (x, y). Then, feature mapping is written as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Basic Idea", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u03a6 a (x, y) = \u03a6 l (x, y) + \u03a6 g (x, y) = (\u03a6 l 1 (x, y), \u2022 \u2022 \u2022 , \u03a6 l n (x, y), \u03a6 g n+1 (x, y), \u2022 \u2022 \u2022 , \u03a6 g d (x, y)).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Basic Idea", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Here, we define:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Basic Idea", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u03a6 l (x, y) = (\u03a6 l 1 (x, y), \u2022 \u2022 \u2022 , \u03a6 l n (x, y), 0, \u2022 \u2022 \u2022 , 0) \u03a6 g (x, y) = (0, \u2022 \u2022 \u2022 , 0, \u03a6 g n+1 (x, y), \u2022 \u2022 \u2022 , \u03a6 g d (x, y))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Basic Idea", |
|
"sec_num": "4.1" |
|
}, |
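
{

"text": "As a small numeric illustration (an editor's sketch; the function and variable names are illustrative, not from the paper), the zero-padded decomposition means that a single weight vector α scores both the local model and the total model:\n\nimport numpy as np\n\ndef compose_features(phi_l_vals, phi_g_vals, n, d):\n    # phi_l_vals: the n local feature values; phi_g_vals: the d - n\n    # non-local feature values. Zero-padding both to length d gives\n    # Phi_l and Phi_g with Phi_a = Phi_l + Phi_g, so the same alpha\n    # yields the local score Phi_l . alpha and total score Phi_a . alpha.\n    Phi_l = np.concatenate([phi_l_vals, np.zeros(d - n)])\n    Phi_g = np.concatenate([np.zeros(n), phi_g_vals])\n    return Phi_l, Phi_l + Phi_g\n\nPhi_l, Phi_a = compose_features(np.array([1.0, 2.0]), np.array([3.0]), 2, 3)\nalpha = np.array([0.5, -1.0, 2.0])\nlocal_score, total_score = Phi_l @ alpha, Phi_a @ alpha  # -1.5 and 4.5",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Definition and Basic Idea",

"sec_num": "4.1"

},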
|
{ |
|
"text": "Ideally, we want to determine the labels using the whole feature set as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Basic Idea", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "y \u2032 = argmax y\u2208Y |x| \u03a6 a (x, y) \u2022 \u03b1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Basic Idea", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Algorithm 4.1: Candidate algorithm (parameters:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Basic Idea", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "n, C) \u03b1 \u2190 0 until no more updates do for i \u2190 1 to L do 8 > > > > > > > > > < > > > > > > > > > : {y n } = n-besty \u03a6 l (xi, y) \u2022 \u03b1 y \u2032 = argmax y\u2208{y n } \u03a6 a (xi, y) \u2022 \u03b1 y \u2032\u2032 = 2nd-best y\u2208{y n } \u03a6 a (xi, y) \u2022 \u03b1 if y \u2032 \u0338 = yi * & \u03a6 a (xi, y * i ) \u2022 \u03b1 \u2212 \u03a6 a (xi, y \u2032 ) \u2022 \u03b1 \u2264 C then \u03b1 = \u03b1 + \u03a6 a (xi, y * i ) \u2212 \u03a6 a (xi, y \u2032 ) else if \u03a6 a (xi, y * i ) \u2022 \u03b1 \u2212 \u03a6 a (xi, y \u2032\u2032 ) \u2022 \u03b1 \u2264 C then \u03b1 = \u03b1 + \u03a6 a (xi, y * i ) \u2212 \u03a6 a (xi, y \u2032\u2032 )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Basic Idea", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "However, if there are non-local features, it is impossible to find the highest scoring candidate efficiently, since we cannot use the Viterbi algorithm. Thus, we cannot use the perceptron algorithms described in the previous sections. The training of CRFs is also intractable for the same reason.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Basic Idea", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "To deal with this problem, we first relaxed our objective. The modified objective was to find a good model from those with the form:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Basic Idea", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "{y n } = n-best y \u03a6 l (x, y) \u2022 \u03b1 y \u2032 = argmax y\u2208{y n } \u03a6 a (x, y) \u2022 \u03b1,", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Definition and Basic Idea", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "That is, we first generate n-best candidates {y n } under the local model, \u03a6 l (x, y) \u2022 \u03b1. This can be done efficiently using the A* algorithm. We then find the best scoring candidate under the total model, \u03a6 a (x, y)\u2022\u03b1, only from these n-best candidates. If n is moderately small, this can also be done in a practical amount of time. This resembles the re-ranking approach (Collins and Duffy, 2002; Collins, 2002b) . However, unlike the re-ranking approach, the local model, \u03a6 l (x, y) \u2022 \u03b1, and the total model, \u03a6 a (x, y) \u2022 \u03b1, correlate since they share a part of the vector and are trained at the same time in our algorithm. The re-ranking approach has the disadvantage that it is necessary to use different training corpora for the first model and for the second, or to use cross validation type training, to make the training for the second meaningful. This reduces the effective size of training data or increases training time substantially. On the other hand, our algorithm has no such disadvantage.", |
|
"cite_spans": [ |
|
{ |
|
"start": 374, |
|
"end": 399, |
|
"text": "(Collins and Duffy, 2002;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 400, |
|
"end": 415, |
|
"text": "Collins, 2002b)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Basic Idea", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "However, we are no longer able to find the highest scoring candidate under \u03a6 a (x, y) \u2022 \u03b1 exactly with this approach. We cannot thus use the perceptron algorithms directly. However, by examining the Algorithm 4.2: Perceptron with local and non-local features (parameters:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Basic Idea", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "n, C a , C l ) \u03b1 \u2190 0 until no more updates do for i \u2190 1 to L do 8 > > > > > > > > > > > > > > > > > > > < > > > > > > > > > > > > > > > > > > > :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Basic Idea", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "{y n } = n-besty \u03a6 l (xi, y) \u2022 \u03b1 y \u2032 = argmax y\u2208{y n } \u03a6 a (xi, y) \u2022 \u03b1 y \u2032\u2032 = 2nd-best y\u2208{y n } \u03a6 a (xi, y) \u2022 \u03b1 if y \u2032 \u0338 = y * i & \u03a6 a (xi, y * i ) \u2022 \u03b1 \u2212 \u03a6 a (xi, y \u2032 ) \u2022 \u03b1 \u2264 C a then \u03b1 = \u03b1 + \u03a6 a (xi, y * i ) \u2212 \u03a6 a (xi, y \u2032 ) (A) else if \u03a6 a (xi, y * i ) \u2022 \u03b1 \u2212 \u03a6 a (xi, y \u2032\u2032 ) \u2022 \u03b1 \u2264 C a then \u03b1 = \u03b1 + \u03a6 a (xi, y * i ) \u2212 \u03a6 a (xi, y \u2032\u2032 ) (A) else (B) 8 > > < > > :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Basic Idea", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "if y 1 \u0338 = yi * then (y 1 represents the best in {y n })", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Basic Idea", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u03b1 = \u03b1 + \u03a6 l (xi, y * i ) \u2212 \u03a6 l (xi, y 1 ) else if \u03a6 l (xi, y * i ) \u2022 \u03b1 \u2212 \u03a6 l (xi, y 2 ) \u2022 \u03b1 \u2264 C l then \u03b1 = \u03b1 + \u03a6 l (xi, y * i ) \u2212 \u03a6 l (xi, y 2 )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Basic Idea", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "proofs in Collins (2002a) , we can see that the essential condition for convergence is that the weights are always updated using some y (\u0338 = y * ) that satisfies:", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 25, |
|
"text": "Collins (2002a)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Basic Idea", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u03a6(x i , y * i ) \u2022 \u03b1 \u2212 \u03a6(x i , y)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Basic Idea", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 \u03b1 \u2264 0 (\u2264 C in the case of a perceptron with a margin). 2That is, y does not necessarily need to be the exact best candidate or the exact second-best candidate. The algorithm also converges in a finite number of iterations even with Eq. (1) as long as Eq. (2) is satisfied.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition and Basic Idea", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The algorithm we came up with first based on the above idea, is Algorithm 4.1. We first find the nbest candidates using the local model, \u03a6 l (x, y) \u2022 \u03b1. At this point, we can determine the value of the nonlocal features, \u03a6 g (x, y), to form the whole feature vector, \u03a6 a (x, y), for the n-best candidates. Next, we re-score and sort them using the total model, \u03a6 a (x, y) \u2022 \u03b1, to find a candidate that violates the margin condition. We call this algorithm the \"candidate algorithm\". After the training has finished,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Candidate Algorithm", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "\u03a6 a (x i , y * i ) \u2022 \u03b1 \u2212 \u03a6 a (x i , y) \u2022 \u03b1 > C is guaran- teed for all (x i , y) where y \u2208 {y n }, y \u0338 = y * .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Candidate Algorithm", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "At first glance, this seems sufficient condition for good models. However, this is not true because if y * \u0338 \u2208 {y n }, the inference defined by Eq. (1) is not guaranteed to find the correct answer, y * . In fact, this algorithm does not work well with non-local features as we found in the experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Candidate Algorithm", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Our idea for improving the above algorithm is that the local model, \u03a6 l (x, y)\u2022\u03b1, must at least be so good that y * \u2208 {y n }. To achieve this, we added a modification term that was intended to improve the local model when the local model was not good enough even when the total model was good enough.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Final Algorithm", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The final algorithm resulted in Algorithm 4.2. As can be seen, the part marked (B) has been added. We call this algorithm the \"proposed algorithm\". Note that the algorithm prioritizes the update of the total model, (A), over that of the local model, (B), although the opposite is also possible. Also note that the update of the local model in (B) is \"aggressive\" since it updates the weights until the best candidate output by the local model becomes the correct answer and satisfies the margin condition. A \"conservative\" updating, where we cease the update when the n-best candidates contain the correct answer, is also possible from our idea above. We made these choices since they worked better than the other alternatives.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Final Algorithm", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The tunable parameters are the local margin parameter, C l , the total margin parameter, C a , and n for the n-best search. We used C = C l = C a in this study to reduce the search space.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Final Algorithm", |
|
"sec_num": "4.3" |
|
}, |
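
{

"text": "An editor's sketch of the core of Algorithm 4.2 in Python (illustrative only; 'nbest' is assumed to return candidates sorted by the local score Φ^l · α, best first, with n ≥ 2, and 'phi_l'/'phi_a' return sparse feature dicts):\n\ndef train_nonlocal(data, phi_l, phi_a, nbest, n, C_a, C_l, epochs=50):\n    alpha = {}\n    def dot(vec):\n        return sum(alpha.get(f, 0.0) * v for f, v in vec.items())\n    def update(phi, x, y_star, y_bad):\n        for f, v in phi(x, y_star).items():\n            alpha[f] = alpha.get(f, 0.0) + v\n        for f, v in phi(x, y_bad).items():\n            alpha[f] = alpha.get(f, 0.0) - v\n    for _ in range(epochs):\n        updated = False\n        for x, y_star in data:\n            cands = nbest(x, alpha, n)  # sorted by local score, best first\n            ranked = sorted(cands, key=lambda y: dot(phi_a(x, y)), reverse=True)\n            y1, y2 = ranked[0], ranked[1]\n            s_star = dot(phi_a(x, y_star))\n            if y1 != y_star and s_star - dot(phi_a(x, y1)) <= C_a:\n                update(phi_a, x, y_star, y1)    # (A) total-model update\n            elif s_star - dot(phi_a(x, y2)) <= C_a:\n                update(phi_a, x, y_star, y2)    # (A)\n            elif cands[0] != y_star:            # (B) local-model updates\n                update(phi_l, x, y_star, cands[0])\n            elif dot(phi_l(x, y_star)) - dot(phi_l(x, cands[1])) <= C_l:\n                update(phi_l, x, y_star, cands[1])\n            else:\n                continue  # both margins satisfied; no update for this example\n            updated = True\n        if not updated:\n            break\n    return alpha",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Final Algorithm",

"sec_num": "4.3"

},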
|
{ |
|
"text": "We can prove that the algorithm in Algorithm 4.2 also converges in a finite number of iterations. It converges within (2C + R 2 )/\u03b4 2 updates, assuming that there exist weight vector U l (with ||U l || = 1 and U l i = 0 (n + 1 \u2264 i \u2264 d)), \u03b4 (> 0), and R (> 0) that satisfy:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Final Algorithm", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2200i, \u2200y \u2208 Y |x i | \u03a6 l (x i , y i * )\u2022U l \u2212\u03a6 l (x i , y)\u2022U l \u2265 \u03b4, \u2200i, \u2200y \u2208 Y |x i | ||\u03a6 a (x i , y i * ) \u2212 \u03a6 a (x i , y)|| \u2264 R.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Final Algorithm", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In addition, we can prove that \u03b3 \u2032 (\u03b1) \u2265 \u03b4C/(2C + R 2 ) for the margin after convergence, where \u03b3 \u2032 (\u03b1) is defined as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Final Algorithm", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "min x i min y\u2208{y n },\u0338 =y * i \u03a6 a (x i , y i * ) \u2022 \u03b1 \u2212 \u03a6 a (x i , y) \u2022 \u03b1 ||\u03b1||", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Final Algorithm", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "See Appendix A for the proofs. We also incorporated the idea behind Bayes point machines (BPMs) (Herbrich and Graepel, 2000) to improve the robustness of our method further. BPMs try to cancel out overfitting caused by the order of examples, by training several models by shuffling the training examples. 4 However, it is very time consuming to run the complete training process several times. We thus ran the training in only one pass over the shuffled examples several times, and used the averaged output weight vectors as a new initial weight vector, because we thought that the early part of training would be more seriously affected by the order of examples. We call this \"BPM initialization\". 5", |
|
"cite_spans": [ |
|
{ |
|
"start": 96, |
|
"end": 124, |
|
"text": "(Herbrich and Graepel, 2000)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Final Algorithm", |
|
"sec_num": "4.3" |
|
}, |
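
{

"text": "A small sketch of BPM initialization (an editor's illustration; 'train_one_pass' is an assumed helper that runs a single perceptron pass over the given data and returns its weight dict):\n\nimport random\n\ndef bpm_initialize(data, train_one_pass, runs=10, seed=0):\n    # Run one training pass over differently shuffled copies of the\n    # data and average the resulting weight vectors; the average is\n    # then used as the initial weights for the full training run.\n    rng = random.Random(seed)\n    averaged = {}\n    for _ in range(runs):\n        shuffled = list(data)\n        rng.shuffle(shuffled)\n        alpha = train_one_pass(shuffled)\n        for f, v in alpha.items():\n            averaged[f] = averaged.get(f, 0.0) + v / runs\n    return averaged",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Final Algorithm",

"sec_num": "4.3"

},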
|
{ |
|
"text": "We evaluated the performance of the proposed algorithm using the named entity recognition task. We adopted IOB (IOB2) labeling (Ramshaw and Marcus, 1995) , where the first word of an entity of class \"C\" is labeled \"B-C\", the words in the entity are labeled \"I-C\", and other words are labeled \"O\". We used non-local features based on Finkel et al. (2005) . These features are based on observations such as \"same phrases in a document tend to have the same entity class\" (phrase consistency) and \"a sub-phrase of a phrase tends to have the same entity class as the phrase\" (sub-phrase consistency). We also implemented the \"majority\" version of these features as used in Krishnan and Manning (2006) . In addition, we used non-local features, which are based on the observation that \"entities tend to have the same entity class if they are in the same conjunctive or disjunctive expression\" as in \"\u2022 \u2022 \u2022 in U.S., EU, and Japan\" (conjunction consistency). This type of non-local feature was not used by Finkel et al. (2005) or Krishnan and Manning (2006) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 153, |
|
"text": "(Ramshaw and Marcus, 1995)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 333, |
|
"end": 353, |
|
"text": "Finkel et al. (2005)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 669, |
|
"end": 696, |
|
"text": "Krishnan and Manning (2006)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 999, |
|
"end": 1019, |
|
"text": "Finkel et al. (2005)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1023, |
|
"end": 1050, |
|
"text": "Krishnan and Manning (2006)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Recognition and Non-Local Features", |
|
"sec_num": "5" |
|
}, |
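
{

"text": "As a simplified illustration of how such a non-local feature can be computed once all labels are fixed (an editor's sketch, not the authors' exact feature set), the following counts identical phrases in a document that received different entity classes under a candidate IOB2 labeling:\n\nfrom collections import defaultdict\n\ndef phrase_consistency_violations(tokens, labels):\n    # Collect the entity class of every labeled span, keyed by the\n    # span's tokens, then count phrase types labeled inconsistently.\n    spans = defaultdict(list)\n    i = 0\n    while i < len(labels):\n        if labels[i].startswith('B-'):\n            cls = labels[i][2:]\n            j = i + 1\n            while j < len(labels) and labels[j] == 'I-' + cls:\n                j += 1\n            spans[tuple(tokens[i:j])].append(cls)\n            i = j\n        else:\n            i += 1\n    return sum(1 for classes in spans.values() if len(set(classes)) > 1)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Named Entity Recognition and Non-Local Features",

"sec_num": "5"

},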
|
{ |
|
"text": "We used the English dataset of the CoNLL 2003 named entity shared task (Tjong et al., 2003) for the experiments. It is a corpus of English newspaper articles, where four entity classes, PER, LOC, ORG, and MISC are annotated. It consists of training, development, and testing sets (14,987, 3,466, and 3 ,684 sentences, respectively). Automatically assigned POS tags and chunk tags are also provided. The CoNLL 2003 dataset contains document boundary markers. We concatenated the sentences in the same document according to these markers. 6 This generated 964 documents for the training set, 216 documents for the development set, and 231 documents for the testing set. The documents generated as above become the sequence, x, in the learning algorithms.", |
|
"cite_spans": [ |
|
{ |
|
"start": 71, |
|
"end": 91, |
|
"text": "(Tjong et al., 2003)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 280, |
|
"end": 301, |
|
"text": "(14,987, 3,466, and 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 537, |
|
"end": 538, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and Setting", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "We first evaluated the baseline performance of a CRF model, the Collins' perceptron, and the Collins' averaged perceptron, as well as the margin perceptron, with only local features. We next evaluated the performance of our perceptron algorithm proposed for non-local features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and Setting", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "We used the local features summarized in Table 1 , which are similar to those used in other studies on named entity recognition. We omitted features whose surface part listed in Table 1 occurred less than twice in the training corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 41, |
|
"end": 49, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 179, |
|
"end": 186, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data and Setting", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "We used CRF++ (ver. 0.44) 7 as the basis of our implementation. We implemented scaling, which is similar to that for HMMs (see such as (Rabiner, 1989) ), in the forward-backward phase of CRF training to deal with very long sequences due to sentence concatenation. 8 We used Gaussian regularization (Chen and Rosenfeld, 2000) for CRF training to avoid overfitting. The parameter of the Gaussian, \u03c3 2 , was tuned using the development set. We also tuned the margin parameter, C, for the margin perceptron algorithm. 9 The convergence of CRF training was determined by checking the log-likelihood of the model. The convergence of perceptron algorithms was determined by checking the per-word labeling error, since the 6 We used sentence concatenation even when only using local features, since we found it does not degrade accuracy (rather we observed a slight increase).", |
|
"cite_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 150, |
|
"text": "(Rabiner, 1989)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 264, |
|
"end": 265, |
|
"text": "8", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 298, |
|
"end": 324, |
|
"text": "(Chen and Rosenfeld, 2000)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 514, |
|
"end": 515, |
|
"text": "9", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 715, |
|
"end": 716, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and Setting", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "7 http://chasen.org/\u02dctaku/software/CRF++ 8 We also replaced the optimization module in the original package with that used in the Amis maximum entropy estimator (http://www-tsujii.is.s.u-tokyo.ac.jp/amis) since we encountered problems with the provided module in some cases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and Setting", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "9 For the Gaussian parameter, we tested {13, 25, 50, 100, 200, 400, 800} (the accuracy did not change drastically among these values and it seems that there is no accuracy hump even if we use smaller values). We tested {500, 1000, 1414, 2000, 2828, 4000, 5657, 8000, 11313, 16000, 32000} for the margin parameters. Table 1 : Local features used. The value of a node feature is determined from the current label, y 0 , and a surface feature determined only from x. The value of an edge feature is determined by the previous label, y \u22121 , the current label, y 0 , and a surface feature. Used surface features are the word (w), the downcased word (wl), the POS tag (pos), the chunk tag (chk), the prefix of the word of length n (pn), the suffix (sn), the word form features: 2d -cp (these are based on (Bikel et al., 1999) ), and the gazetteer features: go for ORG, gp for PER, and gm for MISC. These represent the (longest) match with an entry in the gazetteer by using IOB2 tags.", |
|
"cite_spans": [ |
|
{ |
|
"start": 225, |
|
"end": 287, |
|
"text": "1000, 1414, 2000, 2828, 4000, 5657, 8000, 11313, 16000, 32000}", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 799, |
|
"end": 819, |
|
"text": "(Bikel et al., 1999)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 315, |
|
"end": 322, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data and Setting", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Node features: w, wl, pos, chk, p1, p2, p3, p4, s1, s2, s3, s4, 2d, 4d, d&a, d&-, d&/, d&,, d&., n, ic, ac, l, cp, go, gp, gm Edge features: w, wl, pos, chk, p1, p2, p3, p4, s1, s2, s3, s4, 2d, 4d, d&a, d&-, d&/, d&,, d&., n, ic, ac, l, cp, go, gp, gm Bigram node features: pos, chk, go, gp, gm number of updates was not zero even after a large number of iterations in practice. We stopped training when the relative change in these values became less than a pre-defined threshold (0.0001) for at least three iterations. We used n = 20 (n of the n-best) for training since we could not use too a large n because it would have slowed down training. However, we could examine a larger n during testing, since the testing time did not dominate the time for the experiment. We found an interesting property for n in our preliminary experiment. We found that an even larger n in testing (written as n \u2032 ) achieved higher accuracy, although it is natural to assume that the same n that was used in training would also be appropriate for testing. We thus used n \u2032 = 100 to evaluate performance during parameter tuning. After finding the best C with n \u2032 = 100, we varied n \u2032 to investigate its Table 2 compares the results. CRF outperformed the perceptron by a large margin. Although the averaged perceptron outperformed the perceptron, the improvement was slight. However, the margin perceptron greatly outperformed compared to the averaged perceptron. Yet, CRF still had the best baseline performance with only local features. The proposed algorithm with non-local features improved the performance on the test set by 0.66 points over that of the margin perceptron without non-local features. The row \"Candidate\" refers to the candidate algorithm (Algorithm 4.1). From the results for the candidate algorithm, we can see that the modification part, (B), in Algorithm 4.2 was essential to make learning with non-local features effective.", |
|
"cite_spans": [ |
|
{ |
|
"start": 15, |
|
"end": 125, |
|
"text": "w, wl, pos, chk, p1, p2, p3, p4, s1, s2, s3, s4, 2d, 4d, d&a, d&-, d&/, d&,, d&., n, ic, ac, l, cp, go, gp, gm", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 141, |
|
"end": 251, |
|
"text": "w, wl, pos, chk, p1, p2, p3, p4, s1, s2, s3, s4, 2d, 4d, d&a, d&-, d&/, d&,, d&., n, ic, ac, l, cp, go, gp, gm", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 274, |
|
"end": 294, |
|
"text": "pos, chk, go, gp, gm", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1186, |
|
"end": 1193, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data and Setting", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "{\"\", x \u22122 , x \u22121 , x 0 , x +1 , x +2 } \u00d7 y 0 x =,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and Setting", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "{\"\", x \u22122 , x \u22121 , x 0 , x +1 , x +2 } \u00d7 y \u22121 \u00d7 y 0 x =,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and Setting", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "{x \u22122 x \u22121 , x \u22121 x 0 , x 0 x +1 } \u00d7 y 0 x = wl,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and Setting", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "{x \u22122 x \u22121 , x \u22121 x 0 , x 0 x +1 } \u00d7 y \u22121 \u00d7 y 0 x = wl,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and Setting", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "We next examined the effect of n \u2032 . As can be seen from Table 3 , an n \u2032 larger than that for training yields higher performance. The highest performance with the proposed algorithm was achieved when n \u2032 = 6400, where the improvement due to non-local features became 0.74 points.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 64, |
|
"text": "Table 3", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "The performance of the related work (Finkel et al., 2005; Krishnan and Manning, 2006) is listed in Table 4 . We can see that the final performance of our algorithm was worse than that of the related work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 57, |
|
"text": "(Finkel et al., 2005;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 58, |
|
"end": 85, |
|
"text": "Krishnan and Manning, 2006)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 106, |
|
"text": "Table 4", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "We changed the experimental setting slightly to investigate our algorithm further. Instead of Finkel et al., 2005 (Finkel et al., 2005 baseline CRF -85.51 + non-local features -86.86 Krishnan and Manning, 2006 (Krishnan and Manning, 2006) baseline CRF -85.29 + non-local features -87.24 the POS/chunk tags provided in the CoNLL 2003 dataset, we used the tags assigned by TagChunk (Daum\u00e9 III and Marcu, 2005) 10 with the intention of using more accurate tags. The results with this setting are summarized in Table 5 . Performance was better than that in the previous experiment for all algorithms. We think this was due to the quality of the POS/chunk tags. It is interesting that the effect of non-local features rose to 0.93 points with n \u2032 = 6400, even though the baseline performance was also improved. The resulting performance of the proposed algorithm with non-local features is higher than that of Finkel et al. (2005) and comparable with that of Krishnan and Manning (2006) . This comparison, of course, is not fair because the setting was different. However, we think the results demonstrate a potential of our new algorithm. The effect of BPM initialization was also examined. The number of BPM runs was 10 in this experiment. The performance of the proposed algorithm dropped from 91.95/86.30 to 91.89/86.03 without BPM initialization as expected in the setting of the experiment of Table 2 . The performance of the margin perceptron, on the other hand, changed from 90.98/85.64 to 90.98/85.90 without BPM initialization. This result was unexpected from the result of our preliminary experiment. However, the performance was changed from 91.06/86.24 to 91.17/86.08 (i.e., dropped for the evaluation set as expected), in the setting of the experiment of Table 5 . Since the effect of BPM initialization is not conclusive only from these results, we need more experiments on this.", |
|
"cite_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 113, |
|
"text": "Finkel et al., 2005", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 114, |
|
"end": 134, |
|
"text": "(Finkel et al., 2005", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 183, |
|
"end": 195, |
|
"text": "Krishnan and", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 196, |
|
"end": 223, |
|
"text": "Manning, 2006 (Krishnan and", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 224, |
|
"end": 238, |
|
"text": "Manning, 2006)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 380, |
|
"end": 407, |
|
"text": "(Daum\u00e9 III and Marcu, 2005)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 905, |
|
"end": 925, |
|
"text": "Finkel et al. (2005)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 954, |
|
"end": 981, |
|
"text": "Krishnan and Manning (2006)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 507, |
|
"end": 514, |
|
"text": "Table 5", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 1394, |
|
"end": 1401, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 1764, |
|
"end": 1772, |
|
"text": "Table 5", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Finally, we compared our algorithm with the reranking approach (Collins and Duffy, 2002; Collins, 2002b) , where we first generate the n-best candidates using a model with only local features (the first model) and then re-rank the candidates using a model with non-local features (the second model). We implemented two re-ranking models, \"reranking 1\" and \"re-ranking 2\". These models differ in how to incorporate the local information in the second model. \"re-ranking 1\" uses the score of the first model as a feature in addition to the non-local features as in Collins (2002b) . \"re-ranking 2\" uses the same local features as the first model 11 in addition to the non-local features. The first models were trained using the margin perceptron algorithm in Algorithm 3.1. The second models were trained using the algorithm, which is obtained by replacing {y n } with the n-best candidates by the first model. The first model used to generate n-best candidates for the development set and the test set was trained using the whole training data. However, CRFs or perceptrons generally have nearly zero error on the training data, although the first model should mis-label 11 The weights were re-trained for the second model. to some extent to make the training of the second model meaningful. To avoid this problem, we adopt cross-validation training as used in Collins (2002b) . We split the training data into 5 sets. We then trained five first models using 4/5 of the data, each of which was used to generate n-best candidates for the remaining 1/5 of the data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 88, |
|
"text": "(Collins and Duffy, 2002;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 89, |
|
"end": 104, |
|
"text": "Collins, 2002b)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 563, |
|
"end": 578, |
|
"text": "Collins (2002b)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 1170, |
|
"end": 1172, |
|
"text": "11", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1360, |
|
"end": 1375, |
|
"text": "Collins (2002b)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with re-ranking approach", |
|
"sec_num": "6.3" |
|
}, |
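
{

"text": "The cross-validation candidate generation can be sketched as follows (an editor's illustration; 'train' and 'nbest' are assumed helpers for training a first model and decoding its n-best lists):\n\ndef crossval_nbest(data, train, nbest, folds=5, n=20):\n    # Each fold's n-best lists come from a first model trained on the\n    # other folds, so the first model makes realistic errors on the\n    # data the second model is then trained on.\n    chunks = [data[i::folds] for i in range(folds)]\n    out = []\n    for k in range(folds):\n        rest = [ex for j, c in enumerate(chunks) if j != k for ex in c]\n        alpha = train(rest)\n        for x, y_star in chunks[k]:\n            out.append((x, y_star, nbest(x, alpha, n)))\n    return out",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Comparison with re-ranking approach",

"sec_num": "6.3"

},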
|
{ |
|
"text": "As in the previous experiments, we tuned C using the development set with n \u2032 = 100 and then tested other values for n \u2032 . Table 6 shows the results. As can be seen, re-ranking models were outperformed by our proposed algorithm, although they also outperformed the margin perceptron with only local features (\"re-ranking 2\" seems better than \"re-ranking 1\"). Table 7 shows the training time of each algorithm. 12 Our algorithm is much faster than the reranking approach that uses cross-validation training, while achieving the same or higher level of performance.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 123, |
|
"end": 130, |
|
"text": "Table 6", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 359, |
|
"end": 366, |
|
"text": "Table 7", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison with re-ranking approach", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "As we mentioned, there are some algorithms similar to ours (Collins and Roark, 2004; Daum\u00e9 III and Marcu, 2005; McDonald and Pereira, 2006; Liang et al., 2006) . The differences of our algorithm from these algorithms are as follows. Daum\u00e9 III and Marcu (2005) presented the method called LaSO (Learning as Search Optimization), in which intractable exact inference is approximated by optimizing the behavior of the search process. The method can access non-local features at each search point, if their values can be determined from the search decisions already made. They provided robust training algorithms with guaranteed convergence for this framework. However, a difference is that our method can use non-local features whose value depends on all labels throughout training, and it is unclear whether the features whose values can only be determined at the end of the search (e.g., majority features) can be learned effectively with such an incremental manner of LaSO.", |
|
"cite_spans": [ |
|
{ |
|
"start": 59, |
|
"end": 84, |
|
"text": "(Collins and Roark, 2004;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 85, |
|
"end": 111, |
|
"text": "Daum\u00e9 III and Marcu, 2005;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 112, |
|
"end": 139, |
|
"text": "McDonald and Pereira, 2006;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 140, |
|
"end": 159, |
|
"text": "Liang et al., 2006)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 233, |
|
"end": 259, |
|
"text": "Daum\u00e9 III and Marcu (2005)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "The algorithm proposed by McDonald and Pereira (2006) is also similar to ours. Their target was non-projective dependency parsing, where exact inference is intractable. Instead of using n-best/re-scoring approach as ours, their method modifies the single best projective parse, which can be found efficiently, to find a candidate with higher score under non-local features. Liang et al. (2006) used n candidates of a beam search in the Collins' perceptron algorithm for machine translation. Collins and Roark (2004) proposed an approximate incremental method for parsing. Their method can be used for sequence labeling as well. These studies, however, did not explain the validity of their updating methods in terms of convergence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 26, |
|
"end": 53, |
|
"text": "McDonald and Pereira (2006)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 374, |
|
"end": 393, |
|
"text": "Liang et al. (2006)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 491, |
|
"end": 515, |
|
"text": "Collins and Roark (2004)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "To achieve robust training, Daum\u00e9 III and Marcu (2005) employed the averaged perceptron (Collins, 2002a) and ALMA (Gentile, 2001 ). Collins and Roark (2004) used the averaged perceptron (Collins, 2002a) . McDonald and Pereira (2006) used MIRA (Crammer et al., 2006) . On the other hand, we employed the margin perceptron (Krauth and M\u00e9zard, 1987) , extending it to sequence labeling. We demonstrated that this greatly improved robustness.", |
|
"cite_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 54, |
|
"text": "Daum\u00e9 III and Marcu (2005)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 88, |
|
"end": 104, |
|
"text": "(Collins, 2002a)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 114, |
|
"end": 128, |
|
"text": "(Gentile, 2001", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 132, |
|
"end": 156, |
|
"text": "Collins and Roark (2004)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 186, |
|
"end": 202, |
|
"text": "(Collins, 2002a)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 205, |
|
"end": 232, |
|
"text": "McDonald and Pereira (2006)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 243, |
|
"end": 265, |
|
"text": "(Crammer et al., 2006)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 321, |
|
"end": 346, |
|
"text": "(Krauth and M\u00e9zard, 1987)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "7" |
|
}, |
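
{

"text": "To make this concrete, here is a minimal sketch of one update step of a margin perceptron for sequence labeling, written in Python purely for illustration (the feature map phi, the candidate set, and the representation of weight and feature vectors as sparse Counters are simplifications for exposition, not our actual implementation):\n\nfrom collections import Counter\n\ndef margin_update(alpha, phi, x, y_gold, candidates, C):\n    # score(y) = alpha \u2022 phi(x, y), with alpha and phi(x, y) as sparse Counters\n    def score(y):\n        return sum(alpha[f] * v for f, v in phi(x, y).items())\n    # best candidate labeling other than the gold one\n    # (assumes candidates contains at least one non-gold labeling)\n    y_hat = max((y for y in candidates if y != y_gold), key=score)\n    # update not only on mistakes but whenever the margin is at most C\n    # (Krauth and M\u00e9zard, 1987); this margin requirement is what improves robustness\n    if score(y_gold) - score(y_hat) <= C:\n        diff = Counter(phi(x, y_gold))\n        diff.subtract(phi(x, y_hat))\n        alpha.update(diff)\n    return alpha",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Discussion",

"sec_num": "7"

},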
|
{ |
|
"text": "With regard to the local update, (B), in Algorithm 4.2, \"early updates\" (Collins and Roark, 2004) and \"y-good\" requirement in (Daum\u00e9 III and Marcu, 2005 ) resemble our local update in that they tried to avoid the situation where the correct answer cannot be output. Considering such commonality, the way of combining the local update and the non-local update might be one important key for further improvement.", |
|
"cite_spans": [ |
|
{ |
|
"start": 72, |
|
"end": 97, |
|
"text": "(Collins and Roark, 2004)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 126, |
|
"end": 152, |
|
"text": "(Daum\u00e9 III and Marcu, 2005", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "It is still open whether these differences are advantages or disadvantages. However, we think our algorithm can be a contribution to the study for incorporating non-local features. The convergence guarantee is important for the confidence in the training results, although it does not mean high performance directly. Our algorithm could at least improve the accuracy of NER with non-local features and it was indicated that our algorithm was superior to the re-ranking approach in terms of accuracy and training cost. However, the achieved accuracy was not better than that of related work (Finkel et al., 2005; Krishnan and Manning, 2006) based on CRFs. Although this might indicate the limitation of perceptron-based methods, it has also been shown that there is still room for improvement in perceptron-based algorithms as our margin perceptron algorithm demonstrated.", |
|
"cite_spans": [ |
|
{ |
|
"start": 590, |
|
"end": 611, |
|
"text": "(Finkel et al., 2005;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 612, |
|
"end": 639, |
|
"text": "Krishnan and Manning, 2006)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "In this paper, we presented a new perceptron algorithm for learning with non-local features. We think the proposed algorithm is an important step towards achieving our final objective. We would like to investigate various types of new non-local features using the proposed algorithm in future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Appendix A: Convergence of Algorithm 4.2 Let \u03b1 k be a weight vector before the kth update and \u03f5 k be a variable that takes 1 when the kth update is done in (A) and 0 when done in (B). The update rule can then be written as \u03b1 k+1 = \u03b1 k + \u03f5 k (\u03a6 a * \u2212 \u03a6 a + (1 \u2212 \u03f5 k )(\u03a6 l * \u2212 \u03a6 l ). 13 First, we obtain", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
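
{

"text": "As a reading aid, the update rule can be written as the following schematic Python fragment (illustrative only; phi_a_star, phi_a, phi_l_star, and phi_l stand for the precomputed sparse feature vectors \u03a6^a_*, \u03a6^a, \u03a6^l_*, and \u03a6^l abbreviated above, and eps_k encodes whether the kth update is the non-local update (A) or the local update (B)):\n\nfrom collections import Counter\n\ndef apply_update(alpha, eps_k, phi_a_star, phi_a, phi_l_star, phi_l):\n    # alpha_{k+1} = alpha_k + eps_k(Phi^a_* - Phi^a) + (1 - eps_k)(Phi^l_* - Phi^l)\n    target, wrong = (phi_a_star, phi_a) if eps_k == 1 else (phi_l_star, phi_l)\n    diff = Counter(target)\n    diff.subtract(wrong)\n    alpha.update(diff)\n    return alpha",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Conclusion",

"sec_num": "8"

},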
|
{ |
|
"text": "\u03b1 k+1 \u2022 U l = \u03b1 k \u2022 U l + \u03f5 k (\u03a6 a * \u2022 U l \u2212 \u03a6 a \u2022 U l ) +(1 \u2212 \u03f5 k )(\u03a6 l * \u2022 U l \u2212 \u03a6 l \u2022 U l ) \u2265 \u03b1 k \u2022 U l + \u03f5 k \u03b4 + (1 \u2212 \u03f5 k )\u03b4 = \u03b1 k \u2022 U l + \u03b4 \u2265 \u03b1 1 \u2022 U l + k\u03b4 = k\u03b4 Therefore, (k\u03b4) 2 \u2264 (\u03b1 k+1 \u2022 U l ) 2 \u2264 (||\u03b1 k+1 ||||U l ||) 2 = ||\u03b1 k+1 || 2 -(1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "On the other hand, we also obtain", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "||\u03b1 k+1 || 2 \u2264 ||\u03b1 k || 2 + 2\u03f5 k \u03b1 k (\u03a6 a * \u2212 \u03a6 a ) +2(1 \u2212 \u03f5 k )\u03b1 k (\u03a6 l * \u2212 \u03a6 l ) +{\u03f5 k (\u03a6 a * \u2212 \u03a6 a ) + (1 \u2212 \u03f5 k )(\u03a6 l * \u2212 \u03a6 l )} 2 \u2264 ||\u03b1 k || 2 + 2C + R 2 \u2264 ||\u03b1 1 || 2 + k(R 2 + 2C) = k(R 2 + 2C)-(2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "We used \u03b1 k (\u03a6 a * \u2212 \u03a6 a ) \u2264 C a , \u03b1 k (\u03a6 l * \u2212 \u03a6 l ) \u2264 C l and C l = C a = C to derive 2C in the second inequality. We used ||\u03a6 l * \u2212\u03a6 l || \u2264 ||\u03a6 a * \u2212\u03a6 a || \u2264 R to derive R 2 . Combining (1) and (2), we obtain k \u2264 (R 2 + 2C)/\u03b4 2 . Substituting this into (2) gives ||\u03b1 k || \u2264 (R 2 +2C)/\u03b4. Since y * = y \u2032 and \u03a6 a * \u2022\u03b1\u2212\u03a6 a \u2032\u2032 \u2022\u03b1 > C after convergence, we obtain", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "\u03b3 \u2032 (\u03b1) = min x i \u03a6 a * \u2022 \u03b1 \u2212 \u03a6 a \u2032\u2032 \u2022 \u03b1 ||\u03b1|| \u2265 C\u03b4/(2C + R 2 ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
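
{

"text": "For readability, the combination of (1) and (2) used above can be spelled out as follows (a restatement of the argument, not an additional assumption): (k\u03b4)^2 \u2264 ||\u03b1^{k+1}||^2 \u2264 k(R^2 + 2C) implies k^2\u03b4^2 \u2264 k(R^2 + 2C), hence k \u2264 (R^2 + 2C)/\u03b4^2. Substituting this bound on k back into (2) gives ||\u03b1^{k+1}||^2 \u2264 k(R^2 + 2C) \u2264 (R^2 + 2C)^2/\u03b4^2, that is, ||\u03b1^{k+1}|| \u2264 (R^2 + 2C)/\u03b4.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Conclusion",

"sec_num": "8"

},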
|
{ |
|
"text": "Collins (2002a) also provided proof that guaranteed \"good\" learning for the non-separable case. However, we have only considered the separable case throughout the paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(Daum\u00e9 III and Marcu, 2005) also presents the method using the averaged perceptron(Collins, 2002a) 3 For re-ranking problems,Shen and Joshi (2004) proposed a perceptron algorithm that also uses margins. The difference is that our algorithm trains the sequence labeler itself and is much simpler because it only aims at labeling.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The results for the perceptron algorithms generally depend on the order of the training examples.5 Note that we can prove that the perceptron algorithms converge even though the weight vector is not initialized as \u03b1 = 0.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://www.cs.utah.edu/\u02dchal/TagChunk/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Training time was measured on a machine with 2.33 GHz QuadCore Intel Xeons and 8 GB of memory. C was fixed to 5657.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use the shorthand\u03a6 a * = \u03a6 a (xi, y * i ), \u03a6 a = \u03a6 a (xi, y), \u03a6 l * = \u03a6 l (xi, y * i ), and\u03a6 l = \u03a6 l (xi,y)where y represents the candidate used to update (y \u2032 , y \u2032\u2032 , y 1 , or y 2 ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "An algorithm that learns what's in a name. Machine Learning", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Bikel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Weischedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "34", |
|
"issue": "", |
|
"pages": "211--231", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Bikel, R. L. Schwartz, and R. M. Weischedel. 1999. An algorithm that learns what's in a name. Ma- chine Learning, 34(1-3):211-231.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Collective information extraction with relational markov networks", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Bunescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Bunescu and R. J. Mooney. 2004. Collective infor- mation extraction with relational markov networks. In ACL 2004.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A survey of smoothing techniques for ME models", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Rosenfeld", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "IEEE Transactions on Speech and Audio Processing", |
|
"volume": "8", |
|
"issue": "1", |
|
"pages": "37--50", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. F. Chen and R. Rosenfeld. 2000. A survey of smooth- ing techniques for ME models. IEEE Transactions on Speech and Audio Processing, 8(1):37-50.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Duffy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "ACL 2002", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "263--270", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Collins and N. Duffy. 2002. New ranking algorithms for parsing and tagging: Kernels over discrete struc- tures, and the voted perceptron. In ACL 2002, pages 263-270.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Incremental parsing with the perceptron algorithm", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Roark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Collins and B. Roark. 2004. Incremental parsing with the perceptron algorithm. In ACL 2004.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Collins. 2002a. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In EMNLP 2002.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Ranking algorithms for named-entity extraction: Boosting and the voted perceptron", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Collins. 2002b. Ranking algorithms for named-entity extraction: Boosting and the voted perceptron. In ACL 2002.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Online passive-aggressive algorithms", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Crammer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Dekel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Keshet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Shalev-Shwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Singer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. 2006. Online passive-aggressive al- gorithms. Journal of Machine Learning Research.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Learning as search optimization: Approximate large margin methods for structured prediction", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H. Daum\u00e9 III and D. Marcu. 2005. Learning as search optimization: Approximate large margin methods for structured prediction. In ICML 2005.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Incorporating non-local information into information extraction systems by Gibbs sampling", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Grenager", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. R. Finkel, T. Grenager, and C. Manning. 2005. In- corporating non-local information into information ex- traction systems by Gibbs sampling. In ACL 2005.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A new approximate maximal margin classification algorithm", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Gentile", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "JMLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Gentile. 2001. A new approximate maximal margin classification algorithm. JMLR, 3.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Large scale Bayes point machines", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Herbrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Graepel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "NIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Herbrich and T. Graepel. 2000. Large scale Bayes point machines. In NIPS 2000.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Learning algorithms with optimal stability in neural networks", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Krauth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "M\u00e9zard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "Journal of Physics A", |
|
"volume": "20", |
|
"issue": "", |
|
"pages": "745--752", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "W. Krauth and M. M\u00e9zard. 1987. Learning algorithms with optimal stability in neural networks. Journal of Physics A 20, pages 745-752.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "An effective twostage model for exploiting non-local dependencies in named entity recognitioin", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Krishnan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "ACL-COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "V. Krishnan and C. D. Manning. 2006. An effective two- stage model for exploiting non-local dependencies in named entity recognitioin. In ACL-COLING 2006.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Lafferty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "282--289", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Lafferty, A. McCallum, and F. Pereira. 2001. Con- ditional random fields: Probabilistic models for seg- menting and labeling sequence data. In ICML 2001, pages 282-289.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "The perceptron algorithm with uneven margins", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Zaragoza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Herbrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Shawe-Taylor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Kandola", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Y. Li, H. Zaragoza, R. Herbrich, J. Shawe-Taylor, and J. Kandola. 2002. The perceptron algorithm with un- even margins. In ICML 2002.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "An end-to-end discriminative approach to machine translation", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Bouchard-C\u00f4t\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Taskar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "ACL-COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. Liang, A. Bouchard-C\u00f4t\u00e9, D. Klein, and B. Taskar. 2006. An end-to-end discriminative approach to ma- chine translation. In ACL-COLING 2006.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Online learning of approximate dependency parsing algorithms", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "EACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. McDonald and F. Pereira. 2006. Online learning of approximate dependency parsing algorithms. In EACL 2006.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Online large-margin training of dependency parsers", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Crammer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. McDonald, K. Crammer, and F. Pereira. 2005. Online large-margin training of dependency parsers. In ACL 2005.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Guessing partsof-speech of unknown words using global information", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Nakagawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Matsumoto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "ACL-COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T. Nakagawa and Y. Matsumoto. 2006. Guessing parts- of-speech of unknown words using global information. In ACL-COLING 2006.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "A tutorial on hidden Markov models and selected applications in speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Rabiner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Proceedings of the IEEE", |
|
"volume": "77", |
|
"issue": "2", |
|
"pages": "257--286", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "L. R. Rabiner. 1989. A tutorial on hidden Markov mod- els and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-286.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Text chunking using transformation-based learning", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Ramshaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Marcus", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "third ACL Workshop on very large corpora", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "L. A. Ramshaw and M. P. Marcus. 1995. Text chunk- ing using transformation-based learning. In third ACL Workshop on very large corpora.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "The perceptron: A probabilistic model for information storage and organization in the brain", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Rosenblatt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1958, |
|
"venue": "Psycological Review", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "386--407", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. Rosenblatt. 1958. The perceptron: A probabilistic model for information storage and organization in the brain. Psycological Review, pages 386-407.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Integer linear programming inference for conditional random fields", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Yih", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Roth and W. Yih. 2005. Integer linear program- ming inference for conditional random fields. In ICML 2005.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Semi-Markov random fields for information extraction", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Sarawagi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Cohen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Sarawagi and W. W. Cohen. 2004. Semi-Markov ran- dom fields for information extraction. In NIPS 2004.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Flexible margin selection for reranking with full pairwise samples", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Joshi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "L. Shen and A. K. Joshi. 2004. Flexible margin selection for reranking with full pairwise samples. In IJCNLP 2004.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "A tree-trellis based fast search for finding the n best sentence hypotheses in continuous speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Soong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. K. Soong and E. Huang. 1991. A tree-trellis based fast search for finding the n best sentence hypotheses in continuous speech recognition. In ICASSP-91.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Collective segmenation and labeling of distant entitites in information extraction", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Sutton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Sutton and A. McCallum. 2004. Collective segme- nation and labeling of distant entitites in information extraction. University of Massachusetts Rechnical Re- port TR 04-49.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Max-margin Markov networks", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Taskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Guestrin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Koller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "B. Taskar, C. Guestrin, and D. Koller. 2003. Max-margin Markov networks. In NIPS 2003.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Introduction to the CoNLL-2003 shared task: Languageindependent named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Tjong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Sang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "De Meulder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. F. Tjong, K. Sang, and F. De Meulder. 2003. Intro- duction to the CoNLL-2003 shared task: Language- independent named entity recognition. In CoNLL 2003.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>Method</td><td>dev</td><td>test</td><td>C (or \u03c3 2 )</td></tr><tr><td colspan=\"2\">local features</td><td/><td/></tr><tr><td>CRF</td><td colspan=\"2\">91.10 86.26</td><td>100</td></tr><tr><td>Perceptron</td><td colspan=\"2\">89.01 84.03</td><td>-</td></tr><tr><td>Averaged perceptron</td><td colspan=\"2\">89.32 84.08</td><td>-</td></tr><tr><td>Margin perceptron</td><td colspan=\"2\">90.98 85.64</td><td>11313</td></tr><tr><td colspan=\"3\">+ non-local features</td><td/></tr><tr><td colspan=\"3\">Candidate (n \u2032 = 100) 90.71 84.90</td><td>4000</td></tr><tr><td>Proposed (n \u2032 = 100)</td><td colspan=\"2\">91.95 86.30</td><td>5657</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"text": "Summary of performance (F 1 )." |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>Method</td><td>dev</td><td>test</td><td>C</td></tr><tr><td>Proposed (n \u2032 = 20)</td><td colspan=\"2\">91.76 86.19</td><td>5657</td></tr><tr><td>Proposed (n \u2032 = 100)</td><td colspan=\"2\">91.95 86.30</td><td>5657</td></tr><tr><td>Proposed (n \u2032 = 400)</td><td colspan=\"2\">92.13 86.39</td><td>5657</td></tr><tr><td>Proposed (n \u2032 = 800)</td><td colspan=\"2\">92.09 86.39</td><td>5657</td></tr><tr><td colspan=\"3\">Proposed (n \u2032 = 1600) 92.13 86.46</td><td>5657</td></tr><tr><td colspan=\"3\">Proposed (n \u2032 = 6400) 92.19 86.38</td><td>5657</td></tr><tr><td>effects further.</td><td/><td/><td/></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"text": "Effect of n \u2032 ." |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>Method</td><td>dev</td><td>test</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"text": "The performance of the related work." |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>Method</td><td>dev</td><td>test</td><td>C (or \u03c3 2 )</td></tr><tr><td colspan=\"2\">local features</td><td/><td/></tr><tr><td>CRF</td><td colspan=\"2\">91.39 86.30</td><td>200</td></tr><tr><td>Perceptron</td><td colspan=\"2\">89.36 84.35</td><td>-</td></tr><tr><td>Averaged perceptron</td><td colspan=\"2\">89.76 84.50</td><td>-</td></tr><tr><td>Margin perceptron</td><td colspan=\"2\">91.06 86.24</td><td>32000</td></tr><tr><td colspan=\"3\">+ non-local features</td><td/></tr><tr><td>Proposed (n \u2032 = 100)</td><td colspan=\"2\">92.23 87.04</td><td>5657</td></tr><tr><td colspan=\"3\">Proposed (n \u2032 = 6400) 92.54 87.17</td><td>5657</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"text": "Summary of performance with POS/chunk tags by TagChunk." |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>Method</td><td>dev</td><td>test</td><td>C</td></tr><tr><td colspan=\"2\">local features</td><td/><td/></tr><tr><td>Margin Perceptron</td><td colspan=\"2\">91.06 86.24</td><td>32000</td></tr><tr><td colspan=\"2\">+ non-local features</td><td/><td/></tr><tr><td colspan=\"3\">Re-ranking 1 (n \u2032 = 100) 91.62 86.57</td><td>4000</td></tr><tr><td>Re-ranking 1 (n \u2032 = 80)</td><td colspan=\"2\">91.71 86.58</td><td>4000</td></tr><tr><td colspan=\"3\">Re-ranking 2 (n \u2032 = 100) 92.08 86.86</td><td>16000</td></tr><tr><td colspan=\"3\">Re-ranking 2 (n \u2032 = 800) 92.26 86.95</td><td>16000</td></tr><tr><td>Proposed (n \u2032 = 100)</td><td colspan=\"2\">92.23 87.04</td><td>5657</td></tr><tr><td>Proposed (n \u2032 = 6400)</td><td colspan=\"2\">92.54 87.17</td><td>5657</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"text": "Comparison with re-ranking approach." |
|
}, |
|
"TABREF6": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>Method</td><td>dev</td><td>test</td><td>time (sec.)</td></tr><tr><td/><td>local features</td><td/><td/></tr><tr><td>Margin Perceptron</td><td colspan=\"2\">91.04 86.28</td><td>15,977</td></tr><tr><td colspan=\"3\">+ non-local features</td><td/></tr><tr><td colspan=\"3\">Re-ranking 1 (n \u2032 = 100) 91.48 86.53</td><td>86,742</td></tr><tr><td colspan=\"3\">Re-ranking 2 (n \u2032 = 100) 92.02 86.85</td><td>112,138</td></tr><tr><td>Proposed (n \u2032 = 100)</td><td colspan=\"2\">92.23 87.04</td><td>28,880</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"text": "Comparison of training time (C = 5657)." |
|
} |
|
} |
|
} |
|
} |