{ "paper_id": "I11-1008", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:31:18.939706Z" }, "title": "A Fast Accurate Two-stage Training Algorithm for L1-regularized CRFs with Heuristic Line Search Strategy", "authors": [ { "first": "Jinlong", "middle": [], "last": "Zhou", "suffix": "", "affiliation": { "laboratory": "", "institution": "Fudan University Shanghai", "location": { "country": "China" } }, "email": "" }, { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Fudan University Shanghai", "location": { "country": "China" } }, "email": "xpqiu@fudan.edu.cn" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Fudan University Shanghai", "location": { "country": "China" } }, "email": "xjhuang@fudan.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Sparse learning framework, which is very popular in the field of nature language processing recently due to the advantages of efficiency and generalizability, can be applied to Conditional Random Fields (CRFs) with L1 regularization method. Stochastic gradient descent (SGD) method has been used in training L1-regularized CRFs, because it often requires much less training time than the batch training algorithm like quasi-Newton method in practice. Nevertheless, SGD method sometimes fails to converge to the optimum, and it can be very sensitive to the learning rate parameter settings. We present a two-stage training algorithm which guarantees the convergence, and use heuristic line search strategy to make the first stage of SGD training process more robust and stable. 
Experimental evaluations on Chinese word segmentation and named entity recognition tasks demonstrate that our method can produce a more accurate and compact model with less training time under L1 regularization.", "pdf_parse": { "paper_id": "I11-1008", "_pdf_hash": "", "abstract": [ { "text": "The sparse learning framework, which has recently become very popular in the field of natural language processing due to its efficiency and generalizability, can be applied to Conditional Random Fields (CRFs) through L1 regularization. The stochastic gradient descent (SGD) method has been used to train L1-regularized CRFs, because in practice it often requires much less training time than batch training algorithms such as the quasi-Newton method. Nevertheless, the SGD method sometimes fails to converge to the optimum, and it can be very sensitive to the learning rate parameter settings. We present a two-stage training algorithm that guarantees convergence, and use a heuristic line search strategy to make the first, SGD-based stage of training more robust and stable. Experimental evaluations on Chinese word segmentation and named entity recognition tasks demonstrate that our method can produce a more accurate and compact model with less training time under L1 regularization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Conditional Random Fields (CRFs) (Lafferty et al., 2001; Sutton and McCallum, 2006) are among the most widely used machine learning approaches in the field of natural language processing, thanks to their ability to handle large feature sets and structural dependencies between output labels. The applications of CRFs cover a wide range of tasks such as part-of-speech (POS) tagging (Lafferty et al., 2001) , semantic role labeling (Toutanova et al., 2005) and syntactic parsing (Finkel et al., 2008) . 
CRFs outperform other models like Maximum Entropy Markov Models (McCallum et al., 2000) , because they overcome the \"label bias\" problem. Moreover, CRFs can output the probability of the labeling result for further use in a pipeline or for reranking.", "cite_spans": [ { "start": 33, "end": 56, "text": "(Lafferty et al., 2001;", "ref_id": "BIBREF13" }, { "start": 57, "end": 83, "text": "Sutton and McCallum, 2006)", "ref_id": "BIBREF20" }, { "start": 372, "end": 395, "text": "(Lafferty et al., 2001)", "ref_id": "BIBREF13" }, { "start": 421, "end": 446, "text": "(Toutannova et al., 2005)", "ref_id": null }, { "start": 469, "end": 490, "text": "(Finkel et al., 2008)", "ref_id": "BIBREF7" }, { "start": 557, "end": 580, "text": "(McCallum et al., 2000)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For all types of CRFs, the maximum-likelihood method can be applied for parameter estimation, which means the model is trained by maximizing the log-likelihood on the training data. To avoid overfitting, the likelihood is often penalized with a regularization term. There are two common regularization methods, named L1 and L2 regularization. L1 regularization, also called a Laplace prior, penalizes the weight vector with its L1-norm. L2 regularization, also called a Gaussian prior, uses the L2-norm. Based on the work of , there is no significant difference between these two regularization methods in terms of accuracy. But L1 regularization has a major advantage: it can produce models whose feature weights are very sparse, so the resulting model is much smaller than one produced by L2 regularization. Compact models are more interpretable, generalizable and manageable, and require fewer resources such as memory and storage. This is especially meaningful for the rapid development of mobile applications nowadays, which suffer from scarce resources. 
In many NLP tasks, the feature sets can reach a magnitude of several million.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Besides, the L1 regularization method can implicitly perform feature selection and provide the result for further processing such as iterative approaches (Vail et al., 2007; Peng and McCallum, 2004) . This requires that we train the model as accurately as possible, so that it converges to the optimum. After such a process, the feature selection can be regarded as reliable and unbiased.", "cite_spans": [ { "start": 150, "end": 169, "text": "(Vail et al., 2007;", "ref_id": "BIBREF23" }, { "start": 170, "end": 194, "text": "Peng and McCallum, 2004)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The quasi-Newton method was successfully and efficiently applied to L1-regularized models by Andrew and Gao (2007).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "They presented an algorithm called Orthant-Wise Limited-memory Quasi-Newton (OWL-QN), which is based on the L-BFGS algorithm (Liu and Nocedal, 1989) and achieves better convergence than the method introduced by Kazama and Tsujii (2003) .", "cite_spans": [ { "start": 121, "end": 144, "text": "(Liu and Nocedal, 1989)", "ref_id": "BIBREF15" }, { "start": 206, "end": 230, "text": "Kazama and Tsujii (2003)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Stochastic gradient descent (SGD) methods are another kind of L1-regularized training method. SGD is a very attractive framework because in practice it often requires much less training time than batch training algorithms. Tsuruoka et al. (2009) presented a variant of SGD that can efficiently produce compact models with L1 regularization. 
The main idea is to keep track of the total penalty and the penalty each weight has actually received, so that the penalization smooths away the noise in the gradient.", "cite_spans": [ { "start": 222, "end": 244, "text": "Tsuruoka et al. (2009)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although the SGD method with cumulative penalty is very efficient, it sometimes fails to converge to the optimum, because the training process is usually terminated after a certain number of iterations, without the explicit stopping criteria used in the quasi-Newton method. Another problem is that the result of SGD training is very sensitive to the learning rate parameter settings, so the parameter values have to be tuned for different tasks, which is not efficient in practice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we present a two-stage L1-regularized training algorithm to solve these two problems. In the first stage, we use the SGD method to quickly obtain a relatively good solution. In the second stage, we use the OWL-QN method to refine the model produced by the SGD stage. In this way we can quickly obtain an accurate model. The learning rate scheduling in the first stage is done by a heuristic line search, which makes the process more robust and stable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our experiments are conducted on two tasks, Chinese word segmentation and named entity recognition. We show that our method can produce a more accurate and compact model with less training time under L1 regularization. 
We also verify that the result of SGD training is more robust when using the heuristic line search strategy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of the paper is organized as follows. Section 2 introduces the basics of CRFs. Section 3 describes the two-stage algorithm for L1-regularized models. Experimental results are shown in Section 4. We conclude the work in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we briefly describe the basics of conditional random fields (CRFs) (Lafferty et al., 2001; Sutton and McCallum, 2006) and introduce the definitions of some concepts and parameters.", "cite_spans": [ { "start": 84, "end": 107, "text": "(Lafferty et al., 2001;", "ref_id": "BIBREF13" }, { "start": 108, "end": 134, "text": "Sutton and McCallum, 2006)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields", "sec_num": "2" }, { "text": "CRFs define the conditional probability distribution over possible output sequences y for an observation x as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear-chain CRFs", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(y|x) = (1/Z(x)) exp( Σ_{k=1}^{K} λ_k F_k(x, y) ),", "eq_num": "(1)" } ], "section": "Linear-chain CRFs", "sec_num": "2.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear-chain CRFs", "sec_num": "2.1" }, { "text": "F_k(x, y) is equal to Σ_{t=1}^{T} f_k(x, y_{t−1}, y_t, t)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear-chain CRFs", "sec_num": "2.1" }, { "text": ". 
{f_k} is a set of feature functions, λ_k is the weight of feature f_k, and Z(x) is the normalization factor defined by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear-chain CRFs", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Z(x) = Σ_y exp( Σ_{k=1}^{K} λ_k F_k(x, y) ).", "eq_num": "(2)" } ], "section": "Linear-chain CRFs", "sec_num": "2.1" }, { "text": "The feature functions can be divided into unigram features and bigram features; here we simply rewrite the unigram features f_k(x, y_t, t) as f_k(x, y_{t−1}, y_t, t) for convenience.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear-chain CRFs", "sec_num": "2.1" }, { "text": "The maximum-likelihood method is commonly applied for parameter estimation, which means we train the model by minimizing the negated conditional log-likelihood L(λ) on the training data:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "2.2" }, { "text": "L(λ) = − Σ_{(x,y)} log p(y|x) (3) = Σ_{(x,y)} ( log Z(x) − Σ_{k=1}^{K} λ_k F_k(x, y) ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "2.2" }, { "text": "To avoid overfitting, the likelihood is often penalized with a regularization term, which we discuss in later sections. 
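As a concrete illustration of Eqs. (1)-(3), the conditional probability and the negated conditional log-likelihood of a tiny linear-chain model can be computed by brute-force enumeration over all label sequences. This is only a sketch with hypothetical toy features and variable names of our own choosing, not the paper's implementation:

```python
import math
from itertools import product

LABELS = [0, 1]  # a toy label set; real CRFs use task-specific labels

def feats(x, y_prev, y_cur, t):
    # Two toy feature functions f_k(x, y_{t-1}, y_t, t):
    # an "emission" indicator and a "transition" indicator.
    return [1.0 if x[t] == y_cur else 0.0,
            1.0 if y_prev == y_cur else 0.0]

def score(lam, x, y):
    # sum_k lam_k F_k(x, y) with F_k(x, y) = sum_t f_k(x, y_{t-1}, y_t, t)
    s, prev = 0.0, -1  # -1 plays the role of the begin state
    for t, cur in enumerate(y):
        s += sum(l * f for l, f in zip(lam, feats(x, prev, cur, t)))
        prev = cur
    return s

def prob(lam, x, y):
    # Eq. (1): p(y|x) = exp(score) / Z(x), with Z(x) from Eq. (2)
    Z = sum(math.exp(score(lam, x, yy))
            for yy in product(LABELS, repeat=len(x)))
    return math.exp(score(lam, x, y)) / Z

def neg_log_likelihood(lam, x, y):
    # The per-sample term of Eq. (3): -log p(y|x)
    return -math.log(prob(lam, x, y))
```

Brute-force enumeration is exponential in the sequence length, which is exactly why the forward-backward recursions of Section 2.2 are needed; this sketch is only useful for checking them on tiny inputs.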
The partial derivatives of L(λ) with respect to the feature weights λ_k are given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "∂L/∂λ_k = Σ_{(x,y)} Σ_{t=1}^{T} E_{p(y|x)}[ f_k(x, y_{t−1}, y_t, t) ] − Σ_{(x,y)} Σ_{t=1}^{T} f_k(x, y_{t−1}, y_t, t),", "eq_num": "(4)" } ], "section": "Training", "sec_num": "2.2" }, { "text": "where E_{p(y|x)} denotes the conditional expectation under the model distribution:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E_{p(y|x)}[ f_k(x, y_{t−1}, y_t, t) ] = Σ_{(y′,y)} f_k(x, y′, y, t) P(y_{t−1} = y′, y_t = y | x).", "eq_num": "(5)" } ], "section": "Training", "sec_num": "2.2" }, { "text": "Computing the conditional expectation directly is impractical, because the number of possible tag sequences is exponential in the length of the observation. Thus, a dynamic programming approach known as the Forward-Backward algorithm, originally described for Hidden Markov Models (Rabiner, 1989) , is applied in a slightly modified form. 
For the forward recursions, we have", "cite_spans": [ { "start": 288, "end": 303, "text": "(Rabiner, 1989)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "2.2" }, { "text": "α_0(⊥) = 1, α_{t+1}(y) = Σ_{y′} α_t(y′) exp( Σ_{k=1}^{K} λ_k f_k(x, y′, y, t) ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "2.2" }, { "text": "and for the backward recursion, we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "2.2" }, { "text": "β_{T+1}(⊤) = 1, β_t(y′) = Σ_y β_{t+1}(y) exp( Σ_{k=1}^{K} λ_k f_k(x, y′, y, t) ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "2.2" }, { "text": "for 0 ≤ t ≤ T and y ∈ Y, where ⊥ and ⊤ are defined as special states for the beginning and end of the sequence. Then the normalization factor is computed by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Z(x) = β_0(⊥),", "eq_num": "(6)" } ], "section": "Training", "sec_num": "2.2" }, { "text": "and the conditional probabilities P(y_{t−1} = y′, y_t = y | x) are given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "2.2" }, { "text": "α_t(y′) exp( Σ_{k=1}^{K} λ_k f_k(x, y′, y, t) ) β_{t+1}(y) / Z(x). 3 L1 Regularization in CRFs 3.1 Regularization", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "2.2" }, { "text": "The logarithmic loss function L(λ) defined by (3) is usually penalized with an additional regularization term, which prevents the model from overfitting the training data. 
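The forward recursion above can be sketched in a few lines and checked against brute-force enumeration of Z(x). The potentials and names below are our own toy assumptions, not the paper's code:

```python
import math
from itertools import product

LABELS = [0, 1]
START = -1  # stands in for the special begin state

def psi(lam, x, y_prev, y_cur, t):
    # Transition potential exp( sum_k lam_k f_k(x, y', y, t) ) for toy features
    f = [1.0 if x[t] == y_cur else 0.0, 1.0 if y_prev == y_cur else 0.0]
    return math.exp(sum(l * v for l, v in zip(lam, f)))

def forward_Z(lam, x):
    # alpha_0(begin) = 1; alpha_{t+1}(y) = sum_{y'} alpha_t(y') * psi_t(y', y)
    alpha = {START: 1.0}
    for t in range(len(x)):
        alpha = {y: sum(a * psi(lam, x, yp, y, t) for yp, a in alpha.items())
                 for y in LABELS}
    return sum(alpha.values())  # Z(x)

def brute_Z(lam, x):
    # Direct enumeration of Eq. (2); exponential in len(x)
    total = 0.0
    for ys in product(LABELS, repeat=len(x)):
        s, prev = 1.0, START
        for t, y in enumerate(ys):
            s *= psi(lam, x, prev, y, t)
            prev = y
        total += s
    return total
```

The recursion costs O(T·|Y|²) instead of O(|Y|^T), which is the whole point of the Forward-Backward algorithm.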
There are two common regularization methods, named L1 and L2 regularization. In the case of L1 regularization, the term is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "R(λ) = C Σ_k |λ_k|,", "eq_num": "(7)" } ], "section": "Training", "sec_num": "2.2" }, { "text": "where C is the regularization parameter that controls the trade-off between fitting the observations exactly and keeping the L1-norm of the weight vector small. This value is usually tuned by cross-validation or on heldout data. Now we can redefine the objective loss function as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L(λ) + R(λ).", "eq_num": "(8)" } ], "section": "Training", "sec_num": "2.2" }, { "text": "3.2 Orthant-Wise Limited-memory Quasi-Newton", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "2.2" }, { "text": "It is not easy to use common numerical optimization strategies such as limited-memory BFGS (Liu and Nocedal, 1989) directly with L1 regularization, because the regularization term is not differentiable when a weight is zero. A very efficient strategy called the Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method was introduced by Andrew and Gao (2007). This algorithm is motivated by the observation that the L1 regularization term is differentiable when restricted to a set of points in which each coordinate never changes sign (called an \"orthant\"). Furthermore, it is there a linear function of its argument, which means the second-order behavior of the regularized objective function on a given orthant is determined by the log-likelihood component alone. 
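The central device of OWL-QN is the pseudo-gradient of L(λ) + C‖λ‖₁, which selects a one-sided derivative where λ_k = 0. A minimal sketch (variable names are ours; this is not the authors' implementation):

```python
def pseudo_gradient(grad_L, lam, C):
    """Pseudo-gradient of L(lam) + C * sum_k |lam_k| used by OWL-QN.

    grad_L: partial derivatives of the smooth loss L at lam.
    At lam_k == 0 the L1 term is non-differentiable, so we take the
    one-sided derivative that points downhill, or 0 if neither does.
    """
    pg = []
    for g, w in zip(grad_L, lam):
        if w > 0:
            pg.append(g + C)
        elif w < 0:
            pg.append(g - C)
        else:
            left, right = g - C, g + C
            if left > 0:
                pg.append(left)    # moving negative decreases the objective
            elif right < 0:
                pg.append(right)   # moving positive decreases the objective
            else:
                pg.append(0.0)     # zero is locally optimal for this coordinate
    return pg
```

Note that a coordinate sitting at zero with |∂L/∂λ_k| ≤ C gets a zero pseudo-gradient, which is how the optimizer keeps such weights at exactly zero and produces sparse models.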
Only a few steps of the standard L-BFGS algorithm are changed in the OWL-QN method; the differences are listed below:", "cite_spans": [ { "start": 96, "end": 119, "text": "(Liu and Nocedal, 1989)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "2.2" }, { "text": "1. The \"pseudo-gradient\" is used in place of the gradient.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "2.2" }, { "text": "2. The resulting search direction is constrained to match the sign pattern of the negated pseudo-gradient.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "2.2" }, { "text": "3. During the line search, each parameter is projected back onto the orthant of its previous value. Andrew and Gao (2007) proved that the OWL-QN method is guaranteed to converge to a globally optimal result.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "2.2" }, { "text": "Stochastic gradient approaches use a small batch of the observations to get a crude approximation of the gradient of the objective function given by (3). The small batch size makes it possible to update the parameters more frequently than in the original gradient descent, which speeds up convergence. Considering only the log-likelihood term, the updates have the following form", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stochastic Gradient Descent", "sec_num": "3.3" }, { "text": "λ_k^{j+1} = λ_k^j − η_j ( ∂L(λ)/∂λ_k |_{λ=λ^j} ), (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stochastic Gradient Descent", "sec_num": "3.3" }, { "text": "where j is the iteration counter and η_j is the learning rate. It should be noted that the partial derivative presented here is not the true gradient but a crude approximation computed from a small, randomly selected subset of the training samples. 
In (Tsuruoka et al., 2009) , a variant of SGD that can efficiently train L1-regularized CRFs was presented. The main ideas can be summarized as follows:", "cite_spans": [ { "start": 244, "end": 267, "text": "(Tsuruoka et al., 2009)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Stochastic Gradient Descent", "sec_num": "3.3" }, { "text": "1. Only update the weights of the features that are used in the current observation, called \"lazy update\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stochastic Gradient Descent", "sec_num": "3.3" }, { "text": "2. \"Clip\" the parameter value when it crosses zero.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stochastic Gradient Descent", "sec_num": "3.3" }, { "text": "3. Keep track of the cumulative penalty that the weight of each feature would have received if the true, fluctuation-free gradient had been used, and use this value in the update.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stochastic Gradient Descent", "sec_num": "3.3" }, { "text": "Let z^j be the total L1-penalty that each weight could have received; it is simply accumulated as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stochastic Gradient Descent", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "z^j = (C/N) Σ_{t=1}^{j} η_t.", "eq_num": "(10)" } ], "section": "Stochastic Gradient Descent", "sec_num": "3.3" }, { "text": "Then the process of regularization can be formalized as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stochastic Gradient Descent", "sec_num": "3.3" }, { "text": "λ_k^{j+1} = max(0, λ_k^j − (z^j + q_k^{j−1})) if λ_k^j > 0; λ_k^{j+1} = min(0, λ_k^j + (z^j − q_k^{j−1})) if λ_k^j < 0,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stochastic Gradient Descent", "sec_num": "3.3" }, { 
"text": "where q j k is the total L1-penalty that \u03bb k has actually received: Tsuruoka et al. (2009) demonstrated that this algorithm can be much more quickly than the OWL-QN method and yield a comparable performance, while the value of objective function and the number of active features are not as good as OWL-QN. The reason is that we usually terminated the training process at a certain number of iterations, because there are no explicit stop criteria for SGD.", "cite_spans": [ { "start": 68, "end": 90, "text": "Tsuruoka et al. (2009)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Stochastic Gradient Descent", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "q j k = j t=1 (\u03bb t+1 k \u2212\u03bb t k ).", "eq_num": "(11)" } ], "section": "Stochastic Gradient Descent", "sec_num": "3.3" }, { "text": "Another issue is that the scheduling of learning rates can be very tricky. Tsuruoka et al. (2009) suggest that exponential decay is a good choice in practice compared with the method used in (Collins et al., 2008) . This kind of scheduling of learning rates have the following form:", "cite_spans": [ { "start": 75, "end": 97, "text": "Tsuruoka et al. (2009)", "ref_id": "BIBREF22" }, { "start": 191, "end": 213, "text": "(Collins et al., 2008)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Stochastic Gradient Descent", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b7 j = \u03b7 0 \u03b1 j/N ,", "eq_num": "(12)" } ], "section": "Stochastic Gradient Descent", "sec_num": "3.3" }, { "text": "where \u03b7 0 and \u03b1 are both constant. We name \u03b7 0 the initiation learning rate parameter and \u03b1 the descent learning rate parameter. 
These learning rate parameters have a great influence on the result of SGD training, and they need to be tuned for different tasks, which is not very efficient in practice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stochastic Gradient Descent", "sec_num": "3.3" }, { "text": "As mentioned above, the SGD method sometimes fails to converge to the globally optimal solution, since it does not have explicit stopping criteria as the quasi-Newton method does. The learning rate schedule found in (Collins et al., 2008) :", "cite_spans": [ { "start": 256, "end": 278, "text": "(Collins et al., 2008)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Two-stage L1-regularized Training", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "η_j = η_0 / (1 + j/N)", "eq_num": "(13)" } ], "section": "Two-stage L1-regularized Training", "sec_num": "3.4" }, { "text": "theoretically guarantees ultimate convergence, but its actual convergence speed is poor in practice (Darken and Moody, 1990) . Quite a number of iterations are needed if we want a result close enough to the best solution. This contradicts the main motivation for using the SGD method for parameter estimation, namely speeding up the training process.", "cite_spans": [ { "start": 96, "end": 119, "text": "(Darken and Moody,1990)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Two-stage L1-regularized Training", "sec_num": "3.4" }, { "text": "On the other hand, based on the work of Andrew and Gao (2007), we know that the OWL-QN method guarantees convergence, and we can test the relative change in the objective function value, averaged over several previous iterations, as a stopping criterion. Tsuruoka et al. 
(2009) demonstrated that the SGD method converges much faster than the OWL-QN method, especially in the first few iterations. This fact motivates us to use a two-stage training strategy. In the first stage, we use the SGD method to quickly obtain a relatively good solution. In the second stage, we use the OWL-QN method to refine the model produced by the SGD stage.", "cite_spans": [ { "start": 249, "end": 271, "text": "Tsuruoka et al. (2009)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Two-stage L1-regularized Training", "sec_num": "3.4" }, { "text": "This method can also be motivated from an alternative view. In the theory of convex optimization (Boyd and Vandenberghe, 2004), the asymptotic convergence rate of Newton's method is quadratic if we start at a point close enough to the global optimum. 1 In fact, the iterations of Newton's method fall into two stages. The second stage, which occurs once the search point is quite close to the optimal solution, is called the \"quadratically convergent stage\". The first stage is usually referred to as the \"damped Newton phase\", because the algorithm may choose a step size different from the exact Newton step in order to satisfy the backtracking condition. 2 The quadratically convergent stage is also called the \"pure Newton phase\", since the full Newton step is always chosen in these iterations. This demonstrates that finding a solution close to the global optimum will not only increase the average convergence rate, but also reduce the time consumed by the backtracking line search. 
This is what the first, SGD-based stage of our training achieves.", "cite_spans": [ { "start": 248, "end": 249, "text": "1", "ref_id": null }, { "start": 654, "end": 655, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Two-stage L1-regularized Training", "sec_num": "3.4" }, { "text": "Another problem of the SGD method is the troublesome tuning of the learning rate parameters, which have a significant influence on the result of SGD training. No matter how the learning rate is set, if it is fixed without taking into consideration the actual effect of the current training sample's update, it may be too large or too small in some situations. To obtain a more robust and stable method for learning rate scheduling, we present a heuristic line search strategy for learning rate calibration, inspired by the implementation of CRFsgd (Bottou, 2007) .", "cite_spans": [ { "start": 531, "end": 545, "text": "(Bottou, 2007)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Heuristic Line Search", "sec_num": "3.5" }, { "text": "For convenience, we define the objective function for a single sample as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristic Line Search", "sec_num": "3.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "l(λ, x) = − log p(y|x) + (C/N) Σ_{λ_k ∈ x} |λ_k|.", "eq_num": "(14)" } ], "section": "Heuristic Line Search", "sec_num": "3.5" }, { "text": "Notice that we only use the active features of the current sample in the L1 regularization term, since we only update the associated parameters in a lazy fashion. Now we try to find the learning rate that decreases the value of this objective function as much as possible without consuming too much search time. 
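One way to realize such a calibration is to probe a few candidate rates on the single-sample objective and keep the best one. The sketch below uses hypothetical stand-ins (`single_obj` for Eq. (14), a plain gradient step for the weight update) and only illustrates the search pattern, not the exact procedure of this paper:

```python
def heuristic_rate(lam, grad, single_obj, r0, max_trials=3, decay=0.5):
    """Probe a few learning rates around r0 and return the best one.

    single_obj(weights) is a stand-in for the per-sample objective l(lam, x);
    grad is the per-sample gradient. Each trial applies a tentative update,
    evaluates the objective, then discards the trial update.
    """
    def obj_at(r):
        trial = [w - r * g for w, g in zip(lam, grad)]
        return single_obj(trial)

    best_r, best_obj = r0, obj_at(r0)
    r, growing = r0, True
    for _ in range(max_trials):
        r = r / decay if growing else r * decay
        o = obj_at(r)
        if o < best_obj:
            best_r, best_obj = r, o
        elif growing:
            # A larger rate made things worse: restart, searching below r0.
            growing, r = False, r0
    return best_r
```

With only a handful of trials per sample, the extra cost stays small because evaluating the single-sample objective touches only the features active in the current sample.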
We use a simple heuristic line search strategy as follows: (1) We use the learning rate calculated by Eq. 12 as the initial rate and compute the initial value of Eq. 14. (2) We then keep increasing the learning rate until the maximum number of search trials is reached or the value of Eq. 14 becomes worse than the initial value; in the latter case, we instead decrease the learning rate from the initial rate. (3) Finally, we use the learning rate that yields the best value of Eq. 14 during the search. Since the calculation of Eq. 14 only needs the weights of the features that are used in the current sample, it is still very efficient. The whole algorithm is shown in pseudo-code in Algorithm 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristic Line Search", "sec_num": "3.5" }, { "text": "Algorithm 1 SGD with heuristic line search 1. for k = 0 to MaxIterations 2. Select sample j 3. bestr ← LearningRate(k) 4. UpdateWeights(j, bestr) 5. procedure LearningRate(k) 6. r(0) ← initialized by Eq. 12 7. init_obj ← initialized by Eq. 14 8. 
for i = 0 to MaxTrialTime 9.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristic Line Search", "sec_num": "3.5" }, { "text": "UpdateWeights(j, r(i)) 10.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristic Line Search", "sec_num": "3.5" }, { "text": "obj(i) ← Eq. 14 11.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristic Line Search", "sec_num": "3.5" }, { "text": "Recover weights before update 12.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristic Line Search", "sec_num": "3.5" }, { "text": "if obj(i) is worse than init_obj then 13.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristic Line Search", "sec_num": "3.5" }, { "text": "set flag 14.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristic Line Search", "sec_num": "3.5" }, { "text": "if flag status is changed then 15.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristic Line Search", "sec_num": "3.5" }, { "text": "r(i + 1) ← r(0) * decay 16. else 17.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristic Line Search", "sec_num": "3.5" }, { "text": "if flag is not set then 18.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristic Line Search", "sec_num": "3.5" }, { "text": "r(i + 1) ← r(i)/decay 19. else 20. r(i + 1) ← r(i) * decay 21. bestr = argmin_{r(i)} obj(i) 22. 
return bestr.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristic Line Search", "sec_num": "3.5" }, { "text": "It should be noted that the maximum trial time need not be set to a large number: a large value generally costs a lot of search time and may not yield a good result, instead arriving at a local optimum, since we only optimize the objective value of a single training sample. Here we set the value to 3 empirically. The changing rate for the line search can be any positive number smaller than 1; it is set to 0.5, as commonly accepted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristic Line Search", "sec_num": "3.5" }, { "text": "We evaluate the effectiveness and performance of our training algorithm on two NLP tasks, Chinese word segmentation and named entity recognition, which are very typical problems in the field of NLP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "To show the improvement of our algorithm, we compare it with the OWL-QN algorithm and the SGD algorithm on the same data sets. For a fair run-time comparison, we implemented all the algorithms in a quite similar way, especially in feature extraction and gradient computation. For example, we compute the forward/backward scores in the logarithm domain instead of using the scaling", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Table 1: Feature templates. (1) c_{i−1} y_i , c_i y_i , c_{i+1} y_i (2) c_{i−1} c_i y_i , c_i c_{i+1} y_i , c_{i−1} c_{i+1} y_i (3) y_{i−1} y_i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "method, though the latter method was claimed to be faster (Lavergne et al., 2010) . 
All experiments were performed on a server with Xeon 2.66GHz.", "cite_spans": [ { "start": 52, "end": 75, "text": "(Lavergne et al., 2010)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "The first set of experiments used the Chinese word segmentation corpus from the Second International SIGHAN Bakeoff data sets (Emerson, 2005), provided by Peking University. The training data consists of 19,054 sentences, 1,109,947 Chinese words and 1,826,448 Chinese characters; the testing data consists of 1,944 sentences, 104,372 Chinese words and 172,733 Chinese characters. We separated 1,000 sentences from the training data and used them as heldout data. The test data was used only for the final accuracy report.", "cite_spans": [ { "start": 126, "end": 141, "text": "(Emerson, 2005)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Chinese Word Segmentation", "sec_num": "4.1" }, { "text": "The feature templates used in this experiment are listed in Table 1, where c_i denotes the i-th Chinese character of an instance and y_i the i-th label. Huang and Zhao (2007) showed that the 6-label representation is a better choice in practice: compared with the original 2-label or 4-label representations, it encodes richer label information. We did not use any extra knowledge such as Chinese and Arabic numbers.", "cite_spans": [ { "start": 200, "end": 221, "text": "Huang and Zhao (2007)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 64, "end": 71, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Chinese Word Segmentation", "sec_num": "4.1" }, { "text": "For OWL-QN method and SGD method, we followed the experiment settings in (Tsuruoka et al., 2009). 
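Instantiating such character-window templates is straightforward; the sketch below is an illustration only (the feature-string format and the "#" padding symbol are our own assumptions, not the paper's implementation):

```python
def extract_features(chars, i):
    """Instantiate character-window templates at position i.

    Unigram templates c[i-2]..c[i+2] and bigram templates
    c[i-1]c[i], c[i]c[i+1]; each is conjoined with the label y_i
    by the CRF at training time.  "#" pads out-of-range positions.
    """
    def c(k):
        j = i + k
        return chars[j] if 0 <= j < len(chars) else "#"
    feats = [f"U{k}={c(k)}" for k in (-2, -1, 0, 1, 2)]
    feats += [f"B{k}={c(k)}{c(k + 1)}" for k in (-1, 0)]
    return feats
```

Each returned string names its template (U for unigram, B for bigram) so that the same character observed under different templates yields distinct features.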
The meta-parameters for the OWL-QN method were the same as the default settings of the optimizer: the convergence tolerance was 1e-4 and the L-BFGS memory parameter was 10. The regularization parameter C was tuned to maximize the log-likelihood of the heldout data when using the OWL-QN algorithm, and we used the same value as the regularization parameter in the SGD method. The learning rate parameters for SGD were tuned to optimize the value of the objective function within 30 passes: we first set the initial learning rate parameter (\u03b7 0) by testing 1.0, 0.5, 0.2, and 0.1, and then set the decay parameter (\u03b1) by testing 0.9, 0.85, and 0.8 with the initial learning rate fixed. For our method, we first measured the progress of the SGD algorithm with the heuristic line search presented above against the original SGD method, using the same parameter settings for both, including the regularization parameter and the learning rate parameters; the number of passes over the training data was also set to 30. The results of both methods during training are shown in Figure 1 and Figure 2. Figure 1 shows how the value of the objective function changed as training proceeded with the same decay parameter (\u03b1 = 0.85); the figure contains six curves, representing the SGD method with heuristic line search (\"HLS\") and the original SGD method under different initial learning rate settings (\u03b7 0). The results show that SGD with heuristic line search converges better and is more robust than the original SGD method under the same learning rate settings. Figure 2 shows the results with different decay settings (fixed \u03b7 0 = 1), and it demonstrates the same trend as Figure 1. 
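The role of the two learning rate parameters can be illustrated with an exponential decay schedule. We assume here the common form eta_k = eta0 * alpha**(k / N), where k counts updates and N is the number of training samples, so the rate shrinks by a factor of alpha once per pass; the paper's exact schedule may differ.

```python
def learning_rate(eta0, alpha, k, n_samples):
    """Exponentially decayed SGD learning rate (an assumed schedule):
    eta0 * alpha**(k / N), i.e. one factor of alpha per full pass
    over the N training samples."""
    return eta0 * alpha ** (k / n_samples)
```

For example, with eta0 = 1.0 and alpha = 0.85, the rate after one full pass is 0.85, and after 30 passes it has shrunk to about 0.85**30, roughly 0.008, which is why poorly chosen (eta0, alpha) pairs can stall or destabilize plain SGD.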
Then we trained the models on the training data and evaluated the accuracy of the Chinese word segmenter on the test data. The number of passes over the training data in SGD was again set to 30. In our method, we set the number of SGD iterations to 5. It is worth noting that we did not spend much time tuning this parameter: from a cursory view of the training process, we found that it converges to a relatively \"good\" result after the first 5 iterations, and we used this value throughout all the experiments. Because the OWL-QN stage guarantees the final convergence, and the SGD method with heuristic line search is insensitive to the learning rate parameters, this value does not have a significant influence on the performance.", "cite_spans": [ { "start": 73, "end": 96, "text": "(Tsuruoka et al., 2009)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 1314, "end": 1322, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1327, "end": 1335, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 1338, "end": 1346, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1913, "end": 1921, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 2047, "end": 2055, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Chinese Word Segmentation", "sec_num": "4.1" }, { "text": "The results are shown in Table 2 and Table 3. In Table 2, the second column shows the final value of the objective function, the third column shows the number of active features in the final model, and the fourth column shows the F score of the Chinese word segmentation results, which is the harmonic mean of precision P (the percentage of output Chinese words that exactly match the gold-standard words) and recall R (the percentage of gold-standard Chinese words returned by our system). 
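The F score just defined can be computed from exactly matching word spans; a minimal sketch, representing each word as a (start, end) character-offset pair (the span representation is our own choice):

```python
def f_score(gold_spans, pred_spans):
    """F1 over exactly matching word spans: the harmonic mean of
    precision (correct / #predicted) and recall (correct / #gold)."""
    gold, pred = set(gold_spans), set(pred_spans)
    correct = len(gold & pred)
    if correct == 0:
        return 0.0
    p = correct / len(pred)
    r = correct / len(gold)
    return 2 * p * r / (p + r)
```

Counting only exact span matches means a single wrong boundary penalizes both of the words it splits or merges, which is the standard Bakeoff scoring convention.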
In Table 3, the second column shows the number of passes performed in training; for our method, this value includes the number of", "cite_spans": [], "ref_spans": [ { "start": 25, "end": 44, "text": "Table 2 and Table 3", "ref_id": "TABREF1" }, { "start": 50, "end": 57, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 506, "end": 513, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Chinese Word Segmentation", "sec_num": "4.1" }, { "text": "(1) c_{i-2}y_i, c_{i-1}y_i, c_iy_i, c_{i+1}y_i, c_{i+2}y_i; (2) c_{i-1}c_iy_i, c_ic_{i+1}y_i; (3) y_{i-1}y_i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese Word Segmentation", "sec_num": "4.1" }, { "text": "passes in both the first stage of SGD and the second stage of OWL-QN. The third column shows the training time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese Word Segmentation", "sec_num": "4.1" }, { "text": "In terms of accuracy, there was no significant difference among the models; the original SGD method yielded a slightly better result, probably because its model has a larger feature set. This does not contradict our original purpose, as we obtained substantially improved results in both the final value of the objective function and the number of active features compared with the original SGD method, on par with the OWL-QN method. Note that the original feature set contains over 6 million features, so the models produced by L1 regularization are indeed compact. The official best result in the closed test achieved an F score of 95.00; our result is quite close to that, and would rank 4th among the 23 official runs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese Word Segmentation", "sec_num": "4.1" }, { "text": "On the other hand, our method took about 30% less training time than the OWL-QN method. 
Our method needs only 88 passes over the whole training data in the second stage to converge, compared with 141 in the OWL-QN method, a significant improvement in training time, because the first stage of SGD has already produced a nearly optimal and stable result beforehand.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese Word Segmentation", "sec_num": "4.1" }, { "text": "The second set of experiments used the named entity recognition corpus from the Fourth International SIGHAN Bakeoff data sets (Jin and Chen, 2008), provided by Microsoft Research Asia. The training data consists of 23,182 sentences and 1,089,050 Chinese characters; the testing data consists of 4,636 sentences and 219,197 Chinese characters. We separated 1,000 sentences from the training data and used them as heldout data. The training data is annotated with \"IOB\" tags representing named entities of three types: person, location and organization.", "cite_spans": [ { "start": 125, "end": 145, "text": "(Jin and Chen, 2008)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Name Entity Recognition", "sec_num": "4.2" }, { "text": "The feature templates used in this experiment are listed in Table 4. Note that, for convenience, we did not change the label representation of the original training data; again, a richer label representation might yield better performance. The other experimental settings are the same as in the Chinese word segmentation experiment. The comparison results are shown in Table 5, Figure 3 and Figure 4. The trend is the same as for the Chinese word segmentation task: the SGD method with heuristic line search strategy produced more stable and robust results than the original SGD method. Although there are occasional fluctuations (see Figure 4), the line search strategy is able to find an appropriate step size in those cases. 
Again, our method converged to a much better solution than SGD in both the final value of the objective function and the number of active features, and took about 40% less training time than OWL-QN.", "cite_spans": [], "ref_spans": [ { "start": 64, "end": 71, "text": "Table 4", "ref_id": "TABREF3" }, { "start": 370, "end": 377, "text": "Table 5", "ref_id": "TABREF4" }, { "start": 380, "end": 388, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 393, "end": 401, "text": "Figure 4", "ref_id": "FIGREF3" }, { "start": 654, "end": 663, "text": "Figure 4)", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Name Entity Recognition", "sec_num": "4.2" }, { "text": "The accuracy results are shown in Table 6; again, there was no significant difference among the models. The F score for organization entities was lower than for person and location entities, because organization names in Chinese often involve relatively long-distance dependencies, which are hard to capture with our local character-level feature templates.", "cite_spans": [], "ref_spans": [ { "start": 40, "end": 48, "text": "Table 6", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Name Entity Recognition", "sec_num": "4.2" }, { "text": "We have presented a two-stage algorithm that can efficiently train L1-regularized CRFs. Experiments on two NLP tasks demonstrated that our method is effective and efficient, utilizing the advantages of both SGD and OWL-QN. In the future, we intend to study how to use the results of the first stage of SGD learning to estimate the Hessian information, which can be provided to the second stage of quasi-Newton optimization to enhance the effectiveness of training. Bordes et al. (2009) looked into this problem in a similar way. 
It is also worthwhile to investigate whether other adaptive learning rate scheduling algorithms can result in faster training with our method, as in (Vishwanathan et al., 2006).", "cite_spans": [ { "start": 428, "end": 449, "text": "Bordes et al. (2009)", "ref_id": "BIBREF1" }, { "start": 640, "end": 667, "text": "(Vishwanathan et al., 2006)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "The quasi-Newton method shares many properties with Newton's method, though its convergence rate is generally superlinear rather than quadratic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "To ensure that the objective function decreases by a certain amount and to guarantee convergence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors wish to thank the anonymous reviewers for their helpful suggestions. This work was partially funded by NSFC (No. 61003091 and No. 61073069) and the Shanghai Committee of Science and Technology (No. 10511500703).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Scalable training of l1-regularized log-linear models", "authors": [ { "first": "Galen", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the International Conference on Machine Learning", "volume": "", "issue": "", "pages": "33--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Galen Andrew and Jianfeng Gao. 2007. Scalable training of l1-regularized log-linear models. In Proceedings of the International Conference on Machine Learning, pages 33-40. 
Corvallis, Oregon, USA.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "SGD-QN: Careful quasi-Newton stochastic gradient descent", "authors": [ { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Gallinari", "suffix": "" } ], "year": 2009, "venue": "The Journal of Machine Learning Research", "volume": "10", "issue": "", "pages": "1737--1754", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antoine Bordes, L\u00e9on Bottou, and Patrick Gallinari. 2009. SGD-QN: Careful quasi-Newton stochastic gradient descent. In The Journal of Machine Learning Research, 10: 1737-1754.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Stochastic gradient descent (sgd) implementation", "authors": [ { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L\u00e9on Bottou. 2007. Stochastic gradient descent (sgd) implementation. http://leon.bottou.org/projects/sgd.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Convex Optimization", "authors": [ { "first": "Stephen", "middle": [], "last": "Boyd", "suffix": "" }, { "first": "Lieven", "middle": [], "last": "Vandenberghe", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Boyd and Lieven Vandenberghe. 2004. Convex Optimization. 
Cambridge University Press.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Exponentiated gradient algorithms for conditional random fields and max-margin markov networks", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Globerson", "suffix": "" }, { "first": "Terry", "middle": [], "last": "Koo", "suffix": "" }, { "first": "Xavier", "middle": [], "last": "Carreras", "suffix": "" }, { "first": "Peter", "middle": [ "L" ], "last": "Bartlett", "suffix": "" } ], "year": 2008, "venue": "The Journal of Machine Learning Research", "volume": "9", "issue": "", "pages": "1775--1822", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins, Amir Globerson, Terry Koo, Xavier Carreras, and Peter L. Bartlett. 2008. Exponentiated gradient algorithms for conditional random fields and max-margin markov networks. In The Journal of Machine Learning Research, 9: 1775-1822.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Note on learning rate schedules for stochastic optimization", "authors": [ { "first": "Christian", "middle": [], "last": "Darken", "suffix": "" }, { "first": "John", "middle": [], "last": "Moody", "suffix": "" } ], "year": 1990, "venue": "Proceedings of Advances in Neural Information Processing Systems", "volume": "3", "issue": "", "pages": "832--838", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian Darken and John Moody. 1990. Note on learning rate schedules for stochastic optimization. In Proceedings of Advances in Neural Information Processing Systems 3, pages 832-838. 
Colorado, USA.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The second international Chinese word segmentation bakeoff", "authors": [ { "first": "Tom", "middle": [ "Emerson" ], "last": "", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Fourth SIGHAN Workshop on Chinese Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Emerson. 2005. The second international Chinese word segmentation bakeoff. In Proceedings of Fourth SIGHAN Workshop on Chinese Language Processing. Korea.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Efficient, feature-based, conditional random field parsing", "authors": [ { "first": "Jenny", "middle": [ "Rose" ], "last": "Finkel", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Kleeman", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "959--967", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jenny Rose Finkel, Alex Kleeman, and Christopher D. Manning. 2008. Efficient, feature-based, conditional random field parsing. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 959-967. 
Columbus, Ohio, USA.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A comparative study of parameter estimation methods for statistical natural language processing", "authors": [ { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Galen", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "824--831", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jianfeng Gao, Galen Andrew, Mark Johnson, and Kristina Toutanova. 2007. A comparative study of parameter estimation methods for statistical natural language processing. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 824-831. Prague, Czech Republic.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Chinese word segmentation: A decade review", "authors": [ { "first": "Changning", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2007, "venue": "Journal of Chinese Information Processing", "volume": "21", "issue": "", "pages": "8--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "Changning Huang and Hai Zhao. 2007. Chinese word segmentation: A decade review. 
In Journal of Chinese Information Processing, 21(3): 8-19.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Training conditional random fields by periodic step size adaptation for large-scale text mining", "authors": [ { "first": "Han-Shen", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Yu-Ming", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Chun-Nan", "middle": [], "last": "Hsu", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the IEEE International Conference on Data Mining", "volume": "", "issue": "", "pages": "511--516", "other_ids": {}, "num": null, "urls": [], "raw_text": "Han-Shen Huang, Yu-Ming Chang, and Chun-Nan Hsu. 2007. Training conditional random fields by periodic step size adaptation for large-scale text mining. In Proceedings of the IEEE International Conference on Data Mining, pages 511-516. Omaha, Nebraska, USA.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The fourth international Chinese language processing bakeoff: Chinese word segmentation, named entity recognition and Chinese pos tagging", "authors": [ { "first": "Guangjin", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Xiao", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2008, "venue": "Proceedings of Sixth SIGHAN Workshop on Chinese Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guangjin Jin and Xiao Chen. 2008. The fourth international Chinese language processing bakeoff: Chinese word segmentation, named entity recognition and Chinese pos tagging. In Proceedings of Sixth SIGHAN Workshop on Chinese Language Processing. 
India.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Evaluation and extension of maximum entropy models with inequality constraints", "authors": [ { "first": "Junichi", "middle": [], "last": "Kazama", "suffix": "" }, { "first": "Junichi", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "137--144", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junichi Kazama and Junichi Tsujii. 2003. Evaluation and extension of maximum entropy models with inequality constraints. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 137-144.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Conditional random fields: probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "John", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "McCallum", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the International Conference on Machine Learning", "volume": "", "issue": "", "pages": "282--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: probabilistic models for segmenting and labeling sequence data. In Proceedings of the International Conference on Machine Learning, pages 282-289. 
Williamstown, MA, USA.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Practical very large scale CRFs", "authors": [ { "first": "Thomas", "middle": [], "last": "Lavergne", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Capp\u00e9", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Yvon", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "504--513", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Lavergne, Olivier Capp\u00e9, and Fran\u00e7ois Yvon. 2010. Practical very large scale CRFs. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 504-513. Uppsala, Sweden.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "On the limited memory BFGS method for large scale optimization", "authors": [ { "first": "Dong", "middle": [ "C" ], "last": "Liu", "suffix": "" }, { "first": "Jorge", "middle": [], "last": "Nocedal", "suffix": "" } ], "year": 1989, "venue": "Mathematical Programming", "volume": "45", "issue": "", "pages": "503--528", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dong C. Liu and Jorge Nocedal. 1989. On the limited memory BFGS method for large scale optimization. 
Mathematical Programming, 45: 503-528.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Maximum Entropy Markov Models for Information Extraction and Segmentation", "authors": [ { "first": "Andrew", "middle": [], "last": "McCallum", "suffix": "" }, { "first": "Dayne", "middle": [], "last": "Freitag", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the International Conference on Machine Learning", "volume": "", "issue": "", "pages": "591--598", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew McCallum, Dayne Freitag, and Fernando Pereira. 2000. Maximum Entropy Markov Models for Information Extraction and Segmentation. In Proceedings of the International Conference on Machine Learning, pages 591-598. California, USA.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "CRFsuite: A fast implementation of conditional random fields (CRFs)", "authors": [ { "first": "Naoaki", "middle": [], "last": "Okazaki", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Naoaki Okazaki. 2007. CRFsuite: A fast implementation of conditional random fields (CRFs). http://www.chokkan.org/software/crfsuite/.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Accurate Information Extraction from Research Papers using Conditional Random Fields", "authors": [ { "first": "Fuchun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "McCallum", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "329--336", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fuchun Peng and Andrew McCallum. 2004. Accurate Information Extraction from Research Papers using Conditional Random Fields. 
In Proceedings of the North American Chapter of the Association for Computational Linguistics, pages 329-336. Massachusetts, USA.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition", "authors": [ { "first": "Lawrence", "middle": [], "last": "Rabiner", "suffix": "" } ], "year": 1989, "venue": "Proceedings of the IEEE", "volume": "77", "issue": "", "pages": "257--286", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lawrence Rabiner. 1989. A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. In Proceedings of the IEEE, 77(2): 257-286.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "An introduction to conditional random fields for relational learning. Introduction to Statistical Relational Learning", "authors": [ { "first": "Charles", "middle": [], "last": "Sutton", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "McCallum", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles Sutton and Andrew McCallum. 2006. An introduction to conditional random fields for relational learning. Introduction to Statistical Relational Learning. The MIT Press.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Joint learning improves semantic role labeling", "authors": [ { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Aria", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "589--596", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristina Toutanova, Aria Haghighi, and Christopher Manning. 2005. Joint learning improves semantic role labeling. 
In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 589-596. Michigan, USA.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Stochastic gradient descent training for l1-regularized log-linear models with cumulative penalty", "authors": [ { "first": "Yoshimasa", "middle": [], "last": "Tsuruoka", "suffix": "" }, { "first": "Junichi", "middle": [], "last": "Tsujii", "suffix": "" }, { "first": "Sophia", "middle": [], "last": "Ananiadou", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "477--485", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshimasa Tsuruoka, Junichi Tsujii, and Sophia Ananiadou. 2009. Stochastic gradient descent training for l1-regularized log-linear models with cumulative penalty. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 477-485. Suntec, Singapore.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Feature Selection in Conditional Random Fields for Activity Recognition", "authors": [ { "first": "Douglas", "middle": [], "last": "Vail", "suffix": "" }, { "first": "John", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "Manuela", "middle": [], "last": "Veloso", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems", "volume": "", "issue": "", "pages": "3379--3384", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douglas Vail, John Lafferty, and Manuela Veloso. 2007. Feature Selection in Conditional Random Fields for Activity Recognition. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 3379-3384. 
California, USA.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Accelerated training of conditional random fields with stochastic gradient methods", "authors": [ { "first": "S", "middle": [ "V N" ], "last": "Vishwanathan", "suffix": "" }, { "first": "Nicol", "middle": [ "N" ], "last": "Schraudolph", "suffix": "" }, { "first": "Mark", "middle": [ "W" ], "last": "Schmidt", "suffix": "" }, { "first": "Kevin", "middle": [ "P" ], "last": "Murphy", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the International Conference on Machine Learning", "volume": "", "issue": "", "pages": "969--976", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. V. N. Vishwanathan, Nicol N. Schraudolph, Mark W. Schmidt, and Kevin P. Murphy. 2006. Accelerated training of conditional random fields with stochastic gradient methods. In Proceedings of the International Conference on Machine Learning, pages 969-976. Pittsburgh, Pennsylvania, USA.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Bakeoff 2005 Chinese word segmentation task: Objective function with fixed \u03b1.", "uris": null, "num": null }, "FIGREF1": { "type_str": "figure", "text": "Bakeoff 2005 Chinese word segmentation task: Objective function with fixed \u03b7 0.", "uris": null, "num": null }, "FIGREF2": { "type_str": "figure", "text": "Fourth SIGHAN Bakeoff name entity recognition task: Objective function with fixed \u03b1.", "uris": null, "num": null }, "FIGREF3": { "type_str": "figure", "text": "Fourth SIGHAN Bakeoff name entity recognition task: Objective function with fixed \u03b7 0.", "uris": null, "num": null }, "TABREF0": { "html": null, "text": "Feature templates for Chinese word segmentation task.", "type_str": "table", "num": null, "content": "" }, "TABREF1": { "html": null, "text": "Bakeoff 2005 Chinese word segmentation task. 
Accuracy of the model on the test data.", "type_str": "table", "num": null, "content": "
L + R # Features F score
OWL-QN 56451.81 14,942 94.81
SGD 61398.62 32,585 94.92
Ours 56481.91 17,374 94.78
" }, "TABREF2": { "html": null, "text": "Bakeoff 2005 Chinese word segmentation task. Training time of the model on the testdata.", "type_str": "table", "num": null, "content": "
Passes Time
OWL-QN 141 2h59min
SGD 30 58min
Ours 5 + 88 2h06min
" }, "TABREF3": { "html": null, "text": "Feature templates for name entity recognition task.", "type_str": "table", "num": null, "content": "" }, "TABREF4": { "html": null, "text": "Fourth SIGHAN Bakeoff name entity recognition task. Training time of the model on the testdata.", "type_str": "table", "num": null, "content": "
L + R Passes Time
OWL-QN 11247.1 219 5h26min
SGD 13993.3 30 1h08min
Ours 11245.5 5 + 122 3h10min
" }, "TABREF5": { "html": null, "text": "", "type_str": "table", "num": null, "content": "
Fourth SIGHAN Bakeoff name entity recognition task. Accuracy of the model on the test data.
# Feat. LOC ORG PER
OWL-QN 34,579 89.94 82.61 90.65
SGD 113,005 89.39 82.75 90.78
Ours 36,709 90.05 82.25 90.49
" } } } }