Dataset columns: id (string, 12–15 chars), title (string, 8–162 chars), content (string, 1–17.6k chars), prechunk_id (string, 0–15 chars), postchunk_id (string, 0–15 chars), arxiv_id (string, 10 chars), references (sequence, length 1).
1506.02626#22
Learning both Weights and Connections for Efficient Neural Networks
The figure shows how accuracy drops as parameters are pruned on a layer-by-layer basis. The CONV layers (on the left) are more sensitive to pruning than the fully connected layers (on the right). The first convolutional layer, which interacts with the input image directly, is most sensitive to pruning. We suspect this sensitivity is due to the input layer having only 3 channels and thus less redundancy than the other convolutional layers. We used the sensitivity results to find each layer's threshold: for example, the smallest threshold was applied to the most sensitive layer, which is the first convolutional layer. Storing the pruned layers as sparse matrices has a storage overhead of only 15.6%. Storing relative rather than absolute indices reduces the space taken by the FC layer indices to 5 bits. Similarly, CONV layer indices can be represented with only 8 bits.
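To make the relative-index idea concrete, the sketch below converts the absolute positions of surviving weights in a row into small deltas that fit a fixed bit budget. The overflow handling (emitting a zero-weight filler entry when a gap exceeds the 5-bit range) follows the scheme described in the companion Deep Compression work [14]; it is an illustrative assumption here, not something spelled out in this paragraph.

```python
import numpy as np

def relative_index_encode(abs_indices, weights, bits=5):
    """Encode sorted absolute indices of nonzero weights as small deltas.

    If a gap between consecutive nonzeros does not fit in `bits`, a padding
    entry with weight 0 is emitted so every stored delta stays in range.
    """
    max_delta = (1 << bits) - 1            # largest storable gap, e.g. 31 for 5 bits
    deltas, values = [], []
    prev = -1                              # position before the first element
    for idx, w in zip(abs_indices, weights):
        gap = idx - prev
        while gap > max_delta:             # bridge large gaps with zero-weight fillers
            deltas.append(max_delta)
            values.append(0.0)
            gap -= max_delta
            prev += max_delta
        deltas.append(gap)
        values.append(w)
        prev = idx
    return np.array(deltas, dtype=np.uint8), np.array(values, dtype=np.float32)

# Example: three surviving weights in a length-100 row
d, v = relative_index_encode([3, 40, 90], [0.7, -0.2, 0.5])
```

Decoding simply accumulates the deltas to recover absolute positions, skipping entries whose stored weight is zero.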
1506.02626#21
1506.02626#23
1506.02626
[ "1507.06149" ]
1506.02626#23
Learning both Weights and Connections for Efficient Neural Networks
Table 6: Comparison with other model reduction methods on AlexNet. Data-free pruning [28] saved only 1.5× parameters with much loss of accuracy. Deep Fried Convnets [29] worked on fully connected layers only and reduced the parameters by less than 4×. [30] reduced the parameters by 4× with inferior accuracy. Naively cutting the layer size saves parameters but suffers from 4% loss of accuracy. [12] exploited the linear structure of convnets and compressed each layer individually, where model compression on a single layer incurred 0.9% accuracy penalty with biclustering + SVD.

| Network | Top-1 Error | Top-5 Error | Parameters | Compression Rate |
|---|---|---|---|---|
| Baseline Caffemodel [26] | 42.78% | 19.73% | 61.0M | 1× |
| Data-free pruning [28] | 44.40% | - | 39.6M | 1.5× |
| Fastfood-32-AD [29] | 41.93% | - | 32.8M | 2× |
| Fastfood-16-AD [29] | 42.90% | - | 16.4M | 3.7× |
| Collins & Kohli [30] | 44.40% | - | 15.2M | 4× |
| Naive Cut | 47.18% | 23.23% | 13.8M | 4.4× |
| SVD [12] | 44.02% | 20.56% | 11.9M | 5× |
| Network Pruning | 42.77% | 19.67% | 6.7M | 9× |
1506.02626#22
1506.02626#24
1506.02626
[ "1507.06149" ]
1506.02626#24
Learning both Weights and Connections for Efficient Neural Networks
1.5à 2à 3.7à 4à 4.4à 5à 9à # Count Figure 7: Weight distribution before and after parameter pruning. The right ï¬ gure has 10à smaller scale. After pruning, the storage requirements of AlexNet and VGGNet are are small enough that all weights can be stored on chip, instead of off-chip DRAM which takes orders of magnitude more energy to access (Table 1). We are targeting our pruning method for ï¬ xed-function hardware specialized for sparse DNN, given the limitation of general purpose hardware on sparse computation. Figure 7 shows histograms of weight distribution before (left) and after (right) pruning. The weight is from the ï¬
1506.02626#23
1506.02626#25
1506.02626
[ "1507.06149" ]
1506.02626#25
Learning both Weights and Connections for Efficient Neural Networks
fully connected layer of AlexNet. The two panels have different y-axis scales. The original distribution of weights is centered on zero with tails dropping off quickly. Almost all parameters are between [−0.015, 0.015]. After pruning the large center region is removed. The network parameters adjust themselves during the retraining phase. The result is that the parameters form a bimodal distribution and become more spread across the x-axis, between [−0.025, 0.025].

# 6 Conclusion

We have presented a method to improve the energy efficiency and storage of neural networks without affecting accuracy by finding the right connections. Our method, motivated in part by how learning works in the mammalian brain, operates by learning which connections are important, pruning the unimportant connections, and then retraining the remaining sparse network. We highlight our experiments on AlexNet and VGGNet on ImageNet, showing that both fully connected layers and convolutional layers can be pruned, reducing the number of connections by 9×
1506.02626#24
1506.02626#26
1506.02626
[ "1507.06149" ]
1506.02626#26
Learning both Weights and Connections for Efficient Neural Networks
to 13à without loss of accuracy. This leads to smaller memory capacity and bandwidth requirements for real-time image processing, making it easier to be deployed on mobile systems. # References [1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬ cation with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097â 1105, 2012. 8 [2] Alex Graves and J¨urgen Schmidhuber.
1506.02626#25
1506.02626#27
1506.02626
[ "1507.06149" ]
1506.02626#27
Learning both Weights and Connections for Efficient Neural Networks
Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5):602–610, 2005.
[3] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. JMLR, 12:2493–2537, 2011.
[4] Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[5] Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, and Lior Wolf. Deepface: Closing the gap to human-level performance in face verification.
1506.02626#26
1506.02626#28
1506.02626
[ "1507.06149" ]
1506.02626#28
Learning both Weights and Connections for Efficient Neural Networks
In CVPR, pages 1701–1708. IEEE, 2014.
[6] Adam Coates, Brody Huval, Tao Wang, David Wu, Bryan Catanzaro, and Andrew Ng. Deep learning with COTS HPC systems. In 30th ICML, pages 1337–1345, 2013.
[7] Mark Horowitz. Energy table for 45nm process, Stanford VLSI wiki.
[8] JP Rauschecker.
1506.02626#27
1506.02626#29
1506.02626
[ "1507.06149" ]
1506.02626#29
Learning both Weights and Connections for Efficient Neural Networks
Neuronal mechanisms of developmental plasticity in the cat's visual system. Human Neurobiology, 3(2):109–114, 1983.
[9] Christopher A Walsh. Peter Huttenlocher (1931–2013). Nature, 502(7470):172, 2013.
[10] Misha Denil, Babak Shakibi, Laurent Dinh, Nando de Freitas, et al. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems, pages 2148–2156, 2013.
[11] Vincent Vanhoucke, Andrew Senior, and Mark Z Mao.
1506.02626#28
1506.02626#30
1506.02626
[ "1507.06149" ]
1506.02626#30
Learning both Weights and Connections for Efficient Neural Networks
Improving the speed of neural networks on CPUs. In Proc. Deep Learning and Unsupervised Feature Learning NIPS Workshop, 2011.
[12] Emily L Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In NIPS, pages 1269–1277, 2014.
[13] Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev.
1506.02626#29
1506.02626#31
1506.02626
[ "1507.06149" ]
1506.02626#31
Learning both Weights and Connections for Efficient Neural Networks
Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
[14] Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015.
[15] Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.
[16] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.
[17] Stephen José Hanson and Lorien Y Pratt. Comparing biases for minimal network construction with back-propagation. In Advances in Neural Information Processing Systems, pages 177–185, 1989.
[18] Yann Le Cun, John S. Denker, and Sara A. Solla.
1506.02626#30
1506.02626#32
1506.02626
[ "1507.06149" ]
1506.02626#32
Learning both Weights and Connections for Efficient Neural Networks
Optimal brain damage. In Advances in Neural Information Processing Systems, pages 598–605. Morgan Kaufmann, 1990.
[19] Babak Hassibi, David G Stork, et al. Second order derivatives for network pruning: Optimal brain surgeon. In Advances in Neural Information Processing Systems, pages 164–164, 1993.
[20] Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. arXiv preprint arXiv:1504.04788, 2015.
[21] Qinfeng Shi, James Petterson, Gideon Dror, John Langford, Alex Smola, and SVN Vishwanathan. Hash kernels for structured data. The Journal of Machine Learning Research, 10:2615–2637, 2009.
[22] Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg.
1506.02626#31
1506.02626#33
1506.02626
[ "1507.06149" ]
1506.02626#33
Learning both Weights and Connections for Efficient Neural Networks
Feature hashing for large scale multitask learning. In ICML, pages 1113–1120. ACM, 2009.
[23] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 15:1929–1958, 2014.
[24] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson.
1506.02626#32
1506.02626#34
1506.02626
[ "1507.06149" ]
1506.02626#34
Learning both Weights and Connections for Efficient Neural Networks
How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pages 3320–3328, 2014.
[25] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 5(2):157–166, 1994.
[26] Yangqing Jia, et al. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[27] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
[28] Suraj Srinivas and R Venkatesh Babu. Data-free parameter pruning for deep neural networks. arXiv preprint arXiv:1507.06149, 2015.
[29] Zichao Yang, Marcin Moczulski, Misha Denil, Nando de Freitas, Alex Smola, Le Song, and Ziyu Wang. Deep fried convnets. arXiv preprint arXiv:1412.7149, 2014.
[30] Maxwell D Collins and Pushmeet Kohli. Memory bounded deep convolutional networks. arXiv preprint arXiv:1412.1442, 2014.
1506.02626#33
1506.02626#35
1506.02626
[ "1507.06149" ]
1506.02626#35
Learning both Weights and Connections for Efficient Neural Networks
1506.02626#34
1506.02626
[ "1507.06149" ]
1506.01186#0
Cyclical Learning Rates for Training Neural Networks
arXiv:1506.01186v6 [cs.CV] 4 Apr 2017

# Cyclical Learning Rates for Training Neural Networks

Leslie N. Smith
U.S. Naval Research Laboratory, Code 5514
4555 Overlook Ave., SW., Washington, D.C. 20375
leslie.smith@nrl.navy.mil

# Abstract
1506.01186#1
1506.01186
[ "1504.01716" ]
1506.01186#1
Cyclical Learning Rates for Training Neural Networks
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate "reasonable bounds":
1506.01186#0
1506.01186#2
1506.01186
[ "1504.01716" ]
1506.01186#2
Cyclical Learning Rates for Training Neural Networks
linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.

Figure 1. Classification accuracy while training CIFAR-10 (curves: an exponential schedule versus CLR, our approach; x-axis: iteration). The red curve shows the result of training with one of the new learning rate policies.

This paper demonstrates the surprising phenomenon that a varying learning rate during training is beneficial overall and thus proposes to let the global learning rate vary cyclically within a band of values instead of setting it to a fixed value. In addition, this cyclical learning rate (CLR) method practically eliminates the need to tune the learning rate yet achieves near optimal classification accuracy. Furthermore, unlike adaptive learning rates, the CLR methods require essentially no additional computation.

# 1. Introduction
1506.01186#1
1506.01186#3
1506.01186
[ "1504.01716" ]
1506.01186#3
Cyclical Learning Rates for Training Neural Networks
Deep neural networks are the basis of state-of-the-art results for image recognition [17, 23, 25], object detection [7], face recognition [26], speech recognition [8], machine translation [24], image caption generation [28], and driverless car technology [14]. However, training a deep neural network is a difficult global optimization problem. The potential benefits of CLR can be seen in Figure 1, which shows the test data classification accuracy of the CIFAR-10 dataset during training¹. The baseline (blue curve) reaches a final accuracy of 81.4% after 70,000 iterations. In contrast, it is possible to fully train the network using the CLR method instead of tuning (red curve) within 25,000 iterations and attain the same accuracy.
1506.01186#2
1506.01186#4
1506.01186
[ "1504.01716" ]
1506.01186#4
Cyclical Learning Rates for Training Neural Networks
A deep neural network is typically updated by stochastic gradient descent and the parameters $\theta$ (weights) are updated by $\theta_t = \theta_{t-1} - \epsilon_t \frac{\partial L}{\partial \theta}$, where $L$ is a loss function and $\epsilon_t$ is the learning rate. It is well known that too small a learning rate will make a training algorithm converge slowly while too large a learning rate will make the training algorithm diverge [2]. Hence, one must experiment with a variety of learning rates and schedules. Conventional wisdom holds that the learning rate should be a single value that monotonically decreases during training.

¹Hyper-parameters and architecture were obtained in April 2015 from caffe.berkeleyvision.org/gathered/examples/cifar10.html

The contributions of this paper are:

1. A methodology for setting the global learning rates for training neural networks that eliminates the need to perform numerous experiments to find the best values and schedule, with essentially no additional computation.

2. A surprising phenomenon is demonstrated: allowing the learning rate to rise and fall is beneficial overall even though it might temporarily harm the network's performance.

3.
1506.01186#3
1506.01186#5
1506.01186
[ "1504.01716" ]
1506.01186#5
Cyclical Learning Rates for Training Neural Networks
Cyclical learning rates are demonstrated with ResNets, Stochastic Depth networks, and DenseNets on the CIFAR-10 and CIFAR-100 datasets, and on ImageNet with two well-known architectures: AlexNet [17] and GoogLeNet [25].

# 2. Related work

The book "Neural Networks: Tricks of the Trade" is a terrific source of practical advice. In particular, Yoshua Bengio [2] discusses reasonable ranges for learning rates and stresses the importance of tuning the learning rate. A technical report by Breuel [3] provides guidance on a variety of hyper-parameters. There are also numerous websites giving practical suggestions for setting the learning rates.

Adaptive learning rates: Adaptive learning rates can be considered a competitor to cyclical learning rates because one can rely on local adaptive learning rates in place of global learning rate experimentation, but there is a significant computational cost in doing so. CLR does not incur this computational cost so it can be used freely.

A review of the early work on adaptive learning rates can be found in George and Powell [6]. Duchi, et al. [5] proposed AdaGrad, which is one of the early adaptive methods that estimates the learning rates from the gradients.

RMSProp is discussed in the slides by Geoffrey Hinton² [27]. RMSProp is described there as "Divide the learning rate for a weight by a running average of the magnitudes of recent gradients for that weight." RMSProp is a fundamental adaptive learning rate method that others have built on.

²www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf

Schaul et al. [22] discuss an adaptive learning rate based on a diagonal estimation of the Hessian of the gradients. One of the features of their method is that they allow their automatic method to decrease or increase the learning rate. However, their paper seems to limit the idea of increasing learning rate to non-stationary problems. On the other hand, this paper demonstrates that a schedule of increasing the learning rate is more universally valuable.

Zeiler [29] describes his AdaDelta method, which improves on AdaGrad based on two ideas: limiting the sum of squared gradients over all time to a limited window, and making the parameter update rule consistent with a units evaluation on the relationship between the update and the Hessian.

More recently, several papers have appeared on adaptive learning rates. Gulcehre and Bengio [9] propose an adaptive learning rate algorithm, called AdaSecant, that utilizes the
1506.01186#4
1506.01186#6
1506.01186
[ "1504.01716" ]
1506.01186#6
Cyclical Learning Rates for Training Neural Networks
Gulcehre and Bengio [9] propose an adaptive learning rate algorithm, called AdaSecant, that utilizes the 2www.cs.toronto.edu/ tijmen/csc321/slides/lecture slides lec6.pdf root mean square statistics and variance of the gradients. Dauphin et al. [4] show that RMSProp provides a biased estimate and go on to describe another estimator, named ESGD, that is unbiased. Kingma and Lei-Ba [16] introduce Adam that is designed to combine the advantages from Ada- Grad and RMSProp. Bache, et al. [1] propose exploiting solutions to a multi-armed bandit problem for learning rate selection. A summary and tutorial of adaptive learning rates can be found in a recent paper by Ruder [20]. Adaptive learning rates are fundamentally different from CLR policies, and CLR can be combined with adaptive learning rates, as shown in Section 4.1. In addition, CLR policies are computationally simpler than adaptive learning rates. CLR is likely most similar to the SGDR method [18] that appeared recently. # 3. Optimal Learning Rates # 3.1. Cyclical Learning Rates The essence of this learning rate policy comes from the observation that increasing the learning rate might have a short term negative effect and yet achieve a longer term ben- eï¬ cial effect. This observation leads to the idea of letting the learning rate vary within a range of values rather than adopt- ing a stepwise ï¬ xed or exponentially decreasing value. That is, one sets minimum and maximum boundaries and the learning rate cyclically varies between these bounds. Ex- periments with numerous functional forms, such as a trian- gular window (linear), a Welch window (parabolic) and a Hann window (sinusoidal) all produced equivalent results This led to adopting a triangular window (linearly increas- ing then linearly decreasing), which is illustrated in Figure 2, because it is the simplest function that incorporates this idea. The rest of this paper refers to this as the triangular learning rate policy. Maximum bound (max_Ir) Minimum bound - (base_Ir) stepsize Figure 2. Triangular learning rate policy. The blue lines represent learning rate values changing between bounds. The input parame- ter stepsize is the number of iterations in half a cycle.
1506.01186#5
1506.01186#7
1506.01186
[ "1504.01716" ]
1506.01186#7
Cyclical Learning Rates for Training Neural Networks
An intuitive understanding of why CLR methods work comes from considering the loss function topology. Dauphin et al. [4] argue that the difficulty in minimizing the loss arises from saddle points rather than poor local minima. Saddle points have small gradients that slow the learning process. However, increasing the learning rate allows more rapid traversal of saddle point plateaus. A more practical reason as to why CLR works is that, by following the methods in Section 3.3, it is likely the optimum learning rate will be between the bounds and near optimal learning rates will be used throughout training.

The red curve in Figure 1 shows the result of the triangular policy on CIFAR-10. The settings used to create the red curve were a minimum learning rate of 0.001 (as in the original parameter file) and a maximum of 0.006. Also, the cycle length (i.e., the number of iterations until the learning rate returns to the initial value) is set to 4,000 iterations (i.e., stepsize = 2000) and Figure 1 shows that the accuracy peaks at the end of each cycle.

Implementation of the code for a new learning rate policy is straightforward. An example of the code added to Torch 7 in the experiments shown in Section 4.1.2 is the following few lines:

local cycle = math.floor(1 + epochCounter / (2 * stepsize))
local x = math.abs(epochCounter / stepsize - 2 * cycle + 1)
local lr = opt.LR + (maxLR - opt.LR) * math.max(0, (1 - x))

where opt.LR is the specified lower (i.e., base) learning rate, epochCounter is the number of epochs of training, and lr is the computed learning rate. This policy is named triangular and is as described above, with two new input parameters defined: stepsize (half the period or cycle length) and max_lr (the maximum learning rate boundary).
1506.01186#6
1506.01186#8
1506.01186
[ "1504.01716" ]
1506.01186#8
Cyclical Learning Rates for Training Neural Networks
This code varies the learning rate linearly between the minimum (base_lr) and the maximum (max_lr). In addition to the triangular policy, the following CLR policies are discussed in this paper:

1. triangular2; the same as the triangular policy except the learning rate difference is cut in half at the end of each cycle. This means the learning rate difference drops after each cycle.

2. exp_range; the learning rate varies between the minimum and maximum boundaries and each boundary value declines by an exponential factor of gamma^iteration.

# 3.2. How can one estimate a good value for the cycle length?

The length of a cycle and the input parameter stepsize can be easily computed from the number of iterations in an epoch. An epoch is calculated by dividing the number of training images by the batchsize used. For example, CIFAR-10 has 50,000 training images and the batchsize is 100, so an epoch = 50,000/100 = 500 iterations.
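As an illustrative, framework-agnostic sketch (not the authors' code), the three policies can be written as a single function of the iteration counter. The triangular2 halving and the exp_range gamma decay follow the definitions above; the exact placement of the decay factor is an implementation choice assumed here:

```python
import math

def clr(iteration, base_lr, max_lr, stepsize, mode="triangular", gamma=1.0):
    """Cyclical learning rate at a given training iteration.

    mode: "triangular"  -- constant amplitude,
          "triangular2" -- amplitude halved after every cycle,
          "exp_range"   -- amplitude scaled by gamma**iteration.
    """
    cycle = math.floor(1 + iteration / (2 * stepsize))
    x = abs(iteration / stepsize - 2 * cycle + 1)
    scale = 1.0
    if mode == "triangular2":
        scale = 1.0 / (2 ** (cycle - 1))
    elif mode == "exp_range":
        scale = gamma ** iteration
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x) * scale

# CIFAR-10 numbers from the text: 50,000 images / batchsize 100 = 500 iterations per epoch
lr = clr(3000, base_lr=0.001, max_lr=0.006, stepsize=2000, mode="triangular2")
```

Evaluating clr over successive iterations traces out the saw-tooth schedules sketched in Figure 2.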
1506.01186#7
1506.01186#9
1506.01186
[ "1504.01716" ]
1506.01186#9
Cyclical Learning Rates for Training Neural Networks
The final accuracy results are actually quite robust to cycle length but experiments show that it often is good to set stepsize equal to 2–10 times the number of iterations in an epoch. For example, setting stepsize = 8 × epoch with the CIFAR-10 training run (as shown in Figure 1) only gives slightly better results than setting stepsize = 2 × epoch. Furthermore, there is a certain elegance to the rhythm of these cycles and it simplifies
1506.01186#8
1506.01186#10
1506.01186
[ "1504.01716" ]
1506.01186#10
Cyclical Learning Rates for Training Neural Networks
the decision of when to drop learning rates and when to stop the current training run. Experiments show that replacing each step of a constant learning rate with at least 3 cycles trains the network weights most of the way and running for 4 or more cycles will achieve even better performance. Also, it is best to stop training at the end of a cycle, which is when the learning rate is at the minimum value and the accuracy peaks.

# 3.3. How can one estimate reasonable minimum and maximum boundary values?

There is a simple way to estimate reasonable minimum and maximum boundary values with one training run of the network for a few epochs.
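The next passage names this single run the "LR range test". A minimal sketch of the idea follows; train_one_iteration and evaluate are hypothetical placeholders for whatever framework and model are in use, and are not part of the paper:

```python
def lr_range_test(model, min_lr, max_lr, num_iterations,
                  train_one_iteration, evaluate, eval_every=100):
    """Ramp the learning rate linearly over one short run and record accuracy."""
    history = []
    for it in range(num_iterations):
        # learning rate grows linearly from min_lr to max_lr over the run
        lr = min_lr + (max_lr - min_lr) * it / max(1, num_iterations - 1)
        train_one_iteration(model, lr)             # placeholder training step
        if it % eval_every == 0:
            history.append((lr, evaluate(model)))  # (learning rate, accuracy)
    return history

# stand-in callables just to show the call signature
hist = lr_range_test(model=None, min_lr=0.001, max_lr=0.02, num_iterations=1000,
                     train_one_iteration=lambda m, lr: None,
                     evaluate=lambda m: 0.0)
```

Plotting the returned (learning rate, accuracy) pairs and noting where accuracy starts to rise and where it becomes ragged or falls gives the base_lr and max_lr bounds, exactly as described next.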
1506.01186#9
1506.01186#11
1506.01186
[ "1504.01716" ]
1506.01186#11
Cyclical Learning Rates for Training Neural Networks
It is a â LR range testâ ; run your model for several epochs while letting the learning rate in- crease linearly between low and high LR values. This test is enormously valuable whenever you are facing a new ar- chitecture or dataset. CIFAR-10 0.6 Accuracy 0.1 0 0.005 0.01 Learning rate 0.015 0.02 Figure 3. Classiï¬ cation accuracy as a function of increasing learn- ing rate for 8 epochs (LR range test). The triangular learning rate policy provides a simple mechanism to do this. For example, in Caffe, set base lr to the minimum value and set max lr to the maximum value. Set both the stepsize and max iter to the same number of iterations. In this case, the learning rate will increase lin- early from the minimum value to the maximum value dur- ing this short run. Next, plot the accuracy versus learning rate. Note the learning rate value when the accuracy starts to increase and when the accuracy slows, becomes ragged, or Dataset CIFAR-10 CIFAR-10 CIFAR-10 CIFAR-10 CIFAR-10 AlexNet AlexNet AlexNet AlexNet AlexNet GoogLeNet GoogLeNet GoogLeNet GoogLeNet LR policy f ixed triangular2 decay exp exp range f ixed triangular2 exp exp exp range f ixed triangular2 exp exp range Iterations Accuracy (%) 70,000 25, 000 25,000 70,000 42,000 400,000 400,000 300,000 460,000 300,000 420,000 420,000 240,000 240,000 81.4 81.4 78.5 79.1 82.2 58.0 58.4 56.0 56.5 56.5 63.0 64.4 58.2 60.2 Table 1. Comparison of accuracy results on test/validation data at the end of the training.
1506.01186#10
1506.01186#12
1506.01186
[ "1504.01716" ]
1506.01186#12
Cyclical Learning Rates for Training Neural Networks
starts to fall. These two learning rates are good choices for bounds; that is, set base lr to the ï¬ rst value and set max lr to the latter value. Alternatively, one can use the rule of thumb that the optimum learning rate is usually within a factor of two of the largest one that converges [2] and set base lr to 1 Figure 3 shows an example of making this type of run with the CIFAR-10 dataset, using the architecture and hyper-parameters provided by Caffe. One can see from Fig- ure 3 that the model starts converging right away, so it is rea- sonable to set base lr = 0.001. Furthermore, above a learn- ing rate of 0.006 the accuracy rise gets rough and eventually begins to drop so it is reasonable to set max lr = 0.006. Whenever one is starting with a new architecture or dataset, a single LR range test provides both a good LR value and a good range. Then one should compare runs with a ï¬ xed LR versus CLR with this range. Whichever wins can be used with conï¬ dence for the rest of oneâ s experiments. # 4. Experiments The purpose of this section is to demonstrate the effec- tiveness of the CLR methods on some standard datasets and with a range of architectures. In the subsections below, CLR policies are used for training with the CIFAR-10, CIFAR- 100, and ImageNet datasets. These three datasets and a va- riety of architectures demonstrate the versatility of CLR. # 4.1. CIFAR-10 and CIFAR-100 # 4.1.1 Caffeâ s CIFAR-10 architecture The CIFAR-10 architecture and hyper-parameter settings on the Caffe website are fairly standard and were used here as a baseline.
1506.01186#11
1506.01186#13
1506.01186
[ "1504.01716" ]
1506.01186#13
Cyclical Learning Rates for Training Neural Networks
As discussed in Section 3.2, an epoch is equal CIFAR-10 ---Exp policy â â Exp Range 0.2 1 2 3 4 5 6 7 Iteration * 10° Figure 4. Classiï¬ cation accuracy as a function of iteration for 70, 000 iterations. CIFAR10; Combining adaptive LR and CLR â Nesterov + CLR â Adam > 03 â Adam + CLR Iteration % isâ Figure 5. Classiï¬
1506.01186#12
1506.01186#14
1506.01186
[ "1504.01716" ]
1506.01186#14
Cyclical Learning Rates for Training Neural Networks
cation accuracy as a function of iteration for the CIFAR-10 dataset using adaptive learning methods. See text for explanation. to 500 iterations and a good setting for stepsize is 2, 000. Section 3.3 discussed how to estimate reasonable minimum and maximum boundary values for the learning rate from Figure 3. All that is needed to optimally train the network is to set base lr = 0.001 and max lr = 0.006. This is all that is needed to optimally train the network. For the triangular2 policy run shown in Figure 1, the stepsize and learning rate bounds are shown in Table 2. base lr 0.001 0.0001 0.00001 max lr 0.005 0.0005 0.00005 stepsize 2,000 1,000 500 start 0 16,000 22,000 max iter 16,000 22,000 25,000 Table 2. Hyper-parameter settings for CIFAR-10 example in Fig- ure 1. running with the the result of triangular2 policy with the parameter setting in Table 2. As shown in Table 1, one obtains the same test classiï¬ ca- tion accuracy of 81.4% after only 25, 000 iterations with the triangular2 policy as obtained by running the standard hyper-parameter settings for 70, 000 iterations. 8 CIFAR10; Sigmoid + Batch Normalization â . 3.0.6 a g4 to2 %% 1 3 3 4 5 6 Iteration x104 Figure 6. Batch Normalization CIFAR-10 example (provided with the Caffe download). from the triangular policy derive from reducing the learning rate because this is when the accuracy climbs the most. As a test, a decay policy was implemented where the learn- ing rate starts at the max lr value and then is linearly re- duced to the base lr value for stepsize number of itera- tions. After that, the learning rate is ï¬ xed to base lr. For the decay policy, max lr = 0.007, base lr = 0.001, and stepsize = 4000. Table 1 shows that the ï¬
1506.01186#13
1506.01186#15
1506.01186
[ "1504.01716" ]
1506.01186#15
Cyclical Learning Rates for Training Neural Networks
nal accuracy is only 78.5%, providing evidence that both increasing and decreasing the learning rate are essential for the beneï¬ ts of the CLR method. Figure 4 compares the exp learning rate policy in Caffe with the new exp range policy using gamma = 0.99994 for both policies. is that when using the exp range policy one can stop training at iteration 42, 000 with a test accuracy of 82.2% (going to iteration 70, 000 does not improve on this result). This is substantially better than the best test accuracy of 79.1% one obtains from using the exp learning rate policy. The current Caffe download contains additional archi- tectures and hyper-parameters for CIFAR-10 and in partic- ular there is one with sigmoid non-linearities and batch nor- malization. Figure 6 compares the training accuracy using the downloaded hyper-parameters with a ï¬ xed learning rate (blue curve) to using a cyclical learning rate (red curve). As can be seen in this Figure, the ï¬ nal accuracy for the ï¬ xed learning rate (60.8%) is substantially lower than the cyclical learning rate ï¬ nal accuracy (72.2%). There is clear perfor- mance improvement when using CLR with this architecture containing sigmoids and batch normalization. Experiments were carried out with architectures featur- ing both adaptive learning rate methods and CLR.
1506.01186#14
1506.01186#16
1506.01186
[ "1504.01716" ]
1506.01186#16
Cyclical Learning Rates for Training Neural Networks
Table 3 lists the ï¬ nal accuracy values from various adaptive learning rate methods, run with and without CLR. All of the adap- tive methods in Table 3 were run by invoking the respective option in Caffe. The learning rate boundaries are given in Table 3 (just below the methodâ s name), which were deter- mined by using the technique described in Section 3.3. Just the lower bound was used for base lr for the f ixed policy. LR type/bounds Nesterov [19] 0.001 - 0.006 ADAM [16] 0.0005 - 0.002 RMSprop [27] 0.0001 - 0.0003 AdaGrad [5] 0.003 - 0.035 AdaDelta [29] 0.01 - 0.1 LR policy f ixed triangular f ixed triangular triangular f ixed triangular triangular f ixed triangular f ixed triangular Iterations Accuracy (%) 70,000 25,000 70,000 25,000 70,000 70,000 25,000 70,000 70,000 25,000 70,000 25,000 82.1 81.3 81.4 79.8 81.1 75.2 72.8 75.1 74.6 76.0 67.3 67.3 Table 3. Comparison of CLR with adaptive learning rate methods. The table shows accuracy results for the CIFAR-10 dataset on test data at the end of the training. Table 3 shows that for some adaptive learning rate meth- ods combined with CLR, the ï¬ nal accuracy after only 25,000 iterations is equivalent to the accuracy obtained without CLR after 70,000 iterations. For others, it was nec- essary (even with CLR) to run until 70,000 iterations to ob- tain similar results. Figure 5 shows the curves from running the Nesterov method with CLR (reached 81.3% accuracy in only 25,000 iterations) and the Adam method both with and without CLR (both needed 70,000 iterations). When using adaptive learning rate methods, the beneï¬ ts from CLR are sometimes reduced, but CLR can still valuable as it some- times provides beneï¬ t at essentially no cost.
1506.01186#15
1506.01186#17
1506.01186
[ "1504.01716" ]
1506.01186#17
Cyclical Learning Rates for Training Neural Networks
# 4.1.2 ResNets, Stochastic Depth, and DenseNets Residual networks [10, 11], and the family of variations that have subsequently emerged, achieve state-of-the-art re- sults on a variety of tasks. Here we provide comparison experiments between the original implementations and ver- sions with CLR for three members of this residual net- work family: the original ResNet [10], Stochastic Depth networks [13], and the recent DenseNets [12]. Our ex- periments can be readily replicated because the authors of these papers make their Torch code available3. Since all three implementation are available using the Torch 7 frame- work, the experiments in this section were performed using Torch. In addition to the experiment in the previous Sec- tion, these networks also incorporate batch normalization [15] and demonstrate the value of CLR for architectures with batch normalization. Both CIFAR-10 and the CIFAR-100 datasets were used # 3https://github.com/facebook/fb.resnet.torch, https://github.com/yueatsprograms/Stochastic Depth, https://github.com/liuzhuang13/DenseNet in these experiments. The CIFAR-100 dataset is similar to the CIFAR-10 data but it has 100 classes instead of 10 and each class has 600 labeled examples. Architecture ResNet ResNet ResNet ResNet+CLR SD SD SD SD+CLR DenseNet DenseNet DenseNet CIFAR-10 (LR) CIFAR-100 (LR) 92.8(0.1) 93.3(0.2) 91.8(0.3) 93.6(0.1 â 0.3) 94.6(0.1) 94.5(0.2) 94.2(0.3) 94.5(0.1 â 0.3) 94.5(0.1) 94.5(0.2) 94.2(0.3) 71.2(0.1) 71.6(0.2) 71.9(0.3) 72.5(0.1 â 0.3) 75.2(0.1) 75.2(0.2) 74.6(0.3) 75.4(0.1 â
1506.01186#16
1506.01186#18
1506.01186
[ "1504.01716" ]
1506.01186#18
Cyclical Learning Rates for Training Neural Networks
0.3) 75.2(0.1) 75.3(0.2) 74.5(0.3) 75.9(0.1 â 0.2) DenseNet+CLR 94.9(0.1 â 0.2) Table 4. Comparison of CLR with ResNets [10, 11], Stochastic Depth (SD) [13], and DenseNets [12]. The table shows the average accuracy of 5 runs for the CIFAR-10 and CIFAR-100 datasets on test data at the end of the training. The results for these two datasets on these three archi- tectures are summarized in Table 4. The left column give the architecture and whether CLR was used in the experi- ments. The other two columns gives the average ï¬ nal ac- curacy from ï¬ ve runs and the initial learning rate or range used in parenthesis, which are reduced (for both the ï¬ xed learning rate and the range) during the training according to the same schedule used in the original implementation. For all three architectures, the original implementation uses an initial LR of 0.1 which we use as a baseline. The accuracy results in Table 4 in the right two columns are the average ï¬ nal test accuracies of ï¬
1506.01186#17
1506.01186#19
1506.01186
[ "1504.01716" ]
1506.01186#19
Cyclical Learning Rates for Training Neural Networks
ve runs. The Stochastic Depth implementation was slightly different than the ResNet and DenseNet implementation in that the au- thors split the 50,000 training images into 45,000 training images and 5,000 validation images. However, the reported results in Table 4 for the SD architecture is only test accura- cies for the ï¬ ve runs. The learning rate range used by CLR was determined by the LR range test method and the cycle length was choosen as a tenth of the maximum number of epochs that was speciï¬ ed in the original implementation. In addition to the accuracy results shown in Table 4, similar results were obtained in Caffe for DenseNets [12] on CIFAR-10 using the prototxt ï¬ les provided by the au- thors. The average accuracy of ï¬ ve runs with learning rates of 0.1, 0.2, 0.3 was 91.67%, 92.17%, 92.46%, respectively, but running with CLR within the range of 0.1 to 0.3, the average accuracy was 93.33%. The results from all of these experiments show similar or better accuracy performance when using CLR versus using a ï¬ xed learning rate, even though the performance drops at ImageNet on AlexNet 0.2 Accuracy ° a we 0.05) ° 6.005 0.01 0.015 0.02 0.025 6.03 6.035 0.04 6.045 Learning rate Figure 7. AlexNet LR range test; validation classiï¬ cation accuracy as a function of increasing learning rate. ImageNet/AlexNet architecture S a se 2s Row es) Validation Accuracy ses â
1506.01186#18
1506.01186#20
1506.01186
[ "1504.01716" ]
1506.01186#20
Cyclical Learning Rates for Training Neural Networks
ob â Triangular2 os. 1 15.3225 °3°«35 Iteration x 10° Figure 8. Validation data classiï¬ cation accuracy as a function of iteration for f ixed versus triangular. some of the learning rate values within this range. These experiments conï¬ rm that it is beneï¬ cial to use CLR for a variety of residual architectures and for both CIFAR-10 and CIFAR-100. # 4.2. ImageNet The ImageNet dataset [21] is often used in deep learning literature as a standard for comparison. The ImageNet clas- siï¬ cation challenge provides about 1, 000 training images for each of the 1, 000 classes, giving a total of 1, 281, 167 labeled training images. # 4.2.1 AlexNet The Caffe website provides the architecture and hyper- parameter ï¬ les for a slightly modiï¬ ed AlexNet [17]. These were downloaded from the website and used as a baseline. In the training results reported in this section, all weights ImageNet/AlexNet architecture = a S a S nS me (ees) Validation Accuracy â Triangular2 0 â F ; q 4 1 005 1 15. 2. 25. 3. 335 Iteration <i Figure 9. Validation data classiï¬ cation accuracy as a function of iteration for f ixed versus triangular. were initialized the same so as to avoid differences due to different random initializations. Since the batchsize in the architecture ï¬ le is 256, an epoch is equal to 1, 281, 167/256 = 5, 005 iterations. Hence, a reasonable setting for stepsize is 6 epochs or 30, 000 iterations. Next, one can estimate reasonable minimum and maxi- mum boundaries for the learning rate from Figure 7.
1506.01186#19
1506.01186#21
1506.01186
[ "1504.01716" ]
1506.01186#21
Cyclical Learning Rates for Training Neural Networks
It can be seen from this ï¬ gure that the training doesnâ t start con- verging until at least 0.006 so setting base lr = 0.006 is reasonable. However, for a fair comparison to the baseline where base lr = 0.01, it is necessary to set the base lr to 0.01 for the triangular and triangular2 policies or else the majority of the apparent improvement in the accuracy will be from the smaller learning rate. As for the maxi- mum boundary value, the training peaks and drops above a learning rate of 0.015 so max lr = 0.015 is reasonable. For comparing the exp range policy to the exp policy, set- ting base lr = 0.006 and max lr = 0.014 is reasonable and in this case one expects that the average accuracy of the exp range policy to be equal to the accuracy from the exp policy. Figure 9 compares the results of running with the f ixed versus the triangular2 policy for the AlexNet architecture. Here, the peaks at iterations that are multiples of 60,000 should produce a classiï¬ cation accuracy that corresponds to the f ixed policy. Indeed, the accuracy peaks at the end of a cycle for the triangular2 policy are similar to the ac- curacies from the standard f ixed policy, which implies that the baseline learning rates are set quite well (this is also im- plied by Figure 7). As shown in Table 1, the ï¬ nal accuracies from the CLR training run are only 0.4% better than the ac- curacies from the f ixed policy. Figure 10 compares the results of running with the exp versus the exp range policy for the AlexNet architecture with gamma = 0.999995 for both policies. As expected, ImageNet/AlexNet architecture 0.5 S & Validation Accuracy Ss bs 0.2 0.1 â Exp Range 0 I 2 3 4 Iteration x10° Figure 10. Validation data classiï¬ cation accuracy as a function of iteration for exp versus exp range. ImageNet/GoogleNet architecture 0.08 id S HB Validation Accuracy 2 2° o Peg Ne eS 0 0.01 002 003 004 0.05 0.06 0.07 Learning rate Figure 11.
1506.01186#20
1506.01186#22
1506.01186
[ "1504.01716" ]
1506.01186#22
Cyclical Learning Rates for Training Neural Networks
GoogleNet LR range test; validation classiï¬ cation ac- curacy as a function of increasing learning rate. Figure 10 shows that the accuracies from the exp range policy do oscillate around the exp policy accuracies. The advantage of the exp range policy is that the accuracy of 56.5% is already obtained at iteration 300, 000 whereas the exp policy takes until iteration 460, 000 to reach 56.5%. Finally, a comparison between the f ixed and exp poli- cies in Table 1 shows the f ixed and triangular2 policies produce accuracies that are almost 2% better than their ex- ponentially decreasing counterparts, but this difference is probably due to not having tuned gamma.
1506.01186#21
1506.01186#23
1506.01186
[ "1504.01716" ]
1506.01186#23
Cyclical Learning Rates for Training Neural Networks
# 4.2.2 GoogLeNet/Inception Architecture The GoogLeNet architecture was a winning entry to the ImageNet 2014 image classiï¬ cation competition. Szegedy et al. [25] describe the architecture in detail but did not provide the architecture ï¬ le. The architecture ï¬ le publicly available from Princeton4 was used in the following exper- iments. The GoogLeNet paper does not state the learning rate values and the hyper-parameter solver ï¬ le is not avail- 4vision.princeton.edu/pvt/GoogLeNet/ Imagenet with GoogLeNet architecture 2 g 2 a 2 in 2 5 2 io Validation Accuracy ° is e 0 05 1 15 2 25) 3 3.5 4 45 Iteration x10 Figure 12. Validation data classiï¬ cation accuracy as a function of iteration for f ixed versus triangular. able for a baseline but not having these hyper-parameters is a typical situation when one is developing a new architec- ture or applying a network to a new dataset. This is a situa- tion that CLR readily handles. Instead of running numerous experiments to ï¬ nd optimal learning rates, the base lr was set to a best guess value of 0.01. The ï¬ rst step is to estimate the stepsize setting. Since the architecture uses a batchsize of 128 an epoch is equal to 1, 281, 167/128 = 10, 009 iterations. Hence, good settings for stepsize would be 20, 000, 30, 000, or possibly 40, 000. The results in this section are based on stepsize = 30000. The next step is to estimate the bounds for the learning rate, which is found with the LR range test by making a run for 4 epochs where the learning rate linearly increases from 0.001 to 0.065 (Figure 11).
1506.01186#22
1506.01186#24
1506.01186
[ "1504.01716" ]
1506.01186#24
Cyclical Learning Rates for Training Neural Networks
This ï¬ gure shows that one can use bounds between 0.01 and 0.04 and still have the model reach convergence. However, learning rates above 0.025 cause the training to converge erratically. For both triangular2 and the exp range policies, the base lr was set to 0.01 and max lr was set to 0.026. As above, the accuracy peaks for both these learning rate policies corre- spond to the same learning rate value as the f ixed and exp policies. Hence, the comparisons below will focus on the peak accuracies from the LCR methods. Figure 12 compares the results of running with the f ixed versus the triangular2 policy for this architecture (due to time limitations, each training stage was not run until it fully plateaued). In this case, the peaks at the end of each cycle for the triangular2 policy produce better accuracies than the f ixed policy.
1506.01186#23
1506.01186#25
1506.01186
[ "1504.01716" ]
1506.01186#25
Cyclical Learning Rates for Training Neural Networks
The ï¬ nal accuracy shows an improvement from the network trained by the triangular2 policy (Ta- ble 1) to be 1.4% better than the accuracy from the f ixed policy. This demonstrates that the triangular2 policy im- proves on a â best guessâ for a ï¬ xed learning rate. Figure 13 compares the results of running with the exp versus the exp range policy with gamma = 0.99998. Once again, the peaks at the end of each cycle for the Imagenet with GoogLeNet architecture Ld © = Validation Accuracy ° hed ie -â ExpLR â Exp range 0 0.5 1 15 2 Iteration x10 Figure 13. Validation data classiï¬ cation accuracy as a function of iteration for exp versus exp range. exp range policy produce better validation accuracies than the exp policy.
1506.01186#24
1506.01186#26
1506.01186
[ "1504.01716" ]
1506.01186#26
Cyclical Learning Rates for Training Neural Networks
The ï¬ nal accuracy from the exp range pol- icy (Table 1) is 2% better than from the exp policy. # 5. Conclusions The results presented in this paper demonstrate the ben- eï¬ ts of the cyclic learning rate (CLR) methods. A short run of only a few epochs where the learning rate linearly in- creases is sufï¬ cient to estimate boundary learning rates for the CLR policies. Then a policy where the learning rate cyclically varies between these bounds is sufï¬ cient to ob- tain near optimal classiï¬ cation results, often with fewer it- erations. This policy is easy to implement and unlike adap- tive learning rate methods, incurs essentially no additional computational expense. This paper shows that use of cyclic functions as a learn- ing rate policy provides substantial improvements in perfor- mance for a range of architectures. In addition, the cyclic nature of these methods provides guidance as to times to drop the learning rate values (after 3 - 5 cycles) and when to stop the the training. All of these factors reduce the guess- work in setting the learning rates and make these methods practical tools for everyone who trains neural networks. This work has not explored the full range of applications for cyclic learning rate methods. We plan to determine if equivalent policies work for training different architectures, such as recurrent neural networks. Furthermore, we believe that a theoretical analysis would provide an improved un- derstanding of these methods, which might lead to improve- ments in the algorithms.
1506.01186#25
1506.01186#27
1506.01186
[ "1504.01716" ]
1506.01186#27
Cyclical Learning Rates for Training Neural Networks
# References [1] K. Bache, D. DeCoste, and P. Smyth. Hot swapping for online adaptation of optimization hyperparameters. arXiv preprint arXiv:1412.6599, 2014. 2 [2] Y. Bengio. Neural Networks: Tricks of the Trade, chap- ter Practical recommendations for gradient-based training of deep architectures, pages 437â 478. Springer Berlin Heidel- berg, 2012. 1, 2, 4 [3] T. M. Breuel.
1506.01186#26
1506.01186#28
1506.01186
[ "1504.01716" ]
1506.01186#28
Cyclical Learning Rates for Training Neural Networks
The effects of hyperparameters on sgd training of neural networks. arXiv preprint arXiv:1508.02788, 2015. 2 [4] Y. N. Dauphin, H. de Vries, J. Chung, and Y. Bengio. Rm- sprop and equilibrated adaptive learning rates for non-convex optimization. arXiv preprint arXiv:1502.04390, 2015. 2 [5] J. Duchi, E. Hazan, and Y.
1506.01186#27
1506.01186#29
1506.01186
[ "1504.01716" ]
1506.01186#29
Cyclical Learning Rates for Training Neural Networks
Singer. Adaptive subgradi- ent methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121â 2159, 2011. 2, 5 [6] A. P. George and W. B. Powell. Adaptive stepsizes for re- cursive estimation with applications in approximate dynamic programming. Machine learning, 65(1):167â 198, 2006. 2 [7] R. Girshick, J. Donahue, T. Darrell, and J. Malik.
1506.01186#28
1506.01186#30
1506.01186
[ "1504.01716" ]
1506.01186#30
Cyclical Learning Rates for Training Neural Networks
Rich fea- ture hierarchies for accurate object detection and semantic segmentation. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 580â 587. IEEE, 2014. 1 [8] A. Graves and N. Jaitly. Towards end-to-end speech recog- nition with recurrent neural networks. In Proceedings of the 31st International Conference on Machine Learning (ICML- 14), pages 1764â
1506.01186#29
1506.01186#31
1506.01186
[ "1504.01716" ]
1506.01186#31
Cyclical Learning Rates for Training Neural Networks
1772, 2014. 1 [9] C. Gulcehre and Y. Bengio. Adasecant: Robust adap- tive secant method for stochastic gradient. arXiv preprint arXiv:1412.7419, 2014. 2 [10] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. Computer Vision and Pattern Recog- nition (CVPR), 2016 IEEE Conference on, 2015. 5, 6 [11] K.
1506.01186#30
1506.01186#32
1506.01186
[ "1504.01716" ]
1506.01186#32
Cyclical Learning Rates for Training Neural Networks
He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016. 5, 6 [12] G. Huang, Z. Liu, and K. Q. Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016. 5, 6 [13] G. Huang, Y. Sun, Z. Liu, D. Sedra, and K. Weinberger. arXiv preprint Deep networks with stochastic depth. arXiv:1603.09382, 2016. 5, 6 [14] B. Huval, T. Wang, S. Tandon, J. Kiske, W. Song, J. Pazhayampallil, M. Andriluka, R. Cheng-Yue, F. Mujica, A. Coates, et al. An empirical evaluation of deep learning on highway driving. arXiv preprint arXiv:1504.01716, 2015. 1 [15] S. Ioffe and C. Szegedy.
1506.01186#31
1506.01186#33
1506.01186
[ "1504.01716" ]
1506.01186#33
Cyclical Learning Rates for Training Neural Networks
Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. 5 [16] D. Kingma and J. Lei-Ba. Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2015. 2, 5 Imagenet classiï¬ cation with deep convolutional neural networks. Ad- vances in neural information processing systems, 2012. 1, 2, 6 [18] I. Loshchilov and F. Hutter. Sgdr: Stochastic gradient de- scent with restarts. arXiv preprint arXiv:1608.03983, 2016. 2 [19] Y. Nesterov. A method of solving a convex programming In Soviet Mathe- problem with convergence rate o (1/k2). matics Doklady, volume 27, pages 372â 376, 1983. 5 [20] S.
1506.01186#32
1506.01186#34
1506.01186
[ "1504.01716" ]
1506.01186#34
Cyclical Learning Rates for Training Neural Networks
Ruder. An overview of gradient descent optimization al- gorithms. arXiv preprint arXiv:1600.04747, 2016. 2 [21] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 2015. 6 [22] T. Schaul, S. Zhang, and Y. LeCun. No more pesky learning rates. arXiv preprint arXiv:1206.1106, 2012. 2 [23] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 1 [24] I. Sutskever, O. Vinyals, and Q. V. Le.
1506.01186#33
1506.01186#35
1506.01186
[ "1504.01716" ]
1506.01186#35
Cyclical Learning Rates for Training Neural Networks
Sequence to sequence learning with neural networks. In Advances in Neural Infor- mation Processing Systems, pages 3104â 3112, 2014. 1 [25] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabi- novich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014. 1, 2, 7 [26] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. Deepface: Closing the gap to human-level performance in face veriï¬
1506.01186#34
1506.01186#36
1506.01186
[ "1504.01716" ]
1506.01186#36
Cyclical Learning Rates for Training Neural Networks
ca- tion. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1701â 1708. IEEE, 2014. 1 [27] T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012. 2, 5 [28] O. Vinyals, A. Toshev, S. Bengio, and D.
1506.01186#35
1506.01186#37
1506.01186
[ "1504.01716" ]
1506.01186#37
Cyclical Learning Rates for Training Neural Networks
Erhan. Show and tell: A neural image caption generator. arXiv preprint arXiv:1411.4555, 2014. 1 [29] M. D. Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012. 2, 5 # A. Instructions for adding CLR to Caffe Modify SGDSolver¡Dtype¿::GetLearningRate() which is in sgd solver.cpp (near line 38): } e l s e i n t i f ( i t r > 0 ) { ( l r p o l i c y == â t r i a n g u l a r â ) { i t r = t h i s â > i t e r â t h i s â >p a r a m . s t a r t i f l r p o l i c y ( ) ; i n t / f l o a t x = ( f l o a t ) x = x / r a t e = t h i s â >p a r a m . b a s e l r ( ) + ( t h i s â >p a r a m . m a x l r () â t h i s â >p a r a m . b a s e l r ( ) ) c y c l e = i t r ( 2 â t h i s â >p a r a m . s t e p s i z e ( ) ) ; ( i t r â ( 2 â c y c l e +1)â t h i s â >p a r a m . s t e p s i z e ( ) ) ; t h i s â >p a r a m . s t e p s i z e ( ) ; â s t d : : max ( d o u b l e ( 0 ) , ( 1 . 0 â f a b s ( x ) ) ) ; } e l s e { r a t e = t h i s â >p a r a m . b a s e l r ( ) ; } } e l s e i n t i f ( i t r > 0 ) { ( l r p o l i c y == â t r i a n g u l a r 2 â
1506.01186#36
1506.01186#38
1506.01186
[ "1504.01716" ]
1506.01186#38
Cyclical Learning Rates for Training Neural Networks
    int itr = this->iter_ - this->param_.start_lr_policy();
    if (itr > 0) {
      int cycle = itr / (2 * this->param_.stepsize());
      float x = (float)(itr - (2 * cycle + 1) * this->param_.stepsize());
      x = x / this->param_.stepsize();
      rate = this->param_.base_lr() + (this->param_.max_lr() - this->param_.base_lr()) *
          std::max(double(0), (1.0 - std::min(double(1), fabs(x)))) / pow(2.0, double(cycle));
    } else {
      rate = this->param_.base_lr();
    }
  }

Modify message SolverParameter, which is in caffe.proto (near line 100):

optional float start_lr_policy = 41;
optional float max_lr = 42;  // The maximum learning rate for CLR policies

# B. Instructions for adding CLR to Keras

Please see https://github.com/bckenstler/CLR.
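For orientation, a solver configuration that exercises these additions might look like the sketch below. The values are illustrative (taken from the CIFAR-10 example in Section 4.1.1); only max_lr and start_lr_policy are the new fields introduced in this appendix, the rest are standard Caffe solver parameters:

```
lr_policy: "triangular2"
base_lr: 0.001
max_lr: 0.006          # new field (id 42)
start_lr_policy: 0     # new field (id 41)
stepsize: 2000
max_iter: 25000
```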
1506.01186#37
1506.01186
[ "1504.01716" ]
1506.01066#0
Visualizing and Understanding Neural Models in NLP
arXiv:1506.01066v2 [cs.CL] 8 Jan 2016

# Visualizing and Understanding Neural Models in NLP

Jiwei Li1, Xinlei Chen2, Eduard Hovy2 and Dan Jurafsky1
1Computer Science Department, Stanford University, Stanford, CA 94305, USA
2Language Technology Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA
{jiweil,jurafsky}@stanford.edu {xinleic,ehovy}@andrew.cmu.edu

# Abstract

While neural networks have been successfully applied to many NLP tasks the resulting vector-based models are very
1506.01066#1
1506.01066
[ "1510.03055" ]
1506.01066#1
Visualizing and Understanding Neural Models in NLP
difficult to interpret. For example it's not clear how they achieve compositionality, building sentence meaning from the meanings of words and phrases. In this paper we describe strategies for visualizing compositionality in neural models for NLP, inspired by similar work in computer vision. We first plot unit values to visualize compositionality of negation, intensification, and concessive clauses, allowing us to see well-known markedness asymmetries in negation. We then introduce methods for visualizing a unit's salience, the amount that it contributes to the final composed meaning from
1506.01066#0
1506.01066#2
1506.01066
[ "1510.03055" ]
1506.01066#2
Visualizing and Understanding Neural Models in NLP
rst-order derivatives. Our general- purpose methods may have wide applica- tions for understanding compositionality and other semantic properties of deep net- works. # Introduction Neural models match or outperform the perfor- mance of other state-of-the-art systems on a va- riety of NLP tasks. Yet unlike traditional feature- based classiï¬ ers that assign and optimize weights to varieties of human interpretable features (parts- of-speech, named entities, word shapes, syntactic parse features etc) the behavior of deep learning models is much less easily interpreted. Deep learn- ing models mainly operate on word embeddings (low-dimensional, continuous, real-valued vectors) through multi-layer neural architectures, each layer of which is characterized as an array of hidden neu- ron units. It is unclear how deep learning models deal with composition, implementing functions like negation or intensiï¬ cation, or combining meaning from different parts of the sentence, ï¬ ltering away the informational chaff from the wheat, to build sentence meaning. In this paper, we explore multiple strategies to interpret meaning composition in neural models. We employ traditional methods like representation plotting, and introduce simple strategies for measur- ing how much a neural unit contributes to meaning composition, its â salienceâ or importance using ï¬
1506.01066#1
1506.01066#3
1506.01066
[ "1510.03055" ]
1506.01066#3
Visualizing and Understanding Neural Models in NLP
rst derivatives. Visualization techniques/models represented in this work shed important light on how neural mod- els work: For example, we illustrate that LSTMâ s success is due to its ability in maintaining a much sharper focus on the important key words than other models; Composition in multiple clauses works competitively, and that the models are able to cap- ture negative asymmetry, an important property of semantic compositionally in natural language understanding; there is sharp dimensional local- ity, with certain dimensions marking negation and quantiï¬ cation in a manner that was surprisingly localist. Though our attempts only touch superï¬ - cial points in neural models, and each method has its pros and cons, together they may offer some insights into the behaviors of neural models in lan- guage based tasks, marking one initial step toward understanding how they achieve meaning composi- tion in natural language processing. The next section describes some visualization models in vision and NLP that have inspired this work. We describe datasets and the adopted neu- ral models in Section 3. Different visualization strategies and correspondent analytical results are presented separately in Section 4,5,6, followed by a brief conclusion. # 2 A Brief Review of Neural Visualization Similarity is commonly visualized graphically, gen- erally by projecting the embedding space into two dimensions and observing that similar words tend to be clustered together (e.g., Elman (1989), Ji and Eisenstein (2014), Faruqui and Dyer (2014)). (Karpathy et al., 2015) attempts to interpret recur- rent neural models from a statical point of view and does deeply touch compositionally of mean- ings. Other relevant attempts include (Fyshe et al., 2015; Faruqui et al., 2015). Methods for interpreting and visualizing neu- ral models have been much more signiï¬ cantly ex- plored in vision, especially for Convolutional Neu- ral Networks (CNNs or ConvNets) (Krizhevsky et al., 2012), multi-layer neural networks in which the original matrix of image pixels is convolved and pooled as it is passed on to hidden layers.
1506.01066#2
1506.01066#4
1506.01066
[ "1510.03055" ]
1506.01066#4
Visualizing and Understanding Neural Models in NLP
ConvNet visualizing techniques consist mainly in mapping the different layers of the network (or other fea- tures like SIFT (Lowe, 2004) and HOG (Dalal and Triggs, 2005)) back to the initial image input, thus capturing the human-interpretable information they represent in the input, and how units in these layers contribute to any ï¬ nal decisions (Simonyan et al., 2013; Mahendran and Vedaldi, 2014; Nguyen et al., 2014; Szegedy et al., 2013; Girshick et al., 2014; Zeiler and Fergus, 2014). Such methods include: (1) Inversion: Inverting the representations by training an additional model to project outputs from different neural levels back to the initial input im- ages (Mahendran and Vedaldi, 2014; Vondrick et al., 2013; Weinzaepfel et al., 2011). The intuition behind reconstruction is that the pixels that are re- constructable from the current representations are the content of the representation. The inverting algorithms allow the current representation to align with corresponding parts of the original images. (2) Back-propagation (Erhan et al., 2009; Si- monyan et al., 2013) and Deconvolutional Net- works (Zeiler and Fergus, 2014): Errors are back propagated from output layers to each intermedi- ate layer and ï¬ nally to the original image inputs. Deconvolutional Networks work in a similar way by projecting outputs back to initial inputs layer by layer, each layer associated with one supervised model for projecting upper ones to lower ones These strategies make it possible to spot active regions or ones that contribute the most to the ï¬
1506.01066#3
1506.01066#5
1506.01066
[ "1510.03055" ]
1506.01066#5
Visualizing and Understanding Neural Models in NLP
nal classiï¬ cation decision. (3) Generation: This group of work generates images in a speciï¬ c class from a sketch guided by already trained neural models (Szegedy et al., 2013; Nguyen et al., 2014). Models begin with an image whose pixels are randomly initialized and mutated at each step. The speciï¬ c layers that are activated at different stages of image construction can help in interpretation. While the above strategies inspire the work we present in this paper, there are fundamental dif- ferences between vision and NLP. In NLP words function as basic units, and hence (word) vectors rather than single pixels are the basic units. Se- quences of words (e.g., phrases and sentences) are also presented in a more structured way than ar- rangements of pixels. In parallel to our research, independent researches (Karpathy et al., 2015) have been conducted to explore similar direction from an error-analysis point of view, by analyzing pre- dictions and errors from a recurrent neural models. Other distantly relevant works include: Murphy et al. (2012; Fyshe et al. (2015) used an manual task to quantify the interpretability of semantic dimen- sions by presetting human users with a list of words and ask them to choose the one that does not belong to the list. Faruqui et al. (2015). Similar strategy is adopted in (Faruqui et al., 2015) by extracting top-ranked words in each vector dimension. # 3 Datasets and Neural Models We explored two datasets on which neural models are trained, one of which is of relatively small scale and the other of large scale. # 3.1 Stanford Sentiment Treebank Stanford Sentiment Treebank is a benchmark dataset widely used for neural model evaluations. The dataset contains gold-standard sentiment labels for every parse tree constituent, from sentences to phrases to individual words, for 215,154 phrases in 11,855 sentences.
1506.01066#4
1506.01066#6
1506.01066
[ "1510.03055" ]
1506.01066#6
Visualizing and Understanding Neural Models in NLP
The task is to perform both ï¬ ne-grained (very positive, positive, neutral, nega- tive and very negative) and coarse-grained (positive vs negative) classiï¬ cation at both the phrase and sentence level. For more details about the dataset, please refer to Socher et al. (2013). While many studies on this dataset use recursive parse-tree models, in this work we employ only standard sequence models (RNNs and LSTMs) since these are the most widely used current neu- ral models, and sequential visualization is more straightforward.
1506.01066#5
1506.01066#7
1506.01066
[ "1510.03055" ]
1506.01066#7
Visualizing and Understanding Neural Models in NLP
We therefore first transform each parse tree node to a sequence of tokens. The sequence is first mapped to a phrase/sentence representation and fed into a softmax classifier. Phrase/sentence representations are built with the following three models: Standard Recurrent Sequence with TANH activation functions, LSTMs and Bidirectional LSTMs. For details about the three models, please refer to the Appendix. Training: AdaGrad with mini-batch was used for training, with parameters (L2 penalty, learning rate, mini-batch size) tuned on the development set. The number of iterations is treated as a variable to tune, and parameters are harvested based on the best performance on the dev set. The number of dimensions for the word and hidden layer are set to 60 with a 0.1 dropout rate. Parameters are tuned on the dev set. The standard recurrent model achieves 0.429 (fine-grained) and 0.850 (coarse-grained) accuracy at the sentence level; LSTM achieves 0.469 and 0.870, and Bidirectional LSTM 0.488 and 0.878, respectively. # 3.2 Sequence-to-Sequence Models SEQ2SEQ models are neural models aiming at generating a sequence of output texts given inputs. Theoretically, SEQ2SEQ models can be adapted to NLP tasks that can be formalized as predicting outputs given inputs, serving different purposes depending on the inputs and outputs, e.g., machine translation, where inputs correspond to source sentences and outputs to target sentences (Sutskever et al., 2014; Luong et al., 2014), or conversational response generation, where inputs correspond to messages and outputs correspond to responses (Vinyals and Le, 2015; Li et al., 2015). SEQ2SEQ models need to be trained on massive amounts of data for the implicit semantic and syntactic relations between pairs to be learned. SEQ2SEQ models map an input sequence to a vector representation using LSTM models and then sequentially predict tokens based on the pre-obtained representation. The model defines a distribution over outputs Y and sequentially predicts tokens given inputs X using a softmax function:

P(Y|X) = ∏_{t=1}^{n_y} p(y_t | x_1, ..., x_{n_x}, y_1, y_2, ..., y_{t−1}) = ∏_{t=1}^{n_y} exp(f(h_{t−1}, e_{y_t})) / Σ_{y'} exp(f(h_{t−1}, e_{y'}))
1506.01066#6
1506.01066#8
1506.01066
[ "1510.03055" ]
1506.01066#8
Visualizing and Understanding Neural Models in NLP
where f(h_{t−1}, e_{y_t}) denotes the activation function between h_{t−1} and e_{y_t}, and h_{t−1} is the representation output from the LSTM at time t−1. For each time step in word prediction, SEQ2SEQ models combine the current token with previously built embeddings for next-step word prediction. For easy visualization purposes, we turn to the most straightforward task, the autoencoder,
1506.01066#7
1506.01066#9
1506.01066
[ "1510.03055" ]
1506.01066#9
Visualizing and Understanding Neural Models in NLP
where inputs and outputs are identical. The goal of an autoencoder is to reconstruct inputs from the pre- obtained representation. We would like to see how individual input tokens affect the overall sentence representation and each of the tokens to predict in outputs. We trained the auto-encoder on a subset of WMTâ 14 corpus containing 4 million english sentences with an average length of 22.5 words. We followed training protocols described in (Sutskever et al., 2014). # 4 Representation Plotting We begin with simple plots of representations to shed light on local compositions using Stanford Sentiment Treebank. Local Composition Figure 1 shows a 60d heat- map vector for the representation of selected words/phrases/sentences, with an emphasis on ex- tent modiï¬
1506.01066#8
1506.01066#10
1506.01066
[ "1510.03055" ]
1506.01066#10
Visualizing and Understanding Neural Models in NLP
cations (adverbial and adjectival) and negation. Embeddings for phrases or sentences are attained by composing word representations from the pretrained model. The intensification part of Figure 1 shows suggestive patterns where values for a few dimensions are strengthened by modifiers like "a lot" (the red bar in the first example), "so much" (the red bar in the second example), and "incredibly". Though the patterns for negations are not as clear, there is still a consistent reversal for some dimensions, visible as a shift between blue and red for dimensions boxed on the left. We then visualize words and phrases using t-SNE (Van der Maaten and Hinton, 2008) in Figure 2, deliberately adding in some random words for comparative purposes. As can be seen, neural models nicely learn the properties of local compositionality, clustering negation+positive words ("not nice", "not good") together with negative words. Note also the asymmetry of negation: "not bad" is clustered more with the negative than the positive words (as shown in both Figures 1 and 2). This asymmetry has been widely discussed in linguistics, for example as arising from markedness, since "good" is the unmarked direction of the scale (Clark and Clark, 1977; Horn, 1989; Fraenkel and Schul, 2008). This suggests that although the model does seem to focus on certain units for negation in Figure 1, the neural model is not just learning to apply a fixed transform for "not" but is able to capture the subtle differences in the composition of different words.
1506.01066#9
1506.01066#11
1506.01066
[ "1510.03055" ]
1506.01066#11
Visualizing and Understanding Neural Models in NLP
Figure 2: t-SNE visualization on latent representations for modifications and negations.
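As a rough illustration of how such a plot can be produced, the sketch below projects composed phrase vectors with t-SNE and labels the points. It is not the authors' code: the compose() helper, the random stand-in word vectors, and the tiny phrase list are placeholders for whatever pretrained composition model and phrase set one actually has.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def compose(phrase, word_vectors, dim=60):
    """Placeholder composition: average the word vectors of a phrase.
    A real experiment would instead run the trained recurrent/LSTM encoder."""
    vecs = [word_vectors.get(w, np.zeros(dim)) for w in phrase.split()]
    return np.mean(vecs, axis=0)

phrases = ["good", "not good", "bad", "not bad", "terrific", "hardly useful"]
rng = np.random.default_rng(0)
word_vectors = {w: rng.normal(size=60) for p in phrases for w in p.split()}

X = np.stack([compose(p, word_vectors) for p in phrases])
# Perplexity must be smaller than the number of points for such a tiny example.
proj = TSNE(n_components=2, perplexity=2, init="random", random_state=0).fit_transform(X)

for (x, y), label in zip(proj, phrases):
    plt.scatter(x, y, s=10)
    plt.annotate(label, (x, y))
plt.title("t-SNE of composed phrase representations")
plt.show()
```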
1506.01066#10
1506.01066#12
1506.01066
[ "1510.03055" ]
1506.01066#12
Visualizing and Understanding Neural Models in NLP
Figure 4: t-SNE Visualization for clause composition. Concessive Sentences In concessive sentences, two clauses have opposite polarities, usually re- lated by a contrary-to-expectation implicature. We plot evolving representations over time for two con- cessives in Figure 3. The plots suggest: 1. For tasks like sentiment analysis whose goal is to predict a speciï¬ c semantic dimension (as op- posed to general tasks like language model word prediction), too large a dimensionality leads to many dimensions non-functional (with values close to 0), causing two sentences of opposite sentiment to differ only in a few dimensions. This may ex- plain why more dimensions donâ t necessarily lead to better performance on such tasks (For example, as reported in (Socher et al., 2013), optimal perfor- mance is achieved when word dimensionality is set to between 25 and 35). 2. Both sentences contain two clauses connected by the conjunction â thoughâ . Such two-clause sen- tences might either work collaborativelyâ models would remember the word â thoughâ and make the second clause share the same sentiment orienta- tion as ï¬ rstâ or competitively, with the stronger one dominating. The region within dotted line in Figure 3(a) favors the second assumption: the dif- ference between the two sentences is diluted when the ï¬ nal words (â interestingâ and â boringâ ) appear.
1506.01066#11
1506.01066#13
1506.01066
[ "1510.03055" ]
1506.01066#13
Visualizing and Understanding Neural Models in NLP
Clause Composition In Figure 4 we explore this clause composition in more detail. Representations move closer to the negative sentiment region by adding negative clauses like "although it had bad acting" or "but it is too long" to the end of a simply positive "I like the movie". By contrast, adding a concessive clause to a negative clause does not move toward the positive; "I hate X but ..." is still very negative, not that different than "I hate X". This difference again suggests the model is able to capture negative asymmetry (Clark and Clark, 1977; Horn, 1989; Fraenkel and Schul, 2008). Figure 5: Saliency heatmap for "
1506.01066#12
1506.01066#14
1506.01066
[ "1510.03055" ]
1506.01066#14
Visualizing and Understanding Neural Models in NLP
I hate the movie." Each row corresponds to saliency scores for the corresponding word representation, with each grid cell representing one dimension. Figure 6: Saliency heatmap for "
1506.01066#13
1506.01066#15
1506.01066
[ "1510.03055" ]
1506.01066#15
Visualizing and Understanding Neural Models in NLP
I hate the movie I saw last night."
1506.01066#14
1506.01066#16
1506.01066
[ "1510.03055" ]
1506.01066#16
Visualizing and Understanding Neural Models in NLP
Figure 7: Saliency heatmap for "I hate the movie though the plot is interesting." # 5 First-Derivative Saliency In this section, we describe another strategy which is inspired by the back-propagation strategy in vision (Erhan et al., 2009; Simonyan et al., 2013). It measures how much each input unit contributes to the final decision, which can be approximated by first derivatives.
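A minimal sketch of this first-derivative saliency, written with PyTorch rather than the authors' original code, is shown below; the formal definition follows in equations (1)-(3). The tiny mean-pooling classifier, the toy vocabulary, and all variable names are illustrative assumptions, and a real experiment would plug in the trained recurrent/LSTM sentiment model instead. The key step is taking the gradient of the class score with respect to the input embeddings and reading off its magnitude.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab = {"i": 0, "hate": 1, "the": 2, "movie": 3}
emb_dim, n_classes = 60, 5

embedding = nn.Embedding(len(vocab), emb_dim)
# Stand-in classifier: mean-pool the word embeddings, then a linear layer.
classifier = nn.Linear(emb_dim, n_classes)

tokens = torch.tensor([[vocab[w] for w in "i hate the movie".split()]])
embedded = embedding(tokens)                 # shape: (1, seq_len, emb_dim)
embedded.retain_grad()                       # keep gradients on this non-leaf tensor

scores = classifier(embedded.mean(dim=1))    # unnormalized class scores S_c(e)
gold_class = 0                               # e.g. "very negative"
scores[0, gold_class].backward()             # d S_c / d e via back-propagation

saliency = embedded.grad.abs().squeeze(0)    # |w(e)|, one value per dimension
print(saliency.shape)                        # (seq_len, emb_dim): heatmap values
print(saliency.sum(dim=1))                   # one aggregate saliency score per word
```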
1506.01066#15
1506.01066#17
1506.01066
[ "1510.03055" ]
1506.01066#17
Visualizing and Understanding Neural Models in NLP
More formally, for a classification model, an input E is associated with a gold-standard class label c. (Depending on the NLP task, an input could be the embedding for a word or a sequence of words, while labels could be POS tags, sentiment labels, the next word index to predict, etc.) Given embeddings E for input words with the associated gold class label c, the trained model associates the pair (E, c) with a score Sc(E). The goal is to decide which units of E make the most significant contribution to Sc(e), and thus to the decision, the choice of class label c. In the case of deep neural models, the class score Sc(e) is a highly non-linear function. We approximate Sc(e) with a linear function of e by computing Figure 1: Visualizing intensifi
1506.01066#16
1506.01066#18
1506.01066
[ "1510.03055" ]
1506.01066#18
Visualizing and Understanding Neural Models in NLP
cation and negation. Each vertical bar shows the value of one dimension in the final sentence/phrase representation after compositions. Embeddings for phrases or sentences are attained by composing word representations from the pretrained model. the first-order Taylor expansion

Sc(e) ≈ w(e)^T e + b   (1)

where w(e) is the derivative of Sc with respect to the embedding e,

w(e) = ∂Sc / ∂e |_e   (2)

The magnitude (absolute value) of the derivative indicates the sensitivity of the final decision to a change in one particular dimension, telling us how much one specific dimension of the word embedding contributes to the final decision. The saliency score is given by

S(e) = |w(e)|   (3)

# 5.1 Results on Stanford Sentiment Treebank

We first illustrate results on the Stanford Treebank. We plot in Figures 5, 6 and 7 the saliency scores (the
1506.01066#17
1506.01066#19
1506.01066
[ "1510.03055" ]
1506.01066#19
Visualizing and Understanding Neural Models in NLP
Figure 3: Representations over time from LSTMs. Each column corresponds to outputs from the LSTM at each time step (representations obtained after combining the current word embedding with previously built embeddings). Each grid cell in a column corresponds to one dimension of the current time-step representation. The last rows correspond to the absolute differences between the two sequences ("i hate the movie though the plot is interesting" and "i like the movie though the plot is boring") at each time step. absolute value of the derivative of the loss function with respect to each dimension of all word inputs) for three sentences, applying the trained model to each sentence. Each row corresponds to the saliency score for the corresponding word representation, with each grid cell representing one dimension. The examples are based on the clear sentiment indicator "
1506.01066#18
1506.01066#20
1506.01066
[ "1510.03055" ]
1506.01066#20
Visualizing and Understanding Neural Models in NLP
hate" that lends them all negative sentiment. "I hate the movie" All three models assign high saliency to "hate" and dampen the influence of other tokens. The LSTM offers a clearer focus on "hate" than the standard recurrent model, but the bi-directional LSTM shows the clearest focus, attaching almost zero emphasis on words other than "hate". This is presumably due to the gate structures in LSTMs and Bi-LSTMs that control information flow, making these architectures better at filtering out less relevant information. Figure 8: Variance visualization. "
1506.01066#19
1506.01066#21
1506.01066
[ "1510.03055" ]
1506.01066#21
Visualizing and Understanding Neural Models in NLP
I hate the movie that I saw last nightâ All three models assign the correct sentiment. The simple recurrent models again do poorly at ï¬ lter- ing out irrelevant information, assigning too much salience to words unrelated to sentiment. However none of the models suffer from the gradient van- ishing problems despite this sentence being longer; the salience of â hateâ still stands out after 7-8 fol- lowing convolutional operations. only a rough approximate to individual contribu- tions and might not sufï¬ ce to deal with highly non- linear cases. By contrast, the LSTM emphasizes the ï¬ rst clause, sharply dampening the inï¬ uence from the second clause, while the Bi-LSTM focuses on both â hate the movieâ and â plot is interestingâ .
1506.01066#20
1506.01066#22
1506.01066
[ "1510.03055" ]
1506.01066#22
Visualizing and Understanding Neural Models in NLP
# 5.2 Results on Sequence-to-Sequence Autoencoder â I hate the movie though the plot is interestingâ The simple recurrent model emphasizes only the second clause â the plot is interestingâ , assigning no credit to the ï¬ rst clause â I hate the movieâ . This might seem to be caused by a vanishing gradient, yet the model correctly classiï¬ es the sentence as very negative, suggesting that it is successfully incorporating information from the ï¬ rst negative clause. We separately tested the individual clause â though the plot is interestingâ . The standard recur- rent model conï¬ dently labels it as positive. Thus despite the lower saliency scores for words in the ï¬ rst clause, the simple recurrent system manages to rely on that clause and downplay the information from the latter positive clauseâ despite the higher saliency scores of the later words. This illustrates a limitation of saliency visualization. ï¬ rst-order derivatives donâ t capture all the information we would like to visualize, perhaps because they are Figure 9 represents saliency heatmap for auto- encoder in terms of predicting correspondent to- ken at each time step. We compute ï¬ rst-derivatives for each preceding word through back-propagation as decoding goes on. Each grid corresponds to magnitude of average saliency value for each 1000- dimensional word vector. The heatmaps give clear overview about the behavior of neural models dur- ing decoding. Observations can be summarized as follows: 1. For each time step of word prediction, SEQ2SEQ models manage to link word to predict back to correspondent region at the inputs (automat- ically learn alignments), e.g., input region centering around token â
1506.01066#21
1506.01066#23
1506.01066
[ "1510.03055" ]
1506.01066#23
Visualizing and Understanding Neural Models in NLP
hateâ exerts more impact when to- ken â hateâ is to be predicted, similar cases with tokens â movieâ , â plotâ and â boringâ . 2. Neural decoding combines the previously built representation with the word predicted at the current step. As decoding proceeds, the inï¬ uence - 2g 25 2 2 £28 2848 Beppe aé⠬ Es a = 3 s hate the movie though the plot is boring 3 - 2 Py - gzgge284e 2 27337 8 = £ â â ¬ 2 8 fi a = es 2 ee gee e828 22 £2£,ro3 7 2 4 BE Es 3 a though â -egeeSeeueme- â -g oo gg¢e $2 g 2 ete a ¢⠬ £23 â e 2 8 â â gegegeegu4rr'-ege88 2.38327 8 ¢⠬ Be 32 es gs es | FP - hate the movie though the plot i hate the movie though the boring | hate the movie though the plot boring hate the movie though the plot : hate the movie though the plot i hate the movie though the boring 3 8 - 2g 25 2 2 £28 2848 Beppe aé⠬ Es a 0.14 = 3 s eS os 8 hate the movie though the plot is boring 3 3 2 - 2 Py - gzgge284e 2 27337 8 = £ â â ¬ 2 8 0.00 fi a = es 2 ee gee e828 22 £2£,ro3 7 2 4 BE Es 3 a though â -egeeSeeueme- â -g oo gg¢e $2 g 2 ete a ¢⠬ £23 â e 2 8 â â gegegeegu4rr'-ege88 2.38327 8 ¢⠬ Be 32 es gs es
1506.01066#22
1506.01066#24
1506.01066
[ "1510.03055" ]
1506.01066#24
Visualizing and Understanding Neural Models in NLP
of the initial input on decoding (i.e., tokens in source sentences) gradually diminishes as more previously-predicted words are encoded in the vector representations. Figure 9: Saliency heatmap for the SEQ2SEQ auto-encoder in terms of predicting the corresponding token at each time step.
1506.01066#23
1506.01066#25
1506.01066
[ "1510.03055" ]
1506.01066#25
Visualizing and Understanding Neural Models in NLP
Meanwhile, the influence of the language model gradually dominates: when the word "boring" is to be predicted, models attach more weight to the earlier predicted tokens "plot" and "is" but less to the corresponding regions in the inputs, i.e., the word "boring" in the inputs. # 6 Average and Variance For settings where word embeddings are treated as parameters to optimize from scratch (as opposed to using pre-trained embeddings), we propose a second, surprisingly easy and direct way to visualize important indicators.
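A sketch of this deviation-from-average measure is below; the exact quantity is formalized in the next paragraph. The embedding matrix here is randomly generated purely for illustration, whereas in the paper's setting it would come from a model whose embeddings were trained from scratch on the task.

```python
import numpy as np

rng = np.random.default_rng(1)
words = ["one", "of", "the", "greatest", "movies", "ever"]
E = rng.normal(size=(len(words), 60))       # stand-in learned word embeddings

# Salience of word i, dimension j: squared deviation from the sentence average.
avg = E.mean(axis=0)                         # average embedding over the sentence
variance_map = (E - avg) ** 2                # heatmap values, shape (n_words, dim)

# A single per-word score is just the sum over dimensions.
for word, score in zip(words, variance_map.sum(axis=1)):
    print(f"{word:10s} {score:.3f}")
```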
1506.01066#24
1506.01066#26
1506.01066
[ "1510.03055" ]
1506.01066#26
Visualizing and Understanding Neural Models in NLP
We first compute the average of the word embeddings for all the words within the sentence. The measure of salience or influence for a word is its deviation from this average. The idea is that during training, models would learn to render indicators different from non-indicator words, enabling them to stand out even after many layers of computation. Figure 8 shows a map of variance; each grid cell corresponds to the value of

||e_{i,j} − (1/N_S) Σ_{i'∈S} e_{i',j}||^2

where e_{i,j} denotes the value of the j-th dimension of word i and N_S denotes the number of tokens within the sentence. As the figure shows, the variance-based salience measure also does a good job of emphasizing the relevant sentiment words. The model does have shortcomings: (1) it can only be used in scenarios where word embeddings are parameters to learn, and (2) it is not clear how well the model is able to visualize local compositionality. # 7 Conclusion In this paper, we offer several methods to help visualize and interpret neural models, to understand how neural models are able to compose meanings, demonstrating asymmetries of negation and explaining some aspects of the strong performance of LSTMs at these tasks. Though our attempts only touch superficial points in neural models, and each method has its pros and cons, together they may offer some insights into the behaviors of neural models in language-based tasks, marking one initial step toward understanding how they achieve meaning composition in natural language processing. Our future work includes using the results of the visualization to perform error analysis, and understanding the strengths and limitations of different neural models.
1506.01066#25
1506.01066#27
1506.01066
[ "1510.03055" ]
1506.01066#27
Visualizing and Understanding Neural Models in NLP
# References Herbert H. Clark and Eve V. Clark. 1977. Psychology and language: An introduction to psycholinguistics. Harcourt Brace Jovanovich. Navneet Dalal and Bill Triggs. 2005. Histograms of In Com- oriented gradients for human detection. puter Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, vol- ume 1, pages 886â 893. IEEE. Jeffrey L. Elman. 1989. Representation and structure in connectionist models. Technical Report 8903, Center for Research in Language, University of Cal- ifornia, San Diego. Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. 2009. Visualizing higher-layer fea- tures of a deep network.
1506.01066#26
1506.01066#28
1506.01066
[ "1510.03055" ]
1506.01066#28
Visualizing and Understanding Neural Models in NLP
Dept. IRO, Universit´e de Montr´eal, Tech. Rep. Improving vector space word representations using multilingual correlation. In Proceedings of EACL, volume 2014. Manaal Faruqui, Yulia Tsvetkov, Dani Yogatama, Chris Dyer, and Noah Smith. 2015. Sparse overcom- plete word vector representations. arXiv preprint arXiv:1506.02004. Tamar Fraenkel and Yaacov Schul. 2008.
1506.01066#27
1506.01066#29
1506.01066
[ "1510.03055" ]
1506.01066#29
Visualizing and Understanding Neural Models in NLP
The mean- ing of negated adjectives. Intercultural Pragmatics, 5(4):517â 540. Alona Fyshe, Leila Wehbe, Partha P Talukdar, Brian Murphy, and Tom M Mitchell. 2015. A compo- sitional and interpretable semantic space. Proceed- ings of the NAACL-HLT, Denver, USA. Ross Girshick, Jeff Donahue, Trevor Darrell, and Jiten- dra Malik. 2014. Rich feature hierarchies for accu- rate object detection and semantic segmentation. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 580â
1506.01066#28
1506.01066#30
1506.01066
[ "1510.03055" ]
1506.01066#30
Visualizing and Understanding Neural Models in NLP
587. IEEE. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Neural computation, Long short-term memory. 9(8):1735â 1780. Laurence R. Horn. 1989. A natural history of negation, volume 960. University of Chicago Press Chicago. Yangfeng Ji and Jacob Eisenstein. 2014. Represen- tation learning for text-level discourse parsing. In Proceedings of the 52nd Annual Meeting of the As- sociation for Computational Linguistics, volume 1, pages 13â
1506.01066#29
1506.01066#31
1506.01066
[ "1510.03055" ]
1506.01066#31
Visualizing and Understanding Neural Models in NLP
24. Andrej Karpathy, Justin Johnson, and Fei-Fei Li. 2015. Visualizing and understanding recurrent networks. arXiv preprint arXiv:1506.02078. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hin- ton. 2012. Imagenet classiï¬ cation with deep con- volutional neural networks. In Advances in neural information processing systems, pages 1097â 1105. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objec- tive function for neural conversation models. arXiv preprint arXiv:1510.03055. David G Lowe. 2004. Distinctive image features from International journal of scale-invariant keypoints. computer vision, 60(2):91â 110. Ilya Sutskever, Quoc V Le, Oriol Vinyals, and Wojciech Zaremba. 2014. Addressing the rare word problem in neural machine translation. arXiv preprint arXiv:1410.8206. Aravindh Mahendran and Andrea Vedaldi. 2014. Un- derstanding deep image representations by inverting them. arXiv preprint arXiv:1412.0035. Brian Murphy, Partha Pratim Talukdar, and Tom M Mitchell. 2012. Learning effective and interpretable semantic models using non-negative sparse embed- ding. In COLING, pages 1933â
1506.01066#30
1506.01066#32
1506.01066
[ "1510.03055" ]
1506.01066#32
Visualizing and Understanding Neural Models in NLP
1950. Anh Nguyen, Jason Yosinski, and Jeff Clune. 2014. Deep neural networks are easily fooled: High conï¬ - dence predictions for unrecognizable images. arXiv preprint arXiv:1412.1897. Mike Schuster and Kuldip K Paliwal. 1997. Bidirec- tional recurrent neural networks. Signal Processing, IEEE Transactions on, 45(11):2673â 2681. Karen Simonyan, Andrea Vedaldi, and Andrew Zisser- man. 2013.
1506.01066#31
1506.01066#33
1506.01066
[ "1510.03055" ]
1506.01066#33
Visualizing and Understanding Neural Models in NLP
Deep inside convolutional networks: Visualising image classiï¬ cation models and saliency maps. arXiv preprint arXiv:1312.6034. Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep mod- els for semantic compositionality over a sentiment In Proceedings of the conference on treebank. empirical methods in natural language processing (EMNLP), volume 1631, page 1642. Citeseer. Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Sys- tems, pages 3104â 3112. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199. Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of Machine Learning Research, 9(2579-2605):85. Oriol Vinyals and Quoc Le. 2015. A neural conversa- tional model. arXiv preprint arXiv:1506.05869. Carl Vondrick, Aditya Khosla, Tomasz Malisiewicz, 2013. Hoggles: Visual- and Antonio Torralba. In Computer Vi- izing object detection features. sion (ICCV), 2013 IEEE International Conference on, pages 1â
1506.01066#32
1506.01066#34
1506.01066
[ "1510.03055" ]
1506.01066#34
Visualizing and Understanding Neural Models in NLP
8. IEEE. Philippe Weinzaepfel, Herv´e J´egou, and Patrick P´erez. 2011. Reconstructing an image from its local de- scriptors. In Computer Vision and Pattern Recogni- tion (CVPR), 2011 IEEE Conference on, pages 337â 344. IEEE. Matthew D Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In Com- puter Visionâ ECCV 2014, pages 818â 833. Springer. # Appendix
1506.01066#33
1506.01066#35
1506.01066
[ "1510.03055" ]
1506.01066#35
Visualizing and Understanding Neural Models in NLP
Recurrent Models A recurrent network succes- sively takes word wt at step t, combines its vector representation et with previously built hidden vec- tor htâ 1 from time t â 1, calculates the resulting current embedding ht, and passes it to the next step. The embedding ht for the current time t is thus: ht = f (W · htâ 1 + V · et) (4) where W and V denote compositional matrices. If Ns denote the length of the sequence, hNs repre- sents the whole sequence S. hNs is used as input a softmax function for classiï¬
1506.01066#34
1506.01066#36
1506.01066
[ "1510.03055" ]
1506.01066#36
Visualizing and Understanding Neural Models in NLP
cation tasks. Multi-layer Recurrent Models Multi-layer recurrent models extend the one-layer recurrent structure by operating on a deep neural architecture that enables more expressivity and flexibility. The model associates each time step for each layer with a hidden representation h_{l,t}, where l ∈ [1, L] denotes the index of the layer and t denotes the index of the time step. h_{l,t} is given by: h_{t,l} = f(W · h_{t−1,l} + V · h_{t,l−1}) (5) where h_{t,0} = e_t, which is the original word embedding input at the current time step. Long-short Term Memory The LSTM model, first proposed in (Hochreiter and Schmidhuber, 1997), maps an input sequence to a fixed-sized vector by sequentially convoluting the current representation with the output representation of the previous step. The LSTM associates each time epoch with input, control and memory gates, and tries to minimize the impact of unrelated information. i_t, f_t and o_t denote the gate states at time t. h_t denotes the hidden vector output from the LSTM model at time t, and e_t denotes the word embedding input at time t.
1506.01066#35
1506.01066#37
1506.01066
[ "1510.03055" ]
1506.01066#37
Visualizing and Understanding Neural Models in NLP
We have

i_t = σ(W_i · e_t + V_i · h_{t−1})
f_t = σ(W_f · e_t + V_f · h_{t−1})
o_t = σ(W_o · e_t + V_o · h_{t−1})
l_t = tanh(W_l · e_t + V_l · h_{t−1})
c_t = f_t · c_{t−1} + i_t × l_t
h_t = o_t · c_t   (6)

where σ denotes the sigmoid function, i_t, f_t and o_t are scalars within the range [0, 1], and × denotes the pairwise (element-wise) product.
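The gate equations in (6) translate almost line for line into code. The sketch below is a plain NumPy illustration of a single LSTM step under those equations (with h_t = o_t · c_t); the matrix shapes and the random initialization are assumptions for the demo, not the paper's trained parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(e_t, h_prev, c_prev, params):
    """One LSTM time step following equation (6)."""
    W, V = params["W"], params["V"]                  # dicts of gate matrices
    i_t = sigmoid(W["i"] @ e_t + V["i"] @ h_prev)    # input gate
    f_t = sigmoid(W["f"] @ e_t + V["f"] @ h_prev)    # forget (control) gate
    o_t = sigmoid(W["o"] @ e_t + V["o"] @ h_prev)    # output gate
    l_t = np.tanh(W["l"] @ e_t + V["l"] @ h_prev)    # candidate update
    c_t = f_t * c_prev + i_t * l_t                   # memory cell
    h_t = o_t * c_t                                  # hidden output
    return h_t, c_t

dim = 60
rng = np.random.default_rng(0)
params = {"W": {k: rng.normal(scale=0.1, size=(dim, dim)) for k in "ifol"},
          "V": {k: rng.normal(scale=0.1, size=(dim, dim)) for k in "ifol"}}

h, c = np.zeros(dim), np.zeros(dim)
for e in rng.normal(size=(5, dim)):      # a toy 5-token "sentence"
    h, c = lstm_step(e, h, c, params)
print(h.shape)                           # (60,): the final sentence representation
```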
1506.01066#36
1506.01066#38
1506.01066
[ "1510.03055" ]
1506.01066#38
Visualizing and Understanding Neural Models in NLP
A multi-layer LSTM model works in the same way as multi-layer recurrent models by enabling multi-layer compositions. Bidirectional Models (Schuster and Paliwal, 1997) add bidirectionality to the recurrent framework, where embeddings for each time step are calculated both forwardly and backwardly:

h_t^→ = f(W^→ · h_{t−1}^→ + V^→ · e_t)
h_t^← = f(W^← · h_{t+1}^← + V^← · e_t)   (7)

Normally, bidirectional models feed the concatenation of the vectors calculated from both directions to the classifier.
1506.01066#37
1506.01066#39
1506.01066
[ "1510.03055" ]
1506.01066#39
Visualizing and Understanding Neural Models in NLP
Bidirectional models can be similarly extended to both multi-layer neu- ral model and LSTM version.
1506.01066#38
1506.01066
[ "1510.03055" ]
1506.02488#0
On the Fuzzy Stability of an Affine Functional Equation
arXiv:1506.02488v1 [math.CA]

# On the Fuzzy Stability of an Affine Functional Equation

# Md. Nasiruzzaman

Department of Mathematics, Aligarh Muslim University, Aligarh 202002, India Email: nasir3489@gmail.com
1506.02488#1
1506.02488
[ "1506.02488" ]
1506.02488#1
On the Fuzzy Stability of an Affine Functional Equation
Abstract: In this paper, we obtain the general solution of the following functional equation f (3x + y + z) + f (x + 3y + z) + f (x + y + 3z) + f (x) + f (y) + f (z) = 6f (x + y + z). We establish the Hyers-Ulam-Rassias stability of the above functional equation in the fuzzy normed spaces. Further we show the above functional equation is stable in the sense of Hyers and Ulam in fuzzy normed spaces. 1.
1506.02488#0
1506.02488#2
1506.02488
[ "1506.02488" ]
1506.02488#2
On the Fuzzy Stability of an Affine Functional Equation
Introduction In modelling applied problems only partial informations may be known (or) there may be a degree of uncertainty in the parameters used in the model or some measurements may be imprecise. Due to such features, we are tempted to consider the study of functional equations in the fuzzy setting. For the last 40 years, fuzzy theory has become very active area of research and a lot of development has been made in the theory of fuzzy sets [1] to ï¬ nd the fuzzy analogues of the classical set theory.
1506.02488#1
1506.02488#3
1506.02488
[ "1506.02488" ]
1506.02488#3
On the Fuzzy Stability of an Affine Functional Equation
This branch ï¬ nds a wide range of applications in the ï¬ eld of science and engineering. A.K. Katsaras [2] introduced an idea of fuzzy norm on a linear space in 1984, in the same year Cpmgxin Wu and Jinxuan Fang [3] introduced a notion of fuzzy normed space to give a generalization of the Kolmogoroï¬ normalized theorem for fuzzy topological linear spaces. In 1991, R.
1506.02488#2
1506.02488#4
1506.02488
[ "1506.02488" ]
1506.02488#4
On the Fuzzy Stability of an Affine Functional Equation
Biswas [4] deï¬ ned and studied fuzzy inner product spaces in linear space. In 1992, C. Felbin [5] introduced an alternative deï¬ nition of a fuzzy norm on a linear topological structure of a fuzzy normed linear spaces. In 2003, T. Bag and S.K. Samanta [6] modiï¬ ed the deï¬ nition of S.C. Cheng and J.N. Mordeson [7] by removing a regular condition. In 1940, Ulam [8] raised a question concerning the stability of group homomorphism as follows: Let G1 be a group and G2 a metric group with the metric d(., .). Given ε > 0, does there exists a δ > 0 such that if a function f : G1 â G2 satisï¬
1506.02488#3
1506.02488#5
1506.02488
[ "1506.02488" ]
1506.02488#5
On the Fuzzy Stability of an Affine Functional Equation
es the inequality d(f(xy), f(x)f(y)) < δ for all x, y ∈ G1, then there exists a homomorphism H : G1 → G2 with d(f(x), H(x)) < ε for all x ∈ G1? The concept of stability for a functional equation arises when we replace the functional equation by an inequality which acts as a perturbation of the equation. In 1941, the case of approximately additive mappings was solved by Hyers [9] under the assumption that G2 is a Banach space. In 1978, a generalized version of the theorem of Hyers for approximately linear mappings was given by Th.M.
1506.02488#4
1506.02488#6
1506.02488
[ "1506.02488" ]
1506.02488#6
On the Fuzzy Stability of an Affine Functional Equation
Rassias [10]. He proved that for a mapping f : E1 → E2 such that f(tx) is continuous in t ∈ R for each fixed x ∈ E1, if there exist a constant ε > 0 and p ∈ [0, 1) with

‖f(x + y) − f(x) − f(y)‖ ≤ ε(‖x‖^p + ‖y‖^p)   (1.1)

for all x, y ∈ E1, then there exists a unique R-linear mapping T : E1 → E2 such that

‖f(x) − T(x)‖ ≤ (2ε / (2 − 2^p)) ‖x‖^p   (x ∈ E1).
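To make the flavor of this stability statement concrete, the small numerical sketch below perturbs a linear map, checks inequality (1.1), and recovers the approximating additive map via the standard limit T(x) = lim_n f(2^n x) / 2^n. The choice of perturbation and the parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

eps, p = 0.5, 0.5          # assumed constants, with p in [0, 1)

def f(x):
    # A linear map plus a small perturbation of order |x|^p.
    return 3.0 * x + 0.1 * eps * np.abs(x) ** p

def additive_defect(x, y):
    return abs(f(x + y) - f(x) - f(y))

def T(x, n=40):
    # Hyers' limit: T(x) = lim_{n -> inf} f(2^n x) / 2^n, here truncated at n = 40.
    return f((2.0 ** n) * x) / (2.0 ** n)

rng = np.random.default_rng(0)
for x, y in rng.uniform(-10, 10, size=(5, 2)):
    bound = eps * (abs(x) ** p + abs(y) ** p)
    assert additive_defect(x, y) <= bound + 1e-9                        # inequality (1.1)
    assert abs(f(x) - T(x)) <= 2 * eps / (2 - 2 ** p) * abs(x) ** p     # Rassias bound
print("perturbed f satisfies (1.1); T approximates f within the Rassias bound")
```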
1506.02488#5
1506.02488#7
1506.02488
[ "1506.02488" ]