Dataset columns:
- id: string (length 12–15)
- title: string (length 8–162)
- content: string (length 1–17.6k)
- prechunk_id: string (length 0–15)
- postchunk_id: string (length 0–15)
- arxiv_id: string (length 10)
- references: sequence (length 1)
1506.06724#77
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
grabbing his wristwatch off the bedside table he checked the time , grimacing when he saw that it was just after two in the afternoon . jeanne louise should n't be up yet . stifling a yawn , he slid out of bed and made his way to the en suite bathroom for a shower . twenty minutes later paul was showered , dressed , and had brushed his teeth and hair . feeling somewhat alive now , he made his way out of his and jeanne louise 's room , pausing to look in on livy as he passed . she cried . quentin put a heavy , warm , calming hand on her thigh , saying , " he should be sober by then . " a cell phone rang . he pulled his from his back pocket , glanced at it , then used the remote to turn the tv to the channel that showed the feed from the camera at the security gate . " oh , it 's rachel . " now however she was out of his shot . he had missed it completely until he had ended up on the ground with his shotgun . an old clock hung on the wall near the door . the clock was obviously broken , the small red hand ticking the same second away over and over again . morgan squeezed the trigger and pellets ripped out of their package , bounced down the barrel , flew through the air and ripped into the old clock tearing it in two before it smashed to the ground . a man sat in a chair , facing the wall opposite of me . it nearly startled me when i first saw him , and made a bit of a squeak , but he did nothing . he had dark gray hair , a black suit and pants , and a gray and blue striped tie . s-sir ? i said . its been years since we last played together , but as i recall , he was rather weak at the net . or was it his serving ? all i know is he plays tennis much better than he plays cricket . perhaps , mr brearly , frances eventually replied , we should wait until we actually start playing . then we can ascertain our oppositions faults , and make a plan based on the new information .
1506.06724#76
1506.06724#78
1506.06724
[ "1502.03044" ]
1506.06724#78
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
since it was the middle of summer , there were candles in the fireplace instead of a fire . but it still cast a romantic glow over the room . there were candles on the mantle and on a table set up in the corner with flowers . as she looked around , her eyes instinctively turned to find max who was behind a bar opening a bottle of champagne . the doors were closed quietly behind her and her mouth felt dry as she looked across the room at the man who had haunted her dreams for so long . the open doorway of another house provided a view of an ancient game of tiles . it wasn't the game that held redding's attention . it was the four elderly people who sat around a table playing the game . they were well beyond their productive years and the canal township had probably been their whole lives . redding and lin ming stepped away from the doorway right into the path of a wooden pushcart .
1506.06724#77
1506.06724#79
1506.06724
[ "1502.03044" ]
1506.06724#79
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
along with the fish , howard had given them some other picnic treats that had spoiled ... mushrooms in cream sauce , rotted greens . the bats and temp were only eating from the river now , but the remaining picnic food was running low . there were a few loaves of stale bread , some cheese , some dried vegetables , and a couple of cakes . gregor looked over the supplies and thought about boots wailing for food and water in the jungle . it had been unbearable . he felt the first stirrings of fear mixing with his anger . a light flicked on in the room and eric jerked , blinking for a minute at the brightness before the images focused . there was a tall , thin man standing over a mannequin . he looked like he was assembling it , since its leg was on the ground next to the man and its arm was in two pieces farther away . then the mannequin 's head turned .
1506.06724#78
1506.06724#80
1506.06724
[ "1502.03044" ]
1506.06724#80
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
# References

[1] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015. 4
[2] K. Cho, B. van Merrienboer, C. Gulcehre, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. EMNLP, 2014. 4
1506.06724#79
1506.06724#81
1506.06724
[ "1502.03044" ]
1506.06724#81
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
[3] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014. 4
[4] T. Cour, C. Jordan, E. Miltsakaki, and B. Taskar. Movie/script: Alignment and parsing of video and text transcription. In ECCV, 2008. 2
[5] M. Everingham, J. Sivic, and A. Zisserman. "Hello! My name is... Buffy" - Automatic Naming of Characters in TV Video. BMVC, pages 899-908, 2006. 2
[6] A. Farhadi, M. Hejrati, M. Sadeghi, P. Young, C. Rashtchian, J. Hockenmaier, and D. Forsyth.
1506.06724#80
1506.06724#82
1506.06724
[ "1502.03044" ]
1506.06724#82
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Every picture tells a story: Generating sentences for images. In ECCV, 2010. 2
[7] S. Fidler, A. Sharma, and R. Urtasun. A sentence is worth a thousand pixels. In CVPR, 2013. 2
[8] A. Gupta and L. Davis. Beyond nouns: Exploiting prepositions and comparative adjectives for learning visual classifiers. In ECCV, 2008. 1
[9] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780, 1997. 4
[10] N. Kalchbrenner and P. Blunsom.
1506.06724#81
1506.06724#83
1506.06724
[ "1502.03044" ]
1506.06724#83
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Recurrent continuous translation models. In EMNLP, pages 1700-1709, 2013. 4
[11] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In CVPR, 2015. 1, 2
[12] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 5
[13] R. Kiros, R. Salakhutdinov, and R. S. Zemel.
1506.06724#82
1506.06724#84
1506.06724
[ "1502.03044" ]
1506.06724#84
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Unifying visual-semantic embeddings with multimodal neural language models. CoRR, abs/1411.2539, 2014. 1, 2, 3, 5, 9, 10
[14] R. Kiros, Y. Zhu, R. Salakhutdinov, R. S. Zemel, A. Torralba, R. Urtasun, and S. Fidler. Skip-Thought Vectors. In Arxiv, 2015. 3, 4
[15] C. Kong, D. Lin, M. Bansal, R. Urtasun, and S.
1506.06724#83
1506.06724#85
1506.06724
[ "1502.03044" ]
1506.06724#85
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Fidler. What are you talking about? Text-to-image coreference. In CVPR, 2014. 1, 2
[16] G. Kulkarni, V. Premraj, S. Dhar, S. Li, Y. Choi, A. Berg, and T. Berg. Baby talk: Understanding and generating simple image descriptions. In CVPR, 2011. 2
[17] D. Lin, S. Fidler, C. Kong, and R.
1506.06724#84
1506.06724#86
1506.06724
[ "1502.03044" ]
1506.06724#86
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Urtasun. Visual Semantic Search: Retrieving Videos via Complex Textual Queries. CVPR, pages 2657-2664, 2014. 1, 2
[18] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context.
1506.06724#85
1506.06724#87
1506.06724
[ "1502.03044" ]
1506.06724#87
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
In ECCV, pages 740-755. 2014. 1, 19
[19] X. Lin and D. Parikh. Don't just listen, use your imagination: Leveraging visual common sense for non-visual tasks. In CVPR, 2015. 1
[20] M. Malinowski and M. Fritz. A multi-world approach to question answering about real-world scenes based on uncertain input. In NIPS, 2014. 1
[21] J. Mao, W. Xu, Y. Yang, J. Wang, and A. L. Yuille.
1506.06724#86
1506.06724#88
1506.06724
[ "1502.03044" ]
1506.06724#88
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Explain images with multimodal recurrent neural networks. In arXiv:1410.1090, 2014. 1, 2
[22] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013. 4
[23] K. Papineni, S. Roukos, T. Ward, and W. J. Zhu.
1506.06724#87
1506.06724#89
1506.06724
[ "1502.03044" ]
1506.06724#89
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
BLEU: a method for automatic evaluation of machine translation. In ACL, pages 311-318, 2002. 6
[24] H. Pirsiavash, C. Vondrick, and A. Torralba. Inferring the why in images. arXiv.org, jun 2014. 2
[25] V. Ramanathan, A. Joulin, P. Liang, and L. Fei-Fei. Linking People in Videos with "Their" Names Using Coreference Resolution. In ECCV, pages 95-110. 2014. 2
[26] V. Ramanathan, P. Liang, and L. Fei-Fei.
1506.06724#88
1506.06724#90
1506.06724
[ "1502.03044" ]
1506.06724#90
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Video event understanding using natural language descriptions. In ICCV, 2013. 1
[27] A. Rohrbach, M. Rohrbach, N. Tandon, and B. Schiele. A dataset for movie description. In CVPR, 2015. 2, 5
[28] P. Sankar, C. V. Jawahar, and A. Zisserman. Subtitle-free Movie to Script Alignment. In BMVC, 2009. 2
[29] A. Schwing, T. Hazan, M. Pollefeys, and R. Urtasun. Effi-
1506.06724#89
1506.06724#91
1506.06724
[ "1502.03044" ]
1506.06724#91
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
cient Structured Prediction with Latent Variables for General Graphical Models. In ICML, 2012. 6
[30] J. Sivic, M. Everingham, and A. Zisserman. "Who are you?" - Learning person specific classifiers from video. CVPR, pages 1145-1152, 2009. 2
[31] I. Sutskever, O. Vinyals, and Q. V. Le.
1506.06724#90
1506.06724#92
1506.06724
[ "1502.03044" ]
1506.06724#92
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Sequence to sequence learning with neural networks. In NIPS, 2014. 4
[32] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014. 5
[33] M. Tapaswi, M. Bauml, and R. Stiefelhagen.
1506.06724#91
1506.06724#93
1506.06724
[ "1502.03044" ]
1506.06724#93
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Book2Movie: Aligning Video scenes with Book chapters. In CVPR, 2015. 2
[34] M. Tapaswi, M. Bauml, and R. Stiefelhagen. Aligning Plot Synopses to Videos for Story-based Retrieval. IJMIR, 4:3-16, 2015. 1, 2, 6
[35] S. Venugopalan, H. Xu, J. Donahue, M. Rohrbach, R. J. Mooney, and K. Saenko.
1506.06724#92
1506.06724#94
1506.06724
[ "1502.03044" ]
1506.06724#94
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Translating Videos to Natural Language Using Deep Recurrent Neural Networks. CoRR abs/1312.6229, cs.CV, 2014. 1, 2
[36] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In arXiv:1411.4555, 2014. 1, 2
[37] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio.
1506.06724#93
1506.06724#95
1506.06724
[ "1502.03044" ]
1506.06724#95
Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Show, attend and tell: Neural image caption generation with visual attention. In arXiv:1502.03044, 2015. 2
[38] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning Deep Features for Scene Recognition using Places Database. In NIPS, 2014. 5, 7
1506.06724#94
1506.06724
[ "1502.03044" ]
1506.02438#0
High-Dimensional Continuous Control Using Generalized Advantage Estimation
arXiv:1506.02438v6 [cs.LG] 20 Oct 2018

Published as a conference paper at ICLR 2016

# HIGH-DIMENSIONAL CONTINUOUS CONTROL USING GENERALIZED ADVANTAGE ESTIMATION

John Schulman, Philipp Moritz, Sergey Levine, Michael I. Jordan and Pieter Abbeel
Department of Electrical Engineering and Computer Science
University of California, Berkeley
{joschu,pcmoritz,levine,jordan,pabbeel}@eecs.berkeley.edu

# ABSTRACT
1506.02438#1
1506.02438
[ "1502.05477" ]
1506.02438#1
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(λ). We address the second challenge by using a trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
1506.02438#0
1506.02438#2
1506.02438
[ "1502.05477" ]
1506.02438#2
High-Dimensional Continuous Control Using Generalized Advantage Estimation
# INTRODUCTION

The typical problem formulation in reinforcement learning is to maximize the expected total reward of a policy. A key source of difficulty is the long time delay between actions and their positive or negative effect on rewards; this issue is called the credit assignment problem in the reinforcement learning literature (Minsky, 1961; Sutton & Barto, 1998), and the distal reward problem in the behavioral literature (Hull, 1943). Value functions offer an elegant solution to the credit assignment problem: they allow us to estimate the goodness of an action before the delayed reward arrives. Reinforcement learning algorithms make use of value functions in a variety of different ways; this paper considers algorithms that optimize a parameterized policy and use value functions to help estimate how the policy should be improved.

When using a parameterized stochastic policy, it is possible to obtain an unbiased estimate of the gradient of the expected total returns (Williams, 1992; Sutton et al., 1999; Baxter & Bartlett, 2000); these noisy gradient estimates can be used in a stochastic gradient ascent algorithm. Unfortunately, the variance of the gradient estimator scales unfavorably with the time horizon, since the effect of an action is confounded with the effects of past and future actions. Another class of policy gradient algorithms, called actor-critic methods, use a value function rather than the empirical returns, obtaining an estimator with lower variance at the cost of introducing bias (Konda & Tsitsiklis, 2003; Hafner & Riedmiller, 2011). But while high variance necessitates using more samples, bias is more pernicious: even with an unlimited number of samples, bias can cause the algorithm to fail to converge, or to converge to a poor solution that is not even a local optimum.

We propose a family of policy gradient estimators that significantly reduce variance while maintaining a tolerable level of bias. We call this estimation scheme, parameterized by $\gamma \in [0, 1]$ and
1506.02438#1
1506.02438#3
1506.02438
[ "1502.05477" ]
1506.02438#3
High-Dimensional Continuous Control Using Generalized Advantage Estimation
$\lambda \in [0, 1]$, the generalized advantage estimator (GAE). Related methods have been proposed in the context of online actor-critic methods (Kimura & Kobayashi, 1998; Wawrzyński, 2009). We provide a more general analysis, which is applicable in both the online and batch settings, and discuss an interpretation of our method as an instance of reward shaping (Ng et al., 1999), where the approximate value function is used to shape the reward.

We present experimental results on a number of highly challenging 3D locomotion tasks, where we show that our approach can learn complex gaits using high-dimensional, general purpose neural network function approximators for both the policy and the value function, each with over $10^4$ parameters. The policies perform torque-level control of simulated 3D robots with up to 33 state dimensions and 10 actuators.
1506.02438#2
1506.02438#4
1506.02438
[ "1502.05477" ]
1506.02438#4
High-Dimensional Continuous Control Using Generalized Advantage Estimation
The contributions of this paper are summarized as follows:

1. We provide justification and intuition for an effective variance reduction scheme for policy gradients, which we call generalized advantage estimation (GAE). While the formula has been proposed in prior work (Kimura & Kobayashi, 1998; Wawrzyński, 2009), our analysis is novel and enables GAE to be applied with a more general set of algorithms, including the batch trust-region algorithm we use for our experiments.
1506.02438#3
1506.02438#5
1506.02438
[ "1502.05477" ]
1506.02438#5
High-Dimensional Continuous Control Using Generalized Advantage Estimation
2. We propose the use of a trust region optimization method for the value function, which we find is a robust and efficient way to train neural network value functions with thousands of parameters.

3. By combining (1) and (2) above, we obtain an algorithm that empirically is effective at learning neural network policies for challenging control tasks. The results extend the state of the art in using reinforcement learning for high-dimensional continuous control. Videos are available at https://sites.google.com/site/gaepapersupp.
1506.02438#4
1506.02438#6
1506.02438
[ "1502.05477" ]
1506.02438#6
High-Dimensional Continuous Control Using Generalized Advantage Estimation
# 2 PRELIMINARIES

We consider an undiscounted formulation of the policy optimization problem. The initial state $s_0$ is sampled from distribution $\rho_0$. A trajectory $(s_0, a_0, s_1, a_1, \ldots)$ is generated by sampling actions according to the policy $a_t \sim \pi(a_t \mid s_t)$ and sampling the states according to the dynamics $s_{t+1} \sim P(s_{t+1} \mid s_t, a_t)$, until a terminal (absorbing) state is reached. A reward $r_t = r(s_t, a_t, s_{t+1})$ is received at each timestep. The goal is to maximize the expected total reward $\sum_{t=0}^{\infty} r_t$, which is assumed to be finite for all policies. Note that we are not using a discount as part of the problem specification; it will appear below as an algorithm parameter that adjusts a bias-variance tradeoff. But the discounted problem (maximizing $\sum_{t=0}^{\infty} \gamma^t r_t$) can be handled as an instance of the undiscounted problem in which we absorb the discount factor into the reward function, making it time-dependent.

Policy gradient methods maximize the expected total reward by repeatedly estimating the gradient $g := \nabla_\theta \mathbb{E}\left[ \sum_{t=0}^{\infty} r_t \right]$. There are several different related expressions for the policy gradient, which have the form
1506.02438#5
1506.02438#7
1506.02438
[ "1502.05477" ]
1506.02438#7
High-Dimensional Continuous Control Using Generalized Advantage Estimation
$$g = \mathbb{E}\left[ \sum_{t=0}^{\infty} \Psi_t \nabla_\theta \log \pi_\theta(a_t \mid s_t) \right], \qquad (1)$$

where $\Psi_t$ may be one of the following:

1. $\sum_{t=0}^{\infty} r_t$: total reward of the trajectory.
2. $\sum_{t'=t}^{\infty} r_{t'}$: reward following action $a_t$.
3. $\sum_{t'=t}^{\infty} r_{t'} - b(s_t)$: baselined version of previous formula.
4. $Q^{\pi}(s_t, a_t)$: state-action value function.
5. $A^{\pi}(s_t, a_t)$: advantage function.
6. $r_t + V^{\pi}(s_{t+1}) - V^{\pi}(s_t)$: TD residual.

The latter formulas use the definitions

$$V^{\pi}(s_t) := \mathbb{E}_{s_{t+1:\infty},\, a_{t:\infty}}\left[ \sum_{l=0}^{\infty} r_{t+l} \right] \qquad Q^{\pi}(s_t, a_t) := \mathbb{E}_{s_{t+1:\infty},\, a_{t+1:\infty}}\left[ \sum_{l=0}^{\infty} r_{t+l} \right] \qquad (2)$$

$$A^{\pi}(s_t, a_t) := Q^{\pi}(s_t, a_t) - V^{\pi}(s_t) \quad \text{(Advantage function)}. \qquad (3)$$
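To make the menu of choices for $\Psi_t$ concrete, the following sketch (not from the paper; a minimal NumPy example with a made-up reward sequence, where a crude constant mean-return baseline stands in for a generic $b(s_t)$) computes choices 2 and 3 above: the rewards following each action and their baselined version. Each $\Psi_t$ would then weight the corresponding score term $\nabla_\theta \log \pi_\theta(a_t \mid s_t)$ in Equation (1).

```python
import numpy as np

def rewards_to_go(rewards):
    """Psi_t = sum_{t' >= t} r_{t'}: the reward following action a_t (choice 2)."""
    out = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running += rewards[t]
        out[t] = running
    return out

# Toy trajectory of rewards; the numbers are placeholders for illustration.
rewards = np.array([0.0, 1.0, 0.0, 2.0])
psi = rewards_to_go(rewards)            # choice 2
psi_baselined = psi - psi.mean()        # choice 3, with a constant baseline standing in for b(s_t)
print(psi, psi_baselined)
```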
1506.02438#6
1506.02438#8
1506.02438
[ "1502.05477" ]
1506.02438#8
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Here, the subscript of $\mathbb{E}$ enumerates the variables being integrated over, where states and actions are sampled sequentially from the dynamics model $P(s_{t+1} \mid s_t, a_t)$ and policy $\pi(a_t \mid s_t)$, respectively. The colon notation $a : b$ refers to the inclusive range $(a, a + 1, \ldots, b)$. These formulas are well known and straightforward to obtain; they follow directly from Proposition 1, which will be stated shortly.

The choice $\Psi_t = A^{\pi}(s_t, a_t)$ yields almost the lowest possible variance, though in practice, the advantage function is not known and must be estimated. This statement can be intuitively justified by the following interpretation of the policy gradient: that a step in the policy gradient direction should increase the probability of better-than-average actions and decrease the probability of worse-than-average actions. The advantage function, by its
1506.02438#7
1506.02438#9
1506.02438
[ "1502.05477" ]
1506.02438#9
High-Dimensional Continuous Control Using Generalized Advantage Estimation
definition $A^{\pi}(s, a) = Q^{\pi}(s, a) - V^{\pi}(s)$, measures whether or not the action is better or worse than the policy's default behavior. Hence, we should choose $\Psi_t$ to be the advantage function $A^{\pi}(s_t, a_t)$, so that the gradient term $\Psi_t \nabla_\theta \log \pi_\theta(a_t \mid s_t)$ points in the direction of increased $\pi_\theta(a_t \mid s_t)$ if and only if $A^{\pi}(s_t, a_t) > 0$.
1506.02438#8
1506.02438#10
1506.02438
[ "1502.05477" ]
1506.02438#10
High-Dimensional Continuous Control Using Generalized Advantage Estimation
See Greensmith et al. (2004) for a more rigorous analysis of the variance of policy gradient estimators and the effect of using a baseline. We will introduce a parameter γ that allows us to reduce variance by downweighting rewards corresponding to delayed effects, at the cost of introducing bias. This parameter corresponds to the discount factor used in discounted formulations of MDPs, but we treat it as a variance reduction parameter in an undiscounted problem; this technique was analyzed theoretically by Marbach & Tsitsiklis (2003); Kakade (2001b); Thomas (2014). The discounted value functions are given by:
1506.02438#9
1506.02438#11
1506.02438
[ "1502.05477" ]
1506.02438#11
High-Dimensional Continuous Control Using Generalized Advantage Estimation
$$V^{\pi,\gamma}(s_t) := \mathbb{E}_{s_{t+1:\infty},\, a_{t:\infty}}\left[ \sum_{l=0}^{\infty} \gamma^l r_{t+l} \right] \qquad Q^{\pi,\gamma}(s_t, a_t) := \mathbb{E}_{s_{t+1:\infty},\, a_{t+1:\infty}}\left[ \sum_{l=0}^{\infty} \gamma^l r_{t+l} \right] \qquad (4)$$

$$A^{\pi,\gamma}(s_t, a_t) := Q^{\pi,\gamma}(s_t, a_t) - V^{\pi,\gamma}(s_t). \qquad (5)$$

The discounted approximation to the policy gradient is defined as follows:

$$g^\gamma := \mathbb{E}_{s_{0:\infty},\, a_{0:\infty}}\left[ \sum_{t=0}^{\infty} A^{\pi,\gamma}(s_t, a_t) \nabla_\theta \log \pi_\theta(a_t \mid s_t) \right]. \qquad (6)$$

The following section discusses how to obtain biased (but not too biased) estimators for $A^{\pi,\gamma}$, giving us noisy estimates of the discounted policy gradient in Equation (6). Before proceeding, we will introduce the notion of a γ-just estimator of the advantage function, which is an estimator that does not introduce bias when we use it in place of $A^{\pi,\gamma}$ (which is not known and must be estimated) in Equation (6) to estimate $g^\gamma$.¹ Consider an advantage estimator $\hat{A}_t(s_{0:\infty}, a_{0:\infty})$, which may in general be a function of the entire trajectory.
1506.02438#10
1506.02438#12
1506.02438
[ "1502.05477" ]
1506.02438#12
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Definition 1. The estimator $\hat{A}_t$ is γ-just if

$$\mathbb{E}_{s_{0:\infty},\, a_{0:\infty}}\left[ \hat{A}_t(s_{0:\infty}, a_{0:\infty}) \nabla_\theta \log \pi_\theta(a_t \mid s_t) \right] = \mathbb{E}_{s_{0:\infty},\, a_{0:\infty}}\left[ A^{\pi,\gamma}(s_t, a_t) \nabla_\theta \log \pi_\theta(a_t \mid s_t) \right]. \qquad (7)$$

It follows immediately that if $\hat{A}_t$ is γ-just for all $t$, then

$$\mathbb{E}_{s_{0:\infty},\, a_{0:\infty}}\left[ \sum_{t=0}^{\infty} \hat{A}_t(s_{0:\infty}, a_{0:\infty}) \nabla_\theta \log \pi_\theta(a_t \mid s_t) \right] = g^\gamma. \qquad (8)$$

One sufficient condition for $\hat{A}_t$ to be γ-just is that $\hat{A}_t$ decomposes as the difference between two functions $Q_t$ and $b_t$, where $Q_t$ can depend on any trajectory variables but gives an unbiased estimator of the γ-discounted Q-function, and $b_t$ is an arbitrary function of the states and actions sampled before $a_t$.
1506.02438#11
1506.02438#13
1506.02438
[ "1502.05477" ]
1506.02438#13
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Proposition 1. Suppose that $\hat{A}_t$ can be written in the form $\hat{A}_t(s_{0:\infty}, a_{0:\infty}) = Q_t(s_{t:\infty}, a_{t:\infty}) - b_t(s_{0:t}, a_{0:t-1})$ such that for all $(s_t, a_t)$, $\mathbb{E}_{s_{t+1:\infty}, a_{t+1:\infty} \mid s_t, a_t}\left[ Q_t(s_{t:\infty}, a_{t:\infty}) \right] = Q^{\pi,\gamma}(s_t, a_t)$. Then $\hat{A}$ is γ-just.

¹ Note that we have already introduced bias by using $A^{\pi,\gamma}$ in place of $A^{\pi}$; here we are concerned with obtaining an unbiased estimate of $g^\gamma$, which is a biased estimate of the policy gradient of the undiscounted MDP.
1506.02438#12
1506.02438#14
1506.02438
[ "1502.05477" ]
1506.02438#14
High-Dimensional Continuous Control Using Generalized Advantage Estimation
The proof is provided in Appendix B. It is easy to verify that the following expressions are γ-just advantage estimators for $\hat{A}_t$:

- $\sum_{l=0}^{\infty} \gamma^l r_{t+l}$
- $A^{\pi,\gamma}(s_t, a_t)$
- $Q^{\pi,\gamma}(s_t, a_t)$
- $r_t + \gamma V^{\pi,\gamma}(s_{t+1}) - V^{\pi,\gamma}(s_t)$.

# 3 ADVANTAGE FUNCTION ESTIMATION

This section will be concerned with producing an accurate estimate $\hat{A}_t$ of the discounted advantage function $A^{\pi,\gamma}(s_t, a_t)$, which will then be used to construct a policy gradient estimator of the following form:

$$\hat{g} = \frac{1}{N} \sum_{n=1}^{N} \sum_{t=0}^{\infty} \hat{A}^n_t \nabla_\theta \log \pi_\theta(a^n_t \mid s^n_t) \qquad (9)$$

where $n$ indexes over a batch of episodes.
1506.02438#13
1506.02438#15
1506.02438
[ "1502.05477" ]
1506.02438#15
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Let $V$ be an approximate value function. Define $\delta^V_t = r_t + \gamma V(s_{t+1}) - V(s_t)$, i.e., the TD residual of $V$ with discount γ (Sutton & Barto, 1998). Note that $\delta^V_t$ can be considered as an estimate of the advantage of the action $a_t$. In fact, if we have the correct value function $V = V^{\pi,\gamma}$, then it is a γ-just advantage estimator, and in fact, an unbiased estimator of $A^{\pi,\gamma}$:

$$\mathbb{E}_{s_{t+1}}\left[ \delta^{V^{\pi,\gamma}}_t \right] = \mathbb{E}_{s_{t+1}}\left[ r_t + \gamma V^{\pi,\gamma}(s_{t+1}) - V^{\pi,\gamma}(s_t) \right] = \mathbb{E}_{s_{t+1}}\left[ Q^{\pi,\gamma}(s_t, a_t) - V^{\pi,\gamma}(s_t) \right] = A^{\pi,\gamma}(s_t, a_t). \qquad (10)$$

However, this estimator is only γ-just for $V = V^{\pi,\gamma}$; otherwise it will yield biased policy gradient estimates.

Next, let us consider taking the sum of $k$ of these δ terms, which we will denote by $\hat{A}^{(k)}_t$:

$$\hat{A}^{(1)}_t := \delta^V_t = -V(s_t) + r_t + \gamma V(s_{t+1}) \qquad (11)$$

$$\hat{A}^{(2)}_t := \delta^V_t + \gamma \delta^V_{t+1} = -V(s_t) + r_t + \gamma r_{t+1} + \gamma^2 V(s_{t+2}) \qquad (12)$$
1506.02438#14
1506.02438#16
1506.02438
[ "1502.05477" ]
1506.02438#16
High-Dimensional Continuous Control Using Generalized Advantage Estimation
$$\hat{A}^{(3)}_t := \delta^V_t + \gamma \delta^V_{t+1} + \gamma^2 \delta^V_{t+2} = -V(s_t) + r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \gamma^3 V(s_{t+3}) \qquad (13)$$

$$\hat{A}^{(k)}_t := \sum_{l=0}^{k-1} \gamma^l \delta^V_{t+l} = -V(s_t) + r_t + \gamma r_{t+1} + \cdots + \gamma^{k-1} r_{t+k-1} + \gamma^k V(s_{t+k}) \qquad (14)$$

These equations result from a telescoping sum, and we see that $\hat{A}^{(k)}_t$ involves a $k$-step estimate of the returns, minus a baseline term $-V(s_t)$. Analogously to the case of $\delta^V_t$, we can consider $\hat{A}^{(k)}_t$ to be an estimator of the advantage function, which is only γ-just when $V = V^{\pi,\gamma}$. However, note that the bias generally becomes smaller as $k \to \infty$, since the term $\gamma^k V(s_{t+k})$ becomes more heavily discounted, and the term $-V(s_t)$ does not affect the bias. Taking $k \to \infty$, we get

$$\hat{A}^{(\infty)}_t = \sum_{l=0}^{\infty} \gamma^l \delta^V_{t+l} = -V(s_t) + \sum_{l=0}^{\infty} \gamma^l r_{t+l}, \qquad (15)$$
1506.02438#15
1506.02438#17
1506.02438
[ "1502.05477" ]
1506.02438#17
High-Dimensional Continuous Control Using Generalized Advantage Estimation
which is simply the empirical returns minus the value function baseline.

The generalized advantage estimator GAE(γ, λ) is defined as the exponentially-weighted average of these $k$-step estimators:

$$\begin{aligned}
\hat{A}^{\mathrm{GAE}(\gamma,\lambda)}_t &:= (1-\lambda)\left( \hat{A}^{(1)}_t + \lambda \hat{A}^{(2)}_t + \lambda^2 \hat{A}^{(3)}_t + \cdots \right) \\
&= (1-\lambda)\left( \delta^V_t + \lambda\left(\delta^V_t + \gamma\delta^V_{t+1}\right) + \lambda^2\left(\delta^V_t + \gamma\delta^V_{t+1} + \gamma^2\delta^V_{t+2}\right) + \cdots \right) \\
&= (1-\lambda)\left( \delta^V_t \left(1 + \lambda + \lambda^2 + \cdots\right) + \gamma\delta^V_{t+1} \left(\lambda + \lambda^2 + \cdots\right) + \gamma^2\delta^V_{t+2} \left(\lambda^2 + \lambda^3 + \cdots\right) + \cdots \right) \\
&= (1-\lambda)\left( \delta^V_t \tfrac{1}{1-\lambda} + \gamma\delta^V_{t+1} \tfrac{\lambda}{1-\lambda} + \gamma^2\delta^V_{t+2} \tfrac{\lambda^2}{1-\lambda} + \cdots \right) \\
&= \sum_{l=0}^{\infty} (\gamma\lambda)^l \delta^V_{t+l} \qquad (16)
\end{aligned}$$
1506.02438#16
1506.02438#18
1506.02438
[ "1502.05477" ]
1506.02438#18
High-Dimensional Continuous Control Using Generalized Advantage Estimation
From Equation (16), we see that the advantage estimator has a remarkably simple formula involving a discounted sum of Bellman residual terms. Section 4 discusses an interpretation of this formula as the returns in an MDP with a modified reward function. The construction we used above is closely analogous to the one used to define TD(λ) (Sutton & Barto, 1998), however TD(λ) is an estimator of the value function, whereas here we are estimating the advantage function.

There are two notable special cases of this formula, obtained by setting λ = 0 and λ = 1.

$$\mathrm{GAE}(\gamma, 0): \quad \hat{A}_t := \delta^V_t = r_t + \gamma V(s_{t+1}) - V(s_t) \qquad (17)$$

$$\mathrm{GAE}(\gamma, 1): \quad \hat{A}_t := \sum_{l=0}^{\infty} \gamma^l \delta^V_{t+l} = \sum_{l=0}^{\infty} \gamma^l r_{t+l} - V(s_t) \qquad (18)$$

GAE(γ, 1) is γ-just regardless of the accuracy of $V$, but it has high variance due to the sum of terms. GAE(γ, 0) is γ-just for $V = V^{\pi,\gamma}$ and otherwise induces bias, but it typically has much lower variance. The generalized advantage estimator for $0 < \lambda < 1$ makes a compromise between bias and variance, controlled by parameter λ.
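Equation (16) can be computed with a single backward pass using the recursion $\hat{A}_t = \delta^V_t + \gamma\lambda \hat{A}_{t+1}$. The NumPy sketch below is an illustration, not the authors' code; it assumes a single finite episode with a bootstrap value supplied for the final state, and the random inputs are placeholders. Setting `lam=0` recovers Equation (17), and `lam=1` recovers Equation (18).

```python
import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.97):
    """Compute GAE(gamma, lambda) advantages for one episode.

    rewards: array of length T containing r_0 .. r_{T-1}.
    values:  array of length T+1 containing V(s_0) .. V(s_T); V(s_T) is the
             bootstrap value (use 0 if s_T is terminal).
    """
    T = len(rewards)
    deltas = rewards + gamma * values[1:] - values[:-1]   # TD residuals delta_t
    advantages = np.zeros(T)
    gae = 0.0
    for t in reversed(range(T)):                          # A_t = delta_t + gamma*lambda*A_{t+1}
        gae = deltas[t] + gamma * lam * gae
        advantages[t] = gae
    returns = advantages + values[:-1]                    # usable as value-regression targets
    return advantages, returns

# Illustrative usage with placeholder data.
rng = np.random.default_rng(0)
rew = rng.normal(size=100)
val = rng.normal(size=101)
adv, ret = gae_advantages(rew, val)
```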
1506.02438#17
1506.02438#19
1506.02438
[ "1502.05477" ]
1506.02438#19
High-Dimensional Continuous Control Using Generalized Advantage Estimation
We've described an advantage estimator with two separate parameters γ and λ, both of which contribute to the bias-variance tradeoff when using an approximate value function. However, they serve different purposes and work best with different ranges of values. γ most importantly determines the scale of the value function $V^{\pi,\gamma}$, which does not depend on λ. Taking γ < 1 introduces bias into the policy gradient estimate, regardless of the value function's accuracy. On the other hand, λ < 1 introduces bias only when the value function is inaccurate. Empirically, we find that the best value of λ is much lower than the best value of γ, likely because λ introduces far less bias than γ for a reasonably accurate value function.

Using the generalized advantage estimator, we can construct a biased estimator of $g^\gamma$, the discounted policy gradient from Equation (6):
1506.02438#18
1506.02438#20
1506.02438
[ "1502.05477" ]
1506.02438#20
High-Dimensional Continuous Control Using Generalized Advantage Estimation
$$g^\gamma \approx \mathbb{E}\left[ \sum_{t=0}^{\infty} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \hat{A}^{\mathrm{GAE}(\gamma,\lambda)}_t \right] = \mathbb{E}\left[ \sum_{t=0}^{\infty} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \sum_{l=0}^{\infty} (\gamma\lambda)^l \delta^V_{t+l} \right], \qquad (19)$$

where equality holds when λ = 1.

# 4 INTERPRETATION AS REWARD SHAPING

In this section, we discuss how one can interpret λ as an extra discount factor applied after performing a reward shaping transformation on the MDP. We also introduce the notion of a response function to help understand the bias introduced by γ and λ.

Reward shaping (Ng et al., 1999) refers to the following transformation of the reward function of an MDP: let Φ :
1506.02438#19
1506.02438#21
1506.02438
[ "1502.05477" ]
1506.02438#21
High-Dimensional Continuous Control Using Generalized Advantage Estimation
$S \to \mathbb{R}$ be an arbitrary scalar-valued function on state space, and define the transformed reward function $\tilde{r}$ by

$$\tilde{r}(s, a, s') = r(s, a, s') + \gamma \Phi(s') - \Phi(s), \qquad (20)$$

which in turn defines a transformed MDP. This transformation leaves the discounted advantage function $A^{\pi,\gamma}$ unchanged for any policy π
1506.02438#20
1506.02438#22
1506.02438
[ "1502.05477" ]
1506.02438#22
High-Dimensional Continuous Control Using Generalized Advantage Estimation
. To see this, consider the discounted sum of rewards of a trajectory starting with state $s_t$:

$$\sum_{l=0}^{\infty} \gamma^l \tilde{r}(s_{t+l}, a_{t+l}, s_{t+l+1}) = \sum_{l=0}^{\infty} \gamma^l r(s_{t+l}, a_{t+l}, s_{t+l+1}) - \Phi(s_t). \qquad (21)$$

Letting $\tilde{Q}^{\pi,\gamma}$, $\tilde{V}^{\pi,\gamma}$, $\tilde{A}^{\pi,\gamma}$ be the value and advantage functions of the transformed MDP, one obtains from the definitions of these quantities that
1506.02438#21
1506.02438#23
1506.02438
[ "1502.05477" ]
1506.02438#23
High-Dimensional Continuous Control Using Generalized Advantage Estimation
$$\tilde{Q}^{\pi,\gamma}(s, a) = Q^{\pi,\gamma}(s, a) - \Phi(s) \qquad (22)$$

$$\tilde{V}^{\pi,\gamma}(s) = V^{\pi,\gamma}(s) - \Phi(s) \qquad (23)$$

$$\tilde{A}^{\pi,\gamma}(s, a) = \left( Q^{\pi,\gamma}(s, a) - \Phi(s) \right) - \left( V^{\pi,\gamma}(s) - \Phi(s) \right) = A^{\pi,\gamma}(s, a). \qquad (24)$$

Note that if Φ happens to be the state-value function $V^{\pi,\gamma}$ from the original MDP, then the transformed MDP has the interesting property that $\tilde{V}^{\pi,\gamma}(s)$ is zero at every state.
1506.02438#22
1506.02438#24
1506.02438
[ "1502.05477" ]
1506.02438#24
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Note that (Ng et al., 1999) showed that the reward shaping transformation leaves the policy gradient and optimal policy unchanged when our objective is to maximize the discounted sum of rewards $\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t, s_{t+1})$. In contrast, this paper is concerned with maximizing the undiscounted sum of rewards, where the discount γ is used as a variance-reduction parameter.

Having reviewed the idea of reward shaping, let us consider how we could use it to get a policy gradient estimate. The most natural approach is to construct policy gradient estimators that use discounted sums of shaped rewards
1506.02438#23
1506.02438#25
1506.02438
[ "1502.05477" ]
1506.02438#25
High-Dimensional Continuous Control Using Generalized Advantage Estimation
$\tilde{r}$. However, Equation (21) shows that we obtain the discounted sum of the original MDP's rewards $r$ minus a baseline term. Next, let's consider using a "steeper" discount γλ, where $0 \le \lambda \le 1$. It's easy to see that the shaped reward $\tilde{r}$ equals the Bellman residual term $\delta^V$, introduced in Section 3, where we set $\Phi = V$. Letting $\Phi = V$, we see that

$$\sum_{l=0}^{\infty} (\gamma\lambda)^l \tilde{r}(s_{t+l}, a_{t+l}, s_{t+l+1}) = \sum_{l=0}^{\infty} (\gamma\lambda)^l \delta^V_{t+l} = \hat{A}^{\mathrm{GAE}(\gamma,\lambda)}_t. \qquad (25)$$
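As a quick numerical sanity check of Equation (25) on truncated finite-horizon data (a sketch, not from the paper; the reward and value arrays are arbitrary placeholders), the γλ-discounted sum of shaped rewards with Φ = V matches the GAE value produced by the backward recursion:

```python
import numpy as np

rng = np.random.default_rng(1)
gamma, lam, T = 0.99, 0.95, 50
r = rng.normal(size=T)            # placeholder rewards r_t for a length-T episode
V = rng.normal(size=T + 1)        # placeholder values V(s_0) .. V(s_T); we take Phi = V

# Shaped rewards (Eq. 20 with Phi = V); these coincide with the TD residuals delta_t.
r_shaped = r + gamma * V[1:] - V[:-1]

# Left side of Eq. (25) at t = 0: the gamma*lambda-discounted sum of shaped rewards.
lhs = sum((gamma * lam) ** l * r_shaped[l] for l in range(T))

# Right side: GAE(gamma, lambda) at t = 0 via the backward recursion
# A_t = delta_t + gamma * lam * A_{t+1}, with A_T = 0 at the episode boundary.
adv, gae = np.zeros(T), 0.0
for t in reversed(range(T)):
    gae = r_shaped[t] + gamma * lam * gae
    adv[t] = gae

assert np.isclose(lhs, adv[0])
```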
1506.02438#24
1506.02438#26
1506.02438
[ "1502.05477" ]
1506.02438#26
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Hence, by considering the γλ-discounted sum of shaped rewards, we exactly obtain the generalized advantage estimators from Section 3. As shown previously, λ = 1 gives an unbiased estimate of $g^\gamma$, whereas λ < 1 gives a biased estimate.

To further analyze the effect of this shaping transformation and parameters γ and λ, it will be useful to introduce the notion of a response function χ, which we define as follows:

$$\chi(l; s_t, a_t) = \mathbb{E}\left[ r_{t+l} \mid s_t, a_t \right] - \mathbb{E}\left[ r_{t+l} \mid s_t \right]. \qquad (26)$$

Note that $A^{\pi,\gamma}(s, a) = \sum_{l=0}^{\infty} \gamma^l \chi(l; s, a)$, hence the response function decomposes the advantage function across timesteps. The response function lets us quantify the temporal credit assignment problem: long range dependencies between actions and rewards correspond to nonzero values of the response function for $l \gg 0$.

Next, let us revisit the discount factor γ and the approximation we are making by using $A^{\pi,\gamma}$ rather than $A^{\pi,1}$.
1506.02438#25
1506.02438#27
1506.02438
[ "1502.05477" ]
1506.02438#27
High-Dimensional Continuous Control Using Generalized Advantage Estimation
The discounted policy gradient estimator from Equation (6) has a sum of terms of the form

$$\nabla_\theta \log \pi_\theta(a_t \mid s_t) A^{\pi,\gamma}(s_t, a_t) = \nabla_\theta \log \pi_\theta(a_t \mid s_t) \sum_{l=0}^{\infty} \gamma^l \chi(l; s_t, a_t). \qquad (27)$$

Using a discount γ < 1 corresponds to dropping the terms with $l \gg 1/(1 - \gamma)$. Thus, the error introduced by this approximation will be small if χ rapidly decays as $l$ increases, i.e., if the effect of an action on rewards is "forgotten" after $\approx 1/(1 - \gamma)$ timesteps.

If the reward function $\tilde{r}$ were obtained using $\Phi = V^{\pi,\gamma}$, we would have $\mathbb{E}\left[ \tilde{r}_{t+l} \mid s_t, a_t \right] = \mathbb{E}\left[ \tilde{r}_{t+l} \mid s_t \right] = 0$ for $l > 0$, i.e., the response function would only be nonzero at $l = 0$. Therefore, this shaping transformation would turn a temporally extended response into an immediate response. Given that $V^{\pi,\gamma}$ completely reduces the temporal spread of the response function, we can hope that a good approximation $V \approx V^{\pi,\gamma}$ partially reduces it. This observation suggests an interpretation of Equation (16): reshape the rewards using $V$ to shrink the temporal extent of the response function, and then introduce a "steeper" discount γλ to cut off the noise arising from long delays, i.e., ignore terms $\nabla_\theta \log \pi_\theta(a_t \mid s_t) \delta^V_{t+l}$, where $l \gg 1/(1 - \gamma\lambda)$.
1506.02438#26
1506.02438#28
1506.02438
[ "1502.05477" ]
1506.02438#28
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Published as a conference paper at ICLR 2016 # 5 VALUE FUNCTION ESTIMATION A variety of different methods can be used to estimate the value function (see, e.g., Bertsekas (2012)). When using a nonlinear function approximator to represent the value function, the sim- plest approach is to solve a nonlinear regression problem: N minimize So lÂ¥s(sn) -V,ll, (28) n=1 n=1 where Vv, = an Â¥! rz41 is the discounted sum of rewards, and n indexes over all timesteps in a batch of trajectories. This is sometimes called the Monte Carlo or TD(1) approach for estimating the value function (Sutton & Barto, 1998). For the experiments in this work, we used a trust region method to optimize the value function in each iteration of a batch optimization procedure. The trust region helps us to avoid overfitting to the most recent batch of data. To formulate the trust region problem, we first compute ¢? = Â¥ lene (Sn) â Vall?, where dog is the parameter vector before optimization. Then we solve the following constrained optimization problem: N minimize So Vo(sn) â Vall? @ n=l N . 1 Vo(sn) â Vo, (Sn) |? subject to > 7 <e. (29) Na 20" This constraint is equivalent to constraining the average KL divergence between the previous value function and the new value function to be smaller than ¢, where the value function is taken to pa- rameterize a conditional Gaussian distribution with mean V;,(s) and variance 7. We compute an approximate solution to the trust region problem using the conjugate gradient algo- rithm (Wright & Nocedal, 1999).
1506.02438#27
1506.02438#29
1506.02438
[ "1502.05477" ]
1506.02438#29
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Specifically, we are solving the quadratic program

$$\underset{\phi}{\text{minimize}} \quad g^\top (\phi - \phi_{\text{old}})$$
$$\text{subject to} \quad \tfrac{1}{2} (\phi - \phi_{\text{old}})^\top H (\phi - \phi_{\text{old}}) \le \epsilon, \qquad (30)$$

where $g$ is the gradient of the objective, and $H = \frac{1}{N} \sum_{n} j_n j_n^\top$, where $j_n = \nabla_\phi V_\phi(s_n)$. Note that $H$ is the "Gauss-Newton" approximation of the Hessian of the objective, and it is (up to a $\sigma^2$ factor) the Fisher information matrix when interpreting the value function as a conditional probability distribution. Using matrix-vector products $v \mapsto Hv$ to implement the conjugate gradient algorithm, we compute a step direction $s \approx -H^{-1} g$. Then we rescale $s \to \alpha s$ such that $\tfrac{1}{2} (\alpha s)^\top H (\alpha s) = \epsilon$ and take $\phi = \phi_{\text{old}} + \alpha s$. This procedure is analogous to the procedure we use for updating the policy, which is described further in Section 6 and based on Schulman et al. (2015).
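The step computation described here needs only Hessian-vector products $v \mapsto Hv$, never $H$ itself. The sketch below is an illustration of that generic recipe (not the authors' code): conjugate gradient against a matvec callback, followed by the rescaling that puts the step on the trust-region boundary; the random Jacobian rows and gradient are placeholders.

```python
import numpy as np

def conjugate_gradient(matvec, b, iters=10, tol=1e-10):
    """Approximately solve H x = b using only matrix-vector products with H."""
    x = np.zeros_like(b)
    r = b.copy()                       # residual b - H x, with x = 0 initially
    p = r.copy()
    r_dot = r @ r
    for _ in range(iters):
        Hp = matvec(p)
        alpha = r_dot / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        new_r_dot = r @ r
        if new_r_dot < tol:
            break
        p = r + (new_r_dot / r_dot) * p
        r_dot = new_r_dot
    return x

def trust_region_step(matvec, grad, eps):
    """Step s ~ -H^{-1} g, rescaled so that 0.5 * s^T H s = eps."""
    s = conjugate_gradient(matvec, -grad)
    sHs = s @ matvec(s)
    alpha = np.sqrt(2.0 * eps / (sHs + 1e-12))
    return alpha * s

# Toy usage with a Gauss-Newton H = (1/N) sum_n j_n j_n^T built from made-up Jacobian rows.
rng = np.random.default_rng(2)
J = rng.normal(size=(256, 20))         # stand-in for rows j_n = grad_phi V_phi(s_n)

def matvec(v):
    return J.T @ (J @ v) / J.shape[0]

g = rng.normal(size=20)                # stand-in for the objective gradient
step = trust_region_step(matvec, g, eps=0.01)
```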
1506.02438#28
1506.02438#30
1506.02438
[ "1502.05477" ]
1506.02438#30
High-Dimensional Continuous Control Using Generalized Advantage Estimation
# 6 EXPERIMENTS

We designed a set of experiments to investigate the following questions:

1. What is the empirical effect of varying $\lambda \in [0, 1]$ and $\gamma \in [0, 1]$ when optimizing episodic total reward using generalized advantage estimation?

2. Can generalized advantage estimation, along with trust region algorithms for policy and value function optimization, be used to optimize large neural network policies for challenging control problems?

² Another natural choice is to compute target values with an estimator based on the TD(λ) backup (Bertsekas,
1506.02438#29
1506.02438#31
1506.02438
[ "1502.05477" ]
1506.02438#31
High-Dimensional Continuous Control Using Generalized Advantage Estimation
2012): $V_{\phi_{\text{old}}}(s_n) + \sum_{l=0}^{\infty} (\gamma\lambda)^l \delta_{t+l}$. While we experimented with this choice, we did not notice a difference in performance from the estimator in Equation (28).

# 6.1 POLICY OPTIMIZATION ALGORITHM

While generalized advantage estimation can be used along with a variety of different policy gradient methods, for these experiments, we performed the policy updates using trust region policy optimization (TRPO) (Schulman et al., 2015). TRPO updates the policy by approximately solving the following constrained optimization problem each iteration:
1506.02438#30
1506.02438#32
1506.02438
[ "1502.05477" ]
1506.02438#32
High-Dimensional Continuous Control Using Generalized Advantage Estimation
$$\underset{\theta}{\text{minimize}} \quad L_{\theta_{\text{old}}}(\theta)$$
$$\text{subject to} \quad \overline{D}^{\theta_{\text{old}}}_{\mathrm{KL}}(\pi_{\theta_{\text{old}}}, \pi_\theta) \le \epsilon$$
$$\text{where} \quad L_{\theta_{\text{old}}}(\theta) = -\frac{1}{N} \sum_{n=1}^{N} \frac{\pi_\theta(a_n \mid s_n)}{\pi_{\theta_{\text{old}}}(a_n \mid s_n)} \hat{A}_n$$
$$\overline{D}^{\theta_{\text{old}}}_{\mathrm{KL}}(\pi_{\theta_{\text{old}}}, \pi_\theta) = \frac{1}{N} \sum_{n=1}^{N} D_{\mathrm{KL}}\big( \pi_{\theta_{\text{old}}}(\cdot \mid s_n) \,\|\, \pi_\theta(\cdot \mid s_n) \big) \qquad (31)$$

As described in (Schulman et al., 2015), we approximately solve this problem by linearizing the objective and quadraticizing the constraint, which yields a step in the direction $\theta - \theta_{\text{old}} \propto F^{-1} g$,
1506.02438#31
1506.02438#33
1506.02438
[ "1502.05477" ]
1506.02438#33
High-Dimensional Continuous Control Using Generalized Advantage Estimation
where $F$ is the average Fisher information matrix, and $g$ is a policy gradient estimate. This policy update yields the same step direction as the natural policy gradient (Kakade, 2001a) and natural actor-critic (Peters & Schaal, 2008), however it uses a different stepsize determination scheme and numerical procedure for computing the step.

Since prior work (Schulman et al., 2015) compared TRPO to a variety of different policy optimization algorithms, we will not repeat these comparisons; rather, we will focus on varying the γ, λ parameters of the policy gradient estimator while keeping the underlying algorithm
1506.02438#32
1506.02438#34
1506.02438
[ "1502.05477" ]
1506.02438#34
High-Dimensional Continuous Control Using Generalized Advantage Estimation
fixed. For completeness, the whole algorithm for iteratively updating policy and value function is given below:

Initialize policy parameter $\theta_0$ and value function parameter $\phi_0$.
for $i = 0, 1, 2, \ldots$ do
    Simulate current policy $\pi_{\theta_i}$ until $N$ timesteps are obtained.
    Compute $\delta^V_t$ at all timesteps $t \in \{1, 2, \ldots, N\}$, using $V = V_{\phi_i}$.
    Compute $\hat{A}_t = \sum_{l=0}^{\infty} (\gamma\lambda)^l \delta^V_{t+l}$ at all timesteps.
    Compute $\theta_{i+1}$ with TRPO update, Equation (31).
    Compute $\phi_{i+1}$ with Equation (30).
end for

Note that the policy update $\theta_i \to \theta_{i+1}$ is performed using the value function $V_{\phi_i}$ for advantage estimation, not $V_{\phi_{i+1}}$. Additional bias would have been introduced if we updated the value function first. To see this, consider the extreme case where we overfit the value function, and the Bellman residual $r_t + \gamma V(s_{t+1}) - V(s_t)$ becomes zero at all timesteps: the policy gradient estimate would be zero.
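The pseudocode above can be written as a compact outer loop in Python. This is an illustrative skeleton only: `simulate`, `trpo_update`, and `fit_value_function_trust_region` are hypothetical placeholders for the procedures of Equations (31) and (30), and `gae_advantages` is the backward-recursion helper sketched after Equation (18).

```python
def train(env, policy, value_fn, iterations, gamma=0.995, lam=0.97, timesteps_per_batch=50_000):
    """Iteratively update policy and value function, mirroring the pseudocode above."""
    for i in range(iterations):
        # Simulate the current policy until N timesteps are obtained (hypothetical rollout helper).
        batch = simulate(env, policy, timesteps_per_batch)

        # Compute TD residuals and GAE advantages using the *current* value function V_phi_i.
        advantages, returns = [], []
        for episode in batch:
            # episode["states"] is assumed to include the final state, so predictions have length T+1.
            values = value_fn.predict(episode["states"])
            adv, ret = gae_advantages(episode["rewards"], values, gamma=gamma, lam=lam)
            advantages.append(adv)
            returns.append(ret)

        # Policy update (Equation 31) uses advantages computed from V_phi_i ...
        policy = trpo_update(policy, batch, advantages)

        # ... and only afterwards is the value function refit (Equation 30).
        value_fn = fit_value_function_trust_region(value_fn, batch, returns)
    return policy, value_fn
```

The ordering mirrors the note above: advantages come from $V_{\phi_i}$, and the value function is refit only after the policy step.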
1506.02438#33
1506.02438#35
1506.02438
[ "1502.05477" ]
1506.02438#35
High-Dimensional Continuous Control Using Generalized Advantage Estimation
# 6.2 EXPERIMENTAL SETUP

We evaluated our approach on the classic cart-pole balancing problem, as well as several challenging 3D locomotion tasks: (1) bipedal locomotion; (2) quadrupedal locomotion; (3) dynamically standing up, for the biped, which starts off lying on its back. The models are shown in Figure 1.

# 6.2.1 ARCHITECTURE

We used the same neural network architecture for all of the 3D robot tasks, which was a feedforward network with three hidden layers, with 100, 50 and 25 tanh units respectively. The same architecture was used for the policy and value function.
1506.02438#34
1506.02438#36
1506.02438
[ "1502.05477" ]
1506.02438#36
High-Dimensional Continuous Control Using Generalized Advantage Estimation
The final output layer had linear activation. The value function estimator used the same architecture, but with only one scalar output. For the simpler cart-pole task, we used a linear policy, and a neural network with one 20-unit hidden layer as the value function.

Figure 1: Top figures: robot models used for 3D locomotion. Bottom figures: a sequence of frames from the learned gaits. Videos are available at https://sites.google.com/site/gaepapersupp.
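For concreteness, here is one way to write the architecture just described in PyTorch (an illustration, not the authors' code; the 33-dimensional input and 10-dimensional output are taken from the humanoid description in Section 6.2.2 below).

```python
import torch.nn as nn

def make_mlp(input_dim, output_dim):
    """Feedforward network with 100-50-25 tanh hidden layers and a linear output layer."""
    return nn.Sequential(
        nn.Linear(input_dim, 100), nn.Tanh(),
        nn.Linear(100, 50), nn.Tanh(),
        nn.Linear(50, 25), nn.Tanh(),
        nn.Linear(25, output_dim),        # linear activation on the final layer
    )

# Example: a policy head for the humanoid (33 state dims, 10 actuators)
# and a scalar-output value function sharing the same architecture.
policy_net = make_mlp(33, 10)
value_net = make_mlp(33, 1)
```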
1506.02438#35
1506.02438#37
1506.02438
[ "1502.05477" ]
1506.02438#37
High-Dimensional Continuous Control Using Generalized Advantage Estimation
# 6.2.2 TASK DETAILS

For the cart-pole balancing task, we collected 20 trajectories per batch, with a maximum length of 1000 timesteps, using the physical parameters from Barto et al. (1983). The simulated robot tasks were simulated using the MuJoCo physics engine (Todorov et al., 2012). The humanoid model has 33 state dimensions and 10 actuated degrees of freedom, while the quadruped model has 29 state dimensions and 8 actuated degrees of freedom. The initial state for these tasks consisted of a uniform distribution centered on a reference configuration. We used 50000 timesteps per batch for bipedal locomotion, and 200000 timesteps per batch for quadrupedal locomotion and bipedal standing. Each episode was terminated after 2000 timesteps if the robot had not reached a terminal state beforehand. The timestep was 0.01 seconds.
1506.02438#36
1506.02438#38
1506.02438
[ "1502.05477" ]
1506.02438#38
High-Dimensional Continuous Control Using Generalized Advantage Estimation
The reward functions are provided in the table below.

| Task | Reward |
| --- | --- |
| 3D biped locomotion | $v_{\mathrm{fwd}} - 10^{-5}\|u\|^2 - 10^{-5}\|f_{\mathrm{impact}}\|^2 + 0.2$ |
| Quadruped locomotion | $v_{\mathrm{fwd}} - 10^{-6}\|u\|^2 - 10^{-3}\|f_{\mathrm{impact}}\|^2 + 0.05$ |
| Biped getting up | $-(h_{\mathrm{head}} - 1.5)^2 - 10^{-5}\|u\|^2$ |

Here, $v_{\mathrm{fwd}}$ := forward velocity, $u$ := vector of joint torques, $f_{\mathrm{impact}}$ := impact forces, $h_{\mathrm{head}}$ := height of the head. In the locomotion tasks, the episode is terminated if the center of mass of the actor falls below a predefined height: 0.8 m for the biped, and 0.2 m for the quadruped. The constant offset in the reward function encourages longer episodes; otherwise the quadratic reward terms might lead to a policy that ends the episodes as quickly as possible.
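As a sketch of how the tabulated rewards translate into code (illustrative only; the coefficients follow the table as reconstructed above, and the inputs are assumed to be quantities already extracted from the simulator):

```python
import numpy as np

def biped_locomotion_reward(v_fwd, joint_torques, impact_forces):
    """3D biped locomotion reward per the table above: forward progress,
    quadratic penalties on torques and impact forces, plus a constant alive bonus."""
    u = np.asarray(joint_torques)
    f = np.asarray(impact_forces)
    return v_fwd - 1e-5 * np.sum(u ** 2) - 1e-5 * np.sum(f ** 2) + 0.2

def biped_getting_up_reward(h_head, joint_torques):
    """Biped stand-up reward: drive the head height toward 1.5 m with a torque penalty."""
    u = np.asarray(joint_torques)
    return -(h_head - 1.5) ** 2 - 1e-5 * np.sum(u ** 2)
```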
1506.02438#37
1506.02438#39
1506.02438
[ "1502.05477" ]
1506.02438#39
High-Dimensional Continuous Control Using Generalized Advantage Estimation
# 6.3 EXPERIMENTAL RESULTS

All results are presented in terms of the cost, which is defined as negative reward and is minimized. Videos of the learned policies are available at https://sites.google.com/site/gaepapersupp. In plots, "No VF" means that we used a time-dependent baseline that did not depend on the state, rather than an estimate of the state value function. The time-dependent baseline was computed by averaging the return at each timestep over the trajectories in the batch.
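The "No VF" time-dependent baseline is straightforward to compute; this NumPy sketch (not from the paper; it assumes, for brevity, that all trajectories in the batch have equal length) averages the return at each timestep across the batch and subtracts it.

```python
import numpy as np

def time_dependent_baseline(returns_batch):
    """Baseline b(t): the return at timestep t averaged over the trajectories in the batch.

    returns_batch: array of shape (num_trajectories, T) whose [n, t] entry is the
    return (sum of future rewards) from timestep t of trajectory n.
    """
    return returns_batch.mean(axis=0)            # shape (T,), one value per timestep

# Usage: subtract the baseline from each trajectory's returns (choice 3 in Section 2).
rng = np.random.default_rng(3)
returns_batch = rng.normal(size=(20, 1000))      # placeholder batch, e.g. cart-pole settings
advantages = returns_batch - time_dependent_baseline(returns_batch)
```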
1506.02438#38
1506.02438#40
1506.02438
[ "1502.05477" ]
1506.02438#40
High-Dimensional Continuous Control Using Generalized Advantage Estimation
# 6.3.1 CART-POLE

The results are averaged across 21 experiments with different random seeds. Results are shown in Figure 2, and indicate that the best results are obtained at intermediate values of the parameters: $\gamma \in [0.96, 0.99]$ and $\lambda \in [0.92, 0.99]$.

[Figure 2 panels: cart-pole learning curves (at γ = 0.99) and cart-pole performance after 20 iterations as γ and λ are varied; cost vs. number of policy iterations.]
1506.02438#39
1506.02438#41
1506.02438
[ "1502.05477" ]
1506.02438#41
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Figure 2: Left: learning curves for cart-pole task, using generalized advantage estimation with varying values of λ at γ = 0.99. The fastest policy improvement is obtained by intermediate values of λ in the range [0.92, 0.98]. Right: performance after 20 iterations of policy optimization, as γ and λ are varied. White means higher reward. The best results are obtained at intermediate values of both.

[Figure 3 panels: 3D biped and 3D quadruped learning curves (cost vs. number of policy iterations) for several λ settings and the no-value-function baseline.]

Figure 3: Left: Learning curves for 3D bipedal locomotion, averaged across nine runs of the algorithm. Right: learning curves for 3D quadrupedal locomotion, averaged across five runs.

# 6.3.2 3D BIPEDAL LOCOMOTION

Each trial took about 2 hours to run on a 16-core machine, where the simulation rollouts were parallelized, as were the function, gradient, and matrix-vector-product evaluations used when optimizing the policy and value function. Here, the results are averaged across 9 trials with different random seeds. The best performance is again obtained using intermediate values of $\gamma \in [0.99, 0.995]$, $\lambda \in [0.96, 0.99]$.
1506.02438#40
1506.02438#42
1506.02438
[ "1502.05477" ]
1506.02438#42
High-Dimensional Continuous Control Using Generalized Advantage Estimation
The result after 1000 iterations is a fast, smooth, and stable gait that is effectively completely stable. We can compute how much "real time" was used for this learning process: 0.01 seconds/timestep × 50000 timesteps/batch × 1000 batches / (3600 · 24 seconds/day) = 5.8 days. Hence, it is plausible that this algorithm could be run on a real robot, or multiple real robots learning in parallel, if there were a way to reset the state of the robot and ensure that it doesn't
1506.02438#41
1506.02438#43
1506.02438
[ "1502.05477" ]
1506.02438#43
High-Dimensional Continuous Control Using Generalized Advantage Estimation
damage itself.

# 6.3.3 OTHER 3D ROBOT TASKS

The other two motor behaviors considered are quadrupedal locomotion and getting up off the ground for the 3D biped. Again, we performed 5 trials per experimental condition, with different random seeds (and initializations). The experiments took about 4 hours per trial on a 32-core machine. We performed a more limited comparison on these domains (due to the substantial computational resources required to run these experiments), fixing γ = 0.995 but varying λ = {0, 0.96}, as well as an experimental condition with no value function. For quadrupedal locomotion, the best results are obtained using a value function with λ = 0.96, as in Section 6.3.2. For 3D standing, the value function always helped, but the results are roughly the same for λ = 0.96 and λ = 1.
1506.02438#42
1506.02438#44
1506.02438
[ "1502.05477" ]
1506.02438#44
High-Dimensional Continuous Control Using Generalized Advantage Estimation
[Figure 4 panels: quadrupedal walking and 3D standing up learning curves (cost vs. number of policy iterations), plus frames from the learned stand-up behavior.]

Figure 4: (a) Learning curve from quadrupedal walking, (b) learning curve for 3D standing up, (c) clips from 3D standing up.

# 7 DISCUSSION
1506.02438#43
1506.02438#45
1506.02438
[ "1502.05477" ]
1506.02438#45
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods provide a way to reduce reinforcement learning to stochastic gradient descent, by providing unbiased gradient estimates. However, so far their success at solving difficult control problems has been limited, largely due to their high sample complexity. We have argued that the key to variance reduction is to obtain good estimates of the advantage function.

We have provided an intuitive but informal analysis of the problem of advantage function estimation, and justified the generalized advantage estimator, which has two parameters γ, λ which adjust the bias-variance tradeoff. We described how to combine this idea with trust region policy optimization and a trust region algorithm that optimizes a value function, both represented by neural networks. Combining these techniques, we are able to learn to solve difficult control tasks that have previously been out of reach for generic reinforcement learning methods.

Our main experimental validation of generalized advantage estimation is in the domain of simulated robotic locomotion. As shown in our experiments, choosing an appropriate intermediate value of λ in the range [0.9, 0.99] usually results in the best performance. A possible topic for future work is how to adjust the estimator parameters γ, λ in an adaptive or automatic way.

One question that merits future investigation is the relationship between value function estimation error and policy gradient estimation error. If this relationship were known, we could choose an error metric for value function fitting that is well-matched to the quantity of interest, which is typically the accuracy of the policy gradient estimation. Some candidates for such an error metric might include the Bellman error or projected Bellman error, as described in Bhatnagar et al. (2009).

Another enticing possibility is to use a shared function approximation architecture for the policy and the value function, while optimizing the policy using generalized advantage estimation. While formulating this problem in a way that is suitable for numerical optimization and provides convergence guarantees remains an open question, such an approach could allow the value function and policy representations to share useful features of the input, resulting in even faster learning.

In concurrent work, researchers have been developing policy gradient methods that involve differentiation with respect to the continuous-valued action (Lillicrap et al., 2015; Heess et al., 2015).
1506.02438#44
1506.02438#46
1506.02438
[ "1502.05477" ]
1506.02438#46
High-Dimensional Continuous Control Using Generalized Advantage Estimation
While we found empirically that the one-step return (λ = 0) leads to excessive bias and poor performance, these papers show that such methods can work when tuned appropriately. However, note that those papers consider control problems with substantially lower-dimensional state and action spaces than the ones considered here. A comparison between both classes of approach would be useful for future work.

# ACKNOWLEDGEMENTS

We thank Emo Todorov for providing the simulator as well as insightful discussions, and we thank Greg Wayne, Yuval Tassa, Dave Silver, Carlos Florensa Campo, and Greg Brockman for insightful discussions. This research was funded in part by the Office of Naval Research through a Young
1506.02438#45
1506.02438#47
1506.02438
[ "1502.05477" ]
1506.02438#47
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Investigator Award and under grant number N00014-11-1-0688, DARPA through a Young Faculty Award, by the Army Research Office through the MAST program.

# A FREQUENTLY ASKED QUESTIONS

# A.1 WHAT'S THE RELATIONSHIP WITH COMPATIBLE FEATURES?

Compatible features are often mentioned in relation to policy gradient algorithms that make use of a value function, and the idea was proposed in the paper On Actor-Critic Methods by Konda & Tsitsiklis (2003). These authors pointed out that due to the limited representation power of the policy, the policy gradient only depends on a certain subspace of the space of advantage functions. This subspace is spanned by the compatible features
1506.02438#46
1506.02438#48
1506.02438
[ "1502.05477" ]
1506.02438#48
High-Dimensional Continuous Control Using Generalized Advantage Estimation
$\nabla_{\theta_i} \log \pi_\theta(a_t \mid s_t)$, where $i \in \{1, 2, \ldots, \dim \theta\}$. This theory of compatible features provides no guidance on how to exploit the temporal structure of the problem to obtain better estimates of the advantage function, making it mostly orthogonal to the ideas in this paper.

The idea of compatible features motivates an elegant method for computing the natural policy gradient (Kakade, 2001a; Peters & Schaal, 2008). Given an empirical estimate of the advantage function $\hat{A}_t$ at each timestep, we can project it onto the subspace of compatible features by solving the following least squares problem:

$$\underset{r}{\text{minimize}} \sum_t \| r \cdot \nabla_\theta \log \pi_\theta(a_t \mid s_t) - \hat{A}_t \|^2. \qquad (32)$$

If $\hat{A}$
1506.02438#47
1506.02438#49
1506.02438
[ "1502.05477" ]
1506.02438#49
High-Dimensional Continuous Control Using Generalized Advantage Estimation
is γ-just, the least squares solution is the natural policy gradient (Kakade, 2001a). Note that any estimator of the advantage function can be substituted into this formula, including the ones we derive in this paper. For our experiments, we also compute natural policy gradient steps, but we use the more computationally efficient numerical procedure from Schulman et al. (2015), as discussed in Section 6.

# A.2 WHY DON'T YOU JUST USE A Q-FUNCTION?

Previous actor critic methods, e.g. in Konda & Tsitsiklis (2003), use a Q-function to obtain potentially low-variance policy gradient estimates. Recent papers, including Heess et al. (2015); Lillicrap et al. (2015), have shown that a neural network Q-function approximator can be used effectively in a policy gradient method. However, there are several advantages to using a state-value function in the manner of this paper. First, the state-value function has a lower-dimensional input and is thus easier to learn than a state-action value function. Second, the method of this paper allows us to smoothly interpolate between the high-bias estimator (λ = 0) and the low-bias estimator (λ = 1). On the other hand, using a parameterized Q-function only allows us to use a high-bias estimator. We have found that the bias is prohibitively large when using a one-step estimate of the returns, i.e., the λ = 0 estimator, $\hat{A}_t = \delta^V_t = r_t + \gamma V(s_{t+1}) - V(s_t)$. We expect that similar difficulty would be encountered when using an advantage estimator involving a parameterized Q-function, $\hat{A}_t = Q(s, a) - V(s)$. There is an interesting space of possible algorithms that would use a parameterized Q-function and attempt to reduce bias, however, an exploration of these possibilities is beyond the scope of this work.

# B PROOFS

Proof of Proposition 1: First we can split the expectation into terms involving $Q$ and $b$,

$$\mathbb{E}_{s_{0:\infty}, a_{0:\infty}}\left[ \nabla_\theta \log \pi_\theta(a_t \mid s_t) \left( Q_t(s_{0:\infty}, a_{0:\infty}) - b_t(s_{0:t}, a_{0:t-1}) \right) \right]$$
$$= \mathbb{E}_{s_{0:\infty}, a_{0:\infty}}\left[ \nabla_\theta \log \pi_\theta(a_t \mid s_t) Q_t(s_{0:\infty}, a_{0:\infty}) \right] - \mathbb{E}_{s_{0:\infty}, a_{0:\infty}}\left[ \nabla_\theta \log \pi_\theta(a_t \mid s_t) b_t(s_{0:t}, a_{0:t-1}) \right] \qquad (33)$$
1506.02438#48
1506.02438#50
1506.02438
[ "1502.05477" ]
1506.02438#50
High-Dimensional Continuous Control Using Generalized Advantage Estimation
# B PROOFS

Proof of Proposition 1: First we can split the expectation into terms involving Q and b,

E_{s_{0:∞}, a_{0:∞}} [∇_θ log π_θ(a_t | s_t)(Q_t(s_{0:∞}, a_{0:∞}) − b_t(s_{0:t}, a_{0:t−1}))]
= E_{s_{0:∞}, a_{0:∞}} [∇_θ log π_θ(a_t | s_t) Q_t(s_{0:∞}, a_{0:∞})] − E_{s_{0:∞}, a_{0:∞}} [∇_θ log π_θ(a_t | s_t) b_t(s_{0:t}, a_{0:t−1})].   (33)

We'll consider the terms with Q and b in turn.

E_{s_{0:∞}, a_{0:∞}} [∇_θ log π_θ(a_t | s_t) Q_t(s_{0:∞}, a_{0:∞})]
= E_{s_{0:t}, a_{0:t}} [ E_{s_{t+1:∞}, a_{t+1:∞}} [∇_θ log π_θ(a_t | s_t) Q_t(s_{0:∞}, a_{0:∞})] ]
= E_{s_{0:t}, a_{0:t}} [ ∇_θ log π_θ(a_t | s_t) E_{s_{t+1:∞}, a_{t+1:∞}} [Q_t(s_{0:∞}, a_{0:∞})] ]
= E_{s_{0:t}, a_{0:t}} [ ∇_θ log π_θ(a_t | s_t) A^π(s_t, a_t) ].

Next,

E_{s_{0:∞}, a_{0:∞}} [∇_θ log π_θ(a_t | s_t) b_t(s_{0:t}, a_{0:t−1})]
= E_{s_{0:t}, a_{0:t−1}} [ E_{s_{t+1:∞}, a_{t:∞}} [∇_θ log π_θ(a_t | s_t) b_t(s_{0:t}, a_{0:t−1})] ]
= E_{s_{0:t}, a_{0:t−1}} [ E_{s_{t+1:∞}, a_{t:∞}} [∇_θ log π_θ(a_t | s_t)] b_t(s_{0:t}, a_{0:t−1}) ]
= E_{s_{0:t}, a_{0:t−1}} [ 0 · b_t(s_{0:t}, a_{0:t−1}) ]
= 0.

# REFERENCES

Barto, Andrew G, Sutton, Richard S, and Anderson, Charles W.
1506.02438#49
1506.02438#51
1506.02438
[ "1502.05477" ]
1506.02438#51
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics, (5):834–846, 1983.
Baxter, Jonathan and Bartlett, Peter L. Reinforcement learning in POMDPs via direct gradient ascent. In ICML, pp. 41–48, 2000.
Bertsekas, Dimitri P. Dynamic programming and optimal control, volume 2. Athena Scientific, 2012.
Bhatnagar, Shalabh, Precup, Doina, Silver, David, Sutton, Richard S, Maei, Hamid R, and Szepesvári, Csaba.
1506.02438#50
1506.02438#52
1506.02438
[ "1502.05477" ]
1506.02438#52
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Convergent temporal-difference learning with arbitrary smooth function approximation. In Advances in Neural Information Processing Systems, pp. 1204–1212, 2009.
Greensmith, Evan, Bartlett, Peter L, and Baxter, Jonathan. Variance reduction techniques for gradient estimates in reinforcement learning. The Journal of Machine Learning Research, 5:1471–1530, 2004.
Hafner, Roland and Riedmiller, Martin. Reinforcement learning in feedback control. Machine Learning, 84(1-2):137–169, 2011.
1506.02438#51
1506.02438#53
1506.02438
[ "1502.05477" ]
1506.02438#53
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Heess, Nicolas, Wayne, Greg, Silver, David, Lillicrap, Timothy, Tassa, Yuval, and Erez, Tom. Learning continuous control policies by stochastic value gradients. arXiv preprint arXiv:1510.09142, 2015.
Hull, Clark. Principles of Behavior. 1943.
Kakade, Sham. A natural policy gradient. In NIPS, volume 14, pp. 1531–1538, 2001a.
Kakade, Sham.
1506.02438#52
1506.02438#54
1506.02438
[ "1502.05477" ]
1506.02438#54
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Optimizing average reward using discounted rewards. In Computational Learning Theory, pp. 605–615. Springer, 2001b.
Kimura, Hajime and Kobayashi, Shigenobu. An analysis of actor/critic algorithms using eligibility traces: Reinforcement learning with imperfect value function. In ICML, pp. 278–286, 1998.
Konda, Vijay R and Tsitsiklis, John N. On actor-critic algorithms. SIAM Journal on Control and Optimization, 42(4):1143–1166, 2003.
Lillicrap, Timothy P, Hunt, Jonathan J, Pritzel, Alexander, Heess, Nicolas, Erez, Tom, Tassa, Yuval, Silver, David, and Wierstra, Daan.
1506.02438#53
1506.02438#55
1506.02438
[ "1502.05477" ]
1506.02438#55
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
Marbach, Peter and Tsitsiklis, John N. Approximate gradient methods in policy-space optimization of Markov reward processes. Discrete Event Dynamic Systems, 13(1-2):111–148, 2003.
Minsky, Marvin. Steps toward artificial intelligence. Proceedings of the IRE, 49(1):8–30, 1961.
1506.02438#54
1506.02438#56
1506.02438
[ "1502.05477" ]
1506.02438#56
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Ng, Andrew Y, Harada, Daishi, and Russell, Stuart. Policy invariance under reward transformations: Theory and application to reward shaping. In ICML, volume 99, pp. 278–287, 1999.
Peters, Jan and Schaal, Stefan. Natural actor-critic. Neurocomputing, 71(7):1180–1190, 2008.
Schulman, John, Levine, Sergey, Moritz, Philipp, Jordan, Michael I, and Abbeel, Pieter. Trust region policy optimization. arXiv preprint arXiv:1502.05477, 2015.
Sutton, Richard S and Barto, Andrew G. Introduction to Reinforcement Learning. MIT Press, 1998.
Sutton, Richard S, McAllester, David A, Singh, Satinder P, and Mansour, Yishay.
1506.02438#55
1506.02438#57
1506.02438
[ "1502.05477" ]
1506.02438#57
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pp. 1057–1063. Citeseer, 1999.
Thomas, Philip. Bias in natural actor-critic algorithms. In Proceedings of The 31st International Conference on Machine Learning, pp. 441–448, 2014.
Todorov, Emanuel, Erez, Tom, and Tassa, Yuval. MuJoCo: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pp. 5026–5033. IEEE, 2012.
1506.02438#56
1506.02438#58
1506.02438
[ "1502.05477" ]
1506.02438#58
High-Dimensional Continuous Control Using Generalized Advantage Estimation
Wawrzyński, Paweł. Real-time reinforcement learning by sequential actor–critics and experience replay. Neural Networks, 22(10):1484–1497, 2009.
Williams, Ronald J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.
Wright, Stephen J and Nocedal, Jorge. Numerical Optimization. Springer New York, 1999.
1506.02438#57
1506.02438
[ "1502.05477" ]
1506.02626#0
Learning both Weights and Connections for Efficient Neural Networks
arXiv:1506.02626v3 [cs.NE] 30 Oct 2015

# Learning both Weights and Connections for Efficient Neural Networks

Song Han (Stanford University, songhan@stanford.edu), Jeff Pool (NVIDIA, jpool@nvidia.com), John Tran (NVIDIA, johntran@nvidia.com), William J. Dally (Stanford University and NVIDIA, dally@stanford.edu)

# Abstract

Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy, by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine-tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9×, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13×, from 138 million to 10.3 million, again with no loss of accuracy.
1506.02626#1
1506.02626
[ "1507.06149" ]
1506.02626#1
Learning both Weights and Connections for Efficient Neural Networks
# Introduction

Neural networks have become ubiquitous in applications ranging from computer vision [1] to speech recognition [2] and natural language processing [3]. We consider convolutional neural networks used for computer vision tasks, which have grown over time. In 1998 Lecun et al. designed a CNN model, LeNet-5, with less than 1M parameters to classify handwritten digits [4], while in 2012, Krizhevsky et al. [1] won the ImageNet competition with 60M parameters. Deepface classified human faces with 120M parameters [5], and Coates et al. [6] scaled up a network to 10B parameters. While these large neural networks are very powerful, their size consumes considerable storage, memory bandwidth, and computational resources. For embedded mobile applications, these resource demands become prohibitive. Figure 1 shows the energy cost of basic arithmetic and memory operations in a 45nm CMOS process. From this data we see that the energy per connection is dominated by memory access and ranges from 5pJ for 32-bit coefficients in on-chip SRAM to 640pJ for 32-bit coefficients in off-chip DRAM [7]. Large networks do not fit in on-chip storage and hence require the more costly DRAM accesses. Running a 1-billion-connection neural network, for example, at 20Hz would require (20Hz)(1G)(640pJ) = 12.8W just for DRAM access, well beyond the power envelope of a typical mobile device. Our goal in pruning networks is to reduce the energy required to run such large networks so they can run in real time on mobile devices. The model size reduction from pruning also facilitates storage and transmission of mobile applications incorporating DNNs.
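To make the back-of-the-envelope power estimate above explicit, here is a small Python check (an illustration, not from the paper):

```python
# Energy estimate for a 1-billion-connection network whose weights are fetched from DRAM.
connections = 1e9            # weights fetched per inference
energy_per_access = 640e-12  # 640 pJ per 32-bit DRAM access (Figure 1)
rate_hz = 20                 # inferences per second

power_watts = rate_hz * connections * energy_per_access
print(power_watts)  # 12.8 W, just for DRAM weight fetches
```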
1506.02626#0
1506.02626#2
1506.02626
[ "1507.06149" ]
1506.02626#2
Learning both Weights and Connections for Efficient Neural Networks
Figure 1: Energy table for 45nm CMOS process [7]. Memory access is 3 orders of magnitude more energy expensive than simple arithmetic.

| Operation | Energy [pJ] | Relative Cost |
|---|---|---|
| 32 bit int ADD | 0.1 | 1 |
| 32 bit float ADD | 0.9 | 9 |
| 32 bit Register File | 1 | 10 |
| 32 bit int MULT | 3.1 | 31 |
| 32 bit float MULT | 3.7 | 37 |
| 32 bit SRAM Cache | 5 | 50 |
| 32 bit DRAM Memory | 640 | 6400 |
1506.02626#1
1506.02626#3
1506.02626
[ "1507.06149" ]
1506.02626#3
Learning both Weights and Connections for Efficient Neural Networks
To achieve this goal, we present a method to prune network connections in a manner that preserves the original accuracy. After an initial training phase, we remove all connections whose weight is lower than a threshold. This pruning converts a dense, fully-connected layer to a sparse layer. This first phase learns the topology of the network, learning which connections are important and removing the unimportant connections. We then retrain the sparse network so the remaining connections can compensate for the connections that have been removed. The phases of pruning and retraining may be repeated iteratively to further reduce network complexity. In effect, this training process learns the network connectivity in addition to the weights, much as in the mammalian brain [8][9], where synapses are created in the
1506.02626#2
1506.02626#4
1506.02626
[ "1507.06149" ]
1506.02626#4
Learning both Weights and Connections for Efficient Neural Networks
first few months of a child's development, followed by gradual pruning of little-used connections, falling to typical adult values.

# 2 Related Work

Neural networks are typically over-parameterized, and there is significant redundancy in deep learning models [10]. This results in a waste of both computation and memory. There have been various proposals to remove the redundancy: Vanhoucke et al. [11] explored a fixed-point implementation with 8-bit integer (vs 32-bit floating point) activations. Denton et al. [12] exploited the linear structure of the neural network by finding an appropriate low-rank approximation of the parameters and keeping the accuracy within 1% of the original model. With similar accuracy loss, Gong et al. [13] compressed deep convnets using vector quantization. These approximation and quantization techniques are orthogonal to network pruning, and they can be used together to obtain further gains [14]. There have been other attempts to reduce the number of parameters of neural networks by replacing the fully connected layer with global average pooling. The Network in Network architecture [15] and GoogLenet [16] achieve state-of-the-art results on several benchmarks by adopting this idea. However, transfer learning, i.e. reusing features learned on the ImageNet dataset and applying them to new tasks by only fine-tuning the fully connected layers, is more difficult with this approach. This problem is noted by Szegedy et al. [16] and motivates them to add a linear layer on the top of their networks to enable transfer learning. Network pruning has been used both to reduce network complexity and to reduce over-
1506.02626#3
1506.02626#5
1506.02626
[ "1507.06149" ]
1506.02626#5
Learning both Weights and Connections for Efficient Neural Networks
fitting. An early approach to pruning was biased weight decay [17]. Optimal Brain Damage [18] and Optimal Brain Surgeon [19] prune networks to reduce the number of connections based on the Hessian of the loss function and suggest that such pruning is more accurate than magnitude-based pruning such as weight decay. However, computing the second-order derivatives requires additional computation. HashedNets [20] is a recent technique to reduce model sizes by using a hash function to randomly group connection weights into hash buckets, so that all connections within the same hash bucket share a single parameter value. This technique may benefit from pruning.
1506.02626#4
1506.02626#6
1506.02626
[ "1507.06149" ]
1506.02626#6
Learning both Weights and Connections for Efficient Neural Networks
As pointed out in Shi et al. [21] and Weinberger et al. [22], sparsity will minimize hash collisions, making feature hashing even more effective. HashedNets may be used together with pruning to give even better parameter savings.

Figure 2: Three-Step Training Pipeline (Train Connectivity → Prune Connections → Train Weights). Figure 3: Synapses and neurons before and after pruning.

# 3 Learning Connections in Addition to Weights

Our pruning method employs a three-step process, as illustrated in Figure 2, which begins by learning the connectivity via normal network training. Unlike conventional training, however, we are not learning the final values of the weights, but rather which connections are important. The second step is to prune the low-weight connections. All connections with weights below a threshold are removed from the network, converting a dense network into a sparse network, as shown in Figure 3.
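A minimal NumPy sketch of this three-step process is shown below; the helper names (magnitude_prune, prune_and_retrain, train_step) are ours, and the paper's actual implementation is a modified Caffe with per-tensor masks rather than this toy loop.

```python
import numpy as np

def magnitude_prune(weights, threshold):
    """Return a 0/1 mask that keeps only connections with |w| above the threshold."""
    return (np.abs(weights) > threshold).astype(weights.dtype)

def prune_and_retrain(weights, threshold, train_step, num_retrain_steps):
    """Three-step pipeline: (1) weights come from normal training,
    (2) prune low-magnitude connections, (3) retrain the surviving weights.

    `train_step(weights)` is a placeholder for one optimization step that
    returns updated weights; the mask is re-applied after every step so that
    pruned connections stay at zero.
    """
    mask = magnitude_prune(weights, threshold)   # step 2: prune
    weights = weights * mask
    for _ in range(num_retrain_steps):           # step 3: retrain the survivors
        weights = train_step(weights) * mask
    return weights, mask
```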
1506.02626#5
1506.02626#7
1506.02626
[ "1507.06149" ]
1506.02626#7
Learning both Weights and Connections for Efficient Neural Networks
The final step retrains the network to learn the final weights for the remaining sparse connections. This step is critical: if the pruned network is used without retraining, accuracy is significantly impacted.

# 3.1 Regularization

Choosing the correct regularization impacts the performance of pruning and retraining. L1 regularization penalizes non-zero parameters, resulting in more parameters near zero. This gives better accuracy after pruning, but before retraining. However, the remaining connections are not as good as with L2 regularization, resulting in lower accuracy after retraining. Overall, L2 regularization gives the best pruning results. This is discussed further in the experiments section.

# 3.2 Dropout Ratio Adjustment

Dropout [23] is widely used to prevent over-fitting, and this also applies to retraining. During retraining, however, the dropout ratio must be adjusted to account for the change in model capacity. In dropout, each parameter is probabilistically dropped during training, but will come back during inference. In pruning, parameters are dropped forever after pruning and have no chance to come back during either training or inference. As the parameters get sparse, the classifier will select the most informative predictors and thus have much less prediction variance, which reduces over-
1506.02626#6
1506.02626#8
1506.02626
[ "1507.06149" ]
1506.02626#8
Learning both Weights and Connections for Efficient Neural Networks
fitting. As pruning has already reduced model capacity, the retraining dropout ratio should be smaller. Quantitatively, let C_i be the number of connections in layer i, C_io for the original network, C_ir for the network after retraining, and N_i be the number of neurons in layer i. Since dropout works on neurons, and C_i varies quadratically with N_i according to Equation 1, the dropout ratio after pruning the parameters should follow Equation 2, where D_o represents the original dropout rate and D_r the dropout rate during retraining.

C_i = N_i N_{i−1}   (1)

D_r = D_o √(C_ir / C_io)   (2)
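As a quick illustration of Equation 2 (a sketch under our own function name, not the authors' code):

```python
import math

def retrain_dropout_rate(d_original, conn_original, conn_retained):
    """Scale the dropout rate by sqrt(C_ir / C_io), following Equation 2.

    d_original:    dropout rate used when training the dense network (D_o)
    conn_original: number of connections in the layer before pruning (C_io)
    conn_retained: number of connections remaining after pruning (C_ir)
    """
    return d_original * math.sqrt(conn_retained / conn_original)

# Example: a layer pruned to 10% of its connections, original dropout 0.5
print(retrain_dropout_rate(0.5, 1_000_000, 100_000))  # ~0.158
```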
1506.02626#7
1506.02626#9
1506.02626
[ "1507.06149" ]
1506.02626#9
Learning both Weights and Connections for Efficient Neural Networks
# 3.3 Local Pruning and Parameter Co-adaptation

During retraining, it is better to retain the weights from the initial training phase for the connections that survived pruning than it is to re-initialize the pruned layers. CNNs contain fragile co-adapted features [24]: gradient descent is able to find a good solution when the network is initially trained, but not after re-initializing some layers and retraining them. So when we retrain the pruned layers, we should keep the surviving parameters instead of re-initializing them.
1506.02626#8
1506.02626#10
1506.02626
[ "1507.06149" ]
1506.02626#10
Learning both Weights and Connections for Efficient Neural Networks
Table 1: Network pruning can save 9× to 13× parameters with no drop in predictive performance.

| Network | Top-1 Error | Top-5 Error | Parameters | Compression Rate |
|---|---|---|---|---|
| LeNet-300-100 Ref | 1.64% | - | 267K | - |
| LeNet-300-100 Pruned | 1.59% | - | 22K | 12× |
| LeNet-5 Ref | 0.80% | - | 431K | - |
| LeNet-5 Pruned | 0.77% | - | 36K | 12× |
| AlexNet Ref | 42.78% | 19.73% | 61M | - |
| AlexNet Pruned | 42.77% | 19.67% | 6.7M | 9× |
| VGG-16 Ref | 31.50% | 11.32% | 138M | - |
| VGG-16 Pruned | 31.34% | 10.88% | 10.3M | 13× |
1506.02626#9
1506.02626#11
1506.02626
[ "1507.06149" ]
1506.02626#11
Learning both Weights and Connections for Efficient Neural Networks
12à 9à 13à Retraining the pruned layers starting with retained weights requires less computation because we donâ t have to back propagate through the entire network. Also, neural networks are prone to suffer the vanishing gradient problem [25] as the networks get deeper, which makes pruning errors harder to recover for deep networks. To prevent this, we ï¬ x the parameters for CONV layers and only retrain the FC layers after pruning the FC layers, and vice versa. # Iterative Pruning Learning the right connections is an iterative process. Pruning followed by a retraining is one iteration, after many such iterations the minimum number connections could be found. Without loss of accuracy, this method can boost pruning rate from 5à to 9à on AlexNet compared with single-step aggressive pruning. Each iteration is a greedy search in that we ï¬
1506.02626#10
1506.02626#12
1506.02626
[ "1507.06149" ]
1506.02626#12
Learning both Weights and Connections for Efficient Neural Networks
find the best connections. We also experimented with probabilistically pruning parameters based on their absolute value, but this gave worse results.

# 3.5 Pruning Neurons

After pruning connections, neurons with zero input connections or zero output connections may be safely pruned. This pruning is furthered by removing all connections to or from a pruned neuron. The retraining phase automatically arrives at the result where dead neurons will have both zero input connections and zero output connections. This occurs due to gradient descent and regularization. A neuron that has zero input connections (or zero output connections) will have no contribution to the final loss, leading the gradient to be zero for its output connection (or input connection), respectively. Only the regularization term will push the weights to zero. Thus, the dead neurons will be automatically removed during retraining.
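A sketch of this dead-neuron cleanup for a single hidden layer, assuming pruned connections are stored as exact zeros (our own helper, not the paper's implementation):

```python
import numpy as np

def prune_dead_neurons(w_in, w_out):
    """Drop hidden neurons that have no surviving connections.

    w_in:  array of shape (n_prev, n_hidden), incoming weights after pruning (zeros = pruned)
    w_out: array of shape (n_hidden, n_next), outgoing weights after pruning
    A neuron is dead if all of its incoming or all of its outgoing weights are zero;
    removing it deletes the matching column of w_in and row of w_out.
    """
    has_input = np.abs(w_in).sum(axis=0) > 0
    has_output = np.abs(w_out).sum(axis=1) > 0
    alive = has_input & has_output
    return w_in[:, alive], w_out[alive, :], alive
```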
1506.02626#11
1506.02626#13
1506.02626
[ "1507.06149" ]
1506.02626#13
Learning both Weights and Connections for Efficient Neural Networks
# 4 Experiments

We implemented network pruning in Caffe [26]. Caffe was modified to add, for each weight tensor, a mask which disregards pruned parameters during network operation. The pruning threshold is chosen as a quality parameter multiplied by the standard deviation of a layer's weights. We carried out the experiments on Nvidia TitanX and GTX980 GPUs. We pruned four representative networks: LeNet-300-100 and LeNet-5 on MNIST, together with AlexNet and VGG-16 on ImageNet. The network parameters and accuracy¹ before and after pruning are shown in Table 1.

# 4.1 LeNet on MNIST
1506.02626#12
1506.02626#14
1506.02626
[ "1507.06149" ]
1506.02626#14
Learning both Weights and Connections for Efficient Neural Networks
We first experimented on the MNIST dataset with the LeNet-300-100 and LeNet-5 networks [4]. LeNet-300-100 is a fully connected network with two hidden layers, with 300 and 100 neurons each, which achieves a 1.6% error rate on MNIST. LeNet-5 is a convolutional network that has two convolutional layers and two fully connected layers, which achieves a 0.8% error rate on MNIST. After pruning, the network is retrained with 1/10 of the original network's learning rate. Table 1 shows
1506.02626#13
1506.02626#15
1506.02626
[ "1507.06149" ]
1506.02626#15
Learning both Weights and Connections for Efficient Neural Networks
pruning saves 12× parameters on these networks.

¹ The reference model is from the Caffe model zoo; accuracy is measured without data augmentation.

Table 2: For LeNet-300-100, pruning reduces the number of weights by 12× and computation by 12×.

| Layer | Weights | FLOP | Act% | Weights% | FLOP% |
|---|---|---|---|---|---|
| fc1 | 235K | 470K | 38% | 8% | 8% |
| fc2 | 30K | 60K | 65% | 9% | 4% |
| fc3 | 1K | 2K | 100% | 26% | 17% |
| Total | 266K | 532K | 46% | 8% | 8% |

Table 3: For LeNet-5, pruning reduces the number of weights by 12× and computation by 6×.

| Layer | Weights | FLOP | Act% | Weights% | FLOP% |
|---|---|---|---|---|---|
| conv1 | 0.5K | 576K | 82% | 66% | 66% |
| conv2 | 25K | 3200K | 72% | 12% | 10% |
| fc1 | 400K | 800K | 55% | 8% | 6% |
| fc2 | 5K | 10K | 100% | 19% | 10% |
| Total | 431K | 4586K | 77% | 8% | 16% |

Figure 4: Visualization of the first FC layer's sparsity pattern of LeNet-300-100. It has a banded structure repeated 28 times, which corresponds to the un-pruned parameters in the center of the images, since the digits are written in the center.

For each layer of the network, the tables show (left to right) the original number of weights, the number of floating point operations to compute that layer's activations, the average percentage of activations that are non-zero, the percentage of non-zero weights after pruning, and the percentage of actually required floating point operations. An interesting byproduct is that network pruning detects visual attention regions. Figure 4 shows the sparsity pattern of the first fully connected layer of LeNet-300-100; the matrix size is 784 × 300. It has 28 bands, each of width 28, corresponding to the 28 × 28 input pixels. The colored regions of the figure, indicating non-zero parameters, correspond to the center of the image.
1506.02626#14
1506.02626#16
1506.02626
[ "1507.06149" ]
1506.02626#16
Learning both Weights and Connections for Efficient Neural Networks
Because digits are written in the center of the image, these are the important parameters. The graph is sparse on the left and right, corresponding to the less important regions at the top and bottom of the image. After pruning, the neural network finds the center of the image more important, and the connections to the peripheral regions are more heavily pruned.

# 4.2 AlexNet on ImageNet

We further examine the performance of pruning on the ImageNet ILSVRC-2012 dataset, which has 1.2M training examples and 50k validation examples. We use the AlexNet Caffe model as the reference model; it has 61 million parameters across 5 convolutional layers and 3 fully connected layers. The AlexNet Caffe model achieved a top-1 accuracy of 57.2% and a top-5 accuracy of 80.3%. The original AlexNet took 75 hours to train on an NVIDIA Titan X GPU. After pruning, the whole network is retrained with 1/100 of the original network's initial learning rate. It took 173 hours to retrain the pruned AlexNet. Pruning is not used when iteratively prototyping the model, but rather for model reduction when the model is ready for deployment, so the retraining time is less of a concern. Table 1 shows that AlexNet can be pruned to 1/9 of its original size without impacting accuracy, and the amount of computation can be reduced by 3×.
1506.02626#15
1506.02626#17
1506.02626
[ "1507.06149" ]
1506.02626#17
Learning both Weights and Connections for Efficient Neural Networks
Table 4: For AlexNet, pruning reduces the number of weights by 9× and computation by 3×.

| Layer | Weights | FLOP | Act% | Weights% | FLOP% |
|---|---|---|---|---|---|
| conv1 | 35K | 211M | 88% | 84% | 84% |
| conv2 | 307K | 448M | 52% | 38% | 33% |
| conv3 | 885K | 299M | 37% | 35% | 18% |
| conv4 | 663K | 224M | 40% | 37% | 14% |
| conv5 | 442K | 150M | 34% | 37% | 14% |
| fc1 | 38M | 75M | 36% | 9% | 3% |
| fc2 | 17M | 34M | 40% | 9% | 3% |
| fc3 | 4M | 8M | 100% | 25% | 10% |
| Total | 61M | 1.5B | 54% | 11% | 30% |

Table 5: For VGG-16, pruning reduces the number of weights by 12× and computation by 5×.
1506.02626#16
1506.02626#18
1506.02626
[ "1507.06149" ]
1506.02626#18
Learning both Weights and Connections for Efficient Neural Networks
# 4.3 VGG-16 on ImageNet

With promising results on AlexNet, we also looked at a larger, more recent network, VGG-16 [27], on the same ILSVRC-2012 dataset. VGG-16 has far more convolutional layers but still only three fully-connected layers. Following a similar methodology, we aggressively pruned both convolutional and fully-connected layers to realize a significant reduction in the number of weights, shown in Table 5. We used five iterations of pruning and retraining. The VGG-16 results are, like those for AlexNet, very promising.
1506.02626#17
1506.02626#19
1506.02626
[ "1507.06149" ]
1506.02626#19
Learning both Weights and Connections for Efficient Neural Networks
The network as a whole has been reduced to 7.5% of its original size (13× smaller). In particular, note that the two largest fully-connected layers can each be pruned to less than 4% of their original size. This reduction is critical for real-time image processing, where there is little reuse of fully connected layers across images (unlike batch processing during training).

# 5 Discussion

The trade-off curve between accuracy and number of parameters is shown in Figure 5. The more parameters are pruned away, the lower the accuracy. We experimented with L1 and L2 regularization, with and without retraining, together with iterative pruning, giving five trade-off lines. Comparing solid and dashed lines, the importance of retraining is clear: without retraining, accuracy begins dropping much sooner, with 1/3 of the original connections rather than with 1/10 of the original connections.
1506.02626#18
1506.02626#20
1506.02626
[ "1507.06149" ]
1506.02626#20
Learning both Weights and Connections for Efficient Neural Networks
It's interesting to see that we get the "free lunch" of reducing the connections by 2× without losing accuracy even without retraining, while with retraining we are able to reduce connections by 9×.

Figure 5: Trade-off curve for parameter reduction and loss in top-5 accuracy (accuracy loss vs. parameters pruned away, for L1/L2 regularization with and without retraining, and L2 regularization with iterative prune and retrain). L1 regularization performs better than L2 at learning the connections without retraining, while L2 regularization performs better than L1 at retraining. Iterative pruning gives the best result.
1506.02626#19
1506.02626#21
1506.02626
[ "1507.06149" ]
1506.02626#21
Learning both Weights and Connections for Efficient Neural Networks
Figure 6: Pruning sensitivity for CONV layers (left) and FC layers (right) of AlexNet.

L1 regularization gives better accuracy than L2 directly after pruning (dotted blue and purple lines) since it pushes more parameters closer to zero. However, comparing the yellow and green lines shows that L2 outperforms L1 after retraining, since there is no benefit to further pushing values towards zero. One extension is to use L1 regularization for pruning and then L2 for retraining, but this did not beat simply using L2 for both phases. Parameters from one mode do not adapt well to the other.

The biggest gain comes from iterative pruning (solid red line with solid circles). Here we take the pruned and retrained network (solid green line with circles) and prune and retrain it again. The leftmost dot on this curve corresponds to the point on the green line at 80% (5× pruning) pruned to 8×. There is no accuracy loss at 9×. Not until 10× does the accuracy begin to drop sharply. Two green points achieve slightly better accuracy than the original model. We believe this accuracy improvement is due to pruning finding the right capacity of the network and hence reducing overfitting.

Both CONV and FC layers can be pruned, but with different sensitivity. Figure 6 shows the sensitivity of each layer to network pruning.
1506.02626#20
1506.02626#22
1506.02626
[ "1507.06149" ]