Dataset schema (per-chunk fields):
id: string (length 12 to 15)
title: string (length 8 to 162)
content: string (length 1 to 17.6k)
prechunk_id: string (length 0 to 15)
postchunk_id: string (length 0 to 15)
arxiv_id: string (length 10 to 10)
references: sequence (length 1 to 1)
1505.00521#39
Reinforcement Learning Neural Turing Machines - Revised
Proceedings. ICRA '04. 2004 IEEE International Conference on, volume 3, pp. 2619-2624. IEEE, 2004. Levine, Sergey, Finn, Chelsea, Darrell, Trevor, and Abbeel, Pieter. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015. Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Graves, Alex, Antonoglou, Ioannis, Wierstra, Daan, and Riedmiller, Martin. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013. Mnih, Volodymyr, Heess, Nicolas, Graves, Alex, et al. Recurrent models of visual attention. In Advances in Neural Information Processing Systems, pp. 2204-2212, 2014. Peters, Jan and Schaal, Stefan.
1505.00521#38
1505.00521#40
1505.00521
[ "1503.01007" ]
1505.00521#40
Reinforcement Learning Neural Turing Machines - Revised
Policy gradient methods for robotics. In Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on, pp. 2219-2225. IEEE, 2006. Schmidhuber, Jürgen. Self-delimiting neural networks. arXiv preprint arXiv:1210.0118, 2012. Schmidhuber, Jürgen. Optimal ordered problem solver. Machine Learning, 54(3):211-254, 2004.
1505.00521#39
1505.00521#41
1505.00521
[ "1503.01007" ]
1505.00521#41
Reinforcement Learning Neural Turing Machines - Revised
Sukhbaatar, Sainbayar, Szlam, Arthur, Weston, Jason, and Fergus, Rob. Weakly supervised memory networks. arXiv preprint arXiv:1503.08895, 2015. Williams, Ronald J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992. Zaremba, Wojciech and Sutskever, Ilya. Learning to execute. arXiv preprint arXiv:1410.4615, 2014.
1505.00521#40
1505.00521#42
1505.00521
[ "1503.01007" ]
1505.00521#42
Reinforcement Learning Neural Turing Machines - Revised
# APPENDIX A: DETAILED REINFORCE EXPLANATION
We present here several techniques that decrease the variance of the gradient estimate for Reinforce. We have employed all of these techniques in our RL-NTM implementation. We expand the notation introduced in Sec. 4. Let A^\ddagger denote the set of all valid subsequences of actions (so A^\dagger \subseteq A^\ddagger). Moreover, we define
1505.00521#41
1505.00521#43
1505.00521
[ "1503.01007" ]
1505.00521#43
Reinforcement Learning Neural Turing Machines - Revised
the set of sequences of actions that are valid after executing a sequence a_{1:t} and that terminate. We denote this set by A^\dagger_{a_{1:t}}: every a_{(t+1):T} \in A^\dagger_{a_{1:t}} terminates an episode.

# CAUSALITY OF ACTIONS
Actions at time t cannot possibly influence rewards obtained in the past, because the past rewards are caused by actions prior to them. This idea allows us to derive an unbiased estimator of \partial_\theta J(\theta) with lower variance. Here, we formalize it:

\partial_\theta J(\theta)
  = \sum_{a_{1:T} \in A^\dagger} p_\theta(a) [\partial_\theta \log p_\theta(a)] R(a)
  = \sum_{a_{1:T} \in A^\dagger} p_\theta(a) [\partial_\theta \log p_\theta(a)] [\sum_{t=1}^T r(a_{1:t})]
  = \sum_{a_{1:T} \in A^\dagger} p_\theta(a) \sum_{t=1}^T [\partial_\theta \log p_\theta(a_{1:t}) + \partial_\theta \log p_\theta(a_{(t+1):T} | a_{1:t})] r(a_{1:t})
  = \sum_{a_{1:T} \in A^\dagger} \sum_{t=1}^T [p_\theta(a_{1:t}) \partial_\theta \log p_\theta(a_{1:t}) r(a_{1:t}) + p_\theta(a) \partial_\theta \log p_\theta(a_{(t+1):T} | a_{1:t}) r(a_{1:t})]
  = \sum_{a_{1:T} \in A^\dagger} \sum_{t=1}^T [p_\theta(a_{1:t}) \partial_\theta \log p_\theta(a_{1:t}) r(a_{1:t}) + p_\theta(a_{1:t}) r(a_{1:t}) \partial_\theta p_\theta(a_{(t+1):T} | a_{1:t})]
  = \sum_{a_{1:T} \in A^\dagger} \sum_{t=1}^T [p_\theta(a_{1:t}) \partial_\theta \log p_\theta(a_{1:t}) r(a_{1:t})] + \sum_{a_{1:T} \in A^\dagger} \sum_{t=1}^T [p_\theta(a_{1:t}) r(a_{1:t}) \partial_\theta p_\theta(a_{(t+1):T} | a_{1:t})]

We will show that the second sum on the right-hand side is equal to zero.
1505.00521#42
1505.00521#44
1505.00521
[ "1503.01007" ]
1505.00521#44
Reinforcement Learning Neural Turing Machines - Revised
It is zero because the future actions a_{(t+1):T} do not influence the past rewards r(a_{1:t}). Here we formalize it, using the identity \sum_{a_{(t+1):T} \in A^\dagger_{a_{1:t}}} p_\theta(a_{(t+1):T} | a_{1:t}) = 1:

\sum_{a_{1:T} \in A^\dagger} \sum_{t=1}^T [p_\theta(a_{1:t}) r(a_{1:t}) \partial_\theta p_\theta(a_{(t+1):T} | a_{1:t})]
  = \sum_{a_{1:t} \in A^\ddagger} [p_\theta(a_{1:t}) r(a_{1:t}) \sum_{a_{(t+1):T} \in A^\dagger_{a_{1:t}}} \partial_\theta p_\theta(a_{(t+1):T} | a_{1:t})]
  = \sum_{a_{1:t} \in A^\ddagger} p_\theta(a_{1:t}) r(a_{1:t}) \partial_\theta 1 = 0

We can therefore drop the second term from the expression for \partial_\theta J(\theta):

\partial_\theta J(\theta)
  = \sum_{a_{1:T} \in A^\dagger} \sum_{t=1}^T [p_\theta(a_{1:t}) \partial_\theta \log p_\theta(a_{1:t}) r(a_{1:t})]
  = E_{a_1 \sim p_\theta(a)} E_{a_2 \sim p_\theta(a | a_1)} \cdots E_{a_T \sim p_\theta(a | a_{1:(T-1)})} [\sum_{t=1}^T \partial_\theta \log p_\theta(a_t | a_{1:(t-1)}) \sum_{i=t}^T r(a_{1:i})]
1505.00521#43
1505.00521#45
1505.00521
[ "1503.01007" ]
1505.00521#45
Reinforcement Learning Neural Turing Machines - Revised
The last line of the derived equations describes the learning algorithm, which can be implemented as follows. A neural network outputs l_t = \log p_\theta(a_t | a_{1:(t-1)}). We sequentially sample an action a_t from the distribution e^{l_t} and execute the sampled action a_t. Simultaneously, we experience a reward r(a_{1:t}). We backpropagate to the node \log p_\theta(a_t | a_{1:(t-1)}) the sum of rewards starting from time step t, namely \sum_{i=t}^T r(a_{1:i}). The only difference from the initial algorithm is that we backpropagate the sum of rewards starting from the current time step, instead of the sum of rewards over the entire episode (a short code sketch of this weighting follows below).

# ONLINE BASELINE PREDICTION
Online baseline prediction is based on the idea that the importance of a reward is determined by its relation to the other rewards. All the rewards could be shifted by a constant, and such a shift should not affect this relation, so it should not influence the expected gradient. However, it can decrease the variance of the gradient estimate. This shift is called the baseline, and it can be estimated separately for every time step.
1505.00521#44
1505.00521#46
1505.00521
[ "1503.01007" ]
1505.00521#46
Reinforcement Learning Neural Turing Machines - Revised
We have that

\sum_{a_{(t+1):T} \in A^\dagger_{a_{1:t}}} p_\theta(a_{(t+1):T} | a_{1:t}) = 1
\partial_\theta \sum_{a_{(t+1):T} \in A^\dagger_{a_{1:t}}} p_\theta(a_{(t+1):T} | a_{1:t}) = 0

We are therefore allowed to subtract the above quantity (multiplied by b_t) from our estimate of the gradient without changing its expected value:

\partial_\theta J(\theta) = E_{a_1 \sim p_\theta(a)} E_{a_2 \sim p_\theta(a | a_1)} \cdots E_{a_T \sim p_\theta(a | a_{1:(T-1)})} [\sum_{t=1}^T \partial_\theta \log p_\theta(a_t | a_{1:(t-1)}) \sum_{i=t}^T (r(a_{1:i}) - b_t)]

The above statement holds for any sequence b_t.
1505.00521#45
1505.00521#47
1505.00521
[ "1503.01007" ]
1505.00521#47
Reinforcement Learning Neural Turing Machines - Revised
We aim to find the sequence b_t that yields the lowest-variance estimator of \partial_\theta J(\theta). The variance of our estimator is

Var = E_{a_1 \sim p_\theta(a)} \cdots E_{a_T \sim p_\theta(a | a_{1:(T-1)})} [(\sum_{t=1}^T \partial_\theta \log p_\theta(a_t | a_{1:(t-1)}) \sum_{i=t}^T (r(a_{1:i}) - b_t))^2]
      - (E_{a_1 \sim p_\theta(a)} \cdots E_{a_T \sim p_\theta(a | a_{1:(T-1)})} [\sum_{t=1}^T \partial_\theta \log p_\theta(a_t | a_{1:(t-1)}) \sum_{i=t}^T (r(a_{1:i}) - b_t)])^2

The second term does not depend on b_t, and the variance is always positive, so it suffices to minimize the first term. The first term is minimal when its derivative with respect to b_t is zero. This implies

E_{a_1 \sim p_\theta(a)} \cdots E_{a_T \sim p_\theta(a | a_{1:(T-1)})} [\sum_{t=1}^T \partial_\theta \log p_\theta(a_t | a_{1:(t-1)}) \sum_{i=t}^T (r(a_{1:i}) - b_t)] = 0
\sum_{t=1}^T \partial_\theta \log p_\theta(a_t | a_{1:(t-1)}) \sum_{i=t}^T (r(a_{1:i}) - b_t) = 0
b_t = \frac{\partial_\theta \log p_\theta(a_t | a_{1:(t-1)}) \sum_{i=t}^T r(a_{1:i})}{\partial_\theta \log p_\theta(a_t | a_{1:(t-1)})}

This gives an estimate for a vector b_t \in R^{\#\theta}. However, it is common to use a single scalar b_t \in R and to estimate it as E_{p_\theta(a_{t:T} | a_{1:(t-1)})} R(a_{t:T}) (a small code sketch of this estimator follows below).

# OFFLINE BASELINE PREDICTION
The Reinforce algorithm works much better whenever it has accurate baselines. A separate LSTM can help with baseline estimation. First, run the baseline LSTM on the entire input tape to produce a vector summarizing the input. Next, continue running the baseline LSTM in tandem with the controller LSTM,
1505.00521#46
1505.00521#48
1505.00521
[ "1503.01007" ]
1505.00521#48
Reinforcement Learning Neural Turing Machines - Revised
Figure 8: The baseline LSTM computes a baseline b_t for every computational step t of the RL-NTM. The baseline LSTM receives the same inputs as the RL-NTM, and it computes a baseline b_t for time t before observing the chosen actions of time t.
1505.00521#47
1505.00521#49
1505.00521
[ "1503.01007" ]
1505.00521#49
Reinforcement Learning Neural Turing Machines - Revised
so that the baseline LSTM receives precisely the same inputs as the controller LSTM, and outputs a baseline b_t at each time step t. The baseline LSTM is trained to minimize \sum_{t=1}^T [R(a_{t:T}) - b_t]^2 (Fig. 8). This technique introduces a biased estimator; however, it works well in practice (a minimal sketch of such a baseline network is given below). It is important to first provide the baseline LSTM with the entire input tape as preliminary input, because doing so allows it to accurately estimate the true difficulty of a given problem instance and therefore compute better baselines. For example, if a problem instance is unusually difficult, then we expect R_1 to be large and negative. If the baseline LSTM is given the entire input tape as an auxiliary input, it can compute an appropriately large and negative b_1.
1505.00521#48
1505.00521#50
1505.00521
[ "1503.01007" ]
1505.00521#50
Reinforcement Learning Neural Turing Machines - Revised
We found it important to first have the baseline LSTM go over the entire input before computing the baselines b_t. It is especially beneficial whenever there is considerable variation in the difficulty of the examples. For example, if the baseline LSTM can recognize that the current instance is unusually difficult, it can output a large negative value for b_{t=1} in anticipation of a large and negative R_1. In general, it is cheap and therefore worthwhile to provide the baseline network with all of the available information, even if this information would not be available at test time, because the baseline network is not needed at test time.

# APPENDIX B: EXECUTION TRACES
We present several execution traces of the RL-NTM. Each figure shows execution traces of the trained RL-NTM on each of the tasks. The first row shows the input tape and the desired output, while each subsequent row shows the RL-NTM's position on the input tape and its prediction for the output tape. In these examples, the RL-NTM solved each task perfectly, so the predictions made in the output tape perfectly match the desired outputs listed in the first row.
1505.00521#49
1505.00521#51
1505.00521
[ "1503.01007" ]
1505.00521#51
Reinforcement Learning Neural Turing Machines - Revised
Figures (execution traces): An RL-NTM successfully solving a small instance of the Reverse problem (where the external memory is not used). An RL-NTM successfully solving a small instance of the ForwardReverse problem, where the external memory is used. An RL-NTM successfully solving an instance of the RepeatCopy problem where the input is to be repeated three times. An example of a failure on the RepeatCopy task when the input tape is only allowed to move forward: the correct solution would have been to copy the input to the memory and then solve the task using the memory; instead, the memory pointer moves randomly.
1505.00521#50
1505.00521#52
1505.00521
[ "1503.01007" ]
1505.00521#52
Reinforcement Learning Neural Turing Machines - Revised
1505.00521#51
1505.00521
[ "1503.01007" ]
1505.00387#0
Highway Networks
arXiv:1505.00387v2 [cs.LG] 3 Nov 2015
# Highway Networks
# Rupesh Kumar Srivastava, Klaus Greff, Jürgen Schmidhuber
RUPESH@IDSIA.CH KLAUS@IDSIA.CH JUERGEN@IDSIA.CH
The Swiss AI Lab IDSIA, Istituto Dalle Molle di Studi sull'Intelligenza Artificiale, Università della Svizzera italiana (USI), Scuola universitaria professionale della Svizzera italiana (SUPSI), Galleria 2, 6928 Manno-Lugano, Switzerland
1505.00387#1
1505.00387
[ "1502.01852" ]
1505.00387#1
Highway Networks
# Abstract
There is plenty of theoretical and empirical evidence that depth of neural networks is a crucial ingredient for their success. However, network training becomes more difficult with increasing depth, and training of very deep networks remains an open problem. In this extended abstract, we introduce a new architecture designed to ease gradient-based training of very deep networks. We refer to networks with this architecture as highway networks, since they allow unimpeded information flow across several layers on information highways. The architecture is characterized by the use of gating units which learn to regulate the flow of information through a network. Highway networks with hundreds of layers can be trained directly using stochastic gradient descent and with a variety of activation functions, opening up the possibility of studying extremely deep and efficient architectures.

For instance, the top-5 image classification accuracy on the 1000-class ImageNet dataset has increased from ~84% (Krizhevsky et al., 2012) to ~95% (Szegedy et al., 2014; Simonyan & Zisserman, 2014) through the use of ensembles of deeper architectures and smaller receptive fields (Ciresan et al., 2011a;b; 2012) in just a few years. On the theoretical side, it is well known that deep networks can represent certain function classes exponentially more efficiently than shallow ones (e.g. the work of Håstad (1987); Håstad & Goldmann (1991) and recently of Montufar et al. (2014)). As argued by Bengio et al. (2013), the use of deep networks can offer both computational and statistical efficiency for complex tasks. However, training deeper networks is not as straightforward as simply adding layers. Optimization of deep networks has proven to be considerably more difficult,
1505.00387#0
1505.00387#2
1505.00387
[ "1502.01852" ]
1505.00387#2
Highway Networks
leading to research on initialization schemes (Glorot & Bengio, 2010; Saxe et al., 2013; He et al., 2015), techniques of training networks in multiple stages (Simonyan & Zisserman, 2014; Romero et al., 2014), or with temporary companion loss functions attached to some of the layers (Szegedy et al., 2014; Lee et al., 2015). Note: A full paper extending this study is available at http://arxiv.org/abs/1507.06228, with additional references, experiments and analysis.
# 1. Introduction
1505.00387#1
1505.00387#3
1505.00387
[ "1502.01852" ]
1505.00387#3
Highway Networks
Many recent empirical breakthroughs in supervised machine learning have been achieved through the application of deep neural networks. Network depth (referring to the number of successive computation layers) has played perhaps the most important role in these successes.

In this extended abstract, we present a novel architecture that enables the optimization of networks with virtually arbitrary depth. This is accomplished through the use of a learned gating mechanism for regulating information flow which is inspired by Long Short Term Memory recurrent neural networks (Hochreiter & Schmidhuber, 1995). Due to this gating mechanism, a neural network can have paths along which information can flow across several layers without attenuation. We call such paths information highways, and such networks highway networks.
1505.00387#2
1505.00387#4
1505.00387
[ "1502.01852" ]
1505.00387#4
Highway Networks
Presented at the Deep Learning Workshop, International Conference on Machine Learning, Lille, France, 2015. Copyright 2015 by the author(s). In preliminary experiments, we found that highway networks as deep as 900 layers can be optimized using simple Stochastic Gradient Descent (SGD) with momentum. For up to 100 layers we compare their training behavior to that of traditional networks with normalized initialization (Glorot & Bengio, 2010; He et al., 2015). We show that optimization of highway networks is virtually independent of depth, while for traditional networks it suffers significantly as the number of layers increases. We also show that architectures comparable to those recently presented by Romero et al. (2014) can be directly trained to obtain similar test set accuracy on the CIFAR-10 dataset without the need for a pre-trained teacher network.
1505.00387#3
1505.00387#5
1505.00387
[ "1502.01852" ]
1505.00387#5
Highway Networks
# 1.1. Notation
We use boldface letters for vectors and matrices, and italicized capital letters to denote transformation functions. 0 and 1 denote vectors of zeros and ones respectively, and I denotes an identity matrix. The function \sigma(x) is defined as \sigma(x) = 1/(1 + e^{-x}), x \in R.

Similarly, for the Jacobian of the layer transform,

dy/dx = { I,            if T(x, W_T) = 0,
        { H'(x, W_H),   if T(x, W_T) = 1.      (5)

Thus, depending on the output of the transform gates, a highway layer can smoothly vary its behavior between that of a plain layer and that of a layer which simply passes its inputs through. Just as a plain layer consists of multiple computing units such that the ith unit computes y_i = H_i(x), a highway network consists of multiple blocks such that the ith block computes a block state H_i(x) and a transform gate output T_i(x). Finally, it produces the block output y_i = H_i(x) * T_i(x) + x_i * (1 - T_i(x)), which is connected to the next layer.

# 2. Highway Networks
A plain feedforward neural network typically consists of L layers, where the lth layer (l \in {1, 2, ..., L}) applies a non-linear transform H (parameterized by W_{H,l}) on its input x_l to produce its output y_l. Thus, x_1 is the input to the network and y_L is the network's output.

# 2.1. Constructing Highway Networks
As mentioned earlier, Equation (3) requires that the dimensionality of x, y, H(x, W_H) and T(x, W_T) be the same. In cases when it is desirable to change the size of the representation, one can replace x with \hat{x} obtained by suitably sub-sampling or zero-padding x. Another alternative is to use a plain layer (without highways) to change dimensionality and then continue with stacking highway layers. This is the alternative we use in this study.

Omitting the layer index and biases for clarity,
1505.00387#4
1505.00387#6
1505.00387
[ "1502.01852" ]
1505.00387#6
Highway Networks
y = H(x, W_H).      (1)

H is usually an affine transform followed by a non-linear activation function, but in general it may take other forms. Convolutional highway layers are constructed similarly to fully connected layers. Weight-sharing and local receptive fields are utilized for both the H and T transforms. We use zero-padding to ensure that the block state and transform gate feature maps are the same size as the input.

For a highway network, we additionally define two non-linear transforms T(x, W_T) and C(x, W_C) such that

y = H(x, W_H) * T(x, W_T) + x * C(x, W_C).      (2)

We refer to T as the transform gate and C as the carry gate, since they express how much of the output is produced by transforming the input and carrying it, respectively. For simplicity, in this paper we set C = 1 - T, giving

y = H(x, W_H) * T(x, W_T) + x * (1 - T(x, W_T)).      (3)

The dimensionality of x, y, H(x, W_H) and T(x, W_T) must be the same for Equation (3) to be valid. Note that this re-parametrization of the layer transformation is much more flexible than Equation (1). (A code sketch of such a highway layer is given below.)

# 2.2. Training Deep Highway Networks
1505.00387#5
1505.00387#7
1505.00387
[ "1502.01852" ]
1505.00387#7
Highway Networks
For plain deep networks, training with SGD stalls at the beginning unless a specific weight initialization scheme is used such that the variance of the signals during forward and backward propagation is preserved initially (Glorot & Bengio, 2010; He et al., 2015). This initialization depends on the exact functional form of H. For highway layers, we use the transform gate defined as T(x) = \sigma(W_T^T x + b_T), where W_T is the weight matrix and b_T the bias vector for the transform gates. This suggests a simple initialization scheme which is independent of the nature of H: b_T can be initialized with a negative value (e.g. -1, -3, etc.) such that the network is initially biased towards carry behavior (a convolutional sketch with this initialization is given below). This scheme is strongly inspired by the proposal of Gers et al. (1999) to initially bias the gates in a Long Short-Term Memory recurrent network to help bridge long-term temporal dependencies early in learning.
1505.00387#6
1505.00387#8
1505.00387
[ "1502.01852" ]
1505.00387#8
Highway Networks
In particular, observe that

y = { x,            if T(x, W_T) = 0,
    { H(x, W_H),    if T(x, W_T) = 1.      (4)

Note that \sigma(x) \in (0, 1) for all x \in R, so the conditions in Equation (4) can never be exactly true.

In our experiments, we found that a negative bias
1505.00387#7
1505.00387#9
1505.00387
[ "1502.01852" ]
1505.00387#9
Highway Networks
nd effective initialization schemes for many choices of H. address this question, we compared highway networks to the thin and deep architectures termed Fitnets proposed re- cently by Romero et al. (2014) on the CIFAR-10 dataset augmented with random translations. Results are summa- rized in Table 1. # 3. Experiments # 3.1. Optimization Very deep plain networks become difï¬ cult to optimize even if using the variance-preserving initialization scheme form (He et al., 2015). To show that highway networks do not suffer from depth in the same way we train run a series of experiments on the MNIST digit classiï¬ cation dataset. We measure the cross entropy error on the training set, to investigate optimization, without conï¬ ating them with gen- eralization issues. We train both plain networks and highway networks with the same architecture and varying depth.
1505.00387#8
1505.00387#10
1505.00387
[ "1502.01852" ]
1505.00387#10
Highway Networks
The first layer is always a regular fully-connected layer followed by 9, 19, 49, or 99 fully-connected plain or highway layers and a single softmax output layer. The number of units in each layer is kept constant: it is 50 for highway networks and 71 for plain networks. That way the number of parameters is roughly the same for both. To make the comparison fair, we run a random search of 40 runs for both plain and highway networks to find good settings for the hyperparameters. We optimized the initial learning rate, momentum, learning rate decay rate, activation function for H (either ReLU or tanh) and, for highway networks, the value for the transform gate bias (between -1 and -10). All other weights were initialized following the scheme introduced by He et al. (2015). The convergence plots for the best performing networks for each depth can be seen in Figure 1. While 10-layer plain networks show very good performance, their performance significantly degrades as depth increases. Highway networks, on the other hand, do not seem to suffer from an increase in depth at all.
1505.00387#9
1505.00387#11
1505.00387
[ "1502.01852" ]
1505.00387#11
Highway Networks
The final result of the 100-layer highway network is about one order of magnitude better than the 10-layer one, and is on par with the 10-layer plain network. In fact, we started training a similar 900-layer highway network on CIFAR-100, which is only at 80 epochs as of now, but so far has shown no signs of optimization difficulties. It is also worth pointing out that the highway networks always converge significantly faster than the plain ones.
# 3.2. Comparison to Fitnets
1505.00387#10
1505.00387#12
1505.00387
[ "1502.01852" ]
1505.00387#12
Highway Networks
Romero et al. (2014) reported that training using plain backpropagation was only possible for maxout networks with depth up to 5 layers when the number of parameters was limited to ~250K and the number of multiplications to ~30M. Training of deeper networks was only possible through the use of a two-stage training procedure and the addition of soft targets produced from a pre-trained shallow teacher network (hint-based training). Similarly, it was only possible to train 19-layer networks with a budget of 2.5M parameters using hint-based training. We found that it was easy to train highway networks with numbers of parameters and operations comparable to Fitnets directly using backpropagation. As shown in Table 1, Highway 1 and Highway 2, which are based on the architectures of Fitnet 1 and Fitnet 4 respectively, obtain similar or higher accuracy on the test set. We were also able to train thinner and deeper networks: a 19-layer highway network with ~1.4M parameters and a 32-layer highway network with ~1.25M parameters both perform similarly to the teacher network of Romero et al. (2014).
1505.00387#11
1505.00387#13
1505.00387
[ "1502.01852" ]
1505.00387#13
Highway Networks
# 4. Analysis
In Figure 2 we show some inspections of the inner workings of the best¹ 50-hidden-layer fully-connected highway networks trained on MNIST (top row) and CIFAR-100 (bottom row). The first three columns show, for each transform gate, the bias, the mean activity over 10K random samples, and the activity for a single random sample, respectively. The block outputs for the same single sample are displayed in the last column. The transform gate biases of the two networks were initialized to -2 and -4 respectively. It is interesting to note that, contrary to our expectations, most biases actually decreased further during training. For the CIFAR-100 network the biases increase with depth, forming a gradient. Curiously, this gradient is inversely correlated with the average activity of the transform gates, as seen in the second column. This indicates that the strong negative biases at low depths are not used to shut down the gates, but to make them more selective. This behavior is also suggested by the fact that the transform gate activity for a single example (column 3) is very sparse. This effect is more pronounced for the CIFAR-100 network, but can also be observed to a lesser extent in the MNIST network. Deep highway networks are easy to optimize, but are they also beneficial for supervised learning, where we are interested in generalization performance on a test set?
1505.00387#12
1505.00387#14
1505.00387
[ "1502.01852" ]
1505.00387#14
Highway Networks
To 1obtained via random search over hyperparameters to mini- mize the best training set error achieved using each conï¬ guration Highway Networks 10° Depth 10 Depth 20 107 10° Mean Cross Entropy Error 10% 10° Depth 50 Depth 100 â plain â highway 0 50 100 150 200 250 300 350 4000 Number of Epochs Number of Epochs 50 100 150 200 250 300 350 4000 50 100 150 200 250 300 350 4000 Number of Epochs 50 100 150 200 250 300 350 400 Number of Epochs Figure 1. Comparison of optimization of plain networks and highway networks of various depths. All networks were optimized using SGD with momentum. The curves shown are for the best hyperparameter settings obtained for each conï¬ guration using a random search. Plain networks become much harder to optimize with increasing depth, while highway networks with up to 100 layers can still be optimized well. Network Fitnet Results reported by Romero et al. (2014) Number of Layers Number of Parameters Accuracy Teacher Fitnet 1 Fitnet 2 Fitnet 3 Fitnet 4 5 11 11 13 19 â
1505.00387#13
1505.00387#15
1505.00387
[ "1502.01852" ]
1505.00387#15
Highway Networks
Table 1 (columns: network, number of layers, number of parameters, accuracy).
Results reported by Romero et al. (2014):
Teacher: 5 layers, ~9M parameters, 90.18%
Fitnet 1: 11 layers, ~250K parameters, 89.01%
Fitnet 2: 11 layers, ~862K parameters, 91.06%
Fitnet 3: 13 layers, ~1.6M parameters, 91.10%
Fitnet 4: 19 layers, ~2.5M parameters, 91.61%
Highway networks:
Highway 1 (Fitnet 1): 11 layers, ~236K parameters, 89.18%
Highway 2 (Fitnet 4): 19 layers, ~2.3M parameters, 92.24%
Highway 3*: 19 layers, ~1.4M parameters, 90.68%
Highway 4*: 32 layers, ~1.25M parameters, 90.34%
Table 1.
1505.00387#14
1505.00387#16
1505.00387
[ "1502.01852" ]
1505.00387#16
Highway Networks
CIFAR-10 test set accuracy of convolutional highway networks with rectified linear activation and sigmoid gates. For comparison, results reported by Romero et al. (2014) using maxout networks are also shown. Fitnets were trained using a two-step training procedure using soft targets from the trained Teacher network, which was trained using backpropagation. We trained all highway networks directly using backpropagation. * indicates networks which were trained only on a set of 40K out of 50K examples in the training set.
1505.00387#15
1505.00387#17
1505.00387
[ "1502.01852" ]
1505.00387#17
Highway Networks
The last column of Figure 2 displays the block outputs and clearly visualizes the concept of "information highways". Most of the outputs stay constant over many layers, forming a pattern of stripes. Most of the change in outputs happens in the early layers (approximately 10 for MNIST and 30 for CIFAR-100). We hypothesize that this difference is due to the higher complexity of the CIFAR-100 dataset.

# 5. Conclusion
Learning to route information through neural networks has helped to scale up their application to challenging problems by improving credit assignment and making training easier (Srivastava et al., 2015). Even so, training very deep networks has remained difficult, especially without considerably increasing total network size. In summary, it is clear that highway networks actually utilize the gating mechanism to pass information almost unchanged through many layers. This mechanism serves not just as a means for easier training, but is also heavily used to route information in a trained network. We observe very selective activity of the transform gates, varying strongly in reaction to the current input patterns. Highway networks are novel neural network architectures which enable the training of extremely deep networks using simple SGD. While the traditional plain neural architectures become increasingly difficult to train with increasing network depth (even with variance-preserving initialization), our experiments show that optimization of highway networks is not hampered even as network depth increases to a hundred layers. The ability to train extremely deep networks opens up the possibility of studying the impact of depth on complex

[Figure 2 panels: transform gate biases, mean transform gate outputs, transform gate outputs, and block outputs, for the MNIST and CIFAR-100 networks, plotted over depth and block index.] Figure 2. Visualization of certain internals of the blocks in the best 50 hidden layer highway networks trained on MNIST (top row) and CIFAR-100 (bottom row). The first hidden layer is a plain layer which changes the dimensionality of the representation to 50. Each of the 49 highway layers (y-axis) consists of 50 blocks (x-axis).
1505.00387#16
1505.00387#18
1505.00387
[ "1502.01852" ]
1505.00387#18
Highway Networks
The first column shows the transform gate biases, which were initialized to -2 and -4 respectively. In the second column the mean output of the transform gate over 10,000 training examples is depicted. The third and fourth columns show the output of the transform gates and the block outputs for a single random training sample.

problems without restrictions. Various activation functions which may be more suitable for particular problems, but for which robust initialization schemes are unavailable, can be used in deep highway networks. Future work will also attempt to improve the understanding of learning in highway networks.
1505.00387#17
1505.00387#19
1505.00387
[ "1502.01852" ]
1505.00387#19
Highway Networks
# Acknowledgments
This research was supported by the EU project "NASCENCE" (FP7-ICT-317662).
1505.00387#18
1505.00387#20
1505.00387
[ "1502.01852" ]
1505.00387#20
Highway Networks
We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPUs used for this research.
# References
Ciresan, D. C., Meier, Ueli, Masci, Jonathan, Gambardella, Luca M., and Schmidhuber, Jürgen. Flexible, high performance convolutional neural networks for image classification. In IJCAI, 2011b. URL http://www.aaai.org/ocs/index.php/IJCAI/IJCAI11/paper/download/3098/3425%0020http://dl.acm.org/citation.cfm?id=2283603.
Bengio, Yoshua, Courville, Aaron, and Vincent, Pascal. Representation learning: A review and new perspectives. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(8):1798-1828, 2013. URL http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6472238.
Gers, Felix A., Schmidhuber, Jürgen, and Cummins, Fred. Learning to forget: Continual prediction with LSTM. In ICANN, volume 2, pp. 850-855, 1999. URL http://ieeexplore.ieee.org/
1505.00387#19
1505.00387#21
1505.00387
[ "1502.01852" ]
1505.00387#21
Highway Networks
Ciresan, Dan, Meier, Ueli, Masci, Jonathan, and Schmidhuber, Jürgen. A committee of neural networks for traffic sign classification. In Neural Networks (IJCNN), The 2011 International Joint Conference on, pp. 1918-1921. IEEE, 2011a. URL http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6033458.
Ciresan, Dan, Meier, Ueli, and Schmidhuber, Jürgen. Multi-column deep neural networks for image classification. In IEEE Conference on Computer Vision and Pattern Recognition, 2012.
Glorot, Xavier and Bengio, Yoshua. Understanding the difficulty of training deep feedforward neural networks. In International Conference on Artificial Intelligence and Statistics, pp. 249-256, 2010. URL http://machinelearning.wustl.edu/mlpapers/paper_files/AISTATS2010_GlorotB10.pdf.
Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556 [cs], September 2014. URL http://arxiv.org/abs/1409.1556.
1505.00387#20
1505.00387#22
1505.00387
[ "1502.01852" ]
1505.00387#22
Highway Networks
Håstad, Johan. Computational limitations of small-depth circuits. MIT Press, 1987. URL http://dl.acm.org/citation.cfm?id=SERIES9056.27031.
Håstad, Johan and Goldmann, Mikael. On the power of small-depth threshold circuits. Computational Complexity, 1(2):113-129, 1991. URL http://link.springer.com/article/10.1007/BF01272517.
1505.00387#21
1505.00387#23
1505.00387
[ "1502.01852" ]
1505.00387#23
Highway Networks
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. arXiv:1502.01852 [cs], February 2015. URL http://arxiv.org/abs/1502.01852.
Srivastava, Rupesh Kumar, Masci, Jonathan, Gomez, Faustino, and Schmidhuber, Jürgen. Understanding locally competitive networks. In International Conference on Learning Representations, 2015. URL http://arxiv.org/abs/1410.1165.
Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir, Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions. arXiv:1409.4842 [cs], September 2014. URL http://arxiv.org/abs/1409.4842.
1505.00387#22
1505.00387#24
1505.00387
[ "1502.01852" ]
1505.00387#24
Highway Networks
Hochreiter, Sepp and Schmidhuber, Jürgen. Long short term memory. Technical Report FKI-207-95, Technische Universität München, München, August 1995. URL http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.51.3117.
Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012. URL http://books.nips.cc/papers/files/nips25/NIPS2012_0534.pdf.
Lee, Chen-Yu, Xie, Saining, Gallagher, Patrick, Zhang, Zhengyou, and Tu, Zhuowen. Deeply-supervised nets. pp. 562-570, 2015. URL http://jmlr.org/proceedings/papers/v38/lee15a.html.
Montufar, Guido F., Pascanu, Razvan, Cho, Kyunghyun, and Bengio, Yoshua. On the number of linear regions of deep neural networks. In Advances in Neural Information Processing Systems, 2014. URL http://papers.nips.cc/paper/5422-on-the-number-of-linear-regions-of-deep-neural-networks.pdf.
Romero, Adriana, Ballas, Nicolas, Kahou, Samira Ebrahimi, Chassang, Antoine, Gatta, Carlo, and Bengio, Yoshua. FitNets: Hints for thin deep nets. arXiv:1412.6550 [cs], December 2014. URL http://arxiv.org/abs/1412.6550.
Saxe, Andrew M., McClelland, James L., and Ganguli, Surya. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv:1312.6120 [cond-mat, q-bio, stat], December 2013. URL http://arxiv.org/abs/1312.6120.
1505.00387#23
1505.00387
[ "1502.01852" ]
1504.00702#0
End-to-End Training of Deep Visuomotor Policies
arXiv:1504.00702v5 [cs.LG] 19 Apr 2016
Journal of Machine Learning Research 17 (2016) 1-40. Submitted 10/15; Published 4/16
# End-to-End Training of Deep Visuomotor Policies
Sergey Levine*, Chelsea Finn*, Trevor Darrell, Pieter Abbeel (*These authors contributed equally.)
svlevine@eecs.berkeley.edu cbfinn@eecs.berkeley.edu trevor@eecs.berkeley.edu pabbeel@eecs.berkeley.edu
Division of Computer Science, University of California, Berkeley, CA 94720-1776, USA
Editor: Jan Peters
# Abstract
1504.00702#1
1504.00702
[ "1509.06113" ]
1504.00702#1
End-to-End Training of Deep Visuomotor Policies
Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods.
1504.00702#0
1504.00702#2
1504.00702
[ "1509.06113" ]
1504.00702#2
End-to-End Training of Deep Visuomotor Policies
Keywords: Reinforcement Learning, Optimal Control, Vision, Neural Networks # 1. Introduction Robots can perform impressive tasks under human control, including surgery (Lanfranco et al., 2004) and household chores (Wyrobek et al., 2008). However, designing the perception and control software for autonomous operation remains a major challenge, even for basic tasks. Policy search methods hold the promise of allowing robots to automatically learn new behaviors through experience (Kober et al., 2010b; Deisenroth et al., 2011; Kalakrishnan et al., 2011; Deisenroth et al., 2013). However, policies learned using such methods often rely on a number of hand-engineered components for perception and control, so as to present the policy with a more manageable and low-dimensional representation of observations and actions. The vision system in particular can be complex and prone to errors, and it is typically not improved during policy training, nor adapted to the goal of the task.
1504.00702#1
1504.00702#3
1504.00702
[ "1509.06113" ]
1504.00702#3
End-to-End Training of Deep Visuomotor Policies
In this article, we aim to answer the following question: can we acquire more effective policies for sensorimotor control if the perception system is trained jointly with the control policy, rather than separately? In order to represent a policy that performs both
©2016 Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel.
[Figure 1 shows four manipulation tasks: hanger, cube, hammer, bottle.] Figure 1:
1504.00702#2
1504.00702#4
1504.00702
[ "1509.06113" ]
1504.00702#4
End-to-End Training of Deep Visuomotor Policies
Our method learns visuomotor policies that directly use camera image observations (left) to set motor torques on a PR2 robot (right). perception and control, we use deep neural networks. Deep neural network representations have recently seen widespread success in a variety of domains, such as computer vision and speech recognition, and even playing video games. However, using deep neural networks for real-world sensorimotor policies, such as robotic controllers that map image pixels and joint angles to motor torques, presents a number of unique challenges. Successful applications of deep neural networks typically rely on large amounts of data and direct supervision of the output, neither of which is available in robotic control. Real-world robot interaction data is scarce, and task completion is defined at a high level by means of a cost function, which means that the learning algorithm must determine on its own which action to take at each point. From the control perspective, a further complication is that observations from the robot's sensors do not provide us with the full state of the system. Instead, important state information, such as the positions of task-relevant objects, must be inferred from inputs such as camera images. We address these challenges by developing a guided policy search algorithm for sensorimotor deep learning, as well as a novel CNN architecture designed for robotic control. Guided policy search converts policy search into supervised learning, by iteratively constructing the training data using an efficient model-free trajectory optimization procedure. We show that this can be formalized as an instance of Bregman ADMM (BADMM) (Wang and Banerjee, 2014), which can be used to show that the algorithm converges to a locally optimal solution. In our method, the full state of the system is observable at training time, but not at test time. For most tasks, providing the full state simply requires positioning objects in one of several known positions for each trial during training. At test time, the learned CNN policy can handle novel, unknown configurations, and no longer requires full state information. Since the policy is optimized with supervised learning, we can use standard methods like stochastic gradient descent for training. Our CNNs have 92,000 parameters and 7 layers, including a novel spatial feature point transformation that provides accurate spatial reasoning and reduces overfitting.
1504.00702#3
1504.00702#5
1504.00702
[ "1509.06113" ]
1504.00702#5
End-to-End Training of Deep Visuomotor Policies
This allows us to train our policies with relatively modest amounts of data and only tens of minutes of real-world interaction time. We evaluate our method by learning policies for inserting a block into a shape sorting cube, screwing a cap onto a bottle, fitting the claw of a toy hammer under a nail with various grasps, and placing a coat hanger on a rack with a PR2 robot (see Figure 1). These tasks require localization, visual tracking, and handling complex contact dynamics. Our results demonstrate improvements in consistency and generalization from training visuomotor policies end-to-end, when compared to training the vision and control components separately. We also present simulated comparisons that show that guided policy search outperforms a
1504.00702#4
1504.00702#6
1504.00702
[ "1509.06113" ]
1504.00702#6
End-to-End Training of Deep Visuomotor Policies
number of prior methods when training high-dimensional neural network policies. Some of the material in this article has previously appeared in two conference papers (Levine and Abbeel, 2014; Levine et al., 2015), which we extend to introduce visual input into the policy.
# 2. Related Work
Reinforcement learning and policy search methods (Gullapalli, 1990; Williams, 1992) have been applied in robotics for playing games such as table tennis (Kober et al., 2010b), object manipulation (Gullapalli, 1995; Peters and Schaal, 2008; Kober et al., 2010a; Deisenroth et al., 2011; Kalakrishnan et al., 2011), locomotion (Benbrahim and Franklin, 1997; Kohl and Stone, 2004; Tedrake et al., 2004; Geng et al., 2006; Endo et al., 2008), and flight (Ng et al., 2004). Several recent papers provide surveys of policy search in robotics (Deisenroth et al., 2013; Kober et al., 2013). Such methods are typically applied to one component of the robot control pipeline, which often sits on top of a hand-designed controller, such as a PD controller, and accepts processed input, for example from an existing vision pipeline (Kalakrishnan et al., 2011). Our method learns policies that map visual input and joint encoder readings directly to the torques at the robot's joints.
1504.00702#5
1504.00702#7
1504.00702
[ "1509.06113" ]
1504.00702#7
End-to-End Training of Deep Visuomotor Policies
By learning the entire mapping from perception to control, the perception layers can be adapted to optimize task performance, and the motor control layers can be adapted to imperfect perception. We represent our policies with convolutional neural networks (CNNs). CNNs have a long history in computer vision and deep learning (Fukushima, 1980; LeCun et al., 1989; Schmidhuber, 2015), and have recently gained prominence due to excellent results on a number of vision benchmarks (Ciresan et al., 2011; Krizhevsky et al., 2012; Ciresan et al., 2012; Girshick et al., 2014a; Tompson et al., 2014; LeCun et al., 2015; He et al., 2015). Most applications of CNNs focus on classification, where locational information is discarded by means of successive pooling layers to provide for invariance (Lee et al., 2009). Applications to localization typically either use a sliding window (Girshick et al., 2014a) or object proposals (Endres and Hoiem, 2010; Uijlings et al., 2013; Girshick et al., 2014b) to localize the object, reducing the task to classification; perform regression to a heatmap of manually labeled keypoints (Tompson et al., 2014), requiring precise knowledge of the object position in the image and camera calibration; or use 3D models to localize previously scanned objects (Pepik et al., 2012; Savarese and Fei-Fei, 2007). Many prior robotic applications of CNNs do not directly consider control, but employ CNNs for the perception component of a larger robotic system (Hadsell et al., 2009; Sung et al., 2015; Lenz et al., 2015b; Pinto and Gupta, 2015). We use a novel CNN architecture for our policies that automatically learns feature points that capture spatial information about the scene, without any supervision beyond the information from the robot's encoders and camera. Applications of deep learning in robotic control have been less prevalent in recent years than in visual recognition.
1504.00702#6
1504.00702#8
1504.00702
[ "1509.06113" ]
1504.00702#8
End-to-End Training of Deep Visuomotor Policies
Backpropagation through the dynamics and the image formation process is typically impractical, since they are often non-differentiable, and such long-range backpropagation can lead to extreme numerical instability, since the linearization of a suboptimal policy is likely to be unstable. This issue has also been observed in the related context of recurrent neural networks (Hochreiter et al., 2001; Pascanu and Bengio, 2012). The high dimensionality of the network also makes reinforcement learning difficult (Deisenroth et al., 2013). Pioneering early work on neural network control used
1504.00702#7
1504.00702#9
1504.00702
[ "1509.06113" ]
1504.00702#9
End-to-End Training of Deep Visuomotor Policies
small, simple networks (Pomerleau, 1989; Hunt et al., 1992; Bekey and Goldberg, 1992; Lewis et al., 1998; Bakker et al., 2003; Mayer et al., 2006), and has largely been supplanted by methods that use carefully designed policies that can be learned efficiently with reinforcement learning (Kober et al., 2013). More recent work on sensorimotor deep learning has tackled simple task-space motions (Lenz et al., 2015a; Lampe and Riedmiller, 2013) and used unsupervised learning to obtain low-dimensional state spaces from images (Lange et al., 2012). Such methods have been demonstrated on tasks with a low-dimensional underlying structure: Lenz et al. (2015a) controls the end-effector in 2D space, while Lange et al. (2012) controls a 2-dimensional slot car with 1-dimensional actions. Our experiments include full torque control of 7-DoF robotic arms interacting with objects, with 30-40 state dimensions. In simple synthetic environments, control from images has been addressed with image features (Jodogne and Piater, 2007), nonparametric methods (van Hoof et al., 2015), and unsupervised state-space learning (Böhmer et al., 2013; Jonschkowski and Brock, 2014). CNNs have also been trained to play video games with Q-learning, Monte Carlo tree search, and stochastic search (Mnih et al., 2013; Koutník et al., 2013; Guo et al., 2014), and have been applied to simple simulated control tasks (Watter et al., 2015; Lillicrap et al., 2015). However, such methods have only been demonstrated on synthetic domains that lack the visual complexity of the real world, and require an impractical number of samples for real-world robotic learning. Our method is sample efficient, requiring only minutes of interaction time. To the best of our knowledge, this is the first
1504.00702#8
1504.00702#10
1504.00702
[ "1509.06113" ]
1504.00702#10
End-to-End Training of Deep Visuomotor Policies
method that can train deep visuomotor policies for complex, high-dimensional manipulation skills with direct torque control. Learning visuomotor policies on a real robot requires handling complex observations and high-dimensional policy representations. We tackle these challenges using guided policy search. In guided policy search, the policy is optimized using supervised learning, which scales gracefully with the dimensionality of the policy. The training set for supervised learning can be constructed using trajectory optimization under known dynamics (Levine and Koltun, 2013a,b, 2014; Mordatch and Todorov, 2014) and trajectory-centric reinforcement learning methods that operate under unknown dynamics (Levine and Abbeel, 2014; Levine et al., 2015), which is the approach taken in this work. In both cases, the supervision is adapted to the policy, to ensure that the final policy can reproduce the training data. The use of supervised learning in the inner loop of iterative policy search has also been proposed in the context of imitation learning (Ross et al., 2011, 2013). However, such methods typically do not address the question of how the supervision should be adapted to the policy. The goal of our approach is also similar to visual servoing, which performs feedback control on feature points in a camera image (Espiau et al., 1992; Mohta et al., 2014; Wilson et al., 1996). However, our visuomotor policies are entirely learned from real-world data, and do not require feature points or feedback controllers to be specified
1504.00702#9
1504.00702#11
1504.00702
[ "1509.06113" ]
1504.00702#11
End-to-End Training of Deep Visuomotor Policies
by hand. This allows our method much more flexibility in choosing how to use the visual signal. Our approach also does not require any sort of camera calibration, in contrast to many visual servoing methods (though not all; see e.g. Jägersand et al. (1997); Yoshimi and Allen (1994)).
# 3. Background and Overview
In this section, we define the visuomotor policy learning problem and present an overview of our approach. The core component of our approach is a guided policy search algorithm
1504.00702#10
1504.00702#12
1504.00702
[ "1509.06113" ]
1504.00702#12
End-to-End Training of Deep Visuomotor Policies
that separates the problem of learning visuomotor policies into separate supervised learning and trajectory learning phases, each of which is easier than optimizing the policy directly. We also discuss a policy architecture suitable for end-to-end learning of vision and control, and a training setup that allows our method to be applied to real robotic platforms.

# 3.1 Definitions and Problem Formulation
In policy search, the goal is to learn a policy \pi_\theta(u_t | o_t) that allows an agent to choose actions u_t in response to observations o_t to control a dynamical system, such as a robot. The policy comes from some parametric class parameterized by \theta, which could be, for example, the weights of a neural network. The system is defined by states x_t, actions u_t, and observations o_t. For example, x_t might include the joint angles of the robot, the positions of objects in the world, and their time derivatives; u_t might consist of motor torque commands; and o_t might include an image from the robot's onboard camera. In this paper, we address finite-horizon episodic tasks with t \in [1, ..., T]. The states evolve in time according to the system dynamics p(x_{t+1} | x_t, u_t), and the observations are, in general, a stochastic consequence of the states, according to p(o_t | x_t). Neither the dynamics nor the observation distribution are assumed to be known in general. For notational convenience, we will use \pi_\theta(u_t | x_t) to denote the distribution over actions under the policy conditioned on the state. However, since the policy is conditioned on the observation o_t, this distribution is in fact given by \pi_\theta(u_t | x_t) = \int \pi_\theta(u_t | o_t) p(o_t | x_t) do_t. The dynamics and \pi_\theta(u_t | x_t) together induce a distribution over trajectories \tau = {x_1, u_1, x_2, u_2, ..., x_T, u_T}:

\pi_\theta(\tau) = p(x_1) \prod_{t=1}^T \pi_\theta(u_t | x_t) p(x_{t+1} | x_t, u_t).

The goal of a task is given by a cost function \ell(x_t, u_t), and the objective in policy search is to minimize the expectation E_{\pi_\theta(\tau)}[\sum_{t=1}^T \ell(x_t, u_t)], which we will abbreviate as E_{\pi_\theta(\tau)}[\ell(\tau)] (a Monte Carlo sketch of this objective is given below). A summary of the notation used in the paper is provided in Table 1.
1504.00702#11
1504.00702#13
1504.00702
[ "1509.06113" ]
1504.00702#13
End-to-End Training of Deep Visuomotor Policies
s onboard camera. In this paper, we address finite horizon episodic tasks with t ⠬ [1,..., 7]. The states evolve in time according to the system dynamics p(x++1|x¢, uz), and the observations are, in general, a stochastic consequence of the states, according to p(o;|x;). Neither the dynamics nor the observation distribution are assumed to be known in general. For notational convenience, we will use 79(u;|x;) to denote the distribution over actions under the policy conditioned on the state. However, since the policy is conditioned on the observation o;, this distribution is in fact given by 9(ui|xz) =f 79(ur|oz)p(o1|xz)do;. The dynamics and 79(u;,|x;) together induce a distribution over trajectories tT = {x), U1, X2,Uo,...,x7, ur}: T mo(T) = p(x1) | J ro (uelxe)p(xes1]Xe, ue): tt The goal of a task is given by a cost function ¢(x;, uz), and the objective in policy search is to minimize the expectation E,,(;) ee 1 (xt, uz)], which we will abbreviate as E,,,(7)[(7)]- A summary of the notation used in the paper is provided in Table 1. # 3.2 Approach Summary Our methods consists of two main components, which are illustrated in Figure 3. The first is a supervised learning algorithm that trains policies of the form 79(u;|o,) = N(u7 (oz), &(0z)), where both y⠢(o¢) and (oz) are general nonlinear functions. In our implementation, 1 (0¢) is a deep convolutional neural network, while ©7(o;) is an observation-independent earned covariance, though other representations are possible. The second component is a rajectory-centric reinforcement learning (RL) algorithm that generates guiding distribu- ions p;(u;|x;) that provide the supervision used to train the policy. These two components orm a policy search algorithm that can be used to learn complex robotic tasks using only a high-level cost function ¢(x;, uz).
1504.00702#12
1504.00702#14
1504.00702
[ "1509.06113" ]
1504.00702#14
End-to-End Training of Deep Visuomotor Policies
During training, only samples from the guiding distribu- ions p;(u;|xz) are generated by running rollouts on the physical system, which avoids the need to execute partially trained neural network policies on physical hardware. Supervised learning will not, in general, produce a policy with good long-horizon per- formance, since a small mistake on the part of the policy will place the system into states that are outside the distribution in the training data, causing compounding errors. To 5 Levine, Finn, Darrell, and Abbeel
1504.00702#13
1504.00702#15
1504.00702
[ "1509.06113" ]
1504.00702#15
End-to-End Training of Deep Visuomotor Policies
Symbol, definition, example or details:
x_t: Markovian system state at time step t \in [1, T]. Example: joint angles, end-effector pose, object positions, and their velocities; dimensionality: 14 to 32.
u_t: control or action at time step t \in [1, T]. Example: joint motor torque commands; dimensionality: 7 (for the PR2 robot).
o_t: observation at time step t \in [1, T]. Example: RGB camera image, joint encoder readings and velocities, end-effector pose; dimensionality: around 200,000.
\tau: trajectory; notational shorthand for a sequence of states and actions, \tau = {x_1, u_1, x_2, u_2, ..., x_T, u_T}.
\ell(x_t, u_t): cost function that defines the goal of the task. Example: distance between an object in the gripper and the target.
p(x_{t+1} | x_t, u_t): unknown system dynamics; the physics that govern the robot and any objects it interacts with.
p(o_t | x_t): unknown observation distribution; the stochastic process that produces camera images from the system state.
\pi_\theta(u_t | o_t): learned nonlinear global policy parameterized by weights \theta; a convolutional neural network, such as the one in Figure 2.
\pi_\theta(u_t | x_t) = \int \pi_\theta(u_t | o_t) p(o_t | x_t) do_t: notational shorthand for the observation-based policy conditioned on the state.
p_i(u_t | x_t): learned local time-varying linear-Gaussian controller for initial state x_1^i; has the form N(K_t x_t + k_t, C_t).
\pi_\theta(\tau) = p(x_1) \prod_{t=1}^T \pi_\theta(u_t | x_t) p(x_{t+1} | x_t, u_t): notational shorthand for the trajectory distribution induced by a policy.
1504.00702#14
1504.00702#16
1504.00702
[ "1509.06113" ]
1504.00702#16
End-to-End Training of Deep Visuomotor Policies
Table 1: Summary of the notation frequently used in this article.

avoid this issue, the training data must come from the policy's own state distribution (Ross et al., 2011). We achieve this by alternating between trajectory-centric RL and supervised learning. The RL stage adapts to the current policy πθ(ut|ot), providing supervision at states that are iteratively brought closer to the states visited by the policy. This is formalized as a variant of the BADMM algorithm (Wang and Banerjee, 2014) for constrained optimization, which can be used to show that, at convergence, the policy πθ(ut|ot) and the guiding distributions pi(ut|xt) will exhibit the same behavior. This algorithm is derived in Section 4. The guiding distributions are substantially easier to optimize than learning the policy parameters directly (e.g., using model-free reinforcement learning), because they use the full state of the system xt, while the policy πθ(ut|ot) only uses the observations. This means that the method requires the full state to be known during training, but not at test time. This makes it possible to efficiently learn complex visuomotor policies, but imposes additional assumptions on the observability of xt during training that we discuss in Section 4. When learning visuomotor tasks, the policy πθ(ut|ot) is represented by a novel convolutional neural network (CNN) architecture, which we describe in Section 5.2. CNNs have enjoyed considerable success in computer vision (LeCun et al., 2015), but the most popular
1504.00702#15
1504.00702#17
1504.00702
[ "1509.06113" ]
1504.00702#17
End-to-End Training of Deep Visuomotor Policies
[Figure 2 diagram: RGB image (240) → conv1 (7×7, stride 2, ReLU) → conv2 (ReLU) → conv3 (ReLU; 109×109 response maps) → spatial softmax → expected 2D position (feature points) → concatenated with robot configuration (39) → fully connected (40, ReLU) → fully connected (40, ReLU) → linear → motor torques (7).]

Figure 2: Visuomotor policy architecture. The network contains three convolutional layers, followed by a spatial softmax and an expected position layer that converts pixel-wise features to feature points, which are better suited for spatial computations. The points are concatenated with the robot configuration, then passed through three fully connected layers to produce the torques.

architectures rely on large datasets and focus on semantic tasks such as classification, often intentionally discarding spatial information. Our architecture, illustrated in Figure 2, uses a fixed transformation from the last convolutional layer to a set of spatial feature points, which form a concise representation of the visual scene suitable for feedback control. Our network has 7 layers and around 92,000 parameters, which presents a major challenge for standard policy search methods (Deisenroth et al., 2013). To reduce the amount of experience needed to train visuomotor policies, we also introduce a pretraining scheme that allows us to train effective policies with a relatively small number of iterations. The pretraining steps are illustrated in Figure 3. The intuition behind our pretraining is that, although we ultimately seek to obtain sensorimotor policies that combine both vision and control, low-level aspects of vision can be initialized independently. To that end, we pretrain the convolutional layers of our network by predicting elements of xt that are not provided in the observation ot, such as the positions of objects in the scene. We also initially train the guiding trajectory distributions pi(ut|xt) independently of the convolutional network until the trajectories achieve a basic level of competence at the task, and then switch to full guided policy search with end-to-end training of πθ(ut|ot). In our implementation, we also initialize the first layer
1504.00702#16
1504.00702#18
1504.00702
[ "1509.06113" ]
1504.00702#18
End-to-End Training of Deep Visuomotor Policies
filters from the model of Szegedy et al. (2014), which is trained on ImageNet (Deng et al., 2009) classification. The initialization and pretraining scheme is described in Section 5.2.

# 4. Guided Policy Search with BADMM

Guided policy search transforms policy search into a supervised learning problem, where the training set is generated by a simple trajectory-centric RL algorithm. This algorithm
1504.00702#17
1504.00702#19
1504.00702
[ "1509.06113" ]
1504.00702#19
End-to-End Training of Deep Visuomotor Policies
optimizes linear-Gaussian controllers pi(ut|xt), and is described in Section 4.2. We refer to the trajectory distribution induced by pi(ut|xt) as pi(τ). Each pi(ut|xt) succeeds from different initial states. For example, in the task of placing a cap on a bottle, these initial states correspond to different positions of the bottle. By training on trajectories for multiple bottle positions, the
1504.00702#18
1504.00702#20
1504.00702
[ "1509.06113" ]
1504.00702#20
End-to-End Training of Deep Visuomotor Policies
final CNN policy can succeed from all initial states, and can generalize to other states from the same distribution. The final policy πθ(ut|ot) learned with guided policy search is only provided with observations ot of the full state xt, and the dynamics are assumed to be unknown. A diagram of this method, which corresponds to an expanded version of the guided policy search box in Figure 3, is shown on the right. In the outer loop, we draw sample trajectories {τ_i^j} for each initial state on the physical system by running the corresponding controller pi(ut|xt). The samples are used to fit the dynamics pi(xt+1|xt, ut) that are used to improve pi(ut|xt), and serve as training data for the policy. The inner loop alternates between optimizing each pi(τ) and optimizing the policy to match these trajectory distributions. The policy is trained to predict the actions along each trajectory from the observations ot, rather than the full state xt. This allows the policy to directly use raw observations at test time. This alternating optimization can be framed as an instance of the BADMM algorithm (Wang and Banerjee, 2014), which converges to a solution where the trajectory distributions and the policy have the same state distribution. This allows greedy supervised training of the policy to produce a policy with good long-horizon performance.

[Diagram: outer loop — run each pi(ut|xt) on the robot → samples {τ_i^j} → fit dynamics; inner loop — optimize πθ w.r.t. Lθ ↔ optimize each pi(τ) w.r.t. Lp.]

# 4.1 Algorithm Derivation

Policy search methods minimize the expected cost E_{πθ}[ℓ(τ)], where τ = {x1, u1, ..., xT, uT} is a trajectory, and ℓ(τ) = Σ_{t=1}^T ℓ(xt, ut) is the cost of an episode. In the fully observed case, the expectation is taken under πθ(τ) = p(x1) ∏_{t=1}^T πθ(ut|xt) p(xt+1|xt, ut).
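The outer/inner loop structure described above can be summarized in code. This is a structural sketch only: every helper is a user-supplied callable with a hypothetical name, not a function from the paper's implementation.

```python
def guided_policy_search(initial_states, run_on_robot, fit_dynamics,
                         init_controller, improve_controller,
                         fit_policy, update_multipliers,
                         n_iters=10, inner_iters=4):
    """Alternating outer/inner loop of guided policy search (structural sketch).

    All non-integer arguments are user-supplied callables; they must tolerate
    the initial None policy/multipliers on the first pass.
    """
    controllers = [init_controller(x1) for x1 in initial_states]
    multipliers, policy = None, None
    for _ in range(n_iters):                                   # outer loop
        samples = [run_on_robot(p) for p in controllers]       # rollouts {tau_i^j}
        dynamics = [fit_dynamics(s) for s in samples]
        for _ in range(inner_iters):                           # inner loop
            policy = fit_policy(samples, controllers, multipliers)        # min L_theta
            controllers = [improve_controller(p, f, policy, multipliers)  # min L_p
                           for p, f in zip(controllers, dynamics)]
            multipliers = update_multipliers(multipliers, policy,
                                             controllers, samples)
    return policy
```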
1504.00702#19
1504.00702#21
1504.00702
[ "1509.06113" ]
1504.00702#21
End-to-End Training of Deep Visuomotor Policies
The final policy πθ(ut|ot) is conditioned on the observations ot, but πθ(ut|xt) can be recovered as πθ(ut|xt) = ∫ πθ(ut|ot)p(ot|xt)dot. We will present the derivation in this section for πθ(ut|xt), but we do not require knowledge of p(ot|xt) in the final algorithm. As discussed in Section 4.3, the integral will be evaluated with samples from the real system, which include both xt and ot. We begin by rewriting the expected cost minimization as a constrained problem:

min_{p, πθ} Ep[ℓ(τ)]  s.t.  p(ut|xt) = πθ(ut|xt)  ∀ xt, ut, t,     (1)

where we will refer to p(τ) as a guiding distribution. This formulation is equivalent to the original problem, since the constraint forces the two distributions to be identical. However, if we approximate the initial state distribution p(x1) with samples x1^i, we can choose p(τ) to be a class of distributions that is much easier to optimize than πθ, as we will show later. This will allow us to use simple local learning methods for p(τ), without needing to train the complex neural network policy πθ(ut|ot) directly with reinforcement learning, which would require a prohibitive amount of experience on real physical systems. The constrained problem can be solved by a dual descent method, which alternates between minimizing the Lagrangian with respect to the primal variables, and incrementing
1504.00702#20
1504.00702#22
1504.00702
[ "1509.06113" ]
1504.00702#22
End-to-End Training of Deep Visuomotor Policies
the Lagrange multipliers by their subgradient. Minimization of the Lagrangian with respect to p(τ) and θ is done in alternating fashion: minimizing with respect to θ corresponds to supervised learning (making πθ match p(τ)), and minimizing with respect to p(τ) consists of one or more trajectory optimization problems. The dual descent method we use is based on BADMM (Wang and Banerjee, 2014), a variant of ADMM (Boyd et al., 2011) that augments the Lagrangian with a Bregman divergence between the constrained variables. We use the KL-divergence as the Bregman constraint, which is particularly convenient for working with probability distributions. We will also modify the constraint p(ut|xt) = πθ(ut|xt) by multiplying both sides by p(xt), to get p(ut|xt)p(xt) = πθ(ut|xt)p(xt). This constraint is equivalent, but has the convenient property that we can express the Lagrangian in terms of expectations. The BADMM augmented Lagrangians for θ and p are therefore given by

Lθ(θ, p) = Σ_{t=1}^T E_{p(xt,ut)}[ℓ(xt, ut)] + E_{p(xt)πθ(ut|xt)}[λxt,ut] − E_{p(xt,ut)}[λxt,ut] + νt φt^θ(θ, p)

Lp(p, θ) = Σ_{t=1}^T E_{p(xt,ut)}[ℓ(xt, ut)] + E_{p(xt)πθ(ut|xt)}[λxt,ut] − E_{p(xt,ut)}[λxt,ut] + νt φt^p(p, θ),

where λxt,ut is the Lagrange multiplier for state xt and action ut at time t, and φt^θ(θ, p) and φt^p(p, θ) are expectations of the KL-divergences:

φt^p(p, θ) = E_{p(xt)}[DKL(p(ut|xt) ‖ πθ(ut|xt))]
φt^θ(θ, p) = E_{p(xt)}[DKL(πθ(ut|xt) ‖ p(ut|xt))].

Dual descent with alternating primal minimization is then described by the following steps:

θ ← arg min_θ Σ_{t=1}^T E_{p(xt)πθ(ut|xt)}[λxt,ut] + νt φt^θ(θ, p)

p ← arg min_p Σ_{t=1}^T E_{p(xt,ut)}[ℓ(xt, ut) − λxt,ut] + νt φt^p(p, θ)

λxt,ut ← λxt,ut + ανt(πθ(ut|xt)p(xt) − p(ut|xt)p(xt)).
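The alternating primal minimization and multiplier update above follow the usual augmented-Lagrangian pattern. The toy example below illustrates that pattern on a scalar problem with a quadratic (rather than KL/Bregman) penalty; it is purely illustrative and is not the BADMM procedure used in the paper.

```python
def toy_dual_descent(n_iters=100, alpha=0.5, nu=1.0):
    """Scalar toy: minimize f(x) + g(z) subject to x = z, where f(x) = (x - 3)^2
    and g(z) = z^2, by alternating closed-form primal minimization of an
    augmented Lagrangian and a dual (multiplier) update."""
    x, z, lam = 0.0, 0.0, 0.0
    for _ in range(n_iters):
        # argmin_x (x - 3)^2 + lam * x + (nu / 2) * (x - z)^2
        x = (6.0 - lam + nu * z) / (2.0 + nu)
        # argmin_z z^2 - lam * z + (nu / 2) * (z - x)^2
        z = (lam + nu * x) / (2.0 + nu)
        # dual update on the constraint x = z
        lam = lam + alpha * nu * (x - z)
    return x, z, lam   # x and z approach the constrained optimum 1.5

print(toy_dual_descent())
```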
1504.00702#21
1504.00702#23
1504.00702
[ "1509.06113" ]
1504.00702#23
End-to-End Training of Deep Visuomotor Policies
Eoy(xce)mro (uelxe) Axe] + 10% (0, p) t=1 T pear min)? Enyce au) E(C%t; Wt) â Aree ay] + 4d? (p, t=1 Axes â Areas + 04 (779 (Ue Xt) (Xt) â p(Ur|Xt)P(%t))- # t (p, θ) This procedure is an instance of BADMM, and therefore inherits its convergence guarantees. Note that we drop terms that are independent of the optimization variables on each line. The parameter α is a step size. As with most augmented Lagrangian methods, the weight νt is set heuristically, as described in Appendix A.1. The dynamics only aï¬ ect the optimization with respect to p(Ï ). In order to make this optimization eï¬ cient, we choose p(Ï ) to be a mixture of N Gaussians pi(Ï ), one for each initial state sample xi 1. This makes the action conditionals pi(ut|xt) and the dynamics pi(xt+1|xt, ut) linear-Gaussian, as discussed in Section 4.2. This is a reasonable choice when the system is deterministic, or the noise is Gaussian or small, and we found that this approach is suï¬ ciently tolerant to noise for use on real physical systems. Our choice of p also assumes that the policy Ï Î¸(ut|ot) is conditionally Gaussian. This is also reasonable, since the mean and covariance of Ï Î¸(ut|ot) can be any nonlinear function of the observations
1504.00702#22
1504.00702#24
1504.00702
[ "1509.06113" ]
1504.00702#24
End-to-End Training of Deep Visuomotor Policies
9 # Levine, Finn, Darrell, and Abbeel ot, which themselves are a function of the unobserved state xt. In Section 4.2, we show how these assumptions enable each pi(Ï ) to be optimized very eï¬ ciently. We will refer to pi(Ï ) as guiding distributions, since they serve to focus the policy on good, low-cost behaviors. Aside from learning pi(Ï ), we must choose a tractable way to represent the inï¬ nite set of constraints p(ut|xt)p(xt) = Ï Î¸(ut|xt)p(xt). One approximate approach proposed in prior work is to replace the exact constraints with expectations of features (Peters et al., 2010). When the features consist of linear, quadratic, or higher order monomial functions of the random variable, this can be viewed as a constraint on the moments of the distributions.
1504.00702#23
1504.00702#25
1504.00702
[ "1509.06113" ]
1504.00702#25
End-to-End Training of Deep Visuomotor Policies
If we only use the first moment, we get a constraint on the expected action: E_{p(ut|xt)p(xt)}[ut] = E_{πθ(ut|xt)p(xt)}[ut]. If the stochasticity in the dynamics is low, as we assumed previously, the optimal solution for each pi(τ) will have low entropy, making this first moment constraint a reasonable approximation. The KL-divergence terms in the augmented Lagrangians will still serve to softly enforce agreement between the higher moments. While this simplification is quite drastic, we found that it was more stable in practice than including higher moments, likely because these higher moments are harder to estimate accurately with a limited number of samples. The alternating optimization is now given by

θ ← arg min_θ Σ_{t=1}^T E_{p(xt)πθ(ut|xt)}[ut^T λμt] + νt φt^θ(θ, p)     (2)

p ← arg min_p Σ_{t=1}^T E_{p(xt,ut)}[ℓ(xt, ut) − ut^T λμt] + νt φt^p(p, θ)     (3)

λμt ← λμt + ανt(E_{πθ(ut|xt)p(xt)}[ut] − E_{p(ut|xt)p(xt)}[ut]),

where λμt is the Lagrange multiplier on the expected action at time t. In the rest of the paper, we will use Lθ(θ, p) and Lp(p, θ) to denote the two augmented Lagrangians in Equations (2) and (3), respectively. In the next two sections, we will describe how Lp(p, θ) can be optimized with respect to p under unknown dynamics, and how Lθ(θ, p) can be optimized for complex, high-dimensional policies. Implementation details of the BADMM optimization are presented in Appendix A.1.

# 4.2 Trajectory Optimization under Unknown Dynamics

Since the Lagrangian Lp(p, θ) in the previous section factorizes over the mixture elements in p(τ) = Σi pi(τ), we describe the trajectory optimization method for a single Gaussian p(τ). When there are multiple mixture elements, this procedure is applied in parallel to each pi(τ).
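The multiplier update in this first-moment version depends only on sample estimates of the expected actions. A minimal sketch of that update; the array shapes and the step size are illustrative assumptions, not values from the paper.

```python
import numpy as np

def update_action_multipliers(lam_mu, policy_mean_actions, controller_mean_actions,
                              nu, alpha=0.1):
    """First-moment BADMM multiplier update (sketch).

    lam_mu: array (T, dU) of Lagrange multipliers on the expected action.
    policy_mean_actions / controller_mean_actions: arrays (T, dU) holding
    Monte Carlo estimates of E_{pi_theta(u|x)p(x)}[u] and E_{p(u|x)p(x)}[u],
    evaluated on samples drawn from the guiding distributions.
    nu: array (T,) of penalty weights; alpha is the dual step size.
    """
    return lam_mu + alpha * nu[:, None] * (policy_mean_actions
                                           - controller_mean_actions)
```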
1504.00702#24
1504.00702#26
1504.00702
[ "1509.06113" ]
1504.00702#26
End-to-End Training of Deep Visuomotor Policies
Since p(τ) is Gaussian, the conditionals p(xt+1|xt, ut) and p(ut|xt), which correspond to the dynamics and the controller, are time-varying linear-Gaussian, and given by

p(ut|xt) = N(Kt xt + kt, Ct)        p(xt+1|xt, ut) = N(fxt xt + fut ut + fct, Ft).

This type of controller can be learned efficiently with a small number of real-world samples, making it a good choice for optimizing the guiding distributions. Since a different set of time-varying linear-Gaussian dynamics is fitted for each initial state, this dynamics representation can model any continuous deterministic system that can be locally linearized. Stochastic dynamics can violate the local linearity assumption in principle, but we found that in practice this representation was well suited for a wide variety of noisy real-world tasks.
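Given fitted controller and dynamics terms, sampling a rollout from p(τ) reduces to iterating two Gaussian draws. A minimal sketch, with all array arguments assumed to be provided by the fitting procedures:

```python
import numpy as np

def sample_controller_rollout(K, k, C, f_x, f_u, f_c, F, x1):
    """Roll out a time-varying linear-Gaussian controller through fitted
    linear-Gaussian dynamics (argument names are illustrative).

    K: (T, dU, dX), k: (T, dU), C: (T, dU, dU)  -- controller terms
    f_x: (T, dX, dX), f_u: (T, dX, dU), f_c: (T, dX), F: (T, dX, dX) -- dynamics
    """
    T = K.shape[0]
    x = x1.copy()
    xs, us = [], []
    for t in range(T):
        u = np.random.multivariate_normal(K[t] @ x + k[t], C[t])
        xs.append(x); us.append(u)
        x = np.random.multivariate_normal(f_x[t] @ x + f_u[t] @ u + f_c[t], F[t])
    return np.array(xs), np.array(us)
```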
1504.00702#25
1504.00702#27
1504.00702
[ "1509.06113" ]
1504.00702#27
End-to-End Training of Deep Visuomotor Policies
The dynamics are determined by the environment. If they are known, p(ut|xt) can be optimized with a variant of the iterative linear-quadratic-Gaussian regulator (iLQG) (Li and Todorov, 2004; Levine and Koltun, 2013a), which is a variant of DDP (Jacobson and Mayne, 1970). In the case of unknown dynamics, we can fit p(xt+1|xt, ut) to sample trajectories drawn from the trajectory distribution at the previous iteration, denoted p̂(τ). If p(τ) is too different from p̂(τ), these samples will not give a good estimate of p(xt+1|xt, ut), and the optimization will diverge. To avoid this, we can bound the change from p̂(τ) to p(τ) in terms of their KL-divergence by a step size ε, producing the following constrained problem:

min_{p(τ)} Lp(p, θ)  s.t.  DKL(p(τ) ‖ p̂(τ)) ≤ ε.

This type of policy update has previously been proposed by several authors in the context of policy search (Bagnell and Schneider, 2003; Peters and Schaal, 2008; Peters et al., 2010; Levine and Abbeel, 2014). In the case when p(τ) is Gaussian, this problem can be solved efficiently using dual gradient descent, while the dynamics p(xt+1|xt, ut) are fitted to samples gathered by running the previous controller p̂(ut|xt) on the robot. Fitting a global Gaussian mixture model to tuples (xt, ut, xt+1) and using it as a prior for fitting the dynamics p(xt+1|xt, ut) serves to greatly reduce the sample complexity. We describe the dynamics fitting procedure in detail in Appendix A.3. Note that the trajectory optimization cost function Lp(p, θ) also depends on the policy πθ(ut|xt), while we only have access to πθ(ut|ot).
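The core of the dynamics fitting is per-time-step linear regression on rollout tuples (xt, ut, xt+1); the paper additionally uses a GMM prior (Appendix A.3), which is omitted in this simplified sketch.

```python
import numpy as np

def fit_time_varying_dynamics(X, U, reg=1e-6):
    """Fit time-varying linear-Gaussian dynamics x_{t+1} ~ N(fx x + fu u + fc, F)
    by per-time-step ridge regression on rollout data (no GMM prior here).

    X: (N, T, dX) states from N rollouts, U: (N, T, dU) actions.
    Returns a list of (fx, fu, fc, F) for t = 0, ..., T-2.
    """
    N, T, dX = X.shape
    dU = U.shape[2]
    params = []
    for t in range(T - 1):
        XU = np.hstack([X[:, t], U[:, t], np.ones((N, 1))])   # (N, dX+dU+1)
        Y = X[:, t + 1]                                        # (N, dX)
        W = np.linalg.solve(XU.T @ XU + reg * np.eye(XU.shape[1]), XU.T @ Y)
        A = W.T                                                # (dX, dX+dU+1)
        fx, fu, fc = A[:, :dX], A[:, dX:dX + dU], A[:, -1]
        resid = Y - XU @ W
        F = resid.T @ resid / max(N - 1, 1)                    # residual covariance
        params.append((fx, fu, fc, F))
    return params
```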
1504.00702#26
1504.00702#28
1504.00702
[ "1509.06113" ]
1504.00702#28
End-to-End Training of Deep Visuomotor Policies
In order to compute a local quadratic expansion of the KL-divergence term DKL(p(ut|xt) ‖ πθ(ut|xt)) inside Lp(p, θ) for iLQG, we also estimate a linearization of the mean of the conditionally Gaussian policy πθ(ut|ot) with respect to the state xt, using the same procedure that we use to linearize the dynamics. The data for this estimation consists of tuples {xt^i, E_{πθ(ut|ot^i)}[ut]}, which we can obtain because both the states xt^i and the observations ot^i are available for all of the samples evaluated on the real physical system. This constrained optimization is performed in the "inner loop" of the optimization described in the previous section, and the KL-divergence constraint DKL(p(τ) ‖ p̂(τ)) ≤ ε imposes a step size on the trajectory update. The overall algorithm then becomes an instance of generalized BADMM (Wang and Banerjee, 2014). Note that the augmented Lagrangian Lp(p, θ) consists of an expectation under p(τ) of a quantity that is independent of p. We can locally approximate this quantity with a quadratic by using a quadratic expansion of ℓ(xt, ut), and fitting a linear-Gaussian to πθ(ut|xt) with the same method we used for the dynamics. We can then solve the primal optimization in the dual gradient descent procedure with a standard LQR backward pass. This is significantly simpler and much faster than the forward-backward dynamic programming procedure employed in previous work (Levine and Abbeel, 2014; Levine and Koltun, 2014). This improvement is enabled by the use of BADMM, which allows us to always formulate the KL-divergence term in the Lagrangian with the distribution being optimized as the first argument. Since the KL-divergence is convex in its first argument, this makes the corresponding optimization significantly easier. The details of this LQR-based dual gradient descent algorithm are derived in Appendix A.4. We can further improve the efficiency of the method by allowing samples from multiple trajectories pi(τ) to be used to fit a shared dynamics p(xt+1|xt, ut), while the controllers pi(ut|xt) are allowed to vary. This makes sense when the initial states of these trajectories
1504.00702#27
1504.00702#29
1504.00702
[ "1509.06113" ]
1504.00702#29
End-to-End Training of Deep Visuomotor Policies
are similar, and they therefore visit similar regions. This allows us to draw just a single sample from each pi(τ) at each iteration, allowing us to handle many more initial states.

# 4.3 Supervised Policy Optimization

Since the policy parameters θ participate only in the constraints of the optimization problem in Equation (1), optimizing the policy corresponds to minimizing the KL-divergence between the policy and trajectory distribution, as well as the expectation of λμt^T ut. For a conditional Gaussian policy of the form πθ(ut|ot) = N(μπ(ot), Σπ(ot)), the objective is

Lθ(θ, p) = (1/2N) Σ_{i=1}^N Σ_{t=1}^T E_{pi(xt,ot)}[ tr[Cti^{-1} Σπ(ot)] − log|Σπ(ot)| + (μπ(ot) − μp_ti(xt))^T Cti^{-1} (μπ(ot) − μp_ti(xt)) + 2 λμt^T μπ(ot) ],

where μp_ti(xt) is the mean of pi(ut|xt) and Cti is the covariance, and the expectation is evaluated using samples from each pi(τ) with corresponding observations ot. The observations are sampled from p(ot|xt) by recording camera images on the real system. Since the input to μπ(ot) and Σπ(ot) is not the state xt, but only an observation ot, we can train the policy to directly use raw observations. Note that Lθ(θ, p) is simply a weighted quadratic loss on the difference between the policy mean and the mean action of the trajectory distribution, offset by the Lagrange multiplier. The weighting is the precision matrix of the conditional in the trajectory distribution, which is equal to the curvature of its cost-to-go function (Levine and Koltun, 2013a). This has an intuitive interpretation: Lθ(θ, p) penalizes deviation from the trajectory distribution, with a penalty that is locally proportional to its cost-to-go. At convergence, when the policy π
1504.00702#28
1504.00702#30
1504.00702
[ "1509.06113" ]
1504.00702#30
End-to-End Training of Deep Visuomotor Policies
θ(ut|ot) takes the same actions as pi(ut|xt), their Q-functions are equal, and the supervised policy objective becomes equivalent to the policy iteration objective (Levine and Koltun, 2014) In this work, we optimize Lθ(θ, p) with respect to θ using stochastic gradient descent (SGD), a standard method for neural network training. The covariance of the Gaussian policy does not depend on the observation in our implementation, though adding this de- pendence would be straightforward. Since training complex neural networks requires a substantial number of samples, we found it beneï¬ cial to include sampled observations from previous iterations into the policy optimization, evaluating the action µp ti(xt) at their corre- sponding states using the current trajectory distributions. Since these samples come from the wrong state distribution, we use importance sampling and weight them according to the ratio of their probability under the current distribution p(xt) and the one they were sampled from, which is straightforward to evaluate under the estimated linear-Gaussian dynamics (Levine and Koltun, 2013b). # 4.4 Comparison with Prior Guided Policy Search Methods We presented a guided policy search method where the policy is trained on observations, while the trajectories are trained on the full state. The BADMM formulation of guided policy search is new to this work, though several prior guided policy search methods based on constrained optimization have been proposed. Levine and Koltun (2014) proposed a formulation similar to Equation (1), but with a constraint on the KL-divergence between
1504.00702#29
1504.00702#31
1504.00702
[ "1509.06113" ]
1504.00702#31
End-to-End Training of Deep Visuomotor Policies
12 # End-to-End Training of Deep Visuomotor Policies p(Ï ) and Ï Î¸. This results in a more complex, non-convex forward-backward trajectory optimization phase. Since the BADMM formulation solves a convex problem during the trajectory optimization phase, it is substantially faster and easier to implement and use, especially when the number of trajectories pi(Ï ) is large. The use of ADMM for guided policy search was also proposed by Mordatch and Todorov (2014) for deterministic policies under known dynamics. This approach requires known, de- terministic dynamics and trains deterministic policies. Furthermore, because this approach uses a simple quadratic augmented Lagrangian term, it further requires penalty terms on the gradient of the policy to account for local feedback. Our approach enforces this feed- back behavior due to the higher moments included in the KL-divergence term, but does not require computing the second derivative of the policy.
1504.00702#30
1504.00702#32
1504.00702
[ "1509.06113" ]
1504.00702#32
End-to-End Training of Deep Visuomotor Policies
# 5. End-to-End Visuomotor Policies Guided policy search allows us to optimize complex, high-dimensional policies with raw observations, such as when the input to the policy consists of images from a robotâ s onboard camera. However, leveraging this capability to directly learn policies for visuomotor control requires designing a policy representation that is both data-eï¬ cient and capable of learning complex control strategies directly from raw visual inputs. In this section, we describe a deep convolutional neural network (CNN) model that is uniquely suited to this task. Our approach combines a novel spatial soft-argmax layer with a pretraining procedure that provides for ï¬
1504.00702#31
1504.00702#33
1504.00702
[ "1509.06113" ]
1504.00702#33
End-to-End Training of Deep Visuomotor Policies
exibility and data-eï¬ ciency. # 5.1 Visuomotor Policy Architecture Our visuomotor policy runs at 20 Hz on the robot, mapping monocular RGB images and the robot conï¬ gurations to joint torques on a 7 DoF arm. The conï¬ guration includes the angles of the joints and the pose of the end-eï¬ ector (deï¬ ned by 3 points in the space of the end-eï¬ ector), as well as their velocities, but does not include the position of the target ob- ject or goal, which must be determined from the image. CNNs often use pooling to discard the locational information that is necessary to determine positions, since it is an irrelevant distractor for tasks such as object classiï¬ cation (Lee et al., 2009). Because locational in- formation is important for control, our policy does not use pooling. Additionally, CNNs built for spatial tasks such as human pose estimation often also rely on the availability of location labels in image-space, such as hand-labeled keypoints (Tompson et al., 2014). We propose a novel CNN architecture capable of estimating spatial information from an image without direct supervision in image space. Our pose estimation experiments, discussed in Section 5.2, show that this network can learn useful visual features using only 3D position information provided by the robot, and no camera calibration. Further training the network with guided policy search to directly output motor torques causes it to acquire task-speciï¬ c visual features. Our experiments in Section 6.4 show that this improves performance beyond the level achieved with features trained only for pose estimation. Our network architecture is shown in Figure 2. The visual processing layers of the network consist of three convolutional layers, each of which learns a bank of ï¬ lters that are applied to patches centered on every pixel of its input. These ï¬ lters form a hierarchy of local image features. Each convolutional layer is followed by a rectifying nonlinearity of
1504.00702#32
1504.00702#34
1504.00702
[ "1509.06113" ]
1504.00702#34
End-to-End Training of Deep Visuomotor Policies
the form a_cij = max(0, z_cij) for each channel c and each pixel coordinate (i, j). The third convolutional layer contains 32 response maps with resolution 109 × 109. These response maps are passed through a spatial softmax function of the form s_cij = e^{a_cij} / Σ_{i′j′} e^{a_ci′j′}. Each output channel of the softmax is a probability distribution over the location of a feature in the image. To convert from this distribution to a coordinate representation (f_cx, f_cy), the network calculates the expected image position of each feature, yielding a 2D coordinate for each channel: f_cx = Σ_{ij} s_cij x_ij and f_cy = Σ_{ij} s_cij y_ij, where (x_ij, y_ij) is the image-space position of the point (i, j) in the response map. Since this is a linear operation, it corresponds to a fixed, sparse fully connected layer with weights W^x_cij = x_ij and W^y_cij = y_ij. The combination of the spatial softmax and expectation operator implements a kind of soft-argmax. The spatial feature points (f_cx, f_cy) are concatenated with the robot's configuration and fed into two fully connected layers, each with 40 rectified units, followed by linear connections to the torques. The full network contains about 92,000 parameters, of which 86,000 are in the convolutional layers. The spatial softmax and the expected position computation serve to convert pixel-wise representations in the convolutional layers to spatial coordinate representations, which can be manipulated by the fully connected layers into 3D positions or motor torques. The softmax also provides lateral inhibition, which suppresses low, erroneous activations, only keeping strong activations that are more likely to be accurate. This makes our policy more robust to distractors, providing generalization to novel visual variation. We compare our architecture with more standard alternatives in Section 6.3 and evaluate robustness to visual distractors in Section 6.4. However, the proposed architecture is also in some sense more specialized for visuomotor control, in contrast to more general standard convolutional networks. For example, not all perception tasks require information that can be coherently summarized by a set of spatial locations.

# 5.2 Visuomotor Policy Training
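The spatial softmax and expected-position computation described in Section 5.1 above amount to a per-channel softmax followed by a weighted average of pixel coordinates. A minimal sketch, using an illustrative [-1, 1] coordinate convention:

```python
import numpy as np

def spatial_feature_points(activations):
    """Spatial softmax + expected position over a stack of response maps.

    activations: array (C, H, W) of post-ReLU response maps a_cij.
    Returns an array (C, 2) of feature points (f_cx, f_cy).
    """
    C, H, W = activations.shape
    flat = activations.reshape(C, -1)
    flat = flat - flat.max(axis=1, keepdims=True)          # numerical stability
    s = np.exp(flat) / np.exp(flat).sum(axis=1, keepdims=True)
    s = s.reshape(C, H, W)                                  # s_cij
    xs, ys = np.meshgrid(np.linspace(-1, 1, W), np.linspace(-1, 1, H))
    f_x = (s * xs).sum(axis=(1, 2))                         # f_cx = sum_ij s_cij * x_ij
    f_y = (s * ys).sum(axis=(1, 2))                         # f_cy = sum_ij s_cij * y_ij
    return np.stack([f_x, f_y], axis=1)

# Example: 32 response maps of size 109x109, as in the third conv layer.
points = spatial_feature_points(np.random.rand(32, 109, 109))
print(points.shape)  # (32, 2)
```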
1504.00702#33
1504.00702#35
1504.00702
[ "1509.06113" ]
1504.00702#35
End-to-End Training of Deep Visuomotor Policies
The guided policy search trajectory optimization phase uses the full state of the system, though the ï¬ nal policy only uses the observations. This type of instrumented training is a natural choice for many robotics tasks, where the robot is trained under controlled conditions, but must then act intelligently in uncon- trolled, real-world situations. In our tasks, the unobserved vari- ables are the pose of a target object (e.g. the bottle on which a cap must be placed). During training, this target object is typi- cally held in the robotâ s left gripper, while the robotâ s right arm performs the task, as shown to the right. This allows the robot to move the target through a range of known positions.
1504.00702#34
1504.00702#36
1504.00702
[ "1509.06113" ]
1504.00702#36
End-to-End Training of Deep Visuomotor Policies
The ï¬ nal visuomotor policy does not receive this position as input, but must instead use the camera images. Due to the modest amount of training data, distractors that are correlated with task-relevant variables can hamper generalization. For this reason, the left arm is covered with cloth to prevent the policy from associating its appearance with the objectâ s position. 14 End-to-End Training of Deep Visuomotor Policies While we can train the visuomotor policy entirely from scratch, the algorithm would spend a large number of iterations learning basic visual features and arm motions that can more eï¬ ciently be learned by themselves, before being incorporated into the policy. To speed up learning, we initialize both the vision layers in the policy and the trajectory distributions for guided policy search by leveraging the fully observed training setup. To initialize the vision layers, the robot moves the target object through a range of random positions, recording camera images and the objectâ s pose, which is computed automatically from the pose of the gripper. This dataset is used to train a pose regression CNN, which consists of the same vision layers as the policy, followed by a fully connected layer that outputs the 3D points that deï¬
1504.00702#35
1504.00702#37
1504.00702
[ "1509.06113" ]
1504.00702#37
End-to-End Training of Deep Visuomotor Policies
ne the target. Since the training set is still small (we use 1000 images collected from random arm motions), we initialize the ï¬ lters in the ï¬ rst layer with weights from the model of Szegedy et al. (2014), which is trained on ImageNet (Deng et al., 2009) classiï¬ cation. After training on pose regression, the weights in the convolutional layers are transferred to the policy CNN. This enables the robot to learn the appearance of the objects prior to learning the behavior. To initialize the linear-Gaussian controllers for each of the initial states, we take 15 iterations of guided policy search without optimizing the visuomotor policy. This allows for much faster training in the early iterations, when the trajectories are not yet successful, and optimizing the full visuomotor policy is unnecessarily time consuming. Since we still want the trajectories to arrive at compatible strategies for each target position, we replace the visuomotor policy during these iterations with a small network that receives the full state, which consisted of two layers with 40 rectiï¬ ed linear hidden units in our experiments. This network serves only to constrain the trajectories and avoid divergent behaviors from emerging for similar initial states, which would make subsequent policy learning diï¬
1504.00702#36
1504.00702#38
1504.00702
[ "1509.06113" ]
1504.00702#38
End-to-End Training of Deep Visuomotor Policies
cult. After initialization, we train the full visuomotor policy with guided policy search. During the supervised policy optimization phase, the fully connected motor control layers are first optimized by themselves, since they are not initialized with pretraining. This can be done very quickly because these layers are small. Then, the entire network is further optimized end-to-end. We found that first training the upper layers before end-to-end optimization prevented the convolutional layers from forgetting useful features learned during pretraining, when the error signal due to the untrained upper layers is very large. The entire pretraining scheme is summarized in the diagram on the right. Note that the trajectories can be pretrained in parallel with the vision layer pretraining, which does not require access to the physical system. Furthermore, the entire initialization procedure does not use any additional information that is not already available from the robot.

[Diagram: collect visual pose data and pretrain trajectories (both require the robot); train pose CNN → initial visual features; pretrain trajectories → initial trajectories; both feed into end-to-end training → policy.]
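The staged supervised optimization just described (first the fully connected motor layers, then the whole network end-to-end) can be sketched as follows. The step counts, learning rate, and function names are assumptions, and the precision-weighted quadratic objective is abstracted into a user-supplied loss_fn.

```python
import torch

def staged_policy_optimization(policy_net, conv_params, head_params,
                               batches, loss_fn,
                               head_steps=2000, end_to_end_steps=10000, lr=1e-3):
    """Two-stage supervised policy optimization (sketch).

    policy_net: torch.nn.Module mapping (image, robot_config) -> action mean.
    conv_params / head_params: pretrained visual parameters and untrained
    fully connected parameters, respectively.
    batches: endless iterator of (image, config, target_action, precision)
    tuples sampled from the guiding distributions.
    """
    conv_params, head_params = list(conv_params), list(head_params)
    # Stage 1: optimize only the small fully connected motor layers.
    opt = torch.optim.SGD(head_params, lr=lr)
    for _, (img, cfg, u_target, prec) in zip(range(head_steps), batches):
        opt.zero_grad()
        loss_fn(policy_net(img, cfg), u_target, prec).backward()
        opt.step()
    # Stage 2: optimize the entire network end-to-end.
    opt = torch.optim.SGD(conv_params + head_params, lr=lr)
    for _, (img, cfg, u_target, prec) in zip(range(end_to_end_steps), batches):
        opt.zero_grad()
        loss_fn(policy_net(img, cfg), u_target, prec).backward()
        opt.step()
    return policy_net
```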
1504.00702#37
1504.00702#39
1504.00702
[ "1509.06113" ]
1504.00702#39
End-to-End Training of Deep Visuomotor Policies
# 6. Experimental Evaluation

In this section, we present a series of experiments aimed at evaluating our approach and answering the following questions:

1. How does the guided policy search algorithm compare to other policy search methods for training complex, high-dimensional policies, such as neural networks?

2. Does our trajectory optimization algorithm work on a real robotic platform with unknown dynamics, for a range of different tasks?

3.
1504.00702#38
1504.00702#40
1504.00702
[ "1509.06113" ]
1504.00702#40
End-to-End Training of Deep Visuomotor Policies
How does our spatial softmax architecture compare to other, more standard convolu- tional neural network architectures? 4. Does training the perception and control systems in a visuomotor policy jointly end- to-end provide better performance than training each component separately? Evaluating a wide range of policy search algorithms on a real robot would be extremely time consuming, particularly for methods that require a large number of samples. We therefore answer question (1) by using a physical simulator and simpler policies that do not use vision. This also allows us to test the generality of guided policy search on tasks that include manipulation, walking, and swimming. To answer question (2), we present a wide range of experiments on a PR2 robot. These experiments allow us to evaluate the sample eï¬ ciency of our trajectory optimization algorithm. To address question (3), we compare a range of diï¬ erent policy architectures on the task of localizing a target object (the cube in the shape sorting cube task). Since localizing the target object is a prerequisite for completing the shape sorting cube task, this serves as a good proxy for evaluating diï¬ erent architectures. Finally, we answer the last and most important question (4) by training visuomotor policies for hanging a coat hanger on a clothes rack, inserting a block into a shape sorting cube, ï¬ tting the claw of a toy hammer under a nail with various grasps, and screwing on a bottle cap. These tasks are illustrated in Figure 8.
1504.00702#39
1504.00702#41
1504.00702
[ "1509.06113" ]
1504.00702#41
End-to-End Training of Deep Visuomotor Policies
# 6.1 Simulated Comparisons to Prior Policy Search Methods In this section, we compare our method against prior policy search techniques on a range of simulated robotic control tasks. These results previously appeared in our conference paper that introduced the trajectory optimization procedure with local linear models (Levine and Abbeel, 2014). In these tasks, the state xt consists of the joint angles and velocities of each robot, and the actions ut consist of the torques at each joint. The neural network policies used one hidden layer and soft rectiï¬ er nonlinearities of the form a = log(1 + exp(z)). Since these policies use the state as input, they only have a few hundred parameters, far fewer than our visuomotor policies. However, even this number of parameters can pose a major challenge for prior policy search methods (Deisenroth et al., 2013). Experimental tasks. We simulated 2D and 3D peg insertion, octopus arm control, and planar swimming and walking.
1504.00702#40
1504.00702#42
1504.00702
[ "1509.06113" ]
1504.00702#42
End-to-End Training of Deep Visuomotor Policies
The diï¬ culty in the peg insertion tasks stems from the need to align the peg with the slot and the complex contacts between the peg and the walls, which result in discontinuous dynamics. Octopus arm control involves moving the tip of a ï¬ exible arm to a goal position (Engel et al., 2005). The challenge in this task stems from its high dimensionality: the arm has 25 degrees of freedom, corresponding to 50 state dimensions. The swimming task requires controlling a three-link snake, and the walking task requires a seven-link biped to maintain a target velocity. The challenge in these tasks comes from underactuation. Details of the simulation and cost for each task are in Appendix B.1.
1504.00702#41
1504.00702#43
1504.00702
[ "1509.06113" ]
1504.00702#43
End-to-End Training of Deep Visuomotor Policies
Figure 4: Results for learning linear-Gaussian controllers for 2D and 3D insertion, octopus arm, and swimming. Our approach uses fewer samples and finds better solutions than prior methods, and the GMM further reduces the required sample count. Images in the lower-right show the last time step for each system at several iterations of our method, with red lines indicating end effector trajectories.

Prior methods. We compare to REPS (Peters et al., 2010), reward-weighted regression (RWR) (Peters and Schaal, 2007; Kober and Peters, 2009), the cross-entropy method (CEM) (Rubinstein and Kroese, 2004), and PILCO (Deisenroth and Rasmussen, 2011). We also use iLQG (Li and Todorov, 2004) with a known model as a baseline, shown as a black horizontal line in all plots. REPS is a model-free method that, like our approach, enforces a KL-divergence constraint between the new and old policy. We compare to a variant of REPS that also fits linear dynamics to generate 500 pseudo-samples (Lioutikov et al., 2014), which we label "REPS (20 + 500)." RWR is an EM algorithm that fits the policy to previous samples weighted by the exponential of their reward, and CEM fits the policy to the best samples in each batch. With Gaussian trajectories, CEM and RWR only differ in the weights. These methods represent a class of RL algorithms that fit the policy to weighted samples, including PoWER and PI2 (Kober and Peters, 2009; Theodorou et al., 2010; Stulp and Sigaud, 2012). PILCO is a model-based method that uses a Gaussian process to learn a global dynamics model that is used to optimize the policy. We used the open-source implementation of PILCO provided by the authors.
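The difference between RWR and CEM noted above is only in how samples are weighted before the (Gaussian) policy is refit to them. A minimal sketch of the two weighting schemes; the temperature and elite fraction are illustrative hyperparameters, not values used in the experiments.

```python
import numpy as np

def rwr_weights(rewards, temperature=1.0):
    """Reward-weighted regression: weight each sample by the exponential of its
    reward (rewards is a 1-D numpy array)."""
    w = np.exp((rewards - rewards.max()) / temperature)
    return w / w.sum()

def cem_weights(rewards, elite_frac=0.1):
    """Cross-entropy method: keep only the best samples in the batch,
    weighted uniformly."""
    n_elite = max(1, int(len(rewards) * elite_frac))
    elite = np.argsort(rewards)[-n_elite:]
    w = np.zeros_like(rewards, dtype=float)
    w[elite] = 1.0 / n_elite
    return w
```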
1504.00702#42
1504.00702#44
1504.00702
[ "1509.06113" ]
1504.00702#44
End-to-End Training of Deep Visuomotor Policies
Both REPS and PILCO require solving large nonlinear optimizations at each iteration, while our method does not. Our method used 5 rollouts with the Gaussian mixture model prior, and 20 without. Due to its computational cost, PILCO was provided with 5 rollouts per iteration, while other prior methods used 20 and 100. For all prior methods with free hyperparameters (such as the fraction of elites for CEM), we performed hyperparameter sweeps and chose the most successful settings for the comparison.
1504.00702#43
1504.00702#45
1504.00702
[ "1509.06113" ]
1504.00702#45
End-to-End Training of Deep Visuomotor Policies
Gaussian trajectory distributions. In the ï¬ rst set of comparisons, we evaluate only the trajectory optimization procedure for training linear-Gaussian controllers under unknown dynamics to determine its sample-eï¬ ciency and applicability to complex, high-dimensional problems. The results of this comparison for the peg insertion, octopus arm, and swimming 17 # Levine, Finn, Darrell, and Abbeel #1 #2 #3 #1 #2 #3 #4 #4 Figure 5:
1504.00702#44
1504.00702#46
1504.00702
[ "1509.06113" ]
1504.00702#46
End-to-End Training of Deep Visuomotor Policies
Comparison on neural network policies. For insertion, the policy was trained to search for an unknown slot position on four slot positions (shown above). Generalization to new positions is graphed with dashed lines. Note how the end eï¬ ector (red) follows the surface to ï¬ nd the slot, and how the swimming gait is smoother due to the stationary policy. tasks appears in Figure 4. The horizontal axis shows the total number of samples, and the vertical axis shows the minimum distance between the end of the peg and the bottom of the slot, the distance to the target for the octopus arm, or the total distance travelled by the swimmer. Since the peg is 0.5 units long, distances above this amount correspond to controllers that cannot perform an insertion.
1504.00702#45
1504.00702#47
1504.00702
[ "1509.06113" ]
1504.00702#47
End-to-End Training of Deep Visuomotor Policies
Our method learned much more eï¬ ective controllers with fewer samples, especially when using the Gaussian mixture model prior. On 3D insertion, it outperformed the iLQG baseline, which used a known model. Contact discontinuities cause problems for derivative-based methods like iLQG, as well as methods like PILCO that learn a smooth global dynamics model. We use a time-varying local model, which preserves more detail, and ï¬ tting the model to samples has a smoothing eï¬ ect that mitigates discontinuity issues. Prior policy search methods could servo to the hole, but were unable to insert the peg. On the octopus arm, our method succeeded despite the high dimensionality of the state and action spaces.1 Our method also successfully learned a swimming gait, while prior model-free methods could not initiate forward motion. PILCO also learned an eï¬ ective gait due to the smooth dynamics of this task, but its GP- based optimization required orders of magnitude more computation time than our method, taking about 50 minutes per iteration. In the case of prior model-free methods, the high dimensionality of the time-varying linear-Gaussian controllers likely caused considerable diï¬ culty (Deisenroth et al., 2013), while our approach exploits the structure of linear- Gaussian controllers for eï¬ cient learning. 1.
1504.00702#46
1504.00702#48
1504.00702
[ "1509.06113" ]
1504.00702#48
End-to-End Training of Deep Visuomotor Policies
The high dimensionality of the octopus arm made it diï¬ cult to run PILCO, though in principle, such methods should perform well on this task given the armâ s smooth dynamics. 18 End-to-End Training of Deep Visuomotor Policies Neural network policies. In the second set of comparisons, shown in Figure 5, we compare guided policy search to RWR and CEM2 on the challenging task of training high- dimensional neural network policies for the peg insertion and locomotion tasks. The vari- ant of guided policy search used in this comparison diï¬ ers somewhat from the version described in Section 4, in that it used a simpler dual gradient descent formulation, rather than BADMM. In practice, we found the performance of these methods to be very similar, though the BADMM variant was substantially faster and easier to implement. On swimming, our method achieved similar performance to the linear-Gaussian case, but since the neural network policy was stationary, the resulting gait was much smoother. Previous methods could only solve this task with 100 samples per iteration, with RWR eventually obtaining a distance of 0.5m after 4000 samples, and CEM reaching 2.1m after 3000. Our method was able to reach such distances with many fewer samples. Following prior work (Levine and Koltun, 2013a), the walker trajectory was initialized from a demon- stration, which was stabilized with simple linear feedback. The RWR and CEM policies were initialized with samples from this controller to provide a fair comparison. The graph shows the average distance travelled on rollouts that did not fall, and shows that only our method was able to learn walking policies that succeeded consistently. On peg insertion, the neural network was trained to insert the peg without precise knowledge of the position of the hole, resulting in a partially observed problem. The holes were placed in a region of radius 0.2 units in 2D and 0.1 units in 3D. The policies were trained on four diï¬ erent hole positions, and then tested on four new hole positions to evaluate generalization. The hole position was not provided to the neural network, and the policies therefore had to search for the hole, with only joint angles and velocities as input.
1504.00702#47
1504.00702#49
1504.00702
[ "1509.06113" ]
1504.00702#49
End-to-End Training of Deep Visuomotor Policies
Only our method could acquire a successful strategy to locate both the training and test holes, although RWR was eventually able to insert the peg into one of the four holes in 2D. These comparisons show that training even medium-sized neural network policies for continuous control tasks with a limited number of samples is very diï¬ cult for many prior policy search algorithms. Indeed, it is generally known that model-free policy search meth- In ods struggle with policies that have over 100 parameters (Deisenroth et al., 2013). subsequent sections, we will evaluate our method on real robotic tasks, showing that it can scale from these simulated tasks all the way up to end-to-end learning of visuomotor control. # 6.2 Learning Linear-Gaussian Controllers on a PR2 Robot In this section, we demonstrate the range of manipulation tasks that can be learned using our trajectory optimization algorithm on a real PR2 robot. These experiments previously appeared in our conference paper on guided policy search (Levine et al., 2015). Since performing trajectory optimization is a prerequisite for guided policy search to learn eï¬ ective visuomotor policies, it is important to evaluate that our trajectory optimization can learn a wide variety of robotic manipulation tasks under unknown dynamics. The tasks in these experiments are shown in Figure 6, while Figure 7 shows the learning curves for each task. For all robotic experiments in this article, the tasks were learned entirely from scratch,
1504.00702#48
1504.00702#50
1504.00702
[ "1509.06113" ]
1504.00702#50
End-to-End Training of Deep Visuomotor Policies
2. PILCO cannot optimize neural network policies, and we could not obtain reasonable results with REPS. Prior applications of REPS generally focus on simpler, lower-dimensional policy classes (Peters et al., 2010; Lioutikov et al., 2014).

Figure 6: Tasks for linear-Gaussian controller evaluation: (a) stacking lego blocks on a fixed base, (b) onto a free-standing block, (c) held in both grippers; (d) threading wooden rings onto a peg; (e) attaching the wheels to a toy airplane; (f) inserting a shoe tree into a shoe; (g,h) screwing caps onto pill bottles and (i) onto a water bottle.
1504.00702#49
1504.00702#51
1504.00702
[ "1509.06113" ]
1504.00702#51
End-to-End Training of Deep Visuomotor Policies
â ego block (xed) â â toy airplane ~ 8h â â lego block (free) â â shoe tree E â â lego block (hand) + â =â pill bottle 3 ° â â ~ Fing on peg â â â waler bottle 5 4 ~° 2 olâ DE nS eS samples Figure 7: Distance to target point during training of linear-Gaussian controllers. The actual target may diï¬ er due to perturbations. Error bars indicate one standard deviation. The linear-Gaussian controllers are optimized for a speciï¬
1504.00702#50
1504.00702#52
1504.00702
[ "1509.06113" ]
1504.00702#52
End-to-End Training of Deep Visuomotor Policies
c condition â e.g., a speciï¬ c position of the target lego block. To evaluate their robustness to errors in the speciï¬ ed target position, we conducted experiments on the lego block and ring tasks where the target object (the lower block and the peg) was perturbed at each trial during training, and then tested with various perturbations. For each task, controllers were trained with Gaussian perturbations with standard deviations of 0, 1, and 2 cm in the position of the target object, and each controller was tested with perturbations of radius 0, 1, 2, and 3 cm. Note that with a radius of 2 cm, the peg would be placed about one ring-width away from the expected position. The results are shown in Table 2. All controllers were robust to perturbations of 1 cm, and would often succeed at 2 cm. Robustness increased slightly when more noise was injected during training, but even controllers trained without noise exhibited considerable robustness, since the linear-Gaussian controllers themselves add noise during sampling. We also evaluated a kinematic baseline for each perturbation level, which planned a straight path from a point 5 cm above the target to the expected (unperturbed) target location. This baseline was only able to place the lego block in the absence of perturbations. The rounded top of the peg provided an easier condition for the baseline, with occasional successes at higher perturbation levels. Our controllers outperformed the baseline by a wide margin. All of the robotic experiments discussed in this section may be viewed in the corre- sponding supplementary video, available online: http://rll.berkeley.edu/icra2015gps. A video illustration of the visuomotor policies, discussed in the following sections, is also available: http://sites.google.com/site/visuomotorpolicy.
1504.00702#51
1504.00702#53
1504.00702
[ "1509.06113" ]
1504.00702#53
End-to-End Training of Deep Visuomotor Policies
| training \ test perturbation | lego block 0 cm | lego 1 cm | lego 2 cm | lego 3 cm | ring on peg 0 cm | ring 1 cm | ring 2 cm | ring 3 cm |
|---|---|---|---|---|---|---|---|---|
| 0 cm | 5/5 | 5/5 | 3/5 | 2/5 | 5/5 | 5/5 | 0/5 | 0/5 |
| 1 cm | 5/5 | 5/5 | 3/5 | 2/5 | 5/5 | 5/5 | 3/5 | 0/5 |
| 2 cm | 5/5 | 5/5 | 5/5 | 3/5 | 5/5 | 5/5 | 3/5 | 0/5 |
| kinematic baseline | 5/5 | 0/5 | 0/5 | 0/5 | 5/5 | 3/5 | 0/5 | 0/5 |
1504.00702#52
1504.00702#54
1504.00702
[ "1509.06113" ]
1504.00702#54
End-to-End Training of Deep Visuomotor Policies
Table 2: Success rates of linear-Gaussian controllers under target object perturbation. 6.3 Spatial Softmax CNN Architecture Evaluation In this section, we evaluate the neural network architecture that we propose in Section 5.1 in comparison to more standard convolutional networks. To isolate the architectures from other confounding factors, we measure their accuracy on the pose estimation pretraining task described in Section 5.2. This is a reasonable proxy for evaluating how well the network can overcome two major challenges in visuomotor learning: the ability to handle relatively small datasets without overï¬ tting, and the capability to learn tasks that are inherently spatial. We compare to a network where the expectation operator after the softmax is replaced with a learned fully connected layer, as is standard in the literature, a network where both the softmax and the expectation operators are replaced with a fully connected layer, and a version of this network that also uses 3 à 3 max pooling with stride 2 at the ï¬ rst two layers. These alternative architectures have many more parameters, since the fully connected layer takes the entire bank of response maps from the third convolutional layer as input. Pooling helps to reduce the number of parameters, but not to the same degree as the spatial softmax and expectation operators in our architecture. The results in Table 3 indicate that using the softmax and expectation operators im- proves pose estimation accuracy substantially. Our network is able to outperform the more standard architectures because it is forced by the softmax and expectation operators to learn feature points, which provide a concise representation suitable for spatial inference. Since most of the parameters in this architecture are in the convolutional layers, which ben- eï¬ t from extensive weight sharing, overï¬ tting is also greatly reduced. By removing pooling, our network also maintains higher resolution in the convolutional layers, improving spatial accuracy. Although we did attempt to regularize the larger standard architectures with higher weight decay and dropout, we did not observe a signiï¬ cant improvement on this dataset. We also did not extensively optimize the parameters of this network, such as ï¬ l- ter size and number of channels, and investigating these design decisions further would be valuable to investigate in future work.
1504.00702#53
1504.00702#55
1504.00702
[ "1509.06113" ]
1504.00702#55
End-to-End Training of Deep Visuomotor Policies
| network architecture | test error (cm) |
|---|---|
| softmax + feature points (ours) | 1.30 ± 0.73 |
| softmax + fully connected layer | 2.59 ± 1.19 |
| fully connected layer | 4.75 ± 2.29 |
| max-pooling + fully connected | 3.71 ± 1.73 |
1504.00702#54
1504.00702#56
1504.00702
[ "1509.06113" ]
1504.00702#56
End-to-End Training of Deep Visuomotor Policies
21 Levine, Finn, Darrell, and Abbeel (a) hanger (b) cube (c) hammer (d) bottle Figure 8: Illustration of the tasks in our visuomotor policy experiments, showing the vari- ation in the position of the target for the hanger, cube, and bottle tasks, as well as two of the three grasps for the hammer, which also included variation in position (not shown). # 6.4 Deep Visuomotor Policy Evaluation In this section, we present an evaluation of our full visuomotor policy training algorithm on a PR2 robot. The aim of this evaluation is to answer the following question: does training the perception and control systems in a visuomotor policy jointly end-to-end provide better performance than training each component separately? Experimental tasks. We trained policies for hanging a coat hanger on a clothes rack, inserting a block into a shape sorting cube, ï¬ tting the claw of a toy hammer under a nail with various grasps, and screwing on a bottle cap. The cost function for these tasks encourages low distance between three points on the end-eï¬ ector and corresponding target points, low torques, and, for the bottle task, spinning the wrist. The equations for these cost functions and the details of each task are presented in Appendix B.2. The tasks are illustrated in Figure 8. Each task involved variation of 10-20 cm in each direction in the position of the target object (the rack, shape sorting cube, nail, and bottle). In addition, the coat hanger and hammer tasks were trained with two and three grasps, respectively. The current angle of the grasp was not provided to the policy, but had to be inferred from observing the robotâ s gripper in the camera images. All tasks used the same policy architecture and model parameters. Experimental conditions. We evaluated the visuomotor policies in three conditions: (1) the training target positions and grasps, (2) new target positions not seen during training and, for the hammer, new grasps (spatial test), and (3) training positions with visual distractors (visual test). A selection of these experiments is shown in the supplementary video.3 For the visual test, the shape sorting cube was placed on a table rather than held in 3.
1504.00702#55
1504.00702#57
1504.00702
[ "1509.06113" ]
1504.00702#57
End-to-End Training of Deep Visuomotor Policies
3. The video can be viewed at http://sites.google.com/site/visuomotorpolicy

For the visual test, the shape sorting cube was placed on a table rather than held in the gripper, the coat hanger was placed on a rack with clothes, and the bottle and hammer tasks were done in the presence of clutter. Illustrations of this test are shown in Figure 9.

Comparison. The success rates for each test are shown in Figure 9. We compared to two baselines, both of which train the vision layers in advance for pose prediction, instead of training the entire policy end-to-end. The features baseline discards the last layer of the pose predictor and uses the feature points, resulting in the same architecture as our policy, while the prediction baseline feeds the predicted pose into the control layers. The pose prediction baseline is analogous to a standard modular approach to policy learning, where the vision system is first trained to localize the target, and the policy is trained on top of it. This variant achieves poor performance. As discussed in Section 6.3, the pose estimate is accurate to about 1 cm. However, unlike the tasks in Section 6.2, where robust controllers could succeed even with inaccurate perception, many of these tasks have tolerances of just a few millimeters. In fact, the pose prediction baseline is only successful on the coat hanger, which requires comparatively little accuracy. Millimeter accuracy is difficult to achieve even with calibrated cameras and checkerboards. Indeed, prior work has reported that the PR2 can maintain a camera to end effector accuracy of about 2 cm during open loop motion (Meeussen et al., 2010). This suggests that the failure of this baseline is not atypical, and that our visuomotor policies are learning visual features and control strategies that improve the robot's accuracy.

When provided with pose estimation features, the policy has more freedom in how it uses the visual information, and achieves somewhat higher success rates. However, full end-to-end training performs significantly better, achieving high accuracy even on the challenging bottle task, and successfully adapting to the variety of grasps on the hammer task. This suggests that, although the vision layer pretraining is clearly beneficial for reducing computation time, it is not sufficient by itself for discovering good features for visuomotor policies.
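To make the distinction between the two baselines and the end-to-end policy concrete, here is a hedged PyTorch-style sketch of the motor layers and of what each variant feeds into them; the layer sizes, hidden-unit count, and STATE_DIM placeholder are illustrative assumptions rather than the released architecture.

```python
import torch
import torch.nn as nn

class MotorLayers(nn.Module):
    """Fully connected layers mapping visual features + robot state to joint torques."""
    def __init__(self, visual_dim, state_dim, action_dim=7, hidden=40):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(visual_dim + state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim))

    def forward(self, visual_features, robot_state):
        return self.net(torch.cat([visual_features, robot_state], dim=-1))

# Pose prediction baseline: the frozen vision net's pose output (e.g. three 3D
# points -> 9 numbers) is all the motor layers see.
#   motor = MotorLayers(visual_dim=9, state_dim=STATE_DIM)
# Features baseline and end-to-end policy: the motor layers see the feature
# points instead (e.g. 32 points -> 64 numbers); only end-to-end training
# continues to update the convolutional layers during policy learning.
#   motor = MotorLayers(visual_dim=64, state_dim=STATE_DIM)
```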
1504.00702#56
1504.00702#58
1504.00702
[ "1509.06113" ]
1504.00702#58
End-to-End Training of Deep Visuomotor Policies
Visual distractors. The policies exhibit moderate tolerance to distractors that are visually separated from the target object. This is enabled in part by the spatial softmax, which has a lateral inhibition effect that suppresses non-maximal activations. Since distractors are unlikely to activate each feature as much as the true object, their activations are therefore suppressed. However, as expected, the learned policies tend to perform poorly under drastic changes to the backdrop, or when the distractors are adjacent to or occluding the manipulated objects, as shown in the supplementary video. A standard solution to this issue is to expose the policy to a greater variety of visual situations during training. This issue could also be mitigated by artificially augmenting the image samples with synthetic transformations, as discussed in prior work in computer vision (Simard et al., 2003), or even by incorporating ideas from transfer and semi-supervised learning.

# 6.5 Features Learned with End-to-End Training

The visual processing layers of our architecture automatically learn feature points using the spatial softmax and expectation operators. These feature points encapsulate all of the visual information received by the motor layers of the policy. In Figure 10, we show the feature points discovered by our visuomotor policy through guided policy search.
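Relatedly, a tiny numerical illustration of the lateral-inhibition effect noted in the distractor discussion above (illustrative activations, not measured ones): the per-channel softmax concentrates probability mass on the strongest activation, so a weaker distractor peak barely shifts the expected feature-point location.

```python
import numpy as np

# One 5x5 response map with a strong peak (true object) and a weaker one (distractor).
response = np.zeros((5, 5))
response[1, 1] = 6.0   # true object activation
response[3, 4] = 3.0   # distractor activation (weaker)

probs = np.exp(response - response.max())
probs /= probs.sum()

xs, ys = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
feature_point = (np.sum(probs * xs), np.sum(probs * ys))
print(feature_point)
# exp(6) dominates exp(3) after normalization, so the expected (x, y) stays
# close to the location of the stronger peak; the distractor is suppressed.
```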
1504.00702#57
1504.00702#59
1504.00702
[ "1509.06113" ]
1504.00702#59
End-to-End Training of Deep Visuomotor Policies
Figure 9: Training and visual test scenes as seen by the policy (left), and experimental results (right): success rates on training positions, on novel test positions, and in the presence of visual distractors for the coat hanger, shape sorting cube, toy hammer, and bottle cap tasks, comparing the end-to-end, pose features, and pose prediction variants. The number of trials per test is shown in parentheses. The hammer and bottle images were cropped for visualization only.

Each policy learns features on the target object and the robot manipulator, both clearly relevant to task execution. The policy tends to pick out robust, distinctive features on the objects, such as the left pole of the clothes rack, the left corners of the shape-sorting cube and the bottom-left corner of the toy tool bench. In the bottle task, the end-to-end trained policy outputs points on both sides of the bottle, including one on the cap, while the pose prediction network only finds points on the right edge of the bottle.

In Figure 11, we compare the feature points learned through guided policy search to those learned by a CNN trained for pose prediction. After end-to-end training, the policy acquired a distinctly different set of feature points compared to the pose prediction CNN used for initialization. The end-to-end trained model finds more feature points on task-relevant objects and fewer points on background objects. This suggests that the policy improves its performance by acquiring goal-driven visual features that differ from those learned for object localization.

The feature point representation is very simple, since it assumes that the learned features are present at all times, and only one instance of each feature is ever present in the image.
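For reference, a hedged sketch of the kind of overlay used to compare feature points from the two networks, as in Figures 10 and 11; the [-1, 1] coordinate convention and argument names are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

def overlay_feature_points(image, feature_points_xy, color, label):
    """Scatter (x, y) feature points, assumed to lie in [-1, 1] image
    coordinates, on top of an RGB camera frame."""
    h, w = image.shape[:2]
    px = (feature_points_xy[:, 0] + 1.0) * 0.5 * (w - 1)
    py = (feature_points_xy[:, 1] + 1.0) * 0.5 * (h - 1)
    plt.imshow(image)
    plt.scatter(px, py, c=color, s=25, label=label)

# e.g. overlay_feature_points(frame, policy_points, "red", "end-to-end policy")
#      overlay_feature_points(frame, pose_cnn_points, "blue", "pose prediction CNN")
#      plt.legend(); plt.show()
```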
1504.00702#58
1504.00702#60
1504.00702
[ "1509.06113" ]
1504.00702#60
End-to-End Training of Deep Visuomotor Policies
While this is a drastic simplification, both the pose predictor and the policy still achieve good results. A more flexible architecture that still learns a concise feature point representation could further improve policy performance. We hope to explore this in future work.

# 6.6 Computational Performance and Sample Efficiency

We used the Caffe deep learning library (Jia et al., 2014) for CNN training. Each visuomotor policy required a total of 3-4 hours of training time: 20-30 minutes for the pose prediction data collection on the robot, 40-60 minutes for the fully observed trajectory pretraining on
1504.00702#59
1504.00702#61
1504.00702
[ "1509.06113" ]