id: string (12–15 chars)
title: string (8–162 chars)
content: string (1–17.6k chars)
prechunk_id: string (0–15 chars)
postchunk_id: string (0–15 chars)
arxiv_id: string (10 chars)
references: sequence (length 1)
1504.00702#61
End-to-End Training of Deep Visuomotor Policies
Figure 10: Feature points tracked by the policy during task execution for each of the four tasks: (a) hanger, (b) cube, (c) hammer, (d) bottle. Each feature point is displayed in a different random color, with consistent coloring across images. The policy finds features on the target object and the robot gripper and arm. In the bottle cap task, note that the policy correctly ignores the distractor bottle in the background, even though it was not present during training. (a) hanger, (b) cube, (c) hammer, (d) bottle. Figure 11:
1504.00702#60
1504.00702#62
1504.00702
[ "1509.06113" ]
1504.00702#62
End-to-End Training of Deep Visuomotor Policies
Feature points learned for each task. For each input image, the feature points produced by the policy are shown in blue, while the feature points of the pose prediction network are shown in red. The end-to-end trained policy tends to discover more feature points on the target object and the robot arm than the pose prediction network. the robot and offline pose pretraining (which can be done in parallel), and between 1.5 and 2.5 hours for end-to-end training with guided policy search. The coat hanger task required two iterations of guided policy search, the shape sorting cube and the hammer required three, and the bottle task required four. Only about 15 minutes of the training time consisted of executing trials on the robot. Since training was dominated by computation, we expect significant speedup from a more efficient implementation.
1504.00702#61
1504.00702#63
1504.00702
[ "1509.06113" ]
1504.00702#63
End-to-End Training of Deep Visuomotor Policies
The number of samples for training each policy is shown in Table 4. Each trial was five seconds in length, and the numbers do not include the time needed to collect about 1000 images for pretraining the visual processing layers of the policy.

| task | trajectory pretraining | end-to-end training | total |
|---|---|---|---|
| coat hanger | 120 | 36 | 156 |
| shape cube | 90 | 81 | 171 |
| toy hammer | 150 | 90 | 240 |
| bottle cap | 180 | 108 | 288 |

Table 4: Total number of trials used for learning each visuomotor policy.

# 7. Discussion and Future Work
1504.00702#62
1504.00702#64
1504.00702
[ "1509.06113" ]
1504.00702#64
End-to-End Training of Deep Visuomotor Policies
In this paper, we presented a method for learning robotic control policies that use raw input from a monocular camera. These policies are represented by a novel convolutional neural network architecture, and can be trained end-to-end using our guided policy search algorithm, which decomposes the policy search problem into a trajectory optimization phase that uses full state information and a supervised learning phase that only uses the observations. This decomposition allows us to leverage state-of-the-art tools from supervised learning, making it straightforward to optimize extremely high-dimensional policies. Our experimental results show that our method can execute complex manipulation skills, and that end-to-end training produces significant improvements in policy performance compared to using fixed vision layers trained for pose prediction. Although we demonstrate moderate generalization over variations in the scene, our current method does not generalize to dramatically different settings, especially when visual distractors occlude the manipulated object or break up its silhouette in ways that differ from those seen during training. The success of CNNs on exceedingly challenging vision tasks suggests that this class of models is capable of learning invariance to irrelevant distractor features (LeCun et al., 2015), and in principle this issue can be addressed by training the policy in a variety of environments, though this poses certain logistical challenges. More practical alternatives that could be explored in future work include simultaneously training the policy on multiple robots, each of which is located in a different environment, developing more sophisticated regularization and pretraining techniques to avoid overfitting, and introducing artificial data augmentation to encourage the policy to be invariant to irrelevant clutter. However, even without these improvements, our method has numerous applications in, for example, an industrial setting where the robot must repeatedly and efficiently perform a task that requires visual feedback under moderate variation in background and clutter conditions. Our method takes advantage of a known, fully observed state space during training.
1504.00702#63
1504.00702#65
1504.00702
[ "1509.06113" ]
1504.00702#65
End-to-End Training of Deep Visuomotor Policies
This is both a weakness and a strength. It allows us to train linear-Gaussian controllers for guided policy search using a very small number of samples, far more efficiently than standard policy search methods. However, the requirement to observe the full state during training limits the tasks to which the method can be applied. In many cases, this limitation is minor, and the only "instrumentation" required at training time is to position the objects in the scene at consistent positions. However, tasks that require, for example, manipulating freely moving objects require more extensive instrumentation, such as motion capture. A promising direction for addressing this limitation is to combine our method with unsupervised state-space learning, as proposed in several recent works, including our own (Lange et al., 2012; Watter et al., 2015; Finn et al., 2015). In future work, we hope to explore more complex policy architectures, such as recurrent policies that can deal with extensive occlusions by keeping a memory of past observations. We also hope to extend our method to a wider range of tasks that can benefit from visual input, as well as a variety of other rich sensory modalities, including haptic input from pressure sensors and auditory input. With a wider range of sensory modalities, end-to-end training of sensorimotor policies will become increasingly important: while it is often straightforward to imagine how vision might help to localize the position of an object in the scene, it is much less apparent how sound can be integrated into robotic control. A learned sensorimotor policy would be able to naturally integrate a wide range of modalities and utilize them to directly aid in control.
1504.00702#64
1504.00702#66
1504.00702
[ "1509.06113" ]
1504.00702#66
End-to-End Training of Deep Visuomotor Policies
# Acknowledgements This research was funded in part by DARPA through a Young Faculty Award, the Army Research Office through the MAST program, NSF awards IIS-1427425 and IIS-1212798, the Berkeley Vision and Learning Center, and a Berkeley EECS Department Fellowship. # Appendix A. Guided Policy Search Algorithm Details In this appendix, we describe a number of implementation details of our BADMM-based guided policy search algorithm and our linear-Gaussian controller optimization method. # A.1 BADMM Dual Variables and Weight Adjustment Recall that the inner loop alternating optimization is given by

θ ← arg min_θ Σ_{t=1}^T E_{p(x_t) π_θ(u_t|x_t)}[u_t^T λ_{μt}] + ν_t φ_t^θ(θ, p)
p ← arg min_p Σ_{t=1}^T E_{p(x_t, u_t)}[ℓ(x_t, u_t) − u_t^T λ_{μt}] + ν_t φ_t^p(p, θ)
λ_{μt} ← λ_{μt} + α ν_t (E_{π_θ(u_t|x_t) p(x_t)}[u_t] − E_{p(u_t|x_t) p(x_t)}[u_t]).
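As a rough illustration of the alternating optimization above, the sketch below isolates the dual-variable step, assuming the policy and controller action means at the sampled states are already available; the function and variable names are placeholders, not identifiers from the authors' code.

```python
import numpy as np

def badmm_dual_update(lmbda, policy_mean, traj_mean, nu, alpha=0.1):
    """One BADMM dual step: lambda_t += alpha * nu_t * (E_pi[u_t] - E_p[u_t]).

    lmbda:       (T, dU) current dual variables lambda_{mu t}
    policy_mean: (N, T, dU) policy action means pi_theta(u_t|x_t) at sampled states
    traj_mean:   (N, T, dU) controller action means p(u_t|x_t) at the same states
    nu:          (T,) per-time-step penalty weights
    alpha:       step size (0.1 in the experiments described below)
    """
    # Expectations over p(x_t) are approximated with the latest batch of samples.
    diff = policy_mean.mean(axis=0) - traj_mean.mean(axis=0)   # (T, dU)
    return lmbda + alpha * nu[:, None] * diff

# Illustrative usage with random placeholders.
T, dU, N = 100, 7, 5
lmbda = np.zeros((T, dU))
pi_u = np.random.randn(N, T, dU)
p_u = np.random.randn(N, T, dU)
lmbda = badmm_dual_update(lmbda, pi_u, p_u, nu=np.full(T, 0.01))
```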
1504.00702#65
1504.00702#67
1504.00702
[ "1509.06113" ]
1504.00702#67
End-to-End Training of Deep Visuomotor Policies
We use a step size of α = 0.1 in all of our experiments, which we found to be more stable than α = 1.0. The weights ν_t are initialized to 0.01 and incremented based on the following schedule: at every iteration, we compute the average KL-divergence between p(u_t|x_t) and π_θ(u_t|x_t) at each time step, as well as its standard deviation over time steps.
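A minimal sketch of this weight-adjustment schedule, following the doubling/halving rule detailed in the next passage; the per-time-step KL values are assumed to be computed elsewhere.

```python
import numpy as np

def adjust_nu(nu, kl_per_step):
    """Heuristic schedule for the per-time-step weights nu_t.

    nu:          (T,) current weights (initialized to 0.01)
    kl_per_step: (T,) average KL(p(u_t|x_t) || pi_theta(u_t|x_t)) at each time step
    """
    mean_kl = kl_per_step.mean()
    std_kl = kl_per_step.std()
    nu = nu.copy()
    nu[kl_per_step > mean_kl] *= 2.0                    # policy and trajectory disagree: penalize more
    nu[kl_per_step < mean_kl - 2.0 * std_kl] *= 0.5     # already in close agreement: relax
    return nu
```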
1504.00702#66
1504.00702#68
1504.00702
[ "1509.06113" ]
1504.00702#68
End-to-End Training of Deep Visuomotor Policies
The weights ν_t corresponding to time steps where the KL-divergence is higher than the average are increased by a factor of 2, and the weights corresponding to time steps where the KL-divergence is two standard deviations or more below the average are decreased by a factor of 2. The rationale behind this schedule is to adjust the KL-divergence penalty to keep the policy and trajectory in agreement by roughly the same amount at all time steps. Increasing ν_t too quickly can lead to the policy and trajectory becoming "
1504.00702#67
1504.00702#69
1504.00702
[ "1509.06113" ]
1504.00702#69
End-to-End Training of Deep Visuomotor Policies
locked" together, which makes it difficult for the trajectory to decrease its cost, while leaving it too low requires more iterations for convergence. We found this schedule to work well across all tasks, both during trajectory pretraining and while training the visuomotor policy. To update the dual variables λ_{μt}, we evaluate the expectations over p(x_t) by using the latest batch of sampled trajectories. For each state x_t^i along these sampled trajectories, we evaluate the expectations over u_t under π_θ(u_t|x_t) and p(u_t|x_t), which correspond simply to the means of these conditional Gaussian distributions, in closed form. # A.2 Policy Variance Optimization As discussed in Section 4, the variance of the Gaussian policy π_θ(u_t|o_t) does not depend on the observation, though this dependence would be straightforward to add. Analyzing the objective L_θ(θ, p), we can write out only the terms that depend on Σ^π:

L_θ(θ, p) = (1/2N) Σ_{i=1}^N Σ_{t=1}^T E_{p_i(x_t, o_t)}[tr(C_{ti}^{-1} Σ^π) − log |Σ^π|].

Differentiating and setting the derivative to zero, we obtain the following equation for Σ^π:

Σ^π = [ (1/NT) Σ_{i=1}^N Σ_{t=1}^T C_{ti}^{-1} ]^{-1},

where the expectation under p_i(x_t) is omitted, since C_{ti} does not depend on x_t. # A.3 Dynamics Fitting Optimizing the linear-Gaussian controllers p_i(u_t|x_t) that induce the trajectory distributions p_i(τ) requires fitting the system dynamics p_i(x_{t+1}|x_t, u_t) at each iteration to samples generated on the physical system from the previous controller p̂_i(u_t|x_t). In this section, we describe how these dynamics are fitted. As in Section 4, we drop the subscript i, since the dynamics are fitted the same way for all of the trajectory distributions. The linear-Gaussian dynamics are defined as p(x_{t+1}|x_t, u_t) = N(f_{xt} x_t + f_{ut} u_t + f_{ct}, F_t), and the data that we obtain from the robot can be viewed as tuples {x_t^i, u_t^i, x_{t+1}^i}. A simple way to fit
1504.00702#68
1504.00702#70
1504.00702
[ "1509.06113" ]
1504.00702#70
End-to-End Training of Deep Visuomotor Policies
these linear-Gaussian dynamics is to use linear regression to determine f_x, f_u, and f_c, and fit F_t based on the errors. However, the sample complexity of linear regression scales with the dimensionality of x_t. For a high-dimensional robotic system, we might need an impractically large number of samples at each iteration to obtain a good fit. However, we can observe that the dynamics at nearby time steps are strongly correlated, and we can dramatically reduce the sample complexity of the dynamics fitting by bringing in information from other time steps, and even prior iterations. We will bring in this
1504.00702#69
1504.00702#71
1504.00702
[ "1509.06113" ]
1504.00702#71
End-to-End Training of Deep Visuomotor Policies
information by fitting a global model to all of the transitions {x_t^i, u_t^i, x_{t+1}^i} for all t and all tuples from several prior iterations (we use three prior iterations in our implementation), and then use this model as a prior for fitting the dynamics at each time step. Note that this global model does not itself need to be a good forward dynamics model; it just needs to serve as a good prior to reduce the sample complexity of linear regression. To make it more convenient to incorporate a data-driven prior, we will first reformulate this linear regression fit and view it as fitting a Gaussian model to the dataset {x_t^i, u_t^i, x_{t+1}^i} at each time step t, and then conditioning this Gaussian to obtain p(x_{t+1}|x_t, u_t). While this is equivalent to linear regression, it allows us to easily incorporate a normal-inverse-Wishart prior on this Gaussian in order to bring in prior information.
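A sketch of the regression-as-conditioning view described here, without any prior: fit a joint Gaussian to [x_t; u_t; x_{t+1}] at one time step and condition on [x_t; u_t]. The small ridge term `reg` is an assumption added for numerical stability, not something specified in the text.

```python
import numpy as np

def fit_dynamics_by_conditioning(X, U, Xn, reg=1e-6):
    """Fit p(x_{t+1}|x_t,u_t) = N(fx x + fu u + fc, F) at one time step.

    X: (N, dX) states, U: (N, dU) actions, Xn: (N, dX) next states.
    """
    dXU = X.shape[1] + U.shape[1]
    data = np.hstack([X, U, Xn])                  # samples of [x; u; x']
    mu = data.mean(axis=0)
    sigma = np.cov(data, rowvar=False) + reg * np.eye(data.shape[1])

    sig_aa = sigma[:dXU, :dXU]                    # covariance of [x; u]
    sig_ab = sigma[:dXU, dXU:]                    # cross-covariance with x'
    sig_bb = sigma[dXU:, dXU:]

    K = np.linalg.solve(sig_aa, sig_ab).T         # stacked [fx fu], shape (dX, dX+dU)
    fc = mu[dXU:] - K @ mu[:dXU]                  # constant offset
    F = sig_bb - K @ sig_ab                       # conditional covariance
    return K, fc, F
```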
1504.00702#70
1504.00702#72
1504.00702
[ "1509.06113" ]
1504.00702#72
End-to-End Training of Deep Visuomotor Policies
Let Σ̂ be the empirical covariance of our dataset, and let μ̂ be the empirical mean. The normal-inverse-Wishart prior is defined by prior parameters Φ, μ_0, m, and n_0. Under this prior, the maximum a posteriori estimates for the covariance Σ and mean μ are given by

Σ = [Φ + N Σ̂ + (N m)/(N + m) (μ̂ − μ_0)(μ̂ − μ_0)^T] / (N + n_0),   μ = (m μ_0 + n_0 μ̂) / (m + n_0).

Having obtained Σ and μ, we can obtain an estimate of the dynamics p(x_{t+1}|x_t, u_t) by conditioning the distribution N(μ, Σ) on [x_t; u_t], which produces linear-Gaussian dynamics p(x_{t+1}|x_t, u_t) = N(f_{xt} x_t + f_{ut} u_t + f_{ct}, F_t). The parameters of the normal-inverse-Wishart prior are obtained from the global model of the dynamics which, as described previously, is fitted to all available tuples {x_t^i, u_t^i, x_{t+1}^i}. The simplest prior can be obtained by fitting a Gaussian distribution to vectors [x; u; x′]. If the mean and covariance of this data are given by μ̄ and Σ̄, the prior is given by Φ = n_0 Σ̄ and μ_0 = μ̄, while n_0 and m should be set to the number of data points in the dataset. In practice, setting n_0 and m to 1 tends to produce better results, since the prior is fitted to many more samples than are available for linear regression at each time step. While this prior is simple, we can obtain a better prior by employing a nonlinear model. The particular global model we use in this work is a Gaussian mixture model over vectors [x; u; x′
1504.00702#71
1504.00702#73
1504.00702
[ "1509.06113" ]
1504.00702#73
End-to-End Training of Deep Visuomotor Policies
]. Systems of articulated rigid bodies undergoing contact dynamics, such as robots interacting with their environment, can be coarsely modeled as having piecewise linear dynamics. The Gaussian mixture model provides a good approximation for such piecewise linear systems, with each mixture element corresponding to a different linear mode (Khansari-Zadeh and Billard, 2010). Under this model, the state transition tuple is assumed to come from a distribution that depends on some hidden state h, which corresponds to the mixture element identity. In practice, this hidden state might correspond to the type of contact profile experienced by a robotic arm at step t. The prior for the dynamics fit at time step t is then obtained by inferring the hidden state distribution for the transition dataset {x_t^i, u_t^i, x_{t+1}^i}, and using the mean and covariance of the corresponding mixture elements (weighted by their probabilities) to obtain μ̄ and Σ̄. The prior parameters can then be obtained as described above. In our experiments, we set the number of mixture elements for the Gaussian mixture model prior such that there were at least 40 samples per mixture element, or 20 total mixture elements, whichever was lower.
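A sketch of combining the empirical statistics at one time step with the normal-inverse-Wishart prior, following the MAP expressions given earlier; the prior parameters would come from the global Gaussian or mixture model as described above, and the resulting mean and covariance would then be conditioned on [x_t; u_t] as in the previous sketch.

```python
import numpy as np

def niw_map_estimate(data, Phi, mu0, m, n0):
    """MAP mean/covariance of a Gaussian under a normal-inverse-Wishart prior.

    data: (N, D) vectors [x_t; u_t; x_{t+1}] observed at one time step.
    Phi, mu0, m, n0: prior parameters obtained from the global model.
    """
    N = data.shape[0]
    emp_mu = data.mean(axis=0)
    # N * empirical covariance (scatter matrix around the empirical mean).
    scatter = np.cov(data, rowvar=False, bias=True) * N
    diff = (emp_mu - mu0)[:, None]
    sigma = (Phi + scatter + (N * m) / (N + m) * diff @ diff.T) / (N + n0)
    mu = (m * mu0 + n0 * emp_mu) / (m + n0)
    return mu, sigma
```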
1504.00702#72
1504.00702#74
1504.00702
[ "1509.06113" ]
1504.00702#74
End-to-End Training of Deep Visuomotor Policies
In general, we did not find the performance of the method to be sensitive to this parameter, though overfitting did tend to occur in the early iterations when the number of samples is low, if the number of mixtures was too high. # A.4 Trajectory Optimization In this section, we show how the LQR backward pass can be used to optimize the constrained objective in Section 4.2. The constrained trajectory optimization problem is given by

min_p L_p(p, θ) s.t. D_KL(p(τ) || p̂(τ)) ≤ ε.

The augmented Lagrangian L_p(p, θ) consists of an entropy term and an expectation under p(τ) of a quantity that is independent of p. We can locally approximate this quantity with a quadratic by using a quadratic expansion of ℓ(x_t, u_t), and fitting a linear Gaussian to π_θ(u_t|x_t) with the same method we used for the dynamics. We can then solve the primal optimization in the dual gradient descent procedure with a standard LQR backward pass. As discussed in Section 4, L_p(p, θ) can be written as the expectation of some function c(τ) that is independent of p, such that L_p(p, θ) = E_{p(τ)}[c(τ)] − ν_t H(p(τ)). Specifically,

c(x_t, u_t) = ℓ(x_t, u_t) − u_t^T λ_{μt} − ν_t log π_θ(u_t|x_t).
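The outer dual update on η is not spelled out in this excerpt; the sketch below shows one plausible bracketing-style adjustment, with the multiplicative factors and tolerance as illustrative assumptions. `lqr_solve` and `kl_divergence` stand in for the LQR primal solver and the trajectory KL computation.

```python
import numpy as np

def solve_constrained_traj_opt(lqr_solve, kl_divergence, epsilon,
                               eta=1e-2, eta_min=1e-4, eta_max=1e4, iters=20):
    """Search over the Lagrange multiplier eta for the KL-constrained problem.

    lqr_solve(eta)    -> controller p solving the eta-weighted maximum-entropy LQR problem.
    kl_divergence(p)  -> D_KL(p(tau) || p_hat(tau)) for that controller.
    """
    p = lqr_solve(eta)
    for _ in range(iters):
        kl = kl_divergence(p)
        if abs(kl - epsilon) < 0.1 * epsilon:
            break
        # Multiplicative adjustment: tighten when the step is too large, relax otherwise.
        eta = float(np.clip(eta * (10.0 if kl > epsilon else 0.1), eta_min, eta_max))
        p = lqr_solve(eta)
    return p, eta
```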
1504.00702#73
1504.00702#75
1504.00702
[ "1509.06113" ]
1504.00702#75
End-to-End Training of Deep Visuomotor Policies
Writing the Lagrangian of the constrained optimization, we have

L(p) = E_{p(τ)}[c(τ) − η log p̂(τ)] − (η + ν_t) H(p(τ)) − ηε,

where η is the Lagrange multiplier. Note that L(p) is the Lagrangian of the constrained trajectory optimization, which is not related to the augmented Lagrangian L_p(p, θ). Grouping the terms in the expectation and omitting constants, we can rewrite the minimization of the Lagrangian with respect to the primal variables as

min_{p(τ) ∈ N(τ)} E_{p(τ)}[ (1/(η + ν_t)) c(τ) − (η/(η + ν_t)) log p̂(τ) ] − H(p(τ)).   (4)

Let c̃(τ) = (1/(η + ν_t)) c(τ) − (η/(η + ν_t)) log p̂(τ). The above optimization corresponds to minimizing E_{p(τ)}[c̃(τ)] − H(p(τ)). This type of maximum entropy problem can be solved using the LQR algorithm, and the solution is given by p(u_t|x_t) = N(K_t x_t + k_t, Q_{u,ut}^{-1}),
1504.00702#74
1504.00702#76
1504.00702
[ "1509.06113" ]
1504.00702#76
End-to-End Training of Deep Visuomotor Policies
where K_t and k_t are the feedback and open loop terms of the optimal linear feedback controller corresponding to the cost c̃(x_t, u_t) and the dynamics p(x_{t+1}|x_t, u_t), and Q_{u,ut} is the quadratic term in the Q-function at time step t. All of these terms can be obtained from a standard LQR backward pass (Li and Todorov, 2004), which we summarize below. Recall that the estimated linear-Gaussian dynamics have the form p(x_{t+1}|x_t, u_t) = N(f_{xt} x_t + f_{ut} u_t + f_{ct}, F_t). The quadratic cost approximation has the form
1504.00702#75
1504.00702#77
1504.00702
[ "1509.06113" ]
1504.00702#77
End-to-End Training of Deep Visuomotor Policies
c̃(x_t, u_t) ≈ (1/2) [x_t; u_t]^T c̃_{xu,xut} [x_t; u_t] + [x_t; u_t]^T c̃_{xut} + const,

where subscripts denote derivatives, e.g. c̃_{xut} is the gradient of c̃ with respect to [x_t; u_t], while c̃_{xu,xut} is the Hessian.4 (Footnote 4: We assume that all Taylor expansions here are recentered around zero. Otherwise, the point around which the derivatives are computed must be subtracted from x_t and u_t in all of these equations.) Under this model of the dynamics and cost function, the
1504.00702#76
1504.00702#78
1504.00702
[ "1509.06113" ]
1504.00702#78
End-to-End Training of Deep Visuomotor Policies
optimal controller can be computed by recursively computing the quadratic Q-function and value function, starting with the last time step. These functions are given by

V(x_t) = (1/2) x_t^T V_{x,xt} x_t + x_t^T V_{xt} + const
Q(x_t, u_t) = (1/2) [x_t; u_t]^T Q_{xu,xut} [x_t; u_t] + [x_t; u_t]^T Q_{xut} + const.

We can express them with the following recurrence, which is computed starting at the last time step t = T and moving backward through time:

Q_{xu,xut} = c̃_{xu,xut} + f_{xut}^T V_{x,xt+1} f_{xut}
Q_{xut} = c̃_{xut} + f_{xut}^T V_{xt+1} + f_{xut}^T V_{x,xt+1} f_{ct}
V_{x,xt} = Q_{x,xt} − Q_{u,xt}^T Q_{u,ut}^{-1} Q_{u,xt}
V_{xt} = Q_{xt} − Q_{u,xt}^T Q_{u,ut}^{-1} Q_{ut},

and the optimal control law is then given by g(x_t) = K_t x_t + k_t, where K_t = −Q_{u,ut}^{-1} Q_{u,xt} and k_t = −Q_{u,ut}^{-1} Q_{ut}. If, instead of simply minimizing the expected cost, we instead wish to optimize the maximum entropy objective in Equation (4), the optimal controller is instead linear-Gaussian, with the solution given by p(u_t|x_t) = N(K_t x_t + k_t, Q_{u,ut}^{-1}), as shown in prior work (Levine and Koltun, 2013a). # Appendix B. Experimental Setup Details In this appendix, we present a detailed summary of the experimental setup for our simulated and real-world experiments. # B.1 Simulated Experiment Details All of the simulated experiments used the MuJoCo simulation package (Todorov et al., 2012), with simulated frictional contacts and torque motors at the joints used for actuation. Although no control or state noise was added during simulation, noise was injected naturally by the linear-Gaussian controllers.
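Returning to the backward pass summarized above, a compact sketch under the convention that matrices are blocked as [x; u]; the maximum-entropy controller covariance is the inverse of Q_{u,ut}, as stated in the text.

```python
import numpy as np

def lqr_backward(fxu, fc, cxuxu, cxu, dX, dU):
    """Compute time-varying gains K_t, k_t and covariances Q_{u,u t}^{-1}.

    fxu:   (T, dX, dX+dU) dynamics matrices [fx fu]
    fc:    (T, dX) dynamics offsets
    cxuxu: (T, dX+dU, dX+dU) cost Hessians,  cxu: (T, dX+dU) cost gradients
    """
    T = fxu.shape[0]
    Vxx, Vx = np.zeros((dX, dX)), np.zeros(dX)
    K, k, cov = [], [], []
    for t in reversed(range(T)):
        Qxuxu = cxuxu[t] + fxu[t].T @ Vxx @ fxu[t]
        Qxu = cxu[t] + fxu[t].T @ Vx + fxu[t].T @ Vxx @ fc[t]
        Quu, Qux, Qu = Qxuxu[dX:, dX:], Qxuxu[dX:, :dX], Qxu[dX:]
        Kt = -np.linalg.solve(Quu, Qux)
        kt = -np.linalg.solve(Quu, Qu)
        # Value function recurrence.
        Vxx = Qxuxu[:dX, :dX] - Qux.T @ np.linalg.solve(Quu, Qux)
        Vx = Qxu[:dX] - Qux.T @ np.linalg.solve(Quu, Qu)
        K.insert(0, Kt); k.insert(0, kt); cov.insert(0, np.linalg.inv(Quu))
    return K, k, cov
```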
1504.00702#77
1504.00702#79
1504.00702
[ "1509.06113" ]
1504.00702#79
End-to-End Training of Deep Visuomotor Policies
The linear-Gaussian controllers p_i(u_t|x_t) were initialized to stay near the initial state x_1 using linear feedback based on a proportional-derivative control law for all tasks, except for the octopus arm, where p_i(u_t|x_t) was initialized to be zero mean with a fixed spherical covariance, and the walker, which was initialized to track a demonstration trajectory with proportional-derivative feedback. The walker was the only task that used a demonstration, as described previously. We describe the details of each system below.
1504.00702#78
1504.00702#80
1504.00702
[ "1509.06113" ]
1504.00702#80
End-to-End Training of Deep Visuomotor Policies
Peg insertion: The 2D peg insertion task has 6 state dimensions (joint angles and angular velocities) and 2 action dimensions. The 3D version of the task has 12 state dimensions, since the arm has 3 degrees of freedom at the shoulder, 1 at the elbow, and 2 at the wrist. Trials were 8 seconds in length and simulated at 100 Hz, resulting in 800 time steps per rollout. The cost function is given by

ℓ(x_t, u_t) = (1/2) w_u ||u_t||^2 + w_p ℓ_{12}(p_{x_t} −
1504.00702#79
1504.00702#81
1504.00702
[ "1509.06113" ]
1504.00702#81
End-to-End Training of Deep Visuomotor Policies
p*), where p_{x_t} is the position of the end effector for state x_t, p* is the desired end effector position at the bottom of the slot, and the norm ℓ_{12}(z) is given by (1/2)||z||^2 + √(α + ||z||^2), which corresponds to the sum of an ℓ_2 and soft ℓ_1 norm. We use this norm to encourage the peg to precisely reach the target position at the bottom of the hole, but to also receive a larger penalty when far away. The task also works well in 2D with a simple ℓ_2 penalty, though we found that the 3D version of the task takes longer to insert the peg all the way into the hole without the ℓ_1-like square root term. The weights were set to w_u = 10^{-6} and w_p = 1. Initial states were chosen by moving the shoulder of the arm relative to the hole, with four equally spaced starting states in a 20 cm region for the 2D arm, and four random starting states in a 10 cm radius for the 3D arm. Octopus arm: The octopus arm consists of six four-sided chambers. Each edge of each chamber is a simulated muscle, and actions correspond to contracting or relaxing the muscle. The state space consists of the positions and velocities of the chamber vertices. The midpoint of one edge of the first chamber is fixed, resulting in a total of 25 degrees of freedom: the 2D positions of the 12 unconstrained points, and the orientation of the first edge. Including velocities, the total dimensionality of the state space is 50. The cost function depends on the activation of the muscles and distance between the tip of the arm and the target point, in the same way as for peg insertion. The weights are set to w_u = 10^{-3} and w_p = 1. Swimmer: The swimmer consists of 3 links and 5 degrees of freedom, including the global position and orientation which, together with the velocities, produces a 10 dimensional state space. The swimmer has 2 action dimensions corresponding to the torques between joints. The simulation applied drag on each link of the swimmer to roughly simulate a fluid, allowing it to propel itself.
1504.00702#80
1504.00702#82
1504.00702
[ "1509.06113" ]
1504.00702#82
End-to-End Training of Deep Visuomotor Policies
The rollouts were 20 seconds in length at 20 Hz, resulting in 400 time steps per rollout. The cost function for the swimmer is given by

ℓ(x_t, u_t) = (1/2) w_u ||u_t||^2 + (1/2) w_v (v_{x_t} − v*)^2,

where v_{x_t} is the horizontal velocity, v* = 2.0 m/s, and the weights were w_u = 2·10^{-5} and w_v = 1. Walker: The bipedal walker consists of a torso and two legs, each with three links, for a total of 9 degrees of freedom and 18 dimensions, with velocity, and 6 action dimensions.
1504.00702#81
1504.00702#83
1504.00702
[ "1509.06113" ]
1504.00702#83
End-to-End Training of Deep Visuomotor Policies
The simulation ran for 5 seconds at 100 Hz, for a total of 500 time steps. The cost function is given by

ℓ(x_t, u_t) = (1/2) w_u ||u_t||^2 + (1/2) w_v (v_{x_t} − v*)^2 + (1/2) w_h ||p_{y,x_t} − p_y*||^2,

where v_{x_t} is again the horizontal velocity, p_{y,x_t} is the vertical position of the root, v* = 2.1 m/s, p_y* = 1.1 m, and the weights were set to w_u = 10^{-4}, w_v = 1, and w_h = 1.
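Looking back at the peg insertion and octopus arm costs above, a small sketch of the ℓ_12-style penalty; the constant α is not specified in this excerpt, so the value used here is a placeholder.

```python
import numpy as np

def l12_penalty(z, alpha=1e-5):
    """Combined l2 + soft-l1 penalty: 0.5*||z||^2 + sqrt(alpha + ||z||^2)."""
    sq = np.sum(z ** 2)
    return 0.5 * sq + np.sqrt(alpha + sq)

def peg_cost(u, ee_pos, target, wu=1e-6, wp=1.0):
    """Peg insertion cost: 0.5*wu*||u||^2 + wp*l12(p_x - p*), per the text above."""
    return 0.5 * wu * np.sum(u ** 2) + wp * l12_penalty(ee_pos - target)
```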
1504.00702#82
1504.00702#84
1504.00702
[ "1509.06113" ]
1504.00702#84
End-to-End Training of Deep Visuomotor Policies
# B.2 Robotic Experiment Details All of the robotic experiments were conducted on a PR2 robot. The robot was controlled at 20 Hz via direct effort control,5 and camera images were recorded using the RGB camera on a PrimeSense Carmine sensor. The images were downsampled to 240 × 240 × 3. The learned policies controlled one 7 DoF arm of the robot, while the other arm was used to move objects in the scene to automatically vary the initial conditions. The camera was kept fixed in each experiment. Each episode was 5 seconds in length.
1504.00702#83
1504.00702#85
1504.00702
[ "1509.06113" ]
1504.00702#85
End-to-End Training of Deep Visuomotor Policies
For each task, the cost function required placing the object held in the gripper at a particular location (which might require, for example, inserting a shape into a shape sorting cube). The cost was given by the following equation:

ℓ(x_t, u_t) = w_{ℓ2} d_t^2 + w_{log} log(d_t^2 + α) + w_u ||u_t||^2,

where d_t is the distance between three points in the space of the end-effector and their target positions,6 and the weights are set to w_{ℓ2} = 10^{-3}, w_{log} = 1.0, and w_u = 10^{-2}. The quadratic term encourages moving the end-effector toward the target when it is far, while the logarithm term encourages placing it precisely at the target location, as discussed in prior work (Levine et al., 2015). The bottle cap task used an additional cost term consisting of a quadratic penalty on the difference between the wrist angular velocity and a target velocity. For all of the tasks, we initialized all of the linear-Gaussian controllers p_i(u_t|x_t) to stay near the initial state x_1, with a diagonal noise covariance. The covariance of the noise was chosen to be proportional to a diagonal approximation of the inverse effective mass at each joint, as provided by the manufacturer of the PR2 robot, and the feedback controller was constructed using LQR, with an approximate linear model obtained from the same diagonal inverse mass matrix. The role of this initial controller was primarily to avoid dangerous actions during the first iteration. We discuss the particular setup for each experiment below, after a brief sketch of this cost function:
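A minimal sketch of the end-effector cost defined above; α and the stacking of the three end-effector points are assumptions of this illustration, while the weights follow the values quoted in the text.

```python
import numpy as np

def end_effector_cost(u, pts, target_pts, alpha=1e-5, w_l2=1e-3, w_log=1.0, w_u=1e-2):
    """Quadratic-plus-log distance cost from the equation above.

    pts, target_pts: (3, 3) current and target positions of the three end-effector points.
    alpha: small constant inside the log term; its exact value is not given in this excerpt.
    """
    d_sq = np.sum((pts - target_pts) ** 2)
    return w_l2 * d_sq + w_log * np.log(d_sq + alpha) + w_u * np.sum(u ** 2)
```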
1504.00702#84
1504.00702#86
1504.00702
[ "1509.06113" ]
1504.00702#86
End-to-End Training of Deep Visuomotor Policies
Coat hanger: The coat hanger task required the robot to hang a coat hanger on a clothes rack. The coat hanger was grasped at one of two angles, about 35° apart, and the rack was positioned at three different distances from the robot during training, with differences of about 10 cm between each position. The rack was moved manually between these positions during training. A trial was considered successful if, when the coat hanger was released, it remained hanging on the rack rather than dropping to the ground.
1504.00702#85
1504.00702#87
1504.00702
[ "1509.06113" ]
1504.00702#87
End-to-End Training of Deep Visuomotor Policies
Shape sorting cube: The shape sorting cube task required the robot to insert a red trapezoid into a trapezoidal hole on a shape sorting cube. During training, the cube was positioned at nine different positions, situated at the corners, edges, and middle of a rectangular region 16 cm × 10 cm in size. During training, the shape sorting cube was moved through the training positions by using the left arm. A trial was considered successful if the bottom face of the trapezoid was completely inside the shape sorting cube, such that if the robot were to release the trapezoid, it would fall inside the cube.
1504.00702#86
1504.00702#88
1504.00702
[ "1509.06113" ]
1504.00702#88
End-to-End Training of Deep Visuomotor Policies
5. The PR2 robot does not provide for closed loop torque control, but instead supports an effort control interface that directly sets feedforward motor voltages. In practice, these voltages are roughly proportional to feedforward torques, but are also affected by friction and damping. 6. Three points fully define the pose of the end-effector. For the bottle cap task, which is radially symmetric, we use only two points.
1504.00702#87
1504.00702#89
1504.00702
[ "1509.06113" ]
1504.00702#89
End-to-End Training of Deep Visuomotor Policies
Toy hammer: The hammer task required the robot to insert the claw of a toy hammer underneath a toy plastic nail, placing the claw around the base of the nail. The hammer was grasped at one of three angles, each 22.5° apart, for a total variation of 45°, and the nail was positioned at five positions, at the corners and center of a rectangular region 10 cm × 7 cm in size. During training, the toy tool bench containing the nail was moved using the left arm. A trial was considered successful if the tip of the claw of the hammer was at least under the centerline of the nail.
1504.00702#88
1504.00702#90
1504.00702
[ "1509.06113" ]
1504.00702#90
End-to-End Training of Deep Visuomotor Policies
Bottle cap: The bottle cap task required the robot to screw a cap onto a bottle at various positions. The bottle was located at nine different positions, situated at the corners, edges, and middle of a rectangular region 16 cm × 10 cm in size, and the left arm was used to move the bottle through the training positions. A trial was considered successful if, after completion, the cap could not be removed from the bottle simply by pulling vertically. # References J. A. Bagnell and J. Schneider.
1504.00702#89
1504.00702#91
1504.00702
[ "1509.06113" ]
1504.00702#91
End-to-End Training of Deep Visuomotor Policies
Covariant policy search. In International Joint Conference on Artificial Intelligence (IJCAI), 2003. B. Bakker, V. Zhumatiy, G. Gruener, and J. Schmidhuber. A robot that reinforcement-learns to identify and memorize important previous observations. In International Conference on Intelligent Robots and Systems (IROS), 2003. G. Bekey and K. Goldberg. Neural Networks in Robotics. Springer US, 1992.
1504.00702#90
1504.00702#92
1504.00702
[ "1509.06113" ]
1504.00702#92
End-to-End Training of Deep Visuomotor Policies
H. Benbrahim and J. A. Franklin. Biped dynamic walking using reinforcement learning. Robotics and Autonomous Systems, 22:283–302, 1997. W. Böhmer, S. Grünewälder, Y. Shen, M. Musial, and K. Obermayer. Construction of approximation spaces for reinforcement learning. Journal of Machine Learning Research, 14(1):2067–2118, January 2013.
1504.00702#91
1504.00702#93
1504.00702
[ "1509.06113" ]
1504.00702#93
End-to-End Training of Deep Visuomotor Policies
S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011. D. Ciresan, U. Meier, J. Masci, L. Gambardella, and J. Schmidhuber. Flexible, high performance convolutional neural networks for image classification. In International Joint Conference on Artificial Intelligence (IJCAI), 2011. D. Ciresan, U. Meier, and J. Schmidhuber.
1504.00702#92
1504.00702#94
1504.00702
[ "1509.06113" ]
1504.00702#94
End-to-End Training of Deep Visuomotor Policies
Multi-column deep neural networks for image classification. In Computer Vision and Pattern Recognition (CVPR), 2012. M. Deisenroth and C. Rasmussen. PILCO: a model-based and data-efficient approach to policy search. In International Conference on Machine Learning (ICML), 2011. M. Deisenroth, C. Rasmussen, and D. Fox. Learning to control a low-cost manipulator using data-efficient reinforcement learning. In Robotics: Science and Systems (RSS), 2011.
1504.00702#93
1504.00702#95
1504.00702
[ "1509.06113" ]
1504.00702#95
End-to-End Training of Deep Visuomotor Policies
M. Deisenroth, G. Neumann, and J. Peters. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1-2):1–142, 2013. J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition (CVPR), 2009.
1504.00702#94
1504.00702#96
1504.00702
[ "1509.06113" ]
1504.00702#96
End-to-End Training of Deep Visuomotor Policies
G. Endo, J. Morimoto, T. Matsubara, J. Nakanishi, and G. Cheng. Learning CPG-based biped locomotion with a policy gradient method: Application to a humanoid robot. International Journal of Robotic Research, 27(2):213–228, 2008. I. Endres and D. Hoiem. Category independent object proposals. In European Conference on Computer Vision (ECCV), 2010.
1504.00702#95
1504.00702#97
1504.00702
[ "1509.06113" ]
1504.00702#97
End-to-End Training of Deep Visuomotor Policies
Y. Engel, P. Szabó, and D. Volkinshtein. Learning to control an octopus arm with Gaussian process temporal difference methods. In Advances in Neural Information Processing Systems (NIPS), 2005. B. Espiau, F. Chaumette, and P. Rives. A new approach to visual servoing in robotics. IEEE Transactions on Robotics and Automation, 8(3), 1992. C. Finn, X. Tan, Y. Duan, T. Darrell, S. Levine, and P. Abbeel. Learning visual feature spaces for robotic manipulation with deep spatial autoencoders. arXiv preprint arXiv:1509.06113, 2015. K.
1504.00702#96
1504.00702#98
1504.00702
[ "1509.06113" ]
1504.00702#98
End-to-End Training of Deep Visuomotor Policies
Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36:193–202, 1980. T. Geng, B. Porr, and F. Wörgötter. Fast biped walking with a reflexive controller and realtime policy searching. In Advances in Neural Information Processing Systems (NIPS), 2006.
1504.00702#97
1504.00702#99
1504.00702
[ "1509.06113" ]
1504.00702#99
End-to-End Training of Deep Visuomotor Policies
R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Conference on Computer Vision and Pattern Recognition (CVPR), 2014a. R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014b.
1504.00702#98
1504.00702#100
1504.00702
[ "1509.06113" ]
1504.00702#100
End-to-End Training of Deep Visuomotor Policies
V. Gullapalli. A stochastic reinforcement learning algorithm for learning real-valued functions. Neural Networks, 3(6):671–692, 1990. V. Gullapalli. Skillful control under uncertainty via direct reinforcement learning. Reinforcement Learning and Robotics, 15(4):237–246, 1995. X. Guo, S. Singh, H. Lee, R. L. Lewis, and X. Wang.
1504.00702#99
1504.00702#101
1504.00702
[ "1509.06113" ]
1504.00702#101
End-to-End Training of Deep Visuomotor Policies
Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In Advances in Neural Information Processing Systems (NIPS), 2014. R. Hadsell, P. Sermanet, J. B. A. Erkan, and M. Scoffier. Learning long-range vision for autonomous off-road driving. Journal of Field Robotics, pages 120–144, 2009. K. He, X. Zhang, S. Ren, and J. Sun.
1504.00702#100
1504.00702#102
1504.00702
[ "1509.06113" ]
1504.00702#102
End-to-End Training of Deep Visuomotor Policies
Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015. S. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In A Field Guide to Dynamic Recurrent Neural Networks. IEEE Press, 2001. K. J. Hunt, D. Sbarbaro, R. Żbikowski, and P. J. Gawthrop.
1504.00702#101
1504.00702#103
1504.00702
[ "1509.06113" ]
1504.00702#103
End-to-End Training of Deep Visuomotor Policies
Neural networks for control systems: A survey. Automatica, 28(6):1083–1112, November 1992. D. Jacobson and D. Mayne. Differential Dynamic Programming. Elsevier, 1970. M. Jägersand, O. Fuentes, and R. C. Nelson. Experimental evaluation of uncalibrated visual servoing for precision manipulation. In International Conference on Robotics and Automation (ICRA), 1997.
1504.00702#102
1504.00702#104
1504.00702
[ "1509.06113" ]
1504.00702#104
End-to-End Training of Deep Visuomotor Policies
Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014. S. Jodogne and J. H. Piater. Closed-loop learning of visual control policies. Journal of Artificial Intelligence Research, 28:349–391, 2007. R. Jonschkowski and O. Brock.
1504.00702#103
1504.00702#105
1504.00702
[ "1509.06113" ]
1504.00702#105
End-to-End Training of Deep Visuomotor Policies
State representation learning in robotics: Using prior knowledge about physical interaction. In Proceedings of Robotics: Science and Systems, 2014. M. Kalakrishnan, L. Righetti, P. Pastor, and S. Schaal. Learning force control policies for compliant manipulation. In International Conference on Intelligent Robots and Systems (IROS), 2011. S. M. Khansari-Zadeh and A. Billard. BM: An iterative algorithm to learn stable non-linear dynamical systems with Gaussian mixture models.
1504.00702#104
1504.00702#106
1504.00702
[ "1509.06113" ]
1504.00702#106
End-to-End Training of Deep Visuomotor Policies
In International Conference on Robotics and Automation (ICRA), 2010. J. Kober and J. Peters. Learning motor primitives for robotics. In International Conference on Robotics and Automation (ICRA), 2009. J. Kober, K. Muelling, O. Kroemer, C.H. Lampert, B. Schoelkopf, and J. Peters. Movement templates for learning of hitting and batting. In International Conference on Robotics and Automation (ICRA), 2010a.
1504.00702#105
1504.00702#107
1504.00702
[ "1509.06113" ]
1504.00702#107
End-to-End Training of Deep Visuomotor Policies
J. Kober, E. Oztop, and J. Peters. Reinforcement learning to adjust robot movements to new situations. In Robotics: Science and Systems (RSS), 2010b. J. Kober, J. A. Bagnell, and J. Peters. Reinforcement learning in robotics: A survey. International Journal of Robotic Research, 32(11):1238–1274, 2013.
1504.00702#106
1504.00702#108
1504.00702
[ "1509.06113" ]
1504.00702#108
End-to-End Training of Deep Visuomotor Policies
N. Kohl and P. Stone. Policy gradient reinforcement learning for fast quadrupedal locomotion. In International Conference on Robotics and Automation (IROS), 2004. J. Koutník, G. Cuccu, J. Schmidhuber, and F. Gomez. Evolving large-scale neural networks for vision-based reinforcement learning. In Conference on Genetic and Evolutionary Computation, GECCO '13, 2013. A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), 2012.
1504.00702#107
1504.00702#109
1504.00702
[ "1509.06113" ]
1504.00702#109
End-to-End Training of Deep Visuomotor Policies
T. Lampe and M. Riedmiller. Acquiring visual servoing reaching and grasping skills using neural reinforcement learning. In International Joint Conference on Neural Networks (IJCNN), 2013. A. Lanfranco, A. Castellanos, J. Desai, and W. Meyers. Robotic surgery: a current perspective. Annals of Surgery, 239(1):14, 2004. S. Lange, M. Riedmiller, and A. Voigtlaender.
1504.00702#108
1504.00702#110
1504.00702
[ "1509.06113" ]
1504.00702#110
End-to-End Training of Deep Visuomotor Policies
Autonomous reinforcement learning on raw visual input data in a real world application. In International Joint Conference on Neural Networks, 2012. Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Handwritten digit recognition with a back-propagation network. In Advances in Neural Information Processing Systems (NIPS), 1989. Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521:436–444, May 2015. H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng.
1504.00702#109
1504.00702#111
1504.00702
[ "1509.06113" ]
1504.00702#111
End-to-End Training of Deep Visuomotor Policies
Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In International Conference on Machine Learning (ICML), 2009. I. Lenz, R. Knepper, and A. Saxena. DeepMPC: Learning deep latent features for model predictive control. In Robotics: Science and Systems (RSS), 2015a. I. Lenz, H. Lee, and A. Saxena. Deep learning for detecting robotic grasps. International Journal of Robotics Research, 2015b. S. Levine and P. Abbeel.
1504.00702#110
1504.00702#112
1504.00702
[ "1509.06113" ]
1504.00702#112
End-to-End Training of Deep Visuomotor Policies
Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems (NIPS), 2014. S. Levine and V. Koltun. Guided policy search. In International Conference on Machine Learning (ICML), 2013a. S. Levine and V. Koltun. Variational policy search via trajectory optimization. In Advances in Neural Information Processing Systems (NIPS), 2013b. S. Levine and V.
1504.00702#111
1504.00702#113
1504.00702
[ "1509.06113" ]
1504.00702#113
End-to-End Training of Deep Visuomotor Policies
Koltun. Learning complex neural network policies with trajectory optimization. In International Conference on Machine Learning (ICML), 2014. S. Levine, N. Wagener, and P. Abbeel. Learning contact-rich manipulation skills with guided policy search. In International Conference on Robotics and Automation (ICRA), 2015. F. L. Lewis, A. Yesildirak, and S. Jagannathan.
1504.00702#112
1504.00702#114
1504.00702
[ "1509.06113" ]
1504.00702#114
End-to-End Training of Deep Visuomotor Policies
Neural Network Control of Robot Manipulators and Nonlinear Systems. Taylor & Francis, Inc., 1998. W. Li and E. Todorov. Iterative linear quadratic regulator design for nonlinear biological movement systems. In ICINCO (1), pages 222–229, 2004. T. Lillicrap, J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra.
1504.00702#113
1504.00702#115
1504.00702
[ "1509.06113" ]
1504.00702#115
End-to-End Training of Deep Visuomotor Policies
Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015. R. Lioutikov, A. Paraschos, G. Neumann, and J. Peters. Sample-based information-theoretic stochastic optimal control. In International Conference on Robotics and Automation, 2014. H. Mayer, F. Gomez, D. Wierstra, I. Nagy, A. Knoll, and J. Schmidhuber.
1504.00702#114
1504.00702#116
1504.00702
[ "1509.06113" ]
1504.00702#116
End-to-End Training of Deep Visuomotor Policies
A system for robotic heart surgery that learns to tie knots using recurrent neural networks. In International Conference on Intelligent Robots and Systems (IROS), 2006. W. Meeussen, M. Wise, S. Glaser, S. Chitta, C. McGann, P. Mihelich, E. Marder-Eppstein, M. Muja, V. Eruhimov, T. Foote, J. Hsu, R. B. Rusu, B. Marthi, G. Bradski, K. Konolige, B. Gerkey, and E. Berger.
1504.00702#115
1504.00702#117
1504.00702
[ "1509.06113" ]
1504.00702#117
End-to-End Training of Deep Visuomotor Policies
Autonomous door opening and plugging in with a personal robot. In International Conference on Robotics and Automation (ICRA), 2010. V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing Atari with deep reinforcement learning. NIPS '13 Workshop on Deep Learning, 2013. K. Mohta, V. Kumar, and K.
1504.00702#116
1504.00702#118
1504.00702
[ "1509.06113" ]
1504.00702#118
End-to-End Training of Deep Visuomotor Policies
Daniilidis. Vision based control of a quadrotor for perching on planes and lines. In International Conference on Robotics and Automation (ICRA), 2014. I. Mordatch and E. Todorov. Combining the benefits of function approximation and trajectory optimization. In Robotics: Science and Systems (RSS), 2014. A. Y. Ng, H. J. Kim, M. I. Jordan, and S. Sastry.
1504.00702#117
1504.00702#119
1504.00702
[ "1509.06113" ]
1504.00702#119
End-to-End Training of Deep Visuomotor Policies
Inverted autonomous helicopter flight via reinforcement learning. In International Symposium on Experimental Robotics, 2004. R. Pascanu and Y. Bengio. On the difficulty of training recurrent neural networks. Technical Report arXiv:1211.5063, Université de Montréal, 2012. B. Pepik, M. Stark, P. Gehler, and B. Schiele. Teaching 3D geometry to deformable part models. In Computer Vision and Pattern Recognition (CVPR), 2012.
1504.00702#118
1504.00702#120
1504.00702
[ "1509.06113" ]
1504.00702#120
End-to-End Training of Deep Visuomotor Policies
J. Peters and S. Schaal. Applying the episodic natural actor-critic architecture to motor primitive learning. In European Symposium on Artificial Neural Networks (ESANN), 2007. J. Peters and S. Schaal. Reinforcement learning of motor skills with policy gradients. Neural Networks, 21(4):682–697, 2008. J. Peters, K. Mülling, and Y. Altün.
1504.00702#119
1504.00702#121
1504.00702
[ "1509.06113" ]
1504.00702#121
End-to-End Training of Deep Visuomotor Policies
Relative entropy policy search. In AAAI Conference on Artificial Intelligence, 2010. L. Pinto and A. Gupta. Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. CoRR, abs/1509.06825, 2015. D. Pomerleau. ALVINN: an autonomous land vehicle in a neural network. In Advances in Neural Information Processing Systems (NIPS), 1989.
1504.00702#120
1504.00702#122
1504.00702
[ "1509.06113" ]
1504.00702#122
End-to-End Training of Deep Visuomotor Policies
S. Ross, G. Gordon, and A. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. Journal of Machine Learning Research, 15:627–635, 2011. S. Ross, N. Melik-Barkhudarov, K. Shaurya Shankar, A. Wendel, D. Dey, J. A. Bagnell, and M. Hebert. Learning monocular reactive UAV control in cluttered natural environments. In International Conference on Robotics and Automation (ICRA), 2013.
1504.00702#121
1504.00702#123
1504.00702
[ "1509.06113" ]
1504.00702#123
End-to-End Training of Deep Visuomotor Policies
R. Rubinstein and D. Kroese. The Cross-Entropy Method: A Unified Approach to Combinatorial Optimization, Monte-Carlo Simulation and Machine Learning. Springer, 2004. S. Savarese and L. Fei-Fei. 3D generic object categorization, localization and pose estimation. In International Conference on Computer Vision (ICCV), 2007. J. Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85–117, 2015. P. Y. Simard, D. Steinkraus, and J. C. Platt.
1504.00702#122
1504.00702#124
1504.00702
[ "1509.06113" ]
1504.00702#124
End-to-End Training of Deep Visuomotor Policies
Best practices for convolutional neural networks applied to visual document analysis. In Seventh International Conference on Document Analysis and Recognition, 2003. F. Stulp and O. Sigaud. Path integral policy improvement with covariance matrix adaptation. In International Conference on Machine Learning (ICML), 2012. J. Sung, S. H. Jin, and A. Saxena. Robobarista: Object part based transfer of manipulation trajectories from crowd-sourcing in 3d pointclouds. CoRR, abs/1504.03071, 2015. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014. R. Tedrake, T. Zhang, and H. Seung.
1504.00702#123
1504.00702#125
1504.00702
[ "1509.06113" ]
1504.00702#125
End-to-End Training of Deep Visuomotor Policies
Stochastic policy gradient reinforcement learning on a simple 3d biped. In International Conference on Intelligent Robots and Systems (IROS), 2004. E. Theodorou, J. Buchli, and S. Schaal. Reinforcement learning of motor skills in high dimensions. In International Conference on Robotics and Automation (ICRA), 2010. E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012.
1504.00702#124
1504.00702#126
1504.00702
[ "1509.06113" ]
1504.00702#126
End-to-End Training of Deep Visuomotor Policies
J. J. Tompson, A. Jain, Y. LeCun, and C. Bregler. Joint training of a convolutional network and a graphical model for human pose estimation. In Advances in Neural Information Processing Systems (NIPS), 2014. J. Uijlings, K. van de Sande, T. Gevers, and A. Smeulders. Selective search for object recognition. International Journal of Computer Vision, 2013. H. van Hoof, J. Peters, and G. Neumann. Learning of non-parametric control policies with high-dimensional state features. In International Conference on Artificial Intelligence and Statistics, 2015.
1504.00702#125
1504.00702#127
1504.00702
[ "1509.06113" ]
1504.00702#127
End-to-End Training of Deep Visuomotor Policies
H. Wang and A. Banerjee. Bregman alternating direction method of multipliers. In Advances in Neural Information Processing Systems (NIPS), 2014. M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems (NIPS), 2015. R. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, May 1992. W. J. Wilson, C. W. Williams Hulls, and G. S. Bell.
1504.00702#126
1504.00702#128
1504.00702
[ "1509.06113" ]
1504.00702#128
End-to-End Training of Deep Visuomotor Policies
Relative end-effector control using Cartesian position based visual servoing. IEEE Transactions on Robotics and Automation, 12(5), 1996. K. A. Wyrobek, E. H. Berger, H. F. M. Van der Loos, and K. Salisbury. Towards a personal robotics development platform: Rationale and design of an intrinsically safe personal robot. In International Conference on Robotics and Automation (ICRA), 2008. B. H. Yoshimi and P. K. Allen.
1504.00702#127
1504.00702#129
1504.00702
[ "1509.06113" ]
1504.00702#129
End-to-End Training of Deep Visuomotor Policies
Active, uncalibrated visual servoing. In International Conference on Robotics and Automation (ICRA), 1994.
1504.00702#128
1504.00702
[ "1509.06113" ]
1504.00325#0
Microsoft COCO Captions: Data Collection and Evaluation Server
arXiv:1504.00325v2 [cs.CV] 3 Apr 2015 # Microsoft COCO Captions: Data Collection and Evaluation Server Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, C. Lawrence Zitnick Abstract—
1504.00325#1
1504.00325
[ "1502.03671" ]
1504.00325#1
Microsoft COCO Captions: Data Collection and Evaluation Server
In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided. 1 INTRODUCTION The automatic generation of captions for images is a long-standing and challenging problem in artificial intelligence [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19]. Research in this area spans numerous domains, such as computer vision, natural language processing, and machine learning. Recently there has been a surprising resurgence of interest in this area [20], [21], [22], [23], [24], [25], [26], [27], [28], [29], [30], due to the renewed interest in neural network learning techniques [31], [32] and increasingly large datasets [33], [34], [35], [7], [36], [37], [38]. In this paper, we describe our process of collecting captions for the Microsoft COCO Caption dataset, and the evaluation server we have set up to evaluate performance of different algorithms. The MS COCO caption dataset contains human generated captions for images contained in the Microsoft Common Objects in COntext (COCO) dataset [38]. Similar to previous datasets [7], [36], we collect our captions using Amazon's Mechanical Turk (AMT). Upon completion of the dataset it will contain over a million captions.
1504.00325#0
1504.00325#2
1504.00325
[ "1502.03671" ]
1504.00325#2
Microsoft COCO Captions: Data Collection and Evaluation Server
A large bus sitting next to a very tall building. The man at bat readies to swing at the pitch while the umpire looks on. Bunk bed with a narrow shelf sitting underneath it. A horse carrying a large load of hay and two people sitting on it. Fig. 1: Example images and captions from the Microsoft COCO Caption dataset. When evaluating image caption generation algorithms, it is essential that a consistent evaluation protocol is used. Comparing results from different approaches can be difficult since numerous evaluation metrics exist [39], [40], [41], [42]. To further complicate matters, the implementations of these metrics often differ. To help alleviate these issues, we have built an evaluation server to enable consistency in evaluation of different caption generation approaches. Using the testing data, our evaluation server evaluates captions output by different approaches using numerous automatic metrics:
1504.00325#1
1504.00325#3
1504.00325
[ "1502.03671" ]
1504.00325#3
Microsoft COCO Captions: Data Collection and Evaluation Server
BLEU [39], METEOR [41], ROUGE [40] and CIDEr [42]. We hope to augment these results with human evaluations on an annual basis. This paper is organized as follows: First we describe the data collection process. Next, we describe the caption evaluation server and the various metrics used. Human performance using these metrics is provided. Finally, the annotation format and instructions for using the evaluation server are described for those who wish to submit results. We conclude by discussing future directions and known issues.
1504.00325#2
1504.00325#4
1504.00325
[ "1502.03671" ]
1504.00325#4
Microsoft COCO Captions: Data Collection and Evaluation Server
• Xinlei Chen is with Carnegie Mellon University. • Hao Fang is with the University of Washington. • T.Y. Lin is with Cornell NYC Tech. • Ramakrishna Vedantam is with Virginia Tech. • Saurabh Gupta is with the University of California, Berkeley. • P. Dollár is with Facebook AI Research. • C. L. Zitnick is with Microsoft Research, Redmond. # 2 DATA COLLECTION In this section we describe how the data is gathered for the MS COCO captions dataset. For images, we use the dataset collected by Microsoft COCO [38]. These images are split into training, validation and testing sets.
1504.00325#3
1504.00325#5
1504.00325
[ "1502.03671" ]
1504.00325#5
Microsoft COCO Captions: Data Collection and Evaluation Server
The images were gathered by searching for pairs of 80 object categories and various scene types on Flickr. The goal of the MS COCO image collection process was to gather images containing multiple objects in their natural context. Given the visual complexity of most images in the dataset, they pose an interesting and difficult challenge for image captioning. For generating a dataset of image captions, the same training, validation and testing sets were used as in the original MS COCO dataset. Two datasets were collected.
1504.00325#4
1504.00325#6
1504.00325
[ "1502.03671" ]
1504.00325#6
Microsoft COCO Captions: Data Collection and Evaluation Server
The first dataset, MS COCO c5, contains five reference captions for every image in the MS COCO training, validation and testing datasets. The second dataset, MS COCO c40, contains 40 reference sentences for a randomly chosen 5,000 images from the MS COCO testing dataset. MS COCO c40 was created since many automatic evaluation metrics achieve higher correlation with human judgement when given more reference sentences [42]. MS COCO c40 may be expanded to include the MS COCO validation dataset in the future. Our process for gathering captions received significant inspiration from the work of Young et al. [36] and Hodosh et al. [7] that collected captions on Flickr images using Amazon's Mechanical Turk (AMT). Each of our captions is also generated using human subjects on AMT. Each subject was shown the user interface in Figure 2. The subjects were instructed to: • Describe all the important parts of the scene. • Do not start the sentences with "There is." • Do not describe unimportant details. • Do not describe things that might have happened
• Do not describe what a person might say.
• Do not give people proper names.
• The sentences should contain at least 8 words.

Fig. 2: Example user interface for the caption gathering task.

The number of captions gathered is 413,915 captions for 82,783 images in training, 202,520 captions for 40,504 images in validation and 379,249 captions for 40,775 images in testing, including 179,189 for MS COCO c5 and 200,060 for MS COCO c40. For each testing image, we collected one additional caption to compute the scores of human performance for comparing scores of machine generated captions. The total number of collected captions is 1,026,459. We plan to collect captions for the MS COCO 2015 dataset when it is released, which should approximately double the size of the caption dataset. The AMT interface may be obtained from the MS COCO website.

3 CAPTION EVALUATION

In this section we describe the MS COCO caption evaluation server. Instructions for using the evaluation server are provided in Section 5. As input the evaluation server receives candidate captions for both the validation and testing datasets in the format specified in Section 5. The validation and test images are provided to the submitter. However, the human generated reference sentences are only provided for the validation set. The reference sentences for the testing set are kept private to reduce the risk of overfitting.
Numerous evaluation metrics are computed on both MS COCO c5 and MS COCO c40. These include BLEU-1, BLEU-2, BLEU-3, BLEU-4, ROUGE-L, METEOR and CIDEr-D. The details of these metrics are described next.

# 3.1 Tokenization and preprocessing

Both the candidate captions and the reference captions are pre-processed by the evaluation server. To tokenize the captions, we use the Stanford PTBTokenizer in Stanford CoreNLP tools (version 3.4.1) [43], which mimics Penn Treebank 3 tokenization. In addition, punctuation tokens (quotation marks, brackets such as -LRB- and -RRB-, ., ?, !, ,, :, -, --, ..., and ;) are removed from the tokenized captions.
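For concreteness, the snippet below sketches this preprocessing step in Python. It is only an approximation: the actual server relies on the Stanford PTBTokenizer, whereas `simple_tokenize`, the lowercasing, the regular expression, and the exact spelling of the quote tokens in `PUNCTUATIONS` are simplifications and assumptions introduced for illustration.

```python
import re

# Punctuation tokens stripped after tokenization; the quote-token entries
# are an assumption about the exact list used by the server.
PUNCTUATIONS = {"''", "'", "``", "`", "-LRB-", "-RRB-", "-LCB-", "-RCB-",
                ".", "?", "!", ",", ":", "-", "--", "...", ";"}

def simple_tokenize(caption):
    """Rough stand-in for the PTBTokenizer: split words and punctuation runs."""
    tokens = re.findall(r"[a-z0-9]+|[^\sa-z0-9]+", caption.lower())
    return [t for t in tokens if t not in PUNCTUATIONS]

print(simple_tokenize("A man riding a wave on top of a surfboard."))
# ['a', 'man', 'riding', 'a', 'wave', 'on', 'top', 'of', 'a', 'surfboard']
```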
3.2 Evaluation metrics

Our goal is to automatically evaluate for an image I_i the quality of a candidate caption c_i given a set of reference captions S_i = {s_i1, ..., s_im} ∈ S. The caption sentences are represented using sets of n-grams, where an n-gram ω_k ∈ Ω is a set of one or more ordered words. In this paper we explore n-grams with one to four words. No stemming is performed on the words. The number of times an n-gram ω_k occurs in a sentence s_ij is denoted h_k(s_ij), or h_k(c_i) for the candidate sentence c_i ∈ C.

# 3.3 BLEU

BLEU [39] is a popular machine translation metric that analyzes the co-occurrences of n-grams between the candidate and reference sentences. It computes a corpus-level clipped n-gram precision between sentences as follows:

$$CP_n(C, S) = \frac{\sum_i \sum_k \min\big(h_k(c_i), \max_j h_k(s_{ij})\big)}{\sum_i \sum_k h_k(c_i)} \qquad (1)$$
where k indexes the set of possible n-grams of length n. The clipped precision metric limits the number of times an n-gram may be counted to the maximum number of times it is observed in a single reference sentence. Note that CP_n is a precision score and it favors short sentences, so a brevity penalty is also used:

$$b(C, S) = \begin{cases} 1 & \text{if } l_C > l_S \\ e^{1 - l_S / l_C} & \text{if } l_C \le l_S \end{cases} \qquad (2)$$

where l_C is the total length of the candidate sentences c_i and l_S is the corpus-level effective reference length. When there are multiple references for a candidate sentence, we choose to use the closest reference length for the brevity penalty. The overall BLEU score is computed using a weighted geometric mean of the individual n-gram precisions:

$$\mathrm{BLEU}_N(C, S) = b(C, S) \exp\left(\sum_{n=1}^{N} w_n \log CP_n(C, S)\right) \qquad (3)$$

where N = 1, 2, 3, 4 and w_n is typically held constant for all n. BLEU has shown good performance for corpus-level comparisons over which a high number of n-gram matches exist. However, at a sentence level the n-gram matches for higher n rarely occur. As a result, BLEU performs poorly when comparing individual sentences.
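The following Python sketch mirrors Eqs. (1)-(3) for pre-tokenized sentences. It is a minimal illustration rather than the server implementation: corpus bookkeeping is simplified, and the helper names (`ngram_counts`, `bleu`) are our own.

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    """h_k(.): how often each n-gram of length n occurs in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidates, references, N=4):
    """candidates: list of token lists; references: list of lists of token lists."""
    weights = [1.0 / N] * N
    log_precisions = []
    for n in range(1, N + 1):
        clipped, total = 0, 0
        for cand, refs in zip(candidates, references):
            cand_counts = ngram_counts(cand, n)
            # Clip each n-gram count at its maximum count in any single reference.
            max_ref = Counter()
            for ref in refs:
                for k, v in ngram_counts(ref, n).items():
                    max_ref[k] = max(max_ref[k], v)
            clipped += sum(min(c, max_ref[k]) for k, c in cand_counts.items())
            total += sum(cand_counts.values())
        log_precisions.append(math.log(clipped / total) if clipped else float("-inf"))
    # Brevity penalty of Eq. (2), using the closest reference length per candidate.
    l_c = sum(len(c) for c in candidates)
    l_s = sum(min((abs(len(r) - len(c)), len(r)) for r in refs)[1]
              for c, refs in zip(candidates, references))
    bp = 1.0 if l_c > l_s else math.exp(1.0 - l_s / l_c)
    return bp * math.exp(sum(w * lp for w, lp in zip(weights, log_precisions)))

score = bleu([["a", "man", "riding", "a", "horse"]],
             [[["a", "person", "riding", "a", "horse"], ["a", "man", "on", "a", "horse"]]])
```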
# 3.4 ROUGE

ROUGE [40] is a set of evaluation metrics designed to evaluate text summarization algorithms.

1) ROUGE_N: The first ROUGE metric computes a simple n-gram recall over all reference summaries given a candidate sentence:

$$\mathrm{ROUGE}_N(c_i, S_i) = \frac{\sum_j \sum_k \min\big(h_k(c_i), h_k(s_{ij})\big)}{\sum_j \sum_k h_k(s_{ij})} \qquad (4)$$

2) ROUGE_L: ROUGE_L uses a measure based on the Longest Common Subsequence (LCS). An LCS is a set of words shared by two sentences which occur in the same order. However, unlike n-grams, there may be words in between the words that create the LCS. Given the length l(c_i, s_ij) of the LCS between a pair of sentences, ROUGE_L is found by computing an F-measure:

$$R_l = \max_j \frac{l(c_i, s_{ij})}{|s_{ij}|} \qquad (5)$$

$$P_l = \max_j \frac{l(c_i, s_{ij})}{|c_i|} \qquad (6)$$

$$\mathrm{ROUGE}_L(c_i, S_i) = \frac{(1 + \beta^2) R_l P_l}{R_l + \beta^2 P_l} \qquad (7)$$
R_l and P_l are the recall and precision of the LCS. β is usually set to favor recall (β = 1.2). Since n-grams are implicit in this measure due to the use of the LCS, they need not be specified.

3) ROUGE_S: The final ROUGE metric uses skip bi-grams instead of the LCS or n-grams. Skip bi-grams are pairs of ordered words in a sentence. However, similar to the LCS, words may be skipped between pairs of words. Thus, a sentence with 4 words would have $\binom{4}{2} = 6$ skip bi-grams. Precision and recall are again incorporated to compute an F-measure score. If f_k(s_ij) is the skip bi-gram count for sentence s_ij, ROUGE_S is computed as:

$$R_s = \max_j \frac{\sum_k \min\big(f_k(c_i), f_k(s_{ij})\big)}{\sum_k f_k(s_{ij})} \qquad (8)$$

$$P_s = \max_j \frac{\sum_k \min\big(f_k(c_i), f_k(s_{ij})\big)}{\sum_k f_k(c_i)} \qquad (9)$$

$$\mathrm{ROUGE}_S(c_i, S_i) = \frac{(1 + \beta^2) R_s P_s}{R_s + \beta^2 P_s} \qquad (10)$$

Skip bi-grams are capable of capturing long range sentence structure. In practice, skip bi-grams are computed so that the component words occur at a distance of at most 4 from each other.
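A minimal Python sketch of ROUGE_L (Eqs. 5-7) and of skip bi-gram extraction is given below; the function names and the interpretation of the maximum gap as a difference in token positions are our own choices, not taken from the reference implementation.

```python
from collections import Counter
from itertools import combinations

def lcs_length(a, b):
    """Length of the longest common subsequence of token lists a and b."""
    table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            table[i][j] = (table[i - 1][j - 1] + 1 if x == y
                           else max(table[i - 1][j], table[i][j - 1]))
    return table[len(a)][len(b)]

def rouge_l(candidate, references, beta=1.2):
    """ROUGE_L of Eqs. (5)-(7); candidate is a token list, references a list of them."""
    r = max(lcs_length(candidate, ref) / len(ref) for ref in references)
    p = max(lcs_length(candidate, ref) / len(candidate) for ref in references)
    if r == 0 or p == 0:
        return 0.0
    return (1 + beta ** 2) * r * p / (r + beta ** 2 * p)

def skip_bigrams(tokens, max_gap=4):
    """f_k(.): ordered word pairs whose positions differ by at most max_gap."""
    return Counter((tokens[i], tokens[j])
                   for i, j in combinations(range(len(tokens)), 2) if j - i <= max_gap)
```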
# 3.5 METEOR

METEOR [41] is calculated by generating an alignment between the words in the candidate and reference sentences, with an aim of 1:1 correspondence. This alignment is computed while minimizing the number of chunks, ch, of contiguous and identically ordered tokens in the sentence pair. The alignment is based on exact token matching, followed by WordNet synonyms [44], stemmed tokens and then paraphrases. Given a set of alignments, m, the METEOR score is the harmonic mean of precision P_m and recall R_m between the best scoring reference and candidate:
$$Pen = \gamma \left(\frac{ch}{|m|}\right)^{\theta} \qquad (11)$$

$$F_{mean} = \frac{P_m R_m}{\alpha P_m + (1 - \alpha) R_m} \qquad (12)$$

$$P_m = \frac{|m|}{\sum_k h_k(c_i)} \qquad (13)$$

$$R_m = \frac{|m|}{\sum_k h_k(s_{ij})} \qquad (14)$$

$$\mathrm{METEOR} = (1 - Pen) F_{mean} \qquad (15)$$

Thus, the final METEOR score includes a penalty Pen based on the chunkiness of the resolved matches and a harmonic mean term that gives the quality of the resolved matches. The default parameters α, γ and θ are used for this evaluation. Note that similar to BLEU, statistics of precision and recall are first aggregated over the entire corpus, which are then combined to give the corpus-level METEOR score.
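Because the alignment stage (exact, synonym, stem, and paraphrase matching) is the involved part of METEOR, the sketch below only shows how the aligned statistics are combined via Eqs. (11)-(15). The default parameter values shown are commonly cited ones and are an assumption; they are not necessarily those used by the evaluation server.

```python
def meteor_from_alignment(num_matches, cand_len, ref_len, num_chunks,
                          alpha=0.9, gamma=0.5, theta=3.0):
    """Combine alignment statistics |m|, candidate/reference lengths, and chunk
    count ch into a METEOR score following Eqs. (11)-(15)."""
    if num_matches == 0:
        return 0.0
    precision = num_matches / cand_len                                        # Eq. (13)
    recall = num_matches / ref_len                                            # Eq. (14)
    f_mean = precision * recall / (alpha * precision + (1 - alpha) * recall)  # Eq. (12)
    penalty = gamma * (num_chunks / num_matches) ** theta                     # Eq. (11)
    return (1 - penalty) * f_mean                                             # Eq. (15)

# A perfectly aligned 8-word caption in one contiguous chunk scores close to 1.
print(meteor_from_alignment(num_matches=8, cand_len=8, ref_len=8, num_chunks=1))
```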
# 3.6 CIDEr

The CIDEr metric [42] measures consensus in image captions by performing a Term Frequency Inverse Document Frequency (TF-IDF) weighting for each n-gram. The number of times an n-gram ω_k occurs in a reference sentence s_ij is denoted by h_k(s_ij), or h_k(c_i) for the candidate sentence c_i. CIDEr computes the TF-IDF weighting g_k(s_ij) for each n-gram ω_k using:

$$g_k(s_{ij}) = \frac{h_k(s_{ij})}{\sum_{\omega_l \in \Omega} h_l(s_{ij})} \log\left(\frac{|I|}{\sum_{I_p \in I} \min\big(1, \sum_q h_k(s_{pq})\big)}\right) \qquad (16)$$
where Ω is the vocabulary of all n-grams and I is the set of all images in the dataset. The first term measures the TF of each n-gram ω_k, and the second term measures the rarity of ω_k using its IDF. Intuitively, TF places higher weight on n-grams that frequently occur in the reference sentences describing an image, while IDF reduces the weight of n-grams that commonly occur across all descriptions. That is, the IDF provides a measure of word saliency by discounting popular words that are likely to be less visually informative. The IDF is computed using the logarithm of the number of images in the dataset, |I|, divided by the number of images for which ω_k occurs in any of its reference sentences.
The CIDEr_n score for n-grams of length n is computed using the average cosine similarity between the candidate sentence and the reference sentences, which accounts for both precision and recall:

$$\mathrm{CIDEr}_n(c_i, S_i) = \frac{1}{m} \sum_j \frac{g^n(c_i) \cdot g^n(s_{ij})}{\|g^n(c_i)\| \, \|g^n(s_{ij})\|} \qquad (17)$$

where g^n(c_i) is a vector formed by the g_k(c_i) corresponding to all n-grams of length n and ||g^n(c_i)|| is the magnitude of the vector g^n(c_i); similarly for g^n(s_ij). Higher order (longer) n-grams are used to capture grammatical properties as well as richer semantics. Scores from n-grams of varying lengths are combined as follows:

$$\mathrm{CIDEr}(c_i, S_i) = \sum_{n=1}^{N} w_n \, \mathrm{CIDEr}_n(c_i, S_i) \qquad (18)$$

Uniform weights w_n = 1/N are used, with N = 4.
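The Python sketch below follows Eqs. (16)-(18) for pre-tokenized sentences. It is illustrative only: the evaluation server uses the CIDEr-D variant described next, the IDF here is computed from the supplied reference sets, and the guard against n-grams unseen in the references (`max(df[k], 1)`) is a simplification we introduce.

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def cider(candidates, references, N=4):
    """candidates: list of token lists; references: list of lists of token lists.
    Returns one score per image, combining n-gram lengths with w_n = 1/N."""
    num_images = len(references)
    scores = [0.0] * len(candidates)
    for n in range(1, N + 1):
        # Document frequency: number of images whose references contain the n-gram.
        df = Counter()
        for refs in references:
            seen = set()
            for ref in refs:
                seen.update(ngram_counts(ref, n))
            df.update(seen)

        def tfidf(tokens):
            counts = ngram_counts(tokens, n)
            total = sum(counts.values()) or 1
            return {k: (v / total) * math.log(num_images / max(df[k], 1))
                    for k, v in counts.items()}

        def cosine(u, v):
            dot = sum(u[k] * v.get(k, 0.0) for k in u)
            norm_u = math.sqrt(sum(x * x for x in u.values()))
            norm_v = math.sqrt(sum(x * x for x in v.values()))
            return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

        for i, (cand, refs) in enumerate(zip(candidates, references)):
            g_c = tfidf(cand)
            avg_sim = sum(cosine(g_c, tfidf(ref)) for ref in refs) / len(refs)
            scores[i] += avg_sim / N   # uniform weights w_n = 1/N
    return scores
```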
CIDEr-D is a modification to CIDEr to make it more robust to gaming. Gaming refers to the phenomenon where a sentence that is poorly judged by humans tends to score highly with an automated metric. To defend the CIDEr metric against gaming effects, [42] add clipping and a length based gaussian penalty to the CIDEr metric described above. This results in the following equations for CIDEr-D:
$$\mathrm{CIDEr\text{-}D}_n(c_i, S_i) = \frac{10}{m} \sum_j e^{\frac{-(l(c_i) - l(s_{ij}))^2}{2\sigma^2}} \, \frac{\min\big(g^n(c_i), g^n(s_{ij})\big) \cdot g^n(s_{ij})}{\|g^n(c_i)\| \, \|g^n(s_{ij})\|} \qquad (19)$$

where l(c_i) and l(s_ij) denote the lengths of the candidate and reference sentences respectively. σ = 6 is used. A factor of 10 is used in the numerator to make the CIDEr-D scores numerically similar to the other metrics. The final CIDEr-D metric is computed in a similar manner to CIDEr (analogous to Eq. 18):

$$\mathrm{CIDEr\text{-}D}(c_i, S_i) = \sum_{n=1}^{N} w_n \, \mathrm{CIDEr\text{-}D}_n(c_i, S_i) \qquad (20)$$

Note that just like the BLEU and ROUGE metrics, CIDEr-D does not use stemming. We adopt the CIDEr-D metric for the evaluation server.
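As a sketch of how Eq. (19) changes the per-pair similarity, the helper below applies the clipping and the Gaussian length penalty to a single candidate/reference pair of TF-IDF vectors (dictionaries such as those produced by the `tfidf` helper above); averaging it over the m references gives CIDEr-D_n. The function name and interface are our own.

```python
import math

def cider_d_pair(g_cand, g_ref, len_cand, len_ref, sigma=6.0):
    """Clipped, length-penalized similarity of Eq. (19) for one reference."""
    gauss = math.exp(-((len_cand - len_ref) ** 2) / (2 * sigma ** 2))
    clipped_dot = sum(min(w, g_ref.get(k, 0.0)) * g_ref.get(k, 0.0)
                      for k, w in g_cand.items())
    norm_c = math.sqrt(sum(x * x for x in g_cand.values()))
    norm_r = math.sqrt(sum(x * x for x in g_ref.values()))
    if norm_c == 0.0 or norm_r == 0.0:
        return 0.0
    return 10.0 * gauss * clipped_dot / (norm_c * norm_r)
```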
4 HUMAN PERFORMANCE

In this section, we study the agreement among humans at this task. We start with analyzing the inter-human agreement for image captioning (Section 4.1) and then analyze human agreement for the word prediction sub-task and provide a simple model which explains human agreement for this sub-task (Section 4.2).

# 4.1 Human Agreement for Image Captioning

When examining human agreement on captions, it becomes clear that there are many equivalent ways to say essentially the same thing. We quantify this by conducting the following experiment: We collect one additional human caption for each image in the test set and treat this caption as the prediction. Using the MS COCO caption evaluation server we compute the various metrics. The results are tabulated in Table 1.

TABLE 1: Human Agreement for Image Captioning: Various metrics when benchmarking a human generated caption against ground truth captions.

| Metric Name | MS COCO c5 | MS COCO c40 |
|---|---|---|
| BLEU 1 | 0.663 | 0.880 |
| BLEU 2 | 0.469 | 0.744 |
| BLEU 3 | 0.321 | 0.603 |
| BLEU 4 | 0.217 | 0.471 |
| METEOR | 0.252 | 0.335 |
| ROUGE_L | 0.484 | 0.626 |
| CIDEr-D | 0.854 | 0.910 |

# 4.2 Human Agreement for Word Prediction

We can do a similar analysis for human agreement at the sub-task of word prediction. Consider the task of tagging the image with words that occur in the captions. For this task, we can compute the human precision and recall for
a given word w by benchmarking words used in the (k+1)-th human caption with respect to words used in the first k reference captions. Note that we use weighted versions of precision and recall, where each negative image has a weight of 1 and each positive image has a weight equal to the number of captions containing the word w. Human precision (Hp) and human recall (Hr) can be computed from the counts of how many subjects out of k use the word w to describe a given image over the whole dataset. We plot Hp versus Hr for a set of nouns, verbs and adjectives, and all 1000 words considered in Figure 3. Nouns referring to animals like "elephant" have a high recall, which means that if an "elephant" exists in the image, a subject is likely to talk about it (which makes intuitive sense, given "elephant" images are somewhat rare, and there are no alternative words that could be used instead of "elephant"). On the other hand, an adjective like "bright"
is used inconsistently and hence has low recall. Interestingly, words with high recall also have high precision. Indeed, all the points of human agreement appear to lie on a one-dimensional curve in the two-dimensional precision-recall space. This observation motivates us to propose a simple model for when subjects use a particular word w for describing an image. Let o denote an object or visual concept associated with word w, n be the total number of images, and k be the number of reference captions. Next, let q = P(o = 1) be the probability that object o exists in an image.
For clarity these definitions are summarized in Table 2.

TABLE 2: Model definitions.

o: object or visual concept
w: word associated with o
n: total number of images
k: number of captions per image
q: P(o = 1)
p: P(w = 1 | o = 1)

We make two simplifications. First, we ignore image level saliency and instead focus on word level saliency. Specifically, we only model p = P(w = 1 | o = 1), the probability a subject uses w given that o is in the image, without conditioning on the image itself. Second, we assume that P(w = 1 | o = 0) = 0, i.e. that a subject does not use w unless o is in the image. As we will show, even with these simplifications our model suffices to explain the empirical observations in Figure 3 to a reasonable degree of accuracy. Given these assumptions, we can model human precision H_p and recall H_r for a word w given only p and k. First, given k captions per image, we need to compute the expected number of (1) captions containing w (cw), (2) true positives (tp), and (3) false positives (fp). Note that in our definition there can be up to k true positives per image (if cw = k, i.e. each of the k captions contains word w) but at most 1 false positive (if none of the k captions contains w). The expectations, in terms of k, p, and q, are:

$$E[c_w] = \sum_{i=1}^{k} P(w_i = 1) = \sum_i P(w_i = 1 \mid o = 1)P(o = 1) + \sum_i P(w_i = 1 \mid o = 0)P(o = 0) = kpq + 0 = kpq$$

$$E[tp] = \sum_{i=1}^{k} P(w_i = 1 \wedge w_{k+1} = 1) = \sum_i P(w_i = 1 \wedge w_{k+1} = 1 \mid o = 1)P(o = 1) + \sum_i P(w_i = 1 \wedge w_{k+1} = 1 \mid o = 0)P(o = 0) = kp^2 q + 0 = kp^2 q$$
$$E[fp] = P(w_1 \ldots w_k = 0 \wedge w_{k+1} = 1) = P(o = 1 \wedge w_1 \ldots w_k = 0 \wedge w_{k+1} = 1) + P(o = 0 \wedge w_1 \ldots w_k = 0 \wedge w_{k+1} = 1) = q(1-p)^k p + 0 = q(1-p)^k p$$

In the above, w_i = 1 denotes that w appeared in the i-th caption. Note that we are also assuming independence between subjects conditioned on o. We can now define model precision and recall as:

$$H_p = \frac{nE[tp]}{nE[tp] + nE[fp]} = \frac{pk}{pk + (1-p)^k}, \qquad H_r = \frac{nE[tp]}{nE[c_w]} = p$$

Note that these expressions are independent of q and only depend on p. Interestingly, because of the use of weighted precision and recall, the recall for a category comes out to be exactly equal to p, the probability a subject uses w given that o is in the image. We set k = 4 and vary p to plot H_p versus H_r, getting the curve shown in blue in Figure 3 (bottom left). The curve explains the observed data quite well, closely matching the precision-recall tradeoffs of the empirical data (although not perfectly). We can also reduce the number of captions from four, and look at how the empirical and predicted precision and recall change. Figure 3 (bottom right) shows this variation as we reduce the number of reference captions per image from four to one. We see that the points of human agreement remain at the same recall value, but decrease in their precision, which is consistent with what the model predicts. Also, the human precision with infinite subjects will approach one, which is again reasonable given that a subject will only use the word w if the corresponding object is in the image (and in the presence of infinite subjects someone else will also use the word w). In fact, the fixed recall value can help us recover p, the probability that a subject will use the word w in describing the image given the object is present. Nouns like "elephant" and "tennis" have large p, which is reasonable. Verbs and adjectives, on the other hand, have smaller p values, which can be justified from the fact that a) subjects are less likely to describe attributes
have large p, which is reasonable. Verbs and adjectives, on the other hand, have smaller p values, which can be justiï¬ ed from the fact that a) subjects are less likely to describe attributes 5 6 Nouns boy ote Precision Precision Adjectives Precision Recall Precision Recall Recall "Recall Precision â number of reference = 1 Recall Fig. 3: Precision-recall points for human agreement: we compute precision and recall by treating one human caption as prediction and benchmark it against the others to obtain points on the precision recall curve. We plot these points for example nouns (top left), adjectives (top center), and verbs (top right), and for all words (bottom left). We also plot the ï¬ t of our model for human agreement with the empirical data (bottom left) and show how the human agreement changes with different number of captions being used (bottom right). We see that the human agreement point remains at the same recall value but dips in precision when using fewer captions. of objects and b) subjects might use a different word (synonym) to describe the same attribute. This analysis of human agreement also motivates us- ing a different metric for measuring performance. We propose Precision at Human Recall (PHR) as a metric for measuring performance of a vision system perform- ing this task. Given that human recall for a particular word is ï¬ xed and precision varies with the number of annotations, we can look at system precision at human recall and compare it with human precision to report the performance of the vision system.
5 EVALUATION SERVER INSTRUCTIONS

Directions on how to use the MS COCO caption evaluation server can be found on the MS COCO website. The evaluation server is hosted by CodaLab. To participate, a user account on CodaLab must be created. The participants need to generate results on both the validation and testing datasets. When training for the generation of results on the test dataset, the training and validation dataset may be used as the participant sees fit.
That is, the validation dataset may be used for training if desired. However, when generating results on the validation set, we ask participants to only train on the training dataset, and only use the validation dataset for tuning meta-parameters. Two JSON files should be created corresponding to results on each dataset in the following format:

[{ "image_id" : int, "caption" : str }]

The results may then be placed into a zip file and uploaded to the server for evaluation. Code is also provided on GitHub to evaluate results on the validation dataset without having to upload to the server. The number of submissions per user is limited to a fixed amount.
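A minimal sketch of producing a correctly formatted results file is shown below; the image ids, captions, and the output filename are hypothetical placeholders.

```python
import json

# Hypothetical generated captions keyed by MS COCO image id.
results = {
    391895: "A man riding a wave on top of a surfboard.",
    522418: "A plate of food sitting on a wooden table.",
}

submission = [{"image_id": image_id, "caption": caption}
              for image_id, caption in results.items()]

# One such JSON file is produced per dataset (validation and test),
# then zipped and uploaded to the evaluation server.
with open("captions_results.json", "w") as f:
    json.dump(submission, f)
```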
# 6 DISCUSSION

Many challenges exist when creating an image caption dataset. As stated in [7], [42], [45], the captions generated by human subjects can vary significantly. However, even though two captions may be very different, they may be judged equally "good" by human subjects. Designing effective automatic evaluation metrics that are highly correlated with human judgment remains a difficult challenge [7], [42], [45], [46]. We hope that by releasing results on the validation data, we can help enable future research in this area. Since automatic evaluation metrics do not always correspond to human judgment, we hope to conduct experiments using human subjects to judge the quality of automatically generated captions, which are most similar to human captions, and whether they are grammatically correct [45], [42], [7], [4], [5]. This is essential to determining whether future algorithms are indeed improving, or whether they are merely overfitting to a specific metric.
These human experiments will also allow us to evaluate the automatic evaluation metrics themselves, and see which ones are correlated to human judgment.

# REFERENCES

[1] K. Barnard and D. Forsyth, "Learning the semantics of words and pictures," in ICCV, vol. 2, 2001, pp. 408–415.
[2] K. Barnard, P. Duygulu, D. Forsyth, N. De Freitas, D. M. Blei, and M. I. Jordan, "Matching words and pictures," JMLR, vol. 3, pp. 1107–1135, 2003.
[3] V. Lavrenko, R. Manmatha, and J. Jeon, "A model for learning the semantics of pictures," in NIPS, 2003.
[4] G. Kulkarni, V. Premraj, S. Dhar, S. Li, Y. Choi, A. C. Berg, and T. L. Berg, "Baby talk: Understanding and generating simple image descriptions," in CVPR, 2011.
[5] M. Mitchell, X. Han, J. Dodge, A. Mensch, A. Goyal, A. Berg, K. Yamaguchi, T. Berg, K. Stratos, and H. Daumé III, "Midge: Generating image descriptions from computer vision detections," in EACL, 2012.
[6] A. Farhadi, M. Hejrati, M. A. Sadeghi, P. Young, C. Rashtchian, J. Hockenmaier, and D. Forsyth, "Every picture tells a story: Generating sentences from images," in ECCV, 2010.
[7] M. Hodosh, P. Young, and J. Hockenmaier,