paper_id: SP:44aca1ce39be826e389afbff70936eb0ef774f8f
[ "This paper is about developing VAEs in non-Euclidean spaces. Fairly recently, ML researchers have developed non-Euclidean embeddings, initially in hyperbolic space (constant negative curvature), and then in product spaces that have varying curvatures. These ideas were developed for embeddings, and recent attempts have been made to build entire models that operate in non-Euclidean spaces. The authors develop VAEs for the product spaces case.", "This paper introduces a general formulation of the notion of a VAE with a latent space composed by a curved manifold. It follows the current trend of learning representations on curved spaces by proposing a formulation of the latent distributions of the VAE in a variety of fixed-curvature spaces, and introduces an approach to learn the curvature of the space itself. Extensive mathematical derivations are provided, as well as experiments illustrating the impact of various choices of latent manifolds on the performance of the VAE." ]
abstractText: Euclidean geometry has historically been the typical “workhorse” for machine learning applications due to its power and simplicity. However, it has recently been shown that geometric spaces with constant non-zero curvature improve representations and performance on a variety of data types and downstream tasks. Consequently, generative models like Variational Autoencoders (VAEs) have been successfully generalized to elliptical and hyperbolic latent spaces. While these approaches work well on data with particular kinds of biases, e.g. tree-like data for a hyperbolic VAE, there exists no generic approach unifying and leveraging all three models. We develop a Mixed-curvature Variational Autoencoder, an efficient way to train a VAE whose latent space is a product of constant curvature Riemannian manifolds, where the per-component curvature is fixed or learnable. This generalizes the Euclidean VAE to curved latent spaces and recovers it when the curvatures of all latent space components go to 0.
[ { "affiliations": [], "name": "Ondrej Skopek" }, { "affiliations": [], "name": "Octavian-Eugen Ganea" }, { "affiliations": [], "name": "Gary Bécigneul" } ]
[ { "authors": [ "Hervé Abdi", "Lynne J. Williams" ], "title": "Principal component analysis", "venue": "Wiley Interdisciplinary Reviews: Computational Statistics,", "year": 2010 }, { "authors": [ "Georgios Arvanitidis", "Lars Kai Hansen", "Søren Hauberg" ], "title": "Latent space oddity: on the curvature of deep generative models", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Gregor Bachmann", "Gary Bécigneul", "Octavian-Eugen Ganea" ], "title": "Constant Curvature Graph Convolutional Networks, 2020", "venue": "URL https://openreview.net/forum?id=BJg73xHtvr", "year": 2020 }, { "authors": [ "Kayhan Batmanghelich", "Ardavan Saeedi", "Karthik Narasimhan", "Sam Gershman" ], "title": "Nonparametric Spherical Topic Modeling with Word Embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 537–542", "venue": "Association for Computational Linguistics,", "year": 2016 }, { "authors": [ "Gary Bécigneul", "Octavian-Eugen Ganea" ], "title": "Riemannian Adaptive Optimization Methods", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Marcel Berger" ], "title": "A panoramic view of Riemannian geometry", "venue": "Springer Science & Business Media,", "year": 2012 }, { "authors": [ "Silvere Bonnabel" ], "title": "Stochastic Gradient Descent on Riemannian Manifolds", "venue": "IEEE Transactions on Automatic Control,", "year": 2013 }, { "authors": [ "Samuel R. Bowman", "Luke Vilnis", "Oriol Vinyals", "Andrew Dai", "Rafal Jozefowicz", "Samy Bengio" ], "title": "Generating Sentences from a Continuous Space", "venue": "In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning,", "year": 2016 }, { "authors": [ "M.M. Bronstein", "J. Bruna", "Y. LeCun", "A. Szlam", "P. Vandergheynst" ], "title": "Geometric Deep Learning: Going beyond Euclidean data", "venue": "IEEE Signal Processing Magazine,", "year": 2017 }, { "authors": [ "Yuri Burda", "Roger Grosse", "Ruslan Salakhutdinov" ], "title": "Importance Weighted Autoencoders", "venue": "In Proceedings of the 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Xi Chen", "Diederik P. Kingma", "Tim Salimans", "Yan Duan", "Prafulla Dhariwal", "John Schulman", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Variational Lossy Autoencoder", "venue": "In Proceedings of the 2nd International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Tim R. Davidson", "Luca Falorsi", "Nicola De Cao", "Thomas Kipf", "Jakub M. Tomczak" ], "title": "Hyperspherical Variational Auto-Encoders", "venue": "In UAI,", "year": 2018 }, { "authors": [ "Bhuwan Dhingra", "Christopher J Shallue", "Mohammad Norouzi", "Andrew M Dai", "George E Dahl" ], "title": "Embedding Text in Hyperbolic Spaces", "venue": "arXiv preprint arXiv:1806.04313,", "year": 2018 }, { "authors": [ "Robert L. 
Foote" ], "title": "A Unified Pythagorean Theorem in Euclidean, Spherical, and Hyperbolic Geometries", "venue": "Mathematics Magazine,", "year": 2017 }, { "authors": [ "Octavian Ganea", "Gary Bécigneul", "Thomas Hofmann" ], "title": "Hyperbolic Neural Networks", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Octavian-Eugen Ganea", "Gary Bécigneul", "Thomas Hofmann" ], "title": "Hyperbolic Entailment Cones for Learning Hierarchical Embeddings", "venue": "arXiv preprint arXiv:1804.01882,", "year": 2018 }, { "authors": [ "Mevlana C Gemici", "Danilo Rezende", "Shakir Mohamed" ], "title": "Normalizing Flows on Riemannian Manifolds", "venue": "arXiv preprint arXiv:1611.02304,", "year": 2016 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative Adversarial Nets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Albert Gu", "Frederic Sala", "Beliz Gunel", "Christopher Ré" ], "title": "Learning Mixed-Curvature Representations in Product Spaces", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Søren Hauberg" ], "title": "Directional Statistics with the Spherical Normal Distribution", "venue": "21st International Conference on Information Fusion (FUSION),", "year": 2018 }, { "authors": [ "Chin-Wei Huang", "David Krueger", "Alexandre Lacoste", "Aaron Courville" ], "title": "Neural autoregressive flows", "venue": "arXiv preprint arXiv:1804.00779,", "year": 2018 }, { "authors": [ "Edwin T Jaynes" ], "title": "Information theory and statistical mechanics", "venue": "Physical review,", "year": 1957 }, { "authors": [ "Yoon Kim", "Kelly Zhang", "Alexander M Rush", "Yann LeCun" ], "title": "Adversarially regularized autoencoders", "venue": "arXiv preprint arXiv:1706.04223,", "year": 2017 }, { "authors": [ "Diederik P. Kingma", "Jimmy Lei Ba" ], "title": "Adam: A Method for Stochastic Optimization", "venue": "International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-Encoding Variational Bayes", "venue": "In Proceedings of the 2nd International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Max Kochurov", "Sergey Kozlukov", "Rasul Karimov", "Viktor Yanush" ], "title": "Geoopt: Adaptive Riemannian optimization in PyTorch", "venue": null, "year": 2019 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning Multiple Layers of Features from Tiny Images", "venue": "Technical report, University of Toronto,", "year": 2009 }, { "authors": [ "Brenden M. Lake", "Ruslan Salakhutdinov", "Joshua B. Tenenbaum" ], "title": "Human-level concept learning through probabilistic program induction", "venue": "Science, 350(6266):1332–1338,", "year": 2015 }, { "authors": [ "Yann LeCun" ], "title": "The MNIST database of handwritten digits", "venue": "URL http://yann.lecun. com/exdb/mnist/", "year": 1998 }, { "authors": [ "J.M. Lee" ], "title": "Riemannian Manifolds: An Introduction to Curvature", "venue": "Graduate Texts in Mathematics. Springer New York,", "year": 1997 }, { "authors": [ "Hongbo Li", "David Hestenes", "Alyn Rockwood" ], "title": "A Universal Model for Conformal Geometries of Euclidean, Spherical and Double-Hyperbolic Spaces, pp. 
77–104", "venue": null, "year": 2001 }, { "authors": [ "Francesco Locatello", "Stefan Bauer", "Mario Lucic", "Sylvain Gelly", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Challenging common assumptions in the unsupervised learning of disentangled representations", "venue": "arXiv preprint arXiv:1811.12359,", "year": 2018 }, { "authors": [ "Emile Mathieu", "Charline Le Lan", "Chris J Maddison", "Ryota Tomioka", "Yee Whye Teh" ], "title": "Hierarchical Representations with Poincaré Variational Auto-Encoders", "venue": null, "year": 1901 }, { "authors": [ "Loı̈c Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework", "venue": "In Proceedings of the 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Yoshihiro Nagano", "Shoichiro Yamaguchi", "Yasuhiro Fujita", "Masanori Koyama" ], "title": "A differentiable gaussian-like distribution on hyperbolic space for gradient-based learning", "venue": "arXiv preprint arXiv:1902.02992,", "year": 2019 }, { "authors": [ "Maximilian Nickel", "Douwe Kiela" ], "title": "Learning Continuous Hierarchies in the Lorentz Model of Hyperbolic Geometry", "venue": "In Proceedings of the 35th International Conference on International Conference on Machine Learning - Volume 50,", "year": 2018 }, { "authors": [ "Maximillian Nickel", "Douwe Kiela" ], "title": "Poincaré Embeddings for Learning Hierarchical Representations", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Shirui Pan", "Ruiqi Hu", "Guodong Long", "Jing Jiang", "Lina Yao", "Chengqi Zhang" ], "title": "Adversarially regularized graph autoencoder for graph embedding", "venue": "arXiv preprint arXiv:1802.04407,", "year": 2018 }, { "authors": [ "Xavier Pennec" ], "title": "Intrinsic Statistics on Riemannian Manifolds: Basic Tools for Geometric Measurements", "venue": "Journal of Mathematical Imaging and Vision,", "year": 2006 }, { "authors": [ "Peter Petersen", "S Axler", "KA Ribet" ], "title": "Riemannian Geometry, volume 171", "venue": null, "year": 2006 }, { "authors": [ "John Ratcliffe" ], "title": "Foundations of Hyperbolic Manifolds, volume 149", "venue": "Springer Science & Business Media,", "year": 2006 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed" ], "title": "Variational Inference with Normalizing Flows", "venue": "arXiv preprint arXiv:1505.05770,", "year": 2015 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic Backpropagation and Approximate Inference in Deep Generative Models", "venue": "arXiv preprint arXiv:1401.4082,", "year": 2014 }, { "authors": [ "Frederic Sala", "Chris De Sa", "Albert Gu", "Christopher Re" ], "title": "Representation tradeoffs for hyperbolic embeddings", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Ruslan Salakhutdinov", "Iain Murray" ], "title": "On the Quantitative Analysis of Deep Belief Networks", "venue": "In Proceedings of the 25th International Conference on Machine Learning,", "year": 2008 }, { "authors": [ "Martin Simonovsky", "Nikos Komodakis" ], "title": "GraphVAE: Towards generation of small graphs using variational autoencoders", "venue": "In International Conference on Artificial Neural Networks,", "year": 2018 }, { "authors": [ "John Parr Snyder" ], "title": "Map 
projections–A working manual, volume 1395", "venue": "US Government Printing Office,", "year": 1987 }, { "authors": [ "Alexandru Tifrea", "Gary Bécigneul", "Octavian-Eugen Ganea" ], "title": "Poincaré Glove: Hyperbolic Word Embeddings", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Abraham Albert Ungar" ], "title": "A Gyrovector Space Approach to Hyperbolic Geometry", "venue": "Synthesis Lectures on Mathematics and Statistics,", "year": 2008 }, { "authors": [ "Benjamin Wilson", "Matthias Leimeister" ], "title": "Gradient descent in hyperbolic space", "venue": "arXiv preprint arXiv:1805.08207,", "year": 2018 }, { "authors": [ "Richard C. Wilson", "Edwin R. Hancock" ], "title": "Spherical embedding and classification", "venue": null, "year": 2010 }, { "authors": [ "Jiacheng Xu", "Greg Durrett" ], "title": "Spherical Latent Spaces for Stable Variational Autoencoders", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Jun-Yan Zhu", "Philipp Krähenbühl", "Eli Shechtman", "Alexei A. Efros" ], "title": "Generative Visual Manipulation on the Natural Image Manifold", "venue": "In Proceedings of European Conference on Computer Vision (ECCV),", "year": 2016 }, { "authors": [ "Kochurov" ], "title": "2018a) have also derived and implemented the parallel transport operation for the Poincaré ball: PTx→y(v", "venue": null, "year": 2018 }, { "authors": [ "Rezende" ], "title": "pulling back the metric onto the specific sub-manifold. Unfortunately, we found the formalism not useful for our application, apart from providing a very interesting theoretical connection among the models. Concurrent VAE approaches The variational autoencoder was originally proposed in Kingma ", "venue": null, "year": 2014 }, { "authors": [ "Mathieu" ], "title": "non-maximum entropy probability distribution called the Wrapped Normal", "venue": null, "year": 2019 }, { "authors": [ "Gu" ], "title": "2019) and unifying the different approaches into a single framework for all spaces of constant curvature. D EXTENDED FUTURE WORK Even though we have shown that one can approximate the true posterior very well with Normallike distributions in Riemannian manifolds of constant curvature, there remain several promising", "venue": null, "year": 2019 }, { "authors": [ "Huang" ], "title": "Using normalizing flows, one should be able to achieve the desired level of complexity of the latent distribution in a VAE, which should, similarly to our work, help to approximate the true posterior of the data better. The advantage of normalizing flows is the flexibility of the modeled distributions, at the expense of being more computationally expensive", "venue": null, "year": 2018 }, { "authors": [ "Kim" ], "title": "Finally, another interesting extension would be to extend the defined geometrical models to allow for training generative adversarial networks (GANs) (Goodfellow et al., 2014) in products of constant curvature spaces and benefit from the better sharpness and quality of samples that GANs provide. Finally, one could synthesize the above to achieve adversarially trained autoencoders in Riemannian manifolds similarly to Pan et al", "venue": "Makhzani et al", "year": 2015 } ]
[ { "heading": null, "text": "Euclidean geometry has historically been the typical “workhorse” for machine learning applications due to its power and simplicity. However, it has recently been shown that geometric spaces with constant non-zero curvature improve representations and performance on a variety of data types and downstream tasks. Consequently, generative models like Variational Autoencoders (VAEs) have been successfully generalized to elliptical and hyperbolic latent spaces. While these approaches work well on data with particular kinds of biases e.g. tree-like data for a hyperbolic VAE, there exists no generic approach unifying and leveraging all three models. We develop a Mixed-curvature Variational Autoencoder, an efficient way to train a VAE whose latent space is a product of constant curvature Riemannian manifolds, where the per-component curvature is fixed or learnable. This generalizes the Euclidean VAE to curved latent spaces and recovers it when curvatures of all latent space components go to 0." }, { "heading": "1 INTRODUCTION", "text": "Generative models, a growing area of unsupervised learning, aim to model the data distribution p(x) over data points x from a space X , which is usually a high-dimensional Euclidean space Rn. This has desirable benefits like a naturally definable inner-product, vector addition, or a closedform distance function. Yet, many types of data have a strongly non-Euclidean latent structure (Bronstein et al., 2017), e.g. the set of human-interpretable images. They are usually thought to live on a “natural image manifold” (Zhu et al., 2016), a continuous lower-dimensional subset of the space in which they are represented. By moving along the manifold, one can continuously change the content and appearance of interpretable images. As noted in Nickel & Kiela (2017), changing the geometry of the underlying latent space enables better representations of specific data types compared to Euclidean spaces of any dimensions, e.g. tree structures and scale-free networks.\nMotivated by these observations, a range of recent methods learn representations in different spaces of constant curvatures: spherical or elliptical (Batmanghelich et al., 2016), hyperbolic (Nickel & Kiela, 2017; Sala et al., 2018; Tifrea et al., 2019) and even in products of these spaces (Gu et al., 2019; Bachmann et al., 2020). Using a combination of different constant curvature spaces, Gu et al. (2019) aim to match the underlying geometry of the data better. However, an open question remains: how to choose the dimensionality and curvatures of each of the partial spaces?\nA popular approach to generative modeling is the Variational Autoencoder (Kingma & Welling, 2014). VAEs provide a way to sidestep the intractability of marginalizing a joint probability model of the input and latent space p(x, z), while allowing for a prior p(z) on the latent space. Recently, variants of the VAE have been introduced for spherical (Davidson et al., 2018; Xu & Durrett, 2018) and hyperbolic (Mathieu et al., 2019; Nagano et al., 2019) latent spaces.\nOur approach, the Mixed-curvature Variational Autoencoder, is a generalization of VAEs to products of constant curvature spaces.1 It has the advantage of a better reduction in dimensionality, while maintaining efficient optimization. 
The resulting latent space is a “non-constantly” curved manifold that is more flexible than a single constant curvature manifold.

¹Code is available on GitHub at https://github.com/oskopek/mvae.

Our contributions are the following: (i) we develop a principled framework for manipulating representations and modeling probability distributions in products of constant curvature spaces that smoothly transitions across curvatures of different signs, (ii) we generalize Variational Autoencoders to learn latent representations on products of constant curvature spaces with generalized Gaussian-like priors, and (iii) empirically, our models outperform current benchmarks on a synthetic tree dataset (Mathieu et al., 2019) and on image reconstruction on the MNIST (LeCun, 1998), Omniglot (Lake et al., 2015), and CIFAR (Krizhevsky, 2009) datasets for some latent space dimensions.

2 GEOMETRY AND PROBABILITY IN RIEMANNIAN MANIFOLDS

To define constantly curved spaces, we first need the notion of sectional curvature $K(\tau_x)$ of two linearly independent vectors in the tangent space at a point $x \in \mathcal{M}$ spanning a two-dimensional plane $\tau_x$ (Berger, 2012). Since we deal with constant curvature spaces, where all sectional curvatures are equal, we denote a manifold's curvature as $K$. Instead of the curvature $K$, we sometimes use the generalized notion of a radius: $R = 1/\sqrt{|K|}$.

There are three different types of manifolds $\mathcal{M}$ we can define with respect to the sign of the curvature: a positively curved space, a “flat” space, and a negatively curved space. Common realizations of those manifolds are the hypersphere $\mathbb{S}_K$, the Euclidean space $\mathbb{E}$, and the hyperboloid $\mathbb{H}_K$:

$\mathcal{M} = \mathbb{S}^n_K = \{x \in \mathbb{R}^{n+1} : \langle x, x \rangle_2 = 1/K\}$ for $K > 0$,
$\mathcal{M} = \mathbb{E}^n = \mathbb{R}^n$ for $K = 0$,
$\mathcal{M} = \mathbb{H}^n_K = \{x \in \mathbb{R}^{n+1} : \langle x, x \rangle_{\mathcal{L}} = 1/K\}$ for $K < 0$,

where $\langle \cdot, \cdot \rangle_2$ is the standard Euclidean inner product, and $\langle \cdot, \cdot \rangle_{\mathcal{L}}$ is the Lorentz inner product,

$\langle x, y \rangle_{\mathcal{L}} = -x_1 y_1 + \sum_{i=2}^{n+1} x_i y_i \quad \forall x, y \in \mathbb{R}^{n+1}.$

We will need to define the exponential map, logarithmic map, and parallel transport in all spaces we consider. The exponential map in Euclidean space is defined as $\exp_x(v) = x + v$ for all $x \in \mathbb{E}^n$ and $v \in T_x\mathbb{E}^n$. Its inverse, the logarithmic map, is $\log_x(y) = y - x$ for all $x, y \in \mathbb{E}^n$. Parallel transport in Euclidean space is simply the identity, $\mathrm{PT}_{x \to y}(v) = v$, for all $x, y \in \mathbb{E}^n$ and $v \in T_x\mathbb{E}^n$. An overview of these operations in the hyperboloid $\mathbb{H}^n_K$ and the hypersphere $\mathbb{S}^n_K$ can be found in Table 1. For more details, refer to Petersen et al. (2006), Cannon et al. (1997), or Appendix A.

2.1 STEREOGRAPHICALLY PROJECTED SPACES

The above spaces are enough to cover any possible value of the curvature, and they define all the necessary operations we will need to train VAEs in them. However, both the hypersphere and the hyperboloid have an unsuitable property, namely the non-convergence of the norm of points as the curvature goes to 0. Both spaces grow as $K \to 0$ and become locally “flatter”, but to do that, their points have to move away from the origin of the coordinate space $\mathbf{0}$ to keep satisfying the manifold's definition. A good example of a point that diverges is the origin of the hyperboloid (or a pole of the hypersphere) $\mu_0 = (1/\sqrt{|K|}, 0, \ldots, 0)^T$. In general, we can see that $\|\mu_0\|_2 = 1/\sqrt{|K|} \to \infty$ as $K \to 0$. Additionally, the distance and the metric tensors of these spaces do not converge to their Euclidean variants as $K \to 0$, hence the spaces themselves do not converge to $\mathbb{R}^d$.
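To make the constraints and the divergence just described concrete, here is a minimal NumPy sketch (the function names are ours, not from the paper's codebase) of the curvature-dependent inner product, the membership condition $\langle x, x \rangle_K = 1/K$, and the growth of $\|\mu_0\|_2$ as $K \to 0$:

```python
import numpy as np

def lorentz_inner(x, y):
    # <x, y>_L = -x_1 * y_1 + sum_{i>=2} x_i * y_i
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def curv_inner(x, y, K):
    # Euclidean inner product on the hypersphere (K > 0), Lorentz on the hyperboloid (K < 0).
    return np.dot(x, y) if K > 0 else lorentz_inner(x, y)

def mu0(K, n):
    # Origin of S^n_K or H^n_K: (1/sqrt(|K|), 0, ..., 0)^T.
    return np.concatenate(([1.0 / np.sqrt(abs(K))], np.zeros(n)))

for K in (1.0, -1.0, 0.01, -0.01):
    x = mu0(K, 3)
    assert np.isclose(curv_inner(x, x, K), 1.0 / K)  # <x, x>_K = 1/K holds
    print(f"K={K:+.2f}  ||mu0||_2 = {np.linalg.norm(x):.1f}")  # grows as K -> 0
```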
This makes both of these spaces unsuitable for trying to learn sign-agnostic curvatures.

Luckily, there exist well-defined non-Euclidean spaces that inherit most properties from the hyperboloid and the hypersphere, yet do not suffer from these problems – namely, the Poincaré ball and the projected sphere, respectively. We obtain them by applying stereographic projections, which are conformal, meaning that angles are preserved by the transformation. Since the distance functions on the hyperboloid and hypersphere only depend on the radius and the angles between points, the projections are isometric.

We first need to define the projection function $\rho_K$. For a point $(\xi; x^T)^T \in \mathbb{R}^{n+1}$ with $\xi \in \mathbb{R}$, $x, y \in \mathbb{R}^n$, and curvature $K \in \mathbb{R}$:

$\rho_K((\xi; x^T)^T) = \frac{x}{1 + \sqrt{|K|}\,\xi}, \qquad \rho_K^{-1}(y) = \left( \frac{1}{\sqrt{|K|}} \frac{1 - K\|y\|_2^2}{1 + K\|y\|_2^2};\ \frac{2 y^T}{1 + K\|y\|_2^2} \right)^T.$

The formulas correspond to the classical stereographic projections defined for these models (Lee, 1997). Note that both of these projections map the point $\mu_0 = (1/\sqrt{|K|}, 0, \ldots, 0)^T$ in the original space to $\mu_0 = \mathbf{0}$ in the projected space, and back.

Since the stereographic projection is conformal, the metric tensors of both projected spaces are conformal. In this case, the metric tensors of both spaces are the same, except for the sign of $K$: $g^{\mathbb{D}_K}_x = g^{\mathbb{P}_K}_x = (\lambda^K_x)^2 g^{\mathbb{E}}$ for all $x$ in the respective manifold (Ganea et al., 2018a), and $g^{\mathbb{E}}_y = I$ for all $y \in \mathbb{E}$. The conformal factor $\lambda^K_x$ is defined as $\lambda^K_x = 2/(1 + K\|x\|_2^2)$. Among other things, this form of the metric tensor has the consequence that we unfortunately cannot define a single unified inner product in all tangent spaces at all points. The inner product at $x \in \mathcal{M}$ has the form $\langle u, v \rangle_x = (\lambda^K_x)^2 \langle u, v \rangle_2$ for all $u, v \in T_x\mathcal{M}$.

We can now define the two models corresponding to $K > 0$ and $K < 0$. The curvature of the projected manifold is the same as that of the original manifold. An $n$-dimensional projected hypersphere ($K > 0$) is defined as the set $\mathbb{D}^n_K = \rho_K(\mathbb{S}^n_K \setminus \{-\mu_0\}) = \mathbb{R}^n$, where $\mu_0 = (1/\sqrt{|K|}, 0, \ldots, 0)^T \in \mathbb{S}^n_K$, along with the induced distance function. The $n$-dimensional Poincaré ball $\mathbb{P}^n_K$ (also called the Poincaré disk when $n = 2$) for a given curvature $K < 0$ is defined as $\mathbb{P}^n_K = \rho_K(\mathbb{H}^n_K) = \{ x \in \mathbb{R}^n : \langle x, x \rangle_2 < -\frac{1}{K} \}$, with the induced distance function.

2.2 GYROVECTOR SPACES

An important analogy to vector spaces (vector addition and scalar multiplication) in non-Euclidean geometry is the notion of gyrovector spaces (Ungar, 2008). Both of the above spaces $\mathbb{D}_K$ and $\mathbb{P}_K$ (jointly denoted as $\mathcal{M}_K$) share the same structure, hence they also share the following definition of addition. The Möbius addition $\oplus_K$ of $x, y \in \mathcal{M}_K$ (for both signs of $K$) is defined as

$x \oplus_K y = \frac{(1 - 2K\langle x, y \rangle_2 - K\|y\|_2^2)\,x + (1 + K\|x\|_2^2)\,y}{1 - 2K\langle x, y \rangle_2 + K^2 \|x\|_2^2 \|y\|_2^2}.$

We can therefore define “gyrospace distances” for both of the above spaces, which are alternative curvature-aware distance functions:

$d^{\mathbb{D}}_{gyr}(x, y) = \frac{2}{\sqrt{K}} \tan^{-1}\left(\sqrt{K}\,\|-x \oplus_K y\|_2\right), \qquad d^{\mathbb{P}}_{gyr}(x, y) = \frac{2}{\sqrt{-K}} \tanh^{-1}\left(\sqrt{-K}\,\|-x \oplus_K y\|_2\right).$

These two distances are equivalent to their non-gyrospace variants, $d_{\mathcal{M}}(x, y) = d^{\mathcal{M}}_{gyr}(x, y)$, as shown in Theorem A.4 and its hypersphere equivalent. Additionally, Theorem A.5 shows that

$d^{\mathcal{M}}_{gyr}(x, y) \to 2\|x - y\|_2 \quad \text{as } K \to 0,$

which means that the distance functions converge to the Euclidean distance function (up to a constant) as $K \to 0$.
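A small sketch of the Möbius addition and the gyrospace distance (again with hypothetical helper names), which also checks the $K \to 0$ convergence numerically:

```python
import numpy as np

def mobius_add(x, y, K):
    # Möbius addition in D_K / P_K, valid for both signs of K.
    xy, x2, y2 = np.dot(x, y), np.dot(x, x), np.dot(y, y)
    num = (1 - 2 * K * xy - K * y2) * x + (1 + K * x2) * y
    den = 1 - 2 * K * xy + K ** 2 * x2 * y2
    return num / den

def dist_gyr(x, y, K):
    # tan^{-1} for K > 0, tanh^{-1} for K < 0; prefactor 2 / sqrt(|K|).
    sq = np.sqrt(abs(K))
    norm = np.linalg.norm(mobius_add(-x, y, K))
    arc = np.arctan if K > 0 else np.arctanh
    return 2.0 / sq * arc(sq * norm)

x, y = np.array([0.1, 0.2]), np.array([-0.3, 0.4])
for K in (1e-1, 1e-3, -1e-3, -1e-1):
    print(K, dist_gyr(x, y, K))       # approaches 2 * ||x - y||_2 as K -> 0
print(2 * np.linalg.norm(x - y))
```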
We can notice that most statements and operations in constant curvature spaces have a dual statement or operation in the corresponding space of the opposite curvature sign. This notion of duality comes up very often, and in our case is based on Euler's formula $e^{ix} = \cos(x) + i\sin(x)$ and the notion of principal square roots $\sqrt{-K} = i\sqrt{K}$. This provides a connection between trigonometric, hyperbolic trigonometric, and exponential functions. Thus, we can convert all the hyperbolic formulas above to their spherical equivalents and vice versa.

Since Ganea et al. (2018a) and Tifrea et al. (2019) used the same gyrovector spaces to define an exponential map, its inverse logarithmic map, and parallel transport in the Poincaré ball, we can reuse them for the projected hypersphere by applying the transformations above, as they share the same formalism. For parallel transport, we additionally need the notion of gyration (Ungar, 2008):

$\mathrm{gyr}[x, y]v = -(x \oplus_K y) \oplus_K \left(x \oplus_K (y \oplus_K v)\right).$

Parallel transport in both the projected hypersphere and the Poincaré ball is then $\mathrm{PT}^K_{x \to y}(v) = (\lambda^K_x / \lambda^K_y)\,\mathrm{gyr}[y, -x]v$, for all $x, y \in \mathcal{M}^n_K$ and $v \in T_x\mathcal{M}^n_K$. Using a curvature-aware definition of scalar products ($\langle x, y \rangle_K = \langle x, y \rangle_2$ if $K \geq 0$, $\langle x, y \rangle_{\mathcal{L}}$ if $K < 0$) and of trigonometric functions

$\sin_K = \begin{cases} \sin & K > 0 \\ \sinh & K < 0 \end{cases} \qquad \cos_K = \begin{cases} \cos & K > 0 \\ \cosh & K < 0 \end{cases} \qquad \tan_K = \begin{cases} \tan & K > 0 \\ \tanh & K < 0 \end{cases}$

we can summarize all the necessary operations in all manifolds compactly in Table 1 and Table 2.

Table 1: Summary of operations in $\mathbb{S}_K$ and $\mathbb{H}_K$.

Distance: $d(x, y) = \frac{1}{\sqrt{|K|}} \cos_K^{-1}\left(K \langle x, y \rangle_K\right)$
Exponential map: $\exp^K_x(v) = \cos_K\left(\sqrt{|K|}\,\|v\|_K\right) x + \sin_K\left(\sqrt{|K|}\,\|v\|_K\right) \frac{v}{\sqrt{|K|}\,\|v\|_K}$
Logarithmic map: $\log^K_x(y) = \frac{\cos_K^{-1}(K \langle x, y \rangle_K)}{\sin_K\left(\cos_K^{-1}(K \langle x, y \rangle_K)\right)} \left(y - K \langle x, y \rangle_K\, x\right)$
Parallel transport: $\mathrm{PT}^K_{x \to y}(v) = v - \frac{K \langle y, v \rangle_K}{1 + K \langle x, y \rangle_K} (x + y)$

Table 2: Summary of operations in the projected spaces $\mathbb{D}_K$ and $\mathbb{P}_K$.

Distance: $d(x, y) = \frac{1}{\sqrt{|K|}} \cos_K^{-1}\left(1 - \frac{2K \|x - y\|_2^2}{(1 + K\|x\|_2^2)(1 + K\|y\|_2^2)}\right)$
Gyrospace distance: $d_{gyr}(x, y) = \frac{2}{\sqrt{|K|}} \tan_K^{-1}\left(\sqrt{|K|}\,\|-x \oplus_K y\|_2\right)$
Exponential map: $\exp^K_x(v) = x \oplus_K \left( \tan_K\left(\sqrt{|K|}\,\frac{\lambda^K_x \|v\|_2}{2}\right) \frac{v}{\sqrt{|K|}\,\|v\|_2} \right)$
Logarithmic map: $\log^K_x(y) = \frac{2}{\sqrt{|K|}\,\lambda^K_x} \tan_K^{-1}\left(\sqrt{|K|}\,\|-x \oplus_K y\|_2\right) \frac{-x \oplus_K y}{\|-x \oplus_K y\|_2}$
Parallel transport: $\mathrm{PT}^K_{x \to y}(v) = \frac{\lambda^K_x}{\lambda^K_y}\,\mathrm{gyr}[y, -x]v$

2.3 PRODUCTS OF SPACES

Previously, our space consisted of only one manifold of varying dimensionality and fixed curvature. Like Gu et al. (2019), we propose learning latent representations in products of constant curvature spaces, contrary to existing VAE approaches, which are limited to a single Riemannian manifold.

Our latent space $\mathcal{M}'$ consists of several component spaces $\mathcal{M}' = \bigtimes_{i=1}^k \mathcal{M}^{n_i}_{K_i}$, where $n_i$ is the dimensionality of the $i$-th component, $K_i$ is its curvature, and $\mathcal{M} \in \{\mathbb{E}, \mathbb{S}, \mathbb{D}, \mathbb{H}, \mathbb{P}\}$ is the model choice. Even though all components have constant curvature, the resulting manifold $\mathcal{M}'$ has non-constant curvature. Its distance function decomposes as

$d^2_{\mathcal{M}'}(x, y) = \sum_{i=1}^k d^2_{\mathcal{M}^{n_i}_{K_i}}\left(x^{(i)}, y^{(i)}\right),$

where $x^{(i)}$ represents the part of the latent space representation of $x$ belonging to $\mathcal{M}^{n_i}_{K_i}$ (a code sketch of this decomposition follows below). All other operations we defined on our manifolds are element-wise. Therefore, we again decompose the representations into parts $x^{(i)}$, apply the operation on each part, $\tilde{x}^{(i)} = f^{(n_i)}_{K_i}(x^{(i)})$, and concatenate the resulting parts back together: $\tilde{x} = \bigodot_{i=1}^k \tilde{x}^{(i)}$.

The signature of the product space, i.e. its parametrization, has several degrees of freedom per component: (i) the model $\mathcal{M}$, (ii) the dimensionality $n_i$, and (iii) the curvature $K_i$. We need to select all of the above for every component in our product space. To simplify notation, we use a shorthand for repeated components: $(\mathcal{M}^{n_i}_{K_i})^j = \bigtimes_{l=1}^j \mathcal{M}^{n_i}_{K_i}$.
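As an illustration, a hedged sketch of how the product-space distance decomposes over components (the component tuple format is our own hypothetical interface; `dist_gyr` is the sketch from Section 2.2):

```python
import numpy as np

# Each component is (dimension n_i, curvature K_i, component distance function);
# Euclidean components use the plain L2 distance.
dist_euclidean = lambda x, y, K: np.linalg.norm(x - y)

def product_distance(x, y, components):
    # d^2_{M'}(x, y) = sum_i d^2_{M_i}(x^(i), y^(i))
    total, offset = 0.0, 0
    for n_i, K_i, dist in components:
        x_i, y_i = x[offset:offset + n_i], y[offset:offset + n_i]
        total += dist(x_i, y_i, K_i) ** 2
        offset += n_i
    return np.sqrt(total)

# Example signature E^2 x P^2_{-1} x D^2_{+1} on a 6-dimensional latent vector:
components = [(2, 0.0, dist_euclidean), (2, -1.0, dist_gyr), (2, 1.0, dist_gyr)]
x, y = np.full(6, 0.1), np.full(6, -0.2)
print(product_distance(x, y, components))
```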
Note that in Euclidean spaces, the shorthand notation is redundant: for $n_1, \ldots, n_k \in \mathbb{Z}$ such that $\sum_{i=1}^k n_i = n$, the Cartesian product of Euclidean spaces satisfies $\mathbb{E}^n = \bigtimes_{i=1}^k \mathbb{E}^{n_i}$. However, the equality does not hold for the other considered manifolds, due to the additional constraints posed on the points in the definitions of the individual models of curved spaces.

2.4 PROBABILITY DISTRIBUTIONS ON RIEMANNIAN MANIFOLDS

To be able to train Variational Autoencoders, we need to choose a probability distribution $p$ as a prior and a corresponding posterior distribution family $q$. Both of these distributions have to be differentiable with respect to their parametrization, they need to have a differentiable Kullback-Leibler (KL) divergence, and they need to be “reparametrizable” (Kingma & Welling, 2014). For distributions where the KL does not have a closed-form solution independent of $z$, or where this integral is too hard to compute, we can estimate it using Monte Carlo estimation:

$D_{KL}(q \,\|\, p) \approx \frac{1}{L} \sum_{l=1}^L \log\frac{q(z^{(l)})}{p(z^{(l)})} \overset{L=1}{=} \log\frac{q(z^{(1)})}{p(z^{(1)})},$

where $z^{(l)} \sim q$ for all $l = 1, \ldots, L$ (a code sketch of this estimator appears at the end of this subsection).

The Euclidean VAE uses a natural choice of prior on its latent representations – the Gaussian distribution (Kingma & Welling, 2014). Apart from satisfying the requirements for a VAE prior and posterior distribution, the Gaussian distribution has additional properties, like being the maximum entropy distribution for a given variance (Jaynes, 1957). There exist several fundamentally different approaches to generalizing the Normal distribution to Riemannian manifolds. We discuss the following three generalizations, based on the way they are constructed (Mathieu et al., 2019).

Wrapping: This approach leverages the fact that all manifolds define a tangent vector space at every point. We simply sample from a Gaussian distribution in the tangent space at $\mu_0$ with mean $\mathbf{0}$, and use parallel transport and the exponential map to map the sampled point onto the manifold. The PDF can be obtained using the multivariate chain rule if we can compute the determinant of the Jacobian of the parallel transport and the exponential map. This is very computationally effective, at the expense of losing some theoretical properties.

Restriction: The “Restricted Normal” approach is conceptually antagonal – instead of expanding a point to a dimensionally larger point, we restrict a point of the ambient space sampled from a Gaussian to the manifold. The consequence is that the distributions constructed this way are based on the “flat” Euclidean distance. An example of this is the von Mises-Fisher (vMF) distribution (Davidson et al., 2018). A downside of this approach is that vMF only has a single scalar covariance parameter $\kappa$, while other approaches can parametrize covariance in different dimensions separately.

Maximizing entropy: Assuming a known mean and covariance matrix, we want to maximize the entropy of the distribution (Pennec, 2006). This approach is usually called the Riemannian Normal distribution. Mathieu et al. (2019) derive it for the Poincaré ball, and Hauberg (2018) derives the Spherical Normal distribution on the hypersphere. Maximum entropy distributions resemble the Gaussian distribution's properties the closest, but it is usually very hard to sample from such distributions, compute their normalization constants, and even derive their specific form.
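Picking up the Monte Carlo KL estimator from the start of this subsection, a minimal sketch assuming a distribution object with `sample`/`log_prob` methods (a common, but here hypothetical, interface):

```python
def kl_monte_carlo(q, p, num_samples=1):
    # D_KL(q || p) ~= (1/L) * sum_l [log q(z_l) - log p(z_l)],  z_l ~ q
    total = 0.0
    for _ in range(num_samples):
        z = q.sample()
        total += q.log_prob(z) - p.log_prob(z)
    return total / num_samples
```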
Since the gains in VAE performance from the maximum-entropy construction over wrapping are only marginal, as reported by Mathieu et al. (2019), we have chosen to focus on Wrapped Normal distributions.

To summarize, Wrapped Normal distributions are very computationally efficient to sample from, and also efficient for computing the log-probability of a sample, as detailed by Nagano et al. (2019). The Riemannian Normal distributions (based directly on the geodesic distance in the manifold) could also be used; however, they are more computationally expensive to sample from, because the only available methods are based on rejection sampling (Mathieu et al., 2019).

2.4.1 WRAPPED NORMAL DISTRIBUTION

First of all, we need to define an “origin” point on the manifold, which we will denote as $\mu_0 \in \mathcal{M}_K$. What this point corresponds to is manifold-specific: in the hyperboloid and hypersphere it is the point $\mu_0 = (1/\sqrt{|K|}, 0, \ldots, 0)^T$, and in the Poincaré ball, projected sphere, and Euclidean space it is simply $\mu_0 = \mathbf{0}$, the origin of the coordinate system.

Sampling from the distribution $\mathcal{WN}(\mu, \Sigma)$ has been described in detail by Nagano et al. (2019) and Mathieu et al. (2019), and we have extended all the necessary operations and procedures to arbitrary curvature $K$. Sampling then corresponds to:

$v \sim \mathcal{N}(\mathbf{0}, \Sigma) \in T_{\mu_0}\mathcal{M}_K, \qquad u = \mathrm{PT}^K_{\mu_0 \to \mu}(v) \in T_\mu\mathcal{M}_K, \qquad z = \exp^K_\mu(u) \in \mathcal{M}_K.$

The log-probability of samples can be computed by the reverse procedure:

$u = \log^K_\mu(z) \in T_\mu\mathcal{M}_K, \qquad v = \mathrm{PT}^K_{\mu \to \mu_0}(u) \in T_{\mu_0}\mathcal{M}_K,$

$\log \mathcal{WN}(z; \mu, \Sigma) = \log \mathcal{N}(v; \mathbf{0}, \Sigma) - \log\det\left(\frac{\partial f}{\partial v}\right),$

where $f = \exp^K_\mu \circ\, \mathrm{PT}^K_{\mu_0 \to \mu}$. The distribution can be applied to all manifolds that we have introduced; the only differences are the specific forms of the operations and of the log-determinant in the PDF. The specific forms of the log-PDF for the four spaces $\mathbb{H}$, $\mathbb{S}$, $\mathbb{D}$, and $\mathbb{P}$ are derived in Appendix B. All variants of this distribution are reparametrizable and differentiable, and the KL can be computed using Monte Carlo estimation. As a consequence of the distance-function and operation convergence theorems for the Poincaré ball, A.5 (and analogously for the projected hypersphere), A.17, A.18, and A.20, the Wrapped Normal distribution converges to the Gaussian distribution as $K \to 0$.
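Below is a condensed sketch of Wrapped Normal sampling and log-density for the hyperboloid $\mathbb{H}^n_K$. All helper names (`exp_map`, `log_map`, `transport`, `mu0`, `lorentz_norm`, `gaussian_log_pdf`) are assumed to implement the operations of Table 1 and are not taken from the paper's codebase; the change-of-volume term below is our assumed curvature-generalization of the $K = -1$ determinant derived by Nagano et al. (2019), with the exact forms living in Appendix B of the paper:

```python
import numpy as np

def sample_wrapped_normal(mu, sigma, K, rng=np.random.default_rng()):
    n = mu.shape[0] - 1                       # manifold dimension
    v_tilde = rng.normal(0.0, sigma, size=n)  # v ~ N(0, Sigma) in T_{mu0} M_K
    v = np.concatenate(([0.0], v_tilde))      # first coordinate is 0 at mu0
    u = transport(mu0(K, n), mu, v, K)        # u = PT_{mu0 -> mu}(v)
    return exp_map(mu, u, K)                  # z = exp^K_mu(u)

def log_prob_wrapped_normal(z, mu, sigma, K):
    n = mu.shape[0] - 1
    u = log_map(mu, z, K)                        # u = log^K_mu(z)
    v = transport(mu, mu0(K, n), u, K)           # v = PT_{mu -> mu0}(u)
    log_normal = gaussian_log_pdf(v[1:], sigma)  # log N(v; 0, Sigma)
    r = np.sqrt(abs(K)) * lorentz_norm(u)
    # Assumed generalization of the log-det term of Nagano et al. (2019):
    log_det = (n - 1) * np.log(np.sinh(r) / r)
    return log_normal - log_det
```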
3 VARIATIONAL AUTOENCODERS IN PRODUCT SPACES

To be able to learn latent representations in Riemannian manifolds instead of in $\mathbb{R}^d$ as above, we only need to change the parametrization of the mean and covariance in the VAE forward pass, and the choice of prior and posterior distributions. The prior and posterior have to be chosen depending on the chosen manifold, and are essentially treated as hyperparameters of our VAE. Since we have defined the Wrapped Normal family of distributions for all spaces, we can use $\mathcal{WN}(\mu, \sigma^2 I)$ as the posterior family and $\mathcal{WN}(\mu_0, I)$ as the prior distribution. The forms of the distributions depend on the chosen space type. The mean is parametrized using the exponential map $\exp^K_{\mu_0}$ as an activation function. Hence, all the model's parameters live in Euclidean space and can be optimized directly.

In experiments, we sometimes use $\mathrm{vMF}(\mu, \kappa)$ for the hypersphere $\mathbb{S}^n_K$ (or a backprojected variant of vMF for $\mathbb{D}^n_K$) with the associated hyperspherical uniform distribution $\mathcal{U}(\mathbb{S}^n_K)$ as a prior (Davidson et al., 2018), or the Riemannian Normal distribution $\mathcal{RN}(\mu, \sigma^2)$ and the associated prior $\mathcal{RN}(\mu_0, 1)$ for the Poincaré ball $\mathbb{P}^n_K$ (Mathieu et al., 2019).

3.1 LEARNING CURVATURE

We have already seen approaches to learning VAEs in products of spaces of constant curvature. However, we can also change the curvature constant of each of the spaces during training. The individual spaces still have constant curvature at each point; we just allow changing the constant between training steps. To differentiate between these training procedures, we call them fixed curvature and learnable curvature VAEs, respectively.

The motivation behind changing the curvature of non-Euclidean constant curvature spaces might not be clear, since it is apparent from the definition of the distance function in the hypersphere and hyperboloid, $d(x, y) = R \cdot \theta_{x,y}$, that the distances between two points that stay at the same angle only get rescaled when changing the radius of the space. The same applies to the Poincaré ball and the projected spherical space. However, the decoder does not only depend on pairwise distances, but rather on the specific positions of points in the space. It can be conjectured that the KL term of the ELBO is indeed only “rescaled” when we change the curvature; however, the reconstruction process is influenced in non-trivial ways. Since that is hard to quantify and prove, we devise a series of practical experiments to show that overall model performance is enhanced when learning curvature.

Fixed curvature VAEs: In fixed curvature VAEs, all component latent spaces have a fixed curvature that is selected a priori and kept fixed for the whole duration of the training procedure, as well as during evaluation. For Euclidean components it is 0; for positively or negatively curved spaces, any positive or negative number can be chosen, respectively. For stability reasons, we select curvature values whose absolute value lies in the range $[0.25, 1.0]$, which corresponds to radii in $[1.0, 2.0]$. The exact curvature value does not have a significant impact on performance when training a fixed curvature VAE, as motivated by the distance rescaling remark above. In the following, we refer to fixed curvature components with a constant subscript, e.g. $\mathbb{H}^n_{-1}$.

Learnable curvature VAEs: In all our manifolds, we can differentiate the ELBO with respect to the curvature $K$. This enables us to treat $K$ as a parameter of the model and learn it using gradient-based optimization, exactly like we learn the encoder/decoder maps in a VAE. Learning curvature directly is badly conditioned – we are trying to learn one scalar parameter that influences the resulting decoder, and hence the ELBO, quite heavily. Empirically, we have found that Stochastic Gradient Descent works well to optimize the radius of a component. We constrain the radius to be strictly positive in all non-Euclidean spaces by applying a ReLU activation function before we use it in operations. A sketch of this parametrization follows below.
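A minimal PyTorch-style sketch of the per-component radius parametrization (names are hypothetical; the actual implementation in the linked repository may differ):

```python
import torch
import torch.nn as nn

class ComponentCurvature(nn.Module):
    """Learnable radius R > 0 of one latent component; K = sign / R^2."""

    def __init__(self, init_radius=1.0, positive=False):
        super().__init__()
        self.raw_radius = nn.Parameter(torch.tensor(float(init_radius)))
        self.sign = 1.0 if positive else -1.0

    def curvature(self):
        radius = torch.relu(self.raw_radius) + 1e-7  # keep R strictly positive
        return self.sign / radius ** 2               # K = +-1/R^2

# Optimized separately from the encoder/decoder maps, with plain SGD:
comp = ComponentCurvature(init_radius=2.0, positive=False)
curv_opt = torch.optim.SGD(comp.parameters(), lr=1e-4)
```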
Universal curvature VAEs: However, we must still a priori select the “partitioning” of our latent space – the number of components, and for each of them the dimension and at least the sign of the curvature (signature estimation). The simplest approach would be to just try all possibilities and compare the results on a specific dataset. This procedure would most likely be optimal, but it does not scale well.

To eliminate this, we propose an approximate method – we partition our space into 2-dimensional components (if the number of dimensions is odd, one component has 3 dimensions). We initialize all of them as Euclidean components and train for half the number of maximal epochs we are allowed. Then, we split the components into 3 approximately equal-sized groups and make one group into hyperbolic components and one into spherical components, while the last remains Euclidean. We do this by changing the curvature of a component by a very small $\varepsilon$. We then train just the encoder/decoder maps for a few epochs to stabilize the representations after changing the curvatures. Finally, we allow learning the curvatures of all non-Euclidean components and train for the rest of the allowed epochs. The method is not completely general, as it never uses components bigger than dimension 2 (or 3), but the approximation has empirically performed satisfactorily.

We also do not constrain the curvature of the components to a specific sign in the last stage of training. Therefore, components may change their type of space from a positively curved to a negatively curved one, or vice versa. Because of the divergence of points as $K \to 0$ in the hyperboloid and hypersphere, the universal curvature VAE assumes the positively curved space is $\mathbb{D}$ and the negatively curved space is $\mathbb{P}$. In all experiments, this universal approach is denoted as $\mathbb{U}^n$.

4 EXPERIMENTS

For our experiments, we use four datasets: (i) Branching diffusion process (Mathieu et al., 2019, BDP) – a synthetic tree-like dataset with injected noise; (ii) dynamically-binarized MNIST digits (LeCun, 1998) – we binarize the images similarly to Burda et al. (2016) and Salakhutdinov & Murray (2008): training samples are binarized dynamically with a uniformly sampled per-sample threshold $t \sim U[0, 1]$, i.e. $\mathrm{bin}(\mathbf{x}) = [\mathbf{x} > t] \in \{0, 1\}^D$ for $\mathbf{x} \in [0, 1]^D$, while the evaluation set uses a fixed binarization ($\mathbf{x} > 0.5$); (iii) dynamically-binarized Omniglot characters (Lake et al., 2015), downsampled to 28×28 pixels; and (iv) CIFAR-10 (Krizhevsky, 2009). All models on all datasets are trained with early stopping on the training ELBO, with a lookahead of 50 epochs and a warmup of 100 epochs (Bowman et al., 2016). All BDP models are trained for 1000 epochs; MNIST and Omniglot models are trained for 300 epochs, and CIFAR for 200 epochs. We compare models with a given latent space dimension using the marginal log-likelihood estimated with importance sampling (Burda et al., 2016) with 500 samples, except for CIFAR, which uses 50 due to memory constraints. In all tables, we denote it as LL. We run all experiments at least 3 times to get an estimate of the variance under different initial values.

In all the BDP, MNIST, and Omniglot experiments below, we use a simple feed-forward encoder and decoder architecture consisting of a single dense layer with 400 neurons and an element-wise ReLU activation. Since all the VAE parameters $\{\theta, \phi\}$ live in Euclidean manifolds, we can use standard gradient-based optimization methods. Specifically, we use the Adam (Kingma & Ba, 2015) optimizer with a learning rate of $10^{-3}$ and standard settings $\beta_1 = 0.9$, $\beta_2 = 0.999$, and $\varepsilon = 10^{-8}$.

For the CIFAR encoder map, we use a simple convolutional neural network with three convolutional layers of 64, 128, and 512 channels, respectively. For the decoder map, we first use a dense layer of dimension 2048, and then three consecutive transposed convolutional layers with 256, 64, and 3 channels. All layers are followed by a ReLU activation function, except for the last one. All convolutions have 4×4 kernels with stride 2 and padding of size 1.
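As an illustration of the feed-forward setup described above, a hedged PyTorch sketch (our reconstruction under stated assumptions, not the authors' exact code; `exp_map_mu0` is a placeholder for the component-wise exponential map used as the mean activation):

```python
import torch
import torch.nn as nn

class FeedForwardVAE(nn.Module):
    # Single dense hidden layer of 400 units for both maps, as in the text.
    def __init__(self, data_dim=784, latent_dim=6):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(data_dim, 400), nn.ReLU())
        self.fc_mu = nn.Linear(400, latent_dim)      # tangent vector at mu0
        self.fc_logvar = nn.Linear(400, latent_dim)  # log sigma^2
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 400), nn.ReLU(), nn.Linear(400, data_dim))

    def encode(self, x, exp_map_mu0):
        h = self.encoder(x)
        mu = exp_map_mu0(self.fc_mu(h))             # mean via exp^K_{mu0}
        sigma = torch.exp(0.5 * self.fc_logvar(h))
        return mu, sigma                            # posterior WN(mu, sigma^2 I)

    def decode(self, z):
        return self.decoder(z)
```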
The first 10 epochs of all models are trained with a fixed curvature starting at 0 and increasing in absolute value each epoch. This corresponds to a burn-in period, similarly to Nickel & Kiela (2017). For learnable curvature approaches, we then use Stochastic Gradient Descent with a learning rate of $10^{-4}$ and let the optimizer adjust the value freely; for fixed curvature approaches, it stays at the last burn-in value. All our models use the Wrapped Normal distribution, or equivalently the Gaussian in Euclidean components, unless specified otherwise. All fixed curvature components are denoted with an $\mathcal{M}_1$ or $\mathcal{M}_{-1}$ subscript; learnable curvature components do not have a subscript. The observation models for the reconstruction loss term were Bernoulli distributions for MNIST and Omniglot, and standard Gaussian distributions for BDP and CIFAR.

As baselines, we train VAEs with spaces that have a fixed constant curvature, i.e. that assume a single Riemannian manifold (potentially a product of them) as their latent space. Our models with a single component, like $\mathbb{S}^n_1$, correspond to Davidson et al. (2018) and Xu & Durrett (2018); $\mathbb{H}^n_{-1}$ is equivalent to the Hyperbolic VAE of Nagano et al. (2019); $\mathbb{P}^n_{-c}$ corresponds to the $\mathcal{P}_c$-VAE of Mathieu et al. (2019); and $\mathbb{E}^n$ is equivalent to the Euclidean VAE. In the following, we present a selection of all the obtained results; for more information, see Appendix E. Bold numbers represent values that are particularly interesting. Since the Riemannian Normal and the von Mises-Fisher distributions only have a spherical covariance matrix, i.e. a single scalar variance parameter per component, we evaluate all our approaches with a spherical covariance parametrization as well.

Branching diffusion process: For the BDP dataset and latent dimension 6 (Table 3), we observe that all VAEs that only use the von Mises-Fisher distribution perform worse than the Wrapped Normal. However, when a vMF spherical component was paired with other component types, it performed better than if a Wrapped Normal spherical component was used instead. Riemannian Normal VAEs did very well on their own – the fixed Poincaré VAE $(\mathcal{RN}\ \mathbb{P}^2_{-1})^3$ obtains the best score. It did not fare as well when we tried to learn curvature with it, however.

An interesting observation is that all single-component VAEs $\mathcal{M}^6$ performed worse than product VAEs $(\mathcal{M}^2)^3$ when curvature was learned, across all component types. Our universal curvature VAE $(\mathbb{U}^2)^3$ managed to get better results than all other approaches except for the Riemannian Normal baseline, and it is within the margin of error of some other models. It also outperformed its single-component variant $\mathbb{U}^6$. However, we did not find that it converged to specific curvature values, only that they were in the approximate range of $(-0.1, +0.1)$.

Dynamically-binarized MNIST reconstruction: On MNIST (Table 3) with spherical covariance, we noticed that vMF again under-performed the Wrapped Normal, except when it was part of a product like $\mathbb{E}^2 \times \mathbb{H}^2 \times (\mathrm{vMF}\ \mathbb{S}^2)$. When paired with another Euclidean component and a Riemannian Normal Poincaré disk component, it performed well, but that might be because the $\mathcal{RN}\ \mathbb{P}_{-1}$ component achieved the best results across the board on MNIST. It achieved good results even compared to diagonal covariance VAEs on 6-dimensional MNIST. Several approaches are better than the Euclidean baseline – mainly the above-mentioned Riemannian Normal Poincaré ball components, but also $\mathbb{S}^6$ both with Wrapped Normal and vMF, as well as most product space VAEs with different curvatures (third section of the table).
Our $(\mathbb{U}^2)^3$ performed similarly to the Euclidean baseline VAE.

With diagonal covariance parametrization (Table 4), we observe similar trends. With a latent dimension of 6, the Riemannian Normal Poincaré ball VAE is still the best performer. The Euclidean baseline VAE achieved better results than its spherical covariance counterpart. Overall, the best result is achieved by the single-component spherical model with learnable curvature, $\mathbb{S}^6$. Interestingly, all single-component VAEs performed better than their $(\mathcal{M}^2)^3$ counterparts, except for the $\mathbb{H}^6$ hyperboloid, but only by a tiny margin. Products of different component types also achieve good results. Noteworthy is that their fixed curvature variants seem to perform marginally better than the learnable curvature ones. Our universal VAEs perform at around the Euclidean baseline VAE performance. Interestingly, all of them end up with negative curvatures $-0.3 < K < 0$. Secondly, we run our models with a latent space dimension of 12. We immediately notice that not many models beat the Euclidean VAE baseline $\mathbb{E}^{12}$ consistently, but several are within the margin of error – notably the product VAEs of $\mathbb{H}$, $\mathbb{S}$, and $\mathbb{E}$, the fixed and learnable $\mathbb{H}^{12}$, and our universal VAE $(\mathbb{U}^2)^6$. Interestingly, products of small components perform better when curvature is fixed, whereas single big-component VAEs are better when curvature is learned.

Dynamically-binarized Omniglot reconstruction: For a latent space of dimension 6 (Table 5), the best of the baseline models is the Poincaré VAE of Mathieu et al. (2019). Our models that come very close to the average estimated marginal log-likelihood, and are definitely within the margin of error, are mainly $(\mathbb{S}^2)^3$, $\mathbb{D}^2 \times \mathbb{E}^2 \times \mathbb{P}^2$, and $\mathbb{U}^6$. However, given the variance of performance across different runs, we cannot draw a clear conclusion. In general, hyperbolic VAEs seem to do a bit better on this dataset than spherical VAEs, which is also confirmed by the fact that almost all universal curvature models finished with negative curvature components.

CIFAR-10 reconstruction: For a latent space of dimension 6, we observe that almost all non-Euclidean models perform better than the Euclidean baseline $\mathbb{E}^6$. Especially well-performing are the fixed hyperboloid $\mathbb{H}^6_{-1}$ and the learnable hypersphere $\mathbb{S}^6$. Curvatures for all learnable models on this dataset converge to values in the approximate range of $(-0.15, +0.15)$.

Summary: In conclusion, a very good model seems to be the Riemannian Normal Poincaré ball VAE $\mathcal{RN}\ \mathbb{P}^n$. However, it has practical limitations due to its rejection sampling algorithm and an unstable implementation. On the contrary, von Mises-Fisher spherical VAEs have almost consistently performed worse than their Wrapped Normal equivalents. Overall, Wrapped Normal VAEs in all constant curvature manifolds seem to perform well at modeling the latent space.

A key takeaway is that our universal curvature models $\mathbb{U}^n$ and $(\mathbb{U}^2)^{\lfloor n/2 \rfloor}$ seem to generally outperform their corresponding Euclidean VAE baselines in lower-dimensional latent spaces and, with minor losses, manage to keep most of the competitive performance as the dimensionality goes up, contrary to VAEs with other non-Euclidean components.

5 CONCLUSION

By transforming the latent space and the associated prior distributions onto Riemannian manifolds of constant curvature, it has previously been shown that we can learn representations on curved space. Generalizing the above ideas, we have extended the theory of learning VAEs to products of constant curvature spaces.
To do that, we have derived the necessary operations in several models of constant curvature spaces, extended existing probability distribution families to these manifolds, and generalized VAEs to latent spaces that are products of smaller “component” spaces, with learnable curvature. On various datasets, we show that our approach is competitive, and it additionally has the property of generalizing the Euclidean variational autoencoder: if the curvatures of all components go to 0, we recover the VAE of Kingma & Welling (2014).

ACKNOWLEDGMENTS

We would like to thank Andreas Bloch for help in verifying some of the formulas for constant curvature spaces and for many insightful discussions; and Prof. Thomas Hofmann, the Data Analytics Lab, the Leonhard cluster, and ETH Zürich for GPU access. Work was done while all authors were at ETH Zürich. Ondrej Skopek (oskopek@google.com) is now at Google. Octavian-Eugen Ganea (oct@mit.edu) is now at the Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology. Gary Bécigneul (gary.becigneul@inf.ethz.ch) is funded by the Max Planck ETH Center for Learning Systems.

A GEOMETRICAL DETAILS

A.1 RIEMANNIAN GEOMETRY

An elementary notion in Riemannian geometry is that of a real, smooth manifold $\mathcal{M} \subseteq \mathbb{R}^n$, a collection of real vectors $x$ that is locally similar to a linear space and lives in the ambient space $\mathbb{R}^n$. At each point $x \in \mathcal{M}$, a real vector space of the same dimensionality as $\mathcal{M}$ is defined, called the tangent space at point $x$: $T_x\mathcal{M}$. Intuitively, the tangent space contains all the directions and speeds at which one can pass through $x$. Given a matrix representation $G(x) \in \mathbb{R}^{n \times n}$ of the Riemannian metric tensor $g(x)$, we can define a scalar product on the tangent space, $\langle \cdot, \cdot \rangle_x : T_x\mathcal{M} \times T_x\mathcal{M} \to \mathbb{R}$, where $\langle a, b \rangle_x = g(x)(a, b) = a^T G(x)\, b$ for any $a, b \in T_x\mathcal{M}$. A Riemannian manifold is then the tuple $(\mathcal{M}, g)$. The scalar product induces a norm on the tangent space $T_x\mathcal{M}$: $\|a\|_x = \sqrt{\langle a, a \rangle_x}$ for all $a \in T_x\mathcal{M}$ (Petersen et al., 2006).

Although it seems like the manifold only defines a local geometry, it induces global quantities by integrating the local contributions. The metric tensor induces a local infinitesimal volume element on each tangent space $T_x\mathcal{M}$, and hence a measure is induced as well: $d\mathcal{M}(x) = \sqrt{|G(x)|}\,dx$, where $dx$ is the Lebesgue measure. The length of a curve $\gamma : t \mapsto \gamma(t) \in \mathcal{M}$, $t \in [0, 1]$, is given by

$L(\gamma) = \int_0^1 \sqrt{\left\langle \tfrac{d}{dt}\gamma(t), \tfrac{d}{dt}\gamma(t) \right\rangle_{\gamma(t)}}\; dt.$

Straight lines are generalized to constant-speed curves giving the shortest path between pairs of points $x, y \in \mathcal{M}$, so-called geodesics, for which it holds that $\gamma^* = \mathrm{argmin}_\gamma\, L(\gamma)$, such that $\gamma(0) = x$, $\gamma(1) = y$, and $\left\|\frac{d}{dt}\gamma(t)\right\|_{\gamma(t)} = 1$. Global distances are thus induced on $\mathcal{M}$ by $d_{\mathcal{M}}(x, y) = \inf_\gamma L(\gamma)$. Using this metric, we can define a metric space $(\mathcal{M}, d_{\mathcal{M}})$. Moving from a point $x \in \mathcal{M}$ in a given direction $v \in T_x\mathcal{M}$ with constant velocity is formalized by the exponential map $\exp_x : T_x\mathcal{M} \to \mathcal{M}$: there exists a unique unit-speed geodesic $\gamma$ such that $\gamma(0) = x$ and $\frac{d\gamma(t)}{dt}\big|_{t=0} = v$, where $v \in T_x\mathcal{M}$, and the corresponding exponential map is defined as $\exp_x(v) = \gamma(1)$. The logarithmic map is its inverse, $\log_x = \exp_x^{-1} : \mathcal{M} \to T_x\mathcal{M}$. For geodesically complete manifolds, i.e. manifolds in which there exists a length-minimizing geodesic between every $x, y \in \mathcal{M}$, such as the Lorentz model, the hypersphere, and many others, $\exp_x$ is well-defined on the full tangent space $T_x\mathcal{M}$.
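To make the curve-length definition concrete, a small numerical sketch (our own illustration, not from the paper) approximates $L(\gamma)$ for a straight segment through the origin of the Poincaré disk ($K = -1$, conformal factor $\lambda_x = 2/(1 - \|x\|_2^2)$) and compares it against the closed-form gyrospace distance:

```python
import numpy as np

def curve_length(gamma, ts):
    # L(gamma) ~= sum_i ||gamma'(t_i)||_{gamma(t_i)} * dt, midpoint rule.
    length = 0.0
    for t0, t1 in zip(ts[:-1], ts[1:]):
        x = gamma((t0 + t1) / 2)
        v = (gamma(t1) - gamma(t0)) / (t1 - t0)
        lam = 2.0 / (1.0 - np.dot(x, x))   # conformal factor lambda_x for K = -1
        length += lam * np.linalg.norm(v) * (t1 - t0)
    return length

gamma = lambda t: np.array([t - 0.5, 0.0])   # segment from (-0.5, 0) to (0.5, 0)
approx = curve_length(gamma, np.linspace(0.0, 1.0, 2001))
exact = 2 * np.arctanh(0.8)                  # d_gyr((-0.5,0), (0.5,0)) for K = -1
print(approx, exact)                         # both ~2.1972
```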
To connect vectors in tangent spaces, we use parallel transport $\mathrm{PT}_{x \to y} : T_x\mathcal{M} \to T_y\mathcal{M}$, which is an isomorphism between the two tangent spaces, so that the transported vectors stay parallel to the connection. It corresponds to moving tangent vectors along geodesics and defines a canonical way to connect tangent spaces.

A.2 BRIEF COMPARISON OF CONSTANT CURVATURE SPACE MODELS

We have seen five different models of constant curvature spaces, each of which has advantages and disadvantages when applied to learning latent representations with VAEs.

A big advantage of the hyperboloid and hypersphere is that optimization in these spaces does not suffer from as many numerical instabilities as it does in the respective projected spaces. On the other hand, we have seen that as $K \to 0$, the norms of points go to infinity. As we will see in experiments, this is not a problem when optimizing curvature within these spaces in practice, except if we are trying to cross the boundary at $K = 0$ and go from a hyperboloid to a sphere, or vice versa. Intuitively, the points are just positioned very differently in the ambient spaces of $\mathbb{H}_{-\epsilon}$ and $\mathbb{S}_{\epsilon}$, for a small $\epsilon > 0$.

Since points in the $n$-dimensional projected hypersphere and Poincaré ball models can be represented using a real vector of length $n$, these models enable us to visualize points directly for $n = 2$ or even $n = 3$. On the other hand, optimizing a function over these models is not very well-conditioned. In the case of the Poincaré ball, a significant number of points lie close to the boundary of the ball (i.e. with a squared norm of almost $-1/K$), which causes numerical instabilities even when using 64-bit float precision in computations.

A similar problem occurs in the projected hypersphere with points that are far away from the origin $\mathbf{0}$ (i.e. points that are close to the “South pole” of the backprojected sphere). Unintuitively, all points that are far away from the origin are actually very close to each other with respect to the induced distance function, yet very far away from each other in terms of the Euclidean distance.

Both distance conversion theorems (A.5 and its projected hypersphere counterpart) rely on the points being fixed when changing curvature. If they somehow depend on the curvature, the convergence theorem does not hold. We conjecture that if points stay close to the boundary in $\mathbb{P}$, or far away from $\mathbf{0}$ in $\mathbb{D}$, as $K \to 0$, this is exactly the reason for numerical instabilities (apart from the standard numerical problem of representing large numbers in floating-point notation). Because of the above reasons, we do some of our experiments with the projected spaces and others with the hyperboloid and hypersphere, and aim to compare the performance of these empirically as well.

A.3 EUCLIDEAN GEOMETRY

A.3.1 EUCLIDEAN SPACE

Distance function: The distance function in $\mathbb{E}^n$ is $d_{\mathbb{E}}(x, y) = \|x - y\|_2$. Using the law of cosines (equivalently, the definition of the angle $\theta_{x,y}$), we can derive that

$\|x - y\|_2^2 = \langle x - y, x - y \rangle_2 = \|x\|_2^2 - 2\langle x, y \rangle_2 + \|y\|_2^2 = \|x\|_2^2 + \|y\|_2^2 - 2\|x\|_2 \|y\|_2 \cos\theta_{x,y}.$

Exponential map: The exponential map in $\mathbb{E}^n$ is $\exp_x(v) = x + v$. The fact that the resulting points belong to the space is trivial. Deriving the inverse function, i.e. the logarithmic map, is also trivial: $\log_x(y) = y - x$.

Parallel transport: We do not need parallel transport in the Euclidean space, as we can directly sample from a Normal distribution.
" }, { "heading": "A.4 HYPERBOLIC GEOMETRY", "text": "" }, { "heading": "A.4.1 HYPERBOLOID", "text": "Do note that all the theorems for the hyperboloid are essentially trivial corollaries of their equivalents in the hypersphere (and vice versa) (Section A.5.1). Notable differences include the fact that $R^2 = -\frac{1}{K}$, not $R^2 = \frac{1}{K}$, and all the operations use the hyperbolic trigonometric functions $\sinh$, $\cosh$, and $\tanh$ instead of their Euclidean counterparts. Also, we often leverage the “hyperbolic” Pythagorean theorem, in the form $\cosh^2(\alpha) - \sinh^2(\alpha) = 1$.

Projections Due to the definition of the space as a retraction from the ambient space, we can project a generic vector in the ambient space to the hyperboloid using the shortest Euclidean distance by normalization:
$$\mathrm{proj}_{\mathbb{H}^n_K}(x) = R\,\frac{x}{\|x\|_L} = \frac{x}{\sqrt{-K}\,\|x\|_L}.$$
Secondly, the $n + 1$ coordinates of a point on the hyperboloid are co-dependent; they satisfy the relation $\langle x, x \rangle_L = 1/K$. This implies that if we are given a vector with $n$ coordinates $\tilde{x} = (x_2, \ldots, x_{n+1})$, we can compute the missing coordinate to place it onto the hyperboloid:
$$x_1 = \sqrt{\|\tilde{x}\|_2^2 - \frac{1}{K}}.$$
This is useful for example when orthogonally projecting points from $T_{\mu_0}\mathbb{H}^n_K$ onto the manifold.

Distance function The distance function in $\mathbb{H}^n_K$ is
$$d^K_H(x, y) = R \cdot \theta_{x,y} = R \cosh^{-1}\left( -\frac{\langle x, y \rangle_L}{R^2} \right) = \frac{1}{\sqrt{-K}} \cosh^{-1}\left(K \langle x, y \rangle_L\right).$$

Remark A.1 (About the divergence of points in $\mathbb{H}^n_K$). Since the points on the hyperboloid $x \in \mathbb{H}^n_K$ are norm-constrained to
$$\langle x, x \rangle_L = \frac{1}{K},$$
all the points on the hyperboloid go to infinity as $K$ goes to $0^-$ from below:
$$\lim_{K \to 0^-} \langle x, x \rangle_L = -\infty.$$
This confirms the intuition that the hyperboloid grows “flatter”, but to do that, it has to move away from the origin of the coordinate space $\mathbf{0}$. A good example of a point that diverges is the origin of the hyperboloid $\mu_0^K = (1/\sqrt{-K}, 0, \ldots, 0)^T = (R, 0, \ldots, 0)^T$. That makes this model unsuitable for trying to learn sign-agnostic curvatures, similarly to the hypersphere.

Exponential map The exponential map in $\mathbb{H}^n_K$ is
$$\exp^K_x(v) = \cosh\left(\frac{\|v\|_L}{R}\right) x + \sinh\left(\frac{\|v\|_L}{R}\right) \frac{R\,v}{\|v\|_L},$$
and in the case of $x := \mu_0 = (R, 0, \ldots, 0)^T$:
$$\exp^K_{\mu_0}(v) = \left( \cosh\left(\frac{\|\tilde{v}\|_2}{R}\right) R;\; \sinh\left(\frac{\|\tilde{v}\|_2}{R}\right) \frac{R}{\|\tilde{v}\|_2}\,\tilde{v}^T \right)^T,$$
where $v = (0; \tilde{v}^T)^T$ and $\|v\|_L = \|v\|_2 = \|\tilde{v}\|_2$.

Theorem A.2 (Logarithmic map in $\mathbb{H}^n_K$). For all $x, y \in \mathbb{H}^n_K$, the logarithmic map in $\mathbb{H}^n_K$ maps $y$ to a tangent vector at $x$:
$$\log^K_x(y) = \frac{\cosh^{-1}(\alpha)}{\sqrt{\alpha^2 - 1}}\,(y - \alpha x),$$
where $\alpha = K \langle x, y \rangle_L$.

Proof. We show the detailed derivation of the logarithmic map as the inverse function of the exponential map, $\log_x(y) = \exp_x^{-1}(y)$, adapted from (Nagano et al., 2019).

As mentioned previously,
$$y = \exp^K_x(v) = \cosh\left(\frac{\|v\|_L}{R}\right) x + \sinh\left(\frac{\|v\|_L}{R}\right) \frac{R\,v}{\|v\|_L}.$$
Solving for $v$, we obtain
$$v = \frac{\|v\|_L}{R \sinh\left(\frac{\|v\|_L}{R}\right)} \left( y - \cosh\left(\frac{\|v\|_L}{R}\right) x \right).$$
However, we still need to rewrite $\|v\|_L$ in evaluatable terms:
$$0 = \langle x, v \rangle_L = \frac{\|v\|_L}{R \sinh\left(\frac{\|v\|_L}{R}\right)} \Big( \langle x, y \rangle_L - \cosh\left(\frac{\|v\|_L}{R}\right) \underbrace{\langle x, x \rangle_L}_{-R^2} \Big), \text{ hence}$$
$$\cosh\left(\frac{\|v\|_L}{R}\right) = -\frac{1}{R^2}\langle x, y \rangle_L,$$
and therefore
$$\|v\|_L = R \cosh^{-1}\left( -\frac{1}{R^2} \langle x, y \rangle_L \right) = \frac{1}{\sqrt{-K}} \cosh^{-1}\left(K \langle x, y \rangle_L\right) = d^K_H(x, y).$$
Plugging the result back into the first equation, we obtain
$$v = \frac{\|v\|_L}{R \sinh\left(\frac{\|v\|_L}{R}\right)} \left( y - \cosh\left(\frac{\|v\|_L}{R}\right) x \right) = \frac{\cosh^{-1}(\alpha)}{\sinh(\cosh^{-1}(\alpha))}\left(y - \cosh(\cosh^{-1}(\alpha))\,x\right) = \frac{\cosh^{-1}(\alpha)}{\sqrt{\alpha^2 - 1}}\,(y - \alpha x),$$
where $\alpha = -\frac{1}{R^2}\langle x, y \rangle_L = K \langle x, y \rangle_L$, and the last equality assumes $|\alpha| > 1$.
This assumption holds, since for all points x,y ∈ HnK it holds that 〈x,y〉L ≤ −R2, and 〈x,y〉L = −R2 if and only if x = y, due to Cauchy-Schwarz (Ratcliffe, 2006, Theorem 3.1.6). Hence, the only case where this would be a problem would be if x = y, but it is clear that the result in that case is u = 0.\nParallel transport Using the generic formula for parallel transport in manifolds for x,y ∈ M and v ∈ TxM\nPTKx→y(v) = v −\n〈 logKx (y),v 〉 x\ndM(x,y) (logKx (y) + log K y (x)), (1)\nand the logarithmic map formula from Theorem A.2\nlogKx (y) = cosh−1(α)√ α2 − 1 (y − αx),\nwhere α = − 1R2 〈x,y〉L , we derive parallel transport in H n K :\nPTKx→y(v) = v + 〈y,v〉L\nR2 − 〈x,y〉L (x+ y).\nA special form of parallel transport exists for when the source vector is µ0 = (R, 0, . . . , 0)T :\nPTKµ0→y(v) = v + 〈y,v〉2 R2 +Ry1 y1 +R y2 ...\nyn+1 ." }, { "heading": "A.4.2 POINCARÉ BALL", "text": "Do note, that all the theorems for the projected hypersphere are essentially trivial corollaries of their equivalents in the Poincaré ball (and vice-versa) (Section A.5.2). Notable differences include the fact thatR2 = − 1K , notR\n2 = 1K , and all the operations use the hyperbolic trigonometric functions sinh, cosh, and tanh, instead of their Euclidean counterparts. Also, we often leverage the “hyperbolic” Pythagorean theorem, in the form cosh2(α)− sinh2(α) = 1." }, { "heading": "Stereographic projection", "text": "Theorem A.3 (Stereographic backprojected points of PnK belong to HnK). For all y ∈ PnK ,∥∥ρ−1K (y)∥∥2L = 1K . Proof.\n∥∥ρ−1K (y)∥∥2L = ∥∥∥∥∥∥ ( 1√ |K| K ‖y‖22 − 1 K ‖y‖22 + 1 ; 2yT K ‖y‖22 + 1 )T∥∥∥∥∥∥ 2\nL\n= − ( 1√ |K| K ‖y‖22 − 1 K ‖y‖22 + 1 )2 + 4 ‖y‖22 (K ‖y‖22 + 1)2\n= 1 |K| −(K ‖y‖22 − 1)2 + 4|K| ‖y‖ 2 2\n(K ‖y‖22 + 1)2\n= 1 −K −(K ‖y‖22 − 1)2 − 4K ‖y‖ 2 2\n(K ‖y‖22 + 1)2\n= 1\nK\n(K ‖y‖22 − 1)2 + 4K ‖y‖ 2 2\n(K ‖y‖22 + 1)2\n= 1\nK\nK2 ‖y‖42 + 2K ‖y‖ 2 2 + 1\n(K ‖y‖22 + 1)2\n= 1\nK\n(K ‖y‖22 + 1)2 (K ‖y‖22 + 1)2 = 1 K .\nDistance function The distance function in PnK is (derived from the hyperboloid distance function using the stereographic projection ρK):\ndP(x,y) = dH(ρ −1 K (x), ρ −1 K (y))\n= 1√ −K cosh−1\n( 1−\n2K ‖x− y‖22 (1 +K ‖x‖22)(1 +K ‖y‖ 2 2)\n)\n= R cosh−1 ( 1 +\n2R2 ‖x− y‖22 (R2 − ‖x‖22)(R2 − ‖y‖ 2 2)\n)\nTheorem A.4 (Distance equivalence in PnK). For all K < 0 and for all pairs of points x,y ∈ PnK , the Poincaré distance between them equals the gyrospace distance\ndP(x,y) = dPgyr(x,y).\nProof. Proven using Mathematica (File: distance limits.ws), proof involves heavy algebra.\nTheorem A.5 (Gyrospace distance converges to Euclidean in PnK). For any fixed pair of points x,y ∈ PnK , the Poincaré gyrospace distance between them converges to the Euclidean distance in the limit (up to a constant) as K → 0−:\nlim K→0− dPgyr(x,y) = 2 ‖x− y‖2 .\nProof.\nlim K→0− dPgyr(x,y) = 2 lim K→0−\n[ tanh−1(\n√ −K ‖−x⊕K y‖2)√ −K\n]\n= 2 lim K→0−\n[ tanh−1(\n√ −K ‖y − x‖2)√ −K ] = 2 ‖y − x‖2 ,\nwhere the second equality holds because of the theorem of limits of composed functions, where\nf(a) = tanh−1(a √ −K)√\n−K\ng(K) = ‖−x⊕K y‖2 .\nWe see that\nlim K→0− g(K) = ‖y − x‖2\ndue to Theorem A.14, and\nlim a→‖x−y‖2\nf(a) = tanh−1(a √ −K)√\n−K\nAdditionally for the last equality, we need the fact that\nlim x→0\ntanh−1(a √ |x|)√\n|x| = a.\nTheorem A.6 (Distance converges to Euclidean as K → 0− in PnK). For any fixed pair of points x,y ∈ PnK , the Poincaré distance between them converges to the Euclidean distance in the limit (up to a constant) as K → 0−:\nlim K→0− dP(x,y) = 2 ‖x− y‖2 .\nProof. 
Theorem A.4 and A.5.\nExponential map As derived and proven in Ganea et al. (2018a), the exponential map in PnK and its inverse is\nexpKx (v) = x⊕K ( tanh (√ −K\nλKx ‖v‖2 2\n) v√\n−K ‖v‖2 ) logKx (y) =\n2√ −KλKx\ntanh−1 (√ −K ‖−x⊕K y‖2 ) −x⊕K y ‖−x⊕K y‖2\nIn the case of x := µ0 = (0, . . . , 0)T they simplify to: expKµ0(v) = tanh (√ −K ‖v‖2 ) v√ −K ‖v‖2\nlogKµ0(y) = tanh −1 (√ −K ‖y‖2 ) y ‖y‖2 .\nParallel transport Kochurov et al. (2019); Ganea et al. (2018a) have also derived and implemented the parallel transport operation for the Poincaré ball:\nPTKx→y(v) = λKx λKy gyr[y,−x]v,\nPTKµ0→y(v) = 2\nλKy v,\nPTKx→µ0(v) = λKx 2 v,\nwhere\ngyr[x,y]v = −(x⊕K y)⊕K (x⊕K (y ⊕K v))\nis the gyration operation (Ungar, 2008, Definition 1.11).\nUnfortunately, on the Poincaré ball, 〈·, ·〉x has a form that changes with respect to x, unlike in the hyperboloid." }, { "heading": "A.5 SPHERICAL GEOMETRY", "text": "" }, { "heading": "A.5.1 HYPERSPHERE", "text": "All the theorems for the hypersphere are essentially trivial corollaries of their equivalents in the hyperboloid (Section A.4.1). Notable differences include the fact that R2 = 1K , not R\n2 = − 1K , and all the operations use the Euclidean trigonometric functions sin, cos, and tan, instead of their hyperbolic counterparts. Also, we often leverage the Pythagorean theorem, in the form sin2(α) + cos2(α) = 1.\nProjections Due to the definition of the space as a retraction from the ambient space, we can project a generic vector in the ambient space to the hypersphere using the shortest Euclidean distance by normalization:\nprojSn−1K (x) = R\nx\n||x||2 = x√ K ||x||2 .\nSecondly, the n + 1 coordinates of a point on the sphere are co-dependent; they satisfy the relation 〈x,x〉2 = 1/K. This implies, that if we are given a vector with n coordinates x̃ = (x2, . . . , xn+1), we can compute the missing coordinate to place it onto the sphere:\nx1 =\n√ 1\nK − ‖x̃‖22.\nThis is useful for example in the case of orthogonally projecting points from Tµ0SnK onto the manifold.\nDistance function The distance function in SnK is dKS (x,y) = R · θx,y = R cos−1 ( 〈x,y〉2 R2 ) = 1√ K cos−1 (K 〈x,y〉2) .\nRemark A.7 (About the divergence of points in SnK). Since the points on the hypersphere x ∈ SnK are norm-constrained to\n〈x,x〉2 = 1\nK ,\nall the points on the sphere go to infinity as K goes to 0+ from above:\nlim K→0+ 〈x,x〉2 =∞.\nThis confirms the intuition that the sphere grows “flatter”, but to do that, it has to go away from the origin of the coordinate space 0. A good example of a point that diverges is the north pole of the sphere µK0 = (1/K, 0, . . . , 0)\nT = (R, 0, . . . , 0)T . That makes this model unsuitable for trying to learn sign-agnostic curvatures, similarly to the hyperboloid.\nExponential map The exponential map in SnK is\nexpKx (v) = cos ( ||v||2 R ) x+ sin ( ||v||2 R ) Rv ||v||2 .\nTheorem A.8 (Logarithmic map in SnK). For all x,y ∈ SnK , the logarithmic map in SnK maps y to a tangent vector at x:\nlogKx (y) = cos−1(α)√ 1− α2 (y − αx),\nwhere α = K 〈x,y〉2.\nProof. 
Analogous to the proof of Theorem A.2.\nAs mentioned previously,\ny = expKx (v) = cos ( ‖v‖2 R ) x+ sin ( ‖v‖2 R ) Rv\n‖v‖2 .\nSolving for v, we obtain\nv = ||v||2 R sin ( ‖v‖2 R\n) (y − cos(‖v‖2 R ) x ) .\nHowever, we still need to rewrite ||v||2 in evaluatable terms:\n0 = 〈x,v〉2 = ||v||2 R sin ( ‖v‖2 R\n) 〈x,y〉2 − cos(‖v‖2R ) 〈x,x〉2︸ ︷︷ ︸ R2 , hence\ncos ( ‖v‖2 R ) = 1 R2 〈x,y〉2 ,\nand therefore ||v||2 = R cos−1 ( 1\nR2 〈x,y〉2\n) =\n1√ K cos−1(K 〈x,y〉2) = d K S (x,y).\nPlugging the result back into the first equation, we obtain\nv = ||v||2 R sin ( ‖v‖2 R\n) (y − cos(‖v‖2 R ) x )\n= R cos−1 (α) R sin ( 1 RR cos −1 (α) ) (y − cos( 1 R R cos−1 (α) ) x ) = cos−1(α)\nsin(cos−1(α)) (y − cos(cos−1(α))x)\n= cos−1(α)√ 1− α2 (y − αx),\nwhere α = 1R2 〈x,y〉2 = K 〈x,y〉2 , and the last equality assumes |α| > 1. This assumption holds, since for all points x,y ∈ SnK it holds that 〈x,y〉2 ≤ R2, and 〈x,y〉2 = R2 if and only if x = y, due to Cauchy-Schwarz (Ratcliffe, 2006, Theorem 3.1.6). Hence, the only case where this would be a problem would be if x = y, but it is clear that the result in that case is u = 0.\nParallel transport Using the generic formula for parallel transport in manifolds (Equation A.4.1) for x,y ∈ SnK and v ∈ TxSnK and the spherical logarithmic map formula\nlogKx (y) = cos−1(α)√ 1− α2 (y − αx),\nwhere α = K 〈x,y〉2 , we derive parallel transport in SnK :\nPTKx→y(v) = v − 〈y,v〉2\nR2 + 〈x,y〉2 (x+ y)\n= v − K 〈y,v〉2\n1 +K 〈x,y〉2 (x+ y).\nA special form of parallel transport exists for when the source vector is µ0 = (R, 0, . . . , 0)T :\nPTKµ0→y(v) = v − 〈y,v〉2 R2 +Ry1 y1 +R y2 ...\nyn+1 ." }, { "heading": "A.5.2 PROJECTED HYPERSPHERE", "text": "Do note, that all the theorems for the projected hypersphere are essentially trivial corollaries of their equivalents in the Poincaré ball (and vice-versa) (Section A.4.2). Notable differences include the fact that R2 = 1K , not R\n2 = − 1K , and all the operations use the Euclidean trigonometric functions sin, cos, and tan, instead of their hyperbolic counterparts. Also, we often leverage the Pythagorean theorem, in the form sin2(α) + cos2(α) = 1." }, { "heading": "Stereographic projection", "text": "Remark A.9 (Homeomorphism between SnK and Rn). We notice that ρK is not a homeomorphism between the n-dimensional sphere and Rn, as it is not defined at −µ0 = (−R;0T )T . If we additionally changed compactified the plane by adding a point “at infinity” and set it equal to ρK(µ0), ρK would become a homeomorphism.\nTheorem A.10 (Stereographic backprojected points of DnK belong to SnK). For all y ∈ DnK ,∥∥ρ−1K (y)∥∥22 = 1K . Proof.\n∥∥ρ−1K (y)∥∥22 = ∥∥∥∥∥∥ ( 1√ |K| K ‖y‖22 − 1 K ‖y‖22 + 1 ; 2yT K ‖y‖22 + 1 )T∥∥∥∥∥∥ 2\n2\n= ( 1√ |K| K ‖y‖22 − 1 K ‖y‖22 + 1 )2 + 4 ‖y‖22 (K ‖y‖22 + 1)2\n= 1 |K| (K ‖y‖22 − 1)2 + 4|K| ‖y‖ 2 2\n(K ‖y‖22 + 1)2\n= 1\nK\n(K ‖y‖22 − 1)2 + 4K ‖y‖ 2 2\n(K ‖y‖22 + 1)2\n= 1\nK\nK2 ‖y‖42 + 2K ‖y‖ 2 2 + 1\n(K ‖y‖22 + 1)2\n= 1\nK\n(K ‖y‖22 + 1)2 (K ‖y‖22 + 1)2 = 1 K .\nDistance function The distance function in DnK is (derived from the spherical distance function using the stereographic projection ρK):\ndD(x,y) = dS(ρ −1 K (x), ρ −1 K (y))\n= 1√ K cos−1\n( 1−\n2K ‖x− y‖22 (1 +K ‖x‖22)(1 +K ‖y‖ 2 2)\n)\n= R cos−1 ( 1−\n2R2 ‖x− y‖22 (R2 + ‖x‖22)(R2 + ‖y‖ 2 2)\n)\nTheorem A.11 (Distance equivalence in DnK). For allK > 0 and for all pairs of points x,y ∈ DnK , the spherical projected distance between them equals the gyrospace distance\ndD(x,y) = dDgyr(x,y).\nProof. 
Proven using Mathematica (File: distance limits.ws), proof involves heavy algebra.\nTheorem A.12 (Gyrospace distance converges to Euclidean in DnK). For any fixed pair of points x,y ∈ DnK , the spherical projected gyrospace distance between them converges to the Euclidean distance in the limit (up to a constant) as K → 0+:\nlim K→0+ dDgyr(x,y) = 2 ‖x− y‖2 .\nProof.\nlim K→0+ dDgyr(x,y) = 2 lim K→0+\n[ tan−1(\n√ K ‖−x⊕K y‖2)√\nK\n]\n= 2 lim K→0+\n[ tan−1(\n√ K ‖y − x‖2)√ K ] = 2 ‖y − x‖2 ,\nwhere the second equality holds because of the theorem of limits of composed functions, where\nf(a) = tan−1(a √ K)√\nK\ng(K) = ‖−x⊕K y‖2 .\nWe see that\nlim K→0− g(K) = ‖y − x‖2\ndue to Theorem A.14, and\nlim a→‖x−y‖2\nf(a) = tan−1(a √ K)√\nK\nAdditionally for the last equality, we need the fact that\nlim x→0\ntanh−1(a √ |x|)√\n|x| = a.\nTheorem A.13 (Distance converges to Euclidean as K → 0+ in DnK). For any fixed pair of points x,y ∈ DnK , the spherical projected distance between them converges to the Euclidean distance in the limit (up to a constant) as K → 0+:\nlim K→0+ dD(x,y) = 2 ‖x− y‖2 .\nProof. Theorem A.11 and A.12.\nExponential map Analogously to the derivation of the exponential map in PnK in Ganea et al. (2018a, Section 2.3–2.4), we can derive Möbius scalar multiplication in DnK :\nr ⊗K x = 1\ni √ K\ntanh(r tanh−1(i √ K ‖x‖2)) x\n‖x‖2\n= 1\ni √ K\ntanh(ri tan−1( √ K ‖x‖2)) x\n‖x‖2\n= 1√ K\ntan(r tan−1( √ K ‖x‖2)) x\n‖x‖2 ,\nwhere we use the fact that tanh−1(ix) = i tan−1(x) and tanh(ix) = i tan(x). We can easily see that 1⊗K x = x. Hence, the geodesic has the form of\nγx→y(t) = x⊕K t⊗K (−x⊕K y),\nand therefore the exponential map in DnK is:\nexpKx (v) = x⊕K\n( tan (√ K λKx ‖v‖2\n2\n) v√\nK ‖v‖2\n) .\nThe inverse formula can also be computed:\nlogKx (y) = 2√ KλKx\ntan−1 (√ K ‖−x⊕K y‖2 ) −x⊕K y ‖−x⊕K y‖2\nIn the case of x := µ0 = (0, . . . , 0)T they simplify to: expKµ0(v) = tan (√ K ‖v‖2 ) v√\nK ‖v‖2 logKµ0(y) = tan −1 (√ K ‖y‖2 ) y√\nK ‖y‖2 .\nParallel transport Similarly to the Poincaré ball, we can derive the parallel transport operation for the projected sphere:\nPTKx→y(v) = λKx λKy gyr[y,−x]v,\nPTKµ0→y(v) = 2\nλKy v,\nPTKx→µ0(v) = λKx 2 v,\nwhere\ngyr[x,y]v = −(x⊕K y)⊕K (x⊕K (y ⊕K v))\nis the gyration operation (Ungar, 2008, Definition 1.11).\nUnfortunately, on the projected sphere, 〈·, ·〉x has a form that changes with respect to x, similarly to the Poincaré ball and unlike in the hypersphere." }, { "heading": "A.6 MISCELLANEOUS PROPERTIES", "text": "Theorem A.14 (Möbius addition converges to Eucl. vector addition).\nlim K→0\n(x⊕K y) = x+ y.\nNote: This theorem works from both sides, hence applies to the Poincaré ball as well as the projected spherical space. Observe that the Möbius addition has the same form for both spaces." }, { "heading": "Proof.", "text": "lim K→0 (x⊕K y) = lim K→0\n[ (1− 2K 〈x,y〉2 −K ‖y‖ 2 2)x+ (1 +K ‖x‖ 2 2)y\n1− 2K 〈x,y〉2 +K2 ‖x‖ 2 2 ‖y‖ 2 2 ] = x+ y.\nTheorem A.15 (ρ−1K is the inverse stereographic projection). For all (ξ;xT )T ∈MnK , ξ ∈ R\nρ−1K (ρ((ξ;x T )T )) = x,\nwhereM∈ {S,H}.\nProof.\nρ−1K (ρK((ξ;x T )T )) = ρ−1K\n( x\n1− √ |K|ξ\n)\n= 1√|K| K ∥∥∥∥ x1−√|K|ξ ∥∥∥∥2 2 − 1\nK ∥∥∥∥ x1−√|K|ξ ∥∥∥∥2 2 + 1 ;\n2xT\n1− √ |K|ξ\nK ∥∥∥∥ x1−√|K|ξ ∥∥∥∥2 2 + 1\n T\n= 1/ √ |K|\nK ∥∥∥∥ x1−√|K|ξ ∥∥∥∥2 2 + 1\nK ∥∥∥∥∥ x1−√|K|ξ ∥∥∥∥∥ 2\n2\n− 1; 2 √ |K|xT\n1− √ |K|ξ\nT\n= 1/ √ |K|\nK‖x‖22 (1− √ |K|ξ)2 + 1\n( K ‖x‖22\n(1− √ |K|ξ)2\n− 1; 2 √ |K|xT\n1− √ |K|ξ\n)T\nWe observe that ‖x‖22 = 1 K − ξ 2, because x ∈MnK . Therefore\nρ−1K (ρK((ξ;x T )T )) =\n= . . . 
(above)
$$= \frac{1/\sqrt{|K|}}{\frac{K\left(\frac{1}{K} - \xi^2\right)}{(1-\sqrt{|K|}\xi)^2} + 1} \left( \frac{K\left(\frac{1}{K} - \xi^2\right)}{(1-\sqrt{|K|}\xi)^2} - 1;\; \frac{2\sqrt{|K|}\,x^T}{1-\sqrt{|K|}\xi} \right)^T$$
$$= \frac{1/\sqrt{|K|}}{\frac{(1-\sqrt{|K|}\xi)(1+\sqrt{|K|}\xi)}{(1-\sqrt{|K|}\xi)^2} + 1} \left( \frac{(1-\sqrt{|K|}\xi)(1+\sqrt{|K|}\xi)}{(1-\sqrt{|K|}\xi)^2} - 1;\; \frac{2\sqrt{|K|}\,x^T}{1-\sqrt{|K|}\xi} \right)^T$$
$$= \frac{1/\sqrt{|K|}}{\frac{1+\sqrt{|K|}\xi}{1-\sqrt{|K|}\xi} + 1} \left( \frac{1+\sqrt{|K|}\xi}{1-\sqrt{|K|}\xi} - 1;\; \frac{2\sqrt{|K|}\,x^T}{1-\sqrt{|K|}\xi} \right)^T$$
$$= \frac{1/\sqrt{|K|}}{\frac{2}{1-\sqrt{|K|}\xi}} \left( \frac{2\sqrt{|K|}\xi}{1-\sqrt{|K|}\xi};\; \frac{2\sqrt{|K|}\,x^T}{1-\sqrt{|K|}\xi} \right)^T = \frac{1}{2\sqrt{|K|}}\left(2\sqrt{|K|}\xi;\; 2\sqrt{|K|}\,x^T\right)^T = \left(\xi; x^T\right)^T.$$

Lemma A.16 ($\lambda^K_x$ converges to 2 as $K \to 0$). For all $x$ in $\mathbb{P}^n_K$ or $\mathbb{D}^n_K$, it holds that
$$\lim_{K \to 0} \lambda^K_x = 2.$$

Proof.
$$\lim_{K \to 0} \lambda^K_x = \lim_{K \to 0} \frac{2}{1 + K\|x\|_2^2} = 2.$$

Theorem A.17 ($\exp^K_x(v)$ converges to $x + v$ as $K \to 0$). For all $x$ in the Poincaré ball $\mathbb{P}^n_K$ or the projected sphere $\mathbb{D}^n_K$ and $v \in T_x\mathcal{M}$, it holds that
$$\lim_{K \to 0} \exp^K_x(v) = \exp_x(v) = x + v,$$
hence the exponential map converges to its Euclidean variant.

Proof. For the positive case $K > 0$,
$$\lim_{K \to 0^+} \exp^K_x(v) = \lim_{K \to 0^+}\left( x \oplus_K \left( \tan\left(\sqrt{|K|}\,\frac{\lambda^K_x \|v\|_2}{2}\right) \frac{v}{\sqrt{|K|}\,\|v\|_2} \right) \right) = x + \lim_{K \to 0^+}\left( \tan\left(\sqrt{|K|}\,\frac{\lambda^K_x \|v\|_2}{2}\right) \frac{v}{\sqrt{|K|}\,\|v\|_2} \right) = x + \frac{v}{\|v\|_2}\lim_{K \to 0^+} \frac{\tan\left(\sqrt{K}\,\frac{\lambda^K_x \|v\|_2}{2}\right)}{\sqrt{K}} = x + v,$$
due to several applications of the theorem of limits of composed functions, Lemma A.16, and the fact that
$$\lim_{\alpha \to 0} \frac{\tan(\sqrt{\alpha}\,a)}{\sqrt{\alpha}} = a.$$
The negative case $K < 0$ is analogous.

Theorem A.18 ($\log^K_x(y)$ converges to $y - x$ as $K \to 0$). For all $x, y$ in the Poincaré ball $\mathbb{P}^n_K$ or the projected sphere $\mathbb{D}^n_K$, it holds that
$$\lim_{K \to 0} \log^K_x(y) = \log_x(y) = y - x,$$
hence the logarithmic map converges to its Euclidean variant.

Proof. Firstly,
$$z = -x \oplus_K y \xrightarrow{K \to 0} y - x,$$
due to Theorem A.14. For the positive case $K > 0$,
$$\lim_{K \to 0^+} \log^K_x(y) = \lim_{K \to 0^+} \left( \frac{2}{\sqrt{|K|}\,\lambda^K_x} \tan^{-1}\left(\sqrt{|K|}\,\|z\|_2\right) \frac{z}{\|z\|_2} \right) = \lim_{K \to 0^+} \frac{2}{\lambda^K_x} \cdot \lim_{K \to 0^+} \frac{\tan^{-1}\left(\sqrt{K}\,\|z\|_2\right)}{\sqrt{K}\,\|z\|_2} \cdot \lim_{K \to 0^+} z = 1 \cdot 1 \cdot (y - x) = y - x,$$
due to several applications of the theorem of limits of composed functions, the product rule for limits, Lemma A.16, and the fact that
$$\lim_{\alpha \to 0} \frac{\tan^{-1}(\sqrt{\alpha}\,a)}{\sqrt{\alpha}} = a.$$
The negative case $K < 0$ is analogous.

Lemma A.19 ($\mathrm{gyr}[x, y]v$ converges to $v$ as $K \to 0$). For all $x, y$ in the Poincaré ball $\mathbb{P}^n_K$ or the projected sphere $\mathbb{D}^n_K$ and $v \in T_x\mathcal{M}$, it holds that
$$\lim_{K \to 0} \mathrm{gyr}[x, y]v = v,$$
hence gyration converges to the identity function." }, { "heading": "Proof.", "text": "$$\lim_{K \to 0} \mathrm{gyr}[x, y]v = \lim_{K \to 0}\left( -(x \oplus_K y) \oplus_K (x \oplus_K (y \oplus_K v)) \right) = -(x + y) + (x + (y + v)) = -x - y + x + y + v = v,$$
due to Theorem A.14 and the theorem of limits of composed functions.

Theorem A.20 ($\mathrm{PT}^K_{x \to y}(v)$ converges to $v$ as $K \to 0$). For all $x, y$ in the Poincaré ball $\mathbb{P}^n_K$ or the projected sphere $\mathbb{D}^n_K$ and $v \in T_x\mathcal{M}$, it holds that
$$\lim_{K \to 0} \mathrm{PT}^K_{x \to y}(v) = v.$$

Proof.
$$\lim_{K \to 0} \mathrm{PT}^K_{x \to y}(v) = \lim_{K \to 0}\left( \frac{\lambda^K_x}{\lambda^K_y}\,\mathrm{gyr}[y, -x]v \right) = \underbrace{\lim_{K \to 0} \frac{\lambda^K_x}{\lambda^K_y}}_{\to 1} \cdot \underbrace{\lim_{K \to 0} \mathrm{gyr}[y, -x]v}_{\to v} = v,$$
due to the product rule for limits, Lemma A.16, and Lemma A.19." }, { "heading": "B PROBABILITY DETAILS", "text": "" }, { "heading": "B.1 WRAPPED NORMAL DISTRIBUTIONS", "text": "Theorem B.1 (Probability density function of $\mathcal{WN}(z; \mu, \Sigma)$ in $\mathbb{H}^n_K$).
$$\log \mathcal{WN}(z; \mu, \Sigma) = \log \mathcal{N}(v; 0, \Sigma) - (n - 1) \log\left( \frac{R \sinh\left(\frac{\|u\|_L}{R}\right)}{\|u\|_L} \right),$$
where $u = \log^K_\mu(z)$, $v = \mathrm{PT}^K_{\mu \to \mu_0}(u)$, and $R = 1/\sqrt{-K}$.

Proof. This was shown for the case $K = -1$ by Nagano et al. (2019). The difference is that we do not assume unit radius $R = 1/\sqrt{-K} = 1$. Hence, our transformation function has the form $f = \exp^K_\mu \circ \mathrm{PT}^K_{\mu_0 \to \mu}$, and $f^{-1} = \mathrm{PT}^K_{\mu \to \mu_0} \circ \log^K_\mu$.

The derivative of parallel transport $\mathrm{PT}^K_{x \to y}(v)$ for any $x, y \in \mathbb{H}^n_K$ and $v \in T_x\mathbb{H}^n_K$ is a map $d\mathrm{PT}^K_{x \to y}(v) : T_v(T_x\mathbb{H}^n_K) \to T_{\mathrm{PT}^K_{x \to y}(v)}(T_y\mathbb{H}^n_K)$.
Using the orthonormal basis (with respect to the Lorentz product) {ξ1, . . . ξn}, we can compute the determinant by computing the change with respect to each basis vector.\ndPTKx→y(ξ) = ∂\n∂ ∣∣∣∣ =0 PTKx→y(v + ξ)\n= ∂\n∂ ∣∣∣∣ =0 [ (v + ξ) + 〈y,v + ξ〉L R2 − 〈x,y〉L (x+ y) ] = [ ξ +\n〈y, ξ〉L R2 − 〈x,y〉L (x+ y) ] =0\n= PTKx→y(ξ).\nSince parallel transport preserves norms and vectors in the orthonormal basis have norm 1, the change is ∥∥dPTKx→y(ξ)∥∥L = ∥∥PTKx→y(ξ)∥∥L = 1.\nFor computing the determinant of the exponential map Jacobian, we choose the orthonormal basis {ξ1 = u/ ‖u‖L , ξ2, . . . , ξn}, where we just completed the basis based on the first vector. We again look at the change with respect to each basis vector. For the basis vector ξ1:\nd expKx (ξ1) =\n= ∂\n∂ ∣∣∣∣ =0 expKx ( u+\nu\n‖u‖L\n)\n= ∂\n∂ ∣∣∣∣ =0 cosh( | ‖u‖L + | R ) x+ R sinh ( |‖u‖L+ | R ) ‖u‖L | ‖u‖L + | (‖u‖L + )u = (‖u‖L + ) sinh ( |‖u‖L+ | R ) R| ‖u‖L + | x+ cosh ( |‖u‖L+ | R ) ‖u‖L u =0\n= sinh ( ‖u‖L R ) x R + cosh ( ‖u‖L R ) u\n‖u‖L ,\nwhere the second equality is due to∥∥∥∥u+ u‖u‖L ∥∥∥∥ L = ∥∥∥∥(1 + ‖u‖L ) u ∥∥∥∥ L = ∣∣∣∣1 + ‖u‖L ∣∣∣∣ ‖u‖L = | ‖u‖L + |.\nFor every other basis vector ξk where k > 1:\nd expKx (ξ) =\n= ∂\n∂ ∣∣∣∣ =0 expKx (u+ ξ)\n= ∂\n∂ ∣∣∣∣ =0 cosh(‖u+ ξ‖L R ) x+ R sinh ( ‖u+ ξ‖L R ) ‖u+ ξ‖L (u+ ξ) \n= ∂\n∂ ∣∣∣∣ =0 cosh √ ‖u‖2L + 2 R x+ R sinh (√ ‖u‖2L+ 2 R ) √ ‖u‖2L + 2 (u+ ξ) \n= cosh (√ ‖u‖2L+ 2 R ) ‖u‖2L + 2 (u+ ξ)\n+\n(R2 ‖u‖2L ξ −R2 u+ (‖u‖ 2 L + 2)x) sinh (√ ‖u‖2L+ 2 R ) R(‖u‖2L + 2)3/2 =0\n= R2 ‖u‖2L sinh\n( ‖u‖L R ) R(‖u‖2L)3/2 ξ = R sinh ( ‖u‖L R ) ‖u‖L ξ,\nwhere the third equality holds because\n‖u+ ξ‖2L = ‖u‖ 2 L + 2 ‖ξ‖2L − 2 〈u, ξ〉L = ‖u‖2L +\n2 − 2 〈u, ξ〉L = ‖u‖2L + 2,\nwhere the last equality relies on the fact that the basis is orthogonal, and u is parallel to ξ1 = u/ ‖u‖L, hence it is orthogonal to all the other basis vectors.\nBecause the basis is orthonormal the determinant is a product of the norms of the computed change for each basis vector. Therefore,\ndet\n( ∂ PTx→y(v)\n∂v\n) = 1n = 1.\nAdditionally, the following two properties hold:\n∥∥∥∥d expKx ( u‖u‖L )∥∥∥∥2 L = ∥∥∥∥sinh(‖u‖LR ) x R + cosh ( ‖u‖L R ) u ‖u‖L ∥∥∥∥2 L\n= sinh2 ( ‖u‖L R ) ‖x‖2L R2 + cosh2 ( ‖u‖L R ) ‖u‖2L ‖u‖2L\n= − sinh2 ( ‖u‖L R ) + cosh2 ( ‖u‖L R ) = 1.\nand\n∥∥d expKx (ξ)∥∥2L = ∥∥∥∥∥∥ R sinh ( ‖u‖L R ) ‖u‖L ξ ∥∥∥∥∥∥ 2\nL\n= R2 sinh2\n( ‖u‖L R ) ‖u‖2L ‖ξ‖2L\n= R2 sinh2\n( ‖u‖L R ) ‖u‖2L .\nTherefore, we obtain\ndet\n( ∂ expKx (u)\n∂u\n) = 1 · R sinh ( ‖u‖L R ) ‖u‖L n−1 . Finally,\ndet\n( ∂f(v)\n∂v\n) = det ( ∂ expKµ (u)\n∂u\n) · det ( ∂ PTKµ0→µ(v)\n∂v\n) = R sinh ( ‖u‖L R ) ‖u‖L n−1 . Theorem B.2 (Probability density function ofWN (z;µ,Σ) in SnK).\nlogWN (z;µ,Σ) = logN (v;0,Σ)− (n− 1) log R ∣∣∣sin(‖u‖2R )∣∣∣ ‖u‖2 , where u = logKµ (z), v = PT K µ→µ0(u), and R = 1/ √ K.\nProof. The theorem is very similar to Theorem B.1. The difference is that in this one, our manifold changes from HnK to SnK , hence K > 0. Our tranformation function has the form f = expKµ ◦PT K µ0→µ, and f −1 = PTKµ→µ0 ◦ log K µ .\nThe derivative of parallel transport PTKx→y(v) for any x,y ∈ SnK and v ∈ TxSnK is a map dPTKx→y(v) : Tv(TxSnK). Using the orthonormal basis (with respect to the Lorentz product) {ξ1, . . . 
ξn}, we can compute the determinant by computing the change with respect to each basis vector.\ndPTKx→y(ξ) = ∂\n∂ ∣∣∣∣ =0 PTKx→y(v + ξ)\n= ∂\n∂ ∣∣∣∣ =0 [ (v + ξ)− 〈y,v + ξ〉2 R2 + 〈x,y〉2 (x+ y) ] = [ ξ −\n〈y, ξ〉2 R2 + 〈x,y〉2 (x+ y) ] =0\n= PTKx→y(ξ).\nSince parallel transport preserves norms and vectors in the orthonormal basis have norm 1, the change is ∥∥dPTKx→y(ξ)∥∥2 = ∥∥PTKx→y(ξ)∥∥2 = 1. For computing the determinant of the exponential map Jacobian, we choose the orthonormal basis {ξ1 = u/ ‖u‖2 , ξ2, . . . , ξn}, where we just completed the basis based on the first vector. We again look at the change with respect to each basis vector. For the basis vector ξ1:\nd expKx (ξ1) =\n= ∂\n∂ ∣∣∣∣ =0 expKx ( u+ u ‖u‖2 )\n= ∂\n∂ ∣∣∣∣ =0 cos( | ‖u‖2 + | R ) x+ R sin ( |‖u‖2+ | R ) ‖u‖2 | ‖u‖2 + | (‖u‖2 + )u = − (‖u‖2 + ) sin ( |‖u‖2+ | R ) R| ‖u‖2 + | x+ cos ( |‖u‖2+ | R ) ‖u‖2 u =0\n= cos ( ‖u‖2 R ) u ‖u‖2 − sin ( ‖u‖2 R ) x R ,\nwhere the second equality is due to∥∥∥∥u+ u‖u‖2 ∥∥∥∥ 2 = ∥∥∥∥(1 + ‖u‖2 ) u ∥∥∥∥ 2 = ∣∣∣∣1 + ‖u‖2 ∣∣∣∣ ‖u‖2 = | ‖u‖2 + |.\nFor every other basis vector ξk where k > 1:\nd expKx (ξ) =\n= ∂\n∂ ∣∣∣∣ =0 expKx (u+ ξ)\n= ∂\n∂ ∣∣∣∣ =0 cos(‖u+ ξ‖2 R ) x+ R sin ( ‖u+ ξ‖2 R ) ‖u+ ξ‖2 (u+ ξ) \n= ∂\n∂ ∣∣∣∣ =0 cos √ ‖u‖22 + 2 R x+ R sin (√ ‖u‖22+ 2 R ) √ ‖u‖22 + 2 (u+ ξ) \n= cos (√ ‖u‖22+ 2 R ) ‖u‖22 + 2 (u+ ξ)\n+\n(R2 ‖u‖22 ξ −R2 u− (‖u‖ 2 2 + 2)x) sin (√ ‖u‖22+ 2 R ) R(‖u‖22 + 2)3/2 =0\n= R2 ‖u‖22 sin\n( ‖u‖2 R ) R(‖u‖22)3/2 ξ = R sin ( ‖u‖2 R ) ‖u‖2 ξ,\nwhere the third equality holds because\n‖u+ ξ‖22 = ‖u‖ 2 2 + 2 ‖ξ‖22 − 2 〈u, ξ〉2 = ‖u‖22 +\n2 − 2 〈u, ξ〉2 = ‖u‖22 + 2,\nwhere the last equality relies on the fact that the basis is orthogonal, and u is parallel to ξ1 = u/ ‖u‖2, hence it is orthogonal to all the other basis vectors. Because the basis is orthonormal the determinant is a product of the norms of the computed change for each basis vector. Therefore,\ndet\n( ∂ PTx→y(v)\n∂v\n) = 1n = 1.\nAdditionally, the following two properties hold:∥∥∥∥d expKx ( u‖u‖2 )∥∥∥∥2 2 = ∥∥∥∥cos(‖u‖2R ) u ‖u‖2 − sin ( ‖u‖2 R ) x R ∥∥∥∥2 2\n= sin2 ( ‖u‖2 R ) ‖x‖22 R2 + cos2 ( ‖u‖2 R ) ‖u‖22 ‖u‖22\n= sin2 ( ‖u‖2 R ) + cos2 ( ‖u‖2 R ) = 1.\nand\n∥∥d expKx (ξ)∥∥22 = ∥∥∥∥∥∥ R sin ( ‖u‖2 R ) ‖u‖2 ξ ∥∥∥∥∥∥ 2\n2\n= R2 sin2\n( ‖u‖2 R ) ‖u‖22 ‖ξ‖22\n= R2 sin2\n( ‖u‖2 R ) ‖u‖22 .\nTherefore, we obtain\ndet\n( ∂ expKx (u)\n∂u\n) = 1 · R ∣∣∣sin(‖u‖2R )∣∣∣ ‖u‖2 n−1 . Finally,\ndet\n( ∂f(v)\n∂v\n) = det ( ∂ expKµ (u)\n∂u\n) · det ( ∂ PTKµ0→µ(v)\n∂v\n) = R ∣∣∣sin(‖u‖2R )∣∣∣ ‖u‖2 n−1 . Theorem B.3 (Probability density function ofWN (z;µ,Σ) in PnK).\nlogWN PnK (z;µ,Σ) = logWNHnK (ρ −1 K (z); ρ −1 K (µ),Σ).\nProof. Follows from Theorem B.1 and A.3.\nAlso proven by (Mathieu et al., 2019) in a slightly different form for a scalar scale parameter WN (z;µ, σ2I). Given\nlogN (z;µ, σ2I) = −dE(µ, z) 2 2σ2 − n 2 log ( 2πσ2 )\nlogWN (z;µ, σ2I) =− d K P (µ, z) 2 2σ2 − n 2 log ( 2πσ2 ) + (n− 1) log ( √ −KdKP (µ, z)\nsinh( √ −KdKP (µ, z))\n) .\nTheorem B.4 (Probability density function ofWN (z;µ,Σ) in DnK).\nlogWNDnK (z;µ,Σ) = logWN SnK (ρ −1 K (z); ρ −1 K (µ),Σ).\nProof. Follows from Theorem B.2 and A.3 adapted from P to D." 
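The sampling procedure behind these densities is straightforward to sketch numerically. The snippet below (illustrative only, assuming the parallel transport and exponential map formulas of Section A.4.1; not the authors' implementation) draws from $\mathcal{WN}(\mu, \Sigma)$ on $\mathbb{H}^n_K$ by sampling in $T_{\mu_0}\mathbb{H}^n_K$, transporting to $\mu$, and applying the exponential map; the corresponding log-density then follows from Theorem B.1.

```python
import numpy as np

def lorentz_dot(a, b):
    return -a[0] * b[0] + np.dot(a[1:], b[1:])

def sample_wrapped_normal_H(mu, Sigma, K, size=1):
    # z ~ WN(mu, Sigma) on H^n_K (K < 0), following the construction of B.1.
    n = len(mu) - 1
    R = 1.0 / np.sqrt(-K)
    mu0 = np.zeros(n + 1)
    mu0[0] = R                                   # hyperboloid origin
    vs = np.random.multivariate_normal(np.zeros(n), Sigma, size)
    zs = []
    for v_tail in vs:
        v = np.concatenate([[0.0], v_tail])      # v lies in T_{mu0} H^n_K
        # parallel transport mu0 -> mu (formula from Section A.4.1)
        u = v + lorentz_dot(mu, v) / (R**2 - lorentz_dot(mu0, mu)) * (mu0 + mu)
        # exponential map at mu
        nu = np.sqrt(max(lorentz_dot(u, u), 1e-18))
        zs.append(np.cosh(nu / R) * mu + np.sinh(nu / R) * R * u / nu)
    return np.array(zs)

K, R = -1.0, 1.0
mu_tail = np.array([0.5, -0.3])
mu = np.concatenate([[np.sqrt(np.dot(mu_tail, mu_tail) + R**2)], mu_tail])
z = sample_wrapped_normal_H(mu, 0.1 * np.eye(2), K, size=5)
print(np.allclose([lorentz_dot(zi, zi) for zi in z], 1.0 / K))  # on-manifold
```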
}, { "heading": "C RELATED WORK", "text": "Universal models of geometry Duality between spaces of constant curvature was first noticed by Lambert (1770), and later gave rise to various theorems that have the same or similar forms in all three geometries, like the law of sines (Bolyai, 1832)\nsinA\npK(a) =\nsinB pK(b) = sinC pK(c) ,\nwhere pK(r) = 2π sinK(r) denotes the circumference of a circle of radius r in a space of constant curvature K, and\nsinK(x) = x− Kx3 3! + K2x5 5! − . . . = ∞∑ i=0 (−1)iKix2i+1 (2i+ 1)! .\nOther unified formulas for the law of cosines, or recently, a unified Pythagorean theorem has also been proposed (Foote, 2017):\nA(c) = A(a) +A(b)− K 2π A(a)A(b),\nwhere A(r) is the area of a circle of radius r in a space of constant curvature K. Unfortunately, in this formulation A(r) still depends on the sign of K w.r.t. the choice of trigonometric functions in its definition.\nThere also exist approaches defining a universal geometry of constant curvature spaces. Li et al. (2001, Chapter 4) define a unified model of all three geometries using the null cone (light cone) of a Minkowski space. The term “Minkowski space” comes from special relativity and is usually denoted as R1,n, similar to the ambient space of what we defined as Hn, with the Lorentz scalar product 〈·, ·〉L. The hyperboloid Hn corresponds to the positive (upper, future) null cone of R1,n. All the other models can be defined in this space using the appropriate stereographic projections and pulling back the metric onto the specific sub-manifold. Unfortunately, we found the formalism not useful for our application, apart from providing a very interesting theoretical connection among the models.\nConcurrent VAE approaches The variational autoencoder was originally proposed in Kingma & Welling (2014) and concurrently in Rezende et al. (2014). One of the most common improvements on the VAE in practice is the choice of the encoder and decoder maps, ranging from linear parametrizations of the posterior to feed-forward neural networks, convolutional neural networks, etc. For different data domains, extensions like the GraphVAE (Simonovsky & Komodakis, 2018) using graph convolutional neural networks for the encoder and decoder were proposed.\nThe basic VAE framework was mostly improved upon by using autoregressive flows (Chen et al., 2014) or small changes to the ELBO loss function (Matthey et al., 2017; Burda et al., 2016). An important work in this area is β-VAE, which adds a simple scalar multiplicative constant to the KL divergence term in the ELBO, and has shown to improve both sample quality and (if β > 1) disentanglement of different dimensions in the latent representation. For more information on disentanglement, see Locatello et al. (2018).\nGeometric deep learning One of the emerging trends in deep learning has been to leverage nonEuclidean geometry to learn representations, originally emerging from knowledge-base and graph representation learning (Bronstein et al., 2017).\nRecently, several approaches to learning representations in Euclidean spaces have been generalized to non-Euclidean spaces (Dhingra et al., 2018; Ganea et al., 2018b; Nickel & Kiela, 2017). Since then, this research direction has grown immensely and accumulated more approaches, mostly for hyperbolic spaces, like Ganea et al. (2018a); Nickel & Kiela (2018); Tifrea et al. (2019); Law et al. (2019). 
Similarly, spherical spaces have also been leveraged for learning non-Euclidean representations (Batmanghelich et al., 2016; Wilson & Hancock, 2010).

To be able to learn representations in these spaces, new Riemannian optimization methods were required as well (Wilson & Leimeister, 2018; Bonnabel, 2013; Bécigneul & Ganea, 2019).

The generalization to products of constant curvature Riemannian manifolds is only natural and has been proposed by Gu et al. (2019). They evaluated their approach by directly optimizing a distance-based loss function using Riemannian optimization in products of spaces on graph reconstruction and word analogy tasks, in both cases reaping the benefits of non-Euclidean geometry, especially when learning lower-dimensional representations. Further use of product spaces with constant curvature components to train Graph Convolutional Networks was done concurrently with this work by Bachmann et al. (2020).

Geometry in VAEs One of the first attempts at leveraging geometry in VAEs was Arvanitidis et al. (2018). They examine how a Euclidean VAE benefits both in sample quality and latent representation distribution quality when employing a non-Euclidean Riemannian metric in the latent space using kernel transformations.

Hence, a potential improvement area for VAEs is the choice of the posterior family and prior distribution. However, the Gaussian (Normal) distribution works very well in practice, as it is the maximum entropy probability distribution for a known variance, and imposes no constraints on higher-order moments (skewness, kurtosis, etc.) of the distribution. Recently, non-Euclidean geometry has been used in learning variational autoencoders as well. Generalizing Normal distributions to these spaces is in general non-trivial.

Two similar approaches, Davidson et al. (2018) and Xu & Durrett (2018), used the von Mises-Fisher distribution on the unit hypersphere to generalize VAEs to spherical spaces. The von Mises-Fisher distribution is again a maximum entropy probability distribution on the unit hypersphere, but it only has a spherical covariance parameter, which makes it less general than a Gaussian distribution.

Conversely, two approaches, Mathieu et al. (2019) and Nagano et al. (2019), have generalized VAEs to hyperbolic spaces: the Poincaré ball and the hyperboloid, respectively. They both adopt a non-maximum entropy probability distribution called the Wrapped Normal. Additionally, Mathieu et al. (2019) also derive the Riemannian Normal, which is a maximum entropy distribution on the Poincaré disk, but in practice it performs similarly to the Wrapped Normal, especially in higher dimensions.

Our approach generalizes the aforementioned geometric VAE work by employing a “products of spaces” approach similar to Gu et al. (2019) and unifying the different approaches into a single framework for all spaces of constant curvature." }, { "heading": "D EXTENDED FUTURE WORK", "text": "Even though we have shown that one can approximate the true posterior very well with Normal-like distributions in Riemannian manifolds of constant curvature, there remain several promising directions of exploration.

First of all, an interesting extension of this work would be to try mixed-curvature VAEs on graph data, e.g. link prediction on social networks, as some of our models might be well suited for sparse and structured data. Another very beneficial extension would be to investigate why the obtained results have a relatively big variance across runs and try to reduce it.
However, this is a problem that affects the Euclidean VAE as well, even if not as flagrantly.

Secondly, we have empirically noticed that it seems to be significantly harder to optimize our models in spherical spaces: they seem more prone to divergence. In discussions, other researchers have also observed similar behavior, but a more thorough investigation is not available at the moment. We have side-stepped some optimization problems by introducing products of spaces; previously, it had been reported that both spherical and hyperbolic VAEs generally do not scale well to dimensions greater than 20 or 40. For those cases, we could successfully optimize a subdivided space $(\mathbb{S}^2)^{36}$ instead of one big manifold $\mathbb{S}^{72}$. However, that also does not seem to be a conclusive rule. Especially in higher dimensions, we have noticed that our VAEs $(\mathbb{S}^2)^{36}$ with learnable curvature and $\mathbb{D}^{72}_1$ with fixed curvature seem to consistently diverge. In a few cases, $\mathbb{S}^{72}$ with fixed curvature and even the product $(\mathbb{E}^2)^{12} \times (\mathbb{H}^2)^{12} \times (\mathbb{S}^2)^{12}$ with learnable curvature seemed to diverge quite often as well.

The most promising future direction seems to be the use of “Normalizing Flows” for variational inference, as presented by Rezende & Mohamed (2015) and Gemici et al. (2016). More recently, this was also combined with “autoregressive flows” in Huang et al. (2018). Using normalizing flows, one should be able to achieve the desired level of complexity of the latent distribution in a VAE, which should, similarly to our work, help to approximate the true posterior of the data better. The advantage of normalizing flows is the flexibility of the modeled distributions, at the cost of higher computational expense.

Finally, another interesting extension would be to extend the defined geometrical models to allow for training generative adversarial networks (GANs) (Goodfellow et al., 2014) in products of constant curvature spaces, and benefit from the better sharpness and quality of samples that GANs provide. One could then synthesize the above to obtain adversarially trained autoencoders in Riemannian manifolds, similarly to Pan et al. (2018); Kim et al. (2017); Makhzani et al. (2015), and aim to achieve good sample quality and a well-formed latent space at the same time." }, { "heading": "E EXTENDED RESULTS", "text": "" } ]
2020
MIXED-CURVATURE VARIATIONAL AUTOENCODERS
SP:58b67f1e081e61982d524768c88f3754c3470e0a
[ "Paper proposes a method for continual learning. The method is based on the learning of a metric space where classes are represented by prototypes in this space. To prevent forgetting the method proposes to perform prototype recall, aiming to keep prototypes in the same location in embedding space (Fig 1b). The method is compared with several recent methods and is shown to outperform them on two small datasets (MNIST permuted and CIFAR10). The idea of using prototypes for continual learning is interesting, as the authors point out, this does not require adding new neurons to the network for new tasks.", "The proposed method addresses continual learning, by learning a mapping from the input space to an embedding space, and employing a loss that encourages clustering the embeddings by class (and task?) around some centroids called prototypes. Catastrophic forgetting is mitigated by adding a penalty term that is proportional to the distance of the embeddings under the current network of some samples from the past tasks, and the centroids previously associated to each of them." ]
Continual learning is a critical ability of continually acquiring and transferring knowledge without catastrophically forgetting previously learned knowledge. However, enabling continual learning for AI remains a long-standing challenge. In this work, we propose a novel method, Prototype Recalls, that efficiently embeds and recalls previously learnt knowledge to tackle catastrophic forgetting issue. In particular, we consider continual learning in classification tasks. For each classification task, our method learns a metric space containing a set of prototypes where embedding of the samples from the same class cluster around prototypes and class-representative prototypes are separated apart. To alleviate catastrophic forgetting, our method preserves the embedding function from the samples to the previous metric space, through our proposed prototype recalls from previous tasks. Specifically, the recalling process is implemented by replaying a small number of samples from previous tasks and correspondingly matching their embedding to their nearest class-representative prototypes. Compared with recent continual learning methods, our contributions are fourfold: first, our method achieves the best memory retention capability while adapting quickly to new tasks. Second, our method uses metric learning for classification, and does not require adding in new neurons given new object classes. Third, our method is more memory efficient since only class-representative prototypes need to be recalled. Fourth, our method suggests a promising solution for few-shot continual learning. Without tampering with the performance on initial tasks, our method learns novel concepts given a few training examples of each class in new tasks.
[]
[ { "authors": [ "Rahaf Aljundi", "Francesca Babiloni", "Mohamed Elhoseiny", "Marcus Rohrbach", "Tinne Tuytelaars" ], "title": "Memory aware synapses: Learning what (not) to forget", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Craig Atkinson", "Brendan McCane", "Lech Szymanski", "Anthony Robins" ], "title": "Pseudo-recursal: Solving the catastrophic forgetting problem in deep neural networks", "venue": "arXiv preprint arXiv:1802.03875,", "year": 2018 }, { "authors": [ "Jimmy Ba", "Rich Caruana" ], "title": "Do deep nets really need to be deep? In Advances in neural information processing", "venue": null, "year": 2014 }, { "authors": [ "Léon Bottou" ], "title": "Large-scale machine learning with stochastic gradient descent", "venue": "In Proceedings of COMPSTAT’2010,", "year": 2010 }, { "authors": [ "Pratik Prabhanjan Brahma", "Adrienne Othon" ], "title": "Subset replay based continual learning for scalable improvement of autonomous systems", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW),", "year": 2018 }, { "authors": [ "Yutian Chen", "Max Welling", "Alex Smola" ], "title": "Super-samples from kernel herding", "venue": "arXiv preprint arXiv:1203.3472,", "year": 2012 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Li Deng" ], "title": "The mnist database of handwritten digit images for machine learning research [best of the web", "venue": "IEEE Signal Processing Magazine,", "year": 2012 }, { "authors": [ "Robert M French" ], "title": "Catastrophic forgetting in connectionist networks", "venue": "Trends in cognitive sciences,", "year": 1999 }, { "authors": [ "Spyros Gidaris", "Nikos Komodakis" ], "title": "Dynamic few-shot visual learning without forgetting", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Xu He", "Herbert Jaeger" ], "title": "Overcoming catastrophic interference using conceptor-aided backpropagation", "venue": null, "year": 2018 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Elad Hoffer", "Nir Ailon" ], "title": "Deep metric learning using triplet network", "venue": "In International Workshop on Similarity-Based Pattern Recognition,", "year": 2015 }, { "authors": [ "Ronald Kemker", "Christopher Kanan" ], "title": "Fearnet: Brain-inspired model for incremental learning", "venue": "arXiv preprint arXiv:1711.10563,", "year": 2017 }, { "authors": [ "Ronald Kemker", "Marc McClure", "Angelina Abitino", "Tyler L Hayes", "Christopher Kanan" ], "title": "Measuring catastrophic forgetting in neural networks", "venue": "In Thirty-second AAAI conference on artificial intelligence,", "year": 2018 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka Grabska-Barwinska" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "Proceedings of the national academy of sciences,", "year": 2017 }, { "authors": [ "Pang Wei Koh", "Percy Liang" ], "title": "Understanding black-box 
predictions via influence functions", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Sang-Woo Lee", "Jin-Hwa Kim", "Jaehyun Jun", "Jung-Woo Ha", "Byoung-Tak Zhang" ], "title": "Overcoming catastrophic forgetting by incremental moment matching", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Zhizhong Li", "Derek Hoiem" ], "title": "Learning without forgetting", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "David Lopez-Paz" ], "title": "Gradient episodic memory for continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Davide Maltoni", "Vincenzo Lomonaco" ], "title": "Continuous learning in single-incremental-task scenarios", "venue": "Neural Networks,", "year": 2019 }, { "authors": [ "Michael McCloskey", "Neal J Cohen" ], "title": "Catastrophic interference in connectionist networks: The sequential learning problem", "venue": "In Psychology of learning and motivation,", "year": 1989 }, { "authors": [ "Cuong V Nguyen", "Yingzhen Li", "Thang D Bui", "Richard E Turner" ], "title": "Variational continual learning", "venue": "arXiv preprint arXiv:1710.10628,", "year": 2017 }, { "authors": [ "Roger Ratcliff" ], "title": "Connectionist models of recognition memory: constraints imposed by learning and forgetting functions", "venue": "Psychological review,", "year": 1990 }, { "authors": [ "Sylvestre-Alvise Rebuffi", "Alexander Kolesnikov", "Georg Sperl", "Christoph H Lampert" ], "title": "icarl: Incremental classifier and representation learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Anthony Robins" ], "title": "Catastrophic forgetting, rehearsal and pseudorehearsal", "venue": "Connection Science,", "year": 1995 }, { "authors": [ "Jonathan Schwarz", "Jelena Luketina", "Wojciech M Czarnecki", "Agnieszka Grabska-Barwinska", "Yee Whye Teh", "Razvan Pascanu", "Raia Hadsell" ], "title": "Progress & compress: A scalable framework for continual learning", "venue": "arXiv preprint arXiv:1805.06370,", "year": 2018 }, { "authors": [ "Hanul Shin", "Jung Kwon Lee", "Jaehong Kim", "Jiwon Kim" ], "title": "Continual learning with deep generative replay", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Sebastian Thrun", "Tom M Mitchell" ], "title": "Lifelong robot learning", "venue": "Robotics and autonomous systems,", "year": 1995 }, { "authors": [ "Gido M van de Ven", "Andreas S Tolias" ], "title": "Generative replay with feedback connections as a general strategy for continual learning", "venue": "arXiv preprint arXiv:1809.10635,", "year": 2018 }, { "authors": [ "Laurens Van Der Maaten" ], "title": "Accelerating t-sne using tree-based algorithms", "venue": "The Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Junfeng Wen", "Yanshuai Cao", "Ruitong Huang" ], "title": "Few-shot self reminder to overcome catastrophic forgetting", 
"venue": "arXiv preprint arXiv:1812.00543,", "year": 2018 }, { "authors": [ "Friedemann Zenke", "Ben Poole", "Surya Ganguli" ], "title": "Continual learning through synaptic intelligence", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Continual learning, also known as lifelong learning, is the crucial ability for humans to continually acquire and transfer new knowledge across their lifespans while retaining previously learnt experiences Hassabis et al. (2017). This ability is also critical for artificial intelligence (AI) systems to interact with the real world and process continuous streams of information Thrun & Mitchell (1995). However, the continual acquisition of incrementally available data from non-stationary data distributions generally leads to catastrophic forgetting in the system McCloskey & Cohen (1989); Ratcliff (1990); French (1999). Continual learning remains a long-standing challenge for deep neural network models since these models typically learn representations from stationary batches of training data and tend to fail to retain good performances in previous tasks when data become incrementally available over tasks Kemker et al. (2018); Maltoni & Lomonaco (2019).\nNumerous methods for alleviating catastrophic forgetting have been currently proposed. The most pragmatical way is to jointly train deep neural network models on both old and new tasks, which however demands a large amount of resources to store previous training data and hinders the learning of novel data in real time. Another option is to complement the training data for each new task with “pseudo-data” of the previous tasks Shin et al. (2017); Robins (1995). Besides the main model for task performance, a separate generative model is trained to generate fake historical data used for pseudo-rehearsal. Deep Generative Replay (DGR) Shin et al. (2017) replaces the storage of the previous training data with a Generative Adversarial Network to synthesize training data on all previously learnt tasks. These generative approaches have succeeded over very simple and artificial inputs but they cannot tackle more complicated inputs Atkinson et al. (2018). Moreover, to synthesize the historical data reasonably well, the size of the generative model is usually huge that costs much memory Wen et al. (2018). An alternative method is to store the weights of the model trained on previous tasks, and impose constraints of weight updates on new tasks He & Jaeger\n(2018); Kirkpatrick et al. (2017); Zenke et al. (2017); Lee et al. (2017); Lopez-Paz et al. (2017). For example, Elastic Weight Consolidation (EWC) Kirkpatrick et al. (2017) and Learning Without Forgetting (LwF) Li & Hoiem (2018) store all the model parameters on previously learnt tasks, estimate their importance on previous tasks and penalize future changes to the weights on new tasks. However, selecting the “important” parameters for previous tasks complicates the implementation by exhaustive hyper-parameter tuning. In addition, state-of-the-art neural network models often involve millions of parameters and storing all network parameters from previous tasks does not necessarily reduce the memory cost Wen et al. (2018). In contrast with these methods, storing a small subset of examples from previous tasks and replaying the “exact subset” substantially boost performance Kemker & Kanan (2017); Rebuffi et al. (2017); Nguyen et al. (2017). To achieve the desired network behavior on previous tasks, incremental Classifier and Representation Learner (iCARL) Rebuffi et al. (2017) and Few-shot Self-Reminder (FSR) Wen et al. (2018) follow the idea of logit matching or knowledge distillation in model compression Ba & Caruana (2014); Bucilua et al. (2006); Hinton et al. 
However, such approaches ignore the topological relations among clusters in the embedding space and rely too heavily on a small amount of individual data, which may result in overfitting, as shown in our experiments (Section 4.2). In contrast with them, without compromising memory retention performance, our method learns embedding functions and compares feature similarities via class-representative prototypes in the embedding space, which improves generalization, especially in few-shot settings, as has also been verified in Hoffer & Ailon (2015); Snell et al. (2017).

In this paper, we propose the method, Prototype Recalls, for continual learning in classification tasks. Similar to Snell et al. (2017), we use a neural network to learn class-representative prototypes in an embedding space and classify embedded test data by finding their nearest class prototype. To tackle the problem of catastrophic forgetting, we impose additional constraints on the network by classifying the embedded data based on prototypes from previous tasks, which promotes the preservation of the initial embedding function. For example (Figure 1), in the first task (Subfigure 1a), the network learns color prototypes to classify blue and yellow circles, and in the second task (Subfigure 1b), the network learns shape prototypes to classify green circles and triangles. If it catastrophically forgets color features, the network extracts only circle (shape) features on the first task and fails to classify blue and yellow circles. To alleviate catastrophic forgetting, our method replays the embedded previous samples (blue and yellow circles) and matches them with the previous color prototypes (blue and yellow), which reminds the network to extract both color and shape features in both classification tasks.

We evaluate our method under two typical experimental protocols, incremental domain and incremental class, for continual learning across three benchmark datasets: MNIST Deng (2012), CIFAR10 Krizhevsky & Hinton (2009) and miniImageNet Deng et al. (2009). Compared with the state of the art, our method significantly boosts the performance of continual learning in terms of memory retention capability while being able to adapt to new tasks. Unlike parameter regularization methods, iCARL, or FSR, our approach further reduces memory storage by replacing the per-example logits or network parameters with one prototype per class in the episodic memory. Moreover, in contrast to these methods, where the last layer of a traditional classification network structurally depends on the number of classes, our method leverages metric learning, maintains the same network architecture, and does not require adding new neurons or layers for new object classes. Additionally, without sacrificing classification accuracy on initial tasks, our method can generalize to learn new concepts given a few training examples in new tasks, due to the advantage of metric learning, commonly used in few-shot settings Snell et al. (2017); Hoffer & Ailon (2015)." }, { "heading": "2 PROPOSED METHOD", "text": "We propose the method, Prototype Recalls, for continual learning. For a sequence of datasets $D_1, D_2, \ldots, D_t, \ldots$, given $D_t$ in any task $t$ where $t \in \{1, 2, \ldots, T\}$, the goal for the model $f_T$ is to retain good classification performance on all $T$ datasets after being sequentially trained over the $T$ tasks. The value of $T$ is not pre-determined.
The model $f_T$ with learnable parameters $\phi$ is only allowed to carry over a limited amount of information from the previous $T - 1$ tasks. This constraint eliminates the naive solution of combining all previous datasets to form one big training set for fine-tuning the model $f_T$ at task $T$. Each dataset $D_t$ consists of $N_t$ labeled examples $D_t = \{X_t, Y_t\} = \{(x_{1t}, y_{1t}), \ldots, (x_{N_t t}, y_{N_t t})\}$, where each $x_{it} \in \mathbb{R}^D$ is the $D$-dimensional feature vector of an example and $y_{it} \in \{1, \ldots, K_t\}$ is the corresponding class label. $S_{k_t}$ denotes the set of examples labeled with class $k_t$.

At task $T$, if we simply train a model by only minimizing the classification loss $L_{classi,D_T}$ on dataset $D_T$, the model will forget how to perform classification on the previous datasets $D_t$, $t < T$, which is known as the catastrophic forgetting problem McCloskey & Cohen (1989); Ratcliff (1990); French (1999). Here we show how the model trained with our method retains good performance on all previous tasks while adaptively learning new tasks. The loss over all previous datasets is denoted by $\mathcal{L}_T(f) = \sum_{t=1}^{T} \mathbb{E}_{D_t}[L(f(X_t), Y_t)]$. Our objective is to learn $f_T$ defined as follows:

$$f_T = \operatorname*{argmin}_f \mathcal{L}_T(f) = \operatorname*{argmin}_f \Big\{ \sum_{t=1}^{T-1} L_{classi,D_t}(f_t) + L_{classi,D_T}(f) + \sum_{t=1}^{T-1} \delta_{D_t}(f, f_t) \Big\} \quad (1)$$

where $L_{classi,D_T}(f)$ is the classification loss of $f$ on dataset $D_T$, and $\delta_{D_t}(f, f_t)$ measures the difference in network behavior in the embedding spaces learnt by $f$ and $f_t$ on $D_t$, as introduced later in Eq. 7. Given $f_1, \ldots, f_{T-1}$ learnt from the previous tasks, at task $T$, learning $f_T$ requires minimizing both terms $L_{classi,D_T}(f)$ and $\sum_{t=1}^{T-1} \delta_{D_t}(f, f_t)$. In the subsections below and Figure 1, we describe how to optimize these two terms." }, { "heading": "2.1 CLASSIFICATION", "text": "To perform classification on dataset $D_t$, our method learns an embedding space in which points cluster around a single prototype representation of each class, and classification is performed by finding the nearest class prototype Snell et al. (2017) (Figure 1a). Compared to traditional classification networks with a specific classification layer attached at the end, such as iCARL and FSR, our method keeps the network architecture unchanged while finding the nearest neighbour in the embedding space, which leads to more efficient memory usage. For example, in one of the continual learning protocols Snell et al. (2017) where the models are asked to classify incremental classes (also see Section 3.1), traditional classification networks have to expand their architectures by accommodating more output units in the last classification layer for the incremental classes; consequently, additional network parameters have to be added into the memory.

Without loss of generality, here we show how our method performs classification on $D_T$. First, the model learns an embedding function $f : \mathbb{R}^D \to \mathbb{R}^M$ and computes an $M$-dimensional prototype $c_{k_T} \in \mathbb{R}^M$, which is the mean of the embeddings of the examples $S_{k_T}$:

$$c_{k_T} = \frac{1}{|S_{k_T}|} \sum_{(x_{iT}, y_{iT}) \in S_{k_T}} f(x_{iT}). \quad (2)$$

The distance between an embedding and the prototype of its own class should be smaller than its distances to prototypes of other classes. Our method introduces a distance function $d : \mathbb{R}^M \times \mathbb{R}^M \to [0, \infty)$. For each example $x_T$, it estimates a distance distribution based on a softmax over the distances to the prototypes of the $K_T$ classes in the embedding space:

$$p_\phi(y_T = k_T \mid x_T) = \frac{\exp(-d(f(x_T), c_{k_T}))}{\sum_{k'}^{K_T} \exp(-d(f(x_T), c_{k'T}))}. \quad (3)$$
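A minimal PyTorch sketch of Eqs. 2 and 3 may help make this concrete. It is illustrative only: the encoder $f$, the batching, and the exact placement of the temperature $\tau$ inside $d(\cdot)$ are our assumptions, not the paper's released code.

```python
import torch
import torch.nn.functional as F

def compute_prototypes(embeddings, labels, num_classes):
    # c_k: mean embedding of the support examples of each class (Eq. 2).
    # Assumes every class appears at least once in the support batch.
    return torch.stack([embeddings[labels == k].mean(dim=0)
                        for k in range(num_classes)])

def proto_log_prob(query_emb, protos, tau=1.0):
    # log p(y = k | x): softmax over negative squared Euclidean distances
    # to the class prototypes, with temperature tau (Eq. 3).
    d = torch.cdist(query_emb, protos) ** 2
    return F.log_softmax(-d / tau, dim=1)

# classification loss for one batch of queries, i.e. the negative
# log-probability of the true labels (Eq. 4 below):
# loss = F.nll_loss(proto_log_prob(f(x_query), protos), y_query)
```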
The objective function $\mathcal{L}_{classi,D_T}(f)$ minimizes the negative log-probability $-\log p_\phi(y_T = k_T \mid x_T)$ of the ground-truth class label $k_T$ via Stochastic Gradient Descent Bottou (2010):\n$$\mathcal{L}_{classi,D_T}(f) = -\log p_\phi(y_T = k_T \mid x_T) \quad (4)$$\nIn practice, when $N_T$ is large, computing $c_{kT}$ is costly and memory-inefficient during training. Thus, at each training iteration, we randomly sample two complementary subsets from $S_{kT}$ over all $K_T$ classes: one for computing prototypes and the other for estimating the distance distribution. Our primary choice of the distance function d(·) is the squared Euclidean distance, which has been verified to be effective in Snell et al. (2017). In addition, we include a temperature hyperparameter τ in d(·), as introduced in the network distillation literature Hinton et al. (2015), and set its value empirically based on the validation sets. A higher value of τ produces a softer probability distribution over classes." }, { "heading": "2.2 PROTOTYPE RECALL", "text": "Regardless of the change of the network parameters from $φ_t$ to $φ_T$ between tasks t and T, the primary goal of $f_T$ is to learn an embedding function that results in a metric space similar to the one learnt by $f_t$ on dataset $D_t$ in task t (Figure 1b). Given a limited amount of memory, a direct approach is to randomly sample a small subset $\tilde{D}_t = \{(x^{(t)}_i, y^{(t)}_i) \mid i = 1, \ldots, m\}$ from $D_t$ and replay these examples at task T. There have been some attempts Chen et al. (2012); Koh & Liang (2017); Brahma & Othon (2018) at selecting representative examples for $\tilde{D}_t$ based on different scoring functions. However, the recent work Wen et al. (2018) has shown that sampling uniformly at random across classes already yields outstanding performance in continual learning tasks. Hence, we adopt the same random sampling strategy to form $\tilde{D}_t$.\nIntuitively, if the number of data samples in $\tilde{D}_t$ were very large, the network could reproduce the metric space of task t by replaying $\tilde{D}_t$, which is our desired goal. However, this does not hold in practice given limited memory capacity. With the simple inductive bias that the metric space at task t can be characterized by class-representative prototypes, we introduce another loss requiring that embedded data samples in $\tilde{D}_t$ still be closest to their corresponding class prototypes among all prototypes of task t. This ensures that the metric space represented by the set of prototypes learnt from $\tilde{D}_t$ by $f_T$ provides a good approximation to the one in task t.\nFormally, for any f after task t, we formulate the regularization of network behaviors $\delta_{D_t}(f, f_t)$ in the metric space of task t by satisfying two criteria: first, f learns a metric space to classify $\tilde{D}_t$ by minimizing the classification loss $\mathcal{L}_{classi,\tilde{D}_t}(f)$, as introduced in Sec. 2.1 above; second, to preserve a similar topological structure among clusters on dataset $D_t$ via $\mathcal{L}_{regu,\tilde{D}_t,D_t}(f, f_t)$, the embeddings $f(\tilde{x}_t)$ predicted by f on $\tilde{D}_t$ should produce a similar distance distribution based on a softmax over the distances to the prototypes $c_{kt}$ computed using $f_t$ on dataset $D_t$:\n$$p_\phi(\tilde{y}_t = \tilde{k}_t \mid \tilde{x}_t) = \frac{\exp(-d(f(\tilde{x}_t), c_{kt}))}{\sum_{k'}^{K_t} \exp(-d(f(\tilde{x}_t), c_{k't}))}, \qquad c_{kt} = \frac{1}{|S_{kt}|} \sum_{(x_{it}, y_{it}) \in S_{kt}} f_t(x_{it}). \quad (5)$$\nConcretely, $\mathcal{L}_{regu,\tilde{D}_t,D_t}(f, f_t)$ minimizes the negative log-probability $-\log p_\phi(\tilde{y}_t = \tilde{k}_t \mid \tilde{x}_t)$ of the ground-truth class label $\tilde{k}_t$ conditioned on the prototypes $c_{kt}$, which are pre-computed using $f_t$ in Eq. 5 at task t and stored in the episodic memory until task T:\n$$\mathcal{L}_{regu,\tilde{D}_t,D_t}(f, f_t) = -\log p_\phi(\tilde{y}_t = \tilde{k}_t \mid \tilde{x}_t). \quad (6)$$
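Continuing the sketch above, the two ingredients of the recall constraint are the classification loss of Eq. 4 and the recall loss of Eq. 6; the only difference is whether the prototypes come from the current network or from memory. Names remain illustrative.

```python
def classification_loss(f, x, y, protos, tau=1.0):
    """Eq. 4: negative log-probability of the ground-truth class under Eq. 3."""
    return F.nll_loss(distance_log_probs(f, x, protos, tau), y)

def prototype_recall_loss(f, x_mem, y_mem, old_protos, tau=1.0):
    """Eqs. 5-6: embed replayed examples with the *current* network f, but score
    them against the frozen prototypes computed with f_t and kept in memory."""
    return F.nll_loss(distance_log_probs(f, x_mem, old_protos, tau), y_mem)
```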
Overall, we define $\delta_{D_t}(f, f_t)$ in Eq. 1 as below:\n$$\delta_{D_t}(f, f_t) = \mathcal{L}_{classi,\tilde{D}_t}(f) + \mathcal{L}_{regu,\tilde{D}_t,D_t}(f, f_t). \quad (7)$$\nAlgorithm 1: Prototype recall algorithm in continual learning for a training episode.\nInput: A sequence of datasets D1, D2, ..., Dt, ..., one per task t; a feed-forward neural network learning the embedding function f; episodic memory with capacity C; m sampled examples per dataset.\nOutput: Updated network parameters φt for each task t." }, { "heading": "2.3 DYNAMIC EPISODIC MEMORY ALLOCATION", "text": "Given a limited amount of memory with capacity C, our proposed method has to store a small subset $\tilde{D}_t$ with m examples randomly sampled from $D_t$, as well as the prototypes $c_{kt}$, k ∈ {1, ..., Kt}, computed using the embedding function $f_t : \mathbb{R}^D \to \mathbb{R}^M$ on $D_t$, where t < T. The following constraint has to be satisfied:\n$$C = \sum_{t=1}^{T-1} K_t (M + mD) \quad (8)$$\nWhen the number of tasks T is small, m can be large and the episodic memory stores more examples in $\tilde{D}_t$. This dynamic memory allocation, which enables more example replays in earlier tasks, puts more emphasis on reviewing earlier tasks, which are easier to forget, and introduces more variety in data distributions when matching with prototypes. Pseudocode of our proposed algorithm for a training episode in continual learning is provided in Algorithm 1.
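Putting the pieces together, here is a minimal sketch of one training task consistent with Eqs. 1-8 and the description above, reusing the helpers sketched in Sections 2.1-2.2; the memory layout, the support/query split and all names are illustrative assumptions.

```python
def train_task(f, optimizer, loader_T, memory, tau=1.0):
    """One task's training loop: classification on the current dataset D_T (Eq. 4)
    plus, for every stored task t < T, replayed classification and prototype-recall
    losses (Eq. 7). `memory` holds one (x_mem, y_mem, old_protos) triple per past task."""
    for x, y in loader_T:
        half = x.shape[0] // 2   # complementary subsets (Sec. 2.1): support vs. query
        protos = prototypes(f(x[:half]), y[:half], int(y.max()) + 1)
        loss = classification_loss(f, x[half:], y[half:], protos, tau)
        for x_mem, y_mem, old_protos in memory:          # replay each previous task
            cur_protos = prototypes(f(x_mem), y_mem, old_protos.shape[0])
            loss = loss + classification_loss(f, x_mem, y_mem, cur_protos, tau)
            loss = loss + prototype_recall_loss(f, x_mem, y_mem, old_protos, tau)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

After the task, m example inputs and the finished task's class prototypes would be appended to `memory`, with m shrunk according to Eq. 8 as more tasks arrive.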
" }, { "heading": "3 EXPERIMENTAL DETAILS", "text": "We introduce two task protocols for evaluating continual learning algorithms with different memory usage over three benchmark datasets. Source code will be made publicly available upon acceptance." }, { "heading": "3.1 TASK PROTOCOLS", "text": "Permuted MNIST in incremental domain task is a benchmark protocol in continual learning Lee et al. (2017); Lopez-Paz et al. (2017); Zenke et al. (2017) (Figure 2a). In each task, a fixed permutation sequence is randomly generated and applied to the input images of MNIST Deng (2012). Though the input distribution changes across tasks, models are trained to classify 10 digits in each task and the model structure is always the same. There are 20 tasks in total. During testing, the task identity is not available to the models, which have to classify input images into 1 out of 10 digits.\nSplit CIFAR10 and split miniImageNet in incremental class task is a more challenging protocol where models need to infer the task identity and solve each task at the same time. The input data are also more complex, comprising classification of natural images from CIFAR10 Krizhevsky & Hinton (2009) and miniImageNet Deng et al. (2009). The former contains 10 classes and the latter consists of 100 classes. In CIFAR10, the model is first trained with 2 classes and later with 1 more class in each subsequent task. There are 9 tasks in total and 5,000 images per class in the training set. In miniImageNet, models are trained with 10 classes in each task. There are 10 tasks in total and 480 images per class in the training set.\nFew-shot Continual Learning. Humans can learn novel concepts given a few examples without sacrificing classification accuracy on initial tasks Gidaris & Komodakis (2018). However, typical continual learning schemes assume that a large amount of training data over all tasks is always available for fine-tuning networks to adapt to new data distributions, which does not always hold in practice. We revise the task protocols to more challenging ones: networks are trained with only a few examples per class in every task except the first. For example, on CIFAR10/miniImageNet, we train the models with 5,000/480 example images per class in the first task and 50/100 images per class in subsequent tasks." }, { "heading": "3.2 BASELINES", "text": "We include the following categories of continual learning methods for comparison with our method. To eliminate the effect of network structure on performance, we use control conditions with the same architecture complexity for all methods in the same task across all experiments.\nParameter Regularization Methods: Elastic Weight Consolidation (EWC) Kirkpatrick et al. (2017), Synaptic Intelligence (SI) Zenke et al. (2017) and Memory Aware Synapses (MAS) Aljundi et al. (2018), where regularization terms are added to the loss function; online EWC Schwarz et al. (2018), an extension of EWC scalable to a large number of tasks; L2, where a distance term penalizing parameter changes between tasks is added to the loss Kirkpatrick et al. (2017); and SGD, a naive baseline without any regularization terms, optimized with Stochastic Gradient Descent Bottou (2010) sequentially over all tasks.\nMemory Distillation and Replay Methods: the incremental Classifier and Representation Learner (iCARL) Rebuffi et al. (2017) and Few-shot Self-Reminder (FSR) Wen et al. (2018), which regularize network behaviors by exact pseudo-replay. FSR has two variants: FSR-KLD for logits matching via a Kullback-Leibler divergence loss, and FSR-MSE for logits distillation via an L2 distance loss.\nPerformance is reported in terms of both mean and standard deviation over 10 runs per protocol. Since generative model-based approaches van de Ven & Tolias (2018); Shin et al. (2017) greatly alter the architecture of the classification networks, we do not compare with them." }, { "heading": "3.3 MEMORY COMPARISON", "text": "For a fair comparison, we use the same feed-forward architecture for all methods and allocate an amount of memory comparable to EWC Kirkpatrick et al. (2017) and the other parameter regularization methods for storing example images per class and their prototypes. In EWC, the model often allocates a memory size twice the number of network parameters for computing the Fisher information matrix, which is used for regularizing changes of network parameters Kirkpatrick et al. (2017). In more challenging classification tasks, the network size tends to be larger and hence these methods require much more memory. In Table 1, we show an example of memory allocation on split CIFAR10 in incremental class tasks with full memory and little memory respectively. The feed-forward classification network contains around 16.3 × 10^5 parameters. Weight regularization methods require twice that, about 32.63 × 10^5 parameters. The input RGB images are of size 3 × 32 × 32. Via Eq. 8, our method can allocate episodic memory with full capacity C = 16.3 × 10^5 and calculate m, which is equivalent to storing 16.3 × 10^5 / (3 × 32 × 32) ≈ 530 example images per class. In the experiments with little training data described in Section 3.1, we reduce m to 10 example images per class.
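To make the Table 1 arithmetic concrete, here is a small sketch of the Eq. 8 budget; the function, its arguments and the embedding size M = 256 are illustrative assumptions.

```python
def m_per_class(C, K_total, M, D):
    """Eq. 8 solved for m: C = K_total * (M + m * D), i.e. one M-dim prototype
    plus m stored D-dimensional example inputs for each of K_total classes."""
    return (C / K_total - M) / D

# The back-of-envelope from the text: a budget of 16.3e5 divided by the size of
# one 3x32x32 CIFAR10 image, neglecting the prototype cost.
print(16.3e5 / (3 * 32 * 32))                                    # ~530.6 stored images
print(m_per_class(C=16.3e5, K_total=10, M=256, D=3 * 32 * 32))   # per class, 10 classes
```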
" }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "" }, { "heading": "4.1 ALLEVIATING FORGETTING", "text": "Figure 3 reports the results of continual learning methods with full memory under the two task protocols. All compared continual learning methods outperform SGD (cyan), a baseline that does nothing to prevent catastrophic forgetting. Our method (red) achieves the highest average classification accuracy among all compared methods, including both parameter regularization methods and memory-based methods, with minimal forgetting.\nA good continual learning method should not only show good memory retention but also be able to adapt to new tasks. In Figure 3a, although our method (red) performs on par with EWC (brown) and FSR (date) in retaining the classification accuracy on dataset D1 over the 20 sequential tasks, the average classification accuracy of our method is far higher than that of EWC (brown) and FSR (date), as shown in Figure 3b, indicating that both of these methods retain memory well but fail to learn new tasks. After the 13th task, the average classification performance of EWC is even worse than that of SGD. Across all 20 tasks, our method leads FSR (date) by 3% on average. Similar reasoning applies to the comparison with SI (green): although our method performs comparably to SI in terms of average classification accuracy, SI fails to retain the classification accuracy on D1, which is 6% lower than ours at the 20th task.\nFigures 3c and 3d show the average task classification accuracy over sequential tasks in the incremental class protocol. The incremental class protocol is more challenging than the incremental domain protocol, since the models have to infer both the task identity and the class labels within the task. Our method (red) performs slightly better than iCARL (date) and has the highest average classification accuracy in continual learning. Compared with the third-best method, FSR (green), our method is consistently around 5% more accurate on average across all tasks on CIFAR10 and miniImageNet respectively. Note that most weight regularization methods, such as EWC (brown), perform as badly as SGD. A possible explanation is that the Fisher information matrix computed by EWC only captures local information and does not account for scenarios where data distributions across tasks are far apart. On the contrary, our method maintains remarkably better performance than EWC, because ours focuses primarily on the behaviors of the network outputs, which indirectly relaxes the constraint on changes of the network parameters." }, { "heading": "4.2 FEW-SHOT CONTINUAL LEARNING", "text": "We evaluate continual learning methods with little memory under the two task protocols, with few training examples from the second task onwards. Figure 4 reports their performance. Our method (red) has the highest average classification accuracy over all sequential tasks among the state-of-the-art methods, with 27% and 11% vs. 19% and 4% for the second-best method, FSR-KLD (yellow), at the 9th and 10th tasks on CIFAR10 and miniImageNet respectively. Weight regularization methods, such as EWConline (blue) and MAS (brown), perform as badly as SGD (cyan), worse than logits matching methods such as FSR (green and yellow) or iCARL (purple). Similar observations hold as in Figure 3 with full training data.\nCompared with logits matching methods, our method has the highest average task classification accuracy. This reveals that our method performs classification via metric learning in an effective few-shot manner. It is also because our network architecture does not depend on the number of output classes, so knowledge from previous tasks can be well preserved and transferred to new tasks. This is superior to traditional networks with new parameters added in the last classification layer, which easily leads to overfitting.
As a side benefit, given the same number of example inputs in the episodic memory, our method is more efficient in memory usage since it stores one prototype per class instead of the logits of each example input, as verified in Table 1." }, { "heading": "4.3 NETWORK ANALYSIS", "text": "We also study the effects of the following three factors on performance. Figure 5 reports the average classification accuracy of the ablated methods. (1) Intuitively, limited memory capacity restricts the number of example inputs to replay and leads to a performance drop. On permuted MNIST in the incremental domain protocol, with full memory capacity reduced by 2.5 times (from 5,000 example inputs to 2,000), our method shows a moderate decrease of average classification accuracy of 1% at the 20th task. (2) We also compare our method with memory replay optimized by a cross-entropy loss under full memory conditions. A performance drop of around 1.5% is observed, which validates that classifying example inputs based on initial prototypes results in better memory retention. (3) Given fixed C, our method adopts the strategy of decreasing the number m of example inputs in memory as the number of tasks increases. The performance drop of 1.5% under uniform memory allocation demonstrates the usefulness of dynamic memory allocation, which enforces more examples to be replayed from earlier tasks and therefore promotes memory retention.\nIn Figure 6, we provide visualizations of class embeddings by projecting these latent representations into 2D space. It can be seen that our method is capable of clustering latent representations belonging to the same class while accommodating new class embeddings across sequential tasks. Interestingly, the clusters are topologically organized based on feature similarities among classes, and the topological structure of the same classes is preserved across tasks. For example, the cluster of “bird” (black) is close to that of “plane” (orange) in Task 3, and the same two clusters are still close in Task 9. This again validates that classifying example inputs from previous tasks based on initial prototypes promotes the preservation of the topological structure of the initial metric space." }, { "heading": "5 CONCLUSION", "text": "We address the problem of catastrophic forgetting by proposing prototype recalls in classification tasks. In addition to significantly alleviating catastrophic forgetting on benchmark datasets, our method is superior to others in terms of efficient memory usage and in generalizing to novel concepts given only a few training examples in new tasks.\nHowever, given a finite memory capacity and a high number of tasks, we recognize that our method, just like other memory-based continual learning algorithms, has limitations in the number of prototypes it can store. The memory requirement of our method increases linearly with the number of tasks. In practice, there is always a trade-off between memory usage and retention. We believe that our method is one of the most efficient continual learning methods for eliminating catastrophic forgetting with a decent amount of memory usage. Moreover, we restrict ourselves to classification tasks with discrete prototypes. In future work, to apply our algorithm to more complex and challenging problems, such as regression and reinforcement learning (RL), one possible solution is to quantize the continuous space in regression or to formulate RL with discrete state-action pairs." } ]
2019
null
SP:27b73a836058b5e3cf5430f5c64fec2d1475da1b
[ "The paper studies batch meta learning, i.e. the problem of using a fixed experience from past tasks to learn a policy which can quickly adapt to a new related task. The proposed method combines the techniques of Fujimoto et al. (2018) for stabilizing batch off-policy learning with ideas from Rakelly et al. (2019) for learning a set invariant task embedding using task-specific datasets of transitions. They learn task-specific Q-values which are then distilled into a new Q function which is conditioned on the task embedding instead of the task ID. The embedding is further shaped using a next-state prediction auxiliary loss. The algorithmic ideas feel a bit too incremental and the experimental evaluation could be stronger--I'd recommend trying the method on more complicated environments and including ablation studies.", "This paper studies the meta-RL problem in the off-policy, batch learning setting. Batch-RL is the setting in which a policy is learned entirely offline, that is, without interaction with the environment and given only trajectories collected by some policy. Compared to RL, Meta-RL involves the additional challenge of task-inference; the goal of Meta-RL is to train a policy that can generalize to a distribution of tasks (i.e. a distribution of MDPs), without actually being given a description of the task (unlike contextual multi-task policy learning). A simple approach for solving the meta-RL problem thus might be to first perform task inference by encoding data from some task into a task description, and then condition a contextual policy on this task description. " ]
Pre-training is transformative in supervised learning: a large network trained on large, existing datasets can be used as an initialization when learning a new task. Such initialization speeds up convergence and leads to higher performance. In this paper, we seek to understand how pre-training from only existing, observational data can be formalized in Reinforcement Learning (RL), and whether it is possible. We formulate the setting as Batch Meta Reinforcement Learning. We identify MDP mis-identification as a central challenge and motivate it with theoretical analysis. Combining ideas from Batch RL and Meta RL, we propose tiMe, which learns distillation of multiple value functions and MDP embeddings from only existing data. In challenging control tasks and without additional exploration on unseen MDPs, tiMe is competitive with state-of-the-art model-free RL methods trained with hundreds of thousands of interactions. This work demonstrates that Meta RL from observational data is possible, and we hope it will gather additional interest from the community to tackle this problem.
[]
[ { "authors": [ "Pieter Abbeel", "Andrew Y. Ng" ], "title": "Apprenticeship learning via inverse reinforcement learning", "venue": "In Proceedings of the Twenty-first International Conference on Machine Learning,", "year": 2004 }, { "authors": [ "Rishabh Agarwal", "Dale Schuurmans", "Mohammad Norouzi" ], "title": "Striving for simplicity in off-policy deep reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Alexander A. Alemi", "Ian Fischer", "Joshua V. Dillon", "Kevin Murphy" ], "title": "Deep variational information", "venue": "bottleneck. CoRR,", "year": 2016 }, { "authors": [ "András Antos", "Rémi Munos", "Csaba Szepesvári" ], "title": "Fitted q-iteration in continuous action-space mdps", "venue": "In Proceedings of the 20th International Conference on Neural Information Processing Systems,", "year": 2007 }, { "authors": [ "Kai Arulkumaran", "Antoine Cully", "Julian Togelius" ], "title": "Alphastar: An evolutionary computation perspective, 2019", "venue": "URL http://arxiv.org/abs/1902.01724. cite arxiv:1902.01724", "year": 1902 }, { "authors": [ "Mathew Botvinick", "Sam Ritter", "Jane Wang", "Zeb Kurth-Nelson", "Charles Blundell", "Demis Hassabis" ], "title": "Reinforcement learning, fast and slow", "venue": "Trends in Cognitive Sciences,", "year": 2019 }, { "authors": [ "Kurtland Chua", "Roberto Calandra", "Rowan McAllister", "Sergey Levine" ], "title": "Deep reinforcement learning in a handful of trials using probabilistic dynamics models", "venue": "CoRR, abs/1805.12114,", "year": 2018 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "venue": "CoRR, abs/1810.04805,", "year": 2018 }, { "authors": [ "Yan Duan", "John Schulman", "Xi Chen", "Peter L. Bartlett", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Rl$ˆ2$: Fast reinforcement learning via slow reinforcement learning", "venue": "CoRR, abs/1611.02779,", "year": 2016 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks. CoRR, abs/1703.03400, 2017", "venue": "URL http://arxiv.org/abs/1703", "year": 2017 }, { "authors": [ "Scott Fujimoto", "David Meger", "Doina Precup" ], "title": "Off-policy deep reinforcement learning without exploration", "venue": "CoRR, abs/1812.02900,", "year": 2018 }, { "authors": [ "Scott Fujimoto", "Herke van Hoof", "Dave Meger" ], "title": "Addressing function approximation error in actorcritic methods", "venue": "CoRR, abs/1802.09477,", "year": 2018 }, { "authors": [ "Abhishek Gupta", "Russell Mendonca", "Yuxuan Liu", "Pieter Abbeel", "Sergey Levine" ], "title": "Metareinforcement learning of structured exploration strategies", "venue": "CoRR, abs/1802.07245,", "year": 2018 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "CoRR, abs/1801.01290,", "year": 2018 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Kristian Hartikainen", "George Tucker", "Sehoon Ha", "Jie Tan", "Vikash Kumar", "Henry Zhu", "Abhishek Gupta", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic algorithms and applications", "venue": "CoRR, abs/1812.05905,", "year": 2018 }, { "authors": [ "Hado V. 
Hasselt" ], "title": "Double q-learning", "venue": "Advances in Neural Information Processing Systems", "year": 2010 }, { "authors": [ "Matteo Hessel", "Hado van Hasselt", "Joseph Modayil", "David Silver" ], "title": "On inductive biases in deep reinforcement learning, 2019", "venue": "URL https://openreview.net/forum?id= rJgvf3RcFQ", "year": 2019 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeffrey Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "In NIPS Deep Learning and Representation Learning Workshop,", "year": 2015 }, { "authors": [ "Jonathan Ho", "Stefano Ermon" ], "title": "Generative adversarial imitation learning", "venue": "CoRR, abs/1606.03476,", "year": 2016 }, { "authors": [ "Sepp Hochreiter", "A. Steven Younger", "Peter R. Conwell" ], "title": "Learning to learn using gradient descent", "venue": "In IN LECTURE NOTES ON COMP", "year": 2001 }, { "authors": [ "Zhiao Huang", "Fangchen Liu", "Hao Su" ], "title": "Mapping state space using landmarks for universal goal reaching", "venue": null, "year": 2019 }, { "authors": [ "Jan Humplik", "Alexandre Galashov", "Leonard Hasenclever", "Pedro A. Ortega", "Yee Whye Teh", "Nicolas Heess" ], "title": "Meta reinforcement learning as task inference", "venue": "CoRR, abs/1905.06424,", "year": 2019 }, { "authors": [ "Max Jaderberg", "Wojciech M. Czarnecki", "Iain Dunning", "Luke Marris", "Guy Lever", "Antonio Garcı́a Castañeda", "Charles Beattie", "Neil C. Rabinowitz", "Ari S. Morcos", "Avraham Ruderman", "Nicolas Sonnerat", "Tim Green", "Louise Deason", "Joel Z. Leibo", "David Silver", "Demis Hassabis", "Koray Kavukcuoglu", "Thore Graepel" ], "title": "Human-level performance in first-person multiplayer games with population-based deep reinforcement learning", "venue": null, "year": 2018 }, { "authors": [ "Michael Kearns", "Daphne Koller" ], "title": "Efficient reinforcement learning in factored mdps", "venue": null, "year": 2000 }, { "authors": [ "Taylor W Killian", "Samuel Daulton", "George Konidaris", "Finale Doshi-Velez" ], "title": "Robust and efficient transfer learning with hidden parameter markov decision processes", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "CoRR, abs/1312.6114,", "year": 2013 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "An introduction to variational autoencoders", "venue": "CoRR, abs/1906.02691,", "year": 2019 }, { "authors": [ "Sascha Lange", "Thomas Gabel", "Martin Riedmiller" ], "title": "Batch reinforcement learning", "venue": "Reinforcement Learning: State of the Art,", "year": 2012 }, { "authors": [ "Alessandro Lazaric", "Marcello Restelli", "Andrea Bonarini" ], "title": "Transfer of samples in batch reinforcement learning", "venue": "In ICML, pp", "year": 2008 }, { "authors": [ "Lihong Li", "Vadim Bulitko", "Russell Greiner" ], "title": "Batch reinforcement learning with state importance", "venue": "Machine Learning: ECML", "year": 2004 }, { "authors": [ "Timothy P. Lillicrap", "Jonathan J. Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "In Yoshua Bengio and Yann LeCun (eds.), ICLR,", "year": 2016 }, { "authors": [ "Dhruv Mahajan", "Ross B. 
Girshick", "Vignesh Ramanathan", "Kaiming He", "Manohar Paluri", "Yixuan Li", "Ashwin Bharambe", "Laurens van der Maaten" ], "title": "Exploring the limits of weakly supervised pretraining", "venue": "CoRR, abs/1805.00932,", "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A. Rusu", "Joel Veness", "Marc G. Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K. Fidjeland", "Georg Ostrovski", "Stig Petersen", "Charles Beattie", "Amir Sadik", "Ioannis Antonoglou", "Helen King", "Dharshan Kumaran", "Daan Wierstra", "Shane Legg", "Demis Hassabis" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Alex Nichol", "Joshua Achiam", "John Schulman" ], "title": "On first-order meta-learning algorithms", "venue": "CoRR, abs/1803.02999,", "year": 2018 }, { "authors": [ "Emilio Parisotto", "Jimmy Ba", "Ruslan Salakhutdinov" ], "title": "Actor-mimic: Deep multitask and transfer reinforcement learning", "venue": "CoRR, abs/1511.06342,", "year": 2015 }, { "authors": [ "Martin L. Puterman" ], "title": "Markov Decision Processes: Discrete Stochastic Dynamic Programming", "venue": null, "year": 1994 }, { "authors": [ "Charles Ruizhongtai Qi", "Hao Su", "Kaichun Mo", "Leonidas J. Guibas" ], "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "venue": "CoRR, abs/1612.00593,", "year": 2016 }, { "authors": [ "Kate Rakelly", "Aurick Zhou", "Deirdre Quillen", "Chelsea Finn", "Sergey Levine" ], "title": "Efficient off-policy meta-reinforcement learning via probabilistic context variables", "venue": "CoRR, abs/1903.08254,", "year": 2019 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "Proceedings of the 31st International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Reuven Y. Rubinstein", "Dirk P. Kroese" ], "title": "The Cross Entropy Method: A Unified Approach To Combinatorial Optimization, Monte-carlo Simulation (Information Science and Statistics)", "venue": null, "year": 2004 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael S. Bernstein", "Alexander C. Berg", "FeiFei Li" ], "title": "Imagenet large scale visual recognition challenge", "venue": "CoRR, abs/1409.0575,", "year": 2014 }, { "authors": [ "Richard S. Sutton", "Andrew G. Barto" ], "title": "Introduction to Reinforcement Learning", "venue": null, "year": 1998 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In IROS, pp. 5026–5033", "year": 2012 }, { "authors": [ "Hado van Hasselt", "Arthur Guez", "David Silver" ], "title": "Deep reinforcement learning with double qlearning", "venue": "CoRR, abs/1509.06461,", "year": 2015 }, { "authors": [ "Quan Ho Vuong", "Yiming Zhang", "Keith W. Ross" ], "title": "Supervised policy update", "venue": "CoRR, abs/1805.11706,", "year": 2018 }, { "authors": [ "Jane X. Wang", "Zeb Kurth-Nelson", "Dhruva Tirumala", "Hubert Soyer", "Joel Z. Leibo", "Rémi Munos", "Charles Blundell", "Dharshan Kumaran", "Matthew Botvinick" ], "title": "Learning to reinforcement learn", "venue": "CoRR, abs/1611.05763,", "year": 2016 }, { "authors": [ "Jane X. Wang", "Zeb Kurth-Nelson", "Dhruva Tirumala", "Hubert Soyer", "Joel Z. 
Leibo", "Rémi Munos", "Charles Blundell", "Dharshan Kumaran", "Matthew Botvinick" ], "title": "Learning to reinforcement learn", "venue": "CoRR, abs/1611.05763,", "year": 2016 }, { "authors": [ "Tingwu Wang", "Xuchan Bao", "Ignasi Clavera", "Jerrick Hoang", "Yeming Wen", "Eric Langlois", "Shunshi Zhang", "Guodong Zhang", "Pieter Abbeel", "Jimmy Ba" ], "title": "Benchmarking model-based reinforcement learning", "venue": null, "year": 1907 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime G. Carbonell", "Ruslan Salakhutdinov", "Quoc V. Le" ], "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "venue": "CoRR, abs/1906.08237,", "year": 2019 }, { "authors": [ "Manzil Zaheer", "Satwik Kottur", "Siamak Ravanbakhsh", "Barnabás Póczos", "Ruslan Salakhutdinov", "Alexander J. Smola" ], "title": "URL http://arxiv.org/ abs/1703.06114", "venue": "Deep sets. CoRR,", "year": 2017 }, { "authors": [ "Vinicius Zambaldi", "David Raposo", "Adam Santoro", "Victor Bapst", "Yujia Li", "Igor Babuschkin", "Karl Tuyls", "David Reichert", "Timothy Lillicrap", "Edward Lockhart", "Murray Shanahan", "Victoria Langston", "Razvan Pascanu", "Matthew Botvinick", "Oriol Vinyals", "Peter Battaglia" ], "title": "Deep reinforcement learning with relational inductive biases", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Matthew D. Zeiler", "Rob Fergus" ], "title": "Visualizing and understanding convolutional networks", "venue": "CoRR, abs/1311.2901,", "year": 2013 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep Reinforcement Learning algorithms still require millions of environment interactions to obtain reasonable performance, hindering their applications (Mnih et al., 2015; Lillicrap et al., 2016; Vuong et al., 2018; Fujimoto et al., 2018b; Jaderberg et al., 2018; Arulkumaran et al., 2019; Huang et al., 2019). This is due to the lack of good pre-training methods. In supervised learning, a pre-trained network significantly reduces sample complexity when learning new tasks (Zeiler & Fergus, 2013; Devlin et al., 2018; Yang et al., 2019). Meta Reinforcement Learning (RL) has been proposed as a framework for pre-training in RL (Wang et al., 2016a; Duan et al., 2016; Finn et al., 2017). However, such methods still require the collection of millions of interactions during meta-train, which means that they face the same sample complexity challenge as standard RL algorithms. In this work, we use the following definition of pre-training: the ability to use data from a set of tasks to improve performance on unseen, but related tasks.\nIn supervised learning, a key reason why pre-training is incredibly successful is that the dataset used for pre-training can be collected from naturally occurring large-scale processes. This removes the need to manually collect data and allows for scalable data collection, resulting in massive datasets. For example, Mahajan et al. (2018) pre-trains using existing images and their corresponding hashtags from Instagram to obtain state-of-the-art performance on ImageNet (Russakovsky et al., 2014).\nIn this paper, we seek to formalize pre-training in RL in a way that allows for scalable data collection. The data used for pre-training should be purely observational and the policies that are being optimized for should not need to interact with the environment during pre-training. To this end, we propose Batch Meta Reinforcement Learning (BMRL) as a formalization of pre-training in RL from only existing and observational data. During training, the learning algorithms only have access to a batch of existing data collected a priori from a family of Markov Decision Process (MDP). During testing, the trained policies should perform well on unseen MDPs sampled from the family.\nA related setting is Batch RL (Antos et al., 2007; Lazaric et al., 2008; Lange et al., 2012), which we emphasize assumes the existing data comes from a single MDP. To enable scalable data collection, this assumption must be relaxed: the existing data should come from a family of related MDPs. Consider smart thermostats, whose goal is to maintain a specific temperature while minimizing electricity cost. Assuming Markovian dynamics, the interactions between a thermostat and\nits environment can be modelled as a MDP. Data generated by a thermostat operating in a single building can be used to train Batch RL algorithms. However, if we consider the data generated by the same thermostat operating in different buildings, much more data is available. While the interactions between the same thermostat model and different buildings correspond to different MDPs, these MDPs share regularities which can support generalization, such as the physics of heat diffusion. In section 6, we further discuss the relations between BMRL and other existing formulations.\nThe first challenge in BMRL is the accurate inference of the unseen MDP identity. 
We show that existing algorithms which sample mini-batches from the existing data to perform Q-learning style updates converge to a degenerate value function, a phenomenon we term MDP mis-identification. The second challenge is the interpolation of knowledge about seen MDPs to perform well on unseen MDPs. While Meta RL algorithms can explicitly optimize for this objective thanks to the ability to interact with the environment, we must rely on the inherent generalizability of the trained networks. To mitigate these issues, we propose tiMe, which learns from existing data to distill multiple value functions and MDP embeddings. tiMe is a flexible and scalable pipeline with inductive biases to encourage accurate MDP identity inference and rich supervision to maximize generalization. The pipeline consists of two phases. In the first phase, a Batch RL algorithm is used to extract MDP-specific networks from MDP-specific data. The second phase distills the MDP-specific networks.\nTo summarize, our contributions are threefold: (1) formulation of Meta RL from observational data as Batch Meta Reinforcement Learning (BMRL); (2) a simple stage-wise approach which works well on standard benchmarks in the BMRL setting; (3) most importantly, a demonstration that Meta RL from only observational data is possible. We hope this work will direct the attention of the meta RL community towards this research direction." }, { "heading": "2 PRELIMINARIES", "text": "" }, { "heading": "2.1 BATCH REINFORCEMENT LEARNING", "text": "We model the environment as a Markov Decision Process (MDP), uniquely defined as a 5-element tuple $M_i = (S, A, T_i, R_i, \gamma)$ with state space S, action space A, transition function $T_i$, reward function $R_i$ and discount factor γ (Puterman, 1994; Sutton & Barto, 1998). At each discrete timestep, the agent is in a state s, picks an action a, arrives at the next state s′, and receives a reward r. The goal of the agent π is to maximize the expected sum of discounted rewards $J(\pi) = \mathbb{E}_{\tau \sim \pi, M_i}[\sum_{t=0}^{\infty} \gamma^t R_i(s_{t,i}, a_{t,i}, s'_{t,i})]$, where $\tau = (s_{0,i}, a_{0,i}, r_{0,i}, s_{1,i}, a_{1,i}, r_{1,i}, \ldots)$ is a trajectory generated by using π to interact with $M_i$. We will consider a family of MDPs, defined formally in subsection 2.3. We thus index each MDP in this family with i.\nIn Batch RL, policies are trained from scratch to solve a single MDP $M_i$ using an existing batch of N transition tuples $B_i = \{(s_{t,i}, a_{t,i}, r_{t,i}, s'_{t,i}) \mid t = 1, \ldots, N\}$ without any further interaction with $M_i$. At test time, we use the trained policies to interact with $M_i$ to obtain an empirical estimate of the performance J. Batch RL optimizes for the same objective as standard RL algorithms. However, during training, the learning algorithm only has access to $B_i$ and is not allowed to interact with $M_i$." }, { "heading": "2.2 BATCH-CONSTRAINED Q-LEARNING", "text": "Fujimoto et al. (2018a) identifies extrapolation error and value function divergence as the modes of failure when modern Q-learning algorithms are applied to the Batch RL setting. Concretely, deep Q-learning algorithms approximate the expected sum of discounted rewards starting from a state-action pair, $\mathbb{E}[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t, s'_t) \mid s_0 = s, a_0 = a]$, with a value estimate Q(s, a). The estimate can be learned by sampling transition tuples from the batch and applying the temporal difference update:\n$$Q(s, a) \leftarrow (1 - \alpha_t)\,Q(s, a) + \alpha_t\,(r + \gamma Q(s', \pi(s'))), \qquad \pi(s') \in \arg\max_{a \in A} Q(s', a) \quad (1)$$\nThe value function diverges if Q fails to accurately estimate the value of π(s′).
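As a concrete reference point, below is a minimal sketch of update rule (1) run on a fixed batch of a finite MDP, with a constant step size standing in for the Robbins-Monro schedule; when B pools transitions from several MDPs, this procedure converges to the optimum of the empirical MDP characterized later in Proposition 1. All names are illustrative.

```python
import random

def q_learning_on_batch(B, gamma=0.99, alpha=0.1, steps=100_000):
    """Update rule (1) with (s, a, r, s') sampled uniformly from a fixed list B
    of transitions; Q is a dict over discrete state-action pairs, initialized to 0."""
    actions = {a for (_, a, _, _) in B}
    Q = {(s, a): 0.0 for (s, a, _, _) in B}
    for _ in range(steps):
        s, a, r, s_next = random.choice(B)
        # greedy backup; unseen state-action pairs keep their initial value of 0
        v_next = max(Q.get((s_next, b), 0.0) for b in actions)
        Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * (r + gamma * v_next)
    return Q
```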
Fujimoto et al. (2018a) introduces Batch-Constrained Q-Learning, constraining π to select actions that are similar to actions in the batch, to prevent inaccurate value estimation. Concretely, given s′, a generator G outputs multiple candidate actions $\{a_m\}_m$. A perturbation model ξ takes each state-candidate-action pair as input and generates a small correction term $\xi(s', a_m)$ for each candidate. The corrected candidate action $a_m + \xi(s', a_m)$ with the highest value, as estimated by a learnt Q, is π(s′):\n$$\pi(s') = \arg\max_{a_m + \xi(s', a_m)} Q(s', a_m + \xi(s', a_m)), \qquad \{a_m = G(s', z_m)\}_m, \quad z_m \sim N(0, 1)$$\nEstimation error has also been previously studied and mitigated in model-free RL algorithms (Hasselt, 2010; van Hasselt et al., 2015; Fujimoto et al., 2018b)." }, { "heading": "2.3 META REINFORCEMENT LEARNING", "text": "Meta RL optimizes for the average return on a family of MDPs and usually assumes that the MDPs in this family share S, A, γ. Each MDP is then uniquely defined by a tuple $(T_i, R_i)$, and a distribution $p(T_i, R_i)$ defines a distribution over MDPs. During meta-train, we train a policy by sampling MDPs from this distribution and sampling trajectories from each sampled MDP; these are referred to as the meta-train MDPs. During meta-test, unseen MDPs are sampled from $p(T_i, R_i)$, referred to as the meta-test MDPs. The trained policy is used to interact with the meta-test MDPs to obtain an estimate of its performance. The choice of whether to update parameters (Finn et al., 2017) or to keep them fixed during meta-test (Hochreiter et al., 2001) is left to the learning algorithms, both having demonstrated prior successes." }, { "heading": "2.4 MDP IDENTITY INFERENCE WITH SET NEURAL NETWORK", "text": "A Meta RL policy needs to infer the meta-test MDP identity to pick actions with high return. Rakelly et al. (2019) introduces PEARL, which uses a set neural network (Qi et al., 2016; Zaheer et al., 2017) f as the MDP identity inference function. f takes as input a context set $c = \{(s_k, a_k, r_k, s'_k)\}_k$ and infers the identity of an MDP in the form of a distributed representation in continuous space. The parameters of f are trained to minimize the error of the critic Q:\n$$(Q(s, a, f(c)) - (r + \bar{V}(s', f(c))))^2 \quad (2)$$\nwhere $\bar{V}$ is a learnt state value function. PEARL also adopts an amortized variational approach (Kingma & Welling, 2013; Rezende et al., 2014; Alemi et al., 2016; Kingma & Welling, 2019) to train a probabilistic f, which is interpreted as an approximation to the true posterior over the set of possible MDP identities given the context set." }, { "heading": "3 BATCH META REINFORCEMENT LEARNING", "text": "Let K be the number of meta-train MDPs, N the number of transition tuples available from each meta-train MDP, and θ the parameters of the policy. We can formulate Batch Meta Reinforcement Learning (BMRL) as an optimization problem:\n$$\arg\max_\theta J(\theta) = \mathbb{E}_{M_i \sim p(T_i, R_i)}\Big[ \mathbb{E}_{\tau \sim \pi_\theta, M_i}\Big[ \sum_{t=0}^{\infty} \gamma^t R_i(s_{t,i}, a_{t,i}, s'_{t,i}) \Big] \Big] \quad (3)$$\nwhere the learning algorithms only have access to the batch B during meta-train:\n$$B = \cup_{i=1}^{K} B_i, \qquad B_i = \{(s_{t,i}, a_{t,i}, r_{t,i}, s'_{t,i}) \mid t = 1, \ldots, N\}, \qquad M_i \sim p(T_i, R_i)$$\nWe assume we know which MDP each transition in the batch was collected from. This assumption simplifies our setting and is used to devise the algorithms. To maintain the flexibility of the formalization, we do not impose restrictions on the controller that generates the batch.
However, the performance of learning algorithms generally increases as the training data becomes more diverse.\nMDP identity inference challenge. To obtain high return on the unseen meta-test MDPs, the trained policies need to accurately infer their identities (Ritter et al., 2018; Gupta et al., 2018; Humplik et al., 2019). In BMRL, previously proposed solutions based on Q-learning style updates, where mini-batches are sampled from the batch to minimize the TD error, converge to a degenerate solution. Subsection 5.1 provides an experimental result that demonstrates this phenomenon. In finite MDPs, this degenerate solution is the optimal value function of the MDP constructed from the relative frequencies of the transitions contained in the batch. We can formalize this statement with the following proposition.\nProposition 1. Let N(s, a, s′) be the number of times the triple (s, a, s′) appears in B (with any reward). Performing Q-learning on finite S and A, with all Q-values Q(s, a) initialized to 0 and update rule (1), where (s, a, s′, r) is sampled uniformly at random from B at each step t, will lead the Q-values to converge almost surely to the optimal Q-values of the MDP $(S, A, \hat{T}, \hat{R}, \gamma)$, as long as $\alpha_t \geq 0$, $\sum_{t=0}^{\infty} \alpha_t = \infty$, $\sum_{t=0}^{\infty} \alpha_t^2 < \infty$, where\n$$\hat{T}(s, a, s') = \begin{cases} \mathbb{1}_{s = s'}, & \text{if } \sum_{s'' \in S} N(s, a, s'') = 0 \\ \dfrac{N(s, a, s')}{\sum_{s'' \in S} N(s, a, s'')}, & \text{otherwise} \end{cases}$$\n$$\hat{R}(s, a, s') = \begin{cases} 0, & \text{if } N(s, a, s') = 0 \\ \dfrac{\sum_{r : (s, a, s', r) \in B} r}{N(s, a, s')}, & \text{otherwise} \end{cases}$$\nThus, performing Q-learning style updates directly on data sampled from the batch B fails to find a good policy because the value function converges to the optimal value function of the wrong MDP. We refer to this phenomenon as MDP mis-identification. The proof is shown in subsection A.1.
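To see what $(\hat{T}, \hat{R})$ look like in practice, here is a small sketch that builds them from a pooled batch; pooling batches from several MDPs averages their dynamics and rewards, which is exactly why the resulting MDP is the "wrong" one. Names and the tuple layout are illustrative.

```python
from collections import defaultdict

def empirical_mdp(B):
    """Proposition 1: the MDP (S, A, T_hat, R_hat, gamma) implicitly solved by
    Q-learning on a fixed batch B of (s, a, r, s') tuples."""
    counts, reward_sums, totals = defaultdict(int), defaultdict(float), defaultdict(int)
    for s, a, r, s_next in B:
        counts[(s, a, s_next)] += 1
        reward_sums[(s, a, s_next)] += r
        totals[(s, a)] += 1
    T_hat = {(s, a, sn): n / totals[(s, a)] for (s, a, sn), n in counts.items()}
    R_hat = {k: reward_sums[k] / n for k, n in counts.items()}
    # unseen (s, a) pairs default to a self-loop with reward 0, as in the proposition
    return T_hat, R_hat
```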
\nInterpolation of seen MDPs to unseen MDPs challenge. The trained policies need to generalize from the meta-train MDPs to unseen meta-test MDPs. Meta RL tackles this challenge by formulating an optimization problem that explicitly optimizes for the average return of the meta-trained policy after additional gradient steps in unseen MDPs (Finn et al., 2017; Rothfuss et al., 2018; Nichol et al., 2018). This is possible thanks to the ability to interact with the environment during meta-train. However, in the meta-train phase of BMRL, the learning algorithms do not have access to the environment. We must rely on the inherent generalizability of the trained networks to perform well on the unseen meta-test MDPs. The key challenge is therefore finding the right inductive biases in the architecture and training procedure to encourage such generalization. The need to find the right inductive biases in RL was highlighted by Botvinick et al. (2019); Zambaldi et al. (2019); Hessel et al. (2019). We note that previous works phrase the need for inductive biases as a means to forgo generality for efficient learning. In our setting, these two goals need not be mutually exclusive." }, { "heading": "4 LEARNING DISTILLATION OF VALUE FUNCTIONS AND MDP EMBEDDINGS", "text": "" }, { "heading": "4.1 DESCRIPTION OF ARCHITECTURE AND TRAINING PROCEDURE", "text": "We propose a flexible and scalable pipeline for BMRL. Figure 1 (left) provides an overview of the pipeline in the simplest setting. Meta-train comprises two separate phases. The first phase consists of independently training a value function $Q^*_i$ for each MDP-specific batch $B_i$ using Batch RL algorithms. In the second phase, we distill the set of batch-specific value functions $\{Q^*_i\}_i$ into a super value function $Q_S$ through supervised learning (Hinton et al., 2015). Compared to a normal value function, a super value function takes not only a state-action pair as input, but also an inferred MDP identity, and outputs different values depending on the inferred MDP identity.\nThe pipeline is flexible in that any Batch RL algorithm is applicable in the first phase. Figure 1 (right) illustrates the architecture for the second phase given that the Batch RL algorithm used in the first phase is Batch-Constrained Q (BCQ) Learning.\nAlgorithm 1: tiMe training procedure when BCQ is used in the first phase.\nInput: batches $\{B_i\}_i$; $Q_S, G_S, \xi_S, f, E, P$ parameterized jointly by θ.\n1: $Q^*_i, G^*_i, \xi^*_i \leftarrow \text{BCQ}(B_i)$ ∀i\n2: Randomly choose $B_j$ out of $\{B_i\}_i$\n3: Sample a transition $(s_j, a_j, s'_j, r_j)$ from $B_j$\n4: Sample context $\{(s_k, a_k, s'_k, r_k)\}_{k \neq j}$ from $B_j$\n5: Infer MDP identity: $\hat{M} \leftarrow f(\{(s_k, a_k, s'_k, r_k)\}_k)$\n6: Predict s′, r: $\hat{s}', \hat{r} \leftarrow P(E(s_j, a_j), \hat{M})$\n7: Predict state-action value: $\hat{Q}^*_j \leftarrow Q_S(s_j, a_j, \hat{M})$\n8: $z \sim N(0, 1)$\n9: Predict candidate action: $\hat{a} \leftarrow G_S(s_j, z, \hat{M})$\n10: Obtain ground-truth candidate action: $a \leftarrow G^*_j(s_j, z)$\n11: Predict correction factor: $\hat{\xi}^*_j \leftarrow \xi_S(s_j, a)$\n12: $L \leftarrow \|\hat{s}' - s'\|_2^2 + (\hat{r} - r)^2 + (\hat{Q}^*_j - Q^*_j(s_j, a_j))^2 + \|\hat{a} - a\|_2^2 + \|\hat{\xi}^*_j - \xi^*_j(s_j, a)\|_2^2$\n13: $\theta \leftarrow \theta - \nabla_\theta L$\nAs described in subsection 2.2, BCQ maintains three separate components: a learnt value function Q, a candidate action generator G and a perturbation model ξ. Therefore, the output of the first phase consists of three sets $\{Q^*_i\}_i$, $\{G^*_i\}_i$, $\{\xi^*_i\}_i$. The second phase distills $\{Q^*_i\}_i$, $\{G^*_i\}_i$, $\{\xi^*_i\}_i$ into $Q_S$, $G_S$, $\xi_S$ respectively. The distillation of G and ξ is necessary to pick actions that lead to high return, because each learnt value function $Q^*_i$ only provides reliable estimates for actions generated by $G^*_i$ and $\xi^*_i$, a consequence of the training procedure of BCQ. In addition to $Q_S$, $G_S$, $\xi_S$, the architecture consists of three other networks, f, P and E. f takes as input a context $\{(s_k, a_k, r_k, s'_k)\}_k$ and outputs a distributed representation of the MDP identity in a fixed-dimension continuous space. The output of f is an input to $Q_S$, $G_S$, $\xi_S$ and P. E and P predict $s'_j$, $r_j$ given a state-action pair $(s_j, a_j)$. P has low capacity while the other networks are relatively large. During the second phase, all networks are jointly trained end-to-end with the regression losses of predicting s′, r and of distilling $\{Q^*_i\}_i$, $\{G^*_i\}_i$, $\{\xi^*_i\}_i$. This is illustrated in detail in Algorithm 1.\nDuring meta-test, f is used to infer the identity of the meta-test MDP as a fixed-dimension continuous vector. The super functions $Q_S$, $G_S$, $\xi_S$ are used to pick actions in the meta-test MDPs, using the same procedure as BCQ (subsection 2.2). The super functions also take as input the inferred MDP identity.\nThe key idea behind the approach is a simple stage-wise solution to the problem of Meta RL from observational data. In the second phase, we distill many policies into one for related tasks by jointly learning distillation of value functions and MDP embeddings. We therefore name the approach tiMe. MDP embeddings refer to the ability to infer the identity of an MDP in the form of a distributed representation in continuous space given a context.
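A hedged sketch of one second-phase update (Algorithm 1, lines 2-13) in PyTorch follows; the buffer `sample` API, the frozen first-phase targets and all names are illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn.functional as F

def distillation_step(nets, opt, batches, bcq_targets, context_size=64):
    """One tiMe second-phase update. `nets` holds the jointly trained super and
    auxiliary networks; `bcq_targets[j]` holds the frozen Q*_j, G*_j, xi*_j
    extracted from batch B_j in the first phase."""
    QS, GS, xiS, f, E, P = nets
    j = torch.randint(len(batches), ()).item()       # Algorithm 1, line 2
    s, a, r, s_next = batches[j].sample(1)           # line 3 (hypothetical buffer API)
    ctx = batches[j].sample(context_size)            # line 4
    Qj, Gj, xij = bcq_targets[j]
    m_hat = f(ctx)                                   # line 5: inferred MDP identity
    s_hat, r_hat = P(E(s, a), m_hat)                 # line 6: low-capacity decoder
    z = torch.randn_like(a)                          # line 8
    with torch.no_grad():                            # first-phase targets are frozen
        q_t, a_cand = Qj(s, a), Gj(s, z)
        xi_t = xij(s, a_cand)
    loss = (F.mse_loss(s_hat, s_next) + F.mse_loss(r_hat, r)   # line 12
            + F.mse_loss(QS(s, a, m_hat), q_t)
            + F.mse_loss(GS(s, z, m_hat), a_cand)
            + F.mse_loss(xiS(s, a_cand), xi_t))
    opt.zero_grad(); loss.backward(); opt.step()     # line 13
```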
" }, { "heading": "4.2 BENEFITS OF THE PROPOSED PIPELINE", "text": "Inductive biases to encourage accurate MDP identification. The first inductive bias is the relationship between f and $Q_S$. They collectively receive as input a state-action pair $(s_j, a_j)$ and a context $\{(s_k, a_k, r_k, s'_k)\}_k$, and regress to the target value $Q^*_j(s_j, a_j)$. The target for each state-action pair can take on any of the values in the set $\{Q^*_i(s_j, a_j)\}_i$. Similar state-action pairs can have very different regression targets if they correspond to different meta-train MDPs. The context is the only information in the input to f and $Q_S$ that correlates with which $Q^*(s_j, a_j)$ out of the set $\{Q^*_i(s_j, a_j)\}_i$ f and $Q_S$ should regress to. Thus, f and $Q_S$ must learn to interpret the context to predict the correct value for $(s_j, a_j)$. The second inductive bias is the auxiliary task of predicting $s'_j$, $r_j$. A key design choice is that the network P, which takes as input $E(s_j, a_j)$ and $f(\{(s_k, a_k, r_k, s'_k)\}_k)$ and predicts $s'_j$, $r_j$, has low capacity. As such, the output of f must contain meaningful semantic information such that a small network can use it to reconstruct the MDP. This prevents the degenerate scenario where f learns to copy its input to its output. To summarize, these two explicit biases in the architecture and training procedure encourage f to accurately infer the MDP identity given the context.\nRichness and stability of supervision. Previous approaches update f to minimize the critic loss (subsection 2.4). It is well known that RL provides a sparse training signal. This signal can also cause instability, since the target values in the critic loss change over time. In contrast, our pipeline provides a training signal for f that is both rich and stable. It is rich because f is trained to infer a representation of the MDP identity that can be used for multiple downstream tasks, such as predicting s′, r. This encourages a general-purpose learnt representation and supports generalization. The training signal is also stable, since the regression targets are fixed during the second phase of tiMe.\nScalability. The pipeline is scalable in that an arbitrary amount of purely observational data can be used in the first phase, so long as computational constraints permit. The data can also be heterogeneous in the sense that they do not need to contain only trajectories with high return. In the experimental section, we demonstrate the benefit of the approach when the data contains trajectories of varying quality, some of which were generated by random policies. The extraction of the batch-specific networks, such as the batch-specific value functions $\{Q^*_i\}_i$, from the MDP-specific batches can be trivially parallelized and scales gracefully as the number of meta-train MDPs increases."
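The encoder f only needs to be permutation-invariant over the context set. Below is a minimal sketch of such a set network in the style of Qi et al. (2016); Zaheer et al. (2017); all layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Permutation-invariant f: encode each (s, a, r, s') tuple independently,
    then mean-pool, so the inferred MDP identity ignores context ordering."""
    def __init__(self, tuple_dim, embed_dim=32, hidden=128):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(tuple_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
        self.rho = nn.Linear(hidden, embed_dim)

    def forward(self, context):                          # context: (set_size, tuple_dim)
        return self.rho(self.phi(context).mean(dim=0))   # (embed_dim,)
```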
}, { "heading": "5.1 TOY EXPERIMENTS", "text": "This section illustrates MDP mis-identification as the failure mode of existing Batch RL algorithms in BMRL. The toy setting allows for easy interpretability of the trained agents’ behavior. We also show that in the standard Batch RL setting, the Batch RL algorithm tested finds a near-optimal policy. This means the failure of existing Batch RL algorithm in BMRL is not because of the previously identified extrapolation issue when learning from existing data (Fujimoto et al., 2018a).\nEnvironment Description In this environment, the agent needs to navigate on a 2d-plane to a goal location. The agent is a point mass whose starting location is at the origin (0, 0). Each goal location is a point on a semi-circle centered at the origin with radius of 10 units. At each discrete timestep, the agent receives as input its current location (x, y), takes an action indicating the change in its position (∆x,∆y), transitions to a new position (x + ∆x, y + ∆y) and receives a reward. The reward is the negative distance between the agent’s current location and the goal location. The agent does not receive the goal location as input and |∆x| ≤ 1, |∆y| ≤ 1. Since the MDP transition function is fixed, each goal location uniquely defines a MDP. The distribution over MDPs is defined by the distribution over goal locations, which corresponds to a distribution over reward functions.\nBatch SAC We modify SAC to learn from the batch by initializing the replay buffer with existing transitions. Otherwise, training stays the same. We test Batch SAC on a simple setting where there is only one meta-train MDP and one meta-test MDP which share the same goal location. This is the standard Batch RL setting and is a special case of BMRL. Batch SAC finds a near-optimal policy.\nThree meta-train and meta-test MDPs This experiment has 3 meta-train MDPs with different goal locations. The goals divide the semi-circle into two segments of equal length. There are three metatest MDPs whose goal locations coincides with the goal locations of the meta-train MDPs. This setting only tests the ability of the trained policies to correctly identify the meta-test MDPs and do not pose the challenge of generalization to unseen MDPs. Batch SAC was trained by combining the transitions from the 3 meta-train MDPs into a single replay buffer and sampling transitions from this buffer to perform gradient updates. Otherwise, training stays the same as SAC. Figure 2 (left, middle) illustrates that Batch SAC fails to learn a reasonable policy because of the MDP misidentification phenomena.\nBatch SAC with task inference function We also tried adding to the architecture of Batch SAC the probabilistic MDP identity inference function as described in subsection 2.4. This is the equivalent of adapting PEARL (Rakelly et al., 2019) to work in the BMRL setting. This approach fails to train a policy that performs well on all 3 meta-test MDPs. We note that the off-policy meta RL setting that the PEARL paper considers and the BMRL setting we consider are solving different problems. We do not argue that one is easier than the other.\nPerformance of tiMe Since Batch SAC can extract the optimal value function out of the batch in the single meta-train MDP case, we use it as the Batch RL algorithm in the first phase of the tiMe pipeline. The architecture in the second phase thus consists of E,P, f and QS . 
" }, { "heading": "5.2 MUJOCO EXPERIMENTS", "text": "Environment Description. This section tests tiMe in challenging continuous-control robotic locomotion tasks. Each task requires applying control actions to a simulated robot so that it moves with a particular velocity in the direction of its initial heading. Formally, the MDPs within each family share S, A, T, γ and differ only in R, where R is defined as:\n$$R(s, a, s') = \text{alive bonus} - \alpha\,|\text{current velocity} - \text{target velocity}| - \beta\,\|a\|^2$$\nwhere α and β are positive constants. A one-to-one correspondence exists between an MDP within the family and a target velocity; defining a family of MDPs is equivalent to picking an interval of possible target velocities. This setting is instantiated on two types of simulated robots, hopper and halfcheetah, illustrated in Figure 3. Experiments are performed inside the Mujoco simulator (Todorov et al., 2012). The setting was first proposed by Finn et al. (2017).\nZero-shot meta-test. During testing, in contrast to prior works, we do not update the parameters of the trained networks, as is done in gradient-based meta RL, nor allow for an initial exploratory phase where episode returns do not count towards the final meta-test performance, as is done in off-policy meta RL (Rakelly et al., 2019). This allows for testing the inherent generalizability of the trained networks without confounding factors. The meta-test MDPs are chosen such that they are unseen during meta-train, i.e. none of the transitions used during meta-train were sampled from any of the meta-test MDPs. At the beginning of each meta-test episode, the inferred MDP identity is initialized to a zero vector. Subsequent transitions collected during the episode are used as the context. The meta-test MDPs are also chosen to provide wide coverage over the support of the MDP distribution. This tests whether our approach generalizes to a variety of meta-test MDPs, or simply overfits to a small region inside the support.\nMeta-train conditions. The target velocities of the meta-train MDPs divide the target velocity interval into equal segments. This removes the bias of sampling meta-train MDPs when evaluating performance. The target velocity intervals, episode lengths, and numbers of meta-train MDPs for hopper and halfcheetah are [0.0, 2.35] and [0.0, 1.5], 1000 and 200, and 16 and 29 respectively. Each meta-train MDP has one million transitions for hopper and sixty thousand transitions for halfcheetah.\nPerformance analysis. Figure 3 illustrates tiMe's performance on unseen meta-test MDPs. tiMe is competitive with state-of-the-art model-free RL methods trained from scratch for one million and sixty thousand environment interactions in hopper and halfcheetah respectively. We perform experiments on halfcheetah with an episode length of 200 because of computational constraints; previous Meta RL works also use an episode length of 200 (Rakelly et al., 2019). The same network trained with tiMe also performs well on a variety of different meta-test MDPs, demonstrating that it does not over-fit to one particular meta-train MDP. We compare with SAC to demonstrate that BMRL is a promising research direction.
Limitations Our approach assumes that the batch contains transitions of high enough quality to learn a good policy in the Batch RL setting. However, we note that the data in the batch are of varying quality, some of which was generated by poorly performing policies. Also, our approach has only been demonstrated to work on tasks where resets are not crucial for the exploration needed for task inference, e.g. the sparse-reward setting. We leave this avenue for future work." }, { "heading": "6 RELATED WORKS", "text": "Supervised Learning and Imitation Learning The main differences between Batch (Meta) RL and supervised learning are: actions have long-term consequences, and the actions in the batch are not assumed to be optimal. If they are optimal in the sense that they were collected from an expert, Batch RL reduces to Imitation Learning (Abbeel & Ng, 2004; Ho & Ermon, 2016). In fact, Fujimoto et al. (2018a) demonstrate that Batch RL generalizes Imitation Learning in discrete MDPs.
Meta RL Equation 3 is the same objective that existing Meta RL algorithms optimize for (Wang et al., 2016b; Finn et al., 2017). We could have formulated our experimental setting as a Partially Observable MDP, but we chose to formulate it as Batch Meta Reinforcement Learning to ensure consistency with the literature that inspires this paper. The main difference between Meta RL and our formulation is access to the environment during training. Meta RL algorithms sample transitions from the environment during meta-train. We only have access to existing data during meta-train.
Context Inference Zintgraf et al. (2019) and Rakelly et al. (2019) propose learning inference modules that infer the MDP identity. Their procedures sample transitions from the MDP during meta-train, which differs from our motivation of learning from only existing data. Killian et al. (2017) infer the MDP's "hidden parameters", input the parameters to a learnt transition function to generate synthetic data, and train a policy from the synthetic data. Such model-based approaches are still outperformed by the best model-free methods (Wang et al., 2019), on which our method is based.
Batch RL Fujimoto et al. (2018a) and Agarwal et al. (2019) demonstrate that good policies can be learnt entirely from existing data in modern RL benchmarks. Our work extends their approaches to train policies from data generated by a family of MDPs. Li et al. (2004) select transitions from the batch based on an importance measure. They assume that for each state-action pair in the batch, its value under the optimal value function can be easily computed. We make no such assumption.
Factored MDPs In discrete MDPs, the number of possible states increases exponentially with the number of dimensions. Kearns & Koller (2000) tackle this problem by assuming that each dimension of the next state is conditionally dependent on only a subset of the dimensions of the current state.
In contrast, our method makes no such assumption and applies to both discrete and continuous settings.
Joint MDP The family of MDPs can be seen as a joint MDP with additional information in the state which differentiates states between the different MDPs (Parisotto et al., 2015). Sampling an initial state from the joint MDP is equivalent to sampling an MDP from the family of MDPs. However, without prior knowledge, it is unclear how to set the value of the additional information so as to support generalization from the meta-train MDPs to the meta-test MDPs. In our approach, the additional information is the transitions from the MDP, and the network learns to infer the MDP identity." }, { "heading": "7 CONCLUSION", "text": "We propose a new formalization of pre-training in RL as Batch Meta Reinforcement Learning (BMRL). BMRL differs from Batch RL in that the existing data comes from a family of related MDPs and thus enables scalable data collection. BMRL also differs from Meta RL in that no environment interaction happens during meta-train. We identified two main challenges in BMRL: MDP identity inference and generalization to unseen MDPs. To tackle these challenges, we propose tiMe, a flexible and scalable training pipeline which jointly learns distillation of value functions and MDP embeddings. Experimentally, we demonstrate that tiMe obtains performance competitive with that of state-of-the-art model-free RL methods on unseen MDPs." }, { "heading": "A APPENDIX", "text": "A.1 MDP MIS-IDENTIFICATION CONVERGENCE PROOF

Statement: Performing Q-learning on finite S and A, with all Q-values Q(s, a) initialized to 0 and update rule (1), where (s, a, s′, r) is sampled uniformly at random from B at each step t, will lead the Q-values to converge almost surely to the optimal Q-values of the MDP (S, A, T̂, R̂, γ), as long as $\alpha_t \geq 0$, $\sum_{t=0}^{\infty} \alpha_t = \infty$, $\sum_{t=0}^{\infty} \alpha_t^2 < \infty$, where

$$\hat{T}(s, a, s') = \begin{cases} \dfrac{N(s, a, s')}{\sum_{s'' \in S} N(s, a, s'')}, & \text{if } \sum_{s'' \in S} N(s, a, s'') > 0 \\ \mathbb{1}_{s = s'}, & \text{otherwise,} \end{cases}$$

$$\hat{R}(s, a, s') = \begin{cases} \dfrac{\sum_{r : (s, a, s', r) \in B} r}{N(s, a, s')}, & \text{if } N(s, a, s') > 0 \\ 0, & \text{otherwise.} \end{cases}$$

Proof. First note that for any (s, a) ∈ S × A such that $\sum_{s'' \in S} N(s, a, s'') = 0$, the initial Q(s, a) is already optimal and will never be updated; for all other (s, a) ∈ S × A and any s′ ∈ S, we have

$$\hat{T}(s, a, s') = \Pr_{(s_0, a_0, s'_0, r_0) \sim B}\left(s'_0 = s' \mid s_0 = s, a_0 = a\right), \qquad \hat{R}(s, a, s') = \mathbb{E}_{(s_0, a_0, s'_0, r_0) \sim B}\left[r_0 \mid s_0 = s, a_0 = a, s'_0 = s'\right],$$

and with probability 1,

$$\sum_{t=0}^{\infty} \alpha_t \mathbb{1}\{Q(s, a) \text{ is updated at round } t\} = \infty, \qquad \sum_{t=0}^{\infty} \alpha_t^2 \mathbb{1}\{Q(s, a) \text{ is updated at round } t\} < \infty.$$

Then convergence follows from the same argument as for the convergence of Q-learning (Watkins & Dayan, 1992).
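A minimal sketch of the empirical estimators $\hat{T}$ and $\hat{R}$ above, computed from a batch B of (s, a, s′, r) tuples in the tabular case (the dictionary-based counting is our choice):

```python
from collections import defaultdict

def empirical_mdp(batch):
    """Estimate T_hat and R_hat from a batch of (s, a, s_next, r) tuples,
    following the definitions in Appendix A.1 (tabular states/actions)."""
    counts = defaultdict(int)         # N(s, a, s')
    reward_sums = defaultdict(float)  # sum of r over tuples with (s, a, s')
    totals = defaultdict(int)         # sum over s'' of N(s, a, s'')
    for s, a, s_next, r in batch:
        counts[(s, a, s_next)] += 1
        reward_sums[(s, a, s_next)] += r
        totals[(s, a)] += 1

    def T_hat(s, a, s_next):
        if totals[(s, a)] == 0:       # unseen (s, a): self-loop, as in the statement
            return 1.0 if s == s_next else 0.0
        return counts[(s, a, s_next)] / totals[(s, a)]

    def R_hat(s, a, s_next):
        n = counts[(s, a, s_next)]
        return reward_sums[(s, a, s_next)] / n if n > 0 else 0.0

    return T_hat, R_hat
```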
" }, { "heading": "B HYPER-PARAMETERS", "text": "The small, medium and large target velocities in hopper correspond to 0.2, 1.1, 2.0. The small, medium and large velocities in halfcheetah correspond to 0.475, 1.075, 1.475. The learning rate is 3e-4 and the Adam optimizer is used in all experiments. All neural networks used are feed-forward networks. All experiments are performed on machines with up to 48 CPU cores and 4 Nvidia GPUs.
All experiments are performed in Python 3.7, with mujoco-py 2.0.2.5 running on top of mujoco 200. All neural network operations are in Pytorch 1.2.
In the toy experiment, QS consists of 2 hidden layers, each of size 256. The inferred MDP size is 32. The context size is 1. f consists of 1 hidden layer of size 256. E consists of 1 hidden layer of size 256 and outputs 256 values. The same goes for P. Random shooting was performed with 100 random actions at each iteration.
In hopper, the size of the inferred MDP is 8. The context size is 1. f consists of 3 hidden layers, each of size 256. E consists of 4 hidden layers, each of size 16, and outputs a vector of size 8. B consists of 2 hidden layers, each of size 4. QS consists of 8 hidden layers, each of size 128. The training mini-batch size is 32. When BCQ is run to extract the value function out of the batch, the same hyper-parameters as found in the official implementation (https://github.com/sfujim/BCQ) are used, except that the learning rate is lowered from 0.003 to 0.0003. The alpha and beta in the reward function definition are 1.0 and 0.001. The alive bonus is 1.0.
In halfcheetah, the size of the inferred MDP is 64. The context size is 1. f consists of 7 hidden layers, each of size 512. E consists of 7 hidden layers, each of size 512, and outputs a vector of size 64. B consists of 1 hidden layer of size 64. QS consists of 8 hidden layers, each of size 512. The training mini-batch size is 64. GS consists of 7 hidden layers, each of size 750. ξS consists of 7 hidden layers, each of size 400. When BCQ is run to extract the value function out of the batch, unless otherwise mentioned, the same hyper-parameters as found in the official implementation (https://github.com/sfujim/BCQ) are used. The learning rate is lowered from 0.003 to 0.0003. The perturbation model has 2 hidden layers, of size 400 and 300. The critic also has 2 hidden layers, of size 400 and 300. The alpha and beta in the reward function definition are 1.0 and 0.05. The alive bonus is 0.0.
In both hopper and halfcheetah, except for the super Q function loss, the terms in the loss L in Algorithm 1 are scaled so that they have the same magnitude as the super Q function loss. Graphs for the mujoco experiments are generated by smoothing over the last 100 evaluation datapoints.
The performance on Mujoco was averaged over 5 seeds (0-4). The hyper-parameters for SAC are the same as those found in the Pytorch public implementation (https://github.com/vitchyr/rlkit). The standard deviations are averaged over 5000 timesteps during evaluation. This corresponds to 5 episodes in halfcheetah, because there is no terminal state termination in halfcheetah, and a variable number of episodes in hopper, because there is terminal state termination." } ]
2,019
null
SP:7dc520ce87edf76ac948de085da9855d2f32c7ab
[ "This paper introduces Graph Convolutional Reinforcement Learning (referred to as DGN). DGN is a Deep Q-Learning (DQN) agent structured as a graph neural network / graph convolutional network with multi-head dot product attention as a message aggregation function. Graphs are obtained based on spatial neighborhoods (e.g. k nearest neighbors) or based on network structure in the domain. DGN considers a multi-agent setting with a centralized learning algorithm and shared parameters across all (controlled) agents, but individually allocated reward. Further, the paper considers environments where other non-learning agents are present which follow a pre-trained, stationary policy. In addition to the attention-based multi-agent architecture, the paper introduces a regularizer on attention weights similar to the use of target networks in DQN, to stabilize training. Results demonstrate that the proposed model architecture outperforms related earlier agent architectures that do not use attention or use a fully-connected graph.", "This paper proposes an algorithm allowing \"cooperation\" between agents in multi-agent reinforcement learning, modeling agents as nodes in a graph. Each agent having only a partial view of the environment, the proposed algorithm uses multi-head attention as a (graph) convolution kernel but otherwise remains similar to the DQN algorithm. Performance is evaluated on three tasks using the MAgent framework." ]
Learning to cooperate is crucially important in multi-agent environments. The key is to understand the mutual interplay between agents. However, multi-agent environments are highly dynamic, where agents keep moving and their neighbors change quickly. This makes it hard to learn abstract representations of mutual interplay between agents. To tackle these difficulties, we propose graph convolutional reinforcement learning, where graph convolution adapts to the dynamics of the underlying graph of the multi-agent environment, and relation kernels capture the interplay between agents by their relation representations. Latent features produced by convolutional layers from gradually increased receptive fields are exploited to learn cooperation, and cooperation is further improved by temporal relation regularization for consistency. Empirically, we show that our method substantially outperforms existing methods in a variety of cooperative scenarios.
[ { "affiliations": [], "name": "Jiechuan Jiang" }, { "affiliations": [], "name": "Chen Dun" }, { "affiliations": [], "name": "Tiejun Huang" }, { "affiliations": [], "name": "Zongqing Lu" } ]
[ { "authors": [ "Akshat Agarwal", "Sumit Kumar", "Katia Sycara" ], "title": "Learning transferable cooperative behavior in multi-agent teams", "venue": "arXiv preprint arXiv:1906.01202,", "year": 2019 }, { "authors": [ "Coren L Apicella", "Frank W Marlowe", "James H Fowler", "Nicholas A Christakis" ], "title": "Social networks and cooperation in hunter-gatherers", "venue": null, "year": 2012 }, { "authors": [ "Peter Battaglia", "Razvan Pascanu", "Matthew Lai", "Danilo Jimenez Rezende" ], "title": "Interaction networks for learning about objects, relations and physics", "venue": "NeurIPS,", "year": 2016 }, { "authors": [ "R Qi Charles", "Hao Su", "Mo Kaichun", "Leonidas J Guibas" ], "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "venue": null, "year": 2017 }, { "authors": [ "Abhishek Das", "Théophile Gervet", "Joshua Romoff", "Dhruv Batra", "Devi Parikh", "Michael Rabbat", "Joelle Pineau" ], "title": "Tarmac: Targeted multi-agent communication", "venue": null, "year": 2019 }, { "authors": [ "David K Duvenaud", "Dougal Maclaurin", "Jorge Iparraguirre", "Rafael Bombarell", "Timothy Hirzel", "Alán Aspuru-Guzik", "Ryan P Adams" ], "title": "Convolutional networks on graphs for learning molecular fingerprints", "venue": "NeurIPS,", "year": 2015 }, { "authors": [ "Jakob Foerster", "Gregory Farquhar", "Triantafyllos Afouras", "Nantas Nardelli", "Shimon Whiteson" ], "title": "Counterfactual multi-agent policy gradients", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Jayesh K Gupta", "Maxim Egorov", "Mykel Kochenderfer" ], "title": "Cooperative multi-agent control using deep reinforcement learning", "venue": "In AAMAS,", "year": 2017 }, { "authors": [ "Mikael Henaff", "Joan Bruna", "Yann LeCun" ], "title": "Deep convolutional networks on graph-structured data", "venue": "arXiv preprint arXiv:1506.05163,", "year": 2015 }, { "authors": [ "Yedid Hoshen" ], "title": "Vain: Attentional multi-agent predictive modeling", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens van der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": null, "year": 2017 }, { "authors": [ "Natasha Jaques", "Angeliki Lazaridou", "Edward Hughes", "Caglar Gulcehre", "Pedro A Ortega", "DJ Strouse", "Joel Z Leibo", "Nando de Freitas" ], "title": "Social influence as intrinsic motivation for multi-agent deep reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Jiechuan Jiang", "Zongqing Lu" ], "title": "Learning attentional communication for multi-agent cooperation", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Ryan Lowe", "Yi Wu", "Aviv Tamar", "Jean Harb", "OpenAI Pieter Abbeel", "Igor Mordatch" ], "title": "Multi-agent actor-critic for mixed cooperative-competitive environments", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "Aleksandra Malysheva", "Tegg Taekyong Sung", "Chae-Bong Sohn", "Daniel Kudenko", "Aleksei Shpilman" ], "title": "Deep multi-agent reinforcement learning with relevance graphs", "venue": "arXiv preprint arXiv:1811.12557,", "year": 2018 }, { "authors": [ "Laëtitia Matignon", "Laurent Jeanpierre", "Abdel-Illah Mouaddib" ], "title": "Coordinated multi-robot exploration under communication constraints using decentralized markov decision processes", "venue": "In AAAI,", "year": 2012 }, { 
"authors": [ "Alicia P Melis", "Dirk Semmann" ], "title": "How is human cooperation different", "venue": "Philosophical Transactions of the Royal Society of London B: Biological Sciences,", "year": 2010 }, { "authors": [ "Mathias Niepert", "Mohamed Ahmed", "Konstantin Kutzkov" ], "title": "Learning convolutional neural networks for graphs", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Hisashi Ohtsuki", "Christoph Hauert", "Erez Lieberman", "Martin A Nowak" ], "title": "A simple rule for the evolution of cooperation on graphs and social", "venue": null, "year": 2006 }, { "authors": [ "Peng Peng", "Ying Wen", "Yaodong Yang", "Quan Yuan", "Zhenkun Tang", "Haitao Long", "Jun Wang" ], "title": "Multiagent bidirectionally-coordinated nets: Emergence of human-level coordination in learning to play starcraft combat games", "venue": "arXiv preprint arXiv:1703.10069,", "year": 2017 }, { "authors": [ "Chao Qu", "Shie Mannor", "Huan Xu", "Yuan Qi", "Le Song", "Junwu Xiong" ], "title": "Value propagation for decentralized networked deep multi-agent reinforcement learning", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Shai Shalev-Shwartz", "Shaked Shammah", "Amnon Shashua" ], "title": "Safe, multi-agent, reinforcement learning for autonomous driving", "venue": "arXiv preprint arXiv:1610.03295,", "year": 2016 }, { "authors": [ "Sainbayar Sukhbaatar", "Rob Fergus" ], "title": "Learning multiagent communication with backpropagation", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Andrea Tacchetti", "H Francis Song", "Pedro AM Mediano", "Vinicius Zambaldi", "Neil C Rabinowitz", "Thore Graepel", "Matthew Botvinick", "Peter W Battaglia" ], "title": "Relational forward models for multiagent learning", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Ming Tan" ], "title": "Multi-agent reinforcement learning: Independent vs. 
cooperative agents", "venue": "In ICML,", "year": 1993 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "Nicholas Watters", "Andrea Tacchetti", "Theophane Weber", "Razvan Pascanu", "Peter Battaglia", "Daniel Zoran" ], "title": "Visual interaction networks", "venue": "arXiv preprint arXiv:1706.01433,", "year": 2017 }, { "authors": [ "MA Wiering" ], "title": "Multi-agent reinforcement learning for traffic light control", "venue": "In ICML,", "year": 2000 }, { "authors": [ "Yaodong Yang", "Jianye Hao", "Mingyang Sun", "Zan Wang", "Changjie Fan", "Goran Strbac" ], "title": "Recurrent deep multiagent q-learning for autonomous brokers in smart grid", "venue": "In IJCAI,", "year": 2018 }, { "authors": [ "Yaodong Yang", "Rui Luo", "Minne Li", "Ming Zhou", "Weinan Zhang", "Jun Wang" ], "title": "Mean field multiagent reinforcement learning", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Vinicius Zambaldi", "David Raposo", "Adam Santoro", "Victor Bapst", "Yujia Li", "Igor Babuschkin", "Karl Tuyls", "David Reichert", "Timothy Lillicrap", "Edward Lockhart" ], "title": "Relational deep reinforcement learning", "venue": "arXiv preprint arXiv:1806.01830,", "year": 2018 }, { "authors": [ "Kaiqing Zhang", "Zhuoran Yang", "Han Liu", "Tong Zhang", "Tamer Başar" ], "title": "Fully decentralized multiagent reinforcement learning with networked agents", "venue": null, "year": 2018 }, { "authors": [ "Lianmin Zheng", "Jiacheng Yang", "Han Cai", "Weinan Zhang", "Jun Wang", "Yong Yu" ], "title": "Magent: A many-agent reinforcement learning platform for artificial collective intelligence", "venue": "arXiv preprint arXiv:1712.00600,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Cooperation is a widespread phenomenon in nature from viruses, bacteria, and social amoebae to insect societies, social animals, and humans (Melis & Semmann, 2010). Human exceeds all other species in terms of range and scale of cooperation. The development of human cooperation is facilitated by the underlying graph of human societies (Ohtsuki et al., 2006; Apicella et al., 2012), where the mutual interplay between humans is abstracted by their relations.\nIt is crucially important to enable agents to learn to cooperate in multi-agent environments for many applications, e.g., autonomous driving (Shalev-Shwartz et al., 2016), traffic light control (Wiering, 2000), smart grid control (Yang et al., 2018a), and multi-robot control (Matignon et al., 2012). Multiagent reinforcement learning (MARL) facilitated by communication (Sukhbaatar et al., 2016; Peng et al., 2017; Jiang & Lu, 2018), mean field theory (Yang et al., 2018b), and causal influence (Jaques et al., 2019) have been exploited for multi-agent cooperation. However, communication among all agents (Sukhbaatar et al., 2016; Peng et al., 2017) makes it hard to extract valuable information for cooperation, while communication with only nearby agents (Jiang & Lu, 2018) may restrain the range of cooperation. MeanField (Yang et al., 2018b) captures the interplay of agents by mean action, but the mean action eliminates the difference among agents and thus incurs the loss of important information that could help cooperation. Causal influence (Jaques et al., 2019) is a measure of action influence, which is the policy change of an agent in the presence of an action of another agent. However, causal influence is not directly related to the reward of the environment and thus may not encourage cooperation. Unlike existing work, we consider the underlying graph of agents, which could potentially help understand agents’ mutual interplay and promote their cooperation as it does in human cooperation (Ohtsuki et al., 2006; Apicella et al., 2012).\nIn this paper, we propose graph convolutional reinforcement learning, where the multi-agent environment is modeled as a graph. Each agent is a node, the encoding of local observation of agent is the feature of node, and there is an edge between a node and its each neighbor. We apply convolution to the graph of agents. By employing multi-head attention (Vaswani et al., 2017) as the convolution kernel, graph convolution is able to extract the relation representation between nodes and convolve the features from neighboring nodes just like a neuron in a convolutional neural network (CNN). Latent features extracted from gradually increased receptive fields are exploited to learn cooperative policies. Moreover, the relation representation is temporally regularized to help the agent develop consistent cooperative policy. ∗Work done at Peking University. †Correspondence to Zongqing Lu <zongqing.lu@pku.edu.cn>.\nGraph convolutional reinforcement learning, namely DGN, is instantiated based on deep Q network and trained end-to-end. DGN shares weights among all agents, making it easy to scale. DGN abstracts the mutual interplay between agents by relation kernels, extracts latent features by convolution, and induces consistent cooperation by temporal relation regularization. We empirically show the learning effectiveness of DGN in jungle and battle games and routing in packet switching networks. 
We demonstrate that DGN agents are able to develop cooperative and sophisticated strategies and that DGN outperforms existing methods by a large margin.
By ablation studies, we confirm the following. Graph convolution greatly enhances the cooperation of agents. Unlike other parameter-sharing methods, graph convolution allows the policy to be optimized by jointly considering the agents in the receptive field of an agent, promoting mutual help. Relation kernels that are independent from the input order of features can effectively capture the interplay between agents and abstract the relation representation to further improve cooperation. Temporal regularization, which minimizes the KL divergence of relation representations in successive timesteps, boosts cooperation, helping the agent to form a long-term and consistent policy in the highly dynamic environment with many moving agents." }, { "heading": "2 RELATED WORK", "text": "MARL. MADDPG (Lowe et al., 2017) and COMA (Foerster et al., 2018) are actor-critic models for the settings of local reward and shared reward, respectively. A centralized critic that takes as input the observations and actions of all agents is used in both, which makes them hard to scale. PS-TRPO (Gupta et al., 2017) solves problems that were previously considered intractable by most MARL algorithms via sharing of policy parameters, which also improves multi-agent cooperation. However, the cooperation is still limited without sharing information among agents. Sharing parameters of the value function among agents is considered in (Zhang et al., 2018), and a convergence guarantee is provided for linear function approximation. However, the proposed algorithms and their convergence are established only in fully observable environments. Value propagation is proposed in (Qu et al., 2019) for networked MARL, which uses softmax temporal consistency to connect value and policy updates. However, this method only works on networked agents with static connectivity. CommNet (Sukhbaatar et al., 2016) and BiCNet (Peng et al., 2017) communicate the encoding of local observation among agents. ATOC (Jiang & Lu, 2018) and TarMAC (Das et al., 2019) enable agents to learn when to communicate and who to send messages to, respectively, using attention mechanisms. These communication models prove that communication does help cooperation. However, full communication is costly and inefficient, while restrained communication may limit the range of cooperation.
Graph Convolution and Relation. Many important real-world applications come in the form of graphs, such as social networks (Kipf & Welling, 2017), protein-interaction networks (Duvenaud et al., 2015), and 3D point clouds (Charles et al., 2017). Several frameworks (Henaff et al., 2015; Niepert et al., 2016; Kipf & Welling, 2017; Velickovic et al., 2017) have been architected to extract locally connected features from arbitrary graphs. A graph convolutional network (GCN) takes as input the feature matrix that summarizes the attributes of each node and outputs a node-level feature matrix. The function is similar to the convolution operation in CNNs, where the kernels are convolved across local regions of the input to produce feature maps. Using GCNs, interaction networks can reason about the objects, relations, and physics in complex systems, which has proven difficult for CNNs.
A few interaction frameworks have been proposed to predict future states and underlying properties, such as IN (Battaglia et al., 2016), VIN (Watters et al., 2017), and VAIN (Hoshen, 2017). Relational reinforcement learning (RRL) (Zambaldi et al., 2018) embeds multi-head dot-product attention (Vaswani et al., 2017) as a relation block into neural networks to learn pairwise interaction representations of a set of entities in the agent's state, helping the agent solve tasks with complex logic. Relational Forward Models (RFM) (Tacchetti et al., 2019) use supervised learning to predict the actions of all other agents based on the global state. However, in partially observable environments, it is hard for RFM to learn to make accurate predictions with only local observations. MAGnet (Malysheva et al., 2018) learns relevance information in the form of a relevance graph, where relation weights are learned by a pre-defined loss function based on heuristic rules, but relation weights in DGN are learned by directly minimizing the temporal-difference error of the value function end-to-end. Agarwal et al. (2019) used an attention mechanism for communication and proposed curriculum learning for transferable cooperation. However, these two methods require that the objects in the environment be explicitly labeled, which is infeasible in many real-world applications." }, { "heading": "3 METHOD", "text": "We construct the multi-agent environment as a graph, where agents in the environment are represented by the nodes of the graph and each node i has a set of neighbors, $B_i$, which is determined by distance or other metrics, depending on the environment, and varies over time (e.g., the agents in i's communication range or local observation). Moreover, neighboring nodes can communicate with each other. The intuition behind this is that neighboring agents are more likely to interact with and affect each other. In addition, in many multi-agent environments, it may be costly and less helpful to take all other agents into consideration, because receiving a large amount of information requires high bandwidth and incurs high computational complexity, and agents cannot differentiate valuable information from globally shared information (Tan, 1993; Jiang & Lu, 2018). As convolution can gradually increase the receptive field of an agent1, the scope of cooperation is not restricted. Therefore, it is efficient and effective to consider only neighboring agents. Unlike the static graph considered in GCNs, the graph of a multi-agent environment is dynamic and continuously changing over time as agents move or enter/leave the environment. Therefore, DGN should be able to adapt to the dynamics of the graph and learn as the multi-agent environment evolves.
3.1 GRAPH CONVOLUTION
The problem is formulated as a Decentralized Partially Observable Markov Decision Process (Dec-POMDP), where at each timestep t each agent i receives a local observation $o^t_i$, which is the property of node i in the graph, takes an action $a^t_i$, and gets an individual reward $r^t_i$. The objective is to maximize the sum of all agents' expected returns. DGN consists of three types of modules: observation encoder, convolutional layer, and Q network, as illustrated in Figure 1. The local observation $o^t_i$ is encoded into a feature vector $h^t_i$ by an MLP for low-dimensional input or a CNN for visual input. The convolutional layer integrates the feature vectors in the local region (including node i and its neighbors $B_i$) and generates the latent feature vector $h'^t_i$.
By stacking more convolutional layers, the receptive field of an agent gradually grows, where more information is gathered, and thus the scope of cooperation can also increase. That is, with one convolutional layer, node i can directly acquire the feature vectors from the encoders of nodes one hop away (i.e., $B_i$). By stacking two layers, node i can get the output of the first convolutional layer of the nodes one hop away, which contains information from nodes two hops away. However, regardless of how many convolutional layers are stacked, node i only communicates with its neighbors. This makes DGN practical in real-world applications, where each agent has a limited communication range. In addition, details of the convolution kernel will be discussed in the next subsection.
As the number and position of agents vary over time, the underlying graph continuously changes, which brings difficulties to graph convolution. To address the issue, we merge all agents' feature vectors at time t into a feature matrix $F^t$ of size N × L in the order of index, where N is the number of agents and L is the length of the feature vector. Then, we construct an adjacency matrix $C^t_i$ of size $(|B_i| + 1) \times N$ for agent i, where the first row is the one-hot representation of the index of node i, and the jth row, j = 2, . . . , $|B_i| + 1$, is the one-hot representation of the index of the (j − 1)th neighbor. Then, we can obtain the feature vectors in the local region of node i by $C^t_i \times F^t$.
Inspired by DenseNet (Huang et al., 2017), for each agent, the features of all the preceding layers are concatenated and fed into the Q network, so as to assemble and reuse the observation representation and features from different receptive fields, which respectively have distinctive contributions to the strategy that takes the cooperation at different scopes into consideration.
1 The receptive field of an agent at a convolutional layer is its perceived agents at that layer.
During training, at each timestep, we store the tuple (O, A, O′, R, C) in the replay buffer, where O = {o1, · · · , oN} is the set of observations, A = {a1, · · · , aN} is the set of actions, O′ = {o′1, · · · , o′N} is the set of next observations, R = {r1, · · · , rN} is the set of rewards, and C = {C1, · · · , CN} is the set of adjacency matrices. Note that we drop time t in the notations for simplicity. Then, we sample a random minibatch of size S from the replay buffer and minimize the loss

$$\mathcal{L}(\theta) = \frac{1}{S}\sum_{S}\frac{1}{N}\sum_{i=1}^{N}\left(y_i - Q(O_{i,C}, a_i; \theta)\right)^2, \qquad (1)$$

where $y_i = r_i + \gamma \max_{a'} Q(O'_{i,C}, a'_i; \theta')$, $O_{i,C} \subseteq O$ denotes the set of observations of the agents in i's receptive field determined by C, γ is the discount factor, and the Q function, parameterized by θ, takes $O_{i,C}$ as input and outputs the Q value for agent i. An agent's action can change the graph at the next timestep. Ideally, the Q function should be learned on the changing graph. However, the graph may change quickly, which makes the Q network difficult to converge. Thus, we keep C unchanged in two successive timesteps when computing the Q-loss in training to ease this learning difficulty. The gradients of the Q-loss of all agents are accumulated to update the parameters. Then, we softly update the target network as $\theta' = \beta\theta + (1 - \beta)\theta'$.
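As an illustration of how the adjacency matrix selects local features, a minimal NumPy sketch of computing $C^t_i \times F^t$ for one agent (function and argument names are ours; the row ordering follows the description above):

```python
import numpy as np

def local_features(F, i, neighbor_ids):
    """Gather the feature vectors in agent i's local region via C_i x F.
    F is the N x L feature matrix of all agents at timestep t; the first
    row of C_i one-hot encodes agent i, the following rows its neighbors."""
    N = F.shape[0]
    ids = [i] + list(neighbor_ids)
    C_i = np.zeros((len(ids), N))
    C_i[np.arange(len(ids)), ids] = 1.0   # one-hot rows
    return C_i @ F                        # shape (|B_i| + 1, L)
```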
Like CommNet (Sukhbaatar et al., 2016), DGN can also be seen as a factorization of a centralized policy that outputs actions for all the agents to optimize the average expected return. The factorization is that all agents share θ and the model of each agent is connected to its neighbors, dynamically determined by the graph of agents at each timestep. More convolutional layers (i.e., a larger receptive field) yield a higher degree of centralization that mitigates non-stationarity. In addition, unlike other methods with parameter-sharing, e.g., DQN, that sample experiences from individual agents, DGN samples experiences based on the graph of agents, not individual agents, and thus takes into consideration the interactions between agents. Nevertheless, the parameter-sharing of DGN does not prevent the emergence of sophisticated cooperative strategies, as we will show in the experiments. Note that during execution each agent only requires the (latent) features from its neighbors (e.g., via communication) regardless of the number of agents, which makes DGN scale easily." }, { "heading": "3.2 RELATION KERNEL", "text": "Convolution kernels integrate the features in the receptive field to extract the latent feature. One of the most important properties is that the kernel should be independent from the order of the input feature vectors. The mean operation as in CommNet (Sukhbaatar et al., 2016) meets this requirement, but it leads to only a marginal performance gain. BiCNet (Peng et al., 2017) uses a learnable kernel, i.e., an RNN. However, the input order of feature vectors severely impacts the performance, though the effect is alleviated by the bi-directional mechanism. Further, convolution kernels should be able to learn how to abstract the relation between agents so as to integrate their input features.
Inspired by RRL (Zambaldi et al., 2018), we use multi-head dot-product attention as the convolutional kernel to compute interactions between agents. For each agent i, let $B_{+i}$ denote $B_i \cup \{i\}$. The input feature of each agent is projected to query, key and value representations by each independent attention head. For attention head m, the relation between i and $j \in B_{+i}$ is computed as

$$\alpha^m_{ij} = \frac{\exp\left(\tau \cdot W^m_Q h_i \cdot (W^m_K h_j)^{\mathsf{T}}\right)}{\sum_{k \in B_{+i}} \exp\left(\tau \cdot W^m_Q h_i \cdot (W^m_K h_k)^{\mathsf{T}}\right)}, \qquad (2)$$

where τ is a scaling factor. For each attention head, the value representations of all the input features are weighted by the relation and summed together. Then, the outputs of M attention heads for agent i are concatenated and fed into the function σ, i.e., a one-layer MLP with ReLU non-linearities, to produce the output of the convolutional layer,

$$h'_i = \sigma\left(\text{concatenate}\left[\sum_{j \in B_{+i}} \alpha^m_{ij} W^m_V h_j,\ \forall m \in \mathrm{M}\right]\right). \qquad (3)$$

Figure 2 illustrates the computation of the convolutional layer with the relation kernel. Multi-head attention makes the kernel independent from the order of input feature vectors, and allows the kernel to jointly attend to different representation subspaces. More attention heads give more relation representations and make the training more stable empirically (Vaswani et al., 2017). Moreover, with multiple convolutional layers, higher-order relation representations can be extracted, which effectively capture the interplay between agents and greatly help to make cooperative decisions.
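A minimal sketch of the multi-head relation kernel of Equations 2-3 for a single agent, in NumPy (for brevity, the one-layer MLP σ is approximated here by a plain ReLU, and the per-head projections are passed as lists; all names are ours):

```python
import numpy as np

def relation_kernel(h_local, W_Q, W_K, W_V, tau=1.0):
    """One convolutional layer with a multi-head dot-product relation kernel.
    h_local has shape (|B_i|+1, L), with agent i's own feature in row 0;
    W_Q, W_K, W_V are lists of per-head projection matrices of shape (L, d)."""
    outputs = []
    for Wq, Wk, Wv in zip(W_Q, W_K, W_V):
        q_i = h_local[0] @ Wq                  # query of agent i
        keys = h_local @ Wk                    # keys of i and its neighbors
        logits = tau * (keys @ q_i)            # scaled dot products (Eq. 2)
        alpha = np.exp(logits - logits.max())
        alpha /= alpha.sum()                   # relation weights alpha_ij^m
        outputs.append(alpha @ (h_local @ Wv)) # weighted sum of values (Eq. 3)
    return np.maximum(np.concatenate(outputs), 0.0)  # concatenate heads + ReLU
```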
" }, { "heading": "3.3 TEMPORAL RELATION REGULARIZATION", "text": "Cooperation is a persistent and long-term process. Who to cooperate with and how to cooperate should be consistent and stable for at least a short period of time, even when the state/feature of surrounding agents changes. Thus, the relation representation, i.e., the attention weight distribution over the neighboring agents produced by the relation kernel (Equation 2), should also be consistent and stable for a short period of time. To make the learned attention weight distribution stable over timesteps, we propose temporal relation regularization. Inspired by temporal-difference learning, we use the attention weight distribution in the next state as the target for the current attention weight distribution. We adopt the KL divergence to measure how the current attention weight distribution differs from the target attention weight distribution. Minimizing the KL divergence as a regularizer encourages the agent to form a consistent relation representation and hence consistent cooperation. In CNNs/GCNs, higher layers learn more abstract representations. Similarly, in DGN, the relation representation captured by an upper layer should be more abstract and stable. Thus, we apply temporal relation regularization to the upper layer. Moving averages and RNN structures might help the relation representation stay stable in a static graph. However, in a dynamic environment where the neighbors of an agent change quickly, averaging or integrating cannot be performed on the attention weights of different neighbors.
It should be noted that we only use the target network to produce the target Q value. For the calculation of the KL divergence between relation representations in two timesteps, we apply the current network to the next state to produce the target relation representation. This is because the relation representation is highly correlated with the weights of feature extraction. But the update of such weights in the target network always lags behind that of the current network, making the relation representation produced by the target network inconsistent with that produced by the current network.
Let $\mathcal{G}^\kappa_m(O_{i,C}; \theta)$ denote the attention weight distribution of relation representations of attention head m at convolutional layer κ for agent i. Then, with temporal relation regularization, the loss is modified as below:

$$\mathcal{L}(\theta) = \frac{1}{S}\sum_{S}\frac{1}{N}\sum_{i=1}^{N}\left(\left(y_i - Q(O_{i,C}, a_i; \theta)\right)^2 + \lambda\,\frac{1}{M}\sum_{m=1}^{M} D_{\mathrm{KL}}\!\left(\mathcal{G}^\kappa_m(O_{i,C}; \theta)\,\|\,\mathcal{G}^\kappa_m(O'_{i,C}; \theta)\right)\right), \qquad (4)$$

where λ is the coefficient for the regularization loss. Temporal relation regularization of the upper layer in DGN helps the agent to form a long-term and consistent action policy in the highly dynamic environment with many moving agents. This will further help agents to form cooperative behavior, since many cooperative tasks need long-term consistent cooperation among agents to get the final reward. We will further analyze this in the experiments.
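A minimal sketch of the KL term in Equation 4 for one agent (the attention weights are assumed to be stacked per head; the ε smoothing and names are ours):

```python
import numpy as np

def temporal_relation_loss(attn_current, attn_next, eps=1e-8):
    """KL regularizer for one agent: attn_current and attn_next are the
    attention weight distributions G_m^kappa produced by the *current*
    network on o_t and o_{t+1}, with shape (M, |B_i|+1). The per-head
    KL terms are averaged, matching the 1/M factor in Equation 4."""
    p = attn_current + eps
    q = attn_next + eps
    kl_per_head = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    return kl_per_head.mean()
```

The full loss then adds λ times this value to the temporal-difference term, averaged over agents and minibatch samples.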
" }, { "heading": "4 EXPERIMENTS", "text": "For the experiments, we adopt the grid-world platform MAgent (Zheng et al., 2017). In the 30 × 30 grid-world environment, each agent corresponds to one grid and has a local observation that contains a square view with 11 × 11 grids centered at the agent and its own coordinates. The discrete actions are moving or attacking. Two scenarios, battle and jungle, are considered to investigate the cooperation among agents. Also, we build an environment, routing, that simulates routing in packet switching networks. These three scenarios are illustrated in Figure 3. In the experiments, we compare DGN with independent Q-learning, DQN, which is fully decentralized, CommNet (Sukhbaatar et al., 2016), and MeanField Q-learning (MFQ) (Yang et al., 2018b). We also evaluate two variants of DGN for ablation study, which are DGN without temporal relation regularization, denoted as DGN-R, and further DGN-R with mean kernels instead of relation kernels, denoted as DGN-M. In the experiments, DGN and the baselines are parameter-sharing and trained using Q-learning. Moreover, to ensure the comparison is fair, their basic hyperparameters are all the same and their parameter sizes are also similar. Please refer to the Appendix for hyperparameters and experimental settings. The code of DGN is available at https://github.com/PKU-AI-Edge/DGN/." }, { "heading": "4.1 BATTLE", "text": "In this scenario, N agents learn to fight against L enemies who have abilities superior to the agents. The moving or attacking range of an agent is the four neighboring grids; however, an enemy can move to one of the twelve nearest grids or attack one of the eight neighboring grids. Each agent/enemy has six hit points (i.e., is killed by six attacks). After the death of an agent/enemy, the balance would easily be lost, and hence we add a new agent/enemy at a random location to maintain the balance. By that, we can make a fair comparison among different methods in terms of kills, deaths, and kill-death ratio, besides reward, for a given number of timesteps. The pretrained DQN model built into MAgent takes the role of the enemy. As an individual enemy is much more powerful than an individual agent, an agent has to collaborate with others to develop coordinated tactics to fight enemies. Moreover, as each enemy has six hit points, agents have to cooperate consistently to kill an enemy.
We trained all the models with the setting of N = 20 and L = 12 for 2000 episodes. Figure 4 shows their learning curves in terms of mean reward. For all the models, the shadowed area is enclosed by the min and max values of three training runs, and the solid line in the middle is the mean value (the same holds for jungle and routing). DGN converges to a much higher mean reward than the other baselines, and its learning curve is more stable. MFQ outperforms CommNet and DQN, which first obtain relatively high reward but eventually converge to much lower reward. As observed in the experiment, at the beginning of training, DQN and CommNet learn sub-optimal policies such as gathering as a group in a corner to avoid being attacked, since such behaviors generate relatively high reward. However, since the distribution of reward is uneven, i.e., agents at the exterior of the group are easily attacked, learning from the "low reward experiences" produced by the sub-optimal policies, DQN and CommNet converge to more passive policies, which lead to much lower reward. We evaluate DGN and the baselines by running 30 test games, each game unrolled for 300 timesteps. Table 1 shows the mean reward, kills, deaths, and kill-death ratio.
DGN agents learn a series of tactical maneuvers, such as encircling and envelopment of a single flank. For a single enemy, DGN agents learn to encircle and attack it together. For a group of
MFQ agents do not effectively cooperate with each other because the mean action incurs the loss of important information that could help cooperation. In DGN, relation kernels can extract high order relations between agents through graph convolution, which can be easily exploited to yield cooperation. Therefore, DGN outperforms other baselines.\nAblations. As shown in Figure 4 and Table 1, comparing DGN and DGN-R, we see that the removal of temporal relation regularization incurs slight drop in performance. In the experiment, it is observed that DGN agents indeed behave more consistently and synchronously with each other, while DGN-R agents are more likely to be distracted by the new appearance of enemy or friend nearby and abandon its original intended trajectory. This results in fewer appearances of successful formation of encircling of a moving enemy, which might need consistent cooperation of agents to move across the field. DGN agents often overcome such distraction and show more long-term strategy and aim by moving more synchronously to chase the enemy until encircle and destroy it. From this experiment, we can see that temporal relation regularization indeed helps agents to form more consistent cooperation. Moreover, comparing DGN-R and DGN-M, we confirm that relation kernels that abstract the relation representation between agents indeed helps to learn cooperation. Although DGN-M and CommNet both use mean operation, DGN-M substantially outperforms CommNet. This is attributed to graph convolution can effectively extract latent features from gradually increased receptive field. The performance of DGN with different receptive fields is available in Appendix.\n4.2 JUNGLE\nThis scenario is a moral dilemma. There are N agents and L foods in the field, where foods are stationary. An agent gets positive reward by eating food, but gets higher reward by attacking other agent. At each timestep, each agent can move to or attack one of four neighboring grids. Attacking a blank grid gets a small negative reward (inhibiting excessive attacks). This experiment is to examine whether agents can learn collaboratively sharing resources rather than attacking each other. We trained all the models in the setting of N = 20 and L = 12 for 2000 episodes. Table 2 shows the mean reward and number of attacks between agents over 30 test runs, each game unrolled with 120 timesteps. Figure 6 shows their learning\ncurves. DGN outperforms all the baselines during training and test in terms of mean reward and number of attacks between agents. It is observed that DGN agents can properly select the close food and seldom hurt each other, and the food can be allocated rationally by the surrounding agents, as\nshown in Figure 5c. Moreover, attacks between DGN agents are much less than others, e.g., 2× less than MFQ. Sneak attack, fierce conflict, and hesitation are the characteristics of CommNet and DQN agents, as illustrated in Figure 5d, verifying their failure of learning cooperation." }, { "heading": "4.3 ROUTING", "text": "The network consists of L routers. Each router is randomly connected to a constant number of routers (three in the experiment), and the network topology is stationary. There are N data packets with a random size, and each packet is randomly assigned a source and destination router. If there are multiple packets with the sum size larger than the bandwidth of a link, they cannot go through the link simultaneously. 
In the experiment, data packets are agents, and they aim to quickly reach the destination while avoiding congestion. At each timestep, the observation of a packet is its own attributes (i.e., current location, destination, and data size), the attributes of cables connected to its current location (i.e., load, length), and neighboring data packets (on the connected cable or routers). It takes some timesteps for a data packet to go through a cable, a linear function of the cable length. The action space of a packet is the choices of next hop. Once the data packet arrives at the destination, it leaves the system and another data packet enters the system with random initialization.\nWe trained all the models with the setting of N = 20 and L = 20 for 2000 episodes. Figure 7 shows their learning curves. DGN converges to much higher mean reward and more quickly than the baselines. We evaluate all the models by running 10 test games, each game unrolled with 300 timesteps. Table 3 shows the mean reward, mean delay of data packets, and throughput, where the delay of a packet is measured by the timesteps taken from source to destination and the throughput is the number of delivered packets per timestep.\nTo better interpret the performance of the models, we calculate the shortest path for every pair of nodes in the network using Floyd algorithm. Then, during test, we directly calculate the delay and throughout based on the shortest path of each packet, which is Floyd in Table 3. Note that this delay is without considering the bandwidth limitation (i.e., data packets can go through any link simultaneously). Thus, this is the ideal case for the routing problem. When considering the bandwidth limit, we let each packet follow its shortest path, and if a link is congested, the packet will wait at the router until the link is unblocked. This is Floyd with Bandwidth Limit (BL) in Table 3, which can be considered as the practical solution.\nAs shown in Table 3, the performance of DGN is much better than other models and Floyd with BL.\nIn the experiment, it is observed that DGN agents tend to select the shortest path to the destination, and more interestingly, learn to select different paths when congestion is about to occur. DQN agents cannot learn the shortest path due to myopia and easily cause congestion at some links without considering the influence of other agents. Communication indeed helps as MFQ and CommNet outperform DQN. However, they are unable to develop the sophisticated strategies as DGN does and eventually converge to much lower performance.\nTo investigate how network traffic affects the performance of the models, we performed the experiments with heavier data traffic, i.e., N = 40 and L = 20, where all the models are directly applied to the setting without retraining. From Table 3, we can see that DGN is much better than Floyd with BL, and MFQ is also better than Floyd with BL. The reason is that Floyd with BL (i.e., simply following the shortest path) is favorable when traffic is light and congestion is rare, while it does not work well when traffic is heavy and congestion easily occurs. We further apply all the models learned in N = 20 and L = 20 to the setting of N = 60 and L = 20. DGN still outperforms Floyd with BL, while MFQ become worse than Floyd with BL. It is observed in the experiments that DGN without retraining outperforms Floyd with BL up to N = 140 and L = 20, available in Appendix. 
" }, { "heading": "5 CONCLUSIONS", "text": "We have proposed graph convolutional reinforcement learning. DGN adapts to the dynamics of the underlying graph of the multi-agent environment and exploits convolution with relation kernels to extract latent features from gradually increased receptive fields for learning cooperative strategies. Moreover, the relation representation between agents is temporally regularized to make the cooperation more consistent. Empirically, DGN significantly outperforms existing methods in a variety of cooperative multi-agent scenarios." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work was supported in part by NSF China under grant 61872009, Huawei Noah's Ark Lab, and Peng Cheng Lab." }, { "heading": "A HYPERPARAMETERS", "text": "Table 4 summarizes the hyperparameters used by DGN and the baselines in the experiments." }, { "heading": "B EXPERIMENTAL SETTINGS", "text": "In jungle, the reward is 0 for moving, +1 for attacking (eating) the food, +2 for attacking another agent, −4 for being attacked, and −0.01 for attacking a blank grid. In battle, the reward is +5 for attacking the enemy, −2 for being killed, and −0.01 for attacking a blank grid. In routing, the bandwidth of each link is the same and set to 1. Each data packet has a random size between 0 and 1. If the link to the next hop selected by a data packet is overloaded, the data packet will stay at the current router and be punished with a reward of −0.2. Once the data packet arrives at the destination, it leaves the system and gets a reward of +10. In the experiments, we fix the size of B to 3, because DGN is currently implemented based on TensorFlow, which does not support dynamic computation graphs (i.e., a varying size of B). We also show how different sizes of B affect DGN's performance in the following. Indeed, DGN adapts to dynamic environments, no matter how the number of agents changes, how the graph of agents changes, and how many neighbors each agent has." }, { "heading": "C ADDITIONAL EXPERIMENTS", "text": "As aforementioned, a larger receptive field yields a higher degree of centralization that mitigates non-stationarity. We also investigate this in the experiments. First, we examine how DGN performs with different numbers of convolutional layers. As illustrated in Figure 8, two convolutional layers indeed yield a more stable learning curve than one layer, as expected.
We also investigate how the size of the neighborhood |B| affects the performance of DGN. We set |B| of each agent to 1, 2, 3 and 4 in jungle. As illustrated in Figure 9, when |B| increases from 1 to 3, the performance improves. However, when |B| = 4, the performance drops to a level equivalent to |B| = 1. In addition, as shown in Figure 6, the full communication method, CommNet, has very limited performance. These results verify that taking all other agents into consideration may be less helpful and can even negatively affect performance.
Ideally, the Q function should be learned on the changing graph of agents. However, the quickly changing graph can make the Q function difficult to converge. Thus, we fix the graph in two successive timesteps to mitigate the effect of the changing graph and ease the learning difficulty.
As shown in Figure 10, the learning curve of the fixed graph indeed converges faster than that of the unfixed graph. As keeping the graph of agents unchanged is necessary for temporal relation regularization, for a fair comparison, we also remove temporal relation regularization for the fixed graph.
We also perform additional experiments to compare DGN with ATOC and TarMAC in battle. As shown in Figure 11, DGN outperforms ATOC. The reason is that the LSTM kernel is worse than the multi-head attention kernel at capturing relations between agents. Like CommNet, TarMAC is also a full communication method. Similarly, DGN also outperforms TarMAC. This again verifies that receiving redundant information may negatively affect performance.
We also conducted additional experiments in routing to compare DGN (learned in the setting of N = 20 and L = 20) and Floyd with BL under increasingly heavier traffic, in terms of mean delay. As shown in Figure 12, DGN continuously outperforms Floyd with BL up to N = 140. After that, Floyd with BL outperforms DGN. The reason is that when the traffic becomes that heavy, the network is fully congested and there is no way to improve the performance. DGN learned in much lighter traffic may still try to find better routes, but this incurs extra delay.
Figure 13 and Figure 14 show the learning curves of DGN, DGN-R and DGN-M in jungle and routing, respectively. We can see that DGN and DGN-R outperform DGN-M. DGN slightly outperforms DGN-R in routing, and they perform similarly in jungle. The reason is that in both jungle and routing the agents do not require cooperation consistency as much as in battle. In battle, the agents need to cooperatively and consistently attack an enemy since it has six hit points. However, in jungle, agents seldom move once they reach the status of sharing food, while in routing, data packets (agents) with different destinations seldom share many links (cooperate continuously) along their paths." } ]
2,020
GRAPH CONVOLUTIONAL REINFORCEMENT LEARNING
SP:52cbd90cb8f3de333a12f8c42cad968d36b509c3
[ "This proposes two techniques to replace mixed-precision arithmetic with half-precision training for a large part of the training process. In the first approach, the authors simply switch all mixed-precision operations with half-precision operations, and can achieve performances slightly lower than SOTA. In the second approach, the authors propose to dynamically switch between mixed-operations and half-precision operations during training. The authors claim that this second approach can match SOTA results while using half-precision arithmetic for more than 94% of training.", "The author(s) propose to accelerate the training of deep neural networks while also maintain the performance of the trained model by switching between fully half-precision computation and mixed-precision computation. Compared to the commonly-used mixed-precision training strategy, the proposed method can accelerate the training speed. Besides, on an image classification task, models trained by the proposed method achieve comparable performance with those trained by mix-precision training or full-precision training." ]
Mixed-precision arithmetic combining both single- and half-precision operands on the same operation has been successfully applied to train deep neural networks. Despite the advantages of mixed-precision arithmetic in terms of reducing the need for key resources like memory bandwidth or register file size, it has a limited capacity for diminishing computing costs and requires 32 bits to represent its output operands. This paper proposes two approaches to replace mixed-precision with half-precision arithmetic during a large portion of the training. The first approach achieves accuracy ratios slightly lower than the state-of-the-art by using half-precision arithmetic during more than 99% of training. The second approach reaches the same accuracy as the state-of-the-art by dynamically switching between half- and mixed-precision arithmetic during training. It uses half-precision during more than 94% of the training process. This paper is the first to demonstrate that half-precision can be used for a very large portion of DNN training and still reach state-of-the-art accuracy.
[]
[ { "authors": [ "Yohan Chatelain", "Eric Petit", "Pablo de Oliveira Castro", "Ghislain Lartigue", "David Defour" ], "title": "Automatic exploration of reduced floating-point representations in iterative methods", "venue": "In Euro-Par", "year": 2019 }, { "authors": [ "Matthieu Courbariaux", "Yoshua Bengio", "Jean-Pierre David" ], "title": "Training deep neural networks with low precision multiplications", "venue": "(Section 5):1–10,", "year": 2014 }, { "authors": [ "P.D.A. Dawson" ], "title": "Düben. rpe v5: an emulator for reduced floating-point precision in large numerical simulations", "venue": "Geoscientific Model Development,", "year": 2017 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "In CVPR09,", "year": 2009 }, { "authors": [ "Suyog Gupta", "Ankur Agrawal", "Kailash Gopalakrishnan", "Pritish Narayanan" ], "title": "Deep learning with limited numerical precision", "venue": "CoRR, abs/1502.02551,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "Proceedings of the IEEE International Conference on Computer Vision, 2015 International Conference on Computer Vision, ICCV 2015:", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep Residual Learning for Image Recognition", "venue": null, "year": 2015 }, { "authors": [ "Geoffrey Hinton", "Li Deng", "Dong Yu", "George E Dahl", "Abdel-rahman Mohamed", "Navdeep Jaitly", "Andrew Senior", "Vincent Vanhoucke", "Patrick Nguyen", "Tara N Sainath" ], "title": "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups", "venue": "IEEE Signal Processing Magazine,", "year": 2012 }, { "authors": [ "James Jeffers", "James Reinders", "Avinash Sodani" ], "title": "Intel Xeon Phi Processor High Performance Programming: Knights Landing Edition 2Nd Edition", "venue": null, "year": 2016 }, { "authors": [ "Dhiraj Kalamkar", "Dheevatsa Mudigere", "Naveen Mellempudi", "Dipankar Das", "Kunal Banerjee", "Sasikanth Avancha", "Dharma Teja Vooturi", "Nataraj Jammalamadaka", "Jianyu Huang", "Hector Yuen", "Jiyan Yang", "Jongsoo Park", "Alexander Heinecke", "Evangelos Georganas", "Sudarshan Srinivasan", "Abhisek Kundu", "Misha Smelyanskiy", "Bharat Kaul", "Pradeep Dubey" ], "title": "A Study of BFLOAT16 for Deep Learning Training", "venue": "URL http://arxiv.org/abs/1905.12322", "year": 2019 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2012 }, { "authors": [ "A.J. Lawrance", "P.A.W. 
Lewis" ], "title": "An exponential moving-average sequence and point process (ema1)", "venue": "Journal of Applied Probability,", "year": 1977 }, { "authors": [ "Chi Keung Luk", "Robert Cohn", "Robert Muth", "Harish Patil", "Artur Klauser", "Geoff Lowney", "Steven Wallace", "Vijay Janapa Reddi", "Kim Hazelwood" ], "title": "Pin: Building customized program analysis tools with dynamic instrumentation", "venue": "ACM SIGPLAN Notices,", "year": 2005 }, { "authors": [ "Nigel Stephens", "Stuart Biles", "Matthias Boettcher", "Jacob Eapen", "Mbou Eyole", "Giacomo Gabrielli", "Matt Horsnell", "Grigorios Magklis", "Alejandro Martinez", "Nathanael Premillieu", "Alastair Reid", "Alejandro Rico", "Paul Walker" ], "title": "The arm scalable vector extension", "venue": "IEEE Micro,", "year": 2017 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In Computer Vision and Pattern Recognition (CVPR),", "year": 2015 }, { "authors": [ "Naigang Wang", "Jungwook Choi", "Daniel Brand", "Chia Yu Chen", "Kailash Gopalakrishnan" ], "title": "Training deep neural networks with 8-bit floating point numbers", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Shibo Wang", "Pankaj Kanwar" ], "title": "Bfloat16: The secret to high performance on cloud tpus. 2019", "venue": "URL https://cloud.google.com/blog/products/ai-machine-learning/ bfloat16-the-secret-to-high-performance-on-cloud-tpus", "year": 2019 }, { "authors": [ "Yonghui Wu", "Mike Schuster", "Zhifeng Chen", "Quoc V Le", "Mohammad Norouzi", "Wolfgang Macherey", "Maxim Krikun", "Yuan Cao", "Qin Gao", "Klaus Macherey" ], "title": "Google’s neural machine translation system: Bridging the gap between human and machine translation", "venue": "arXiv preprint arXiv:1609.08144,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "The use of Deep Neural Networks (DNNs) is becoming ubiquitous in areas like computer vision (Krizhevsky et al., 2012; Szegedy et al., 2015), speech recognition (Hinton et al., 2012), or language translation (Wu et al., 2016). DNNs display very remarkable pattern detection capacities and, more specifically, Convolutional Neural Networks (CNNs) are able to accurately detect and classify objects over very large image sets (Krizhevsky et al., 2012). Despite this success, a large amount of samples must be exposed to the model for tens or even hundreds of times during training until an acceptable accuracy threshold is reached, which drives up training costs in terms of resources like memory storage or computing time.\nTo mitigate these very large training costs, approaches based on data representation formats simpler than the Floating Point 32-bit (FP32) standard have been proposed (Courbariaux et al., 2014; Gupta et al., 2015). These approaches successfully mitigate the enormous training costs of DNNs by using data representation formats that either reduce computing costs or diminish the requirements in terms of memory storage and bandwidth. In particular, some of these proposals have shown the benefits of combining half-precision and single-precision compute during training in terms of keeping model accuracy and reducing compute and memory costs (Micikevicius et al., 2017; Kalamkar et al., 2019). These approaches accelerate linear algebra operations by accumulating half-precision input operands to generate 32-bit outputs. While this mixed-precision (MP) arithmetic can successfully reduce the use of resources like memory bandwidth or hardware components like register file size, it has a very limited capacity for diminishing computing costs and it is unable to reduce output data size.\nIn this paper we propose new training methodologies able to exclusively use half-precision for a large part of the training process, which constitutes a very significant improvement over mixedprecision approaches in terms of compute and memory bandwidth requirements. We propose two different approaches, the first one statically assigns either the Brain Floating Point 16-bit (BF16) or the FP32 format to the model parameters involved in the training process, while the second dynamically switches between BF16 and MP during training depending on its progress. Our approaches\ndo not require mixed-precision arithmetic while computing linear algebra operations for a large portion of the training process, which enables them to deliver the same performance as if they were operating with half-precision arithmetic during the whole training while providing the same model accuracy as if FP32 was used. This paper is the first in demonstrating that half-precision can be extensively used during DNNs training without the need for mixed-precision arithmetic. We made our code available1." }, { "heading": "2 BACKGROUND ON MIXED-PRECISION APPROACHES AND MOTIVATION", "text": "Mixed-Precision training has been extensively explored in recent years. Approaches mixing Floating Point 16-bit (FP16) and FP32 datatypes have been proposed (Micikevicius et al., 2017). In these approaches, multiplications of FP16 parameters are accumulated in FP32 registers to minimize data representation range and precision issues. 
Importantly, relevant phases of the training process like computing weight updates (WU) or dealing with batch normalization (BN) layers entirely use FP32, which implies that an FP32 representation of network weights and biases is kept during the whole training. This approach requires some additional computations to enforce that FP32 values are converted to FP16 without data representation range issues. This approach is used by Nvidia Tesla V100 GPUs via mixed-precision computing units called tensor cores, which are able to multiply FP16 parameters and store the results in FP32. Figure 1a displays the most fundamental operation of this approach combining FP16 and FP32, the mixed-precision Fused Multiply-Add (FMA) instruction, which computes D = A · B + C. Input parameters A and B are represented in the FP16 format. The result of the A · B operation is kept in FP32 and added to the C parameter, which is represented in FP32 as well. The final output D is also represented in FP32. FMA instructions constitute around 60% of the whole training workload for several relevant CNN models, as Section 3 shows.\nA more recent approach proposes mixed-precision arithmetic combining BF16 and FP32 (Kalamkar et al., 2019). It is very close to its FP16 counterpart with the exception of the full-to-half precision conversion. Since BF16 has the same data representation range as FP32, conversion from full to half precision is very simple in this case, since it just requires applying the Round to Nearest Even (RNE) technique. This approach also processes WU and BN layers with FP32. Figure 1b shows a representation of a mixed-precision FMA combining BF16 and FP32. It is very close to the previously described FP16-FP32 FMA, with the only difference being the data representation format of input parameters A and B.\nWhile mixed-precision FMA instructions bring significant benefits since they require less memory bandwidth and register storage than FP32 FMAs, there is still a large margin for improvement if an entirely BF16 FMA like the one represented in Figure 1c could be extensively used for training purposes. First, since a BF16 FMA requires exactly one half of the register storage of an FP32 FMA, it doubles its Single Instruction Multiple Data (SIMD) vectorization capacity and, therefore, it may significantly increase its FMA instructions per cycle ratio. Extensions of Instruction Set Architectures (ISA) to allow SIMD parallelism are becoming a key element for floating-point performance, which has motivated major hardware vendors to include them in their products (Jeffers et al., 2016; Stephens et al., 2017). Finally, BF16 FMA instructions also bring significant reductions in terms of memory bandwidth since they involve 50% and 25% less data than FP32 and MP FMAs, respectively. While half-precision arithmetic has not been used to train DNNs due to its lack of training convergence, this paper describes two techniques to fully use it while keeping the same convergence properties as FP32. This paper analyzes in detail 3 relevant training workloads in Section 3, and applies these findings to build its two main contributions in Section 4." },
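Before the workload analysis, it may help to make the RNE conversion and the three FMA flavors of Figure 1 concrete. The following NumPy sketch is our own illustration (the names to_bf16 and fma are ours, and NaN/Inf and unsigned-overflow corner cases are ignored); the paper's actual emulation is the PIN-based tool described in Section 5.2.

```python
import numpy as np

def to_bf16(x):
    """Round FP32 values to BF16 (kept in FP32 storage) using round-to-nearest-even:
    add a bias of 0x7FFF plus the LSB of the retained half, then drop the low 16 bits."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    lsb = (bits >> np.uint32(16)) & np.uint32(1)
    rounded = (bits + np.uint32(0x7FFF) + lsb) & np.uint32(0xFFFF0000)
    return rounded.view(np.float32)

def fma(a, b, c, mode="MP"):
    """Emulate D = A * B + C under the three operation modes of Figure 1."""
    a32, b32, c32 = (np.asarray(v, dtype=np.float32) for v in (a, b, c))
    if mode == "FP32":   # all operands and the output in FP32
        return a32 * b32 + c32
    if mode == "MP":     # Figure 1b: BF16 inputs A and B, FP32 accumulator and output
        return to_bf16(a32) * to_bf16(b32) + c32
    if mode == "BF16":   # Figure 1c: BF16 inputs and a BF16 output
        return to_bf16(to_bf16(a32) * to_bf16(b32) + to_bf16(c32))
    raise ValueError(f"unknown mode: {mode}")
```

Under this emulation, the MP mode halves the input traffic of an FP32 FMA but still produces 32-bit outputs, while the BF16 mode halves both, which is exactly the gap the two proposed techniques exploit.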
{ "heading": "3 WORKLOAD ANALYSIS", "text": "We consider three CNN models: AlexNet, Inception V2, and ResNet-50. Section 5 describes the exact way we use these models and the methodology we follow to analyze their training workload. Figure 2a shows an instruction breakdown of processing one batch on these networks. This figure shows how floating-point instructions constitute a large portion of these workloads. For example, they represent 58.44% of the total in the case of AlexNet. A very large portion of these floating-point instructions, 57.42% of the total, are FMA instructions. In the cases of Inception and ResNet-50, FMA instructions represent 60.93% and 62.95% of the total, respectively. Therefore, FMA instructions constitute a large portion of the whole training workload, while other FP32 instructions represent a small instruction count that remains below 1.1% for these three CNNs. This justifies focusing on FMA instructions, as executing them in half-precision has a large potential for performance improvement.\nPrior research (Micikevicius et al., 2017; Kalamkar et al., 2019) describes the need for using 32-bit arithmetic in Weight Updates (WU) and Batch Normalization (BN) layers when using training approaches based on mixed-precision arithmetic. We run an experimental campaign to confirm this observation and to measure the number of instructions devoted to WU and BN. For the case of ResNet-50, this instruction count is around 30 million instructions per batch, that is, just 0.04% of the FP instructions. AlexNet and Inception V2 produce similar results. In conclusion, reducing the cost of FMA instructions has a high potential for very significant performance improvements, even if WU and BN layers are computed using full-precision arithmetic.\nProcessing one training batch for the cases of AlexNet, Inception and ResNet-50 requires running 53.3, 37.2, and 70.0 billion dynamic instructions per batch, respectively. The number of model parameters drives the size of these workloads. AlexNet is trained with a batch size of 256, while Inception and ResNet-50 use a batch size of 64." }, { "heading": "4 PROPOSAL", "text": "We propose two training methodologies that rely exclusively on half-precision BF16 for a large portion of the training process, i.e., a large portion of FMA instructions. Prior mixed-precision approaches preclude large gains in computing costs as some of the data elements remain in FP32. However, an FMA entirely relying on BF16 can potentially double the SIMD vectorization throughput of current processors and alleviate memory bandwidth requirements.\nWe first propose a scheme that performs all FMA instructions in BF16 (see Figure 1c) except those involved in computing WU and processing BN layers, which are entirely performed in FP32. While this method might not deliver the desired level of accuracy for all CNNs, Section 6 shows how it behaves remarkably well for the Inception V2 model, since it obtains the same level of accuracy as state-of-the-art training using MP and FP32.\nHowever, some CNNs cannot entirely rely on half-precision arithmetic during training. For example, Figure 2b shows the top-1 accuracy achieved by three training techniques during 15 epochs for ResNet-50. The first technique (referred to as FP32 in Figure 2b) relies entirely on FP32 arithmetic, the second approach (referred to as MP in Figure 2b) represents the state-of-the-art mixed-precision training (Kalamkar et al., 2019), and the third approach (referred to as BF16 in Figure 2b) performs all FMA instructions in BF16 except for WU and BN.
While the BF16 approach behaves relatively well, it displays lower accuracy than MP and FP32 for all the epochs, which indicates the need for an approach able to take advantage of BF16 arithmetic while delivering the same accuracy results as mixed- or full-precision approaches. The methodology we use to generate Figure 2b is described in Section 5.\nOur second contribution dynamically switches between MP and BF16 to deliver the same accuracy as MP while relying on BF16 FMAs during a large portion of the training process. Algorithm 1 displays high-level pseudo-code of our proposal. It starts the training process using the state-of-the-art mixed-precision approach (Kalamkar et al., 2019) for several batches, defined by the numBatchesMP parameter. Then, it computes the Exponential Moving Average (EMA) (Lawrance & Lewis, 1977) of the training loss and, if its reduction is larger than a certain threshold (the emaThreshold parameter), it computes the next numBatchesBF16 batches using BF16 FMAs, except for WU and BN. Once training has gone through these numBatchesBF16 batches, our algorithm checks the EMA and compares its reduction with the emaThreshold parameter. If this reduction is not large enough, the algorithm switches back to MP arithmetic. Otherwise, it keeps using BF16 arithmetic for numBatchesBF16 batches before checking the EMA again." }, { "heading": "5 EXPERIMENTAL METHODOLOGY", "text": "" }, { "heading": "5.1 EXPERIMENTAL SETUP", "text": "Our experiments are performed on Intel Xeon Platinum 8160 processors, which include the AVX512 ISA. We use the Intel-Caffe (Intel, a) framework (version 1.1.6a). We use the Intel MKL-DNN (Intel, c) (version 0.18.0) Deep Neural Network library and the Intel MKL library (Intel, b) (version 2019.0.3) to run numerical kernels, since both libraries are optimized to perform well on our testing infrastructure. Finally, to define and run the experiments we use the pyCaffe Python interface, which takes care of loading the data and orchestrating the execution." }, { "heading": "5.2 EMULATION OF BF16 USING DYNAMIC BINARY INSTRUMENTATION", "text": "Due to the lack of available hardware implementing the BF16 numerical format, we rely on an emulation technique to perform our experiments. Several approaches have been used in the past to emulate the behaviour of reduced floating-point representations, most notably via libraries that perform transformations like truncation and rounding (Chatelain et al., 2019; Dawson & Düben, 2017; Kalamkar et al., 2019). We develop a binary analysis tool based on PIN 3.7 (Luk et al., 2005). Our tool captures and instruments dynamic instructions, which enables adapting numerical operands to the targeted numerical data format. Our approach seamlessly works on complex frameworks like PyTorch, Tensorflow, or Caffe, with interpreted languages, and is able to instrument instructions triggered by dynamically linked libraries. Our binary analysis tool performs the following steps:\n• It checks the current operation mode, which can be FP32, MP, or BF16 (see Figure 1).\n• It checks the current execution routine to determine if we are executing routines that belong to WU or BN layers. If that is the case, computation proceeds with FP32.\n• The tool intercepts the dynamic instructions of the workload and detects all floating-point operations, including FMAs. For each FMA instruction, operands that need to be rounded to BF16, depending on the current operation mode, are rounded using the RNE algorithm.
• The tool can dynamically change its operation mode anytime via a simple inter-process communication method that can be invoked from the Python high-level interface.\nAlgorithm 1 Dynamic Precision\n1: numBatchesMP ← 10 ▷ Number of consecutive MP batches\n2: numBatchesBF16 ← 1000 ▷ Number of consecutive BF16 batches\n3: emaThreshold ← 0.04 ▷ Defines the EMA reduction threshold\n4:\n5: precisionModeBF16 ← False ▷ Indicates current precision mode; True means BF16\n6: countBatchesBF16 ← 0 ▷ Counts how many numBatchesBF16 have been executed\n7: numBatchesTrain ← numBatchesMP ▷ Number of batches per training loop iteration\n8:\n9: for i = 0 to niter do ▷ Training loop: niter depends on the number of epochs\n10:   train.step(numBatchesTrain) ▷ Execute numBatchesTrain batches in the current precision mode\n11:   trainingLoss[i] ← train.trainingLoss\n12:   if (i = 5) then ▷ Initial history to calculate the EMA\n13:     EMA ← average(trainingLoss)\n14:   if (i > 5) then\n15:     EMAprev ← EMA\n16:     EMA ← emaCalculation(trainingLoss, EMAprev) ▷ Each numBatchesMP\n17:     if (precisionModeBF16 != True) then\n18:       if ((EMAprev − EMA) > emaThreshold) then ▷ If training loss goes down\n19:         precisionModeBF16 ← True\n20:         changeToBF16() ▷ Switch precision to BF16\n21:     else\n22:       countBatchesBF16 ← countBatchesBF16 + numBatchesTrain\n23:       if (countBatchesBF16 = numBatchesBF16) then\n24:         if ((EMAprev − EMA) > emaThreshold) then ▷ If training loss goes down\n25:           countBatchesBF16 ← 0 ▷ Stay in BF16 precision\n26:         else ▷ If training loss stagnates\n27:           precisionModeBF16 ← False\n28:           changeToMP() ▷ Switch precision to MP\n29:           countBatchesBF16 ← 0\nTo mitigate the overhead of our binary analysis tool, we implement two optimizations: First, we vectorize the truncation and rounding routines via AVX512 instructions. Second, we avoid redundant rounding and truncation operations by identifying instructions belonging to the same basic block that share some input operands already stored in the register file. These two optimizations reduce the overhead of the tool from 100× to 25× with respect to native runs of the binary on real hardware." }, { "heading": "5.3 DYNAMIC AND STATIC TECHNIQUES", "text": "This paper considers two different types of training techniques: static schemes and dynamic schemes. When using static schemes, the training procedure uses the same data representation format for a given parameter during its complete execution. For example, the three techniques displayed in Figure 2b are static. We define the following schemes:\n• MP: FMA instructions belonging to WU and BN layers always use FP32 precision. The remaining FMA instructions use the mixed-precision approach represented in Figure 1b. This scheme replicates prior work on mixed-precision (Kalamkar et al., 2019).\n• BF16: FMA instructions belonging to WU and BN layers always use FP32 precision. The remaining FMA instructions use BF16 operands to multiply and to accumulate (Figure 1c).\nThe BF16 method is the first contribution of this paper. It extensively uses half-precision arithmetic while displaying good convergence properties.\nThe Dynamic scheme we propose in this paper switches between the MP and BF16 static techniques during training, as explained in Section 4 and detailed in Algorithm 1. This dynamic method improves the training convergence properties of BF16 while still relying on half-precision FMAs for a very large portion of the execution.\nThe EMA threshold (emaThreshold) is set at 4%. This value is computed as the average EMA reduction when using FP32 computations.
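For concreteness, the control flow of Algorithm 1 can be sketched in Python as below. Here train_step and set_precision are hypothetical stand-ins for the framework hook and for the tool's changeToBF16/changeToMP inter-process calls, and the EMA update shown is one standard exponential-moving-average choice; this is a sketch of the policy, not the paper's implementation.

```python
def dynamic_precision(train_step, set_precision, n_iter,
                      num_batches_mp=10, num_batches_bf16=1000,
                      ema_threshold=0.04, alpha=0.1):
    """Switch between MP and BF16 arithmetic based on the EMA of the training loss."""
    losses, ema_prev, ema_cur = [], None, None
    bf16_mode, count_bf16 = False, 0
    set_precision("MP")                                  # start in mixed precision
    for i in range(n_iter):
        losses.append(train_step(num_batches_mp))        # loss after a chunk of batches
        if i == 5:                                       # build the initial EMA history
            ema_cur = sum(losses) / len(losses)
        elif i > 5:
            ema_prev = ema_cur
            ema_cur = alpha * losses[-1] + (1 - alpha) * ema_prev
            if not bf16_mode:
                if ema_prev - ema_cur > ema_threshold:   # loss still dropping: go BF16
                    bf16_mode, count_bf16 = True, 0
                    set_precision("BF16")
            else:
                count_bf16 += num_batches_mp
                if count_bf16 >= num_batches_bf16:
                    if ema_prev - ema_cur > ema_threshold:
                        count_bf16 = 0                   # keep training in BF16
                    else:                                # loss stagnates: back to MP
                        bf16_mode, count_bf16 = False, 0
                        set_precision("MP")
```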
The minimum number of batches to be performed in BF16, defined by the numBatchesBF16 parameter, is set to 1,000, which precludes frequent unnecessary transitions between the two schemes. We set the numBatchesMP parameter to 10, which keeps the number of batches using the MP regime low while keeping its benefits in terms of convergence." }, { "heading": "5.4 CONVOLUTIONAL NEURAL NETWORK MODELS", "text": "To evaluate our proposals we consider the AlexNet (Krizhevsky et al., 2012), Inception V2 (Szegedy et al., 2015) and ResNet-50 (He et al., 2015b) models. They are representative of the CNN state-of-the-art.\nWe use the ImageNet database (Deng et al., 2009) as training input. To keep execution times manageable when using our binary instrumentation tool, we run the experiments using a reduced ImageNet database, similar to the Tiny ImageNet Visual Recognition challenge data set (Fei-Fei). Therefore, we use 256,000 images divided into 200 categories for training, and 10,000 images for validation. The images are not modified in terms of size. All the evaluated CNN models remain unmodified; the only change is loading the reduced dataset.\nAlexNet is selected due to its simplicity in terms of structure and amount of required computations. To train AlexNet we consider a batch size of 256 and a base learning rate of 0.01, which is adjusted every 20 epochs, with a weight decay of 0.0005 and a momentum of 0.9. This model is trained for 32 epochs.\nWe use Inception because it is a model conceived to reduce computational costs via cheap 1x1 convolutions. To train it we use a batch size of 64 and a base learning rate of 0.045, which is updated every 427 steps (0.11 epochs). The gamma, momentum and weight decay are set to 0.96, 0.9, and 0.0002, respectively. The training process is executed for 16 epochs.\nFinally, we use ResNet-50. It is a network that delivers good accuracy and avoids the vanishing gradients issue by using residual blocks and the MSRA initializer (He et al., 2015a). We train it using a multi-step approach. The batch size is 64 and the base learning rate is 0.05, which is updated every 30 epochs. The gamma hyperparameter, momentum value, and weight decay are set to 0.1, 0.9, and 0.0001, respectively. The training process runs for a total of 32 epochs." }, { "heading": "6 EVALUATION", "text": "Figure 3 and Table 1 show results from our evaluation campaign. The x-axis of the three plots belonging to Figure 3 represents the epochs of the training process, while the y-axis represents the accuracy reached by the model over the validation set. Table 1 shows the test accuracy we reach for the three network models when using the FP32 and MP baselines and our two contributions: BF16 and Dynamic.\nThe AlexNet model, due to its structure, shows a good response when lower-precision numerical data types are used. As can be seen in Figure 3a, all techniques converge, although the BF16 approach shows the worst accuracy when compared to the Dynamic or the MP techniques. Table 1 shows that FP32, MP, Dynamic, and BF16 reach top-5 accuracies of 84.50%, 84.43%, 84.02%, and 82.56% for AlexNet after 32 epochs. Importantly, Dynamic reaches the same accuracy as FP32 and MP while using the BF16 approach for 94.60% of the FMAs. In contrast, the BF16 static technique does 99.93% of the FMAs in full BF16 precision (0.07% are in WU and BN layers), but the accuracy drops by almost 3% in top-1 and 2% in top-5. This drop in accuracy happens just by doing an additional 5% of the FMAs in BF16.
This gives us some room to improve the Dynamic approach by reducing the percentage of BF16 FMAs, with the objective of increasing the accuracy of the model.\nFigure 3b shows the validation accuracy during 16 epochs for the Inception V2 model. It shows fluctuations in the accuracy evaluation during training due to its structure and hyperparameter tuning. Dynamic responds in a robust way to these changes, which highlights its general applicability. Table 1 shows that FP32, MP, Dynamic, and BF16 reach top-5 accuracies of 93.36%, 92.67%, 92.02%, and 92.05% for Inception V2 after 16 epochs.\nFinally, the evaluation on ResNet-50 demonstrates that the Dynamic approach is effective when applied to deeper CNNs. In this case, the accuracy of the model reaches state-of-the-art levels while using half-precision for 96.4% of the FMA instructions. Figure 3c and Table 1 display the exact accuracy numbers we get from our evaluation after 32 epochs. In this experiment the top-1 accuracy drops just 1.2% between the BF16 and Dynamic approaches; however, the Dynamic technique could be improved by relaxing the quantity of BF16 FMAs executed to gain more accuracy." }, { "heading": "6.1 SENSITIVITY ANALYSIS FOR DYNAMIC PRECISION ALGORITHM", "text": "We provide a sensitivity analysis for the parameters employed in Algorithm 1. The objective is to show that for a range of reasonable parameters the algorithm behaves as expected. To do this analysis we set one of the parameters to the currently used value (numBatchesMP to 10) to have a manageable number of combinations. We then test all the possible combinations using numBatchesBF16 = {500, 1000, 2000} and emaThreshold = {0.02, 0.04, 0.08}, that is, a total of 9 different combinations. As stated in Section 5.3, during our evaluation we used the configuration {numBatchesMP, numBatchesBF16, emaThreshold} = {10, 1000, 0.04} for all the evaluated networks. Figure 4 shows, for a number of ResNet-50 epochs, the accuracy obtained for each of the 9 tested configurations as part of the sensitivity analysis. The naming convention for these configurations is Dyn-<emaThreshold> <numBatchesBF16>. In addition, we include accuracy for BF16, MP, and FP32 executions.\nAs shown in the figure, the accuracies obtained at each epoch are always above that of the BF16 technique. For early epochs (i.e., 2 and 4) the dynamic configurations remain between BF16 and FP32 accuracy, or even slightly above FP32, due to initial noise. As training advances, all dynamic techniques behave similarly and present accuracies that are above BF16 and similar to those obtained with MP and FP32, as we would expect. The most important parameter is the emaThreshold, as it decides when a precision change occurs. As long as this parameter is reasonably set to detect training loss improvement or degradation, the algorithm is bound to behave as expected." }, { "heading": "7 RELATED WORK", "text": "Prior work indicates that dynamic fixed-point is effective for training deep neural networks with low-precision multipliers (Courbariaux et al., 2014). This approach obtains state-of-the-art results by uniformly applying the dynamic fixed-point format with different scaling factors, which are driven by the overflow rate displayed by the fixed-point numbers. Our proposals target deeper neural networks than this approach and do not uniformly apply the same format to all network parameters.
Instead, we distinguish computations requiring FP32 during the whole training, like weight updates, from the ones that are well-suited for dynamic data representation schemes.\nPrevious approaches show the benefits of applying stochastic rounding to 16-bit fixed-point multiply and add operators (Gupta et al., 2015). This previous work relies on FPGA emulation to show the benefits of stochastic rounding when applied to a custom fully connected neural network dealing with the MNIST dataset. The authors also consider a CNN similar to LeNet-5 to enlarge their experimental campaign.\nOther prior work proposes training DNNs using 8-bit floating-point numbers (Wang et al., 2018). It relies on a combination of 8-bit and 16-bit arithmetic, and additionally uses stochastic rounding to obtain state-of-the-art results. The neural networks used in this previous approach are much simpler than the ones we consider in this paper, which do not allow 8-bit arithmetic.\nThe BF16 numerical format has been applied to special-purpose hardware targeting deep neural networks (Wang & Kanwar, 2019). This hardware uses an approach very similar to the mixed-precision techniques described by Kalamkar et al. (2019) and, consequently, our Dynamic approach can be applied on top of it to reduce computing costs." }, { "heading": "8 CONCLUSIONS AND FUTURE WORK", "text": "This paper analyzes the instruction breakdown of deep neural network training workloads that rely on mixed-precision training. We show that mixed-precision FMAs constitute around 60% of these workloads and propose two approaches based on half-precision FMAs to accelerate the training process without hurting accuracy.\nThe first approach uses BF16 FMAs for most of the training workload, except routines involved in weight updates or batch normalization layers. This approach uses BF16 for more than 99% of the FMAs, which has a very strong potential for performance improvement, while reaching slightly lower accuracy than the state-of-the-art. We propose a second approach that dynamically switches between different data representation formats. This dynamic approach uses BF16 for around 96% of the FMAs while reaching the same accuracy levels as the standard single-precision and mixed-precision approaches.\nOur two proposals are evaluated considering three state-of-the-art deep neural networks and a binary analysis tool that applies the required precision to each instruction. To the best of our knowledge, this is the first paper that demonstrates that half-precision can be used extensively, on ≥94% of all FMAs, during the training of very deep models without the need for mixed-precision arithmetic." } ]
2019
SION TO ACCELERATE DEEP NEURAL NETWORK TRAINING
SP:d11f0eb42f1ea12686290a2936b5aee262c8d84d
[ "It is known that GNNs are vulnerable to the oversmoothing problem, in which feature vectors on nodes get closer as we increase the number of (message passing type graph convolution layers). This paper proposed PairNorm, which is a normalization layer for GNNs to tackle this problem. The idea is to pull apart feature vectors on a pair of non-adjacent nodes (based on the interpretation of Laplace-type smoothing by NT and Maehara (2019)). To achieve this approximately with low computational complexity, PairNorm keeps the sum of distances of feature vectors on all node pairs approximately the same throughout layers. The paper conducted empirical studies to evaluate the effectiveness of the method. PairNorm improved the prediction performance and enabled make GNNs deep, especially when feature vectors are missing in the large portion of nodes (the SSNC-MV problem).", "The article \"PairNorm: Tackling Oversmoothing in GNNs\" considers the interesting phenomenon of performance degradation of graph neural network when the depth of the network increases beyond the values of 2-4. The authors argue that one of the reasons for such behavior is so-called \"oversmoothing\", when intermediate representations become similar for all the nodes in the graph. The authors propose the special NN layer \"PairNorm\", which aims to battle with this issue." ]
The performance of graph neural nets (GNNs) is known to gradually decrease with increasing number of layers. This decay is partly attributed to oversmoothing, where repeated graph convolutions eventually make node embeddings indistinguishable. We take a closer look at two different interpretations, aiming to quantify oversmoothing. Our main contribution is PAIRNORM, a novel normalization layer that is based on a careful analysis of the graph convolution operator, which prevents all node embeddings from becoming too similar. What is more, PAIRNORM is fast, easy to implement without any change to network architecture nor any additional parameters, and is broadly applicable to any GNN. Experiments on real-world graphs demonstrate that PAIRNORM makes deeper GCN, GAT, and SGC models more robust against oversmoothing, and significantly boosts performance for a new problem setting that benefits from deeper GNNs. Code is available at https://github.com/LingxiaoShawn/PairNorm.
[ { "affiliations": [], "name": "TACKLING OVERSMOOTHING" }, { "affiliations": [], "name": "IN GNNS" }, { "affiliations": [], "name": "Lingxiao Zhao" }, { "affiliations": [], "name": "Leman Akoglu" } ]
[ { "authors": [ "William L. Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep Residual Learning for Image Recognition", "venue": "In Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate", "venue": "shift. CoRR,", "year": 2015 }, { "authors": [ "Thomas N. Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In International Conference on Learning Representations (ICLR). OpenReview.net,", "year": 2017 }, { "authors": [ "Johannes Klicpera", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Combining neural networks with personalized pagerank for classification on graphs", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Guohao Li", "Matthias Müller", "Ali Thabet", "Bernard Ghanem" ], "title": "Can GCNs go as deep as CNNs", "venue": null, "year": 1904 }, { "authors": [ "Qimai Li", "Zhichao Han", "Xiao-Ming Wu" ], "title": "Deeper Insights into Graph Convolutional Networks for Semi-Supervised Learning", "venue": "In Proceedings of the 32nd AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Hoang NT", "Takanori Maehara" ], "title": "Revisiting graph neural networks: All we have is low-pass filters", "venue": null, "year": 1905 }, { "authors": [ "Meng Qu", "Yoshua Bengio", "Jian Tang" ], "title": "Gmnn: Graph markov neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Yu Rong", "Wenbing Huang", "Tingyang Xu", "Junzhou Huang" ], "title": "The truly deep graph convolutional networks for node classification", "venue": null, "year": 1907 }, { "authors": [ "Tim Salimans", "Durk P Kingma" ], "title": "Weight normalization: A simple reparameterization to accelerate training of deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Prithviraj Sen", "Galileo Namata", "Mustafa Bilgic", "Lise Getoor", "Brian Galligher", "Tina Eliassi-Rad" ], "title": "Collective classification in network data", "venue": "AI magazine,", "year": 2008 }, { "authors": [ "Oleksandr Shchur", "Maximilian Mumme", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Pitfalls of graph neural network evaluation", "venue": "arXiv preprint arXiv:1811.05868,", "year": 2018 }, { "authors": [ "Petar Velickovic", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Li", "Yoshua Bengio" ], "title": "Graph attention networks", "venue": "In International Conference on Learning Representations (ICLR). OpenReview.net,", "year": 2018 }, { "authors": [ "Felix Wu", "Amauri H. Souza Jr.", "Tianyi Zhang", "Christopher Fifty", "Tao Yu", "Kilian Q. Weinberger" ], "title": "Simplifying graph convolutional networks", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Keyulu Xu", "Chengtao Li", "Yonglong Tian", "Tomohiro Sonobe", "Ken-ichi Kawarabayashi", "Stefanie Jegelka" ], "title": "Representation Learning on Graphs with Jumping Knowledge Networks", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 } ]
[ { "heading": null, "text": "The performance of graph neural nets (GNNs) is known to gradually decrease with increasing number of layers. This decay is partly attributed to oversmoothing, where repeated graph convolutions eventually make node embeddings indistinguishable. We take a closer look at two different interpretations, aiming to quantify oversmoothing. Our main contribution is PAIRNORM, a novel normalization layer that is based on a careful analysis of the graph convolution operator, which prevents all node embeddings from becoming too similar. What is more, PAIRNORM is fast, easy to implement without any change to network architecture nor any additional parameters, and is broadly applicable to any GNN. Experiments on real-world graphs demonstrate that PAIRNORM makes deeper GCN, GAT, and SGC models more robust against oversmoothing, and significantly boosts performance for a new problem setting that benefits from deeper GNNs. Code is available at https://github.com/LingxiaoShawn/PairNorm." }, { "heading": "1 INTRODUCTION", "text": "Graph neural networks (GNNs) is a family of neural networks that can learn from graph structured data. Starting with the success of GCN (Kipf & Welling, 2017) on achieving state-of-the-art performance on semi-supervised classification, several variants of GNNs have been developed for this task; including GraphSAGE (Hamilton et al., 2017), GAT (Velickovic et al., 2018), SGC (Wu et al., 2019), and GMNN (Qu et al., 2019) to name a few most recent ones.\nA key issue with GNNs is their depth limitations. It has been observed that deeply stacking the layers often results in significant drops in performance for GNNs, such as GCN and GAT, even beyond just a few (2–4) layers. This drop is associated with a number of factors; including the vanishing gradients in back-propagation, overfitting due to the increasing number of parameters, as well as the phenomenon called oversmoothing. Li et al. (2018) was the first to call attention to the oversmoothing problem. Having shown that the graph convolution is a type of Laplacian smoothing, they proved that after repeatedly applying Laplacian smoothing many times, the features of the nodes in the (connected) graph would converge to similar values—the issue coined as “oversmoothing”. In effect, oversmoothing hurts classification performance by causing the node representations to be indistinguishable across different classes. Later, several others have alluded to the same problem (Xu et al., 2018; Klicpera et al., 2019; Rong et al., 2019; Li et al., 2019) (See §5 Related Work). In this work, we address the oversmoothing problem in deep GNNs. Specifically, we propose (to the best of our knowledge) the first normalization layer for GNNs that is applied in-between intermediate layers during training. Our normalization has the effect of preventing the output features of distant nodes to be too similar or indistinguishable, while at the same time allowing those of connected nodes in the same cluster become more similar. We summarize our main contributions as follows.\n• Normalization to Tackle Oversmoothing in GNNs: We introduce a normalization scheme, called PAIRNORM, that makes GNNs significantly more robust to oversmoothing and as a result enables the training of deeper models without sacrificing performance. Our proposed scheme capitalizes on the understanding that most GNNs perform a special form of Laplacian smoothing, which makes node features more similar to one another. 
The key idea is to ensure that the total pairwise feature distance remains constant across layers, which in turn leads to distant pairs having less similar features, preventing feature mixing across clusters.\n• Speed and Generality: PAIRNORM is very straightforward to implement and introduces no additional parameters. It is simply applied to the output features of each layer (except the last one) and consists of simple operations, in particular centering and scaling, that are linear in the input size. Being a simple normalization step between layers, PAIRNORM is not specific to any particular GNN but rather applies broadly.\n• Use Case for Deeper GNNs: While PAIRNORM prevents performance from dropping significantly with increasing number of layers, it does not necessarily yield increased performance in absolute terms. We find that this is because shallow architectures with no more than 2–4 layers are sufficient for the often-used benchmark datasets in the literature. In response, we motivate a real-world scenario wherein a notable portion of the nodes have no feature vectors. In such settings, nodes benefit from a larger range (i.e., neighborhood, hence a deeper GNN) to “recover” effective feature representations. Through extensive experiments, we show that GNNs employing our PAIRNORM significantly outperform the ‘vanilla’ GNNs when deeper models are beneficial to the classification task." }, { "heading": "2 UNDERSTANDING OVERSMOOTHING", "text": "In this work, we consider the semi-supervised node classification (SSNC) problem on a graph. In the general setting, a graph G = (V, E, X) is given in which each node i ∈ V is associated with a feature vector x_i ∈ R^d, where X = [x_1, . . . , x_n]^T denotes the feature matrix, and a subset V_l ⊂ V of the nodes are labeled, i.e. y_i ∈ {1, . . . , c} for each i ∈ V_l, where c is the number of classes. Let A ∈ R^{n×n} be the adjacency matrix and D = diag(deg_1, . . . , deg_n) ∈ R^{n×n} be the degree matrix of G. Let Ã = A + I and D̃ = D + I denote the augmented adjacency and degree matrices with added self-loops on all nodes, respectively. Let Ã_sym = D̃^(-1/2) Ã D̃^(-1/2) and Ã_rw = D̃^(-1) Ã denote the symmetrically and nonsymmetrically normalized adjacency matrices with self-loops.\nThe task is to learn a hypothesis that predicts y_i from x_i and generalizes to the unlabeled nodes V_u = V \ V_l. In Section 3.2, we introduce a variant of this setting where only a subset F ⊂ V of the nodes have feature vectors and the rest are missing." }, { "heading": "2.1 THE OVERSMOOTHING PROBLEM", "text": "Although GNNs like GCN and GAT achieve state-of-the-art results in a variety of graph-based tasks, these models are not very well understood, especially why they work for the SSNC problem where only a small amount of training data is available. The success appears to be limited to shallow GNNs, where the performance gradually decreases with the increasing number of layers. This decrease is often attributed to three contributing factors: (1) overfitting due to the increasing number of parameters, (2) difficulty of training due to vanishing gradients, and (3) oversmoothing due to many graph convolutions.\nAmong these, perhaps the least understood one is oversmoothing, which indeed lacks a formal definition. In their analysis of GCN’s working mechanism, Li et al. (2018) showed that the graph convolution of GCN is a special form of Laplacian smoothing.
The standard form being (1 − γ)X + γ Ã_rw X, the graph convolution lets γ = 1 and uses the symmetrically normalized variant to obtain X̃ = Ã_sym X, where the new features x̃ of a node are the weighted average of its own and its neighbors’ features. This smoothing allows the node representations within the same cluster to become more similar, and in turn helps improve SSNC performance under the cluster assumption (Chapelle et al., 2006). However, when GCN goes deep, the performance can suffer from oversmoothing, where node representations from different clusters become mixed up. Let us refer to this issue of node representations becoming too similar as node-wise oversmoothing.\nAnother way of thinking about oversmoothing is as follows. Repeatedly applying Laplacian smoothing too many times would drive node features to a stationary point, washing away all the information from these features. Let x_·j ∈ R^n denote the j-th column of X. Then, for any x_·j ∈ R^n:\nlim_{k→∞} Ã_sym^k x_·j = π_j and π_j / ‖π_j‖_1 = π , (1)\nwhere the normalized solution π ∈ R^n satisfies π_i = √deg_i / Σ_i √deg_i for all i ∈ [n]. Notice that π is independent of the values x_·j of the input feature and is only a function of the graph structure (i.e., degree). In other words, (Laplacian) oversmoothing washes away the signal from all the features, making them indistinguishable. We will refer to this viewpoint as feature-wise oversmoothing.\nTo this end we propose two measures, row-diff and col-diff, to quantify these two types of oversmoothing. Let H^(k) ∈ R^{n×d} be the representation matrix after k graph convolutions, i.e. H^(k) = Ã_sym^k X. Let h_i^(k) ∈ R^d be the i-th row of H^(k) and h_·i^(k) ∈ R^n be the i-th column of H^(k). Then we define row-diff(H^(k)) and col-diff(H^(k)) as follows.\nrow-diff(H^(k)) = (1/n²) Σ_{i,j∈[n]} ‖h_i^(k) − h_j^(k)‖_2 (2)\ncol-diff(H^(k)) = (1/d²) Σ_{i,j∈[d]} ‖h_·i^(k)/‖h_·i^(k)‖_1 − h_·j^(k)/‖h_·j^(k)‖_1‖_2 (3)\nThe row-diff measure is the average of all pairwise distances between the node features (i.e., rows of the representation matrix) and quantifies node-wise oversmoothing, whereas col-diff is the average of pairwise distances between L1-normalized columns of the representation matrix (we normalize each column j since the Laplacian smoothing stationary point π_j is not scale-free; see Eq. (1)) and quantifies feature-wise oversmoothing." }, { "heading": "2.2 STUDYING OVERSMOOTHING WITH SGC", "text": "Although oversmoothing can be a cause of the performance drop with increasing number of layers in GCN, adding more layers also leads to more parameters (due to learned linear projections W^(k) at each layer k), which magnifies the potential for overfitting. Furthermore, deeper models also make training harder, as backpropagation suffers from vanishing gradients.\nIn order to decouple the effect of oversmoothing from these other two factors, we study the oversmoothing problem using the SGC model (Wu et al., 2019). (Results on other GNNs are presented in §4.) SGC is simplified from GCN by removing all projection parameters of graph convolution layers and all nonlinear activations between layers. The estimation of SGC is simply written as:\nŶ = softmax(Ã_sym^K X W) (4)\nwhere K is the number of graph convolutions, and W ∈ R^{d×c} denotes the learnable parameters of a logistic regression classifier.\nNote that SGC has a fixed number of parameters that does not depend on the number of graph convolutions (i.e. layers). In effect, it is guarded against the influence of the overfitting and vanishing gradient problems with more layers. This leaves us only with oversmoothing as a possible cause of performance degradation with increasing K.
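Since Section 2.2 puts these definitions to work, a compact NumPy sketch may be useful. It is our own illustration using dense matrices (the function names are ours): the normalized operator Ã_sym and the two measures of Eqs. (2)–(3).

```python
import numpy as np

def sym_norm_adj(A):
    """Ã_sym = D̃^(-1/2) Ã D̃^(-1/2), with self-loops added (dense, for illustration)."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def row_diff(H):
    """Eq. (2): average pairwise L2 distance between rows (node-wise oversmoothing)."""
    D = np.linalg.norm(H[:, None, :] - H[None, :, :], axis=-1)   # (n, n) pair distances
    return D.mean()                                              # mean over all n^2 pairs

def col_diff(H):
    """Eq. (3): average pairwise L2 distance between L1-normalized columns."""
    Hn = H / np.abs(H).sum(axis=0, keepdims=True)                # L1-normalize each column
    D = np.linalg.norm(Hn[:, :, None] - Hn[:, None, :], axis=0)  # (d, d) pair distances
    return D.mean()

# Tracking oversmoothing of the parameter-free propagation H^(k) = Ã_sym^k X:
# H = X.copy()
# for k in range(1, 11):
#     H = sym_norm_adj(A) @ H
#     print(k, row_diff(H), col_diff(H))   # both decrease monotonically with k
```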
Interestingly, the simplicity of SGC does not seem to be a sacrifice; it has been observed that it achieves similar or better accuracy in various relational classification tasks (Wu et al., 2019).\nDashed lines in Figure 1 illustrate the performance of SGC on the Cora dataset as we increase the number of layers (K). The training (cross-entropy) loss monotonically increases with larger K, potentially because graph convolution mixes node representations with their neighbors’ and makes them less distinguishable (training becomes harder). On the other hand, graph convolutions (i.e., smoothing) improve generalization ability, reducing the gap between training and validation/test loss up to K = 4, after which (over)smoothing begins to hurt performance. The row-diff and col-diff both continue decreasing monotonically with K, providing supporting evidence for oversmoothing." }, { "heading": "3 TACKLING OVERSMOOTHING", "text": "" }, { "heading": "3.1 PROPOSED PAIRNORM", "text": "We start by establishing a connection between graph convolution and an optimization problem, namely graph-regularized least squares (GRLS), as shown by NT & Maehara (2019). Let X̄ ∈ R^{n×d} be a new node representation matrix, with x̄_i ∈ R^d depicting the i-th row of X̄. Then the GRLS problem is given as\nmin_X̄ Σ_{i∈V} ‖x̄_i − x_i‖²_D̃ + Σ_{(i,j)∈E} ‖x̄_i − x̄_j‖²_2 (5)\nwhere ‖z_i‖²_D̃ = z_i^T D̃ z_i. The first term can be seen as total degree-weighted least squares. The second is a graph-regularization term that measures the variation of the new features over the graph structure. The goal of the optimization problem can be stated as estimating new “denoised” features x̄_i that are not too far off of the input features x_i and are smooth over the graph structure.\nThe GRLS problem has a closed-form solution X̄ = (2I − Ã_rw)^(-1) X, for which Ã_rw X is the first-order Taylor approximation, that is, Ã_rw X ≈ X̄. By exchanging Ã_rw with Ã_sym we obtain the same form as the graph convolution, i.e., X̃ = Ã_sym X ≈ X̄. As such, graph convolution can be viewed as an approximate solution of (5), where it minimizes the variation over the graph structure while keeping the new representations close to the original.\nThe optimization problem in (5) facilitates a closer look at the oversmoothing problem of graph convolution. Ideally, we want to obtain smoothing over nodes within the same cluster, while avoiding smoothing over nodes from different clusters. The objective in (5) dictates only the first goal via the graph-regularization term. It is thus prone to oversmoothing when convolutions are applied repeatedly. To circumvent the issue and fulfill both goals simultaneously, we can add a negative term such as the sum of distances between disconnected pairs as follows.\nmin_X̄ Σ_{i∈V} ‖x̄_i − x_i‖²_D̃ + Σ_{(i,j)∈E} ‖x̄_i − x̄_j‖²_2 − λ Σ_{(i,j)∉E} ‖x̄_i − x̄_j‖²_2 (6)\nwhere λ is a balancing scalar to account for the different volume and importance of the two goals (there exist other variants of (6) that achieve similar goals, and we leave their exploration for future work). By deriving the closed-form solution of (6) and approximating it with a first-order Taylor expansion, one can get a revised graph convolution operator with hyperparameter λ. In this paper, we take a different route.
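As a sanity check on this connection, a few lines of NumPy (again our own sketch) compare the closed-form GRLS solution with one step of graph convolution; the identity (2I − Ã_rw)^(-1) = (I − (Ã_rw − I))^(-1) ≈ I + (Ã_rw − I) = Ã_rw is exactly the first-order Taylor expansion referred to above.

```python
import numpy as np

def rw_norm_adj(A):
    """Ã_rw = D̃^(-1) Ã with self-loops (dense, for illustration)."""
    A_tilde = A + np.eye(A.shape[0])
    return A_tilde / A_tilde.sum(axis=1, keepdims=True)

def grls_closed_form(A, X):
    """Closed-form solution of Eq. (5): X̄ = (2I − Ã_rw)^(-1) X."""
    n = A.shape[0]
    return np.linalg.solve(2.0 * np.eye(n) - rw_norm_adj(A), X)

# One graph convolution is the first-order Taylor approximation of the GRLS solution:
# X_bar  = grls_closed_form(A, X)
# X_conv = rw_norm_adj(A) @ X        # ≈ X_bar
```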
Instead of a completely new graph convolution operator, we propose a general and efficient “patch”, called PAIRNORM, that can be applied to any form of graph convolution having the potential of oversmoothing.\nLet X̃ (the output of graph convolution) and Ẋ respectively be the input and output of PAIRNORM. Observing that the output of graph convolution X̃ = Ã_sym X only achieves the first goal, PAIRNORM serves as a normalization layer that works on X̃ to achieve the second goal of keeping disconnected pair representations farther off. Specifically, PAIRNORM normalizes X̃ such that the total pairwise squared distance TPSD(Ẋ) := Σ_{i,j∈[n]} ‖ẋ_i − ẋ_j‖²_2 is the same as TPSD(X). That is,\nΣ_{(i,j)∈E} ‖ẋ_i − ẋ_j‖²_2 + Σ_{(i,j)∉E} ‖ẋ_i − ẋ_j‖²_2 = Σ_{(i,j)∈E} ‖x_i − x_j‖²_2 + Σ_{(i,j)∉E} ‖x_i − x_j‖²_2 . (7)\nBy keeping the total pairwise squared distance unchanged, the term Σ_{(i,j)∉E} ‖ẋ_i − ẋ_j‖²_2 is guaranteed to be at least as large as the original value Σ_{(i,j)∉E} ‖x_i − x_j‖²_2, since the other term Σ_{(i,j)∈E} ‖ẋ_i − ẋ_j‖²_2 ≈ Σ_{(i,j)∈E} ‖x̃_i − x̃_j‖²_2 is shrunk through the graph convolution.\nIn practice, instead of always tracking the original value TPSD(X), we can maintain a constant TPSD value C across all layers, where C is a hyperparameter that could be tuned per dataset.\nTo normalize X̃ to constant TPSD, we first need to compute TPSD(X̃). Directly computing TPSD involves n² pairwise distances, that is O(n²d), which can be time-consuming for large datasets. Equivalently, normalization can be done via a two-step approach where TPSD is rewritten as (see Appendix A.1 for the detailed derivation)\nTPSD(X̃) = Σ_{i,j∈[n]} ‖x̃_i − x̃_j‖²_2 = 2n² ( (1/n) Σ_{i=1}^n ‖x̃_i‖²_2 − ‖(1/n) Σ_{i=1}^n x̃_i‖²_2 ) . (8)\nThe first term (ignoring the scale 2n²) represents the mean squared length of node representations, and the second term depicts the squared length of the mean of node representations. To simplify the computation of (8), we subtract the row-wise mean from each x̃_i, i.e., x̃ᶜ_i = x̃_i − (1/n) Σ_i x̃_i, where x̃ᶜ_i denotes the centered representation. Note that this shifting does not affect the TPSD, and furthermore drives the term ‖(1/n) Σ_{i=1}^n x̃_i‖²_2 to zero, so that computing TPSD(X̃) boils down to calculating the squared Frobenius norm of X̃ᶜ and overall takes O(nd). That is,\nTPSD(X̃) = TPSD(X̃ᶜ) = 2n ‖X̃ᶜ‖²_F . (9)\nIn summary, our proposed PAIRNORM (with input X̃ and output Ẋ) can be written as a two-step, center-and-scale, normalization procedure:\nx̃ᶜ_i = x̃_i − (1/n) Σ_{i=1}^n x̃_i (Center) (10)\nẋ_i = s · x̃ᶜ_i / √((1/n) Σ_{i=1}^n ‖x̃ᶜ_i‖²_2) = s √n · x̃ᶜ_i / √(‖X̃ᶜ‖²_F) (Scale) (11)\nAfter scaling, the data remains centered, that is, ‖Σ_{i=1}^n ẋ_i‖²_2 = 0. In Eq. (11), s is a hyperparameter that determines C. Specifically,\nTPSD(Ẋ) = 2n ‖Ẋ‖²_F = 2n Σ_i ‖s · x̃ᶜ_i / √((1/n) Σ_i ‖x̃ᶜ_i‖²_2)‖²_2 = 2n · (s² / ((1/n) Σ_i ‖x̃ᶜ_i‖²_2)) · Σ_i ‖x̃ᶜ_i‖²_2 = 2n²s² . (12)\nThen, Ẋ := PAIRNORM(X̃) has row-wise mean 0 (i.e., is centered) and constant total pairwise squared distance C = 2n²s². An illustration of PAIRNORM is given in Figure 2. The output of PAIRNORM is the input to the next convolution layer.\nWe also derive a variant of PAIRNORM by replacing Σ_{i=1}^n ‖x̃ᶜ_i‖²_2 in Eq. (11) with n ‖x̃ᶜ_i‖²_2, such that the scaling step computes ẋ_i = s · x̃ᶜ_i / ‖x̃ᶜ_i‖_2. We call it PAIRNORM-SI (for Scale Individually), which imposes more restriction on node representations, such that all have the same L2-norm s.
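Eqs. (10)–(11) translate directly into a few lines of code. The sketch below is ours, not the official implementation linked in the abstract (which may differ in details such as numerical-stability epsilons); it covers both PAIRNORM and the PAIRNORM-SI variant.

```python
import torch

def pairnorm(x, s=1.0, scale_individually=False, eps=1e-6):
    """x: (n, d) output of a graph convolution. Center (Eq. 10), then scale (Eq. 11)."""
    x_c = x - x.mean(dim=0, keepdim=True)           # center: row-wise mean becomes 0
    if scale_individually:                          # PairNorm-SI: every row gets L2-norm s
        return s * x_c / (x_c.norm(dim=1, keepdim=True) + eps)
    n = x_c.size(0)                                 # global scale: TPSD becomes 2 n^2 s^2
    return s * (n ** 0.5) * x_c / (x_c.norm(p="fro") + eps)
```

In a GNN, this layer is applied after every graph convolution except the last one; it adds no learnable parameters and costs O(nd).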
In practice we found that both PAIRNORM and PAIRNORM-SI work well for SGC, whereas PAIRNORM-SI provides better and more stable results for GCN and GAT. The reason why GCN and GAT require stricter normalization may be because they have more parameters and are more prone to overfitting. In Appx. A.6 we provide additional measures to demonstrate why PAIRNORM and PAIRNORM-SI work. In all experiments, we employ PAIRNORM for SGC and PAIRNORM-SI for both GCN and GAT.\nPAIRNORM is effective and efficient in solving the oversmoothing problem of GNNs. As a general normalization layer, it can be used for any GNN. Solid lines in Figure 1 present the performance of SGC on Cora with increasing number of layers, where we employ PAIRNORM after each graph convolution layer, as compared to ‘vanilla’ versions. Similarly, Figure 3 is for GCN and GAT (PAIRNORM is applied after the activation of each graph convolution). Note that the performance decay with PAIRNORM at work is much slower. (See Figs. 5–6 in Appx. A.3 for other datasets.)\nWhile PAIRNORM enables deeper models that are more robust to oversmoothing, it may seem odd that the overall test accuracy does not improve. In fact, the benchmark graph datasets often used in the literature require no more than 4 layers, after which performance decays (even if slowly). In the next section, we present a realistic use-case setting for which deeper models are more likely to provide higher performance, where the benefit of PAIRNORM becomes apparent." }, { "heading": "3.2 A CASE WHERE DEEPER GNNS ARE BENEFICIAL", "text": "In general, oversmoothing gets increasingly more severe as the number of layers goes up. A task would benefit more from employing PAIRNORM if it required a large number of layers to achieve its best performance. To this effect we study the “missing feature setting”, where a subset of the nodes lack feature vectors. Let M ⊆ V_u be the set where ∀m ∈ M, x_m = ∅, i.e., all of their features are missing. We denote with p = |M|/|V_u| the missing fraction. We call this variant of the task semi-supervised node classification with missing vectors (SSNC-MV). Intuitively, one would require a larger number of propagation steps (hence, a deeper GNN) to be able to “recover” effective feature representations for these nodes.\nSSNC-MV is a general and realistic problem that finds several applications in the real world. For example, the credit lending problem of identifying low- vs. high-risk customers (nodes) can be modeled as SSNC-MV, where a large fraction of nodes do not exhibit any meaningful features (e.g., due to low-volume activity). In fact, many graph-based classification tasks with the cold-start issue (entities with no history) can be cast into SSNC-MV. To our knowledge, this is the first work to study the SSNC-MV problem using GNN models.\nFigure 4 presents the performance of SGC, GCN, and GAT models on Cora with increasing number of layers, where we remove feature vectors from all the unlabeled nodes, i.e. p = 1. The models with PAIRNORM achieve a higher test accuracy compared to those without, which they typically reach at a larger number of layers. (See Fig. 7 in Appx. A.4 for results on other datasets.)" }, { "heading": "4 EXPERIMENTS", "text": "In Section 3 we have shown the robustness of PAIRNORM-enhanced models against increasing number of layers in the SSNC problem.
In this section we design extensive experiments to evaluate the effectiveness of PAIRNORM under the SSNC-MV setting, over SGC, GCN and GAT models." }, { "heading": "4.1 EXPERIMENT SETUP", "text": "Datasets. We use 4 well-known benchmark datasets in the GNN domain: Cora, Citeseer, Pubmed (Sen et al., 2008), and CoauthorCS (Shchur et al., 2018). Their statistics are reported in Appx. A.2. For Cora, Citeseer and Pubmed, we use the same dataset splits as Kipf & Welling (2017), where all nodes outside train and validation are used as the test set. For CoauthorCS, we randomly split all nodes into train/val/test as 3%/10%/87%, and keep the same split for all experiments.\nModels. We use three different GNN models as our base model: SGC (Wu et al., 2019), GCN (Kipf & Welling, 2017), and GAT (Velickovic et al., 2018). We compare our PAIRNORM with the residual connection method (He et al., 2016) over the base models (except SGC, since there is no “residual-connected” SGC), as we surprisingly find it can slow down oversmoothing and benefit the SSNC-MV problem. Similar to ours, the residual connection is a general technique that can be applied to any model without changing its architecture. We focus on the comparison between the base models and PAIRNORM-enhanced models, rather than achieving state-of-the-art performance for SSNC and SSNC-MV. There exist a few other works addressing oversmoothing (Klicpera et al., 2019; Li et al., 2018; Rong et al., 2019; Xu et al., 2018); however, they design specialized architectures and not simple “patch” procedures like PAIRNORM that can be applied on top of any GNN.\nHyperparameters. We choose the hyperparameter s of PAIRNORM from {0.1, 1, 10, 50, 100} over the validation set for SGC, while keeping it fixed at s = 1 for both GCN and GAT due to resource limitations. We set the #hidden units of GCN and GAT (#attention heads is set to 1) to 32 and 64 respectively for all datasets. Dropout with rate 0.6 and L2 regularization with penalty 5·10−4 are applied to GCN and GAT. For SGC, we vary the number of layers in {1, 2, . . . , 10, 15, . . . , 60} and for GCN and GAT in {2, 4, . . . , 12, 15, 20, . . . , 30}.\nConfigurations. For PAIRNORM-enhanced models, we apply PAIRNORM after each graph convolution layer (i.e., after activation if any) in the base model. For residual-connected models with t skip steps, we connect the output of the l-th layer to the (l + t)-th, that is, H^(l+t)_new = H^(l+t) + H^(l), where H^(l) denotes the output of the l-th graph convolution (after activation). For the SSNC-MV setting, we randomly erase a p fraction of the feature vectors from nodes in the validation and test sets (for which we input the vector 0 ∈ Rd), whereas all training (labeled) nodes keep their original features (see Section 3.2). We run each experiment within 1000 epochs 5 times and report the average performance. We mainly use a single GTX-1080ti GPU, with some SGC experiments run on an Intel i7-8700k CPU. A minimal sketch of the residual wiring and the feature-erasure step is given below.
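The sketch below illustrates the two configuration details just described — the residual wiring H^(l+t)_new = H^(l+t) + H^(l) and the SSNC-MV feature erasure. The function names and the generic `(x, adj)` layer signature are assumptions for illustration, not code from the paper.

```python
import torch

def erase_features(x: torch.Tensor, eval_mask: torch.Tensor, p: float) -> torch.Tensor:
    """SSNC-MV: zero the feature vectors of a random p-fraction of the
    validation/test nodes (eval_mask == True); labeled nodes keep features."""
    x = x.clone()
    candidates = eval_mask.nonzero(as_tuple=False).view(-1)
    n_missing = int(p * candidates.numel())
    chosen = candidates[torch.randperm(candidates.numel())[:n_missing]]
    x[chosen] = 0.0  # missing nodes receive the zero vector
    return x

def forward_with_skips(layers, x, adj, t: int = 1):
    """Residual-connected GNN with skip step t: H_new^(l+t) = H^(l+t) + H^(l)."""
    history = [x]  # history[l] holds H^(l); history[0] is the input
    for i, layer in enumerate(layers):
        out = layer(x, adj)      # H^(i+1)
        if i + 1 - t >= 1:       # add H^(l) from t layers back (skip the raw input)
            out = out + history[i + 1 - t]
        history.append(out)
        x = out
    return x
```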
" }, { "heading": "4.2 EXPERIMENT RESULTS", "text": "We first show the global performance gain of applying PAIRNORM to SGC for SSNC-MV under varying feature missing rates, as shown in Table 1. PAIRNORM-enhanced SGC performs similar or better at 0% missing, while it significantly outperforms vanilla SGC for most other settings, especially for larger missing rates. #L denotes the best number of layers for the model that yields the largest average validation accuracy (over 5 runs), for which we report the average test accuracy (Acc). Notice the larger #L values for SGC-PN compared to vanilla SGC, which shows the power of PAIRNORM for enabling “deep” SGC models by effectively tackling oversmoothing.\nSimilar to Wu et al. (2019), who showed that the simple SGC model achieves comparable or better performance than other GNNs for various tasks, we found PAIRNORM-enhanced SGC to follow the same trend when compared with PAIRNORM-enhanced GCN and GAT, for all SSNC-MV settings. Due to its simplicity and extreme efficiency, we believe PAIRNORM-enhanced SGC sets a strong baseline for the SSNC-MV problem.\nWe next employ PAIRNORM-SI for GCN and GAT under the same setting, comparing it with the residual (skip) connections technique. Results are shown in Table 2 and Table 3 respectively for GCN and GAT. Due to space and resource limitations, we only show results for the 0% and 100% missing rate scenarios. (We provide results for other missing rates (70, 80, 90%) over 1 run only in Appx. A.5.) We observe a similar trend for GCN and GAT: (1) the vanilla model suffers from a performance drop under SSNC-MV with increasing missing rate; (2) both residual connections and PAIRNORM-SI enable deeper models and improve performance (note the larger #L and Acc); (3) GCN-PN and GAT-PN achieve performance that is comparable or better than just using skips; (4) performance can be further improved (albeit slightly) by using skips along with PAIRNORM-SI.⁴" }, { "heading": "5 RELATED WORK", "text": "Oversmoothing in GNNs: Li et al. (2018) were the first to call attention to the oversmoothing problem. Xu et al. (2018) introduced Jumping Knowledge Networks, which employ skip connections for multi-hop message passing and also enable different neighborhood ranges. Klicpera et al. (2019) proposed a propagation scheme based on personalized PageRank that ensures locality (via teleports), which in turn prevents oversmoothing. Li et al. (2019) built on ideas from ResNet to use residual as well as dense connections to train deep GCNs. DropEdge (Rong et al., 2019) proposed to alleviate oversmoothing by reducing message passing via removing a certain fraction of edges at random from the input graph. These are all specialized solutions that introduce additional parameters and/or a different network architecture.\nNormalization Schemes for Deep-NNs: There exist various normalization schemes proposed for deep neural networks, including batch normalization (Ioffe & Szegedy, 2015), weight normalization (Salimans & Kingma, 2016), layer normalization (Ba et al., 2016), and so on. Conceptually these have substantially different goals (e.g., reducing training time), and were not proposed for graph neural networks nor the oversmoothing problem therein. An important difference to note is that larger depth in regular neural nets does not translate to more hops of propagation on a graph structure." }, { "heading": "6 CONCLUSION", "text": "We investigated the oversmoothing problem in GNNs and proposed PAIRNORM, a novel normalization layer that boosts the robustness of deep GNNs against oversmoothing. PAIRNORM is fast to compute, requires no change in network architecture nor any extra parameters, and can be applied to any GNN. Experiments on real-world classification tasks showed the effectiveness of PAIRNORM, where it provides performance gains when the task benefits from more layers. Future work will explore other use cases of deeper GNNs that could further showcase PAIRNORM’s advantages.\n⁴ Notice a slight performance drop when PAIRNORM is applied at the 0% rate. For this setting, and the datasets we have, shallow networks are sufficient, and smoothing through only a few (2–4) layers improves generalization ability for the SSNC problem (recall Figure 1 solid lines). PAIRNORM has a small reversing effect in these scenarios, hence the small performance drop.
" }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 DERIVATION OF EQ. 8", "text": "" }, { "heading": "A.2 DATASET STATISTICS", "text": "Name | #Nodes | #Edges | #Features | #Classes | Label Rate\nCora | 2708 | 5429 | 1433 | 7 | 0.052\nCiteseer | 3327 | 4732 | 3703 | 6 | 0.036\nPubmed | 19717 | 44338 | 500 | 3 | 0.003\nCoauthorCS | 18333 | 81894 | 6805 | 15 | 0.030" }, { "heading": "A.3 ADDITIONAL PERFORMANCE PLOTS WITH INCREASING NUMBER OF LAYERS", "text": "" }, { "heading": "A.4 ADDITIONAL PERFORMANCE PLOTS WITH INCREASING NUMBER OF LAYERS UNDER SSNC-MV WITH p = 1", "text": "" }, { "heading": "A.5 ADDITIONAL EXPERIMENTS UNDER SSNC-MV WITH INCREASING MISSING FRACTION p", "text": "In this section we report additional experiment results under the SSNC-MV setting with varying missing fraction, in particular p = {0.7, 0.8, 0.9, 1}, and also report the base case where p = 0 for comparison.\nFigure 8 presents results on all four datasets for GCN vs. PAIRNORM-enhanced GCN (denoted PN for short). The models without any skip connections are denoted by *-0, with one-hop skip connections by *-1, and with one- and two-hop skip connections by *-2. Barcharts on the right report the best layer at which each model produced the highest validation accuracy, and those on the left report the corresponding test accuracy. Figure 9 presents corresponding results for GAT.\nWe make the following observations based on Figures 8 and 9:\n• Performance of ‘vanilla’ GCN and GAT models without skip connections (i.e., GCN-0 and GAT-0) drops monotonically as we increase the missing fraction p.\n• PAIRNORM-enhanced ‘vanilla’ models (PN-0, no skips) perform comparably or better than GCN-0 and GAT-0 in all cases, especially as p increases. In other words, with PAIRNORM at work, model performance is more robust against missing data.\n• The best number of layers for GCN-0 as we increase p only changes between 2–4. For GAT-0, it changes mostly between 2–6.\n• PAIRNORM-enhanced ‘vanilla’ models (PN-0, no skips) can go deeper, i.e., they can leverage a larger range of #layers (2–12) as we increase p. Specifically, GCN-PN-0 (GAT-PN-0) uses an equal number or more layers than GCN-0 (GAT-0) in almost all cases.\n• Without any normalization, adding skip connections helps—GCN/GAT-1 and GCN/GAT-2 are better than GCN/GAT-0, especially as we increase p.\n• With PAIRNORM but no skips, performance is comparable or better than just adding skips.\n• Adding skips on top of PAIRNORM does not seem to introduce any notable gains.\nIn summary, simply employing our PAIRNORM for GCN and GAT provides robustness against oversmoothing that allows them to go deeper and achieve improved performance under SSNC-MV." }, { "heading": "A.6 CASE STUDY: ADDITIONAL MEASURES FOR PAIRNORM AND PAIRNORM-SI WITH SGC AND GCN", "text": "To better understand why PAIRNORM and PAIRNORM-SI are helpful for training deep GNNs, we report additional measures for SGC and GCN with PAIRNORM and PAIRNORM-SI over the Cora dataset. In the main text, we claim TPSD (total pairwise squared distances) is constant across layers for SGC with PAIRNORM (for GCN/GAT this is not guaranteed because of the influence of the activation function and dropout layer). In this section we empirically measure pairwise (squared) distances for both SGC and GCN, with PAIRNORM and PAIRNORM-SI.
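A minimal sketch of this measurement follows; sampling random pairs (instead of enumerating all n² of them) and the helper's name are our own simplifications for illustration.

```python
import torch

def pairwise_distance_stats(h: torch.Tensor, edge_index: torch.Tensor,
                            n_random: int = 10000):
    """Compute APD and APSD for connected pairs and for random pairs.

    h:          (n, d) node representations at some layer.
    edge_index: (2, m) tensor of edge endpoints."""
    # Connected pairs: distances along actual edges.
    d_conn = (h[edge_index[0]] - h[edge_index[1]]).norm(dim=1)
    # Random pairs: APSD over uniform pairs estimates TPSD / n^2.
    n = h.size(0)
    i = torch.randint(0, n, (n_random,))
    j = torch.randint(0, n, (n_random,))
    d_rand = (h[i] - h[j]).norm(dim=1)
    return {
        "connected": {"APD": d_conn.mean().item(), "APSD": d_conn.pow(2).mean().item()},
        "random": {"APD": d_rand.mean().item(), "APSD": d_rand.pow(2).mean().item()},
    }
```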
" }, { "heading": "A.6.1 SGC WITH PAIRNORM AND PAIRNORM-SI", "text": "To verify our analysis of PAIRNORM for SGC, and to understand how the variant of PAIRNORM (PAIRNORM-SI) works, we measure the average pairwise squared distance (APSD) as well as the average pairwise distance (APD) between the representations for two categories of node pairs: (1) connected pairs (nodes that are directly connected in the graph) and (2) random pairs (uniformly randomly chosen among the node set). APSD of random pairs reflects the TPSD, and APD of random pairs reflects the total pairwise distance (TPD). Under the homophily assumption of the labels w.r.t. the graph structure, we want APD or APSD of connected pairs to be small while keeping APD or APSD of random pairs relatively large.\nThe results are shown in Figure 10. Without normalization, SGC suffers from fast-diminishing APD and APSD of random pairs. As we have proved, PAIRNORM normalizes APSD to be constant across layers; however, it does not normalize APD, which appears to decrease linearly with increasing number of layers. Surprisingly, although PAIRNORM-SI is not theoretically proved to have a constant APSD and APD, empirically it achieves more stable APSD and APD than PAIRNORM. We were not able to prove this phenomenon mathematically, and leave it for further investigation.\nAPD does not capture the full information of the distribution of pairwise distances. To show how the distribution changes with increasing number of layers, we use Tensorboard to plot the histograms of pairwise distances, as shown in Figure 11. Comparing SGC and SGC with PAIRNORM, adding PAIRNORM keeps the left shift (shrinkage) of the distribution of random pair distances much slower than without normalization, while still sharing similar behavior of the distribution of connected pairwise distances. PAIRNORM-SI seems to be more powerful in keeping the median and mean of the distribution of random pair distances stable, while “spreading” the distribution out by increasing the variance. The performance of PAIRNORM and PAIRNORM-SI is similar; however, it seems that PAIRNORM-SI is more powerful in stabilizing TPD and TPSD." }, { "heading": "A.6.2 GCN WITH PAIRNORM AND PAIRNORM-SI", "text": "The formal analysis for PAIRNORM and PAIRNORM-SI is based on SGC. GCN (and other GNNs) has learnable parameters, dropout layers, and activation layers, all of which complicate direct mathematical analyses. Here we perform similar empirical measurements of pairwise distances to get a rough sense of how PAIRNORM and PAIRNORM-SI work with GCN on the Cora dataset. Figures 12 and 13 demonstrate how PAIRNORM and PAIRNORM-SI can help train a relatively deep (12-layer) GCN.\nNotice that oversmoothing occurs very quickly for GCN without any normalization, where both connected and random pair distances reach zero (!). In contrast, GCN with PAIRNORM or PAIRNORM-SI is able to keep random pair distances relatively apart while allowing connected pair distances to shrink. As also stated in the main text, using PAIRNORM-SI for GCN and GAT is relatively more stable than using PAIRNORM in general cases (notice the near-constant random pair distances in the rightmost subfigures). There are several possible explanations for why PAIRNORM-SI is more stable.
First, as shown in Figure 10 and Figure 12, PAIRNORM-SI not only keeps APSD stable but also APD; further, the plots of the distributions of pairwise distances (Figures 11 and 13) also show the power of PAIRNORM-SI (notice the large gap between the smaller connected pairwise distances and the larger random pairwise distances). Second, we conjecture that restricting representations to reside on a sphere can make training stable and faster, which we also observe empirically by studying the training curves. Third, GCN and GAT tend to overfit easily for the SSNC problem, due to many learnable parameters across layers and limited labeled input data; therefore, it is possible that adding more restriction on these models helps reduce overfitting.\n(Figures 12 and 13 panel titles: GCN, GCN + PairNorm, GCN + PairNorm-SI; dataset: Cora.)\nAll in all, these empirical measurements as illustrated throughout the figures in this section demonstrate that PAIRNORM and PAIRNORM-SI successfully address the oversmoothing problem for deep GNNs. Our work is the first to propose a normalization layer specifically designed for graph neural networks, which we hope will kick-start more work in this area toward training more robust and effective GNNs." } ]
2,020
PAIRNORM: TACKLING OVERSMOOTHING IN GNNS
SP:f077ef67a022348d5d4d455cb313a691cfe63e47
[ "This paper aims to search a sparse but competitive architecture with using a single fixed type of operation by proposing a channel-level neural architecture search (CNAS). Different from most previous NAS works, this paper conducts NAS process on channel-level such that different cell has different topology. CNAS provides a heuristic algorithm to calculate the saliency vector and zero out the channels iteratively until satisfying a given sparsity. This paper performs CNAS on Cifar-10 and ImageNet, and analyzes the topological properties of the final model. The results of experiment demonstrate CNAS can reach a competitive model with dense models searched by baselines. ", "This paper aims to propose a novel framework for neural architecture search. Although there have been many solutions in the literature, the authors try to build a NAS model that is sparse in structure while being similarly effective as conventional dense models. The method is straightforward - they select a single fixed operation as edges, and channels as vertices, and the problem of NAS can be directly solved by a gradient descent method. The sparsity can also be achieved on the level of channels." ]
There is growing interest in automating the design of good neural network architectures. Recently proposed NAS methods have significantly reduced the architecture search cost by sharing parameters, but designing the search space remains a challenging problem. Observing that a search space is typically defined by its shape and a set of operations, we propose a channel-level architecture search (CNAS) method using only a fixed type of operation. The resulting architecture is sparse in terms of channels and has a different topology at each cell. The experimental results for CIFAR-10 and ImageNet show that a fine-grained and sparse model searched by CNAS achieves very competitive performance with dense models searched by the existing methods.
[]
[ { "authors": [ "J. Bayer", "D. Wierstra", "J. Togelius", "J. Schmidhuber" ], "title": "Evolving memory cell structures for sequence learning", "venue": "ICANN,", "year": 2009 }, { "authors": [ "G. Bender", "P.-J. Kindermans", "B. Zoph", "V. Vasudevan", "Q. Le" ], "title": "Understanding and simplifying one-shot architecture search", "venue": "ICML,", "year": 2018 }, { "authors": [ "A. Brock", "T. Lim", "J.M. Ritchie", "N. Weston" ], "title": "Smash: one-shot model architecture search through hypernetworks", "venue": null, "year": 2018 }, { "authors": [ "H. Cai", "L. Zhu", "S. Han" ], "title": "ProxylessNAS: Direct neural architecture search on target task and hardware", "venue": "ICLR,", "year": 2019 }, { "authors": [ "E.D. Cubuk", "B. Zoph", "D. Mané", "V. Vasudevan", "Q.V. Le" ], "title": "Autoaugment: Learning augmentation policies from data", "venue": "CoRR,", "year": 2018 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "CVPR,", "year": 2009 }, { "authors": [ "X. Dong", "Y. Yang" ], "title": "Searching for a robust neural architecture in four gpu hours", "venue": "CVPR,", "year": 2019 }, { "authors": [ "D. Ha", "A.M. Dai", "Q.V. Le" ], "title": "Hypernetworks", "venue": "ICLR,", "year": 2017 }, { "authors": [ "G.E. Hinton", "N. Srivastava", "A. Krizhevsky", "I. Sutskever", "R.R. Salakhutdinov" ], "title": "Improving neural networks by preventing co-adaptation of feature detectors", "venue": "arXiv:1207.0580,", "year": 2012 }, { "authors": [ "H. Hu", "J. Langford", "R. Caruana", "S. Mukherjee", "E. Horvitz", "D. Dey" ], "title": "Efficient forward architecture search", "venue": "NIPS,", "year": 2019 }, { "authors": [ "G. Huang", "Z. Liu", "L. Maaten", "K.Q. Weinberger" ], "title": "Densely connected convolutional networks", "venue": null, "year": 2017 }, { "authors": [ "R. Jozefowicz", "W. Zaremba", "I. Sutskever" ], "title": "An empirical exploration of recurrent network architectures", "venue": null, "year": 2015 }, { "authors": [ "A. Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "C. Liu", "B. Zoph", "J. Shlens", "W. Hua", "L.-J. Li", "L. Fei-Fei", "A.L. Yuille", "J. Huang", "K. Murphy" ], "title": "Progressive neural architecture search", "venue": "ECCV,", "year": 2018 }, { "authors": [ "H. Liu", "K. Simonyan", "O. Vinyals", "C. Fernando", "K. Kavukcuoglu" ], "title": "Hierarchical representations for efficient architecture search", "venue": "ICLR,", "year": 2018 }, { "authors": [ "I. Loshchilov", "F. Hutter" ], "title": "Sgdr: Stochastic gradient descent with warm restarts", "venue": "ICLR,", "year": 2017 }, { "authors": [ "R. Miikkulainen", "J. Liang", "E. Meyerson", "A. Rawal", "D. Fink", "O. Francon", "B. Raju", "A. Navruzyan", "N. Duffy", "B. Hodjat" ], "title": "Evolving deep neural networks", "venue": "arXiv preprint arXiv:1703.00548,", "year": 2017 }, { "authors": [ "P. Molchanov", "S. Tyree", "T. Karras", "T. Aila", "J. Kautz" ], "title": "Pruning convolutional neural networks for resource efficient inference", "venue": "ICLR,", "year": 2017 }, { "authors": [ "Y.E. Nesterov" ], "title": "A method for solving the convex programming problem with convergence rate o(1/k2)", "venue": "Soviet Mathematics Doklady,", "year": 1983 }, { "authors": [ "H. Pham", "M.Y. Guan", "B. Zoph", "Q.V. Le", "J. 
Dean" ], "title": "Efficient neural architecture search via parameter sharing", "venue": "ICML,", "year": 2018 }, { "authors": [ "E. Real", "A. Aggarwal", "Y. Huang", "Q.V. Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": "AAAI,", "year": 2019 }, { "authors": [ "R. Shin", "C. Packer", "D. Song" ], "title": "Differentiable neural network architecture search", "venue": "ICLR (Workshop),", "year": 2018 }, { "authors": [ "K.O. Stanley", "R. Miikkulainen" ], "title": "Evolving neural networks through augmenting topologies", "venue": "Evolutionary computation,", "year": 2002 }, { "authors": [ "M. Tan", "B. Chen", "R. Pang", "V. Vasudevan", "Q.V. Le" ], "title": "Mnasnet: Platform-aware neural architecture search for mobile", "venue": "CoRR,", "year": 2018 }, { "authors": [ "J. Tompson", "R. Goroshin", "A. Jain", "Y. LeCun", "C. Bregler" ], "title": "Efficient object localization using convolutional networks", "venue": "CVPR,", "year": 2015 }, { "authors": [ "L. Xie", "A. Yuille" ], "title": "Genetic cnn", "venue": "ICCV,", "year": 2017 }, { "authors": [ "S. Xie", "H. Zheng", "C. Liu", "L. Lin" ], "title": "SNAS: stochastic neural architecture search", "venue": "ICLR,", "year": 2019 }, { "authors": [ "C. Zhang", "M. Ren", "R. Urtasun" ], "title": "Graph hypernetworks for neural architecture search", "venue": "ICLR,", "year": 2019 }, { "authors": [ "X. Zhang", "Z. Huang", "N. Wang" ], "title": "Single shot neural architecture search via direct sparse optimization", "venue": "ICLR,", "year": 2019 }, { "authors": [ "B. Zoph", "Q.V. Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "ICLR,", "year": 2017 }, { "authors": [ "B. Zoph", "V. Vasudevan", "J. Shlens", "Q.V. Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "CVPR, 2018", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Nowadays, deep neural networks (DNNs) are used extensively and successfully in many fields and applications such as computer vision, speech recognition, machine translation, and automated vehicles. Designing DNNs often requires significant architecture engineering, a large amount of trial and error by experts. Although transfer learning is widely used to save the efforts required for designing good architectures of DNNs from scratch, it is not always possible to use.\nRecently, there is growing interest in automating designing good neural network architectures (30; 31; 20; 24; 21; 15; 3; 28; 14; 2; 22; 7; 27; 4; 29; 10). Most of them can be categorized into reinforcement learning-based (RL) methods, evolutionary algorithm-based (EV) methods, hypernetwork-based (HY) methods, and gradient-based (GR) methods, in terms of the search algorithm.\nRL methods (30; 31; 20; 24) use a controller model that enumerates a bunch of candidate models, which are trained for a fixed number of epochs from scratch, and then, is updated using the validation accuracies of the candidate models evaluated on a validation set. To reduce the search space of candidate models, some of them (31; 20) assume each model is composed of multiple convolutional layers called cells having the same architecture and focuses on searching for the best cell architecture. For example, in NASNet (31), a cell is composed of five blocks, and each block composed of two operations, which are selected among a set of various convolution and pooling operations by the controller model. To reduce the search space, NASNet also transfers the learned architecture for a small dataset (e.g., CIFAR-10) to a large dataset (e.g., ImageNet). To optimize an architecture with less amount of computation, ENAS (20) exploits parameter (weight) sharing, which avoids training each candidate model from scratch by sharing the weights of candidate models. It constructs a large computational graph, where each subgraph represents the architecture of a candidate model, and the controller model is trained to search for a subgraph corresponding to a good candidate model.\nEV methods (23; 1; 12; 17; 26; 21; 15) also have been extensively studied. AmoebaNet (21) uses the same search space with NASNet, but searches a good cell architecture based on evolutionary algorithm instead of RL controller. The population is initialized with models with random architectures, and some models are sampled from the population. The model with the highest validation fitness within the samples is selected as the parent (i.e., exploitation), and a child having a mutation in terms of operations and skip connections is constructed from the parent (i.e., exploration). Hierarchical NAS (15) uses hierarchical representation for cell architecture where smaller graph motifs are used as building blocks to form larger motifs, instead of flat representation. Unfortunately, most of RL and EV methods, except ENAS, require an enormous amount of computing power for training thousands\nof child models. They usually need hundreds or thousands of GPU days for architecture search, which is almost impossible for a typical machine learning practitioner.\nHY methods and GR methods avoid such a large cost of architecture search by sharing parameters as in ENAS (20). HY methods (3; 28) bypass fully training candidate models by instead training an auxiliary model, a HyperNet (8), to dynamically and directly generate the weights of a candidate model. 
SMASH (3) generates an architecture of an entire network (i.e., macro search) in terms of the hyperparameters of filters (e.g., number, size) while fixing the type of operation in the HyperNet space, whereas GHN (28) generates an architecture of a cell (i.e., micro search) in terms of operations in the NAS search space.\nGR methods (22; 7; 27; 4; 29; 10) do not rely on controllers, evolutionary algorithms, or hypernetworks, but exploit gradient descent on network architectures, which can significantly improve the speed of NAS. They basically relax the search space to be continuous, so that the architecture can be optimized with respect to its validation set performance by gradient descent. Here, the search space corresponds to a parent network, and a child network (subgraph) can be derived from the parent network by gradient descent. Most GR methods focus on searching for a good cell architecture in terms of operations and repeating the same architecture as in NASNet. After architecture search, they usually need to re-train the candidate architecture snapshot from scratch using the training set, due to the inconsistency between the performance of derived child networks and converged parent networks.\nAs described above, one of the major trends in NAS is exploiting the concept of parameter sharing through hypernetworks or gradient descent in order to reduce the cost (i.e., GPU days) of NAS. By parameter sharing, HY and GR methods can automatically optimize an architecture that can achieve state-of-the-art performance on CIFAR-10 and ImageNet within just a few days (as summarized in Table 4). However, there is still a challenging problem in the above architecture search methods: designing the search space. In principle, the search space should be large and expressive enough to capture a diverse set of promising candidate models, and at the same time, should be small enough to train with a limited amount of resources and time (2). Some methods (24; 15; 22) noted that defining the search space is extremely important for the performance of neural architecture search. The problem of designing the search space may not be solved at once. The search space of the existing NAS methods is typically defined with a shape of the overall network and a set of operations such as identity, normal convolution, separable convolution, average pooling, and max pooling. Many of them follow the NASNet search space for the shape of the network (i.e., stacking cells) and define their own set of operations. Since the number of possible types of operations for the search space is limited due to the search cost, the set of operations used itself may have a large impact on the performance of architecture search.\nIn this paper, we investigate the possibility of achieving competitive performance with the state-of-the-art architecture search methods while using a fixed type of operation. To achieve such performance, we focus on the sparsity of a model. A candidate model in the existing methods has multiple types of operations connected with each other via skip connections, and each operation takes the entire feature maps (called channels) of certain previous nodes or cells as input and returns its entire resulting channels as output. Thus, the candidate model can be regarded as a dense model in terms of the input and output channels of the operations. We propose a channel-level neural architecture search (CNAS) method that regards channels as vertices and a single fixed operation as edges and searches for a good architecture by gradient descent.
The resulting model is sparse in terms of channels. CNAS uses the existing shape of the search space (e.g., NASNet), but performs macro search. Thus, the resulting architecture has a different topology at each cell. In CNAS, the final sparse architecture can be searched quickly due to its simplicity and, at the same time, can compensate for the disadvantage of using a homogeneous operation due to its sparsity. For CIFAR-10, CNAS searches for the architecture in 1.1 GPU days and achieves 2.28% test error with 4.6 million parameters and autoaugment.\nThe rest of the paper is organized as follows. Section 2 explains our method CNAS. Section 3 shows the experimental results, and Section 4 summarizes the characteristics of related methods. Section 5 concludes this paper." }, { "heading": "2 CNAS METHOD", "text": "Since we focus on investigating the possibility of architecture search relying on sparsity instead of the combination of operations in this paper, we mainly use the structure of NASNet for the shape of the search space, which is composed of normal cells and reduction cells; each cell is again composed of submodules called nodes (blocks in NASNet). In Section 3, we will show the result of CNAS using a different shape of search space, in particular, the structure of DenseNet. Figure 1 shows the diagram of the search space of CNAS." }, { "heading": "2.1 SEARCH SPACE", "text": "The CNAS method consists of the following three steps: (1) Train the one-shot (i.e., full-edges) model for a fixed number of epochs to make it predictive of the validation accuracies of sparse models. (2) Search for the most promising sparse model satisfying a given sparsity based on a criterion (e.g., Taylor) by zeroing out less important channels. (3) Re-train the most promising model from scratch (called CNAS-R) or fine-tune it (called CNAS-W), and then evaluate the final model on the test dataset.\nIn CNAS, a vertex in a cell or a node is a single channel, and an edge is an operation. In Figure 1(b) and (c), the thick edges are non-trivial operations involving convolution, where the solid thick ones are for pre-processing as in other methods, and the dotted thick ones are the part that can be changed by architecture search. The type of operation used in CNAS is fixed as a specific one, depthwise separable 3x3 convolution, since operations are not the target of architecture search. In contrast, the number of types of operations in the existing NAS methods is at least several, and the types of operations are designed differently depending on the method.\nIn Figure 1(c), zeroing out less important channels (red squares) also removes their outgoing edges to the next layer {yi}, i.e., the operation BN(Conv(ReLU(·))) is not applied to the red squares. In terms of low-level implementation, CNAS performs partial matrix operations between {y1, y2} and {x3, x4} and between {y3, y4} and {x5}. Since the red edges do not need to be calculated, the corresponding convolution kernels are also not necessary. Thus, the number of weight parameters between {xi} and {yi} is reduced by 3/8 in Node 1. That is, channel-level architecture search makes the model sparse. If {x1, x2, x3, x4} are all removed, then {y1, y2} are also removed, and only {y3, y4} is added to {z1, z2}. In general, the input channels of nodes (i.e., dotted squares) are removed differently depending on whether the cell is close to the input data or close to the output layer. Thus, after architecture search, each cell in CNAS has a different architecture in terms of the topology of vertices and edges.
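As an illustration of how zeroing out an input channel removes its edge (and, at deployment, the corresponding kernels), here is a sketch of a single CNAS edge with a binary channel mask; the module name and the mask-based realization are our own — a deployed model would physically drop the masked kernels rather than multiply by zero.

```python
import torch
import torch.nn as nn

class MaskedSepConvEdge(nn.Module):
    """One CNAS edge: y = BN(SepConv3x3(ReLU(x * mask))).

    `mask` is a {0,1} vector over input channels; a masked channel contributes
    nothing, so its depthwise kernel and the matching pointwise weights are
    dead and can be removed, which is what makes the model sparse."""
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.register_buffer("mask", torch.ones(c_in))
        self.depthwise = nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in, bias=False)
        self.pointwise = nn.Conv2d(c_in, c_out, 1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)

    def forward(self, x):
        x = torch.relu(x) * self.mask.view(1, -1, 1, 1)
        return self.bn(self.pointwise(self.depthwise(x)))

    def zero_out(self, channels):
        self.mask[channels] = 0.0  # "remove" these input channels and their edges
```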
" }, { "heading": "2.2 SEARCHING THE MOST PROMISING SPARSE MODEL", "text": "We search for the most promising sparse model satisfying a given sparsity ρ by zeroing out less important channels. Here, ρ indicates |W∗|/|W|, where |W| is the number of weight parameters of the one-shot model and |W∗| that of the final sparse model (0.0 ≤ ρ ≤ 1.0). As the criterion for evaluating the importance of channels, we adopt Taylor expansion (18). Algorithm 1 shows the outline of the evaluation. We denote the vector of entire input channels {xi} in the one-shot model as X and the length of X as |X|. Likewise, we denote the vector of entire gradients of X after a single minibatch as ∆X = {δxi}. When calculating gradients, we use the current sparse model W′, which is initially the same as W, and the parameters of W′ are not updated. X′ and ∆X′ are the vector of entire input channels and their gradients in the current sparse model, respectively. The saliency vector S has the same length as |X′| and is initialized with zeros. We consider m minibatches for the input dataset D. Then, we get X′ and ∆X′ in each minibatch and calculate the Taylor expansion using element-wise multiplication between both. The smaller xi ⊙ δxi is, the larger the value 1/(xi ⊙ δxi) is. The dimension of the value is reduced to a single value, which is then accumulated to the corresponding saliency value in S. Then, we normalize S by applying layer-wise L2-normalization.\nAlgorithm 1: Calculation of Taylor expansion for CNAS\n1: for each Dk ∈ [D1, · · · , Dm] do\n2:   X′, ∆X′ ← ForwardAndBackpropagation(Dk, W′)\n3:   S ← S + DimensionReduction(1/(X′ ⊙ ∆X′), |X′|)\n4: S ← Normalization(S)\nAfter calculating the saliency vector S, we gradually zero out the input channels {xi} having the largest values, i.e., the least important channels, among the remaining input channels. We let γ (0 < γ < 1) denote the ratio of zeroing out. We typically use γ = 0.1, which means removing 10% of the remaining input channels at each iteration. Thus, the number of input channels becomes 0.9|X| after the first iteration and 0.81|X| after the second iteration. We perform fine-tuning for a single epoch on the current sparse model W′ between iterations. As the iterations go on, the model becomes sparser and sparser. We stop the iterations when the sparsity of W′ reaches the given ρ. After finding the final sparse model W∗, we can initialize the parameters of W∗ and re-train the model (called CNAS-R), or just fine-tune the parameters of W∗ (called CNAS-W). A minimal sketch of this procedure is given below.\nWe incorporate spatial dropout (25) when training the one-shot model or the final sparse model in order to make the model more robust. We do not use the path dropout used in ENAS (20) and One-Shot (2) since it is too coarse to incorporate for our channel-level search. We also do not use conventional dropout (9) since it is too fine-grained to apply. Although One-Shot (2) considers the co-adaptation issue, in which zeroing out operations from the one-shot model can cause the quality of the model's prediction to degrade severely, we do not need to consider it since the final sparse model is obtained through gradually zeroing out by the ratio γ.
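The sketch below mirrors Algorithm 1 and the iterative zero-out loop; the hooks `compute_saliency`, `zero_out_fraction`, `fine_tune_one_epoch`, and `kept_ratio` are hypothetical, model-specific callables, and the absolute value inside the saliency is our own numerical guard (the paper writes 1/(xi ⊙ δxi)).

```python
import torch

def taylor_saliency_batch(acts, grads, eps: float = 1e-12):
    """Algorithm 1, line 3, for one minibatch.

    acts, grads: lists of (batch, c, h, w) input-channel activations {x_i}
    and their gradients {dx_i}, one tensor per layer. Returns one (c,) vector
    per layer: 1/(x * dx) reduced to a single value per channel, so a smaller
    |x * dx| yields a larger (less important) saliency."""
    out = []
    for x, dx in zip(acts, grads):
        taylor = (x * dx).sum(dim=(0, 2, 3))   # DimensionReduction over batch and space
        out.append(1.0 / (taylor.abs() + eps))
    return out

def layerwise_l2_normalize(saliency):
    return [s / (s.norm() + 1e-12) for s in saliency]

def search_sparse_model(model, compute_saliency, zero_out_fraction,
                        fine_tune_one_epoch, kept_ratio,
                        rho: float, gamma: float = 0.1):
    """Iterative zero-out loop of Section 2.2 (all callables hypothetical):
    repeatedly zero out the gamma-fraction of remaining channels with the
    LARGEST saliency, fine-tune for one epoch, stop once |W'|/|W| <= rho."""
    while kept_ratio(model) > rho:
        saliency = compute_saliency(model)          # Algorithm 1 over m minibatches
        zero_out_fraction(model, saliency, gamma)   # e.g. via MaskedSepConvEdge.zero_out
        fine_tune_one_epoch(model)
    return model
```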
We check the correlation between the one-shot model and the final sparse model in terms of performance (test error). We generate 27 pairs of one-shot models of three cells and five nodes per cell with different initialization and train them for 150 epochs. Then, we search a single final sparse model from each one-shot model and train them for 310 epochs. Figure 2(a) shows a strong correlation between the one-shot and the final sparse model in terms of test error. We let E(·) denote a test error. The X-axis shows E(W1) − E(W2), where W1 and W2 are a pair of one-shot models s.t. E(W1) > E(W2). The Y-axis shows E(W∗1) − E(W∗2), where W∗1 and W∗2 are the final sparse models of W1 and W2, respectively. There are 27 points in the figure, and only two points are located below 0.0 on the Y-axis. For the remaining 25 points, if W1 is better than W2, then W∗1 is also better than W∗2. The test error of the one-shot model is computed before architecture search. There is no fine-tuning for the one-shot model. The test error of the optimal model is computed after fine-tuning. The correlation coefficient between the X-axis and Y-axis is about 0.83. This means the way of searching for the final sparse model is stable." }, { "heading": "2.3 TOPOLOGICAL PROPERTIES OF THE FINAL SPARSE MODEL", "text": "We describe the topological properties of the final sparse model W∗ after channel-level architecture search. Table 1 shows the statistics of W∗ compared with those of the one-shot model for CIFAR-10. Due to the space limit, we show only the top three cells, the bottom three cells and two reduction cells among 20 cells.\nTable 1 (excerpt): Statistics of the final sparse model W∗ compared with the one-shot model.\nCell | |X| | |Y| | |W| | |X∗| | |W∗| | |d→x| | |z→x|\n14 | 5,760 | 1,440 | 984,960 | 5,233 | 904,329 | 2,537 | 2,696\n7 | 2,880 | 720 | 259,200 | 2,244 | 207,684 | 1,258 | 986\n3 | 1,440 | 360 | 77,760 | 284 | 25,740 | 170 | 114\n2 | 1,440 | 360 | 75,168 | 653 | 39,753 | 367 | 286\n1 | 1,440 | 360 | 72,576 | 905 | 48,501 | 531 | 374\nFrom the statistics, we address two properties. First, the input channels in reduction cells are not removed as much as in the other top and bottom cells. For example, in Cell 18, |X∗| is smaller than one-fifth of |X|. In contrast, in Cell 14, |X∗| = 5,233 is almost the same as |X| = 5,760. In reduction cells, the height and width of a channel are reduced by half, while the number of channels is doubled. As a result, the amount of information is reduced by half. Keeping input channels at reduction cells seems to compensate for the loss of information to achieve a lower error. Second, |z→x| is extremely low compared with |d→x| in the top cells, whereas |z→x| is similar to |d→x| in the bottom cells. The former means that there is almost no edge among the nodes in the top cells, and so the nodes are located horizontally, each of which is doing its own task. The latter means that there are a lot of edges among the nodes in the bottom cells, and so the nodes are located vertically and horizontally as in Figure 1, with the necessity of aggressive abstraction." }, { "heading": "3 EXPERIMENTS", "text": "We use CIFAR-10 (13) and ImageNet (6) for our experiments. For training the one-shot model on CIFAR-10, we use 150 epochs with Nesterov momentum (19) of 0.9. We use a cosine learning rate schedule (16) with the initial learning rate lmax = 0.05, the minimum learning rate lmin = 0.0001, the initial number of epochs T0 = 10, the multiplication factor Tmul = 2, and ℓ2 weight decay of 2×10−4. We train the final sparse model using the same setting as the one-shot model, except the number of epochs, which is set to 630. For ImageNet, we use the same final sparse model as for CIFAR-10, only after adding two more stem convolution layers and modifying the fully connected layer to handle the different number of outputs. We use 250 epochs for training the modified sparse model with Nesterov momentum 0.9 and learning rate 0.05, which is decayed by a factor of 0.98 after each epoch. All test errors in the results are the mean values of three evaluations.
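For reference, the warm-restart cosine schedule (16) with the constants above can be computed as follows; this is a standard SGDR sketch, not code from the paper.

```python
import math

def sgdr_lr(epoch: float, l_max: float = 0.05, l_min: float = 0.0001,
            t0: int = 10, t_mul: int = 2) -> float:
    """Cosine learning rate with warm restarts (Loshchilov & Hutter, 2017).
    Cycle lengths are T0, T0*Tmul, T0*Tmul^2, ... epochs."""
    t_i, start = t0, 0
    while epoch >= start + t_i:  # find the cycle containing `epoch`
        start += t_i
        t_i *= t_mul
    t_cur = epoch - start
    return l_min + 0.5 * (l_max - l_min) * (1.0 + math.cos(math.pi * t_cur / t_i))

# With T0=10 and Tmul=2, warm restarts occur at epochs 10, 30, 70, and 150.
schedule = [sgdr_lr(e) for e in range(150)]
```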
" }, { "heading": "3.1 EVALUATION OF CNAS VARYING THE NUMBER OF NODES AND OPERATIONS", "text": "We check the performance of CNAS models having the same number of parameters while varying the number of nodes per cell or varying the number of operations per node. We use the one-shot model of two normal cells and one reduction cell. Figure 3(a) shows the result for CNAS models which all have 1.1 M parameters but a different number of nodes per cell. The test error tends to decrease as the number of nodes per cell increases (i.e., the model becomes sparser), but slightly increases when the number of nodes is ten (i.e., too sparse). Figure 3(b) shows the result for CNAS models which all have 0.15 M parameters but a different number of operations (of the same type) per cell. Although we use two operations per cell to follow the convention of NASNet, the difference in test error among the three settings is quite small. This is mainly due to using a single type of operation (edge)." }, { "heading": "3.2 COMPARISON AMONG DIFFERENT PRUNING METHODS FOR CNAS", "text": "In this section, we evaluate the performance of no pruning, random pruning, and our pruning (in Section 2.2). Here, no pruning means making a small one-shot model of 0.15 M parameters and training the model without the pruning step. Random pruning means pruning each cell randomly with a given sparsity (e.g., 0.9, 0.5, 0.4, 0.3) such that the final models all have 0.15 M parameters. Thus, this setting does not have a different topology at each cell, but rather a similar topology. Our pruning also yields 0.15 M parameters, but a different topology at each cell as in Table 1. We use CIFAR-10 for the comparison. Figure 2(b) shows the results of the three settings. Among them, no pruning shows the worst performance, while our pruning of CNAS shows the best performance. This means that a sparse model in terms of channels improves the performance compared with a dense model, and at the same time, a different topology at each cell is important to achieve better performance. Just for reference, we add the results of MNASNet (24) and ENAS (20) having the same number of parameters of 0.15 M. Both No-prune and ENAS are dense models in terms of channels, but ENAS shows better performance than No-prune due to the various operations in its search space. MNASNet (24) shows slightly worse performance than No-prune since its architecture is the one optimized for ImageNet." }, { "heading": "3.3 COMPARISON WITH OTHER METHODS", "text": "In this section, we present the comparison results with the state-of-the-art methods for CIFAR-10 and ImageNet. Table 2 shows the model size, test errors and GPU days (for architecture search methods) for CIFAR-10. For CNAS, we measure the first two steps, i.e., training the one-shot model and searching for the final sparse model, as the GPU days for architecture search.
The CNAS model used in the comparison is the same as the final sparse model in Table 1. Overall, CNAS achieves very competitive performance with only 1.1 GPU days and a moderate number of parameters (4.6 M) among all the methods compared. CNAS with autoaugment (5) can further improve the performance to 2.28% test error.\nTable 3 shows the comparison results for ImageNet. The model size of CNAS slightly increases to 5.7 M due to adding two stem convolution layers to, and modifying the fully connected layer of, the final sparse model in Table 1. We denote this model simply as CNAS. Overall, CNAS achieves comparable performance with other methods but does not show very competitive performance as in CIFAR-10. This is probably because we use the same final sparse model obtained from CIFAR-10 for ImageNet due to the limit of evaluation time. Searching for and training a final sparse model directly on ImageNet may further improve the performance at the cost of more GPU days for architecture search." }, { "heading": "3.4 USING DENSENET-LIKE SEARCH SPACE FOR CNAS", "text": "In this section, we apply CNAS to a different shape of search space. In particular, we use the DenseNet-BC (11) search space instead of the NASNet search space. Figure 4 shows the diagram of the search space for CNAS. In Figure 4(a), each dense block consists of 19 bottleneck layers, and there are transition layers between dense blocks. In each bottleneck layer in Figure 4(b), zeroing out less important channels in red squares also removes their outgoing edges to the next layer {yi}. The inputs x are concatenated to the outputs z to form skip connections. The number of x increases as the bottleneck layer number increases, as in DenseNet. In Table 2, CNAS-R (DenseNet-BC) outperforms the original DenseNet-BC with the same number of parameters of 0.8 M. This means our CNAS method is effective not only in the NASNet search space, but also in different shapes of search space.\nWe note that the performance of CNAS-R (DenseNet-BC) with 4.6 M parameters is worse than that of CNAS-R using the NASNet search space in Table 2. This means the shape of the search space of NASNet itself is superior to that of DenseNet." }, { "heading": "4 RELATED WORK", "text": "We have briefly explained the recently proposed NAS methods according to the search algorithm in Section 1. Table 4 summarizes their characteristics in terms of not only the search algorithm, but also the search space, search range, and how candidate parameters are generated." }, { "heading": "5 CONCLUSIONS", "text": "In this paper, we proposed a channel-level neural architecture search (CNAS) method that considers channels instead of operations for the search space of NAS. It only uses a single fixed type of operation and instead focuses on searching for a good sparse architecture in terms of channels. The resulting sparse model has a different topology at each cell. In particular, the nodes in the bottom cells are located vertically and horizontally with the necessity of aggressive abstraction, but the ones in the top cells are located horizontally, doing their own tasks. For CIFAR-10, CNAS achieves 2.28% test error with 4.6 million parameters using the architecture searched for only 1.1 GPU days. We also showed that CNAS is effective not only in the NASNet search space, but also in different shapes of search space." } ]
2,019
CNAS: CHANNEL-LEVEL NEURAL ARCHITECTURE SEARCH
SP:b403e36027a1f260c7daead40764de7984c943ef
[ "This paper works on empirically demonstrating the connection between model connectivity and the lottery ticket hypothesis, which are individually explored in the literature. Here the model connectivity refers to the fact that SGD produces different solutions (from the randomness, such as data ordering) that are connected through model parameter transition paths of approximately equal loss/accuracy. The lottery ticket hypothesis tells that there exist sparse subnetworks of the corresponding full dense network which can attain as strong loss / accuracy as the full dense network. ", "This paper empirically presents a very interesting connection between two also very interesting phenomena (mode connectivity and lottery ticket hypothesis), while removing a previous limitation of the lottery ticket hypothesis on larger networks. through a good amount of experiments, the authors empirically showed these two phenomena co-occur together (i.e. matching networks are stable) and have positive correlation (i.e. the more “matching” the network the more “stable”), under different network architectures and datasets." ]
We introduce instability analysis, a framework for assessing whether the outcome of optimizing a neural network is robust to SGD noise. It entails training two copies of a network on different random data orders. If error does not increase along the linear path between the trained parameters, we say the network is stable. Instability analysis reveals new properties of neural networks. For example, standard vision models are initially unstable but become stable early in training; from then on, the outcome of optimization is determined up to linear interpolation. We leverage instability analysis to examine iterative magnitude pruning (IMP), the procedure underlying the lottery ticket hypothesis. On small vision tasks, IMP finds sparse matching subnetworks that can train in isolation from initialization to full accuracy, but it fails to do so in more challenging settings. We find that IMP subnetworks are matching only when they are stable. In cases where IMP subnetworks are unstable at initialization, they become stable and matching early in training. We augment IMP to rewind subnetworks to their weights early in training, producing sparse subnetworks of large-scale networks, including Resnet-50 for ImageNet, that train to full accuracy.
[]
[ { "authors": [ "Felix Draxler", "Kambis Veschgini", "Manfred Salmhofer", "Fred A Hamprecht" ], "title": "Essentially no barriers in neural network energy landscape", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The Lottery Ticket Hypothesis: Finding Sparse", "venue": "Trainable Neural Networks. In Int. Conf. Represent", "year": 2019 }, { "authors": [ "Trevor Gale", "Erich Elsen", "Sara Hooker" ], "title": "The state of sparsity in deep neural networks, 2019", "venue": null, "year": 1902 }, { "authors": [ "Timur Garipov", "Pavel Izmailov", "Dmitrii Podoprikhin", "Dmitry P Vetrov", "Andrew G Wilson" ], "title": "Loss surfaces, mode connectivity, and fast ensembling of dnns", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch SGD: training Imagenet in 1 hour, 2017", "venue": null, "year": 2017 }, { "authors": [ "Guy Gur-Ari", "Daniel A Roberts", "Ethan Dyer" ], "title": "Gradient descent happens in a tiny subspace", "venue": "arXiv preprint arXiv:1812.04754,", "year": 2018 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William Dally" ], "title": "Learning both weights and connections for efficient neural network", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Yihui He", "Ji Lin", "Zhijian Liu", "Hanrui Wang", "Li-Jia Li", "Song Han" ], "title": "Amc: Automl for model compression and acceleration on mobile devices", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Nitish Shirish Keskar", "Dheevatsa Mudigere", "Jorge Nocedal", "Mikhail Smelyanskiy", "Ping Tak Peter Tang" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Yann LeCun", "John S Denker", "Sara A Solla" ], "title": "Optimal brain damage", "venue": "In Advances in Neural Information Processing Systems,", "year": 1990 }, { "authors": [ "Namhoon Lee", "Thalaiyasingam Ajanthan", "Philip H.S. 
Torr" ], "title": "SNIP: Single-shot Network Pruning based on Connection Sensitivity", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Hao Li", "Asim Kadav", "Igor Durdanovic", "Hanan Samet", "Hans Peter Graf" ], "title": "Pruning filters for efficient convnets", "venue": "arXiv preprint arXiv:1608.08710,", "year": 2016 }, { "authors": [ "Zhiyuan Li", "Sanjeev Arora" ], "title": "An exponential learning rate schedule for deep learning", "venue": "arXiv preprint arXiv:1910.07454,", "year": 2019 }, { "authors": [ "Zhuang Liu", "Mingjie Sun", "Tinghui Zhou", "Gao Huang", "Trevor Darrell" ], "title": "Rethinking the Value of Network Pruning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ari S Morcos", "Haonan Yu", "Michela Paganini", "Yuandong Tian" ], "title": "One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers", "venue": null, "year": 2019 }, { "authors": [ "Vaishnavh Nagarajan", "J. Zico Kolter" ], "title": "Uniform convergence may be unable to explain generalization in deep learning, 2019. arXiv:1902.04742v2", "venue": "NeurIPS", "year": 2019 }, { "authors": [ "Russell Reed" ], "title": "Pruning algorithms-a survey", "venue": "IEEE transactions on Neural Networks,", "year": 1993 }, { "authors": [ "Leslie N Smith" ], "title": "Cyclical learning rates for training neural networks", "venue": "IEEE Winter Conference on Applications of Computer Vision (WACV),", "year": 2017 }, { "authors": [ "Leslie N. Smith", "Nicholay Topin" ], "title": "Super-convergence: Very fast training of residual networks using large learning rates, 2018", "venue": "URL https://openreview.net/forum?id=H1A5ztj3b", "year": 2018 }, { "authors": [ "Samuel L. Smith", "Quoc V. Le" ], "title": "A bayesian perspective on generalization and stochastic gradient descent", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Samuel L. Smith", "Pieter-Jan Kindermans", "Quoc V. Le" ], "title": "Don’t decay the learning rate, increase the batch size", "venue": "In International Conference on Learning Representations,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "The lottery ticket hypothesis (Frankle & Carbin, 2019) conjectures that neural networks contain sparse subnetworks that are capable of training in isolation from initialization to full accuracy. The sole empirical evidence in support of the lottery ticket hypothesis is a series of experiments using a procedure called iterative magnitude pruning (IMP). IMP returns a subnetwork of the original, randomly initialized network by training the network to completion, pruning the lowest-magnitude weights (Han et al., 2015), and resetting each remaining weight to its original initialization. On small networks for MNIST and CIFAR-10, IMP subnetworks can match the accuracy of the full network (we refer to such subnetworks as matching subnetworks) at sparsity levels far beyond those at which randomly reinitialized or randomly pruned subnetworks can do the same.\nThe lottery ticket hypothesis offers a new perspective on the role of overparameterization and raises the tantalizing prospect that there may exist much smaller neural networks that are capable of replacing the larger models we typically train today. Unfortunately, in more challenging settings, there is no empirical evidence that the lottery ticket hypothesis holds. IMP subnetworks of VGG and Resnet-style networks on CIFAR-10 and ImageNet perform no better than other kinds of sparse networks (Liu et al., 2019; Gale et al., 2019).\nIn this paper, we describe a new framework called instability analysis, which measures whether the outcome of optimizing a network is robust to SGD noise (in which case we call it stable). Instability analysis offers a range of new insights into the behavior of unpruned networks. For example, the outcome of optimization becomes stable to SGD noise early in training (3% for Resnet-20 on CIFAR-10 and 20% on Resnet-50 for ImageNet). Moreover, it distinguishes known cases where IMP succeeds and fails to find a matching subnetwork; namely, IMP subnetworks are only matching when they are stable. It also allows us to identify new scenarios where sparse, matching subnetworks emerge early in training in more challenging settings, including Resnet-50 and Inception-v3 on ImageNet. In doing so, our results demonstrate that instability analysis is a valuable scientific tool for investigating the behavior of neural networks.\nW0\nWk\nW 1T W 2 TInstability\nFigure 1: A diagram of instability analysis (text left).\nInstability analysis. Instability analysis is a technique to determine whether the outcome of optimization is robust to SGD noise. Figure 1 visualizes this process. We train two copies of the same network from initialization (W0) on different data orders (which models different samples of SGD noise). We then linearly interpolate (dashed line) between the trained weights (W 1T and W 2 T ) and examine the error along this path (blue curve). The instability of the network to SGD noise is the maximum increase in train or test error along this path (red line). We say a network is stable if error does not increase along the path, i.e., instability is 0. To examine instability at a later iteration k, we first train the network to iteration k (Wk) and make two copies afterwards. 
Instability is a property of a network with respect to an optimization procedure; we focus on the standard procedure prescribed for the networks we examine.\nInstability analysis assesses a linear form of mode connectivity, a phenomenon where the minima found by two networks are connected by a path of constant error. Draxler et al. (2018) and Garipov et al. (2018) show that the modes of standard vision networks trained from different initializations are connected by piece-wise linear paths of constant error or loss. Based on this work, we expect that our networks will be connected by such paths. However, the modes found by Draxler et al. and Garipov et al. are not connected by linear paths. The only extant example of linear mode connectivity is by Nagarajan & Kolter (2019), who train MLPs from the same initialization on disjoint subsets of MNIST and find that the resulting networks are connected by linear paths of constant test error. In contrast, we explore linear mode connectivity from points throughout training, we do so at a larger scale, and we focus on different samples of SGD noise rather than disjoint samples of data.\nResults. We begin by examining the instability of unpruned networks for MNIST, CIFAR-10, and ImageNet. All but the smallest MNIST network we study are unstable at initialization. However, by a point early in training (3% for Resnet-20 on CIFAR-10 and 20% for Resnet-50 on ImageNet), all networks become stable. In other words, from this point forward, the outcome of optimization is determined modulo linear interpolation. In fact, the entire trajectory of a stable network is so determined: when we train two copies of the network on different data orders, the states of the networks at each epoch are connected by linear paths over which test error does not increase.\nIn the lottery ticket context, we find that extremely sparse IMP subnetworks are matching only when they are stable, providing the first basis for understanding the mixed results in the literature. In doing so, we make a new connection between lottery ticket behavior and the optimization dynamics of neural networks. Inspired by our full network results, we modify IMP to rewind subnetwork weights to their values at iteration k rather than resetting them to initialization. For values of k that are early in training (in fact, earlier than the point at which the full networks become stable), IMP subnetworks become stable in all cases we consider. Correspondingly, they also become matching. At these sparsity levels, randomly reinitialized and randomly pruned networks are neither stable nor matching. This connection between stability and accuracy suggests that linear mode connectivity is fundamental to sparse neural networks found by IMP and, thereby, to our current knowledge of the lottery ticket hypothesis.\nContributions. We make the following contributions:\n• We introduce instability analysis, which measures the maximum increase in error along the linear path between minima found by training the same network on different data orders.\n• On a range of image classification benchmarks including Resnet-50 on ImageNet, we observe that networks become stable to SGD noise early in training.\n• We show that stable networks are stable throughout the training process.\n• We use instability analysis to distinguish successes and failures of IMP (the core algorithm behind the lottery ticket hypothesis) as identified in previous work.
Namely, extremely sparse IMP subnetworks are only matching when they are stable.\n• We augment IMP with rewinding to study subnetworks initialized after iteration 0. We show that IMP subnetworks become stable and matching early in training if not at initialization.\n• Using rewinding, we show how to find sparse, matching subnetworks in much larger settings than in previous work by setting the weights of IMP subnetworks to their values from early in training." }, { "heading": "2 PRELIMINARIES AND METHODOLOGY", "text": "Instability analysis via linear mode connectivity. Instability analysis evaluates whether the minima found when training two copies of a neural network on different samples of SGD noise (i.e., the random data order used during SGD) are linearly connected by a path over which error does not increase. The neural network in question could be randomly initialized ($W_0$ in Figure 1) or the result of $k$ training iterations ($W_k$). To perform instability analysis, we make two copies of the network and train them to completion with different random data orders ($W_T^1$ and $W_T^2$). We then linearly interpolate between the trained weights (dashed line) and compute the train or test error at each point (blue curve) to determine whether it increased (minima are not linearly connected) or did not increase (minima are linearly connected).\nFormally, we capture training with SGD (or a variant) by a function $A_{s \to t}: \mathbb{R}^d \times \mathcal{U} \to \mathbb{R}^d$, which maps weights $W_s$ at iteration $s$ and SGD randomness $u \sim \mathcal{U}$ to updated weights $W_t$ at iteration $t$ by training for $t - s$ steps (for $s, t \in \{1, \dots, T\}$ and $s < t$). Algorithm 1 describes our procedure:\nAlgorithm 1 Instability analysis from iteration $k$.\n1: Create a neural network with randomly initialized weights $W_0 \in \mathbb{R}^d$.\n2: Train $W_0$ to $W_k$ under SGD noise $u \sim \mathcal{U}$. That is, $W_k \leftarrow A_{0 \to k}(W_0, u)$.\n3: Train $W_k$ to $W_T^1$ under SGD noise $u_1 \sim \mathcal{U}$. That is, $W_T^1 \leftarrow A_{k \to T}(W_k, u_1)$.\n4: Train $W_k$ to $W_T^2$ under SGD noise $u_2 \sim \mathcal{U}$. That is, $W_T^2 \leftarrow A_{k \to T}(W_k, u_2)$.\n5: Evaluate $E(\alpha W_T^1 + (1 - \alpha) W_T^2)$ for $\alpha \in [0, 1]$.\nWe describe the result of linear interpolation (step 5) with a quantity that we term instability. Let $E(W)$ denote the train or test error of a network parameterized by $W$. Let $\bar{E} = \mathrm{mean}(E(W_T^1), E(W_T^2))$ be the average error of $W_T^1$ and $W_T^2$. Let $E_{max} = \sup_{\alpha \in [0, 1]} E(\alpha W_T^1 + (1 - \alpha) W_T^2)$ be the highest error when linearly interpolating between $W_T^1$ and $W_T^2$. The instability is $E_{max} - \bar{E}$ (red line in Figure 1). When instability $\approx 0$, the minima are linearly connected and the network is stable. In practice, we average the instability from three initializations and three data orders per initialization (nine combinations in total). We use 30 evenly-spaced values of $\alpha \in [0, 1]$.
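To make Algorithm 1 and the instability quantity concrete, the following is a minimal sketch (not the authors' code); error_fn, a routine returning train or test error for a flat weight vector, and the trained weight vectors are hypothetical stand-ins, and the handling of batch normalization statistics along the interpolation path is omitted.

```python
import numpy as np

def instability(w_1, w_2, error_fn, num_alphas=30):
    """Instability between two minima W_T^1, W_T^2 reached from a shared
    state at iteration k under different SGD noise (Algorithm 1, step 5)."""
    alphas = np.linspace(0.0, 1.0, num_alphas)
    errors = np.array([error_fn(a * w_1 + (1.0 - a) * w_2) for a in alphas])
    mean_endpoint = 0.5 * (errors[0] + errors[-1])  # average error of the two minima
    return errors.max() - mean_endpoint             # ~0 means linearly connected (stable)
```

In practice this value would be averaged over the nine initialization/data-order combinations described above.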
Networks and datasets. We study image classification networks on MNIST, CIFAR-10, and ImageNet as specified in Table 1. All hyperparameters listed are the standard values for these networks from reference implementations or prior work as cited in Table 1. The warmup and low variants of Resnet-20¹ and VGG-16 are adapted from hyperparameters chosen by Frankle & Carbin (2019).\n¹ Frankle & Carbin (2019) mistakenly refer to Resnet-20 as “Resnet-18,” which is a separate network." }, { "heading": "3 NEURAL NETWORK INSTABILITY TO SGD NOISE", "text": "In this section, we study whether the outcome of optimization becomes robust to SGD noise after a certain amount of training. Concretely, we perform instability analysis (Algorithm 1) on the standard networks in Table 1 from many points during training to understand when, if ever, networks become stable to SGD noise. We find that, although only Lenet is stable at initialization, every network becomes stable early in training.\nNeural networks are unstable at initialization. We begin by studying the instability of neural networks at initialization. We do so by training two copies of the same, randomly-initialized network under different samples of SGD noise (that is, Algorithm 1 with k = 0). Figure 2 shows the train error (purple) and test error (red) when linearly interpolating between the minima found by these copies. With the exception of Lenet on MNIST, none of the networks we study are stable at initialization. In fact, both train and test error rise to the point of random guessing when linearly interpolating between the minima found under different data orders. Lenet’s error does rise slightly, but the increase is a small fraction of a percentage point. We conclude that, in general, larger-scale image classification networks are not stable at initialization.\nStability improves early in training. Although nearly all networks are unstable at initialization, they will inevitably become stable at some point. In the limit, they will be stable by the end of training, and it seems reasonable to expect that the final few steps of SGD are too insignificant to cause the network to enter linearly unconnected minima. In this experiment, we ask how early neural networks become stable. In other words, after what point in training is the outcome of optimization determined modulo linear interpolation regardless of the sample of SGD noise? To explore this behavior, we train a single copy of the network for k iterations or epochs before making two copies that we train to completion on different data orders (Algorithm 1 with k ≥ 0). Figure 3 plots the instability of the networks for various values of k. We measure instability as the maximum error during interpolation (the peaks in Figure 2) minus the mean of the errors of the two networks (the endpoints in Figure 2). In all cases, instability decreases as k increases, culminating in networks that are stable (i.e., instability $\approx 0$). The iteration at which stability emerges is surprisingly early. For example, it occurs at approximately iteration 2000 for Resnet-20 and VGG-16; in other words, after 3% of training, SGD noise cannot affect the final minimum modulo linear interpolation. Stability occurs later for Resnet-50: about epoch 18 (20% into training).\nInstability is essentially identical when measured in terms of train or test error (although train instability is slightly higher than test instability for Resnet-50), indicating that the minimum becomes determined on both the train and test surfaces around the same time. Going forward, we present all results with respect to test error for simplicity.\nStable networks are linearly connected throughout training. Stable networks arrive at minima that are linearly connected, but do the trajectories they follow throughout training also have this property? In other words, when training two copies of the same network with different noise, is there a linear path over which test error does not increase connecting the states of the networks at each iteration? To study this behavior, we linearly interpolate between the networks at every epoch of training and compute the test error instability.
That is, we compute instability throughout training.\nFigure 4 plots instability throughout training for Resnet-20 and VGG-16 from different starting iterations k. For k = 0 (blue line), instability increases rapidly. In fact, it follows the same pattern as test error: as the test error of each network decreases, the maximum possible instability increases (since instability never exceeds random guessing). With larger values of k, instability increases more slowly throughout training. When k is sufficiently large that the networks are stable at the end of training, they are stable at every epoch of training (k = 2000, pink line). In other words, after iteration 2000, the networks follow identical optimization trajectories modulo linear interpolation.\nDiscussion. Our observations implicitly divide training into two phases: an initial, unstable phase in which the final “linearly connected” minimum is undetermined on account of SGD noise, and a subsequent, stable phase in which the final linearly connected minimum becomes determined. From this perspective, our observations contribute to a growing body of literature suggesting that training experiences a noisy initial phase and a less stochastic second phase. For example, the eigenspectrum of the Hessian settles into a bulk of small eigenvalues and a few large outlier eigenvalues after some amount of training (Gur-Ari et al., 2018), and networks trained with large batch sizes and high learning rates benefit from learning rate warmup during the first part of training (Goyal et al., 2017). One possible way to exploit our observations could be to explore changing aspects of the optimization process (e.g., learning rate schedule or optimizer), similarly to Goyal et al. (2017), once the network enters the stable phase in order to improve the performance of training; instability analysis makes it possible to evaluate the consequences of doing so.\nAs a scientific tool, we also believe instability analysis provides a framework for studying topics related to the scale and distribution of SGD noise, e.g., the relationship between batch size, learning rate, and generalization (Keskar et al., 2017; Smith & Le, 2018; Smith et al., 2018) and the efficacy of alternative learning rate schedules (Smith, 2017; Smith & Topin, 2018; Li & Arora, 2019)." }, { "heading": "4 INSTABILITY AND SPARSITY", "text": "We have long known that it is possible to prune neural networks after training, often removing 90% of connections or more with no reduction in accuracy after a small amount of additional training (e.g., LeCun et al., 1990; Reed, 1993; Han et al., 2015; Gale et al., 2019; He et al., 2018). However, sparse networks are more difficult to train from scratch. At the most extreme levels of sparsity achievable by pruning, sparse networks trained in isolation generally reach lower test accuracy than dense networks (Han et al., 2015; Li et al., 2016; Liu et al., 2019; Frankle & Carbin, 2019).\nHowever, there is a known class of networks that remains accurate at these sparsity levels: winning lottery tickets. On small vision networks, iterative magnitude pruning (IMP) retroactively finds sparse subnetworks that were capable of training in isolation to full accuracy (Frankle & Carbin, 2019); we refer to subnetworks with this capability as matching subnetworks. The existence of winning lottery tickets raises the possibility that we might be able to replace conventional, dense networks with sparser subnetworks, creating new opportunities to improve the performance of training.
However, in more challenging settings, subnetworks found by IMP with k = 0 are not matching at particularly high sparsities and perform no better than other subnetworks (Liu et al., 2019; Gale et al., 2019). In these contexts, there is no evidence that the lottery ticket hypothesis holds.\nMotivated by the possibility of training more efficient networks and a desire to explain the successes and failures of IMP, we study the relationship between instability and the accuracy of extremely sparse neural networks. Our central finding is that, although the accuracy of full networks in Section 3 seems unaffected by instability, the sparsest IMP subnetworks are matching only when they are stable. In other words, when SGD noise is sufficient to change the minimum that an IMP network finds (up to linear interpolation), test accuracy is lower. Randomly reinitialized and randomly pruned subnetworks are always both unstable and non-matching at all sparsity levels we consider." }, { "heading": "4.1 METHODOLOGY", "text": "Iterative magnitude pruning. Iterative magnitude pruning (IMP) is a procedure to retroactively find a subnetwork of the state of the full network at iteration k of training. As outlined in Algorithm 2 (and sketched in code after it), IMP trains a network to completion, prunes weights with the lowest magnitudes globally, and rewinds the remaining weights back to their values at iteration k. The result is a subnetwork $(W_k, m)$ where $W_k \in \mathbb{R}^d$ is the state of the full network at iteration $k$ and $m \in \{0, 1\}^d$ is a fixed binary vector that, when multiplied element-wise with $W_k$, produces the pruned network $m \odot W_k$. We can either run IMP iteratively (training, pruning 20% of weights (Han et al., 2015; Frankle & Carbin, 2019), rewinding, and repeating until we reach a target sparsity) or in one-shot (pruning to the target sparsity in a single step). We use one-shot pruning on ImageNet networks for efficiency and iterative pruning in all other cases (Table 1). Frankle & Carbin (2019) only study rewinding to iteration 0; one of our contributions is to generalize IMP to any rewinding iteration k. When training a subnetwork from iteration k, we also rewind the learning rate schedule to its state at iteration k.\nAlgorithm 2 Iterative Magnitude Pruning (IMP) with rewinding to iteration $k$ and $N$ iterations.\n1: Create a neural network with randomly initialized weights $W_0 \in \mathbb{R}^d$ and initial pruning mask $m = 1^d$.\n2: Train $W_0$ to $W_k$ under SGD noise $u \sim \mathcal{U}$. That is, $W_k \leftarrow A_{0 \to k}(W_0, u)$.\n3: for $n \in \{1, \dots, N\}$ do\n4: Train $m \odot W_k$ to $m \odot W_T$ under SGD noise $u' \sim \mathcal{U}$. That is, $W_T \leftarrow A_{k \to T}(m \odot W_k, u')$.\n5: Prune the remaining entries with the lowest magnitudes from $W_T$. Let $m[i] = 0$ if $W_T[i]$ is pruned.\n6: Return $m, W_k$
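The following sketch of Algorithm 2 is illustrative only, not the authors' implementation; the train helper, standing in for $A_{k \to T}$ on the masked network, the total iteration count T, and the flat weight vectors are hypothetical stand-ins.

```python
import numpy as np

def imp_with_rewinding(w_0, k, T, train, prune_frac=0.2, n_rounds=10):
    # train(w, mask, start, end): hypothetical helper that trains the masked
    # network from iteration `start` to iteration `end` and returns the
    # final weights.
    mask = np.ones_like(w_0)
    w_k = train(w_0, mask, start=0, end=k)             # W_k <- A_{0->k}(W_0, u)
    for _ in range(n_rounds):
        w_T = train(mask * w_k, mask, start=k, end=T)  # rewind, then retrain
        alive = np.flatnonzero(mask)                   # indices still unpruned
        n_prune = int(prune_frac * alive.size)
        lowest = alive[np.argsort(np.abs(w_T[alive]))[:n_prune]]
        mask[lowest] = 0.0                             # prune lowest magnitudes globally
    return mask, w_k                                   # subnetwork rewound to iteration k

# With k = 0 this reduces to the original lottery ticket procedure of
# resetting surviving weights to their values at initialization.
```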
Sparsity levels. Although we are interested in the behavior of networks at all sparsities, computational limits force us to focus on a specific sparsity level.² In light of these restrictions, we focus on the highest sparsities for which IMP returns a matching network at any rewinding iteration k. The densities we examine are in Table 1, and Appendix A explains these choices. Doing so provides the best contrast between sparse networks that are matching and (1) the full, overparameterized neural networks and (2) other classes of sparse networks.\n² IMP entails training a network at least a dozen times to reach high levels of sparsity, and instability analysis requires training each of these networks a further nine times (three data orders and three kinds of sparsity) for many rewinding iterations. For rigor, we replicate each experiment three times with different initializations.\nExperimental approach. We study the relationship between stability and accuracy in extremely sparse subnetworks uncovered by IMP. IMP produces particularly sparse matching subnetworks and is the algorithm behind current lottery ticket results, so it merits close examination for both better scientific understanding and potential practical lessons for training sparse networks. As a basis for comparison, we also examine two kinds of subnetworks that are not matching at the sparsities we consider: (1) IMP subnetworks that are randomly reinitialized and (2) subnetworks found by randomly pruning weights rather than pruning those with the lowest magnitudes. We exploit the fact that not all IMP subnetworks are matching: we contrast settings where IMP succeeds and fails to further understand the conditions under which IMP subnetworks are matching." }, { "heading": "4.2 EXPERIMENTS", "text": "IMP subnetworks are matching at initialization only when stable. We begin by studying sparse subnetworks trained in isolation from initialization (k = 0). As noted previously, not all IMP subnetworks are matching at the sparsity levels we consider for k = 0. Figure 5 shows the accuracy of the IMP subnetworks (blue) across all levels of sparsity for each of the hyperparameters in Table 1 (alongside randomly pruned subnetworks in orange and randomly reinitialized subnetworks in green for comparison). On Lenet, IMP subnetworks are matching at sparsities well beyond those at which other subnetworks are matching. The same is true for variants of Resnet-20 and VGG-16 with lower learning rates or learning rate warmup, changes proposed by Frankle & Carbin (2019) specifically to make it possible for IMP to find matching subnetworks. In contrast, IMP subnetworks of Resnet-50, Inception-v3, and standard configurations of Resnet-20 and VGG-16 perform similarly to randomly reinitialized and randomly pruned subnetworks.\nIn Figure 6, we analyze the instability of these subnetworks. At the sparsity levels we consider, IMP subnetworks are matching only when they are stable. The IMP subnetworks of Lenet, Resnet-20 (low, warmup), and VGG-16 (low, warmup) are stable and matching, while no other IMP subnetworks have either property. The low and warmup experiments are notable because these hyperparameters were selected by Frankle & Carbin (2019) to make it possible for IMP to find matching subnetworks, without awareness that they also improve stability. This inadvertent causal experiment adds further evidence of a connection between instability and accuracy in IMP subnetworks.\nWith the exception of Lenet, no randomly reinitialized or randomly pruned subnetworks are stable or matching at these levels of sparsity. On Lenet, these subnetworks are not matching but test error only rises slightly when interpolating. For all other networks we consider, the error of these subnetworks approaches or reaches that of random guessing when interpolating.\nIMP subnetworks become stable and matching early in training. In the previous experiment, we saw that the IMP subnetworks are matching only when they are stable to SGD noise. In Section 3, we observed that unpruned networks become stable to SGD noise only after a certain amount of training.
In this experiment, we combine these observations: we study whether IMP subnetworks become stable during training and, if so, whether improved accuracy follows. To do so, we examine the sparse subnetworks that result from training unpruned networks for k steps and subsequently applying pruning masks (and possibly reinitializing). We find these masks by running IMP with rewinding: we train the full network to completion, prune, and rewind each remaining weight to its value at step k. We then run standard instability analysis on these sparse networks from iteration k.\n[Figure 6: test error (%) vs. interpolation coefficient for subnetworks trained from k = 0 on different data orders; panels: Resnet-50 (30.0%), Resnet-20 (16.8%), Resnet-20 Low (8.6%), Resnet-20 Warmup (6.9%), Lenet (3.5%), Inception-v3 (30.0%), VGG-16 (1.5%), VGG-16 Low (5.5%), VGG-16 Warmup (1.5%); legend: Random Reinit, Random Pruning, IMP.]\nThe blue dots in Figure 7 show the instability (rows 1 and 3) and test accuracy (rows 2 and 4) when rewinding IMP subnetworks to various points early in training. Those subnetworks that are unstable when rewound to iteration 0 (Resnet-20, VGG-16, Resnet-50, Inception-v3) become stable when rewound to points slightly later in training. IMP subnetworks of Resnet-20, VGG-16, and Resnet-50 become stable at about iteration 500 (0.8% into training), iteration 1000 (1.6%), and epoch 4 (4.4%), respectively. Stability and accuracy of these sparse IMP subnetworks continue to correlate. Test error decreases alongside instability, with IMP subnetworks reaching the performance of the unpruned networks (gray lines) approximately when they become stable. IMP subnetworks that were matching and stable at iteration 0 generally remain so at other rewinding points, although Resnet-20 low and VGG-16 low experience increased test error at the latest rewinding points we consider.\nIMP subnetworks become stable at least as early as the unpruned networks (red) and much earlier for Resnet-50 (epoch 4 vs. 18). In contrast, randomly pruned subnetworks (orange) and randomly reinitialized IMP subnetworks (green) are unstable and non-matching at every rewinding iteration (with Lenet again the sole exception). We believe these subnetworks will eventually become stable later on; in some cases, instability of randomly pruned subnetworks decreases at the latest rewinding points we consider. This behavior suggests a potential broader link between subnetwork stability and accuracy: IMP subnetworks are matching and maintain or improve upon the stability behavior of the full networks, while other subnetworks are less accurate and become stable later if at all."
}, { "heading": "4.3 DISCUSSION", "text": "The “lottery ticket hypothesis.” The lottery ticket hypothesis (Frankle & Carbin, 2019) conjectures that any “randomly initialized, dense neural network contains a subnetwork that—when trained in isolation—matches the accuracy of the original network.” The authors support this hypothesis by using IMP to find matching subnetworks at initialization in small vision networks. However, followup studies show (Liu et al., 2019; Gale et al., 2019) and we confirm that IMP does not find matching subnetworks in more challenging settings. We use instability analysis to distinguish the successes and failures of IMP as identified in previous work. In doing so, we make a new connection between the lottery ticket hypothesis and the optimization dynamics of neural networks.\nMoreover, by augmenting IMP with rewinding, we show how to find sparse, matching subnetworks in much larger settings than in previous work, albeit with subnetworks from early in training rather than at initialization. Our technique has already been adopted to create trainable subnetworks that transfer to new settings (Morcos et al., 2019), as a pruning method in its own right (Anonymous, 2020a), and to further study the lottery ticket hypothesis (Anonymous, 2020e;c;g;f;b).\nPruning. On larger-scale networks and tasks, we find that IMP subnetworks at extreme sparsities only become stable and matching after the full network has been trained for a small number of iterations or epochs. Recent methods have explored pruning neural networks at initialization (Lee et al., 2019; Anonymous, 2020d), but our results suggest that the best time to prune may be slightly later in training. By that same token, most modern pruning methods only begin to sparsify networks late in training or after training (Han et al., 2015; Gale et al., 2019; He et al., 2018). In these cases, the fact that there are matching subnetworks early in training suggests that there is potentially a substantial unexploited opportunity to prune neural networks much earlier than current methods.\nSGD noise and overparameterization. While dense neural networks train to full accuracy regardless of their stability, sparse networks in our experiments are only matching when they are stable. Although our results speak only to specific kinds of sparse networks (IMP subnetworks and our randomly reinitialized and randomly pruned baselines) at particularly extreme sparsity levels, they suggest a possible broader relationship between instability and accuracy of sparse networks. It is possible that sparse networks, which have fewer parameters than their dense counterparts, are less robust to instability during the early part of training." }, { "heading": "A SELECTING EXTREME SPARSITY LEVELS FOR IMP SUBNETWORKS", "text": "In this appendix, we describe how we select the sparsity level that we examine for each IMP subnetwork. For each network and hyperparameter configuration, our goal is to study the most extreme sparsity level at which matching subnetworks are known to exist early in training. To do so, we use IMP to generate subnetworks at many different sparsities for many different rewinding iterations (specifically, all of the rewinding iterations Figure 7). We then select the most extreme sparsity level at which any rewinding iteration produces a matching subnetwork.\nFigure 8 plots the maximum accuracy found by any rewinding iteration in red. The black line is the accuracy of the unpruned network to one standard deviation. 
For each network, we select the most extreme sparsity for which the red and black lines overlap. As a basis for comparison, Figure 8 also includes all of the other lines from Figure 5: the result of performing IMP with k = 0 (blue line), random pruning (orange line), and random reinitialization of the IMP subnetworks with k = 0 (green line).\nNote that, for computational reasons, Resnet-50 and Inception-v3 are pruned using one-shot pruning, meaning the networks are pruned to the target sparsity all at once. All other networks are pruned using iterative pruning, meaning the networks are pruned by 20% after each iteration of IMP until they reach the target sparsity.\nB INTERPOLATION DATA FOR UNPRUNED NETWORKS\nIn this appendix, we present the interpolation data for the instability analysis on the unpruned networks in Section 3." }, { "heading": "B.1 TEST ERROR", "text": "These graphs plot the test error when linearly interpolating for select values of k for the networks in Figure 3.\n[Plots: test error vs. interpolation for the full networks Lenet (100.0%), Resnet-20 (100.0%), VGG-16 (100.0%), and Resnet-50 (100.0%); curves for k = 0 through k = 2K iterations (k = 0 through 20 epochs for Resnet-50).]" }, { "heading": "B.2 TRAIN ERROR", "text": "These graphs plot the train error when linearly interpolating for select values of k for the networks in Figure 3.\n[Plots: train error vs. interpolation for the same four full networks and the same values of k.]\nC INTERPOLATION DATA FOR SPARSE NETWORKS\nIn this appendix, we present the interpolation data for the instability analysis on the sparse networks in Section 4." }, { "heading": "C.1 TEST ERROR OF IMP SUBNETWORKS", "text": "These graphs plot the test error when linearly interpolating for select values of k for the IMP subnetworks in Figure 7.
Percents in all figures are densities—the percent of weights remaining after pruning.\n[Plots: test error vs. interpolation for the IMP subnetworks Lenet (3.5%), Resnet-50 (30.0%), Inception-v3 (30.0%), Resnet-20 (16.8%), Resnet-20 Low (8.6%), Resnet-20 Warmup (6.9%), VGG-16 (1.5%), VGG-16 Low (5.5%), and VGG-16 Warmup (1.5%); curves for several rewinding iterations k.]" }, { "heading": "C.2 TEST ERROR OF RANDOMLY PRUNED SUBNETWORKS", "text": "These graphs plot the test error when linearly interpolating for select values of k for the randomly pruned subnetworks in Figure 7. Percents in all figures are densities—the percent of weights remaining after pruning.\n[Plots: test error vs. interpolation for randomly pruned subnetworks at the same nine density levels; curves for several rewinding iterations k.]" }, { "heading": "C.3 TEST ERROR OF RANDOMLY REINITIALIZED IMP SUBNETWORKS", "text": "These graphs plot the test error when linearly interpolating for select values of k for the randomly reinitialized IMP subnetworks in Figure 7.
Percents in all figures are densities—the percent of weights remaining after pruning.\n[Plots: test error vs. interpolation for randomly reinitialized IMP subnetworks Lenet (3.5%), Resnet-50 (30.0%), Resnet-20 (16.8%), Resnet-20 Low (8.6%), Resnet-20 Warmup (6.9%), VGG-16 (1.5%), VGG-16 Low (5.5%), and VGG-16 Warmup (1.5%); curves for several rewinding iterations k.]" }, { "heading": "D L2 DISTANCES FOR UNPRUNED NETWORKS", "text": "In this appendix, we present the L2 distances between pairs of full networks trained on different data orders from iteration k, the experiment in Section 3. This data parallels Figure 3. We do not yet have L2 distance data for the ImageNet networks, although we plan to add it to the next version of the paper.\n[Plots: L2 distance vs. rewinding iteration (log scale) for Lenet, Resnet-20, and VGG-16.]\nThe L2 distance between the networks decreases linearly as k increases. Interestingly, we observe no clear relationship between the L2 distance and the network instability. For example, there does not appear to be a critical L2 distance threshold that is crossed when the networks become stable. This is in contrast to our observations in Appendix E, where the L2 distance between IMP networks correlates with instability, dropping to a lower value when the subnetworks become stable." }, { "heading": "E L2 DISTANCES FOR SPARSE NETWORKS", "text": "In this appendix, we present the L2 distances between pairs of sparse networks trained on different data orders from iteration k, the experiment in Section 4. This data parallels Figure 7.
We do not yet have L2 distance data for the ImageNet networks, although we plan to add it to the next version of the paper.\n[Plots: L2 distance vs. rewinding iteration (log scale) for the sparse subnetworks Lenet (3.5%), Resnet-20 (16.8%), Resnet-20 Low (8.6%), Resnet-20 Warmup (6.9%), VGG-16 (1.5%), VGG-16 Low (5.5%), and VGG-16 Warmup (1.5%).]\nThe L2 distance between IMP subnetworks follows the same pattern as instability. When the network is unstable, the L2 distance plateaus at a higher level, the same level as for randomly reinitialized and randomly pruned networks. As instability decreases, L2 distance also decreases. When the subnetwork becomes stable, L2 distance plateaus at a lower level than for the randomly reinitialized and randomly pruned networks. Importantly, this lower level is still non-zero. These results contrast with those in Appendix D, where we do not observe a relationship between instability and L2 distance between the full networks." } ]
2019
null
SP:2237245aeb115eb318e447d63ab3a4614d8eec06
[ "This paper proposes a way to compress past hidden states for modeling long sequences. Attention is used to query the compressed representation. The authors introduce several methods for compression such as convolution, pooling etc. The outcome is a versatile model that enables long-range sequence modeling, achieving strong results on not only language model tasks but also RL and speech. For testing and evaluating the modeling of really long context sequence modeling, the authors introduce PG-19, a new benchmark based on Project Gutenberg narratives. ", "This paper investigates a so-called \"compressive transformer\" approach. The idea is to compress distant past memories into a coarse-grained representation while keeping a fine-grained representation for close past memories. A variety of compression techniques and training strategies have been investigated in the paper and verified using tasks from multiple domains including language modeling, speech synthesis and reinforcement learning. Particularly, the authors propose a new benchmark PG-19 for long-term sequence modeling. " ]
We present the Compressive Transformer, an attentive sequence model which compresses past memories for long-range sequence learning. We find the Compressive Transformer obtains state-of-the-art language modelling results in the WikiText-103 and Enwik8 benchmarks, achieving 17.1 ppl and 0.97 bpc respectively. We also find it can model high-frequency speech effectively and can be used as a memory mechanism for RL, demonstrated on an object matching task. To promote the domain of long-range sequence learning, we propose a new open-vocabulary language modelling benchmark derived from books, PG-19.
[ { "affiliations": [], "name": "Jack W. Rae" }, { "affiliations": [], "name": "Anna Potapenko" }, { "affiliations": [], "name": "Siddhant M. Jayakumar" }, { "affiliations": [], "name": "Chloe Hillier" }, { "affiliations": [], "name": "Timothy P. Lillicrap" } ]
[ { "authors": [ "R. Al-Rfou", "D. Choe", "N. Constant", "M. Guo", "L. Jones" ], "title": "Character-level language modeling with deeper self-attention", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "A. Baevski", "M. Auli" ], "title": "Adaptive input representations for neural language modeling", "venue": "arXiv preprint arXiv:1809.10853,", "year": 2019 }, { "authors": [ "D. Bahdanau", "K. Cho", "Y. Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "arXiv preprint arXiv:1409.0473,", "year": 2014 }, { "authors": [ "S. Bai", "J.Z. Kolter", "V. Koltun" ], "title": "Convolutional sequence modeling revisited, 2018a", "venue": "URL https://openreview.net/forum?id=rk8wKk-R-", "year": 2018 }, { "authors": [ "S. Bai", "J.Z. Kolter", "V. Koltun" ], "title": "Trellis networks for sequence modeling", "venue": "arXiv preprint arXiv:1810.06682,", "year": 2018 }, { "authors": [ "D.M. Blei", "A.Y. Ng", "M.I. Jordan" ], "title": "Latent dirichlet allocation", "venue": "J. Mach. Learn. Res.,", "year": 2003 }, { "authors": [ "C. Chelba", "T. Mikolov", "M. Schuster", "Q. Ge", "T. Brants", "P. Koehn", "T. Robinson" ], "title": "One billion word benchmark for measuring progress in statistical language modeling", "venue": "arXiv preprint arXiv:1312.3005,", "year": 2013 }, { "authors": [ "R. Child", "S. Gray", "A. Radford", "I. Sutskever" ], "title": "Generating long sequences with sparse transformers", "venue": "arXiv preprint arXiv:1904.10509,", "year": 2019 }, { "authors": [ "J. Chung", "S. Ahn", "Y. Bengio" ], "title": "Hierarchical multiscale recurrent neural networks", "venue": "arXiv preprint arXiv:1609.01704,", "year": 2016 }, { "authors": [ "Z. Dai", "Z. Yang", "Y. Yang", "W.W. Cohen", "J. Carbonell", "Q.V. Le", "R. Salakhutdinov" ], "title": "Transformerxl: Attentive language models beyond a fixed-length context", "venue": null, "year": 1901 }, { "authors": [ "Y.N. Dauphin", "A. Fan", "M. Auli", "D. Grangier" ], "title": "Language modeling with gated convolutional networks", "venue": "arXiv preprint arXiv:1612.08083,", "year": 2016 }, { "authors": [ "J. Devlin", "M.-W. Chang", "K. Lee", "K. Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "L. Espeholt", "H. Soyer", "R. Munos", "K. Simonyan", "V. Mnih", "T. Ward", "Y. Doron", "V. Firoiu", "T. Harley", "I. Dunning" ], "title": "Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "E. Grave", "A. Joulin", "N. Usunier" ], "title": "Improving neural language models with a continuous cache", "venue": "arXiv preprint arXiv:1612.04426,", "year": 2016 }, { "authors": [ "A. Graves" ], "title": "Generating sequences with recurrent neural networks", "venue": "arXiv preprint arXiv:1308.0850,", "year": 2013 }, { "authors": [ "A. Graves", "G. Wayne", "I. Danihelka" ], "title": "Neural turing machines", "venue": "arXiv preprint arXiv:1410.5401,", "year": 2014 }, { "authors": [ "A. Graves", "G. Wayne", "M. Reynolds", "T. Harley", "I. Danihelka", "A. Grabska-Barwińska", "S.G. Colmenarejo", "E. Grefenstette", "T. Ramalho", "J. Agapiou" ], "title": "Hybrid computing using a neural network with dynamic external memory", "venue": null, "year": 2016 }, { "authors": [ "F. Hill", "A. Bordes", "S. 
Chopra", "J. Weston" ], "title": "The goldilocks principle: Reading children’s books with explicit memory representations", "venue": "arXiv preprint arXiv:1511.02301,", "year": 2015 }, { "authors": [ "S. Hochreiter", "J. Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "A. Holtzman", "J. Buys", "M. Forbes", "Y. Choi" ], "title": "The curious case of neural text degeneration", "venue": "arXiv preprint arXiv:1904.09751,", "year": 2019 }, { "authors": [ "M. Hutter" ], "title": "The human knowledge compression contest", "venue": "URL http://prize. hutter1. net,", "year": 2012 }, { "authors": [ "N. Kalchbrenner", "L. Espeholt", "K. Simonyan", "A. v. d. Oord", "A. Graves", "K. Kavukcuoglu" ], "title": "Neural machine translation in linear time", "venue": "arXiv preprint arXiv:1610.10099,", "year": 2016 }, { "authors": [ "D.P. Kingma", "J. Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "T. Kočiskỳ", "J. Schwarz", "P. Blunsom", "C. Dyer", "K.M. Hermann", "G. Melis", "E. Grefenstette" ], "title": "The narrativeqa reading comprehension challenge", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "B. Krause", "L. Lu", "I. Murray", "S. Renals" ], "title": "Multiplicative lstm for sequence modelling", "venue": "arXiv preprint arXiv:1609.07959,", "year": 2016 }, { "authors": [ "B. Krause", "E. Kahembwe", "I. Murray", "S. Renals" ], "title": "Dynamic evaluation of transformer language models", "venue": "CoRR, abs/1904.08378,", "year": 2019 }, { "authors": [ "G. Lample", "A. Sablayrolles", "M. Ranzato", "L. Denoyer", "H. Jégou" ], "title": "Large memory layers with product keys", "venue": "arXiv preprint arXiv:1907.05242,", "year": 2019 }, { "authors": [ "S. Merity", "C. Xiong", "J. Bradbury", "R. Socher" ], "title": "Pointer sentinel mixture models", "venue": "arXiv preprint arXiv:1609.07843,", "year": 2016 }, { "authors": [ "T. Mikolov", "M. Karafiát", "L. Burget", "J. Černockỳ", "S. Khudanpur" ], "title": "Recurrent neural network based language model", "venue": "In Eleventh Annual Conference of the International Speech Communication Association,", "year": 2010 }, { "authors": [ "A. Oord", "Y. Li", "I. Babuschkin", "K. Simonyan", "O. Vinyals", "K. Kavukcuoglu", "G. Driessche", "E. Lockhart", "L. Cobo", "F. Stimberg" ], "title": "Parallel wavenet: Fast high-fidelity speech synthesis", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "A. v. d. Oord", "S. Dieleman", "H. Zen", "K. Simonyan", "O. Vinyals", "A. Graves", "N. Kalchbrenner", "A. Senior", "K. Kavukcuoglu" ], "title": "Wavenet: A generative model for raw audio", "venue": "arXiv preprint arXiv:1609.03499,", "year": 2016 }, { "authors": [ "D. Paperno", "G. Kruszewski", "A. Lazaridou", "Q. Pham", "R. Bernardi", "S. Pezzelle", "M. Baroni", "G. Boleda", "R. Fernández", "K. Erk" ], "title": "The lambada dataset: Word prediction requiring a broad discourse context", "venue": "Association for Computational Linguistics,", "year": 2016 }, { "authors": [ "J. Rae", "J.J. Hunt", "I. Danihelka", "T. Harley", "A.W. Senior", "G. Wayne", "A. Graves", "T. Lillicrap" ], "title": "Scaling memory-augmented neural networks with sparse reads and writes", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "J.W. Rae", "C. Dyer", "P. Dayan", "T.P. 
Lillicrap" ], "title": "Fast parametric learning with activation memorization", "venue": "arXiv preprint arXiv:1803.10049,", "year": 2018 }, { "authors": [ "B.A. Richards", "P.W. Frankland" ], "title": "The persistence and transience of memory", "venue": null, "year": 2017 }, { "authors": [ "D.E. Rumelhart", "G.E. Hinton", "R.J. Williams" ], "title": "Learning representations by back-propagating errors", "venue": null, "year": 1986 }, { "authors": [ "A. Santoro", "R. Faulkner", "D. Raposo", "J. Rae", "M. Chrzanowski", "T. Weber", "D. Wierstra", "O. Vinyals", "R. Pascanu", "T. Lillicrap" ], "title": "Relational recurrent neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "M. Shoeybi", "M. Patwary", "R. Puri", "P. LeGresley", "J. Casper", "B. Catanzaro" ], "title": "Megatron-lm: Training multi-billion parameter language models using model parallelism, 2019", "venue": null, "year": 2019 }, { "authors": [ "S. Smith", "P. jan Kindermans", "C. Ying", "Q.V. Le" ], "title": "Don’t decay the learning rate, increase the batch size", "venue": null, "year": 2018 }, { "authors": [ "S. Sukhbaatar", "E. Grave", "P. Bojanowski", "A. Joulin" ], "title": "Adaptive attention span in transformers", "venue": "arXiv preprint arXiv:1905.07799,", "year": 2019 }, { "authors": [ "A. Vaswani", "N. Shazeer", "N. Parmar", "J. Uszkoreit", "L. Jones", "A.N. Gomez", "Ł. Kaiser", "I. Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "F. Wu", "A. Fan", "A. Baevski", "Y.N. Dauphin", "M. Auli" ], "title": "Pay less attention with lightweight and dynamic convolutions", "venue": null, "year": 1901 }, { "authors": [ "Z. Yang", "Z. Dai", "Y. Yang", "J. Carbonell", "R. Salakhutdinov", "Q.V. Le" ], "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "venue": null, "year": 1906 }, { "authors": [ "L. Zhou", "Y. Zhou", "J.J. Corso", "R. Socher", "C. Xiong" ], "title": "End-to-end dense video captioning with masked transformer", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Y. Zhu", "R. Kiros", "R. Zemel", "R. Salakhutdinov", "R. Urtasun", "A. Torralba", "S. Fidler" ], "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "J.G. Zilly", "R.K. Srivastava", "J. Koutnı́k", "J. Schmidhuber" ], "title": "Recurrent highway networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Dai" ], "title": "2019) for a detailed discussion). The Compressive Transformer now has a maximum temporal range", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Humans have a remarkable ability to remember information over long time horizons. When reading a book, we build up a compressed representation of the past narrative, such as the characters and events that have built up the story so far. We can do this even if they are separated by thousands of words from the current text, or long stretches of time between readings. During daily life, we make use of memories at varying time-scales: from locating the car keys, placed in the morning, to recalling the name of an old friend from decades ago. These feats of memorisation are not achieved by storing every sensory glimpse throughout one’s lifetime, but via lossy compression. We aggressively select, filter, or integrate input stimuli based on factors of surprise, perceived danger, or repetition — amongst other signals (Richards and Frankland, 2017).\nMemory systems in artificial neural networks began with very compact representations of the past. Recurrent neural networks (RNNs, Rumelhart et al. (1986)) learn to represent the history of observations in a compressed state vector. The state is compressed because it uses far less space than the history of observations — the model only preserving information that is pertinent to the optimization of the loss. The LSTM (Hochreiter and Schmidhuber, 1997) is perhaps the most ubiquitous RNN variant; it uses learned gates on its state vector to determine what information is stored or forgotten from memory.\nHowever since the LSTM, there has been great benefit discovered in not bottlenecking all historical information in the state, but instead in keeping past activations around in an external memory and attending to them. The Transformer (Vaswani et al., 2017) is a sequence model which stores the hidden activation of every time-step, and integrates this information using an attention operator (Bahdanau et al., 2014). The Transformer will thus represent the past with a tensor (depth × memory size × dimension) of past observations that is, in practice, an order of magnitude larger than an LSTM’s hidden state. With this granular memory, the Transformer has brought about a step-change in state-of-the-art performance, within machine translation (Vaswani et al., 2017), language modelling (Dai et al., 2019; Shoeybi et al., 2019), video captioning (Zhou et al., 2018), and a multitude of language understanding benchmarks (Devlin et al., 2018; Yang et al., 2019) amongst others.\nOne drawback in storing everything is the computational cost of attending to every time-step and the storage cost of preserving this large memory. Several works have focused on reducing the computational cost of attention with sparse access mechanisms (Rae et al., 2016; Child et al., 2019;\n∗Authors contributed equally, † DeepMind, London, UK. ‡ CoMPLEX, Computer Science, University College London, UK. Please direct correspondence to {jwrae, apotapenko}@google.com.\nSukhbaatar et al., 2019; Lample et al., 2019). However sparse attention does not solve the storage problem, and often requires custom sparse kernels for efficient implementation. Instead we look back to the notion of compactly representing the past. We show this can be built with simple dense linear-algebra components, such as convolutions, and can reduce both the space and compute cost of our models.\nWe propose the Compressive Transformer, a simple extension to the Transformer which maps past hidden activations (memories) to a smaller set of compressed representations (compressed memories). 
The Compressive Transformer uses the same attention mechanism over its set of memories and compressed memories, learning to query both its short-term granular memory and longer-term coarse memory. We observe this improves the modelling of text, achieving state-of-the-art results in character-based language modelling — 0.97 bpc on Enwik8 from the Hutter Prize (Hutter, 2012) — and word-level language modelling — 17.1 perplexity on WikiText-103 (Merity et al., 2016). Specifically, we see the Compressive Transformer improves the modelling of rare words.\nWe show the Compressive Transformer works not only for language, but can also model the waveform of high-frequency speech with a trend of lower likelihood than the TransformerXL and Wavenet (Oord et al., 2016) when trained over 400,000 steps. We also show the Compressive Transformer can be used as a memory component within an RL agent, IMPALA (Espeholt et al., 2018), and can successfully compress and make use of past observations.\nFurthermore we present a new book-level language-modelling benchmark, PG-19, extracted from texts in Project Gutenberg¹, to further promote the direction of long-context sequence modelling. This is over double the size of existing LM benchmarks and contains text with much longer contexts." }, { "heading": "2 RELATED WORK", "text": "There have been a variety of recent attempts to extend the range of attention, particularly in the Transformer, or to replace the attention operation with something less expensive. Wu et al. (2019) show that a convolution-like operator that runs in linear time can actually exceed the performance of the quadratic-time self-attention layer in the Transformer at sentence-to-sentence translation and sentence-level language modelling. However such a mechanism inhibits the flow of information across a large number of time-steps for a given layer, and has not been shown to be beneficial for long-range sequence modelling.\nDai et al. (2019) propose the TransformerXL, which keeps past activations around in memory. They also propose a novel relative positional embedding scheme which they show outperforms the Transformer’s original absolute positional system. Our model incorporates both of these ideas, the use of a memory to preserve prior activations and their relative positional embedding scheme.\nThe Sparse Transformer (Child et al., 2019) uses fixed sparse attention masks to attend to roughly $\sqrt{n}$ locations in memory. This approach still requires keeping all memories around during training; however, with careful re-materialization of activations and custom kernels, the authors are able to train the model with a reasonable budget of memory and compute. When run on Enwik8, the much larger attention window of 8,000 improves model performance, but overall it does not significantly outperform a simpler TransformerXL with a much smaller attention window.\nThe use of dynamic attention spans is explored in Sukhbaatar et al. (2019). Different attention heads can learn to have shorter or longer spans of attention — and they observe this achieves state-of-the-art in character-based language modelling. This idea could easily be combined with our contribution — a compressive memory. However an efficient implementation is not possible on current dense-linear-algebra accelerators, such as Google’s TPUs, due to the need for dynamic and sparse computation. Our approach builds on simple dense linear algebra components, such as convolutions." }, { "heading": "3 MODEL", "text": "We present the Compressive Transformer, a long-range sequence model which compacts past activations into a compressed memory².\n¹ Project Gutenberg: https://www.gutenberg.org/\n² A TF implementation can be found in Sonnet: https://github.com/deepmind/sonnet
}, { "heading": "3 MODEL", "text": "We present the Compressive Transformer, a long-range sequence model which compacts past activations into a compressed memory2. The Compressive Transformer is a variant of the Transformer\n1Project Gutenberg: https://www.gutenberg.org/ 2A TF implementation can be found in Sonnet: https://github.com/deepmind/sonnet\n(Vaswani et al., 2017), a deep residual network which only uses attention to propagate information over time (namely multi-head attention). We build on the ideas of the TransformerXL (Dai et al., 2019) which maintains a memory of past activations at each layer to preserve a longer history of context. The TransformerXL discards past activations when they become sufficiently old (controlled by the size of the memory). The key principle of the Compressive Transformer is to compress these old memories, instead of discarding them, and store them in an additional compressed memory." }, { "heading": "3.1 DESCRIPTION", "text": "We define nm and ncm to be the number of respective memory and compressive memory slots in the model per layer. The overall input sequence S = x1, x2, . . . , x|s| represents input observations (e.g. tokens from a book). These are split into fixed-size windows of size ns for the model to process in parallel. The model observes x = xt, . . . , xt+ns at time t, which we refer to as the sequence (e.g. in Figure 1). As the model moves to the next sequence, its ns hidden activations are pushed into a fixed-sized FIFO memory (like the TransformerXL) of size nm. The oldest ns activations in memory are evicted, but unlike the TransformerXL we do not discard them. Instead we apply a compression operation, fc : Rns×d → Rb ns c c×d, mapping the ns oldest memories to bnsc c compressed memories which we then store in a secondary FIFO compressed memory of size ncm. d denotes the hidden size of activations and c refers to the compression rate, a higher value indicates more coarse-grained compressed memories. The overall temporal range of the model becomes l× (ns + nm + c ∗ ncm), where l is the number of layers — as discussed in Supplementary Section A. The full architecture is described in Algorithm 1.\nAlgorithm 1 Compressive Transformer At time zero\n1: m0 ← 0 // Initialize memory to zeros (l × nm × d) 2: cm0 ← 0 // Initialize compressed memory to zeros (l × ncm × d) At time t 3: h(1) ← xWemb // Embed input sequence(ns × d) 4: for layer i = 1, 2, . . . , l do 5: mem(i) ← concat(cm(i)t ,m (i) t ) // ((ncm + nm)× d)\n6: ã(i) ← multihead attention(i)(h(i),mem(i)t ) // MHA over both mem types (ns × d) 7: a(i) ← layer norm(ã(i) + h(i)) // Regular skip + layernorm (ncm × d) 8: old mem(i) ←m(i)t [: ns] // Oldest memories to be forgotten (ns × d) 9: new cm(i) ← f (i)c (old mem(i)) // Compress oldest memories by factor c (bnsc c × d)\n10: m(i)t+1 ← concat(m (i) t ,h (i))[−nm :] // Update memory (nm × d) 11: cm(i)t ← concat(cm (i) t ,new cm\n(i))[−ncm :] // Update compressed memory (ncm × d) 12: h(i+1) ← layer norm(mlp(i)(a(i)) + a(i)) // Mixing MLP (ns × d)\nAlgorithm 2 Attention-Reconstruction Loss 1: Lattn ← 0 2: for layer i = 1, 2, . . . , l do 3: h(i) ← stop gradient(h(i)) // Stop compression grads from passing... 4: old mem(i) ← stop gradient(old mem(i)) // ...into transformer network. 5: Q,K,V← stop gradient(attention params at layer i) // Re-use attention weight matrices. 6: def attn(h,m)← σ((hQ) (mK))(mV) // Use content-based attention (no relative). 7: new cm(i) ← f (i)c (old mem(i)) // Compression network (to be optimized). 
}, { "heading": "3.2 COMPRESSION FUNCTIONS AND LOSSES", "text": "For choices of compression functions $f_c$ we consider (1) max/mean pooling, where the kernel and stride is set to the compression rate $c$; (2) 1D convolution, also with kernel & stride set to $c$; (3) dilated convolutions; (4) most-used, where the memories are sorted by their average attention (usage) and the most-used are preserved. The pooling is used as a fast and simple baseline. The most-used compression scheme is inspired by the garbage collection mechanism in the Differentiable Neural Computer (Graves et al., 2016), where low-usage memories are erased. The convolutional compression functions contain parameters which require training.\nOne can train the compression network using gradients from the loss; however for very old memories this requires backpropagating-through-time (BPTT) over long unrolls. As such we also consider some local auxiliary compression losses. We consider an auto-encoding loss where we reconstruct the original memories from the compressed memories, $L^{ae} = \|old\_mem^{(i)} - g(new\_cm^{(i)})\|_2$, where $g : \mathbb{R}^{\lfloor n_s / c \rfloor \times d} \to \mathbb{R}^{n_s \times d}$ is learned. This is a lossless compression objective — it attempts to retain all information in memory. We also consider an attention-reconstruction loss, described in Algorithm 2, which reconstructs the content-based attention over memory with content-based attention over the compressed memories. This is a lossy objective, as information that is no longer attended to can be discarded, and we found this worked best. We stop compression loss gradients from passing into the main network, as this prevents learning. Instead the Transformer optimizes the task objective and the compression network optimizes the compression objective conditioned on task-relevant representations; there is no need to mix the losses with a tuning constant." }, { "heading": "4 PG-19 BENCHMARK", "text": "As models begin to incorporate longer-range memories, it is important to train and benchmark them on data containing larger contexts. Natural language in the form of text provides us with a vast repository of data containing long-range dependencies that is easily accessible. We propose a new language modelling benchmark, PG-19, using text from books extracted from Project Gutenberg3. We select Project Gutenberg books which were published over 100 years ago, i.e. before 1919 (hence the name PG-19), to avoid complications with international copyright, and remove short texts. The dataset contains 28,752 books, or 11GB of text — which makes it over double the size of BookCorpus and the Billion Word Benchmark.\n3PG-19 is available at https://github.com/deepmind/pg19" }, { "heading": "4.1 RELATED DATASETS", "text": "The two most benchmarked word-level language modelling datasets either stress the modelling of stand-alone sentences (Billion Word Benchmark from Chelba et al. (2013)) or the modelling of a small selection of short news articles (Penn Treebank processed by Mikolov et al. (2010)). Merity et al. (2016) proposed the WikiText-103 dataset, which contains text from a high quality subset of English-language Wikipedia articles. These articles are on average 3,600 words long. This dataset has been a popular recent LM benchmark due to the potential to exploit longer-range dependencies (Grave et al., 2016; Rae et al., 2018; Bai et al., 2018b).
However, recent Transformer models such as the TransformerXL (Dai et al., 2019) appear to be able to exploit temporal dependencies on the order of several thousand words. This motivates a larger dataset with longer contexts.\nBooks are a natural choice of long-form text, and provide us with stylistically rich and varied natural language. Texts extracted from books have been used for prior NLP benchmarks, such as the Children's Book Test (Hill et al., 2015) and LAMBADA (Paperno et al., 2016). These benchmarks use text from Project Gutenberg, an online repository of books with expired US copyright, and BookCorpus (Zhu et al., 2015), a prior dataset of 11K unpublished (at time of authorship) books. CBT and LAMBADA contain extracts from books, with a specific task of predicting held-out words. In the case of LAMBADA the held-out word is specifically designed to be predictable for humans with access to the full textual context — but difficult to guess with only a local context.\nCBT and LAMBADA are useful for probing the linguistic intelligence of models, but are not ideal for training long-range language models from scratch, as they truncate text extracts to at most a couple of paragraphs and discard a lot of the books' text. There has been prior work on training models on book data using BookCorpus directly (e.g. BERT from Devlin et al. (2018)), however BookCorpus is no longer distributed due to licensing issues, and the source of data is dynamically changing — which makes exact benchmarking difficult over time.\nThe NarrativeQA Book Comprehension Task (Kočiskỳ et al., 2018) uses Project Gutenberg texts paired with Wikipedia articles, which can be used as summaries. Due to the requirement of needing a corresponding summary, NarrativeQA contains a smaller selection of books: 1,527 versus the 28,752 books in PG-19. However it is reasonable that PG-19 may be useful for pre-training book summarisation models." }, { "heading": "4.2 STATISTICS", "text": "A brief comparison of PG-19 to other LM datasets can be found in Table 1. We intentionally do not limit the vocabulary by unk-ing rare words, and release the dataset as an open-vocabulary benchmark. To compare models we propose to continue measuring the word-level perplexity. This can still be computed for any chosen character-based, byte-based or subword-based scheme. To do this, one calculates the total cross-entropy loss $L = -\sum_t \log p(x_t \mid x_{<t})$ over the given validation or test subset using a chosen tokenization scheme, and then one normalizes this value by the number of words, $L / n_{words}$, where $n_{words}$ is the total number of words in the given subset, taken from Table 2. The word-level perplexity is thus $e^{L / n_{words}}$. For the sake of model comparisons, it is important to use the exact number of words computed in Table 2 as the normalisation constant.\nAlongside quantitative analyses, we build an LDA topic model (Blei et al., 2003) for a qualitative inspection of the text. We present key words for several topics in the Supplementary Table 10. These topics include art, education, naval exploration, geographical description, war, ancient civilisations, and more poetic topics concerning the human condition — love, society, religion, virtue etc. This contrasts with the more objective domains of Wikipedia and news corpora."
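As an illustration of the normalisation just described, here is a minimal sketch of the word-level perplexity computation. The function and argument names are placeholders; the per-token log-probabilities can come from any character, byte, or subword model.

import math

def word_level_perplexity(token_log_probs, n_words):
    """Word-level perplexity from per-token log-probs of any tokenisation.
    token_log_probs: log p(x_t | x_<t) for every token of the eval subset.
    n_words: number of words in that subset (the constant from Table 2)."""
    L = -sum(token_log_probs)        # total cross-entropy, in nats
    return math.exp(L / n_words)     # e^{L / n_words}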
}, { "heading": "5 EXPERIMENTS", "text": "We optimised all models with Adam (Kingma and Ba, 2014). We used a learning rate schedule with a linear warmup from 1e-6 to 3e-4 and a cosine decay back down to 1e-6. For character-based LM we used 4,000 warmup steps with 100,000 decay steps, and for word-based LM we used 16,000 warmup steps with 500,000 decay steps. We found that decreasing the optimisation update frequency helped (see Section 5.5.1), namely we only applied parameter updates every 4 steps after 60,000 iterations. However we found the models would optimise well for a range of warmup/warm-down values. We clipped the gradients to have a norm of at most 0.1, which was crucial to successful optimisation." }, { "heading": "5.1 PG-19", "text": "We benchmark the Compressive Transformer against the TransformerXL on the newly proposed PG-19 books dataset. Because it is open-vocabulary, we train a subword vocabulary of size 32000 with SubwordTextEncoder from the tfds package in TensorFlow and use the dataset statistics to compute word-level perplexity, as described in Section 4.2. We train a 36 layer Compressive Transformer with a window size of 512, both memory and compressed memory size of 512, and compression rate c = 2. We compare this to a 36 layer TransformerXL trained with window size 512 and attention window 1024. The model was trained on 256 TPUv3 cores with a total batch size of 512 and converged after processing around 100 billion subword tokens. We display the results in Table 3, where we see the Compressive Transformer obtains a test perplexity of 33.6 versus the TransformerXL's 36.3. Despite the dataset size, it is clearly a challenging domain. This can serve as a first baseline on the proposed long-range language modelling benchmark. We show samples from this model in Supplementary Section F. The model is able to generate long-form narrative of varying styles: from character dialogue, first person diary entries, to descriptive third-person text.\nTable 3: Eval. perplexities on PG-19.\n                          Valid.   Test\n36L TransformerXL          45.5    36.3\n36L Compressive Transf.    43.4    33.6" }, { "heading": "5.2 ENWIK8", "text": "We compare the TransformerXL and the Compressive Transformer on the standard character-level language modelling benchmark Enwik8 taken from the Hutter Prize (Hutter, 2012), which contains 100M bytes of unprocessed Wikipedia text. We select the first 90MB for training, 5MB for validation, and the latter 5MB for testing — as per convention. We train 24-layer models with a sequence window size of 768. During training, we set the TransformerXL's memory size to 2304, and for the Compressive Transformer we use memory of size 768 and compressed memory of size 1152 with compression rate c = 3. During evaluation, we increased the TransformerXL memory size to 4096 and the compressed memory in our model to 3072 (after sweeping over the validation set), obtaining the numbers reported in Table 4. We show the effect of scaling the compressed memory size on evaluation performance in Supplementary Section C. The proposed model achieves the new state-of-the-art on this dataset with 0.97 bits-per-character.\nWe compare compression functions and the use of auxiliary losses in Table 5. We sweep over compression rates of 2, 3, and 4 and report results with the best performing value for each row. BPTT signifies that no auxiliary compression loss was used to train the network other than the overall training loss. To feed gradients into the compression function we unrolled the model over double the sequence length and halved the batch size to fit the larger unroll into memory.\nTable 5: Compression approaches on Enwik8.\nCompression fn    Compression loss    BPC\nConv              BPTT                0.996\nMax Pooling       N/A                 0.986\nConv              Auto-encoding       0.984\nMean Pooling      N/A                 0.982\nMost-used         N/A                 0.980\nDilated conv      Attention           0.977\nConv              Attention           0.973"
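As a quick sanity check on these configurations, the maximum temporal range formula from Supplementary Section A can be evaluated directly; a back-of-envelope sketch using the Enwik8 training settings above:

def temporal_range(l, n_s, n_m, c, n_cm):
    # Maximum temporal range l * (n_s + n_m + c * n_cm), Supplementary Section A.
    return l * (n_s + n_m + c * n_cm)

# 24-layer Enwik8 model: window 768, memory 768, compressed memory 1152, c = 3.
print(temporal_range(l=24, n_s=768, n_m=768, c=3, n_cm=1152))  # 24 * 4992 = 119808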
}, { "heading": "5.3 WIKITEXT-103", "text": "We train an eighteen-layer Compressive Transformer on the closed-vocabulary word-level language modelling benchmark WikiText-103, which contains articles from Wikipedia. We train the model with a compressed memory size, memory size, and a sequence window size all equal to 512. We trained the model over 64 Tensor Processing Units (TPU) v3 with a batch size of 2 per core — making for a total batch size of 128. The model converged in a little over 12 hours. We found the single-layer convolution worked best, with a compression rate of c = 4. This model obtained 17.6 perplexity on the test set. By tuning the memory size over the validation set — setting the memory size to 500, and compressed memory size to 1,500 — we obtain 17.1 perplexity. This is 1.2 perplexity points better than the prior state of the art, and means the model places a ≈ 5% higher probability on the correct word than the prior SotA TransformerXL.\nIt is worth noting that in Table 6 we do not list methods that use additional training data, or that make use of test-time labels to continue training the model on the test set (known as dynamic evaluation (Graves, 2013)). If we incorporate a very naive dynamic evaluation approach of loading a model checkpoint and continuing training over one epoch of the test set, then we obtain a test perplexity of 16.1. This is slightly better than the published 16.4 from Krause et al. (2019) — which uses a more sophisticated dynamic evaluation approach on top of the TransformerXL. However in most settings one does not have access to test-time labels — and thus we do not focus on this setting. Furthermore, there has been great progress in showing that more data equates to much better language modelling; Shoeybi et al. (2019) find that a large 8B-parameter Transformer trained on 170GB of text obtains 10.7 word-level perplexity on WikiText-103. However it is not clear to what extent the WikiText-103 test set may be leaked inside these larger training corpora. For clarity of model comparisons, we compare to published results trained on the WikiText-103 training set.\nWe break perplexity down by word frequency in Table 7 and see the Compressive Transformer makes only a small modelling improvement for frequent words (2.6% over the TransformerXL baseline) but obtains a much larger improvement of ≈ 20% for infrequent words. Furthermore, we see a 10X improvement in modelling rare words over the prior state-of-the-art LSTM language model published in 2018 — which demonstrates the rate of progress in this area." }, { "heading": "5.4 COMPRESSIBILITY OF LAYERS", "text": "We can use compression to better understand the model's mode of operation. We inspect how compressible the Transformer's activations are as they progress through higher layers in the network. One may expect representations to become more difficult to compress at higher layers, if more semantic information is represented there. We monitor the compression loss at each layer of our best-performing Compressive Transformer models trained on Enwik8 and WikiText-103 and display these in Supplementary Section B, Figure 6.
We note that the compression loss is about one order of magnitude higher for word-level language modelling (WikiText-103) than for character-level language modelling (Enwik8). Furthermore, the first layer of the Transformer is highly compressible. However there is not a clear trend of compression cost increasing with layer depth." }, { "heading": "5.5 ATTENTION", "text": "We inspect where the network is attending to on average, to determine whether it is using its compressed memory. We average the attention weight over a sample of 20,000 sequences from a trained model on Enwik8. We aggregate the attention into eighteen buckets, six for each of the compressed memory, memory, and sequence respectively. We set the size of the sequence, memory and compressed memory all to be 768. We plot this average attention weight per bucket in Figure 2 with a 1σ standard error. We see most of the attention is placed on the current sequence; with a greater weight placed on earlier elements of the sequence due to the causal self-attention mechanism which masks future attention weights. We also observe there is an increase in attention from the oldest activations stored in the regular memory, to the activations stored in the compressed memory. This goes against the trend of older memories being accessed less frequently — and gives evidence that the network is learning to preserve salient information.\n[Figure 2: Attention weight on Enwik8. Average attention weight from the sequence over the compressed memory (oldest), memory, and sequence (newest) respectively. The sequence self-attention is causally masked, so more attention is placed on earlier elements in the sequence. There is an increase in attention at the transition from memory to compressed memory.]" }, { "heading": "5.5.1 OPTIMISATION SCHEDULE", "text": "We make an observation about an interesting but undesirable meta-learning phenomenon during long-context training. When the learning rate is tuned to be much smaller (or set to zero) during training, performance degrades drastically both for the TransformerXL and the Compressive Transformer. This is displayed in Figure 3.\n[Figure 3: Learning rate analysis. Reducing the learning rate (e.g. to zero) during training (on Enwik8) harms training performance. Reducing the frequency of optimisation updates (effectively increasing the batch size) is preferable.]\nUsually we consider distributional shift from the training data to the test data, but we can also observe a shift in the model when transferring from a training to an evaluation mode (even when the model is evaluated on the training data). In this case, this is due to the online updating of parameters whilst processing long contiguous articles. We would like the model to generalise well to scenarios where it is not continuously optimised. Updating the parameters only at article boundaries (and then resetting the state) could be one solution for long-range memory models, but this would slow down learning significantly.\nInstead, we propose reducing the frequency of optimisation updates during training.
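A minimal sketch of this reduced-frequency optimisation as plain gradient accumulation in PyTorch; the 60,000-iteration switch and period of 4 follow the settings quoted in Section 5, while the model and data are stand-ins.

import torch
import torch.nn as nn

model = nn.Linear(10, 1)                       # stand-in for the transformer
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
switch_step, period = 60_000, 4                # settings quoted in Section 5

optimizer.zero_grad()
for step in range(100_000):
    x = torch.randn(32, 10)                    # stand-in batch
    loss = model(x).pow(2).mean()
    loss.backward()                            # gradients accumulate across steps
    effective_period = period if step >= switch_step else 1
    if (step + 1) % effective_period == 0:
        optimizer.step()                       # fewer updates late in training
        optimizer.zero_grad()

Accumulating over "period" steps effectively multiplies the batch size by that factor; one may also divide the loss by the period to keep the gradient scale comparable.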
We find this allows for the best of both worlds — fast initial learning with frequent updates, and better generalisation near the end of training with less frequent updates (e.g. every 4 steps). Reducing the optimisation frequency increases the effective batch size, which has also been shown to be preferable to learning rate decay in image modelling (Smith et al., 2018). We observed a final performance improvement in our TransformerXL baseline on Enwik8, from 0.995 — which approximately replicates the published result — to 0.984 — which matches the most recent SotA architecture. We note, the additional space and compute cost of accumulating gradients is negligible across iterations, so there was no performance regression in using this scheme." }, { "heading": "5.6 SPEECH", "text": "We train the Compressive Transformer on the waveform of speech to assess its performance on different modalities. Speech is interesting because it is sampled at an incredibly high frequency, but we know it contains a lot of information on the level of phonemes and entire phrases.\nTo encourage long-term reasoning, we refrain from conditioning the model on speaker identity or text features, but focus on unconditional speech modelling. We train the model on 24.6 hours of 24kHz North American speech data. We chunk the sequences into windows of size 3840, roughly 80ms of audio, and compare a 20-layer Compressive Transformer to a 20-layer TransformerXL and a 30-layer WaveNet model (Oord et al., 2016) — a state-of-the-art audio generative model used to serve production speech synthesis applications at Google (Oord et al., 2018). All networks have approximately 40M parameters, as WaveNet is more parameter-efficient per layer. We train each network with 32 V100 GPUs, and a batch size of 1 per core (total batch size of 32) using synchronous training.\nWaveNet processes an entire chunk in parallel, however the TransformerXL and Compressive Transformer are trained with a window size of 768 and a total memory size of 1,568 (for the Compressive Transformer we use 768 memory + 768 compressed). We thus unroll the model over the sequence. Despite this sequential unroll, the attention-based models train at only half the speed of WaveNet. We see the test-set negative log-likelihood in Figure 4, and observe that a Compressive Transformer with a compression rate of 4 is able to outperform the TransformerXL and maintain a slim advantage over WaveNet. However we only trained models for at most one week (with 32 GPUs) and it would be advantageous to continue training until full convergence — before definitive conclusions are made." }, { "heading": "5.7 REINFORCEMENT LEARNING", "text": "Compression is a good fit for video input sequences because subsequent frames have high mutual information. Here we do not test the Compressive Transformer on video, but progress straight to a reinforcement learning agent task that receives a video stream of visual observations — but must ultimately learn to use its memory to reason over a policy.\nWe test the Compressive Transformer as a drop-in replacement for an LSTM in the IMPALA setup (Espeholt et al., 2018). Otherwise, we use the same training framework and agent architecture as described in the original work with a fixed learning rate of 1.5e-5 and entropy cost coefficient of 2e-3. We test the Compressive Transformer on a challenging memory task within the DMLab-30 (Beattie et al., 2016) domain, rooms_select_nonmatching_object.
This requires the agent to explore a room in a visually rich 3D environment and remember the object present. The agent can then advance to a second room where it must select the object not present in the original room. This necessitates that the agent both remember events far in the past, and also learn to efficiently reason about them.\nWe fix both the memory and compressed memory sizes to 64. In Figure 5, we present results for a range of compression rates, averaged over 3 seeds. We see that the best performing agents endowed with the Compressive Transformer are able to solve the task to human-level. We note that the model with compression rate 1 is unable to learn the task to the same proficiency. The speed of learning and stability seem to increase proportionally with higher rates of compression (up to a limit) — i.e. the effective memory window of the agent — and we find compression rate 4 to once again be the best performing. We see this as a promising sign that the architecture is able to efficiently learn, and suitably use, compressed representations of its visual input, and hope to test this more widely in future work." }, { "heading": "6 CONCLUSION", "text": "In this paper we explore the notion of compression as a means of extending the temporal receptive field of Transformer-based sequence models. We see a benefit to this approach in the domain of text, with the Compressive Transformer outperforming existing architectures at long-range language modelling. To continue innovation in this area, we also propose a new book-level LM benchmark, PG-19. This may be used to compare long-range language models, or to pre-train on other long-range reasoning language tasks, such as NarrativeQA (Kočiskỳ et al., 2018).\nWe see the idea of compressive memories is applicable not only to the modality of text, but also audio, in the form of modelling the waveform of speech, and vision, within a reinforcement-learning agent trained on a maze-like memory task. In both cases, we compare to very strong baselines (WaveNet (Oord et al., 2016) and IMPALA (Espeholt et al., 2018)).\nThe main limitation of this work is additional complexity: if the task one wishes to solve does not contain long-range reasoning, then the Compressive Transformer is unlikely to provide additional benefit. However, as a means of scaling memory and attention, we do think compression is a simpler approach than dynamic or sparse attention — which often requires custom kernels to make efficient. One can build effective compression modules from simple neural network components, such as convolutions. The compression components are immediately efficient to run on GPUs and TPUs.\nMemory systems for neural networks began as compressed state representations within RNNs. The recent wave of progress using attention-based models with deep and granular memories shows us that it is beneficial to refrain from immediately compressing the past. However we hypothesise that more powerful models will contain a mixture of granular recent memories and coarser compressed memories. Future directions could include the investigation of adaptive compression rates by layer, the use of long-range shallow memory layers together with deep short-range memory, and even the use of RNNs as compressors. Compressive memories should not be forgotten about just yet." }, { "heading": "ACKNOWLEDGEMENTS", "text": "We thank Chris Dyer, Felix Gimeno, and Koray Kavukcuoglu for reviewing the manuscript.
We thank Peter Dayan, Adam Santoro, Jacob Menick, Emilio Parisotto, Hyunjik Kim, Simon Osindero, Sergey Bartunov, David Raposo, and Daan Wierstra for ideas regarding model design. We thank Yazhe Li and Aaron Van de Oord for their help and advice in instrumenting speech modelling experiments. Finally, we thank our wider DeepMind colleagues for supporting this project with stimulating discussions, engineering infrastructure, and positive reinforcement signals." }, { "heading": "A TEMPORAL RANGE OF THE COMPRESSIVE TRANSFORMER", "text": "The TransformerXL with a memory of size $n$ has a maximum temporal range of $l \times n$ with an attention cost of $O(n_s^2 + n_s n)$ (see Dai et al. (2019) for a detailed discussion). The Compressive Transformer now has a maximum temporal range of $l \times (n_s + n_m + c \cdot n_{cm})$ with an attention cost of $O(n_s^2 + n_s (n_m + n_{cm}))$. For example, setting $n_{cm} = n_m = n/2$ and $c = 3$ we obtain a maximum temporal range that is two times greater than the TransformerXL with an identical attention cost. Thus if we can learn in the $c > 1$ compressed setting, the temporal range of the model can be significantly increased." }, { "heading": "B COMPRESSION ACROSS LAYERS", "text": "We inspect the compression loss broken down by the layer index, to investigate whether there is a trend in network depth with how compressible the representations are. The compression loss here refers to the attention-reconstruction loss. We plot this for a trained 24-layer model on Enwik8, and an 18-layer model trained on WikiText-103. The compression loss for character-based language modelling is about one order of magnitude lower than that of word-level language modelling. The first layer's representations are highly compressible, however from then on there is no fixed trend. Some non-contiguous layers have a very similar compression loss (e.g. 4 & 6, 5 & 7), which suggests information is being routed from these layer pairs via the skip connection." }, { "heading": "C COMPARISON OF COMPRESSED MEMORY SIZES", "text": "We compare the best test perplexity obtained for the Compressive Transformer trained on WikiText-103 and Enwik8 across a range of compressed memory sizes. For both models, the best model used a 1D convolution compression network with a compression rate of 3. The Enwik8 model was trained with an embedding size of 1024, 8 attention heads, 24 layers, an MLP hidden size of 3072, a sequence window size of 768, and a memory size of 768. We see the best compressed memory size is 3,072 in this sweep, facilitating a total attention window of 3840. The WikiText-103 model was trained with an embedding size of 1024, adaptive inputs using the same parameters as Sukhbaatar et al. (2019), 16 attention heads, 18 layers, an MLP hidden size of 4096, a sequence window of size 512 and a memory of size 512. The best compressed memory size is 1536, resulting in a total attention window of c. 2048." }, { "heading": "D PG-19 PREPROCESSING", "text": "The raw texts from the Gutenberg project were minimally pre-processed by removing boilerplate license text. We then also replaced discriminatory words with a unique 〈DWx〉 token using the Ofcom list of discriminatory words4.\n4https://www.ofcom.org.uk/__data/assets/pdf_file/0023/91625/OfcomQRG-AOC.pdf" }, { "heading": "E PG-19 TOPICS", "text": "We present top words for some of the topics on the PG-19 corpus. These were generated with an LDA topic model (Blei et al., 2003)." }, { "heading": "F PG-19 SAMPLES", "text": "We show a few different samples from the Compressive Transformer trained on PG-19. We use Nucleus Sampling with p = 0.98 (Holtzman et al., 2019).
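For reference, a small standalone sketch of nucleus (top-p) sampling as used to draw these samples. This is not the authors' implementation; it assumes a NumPy probability vector over the vocabulary.

import numpy as np

def nucleus_sample(probs, p=0.98, rng=np.random.default_rng()):
    """Sample from the smallest set of tokens whose total probability >= p."""
    order = np.argsort(probs)[::-1]              # tokens by descending probability
    sorted_p = probs[order]
    cutoff = np.searchsorted(np.cumsum(sorted_p), p) + 1
    nucleus = order[:cutoff]                     # the 'nucleus' of the distribution
    renorm = probs[nucleus] / probs[nucleus].sum()
    return rng.choice(nucleus, p=renorm)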
We choose extracts of books from the test set as prefixes. We see the model is able to continue in the style of the text, creating artificial dialogue or descriptive text, and remembering the names of characters over hundreds of words.\nAs the Compressive Transformer is trained without state resetting, it is actually slightly out of sample when provided with the (relatively) short contexts. This is because its memory and compressed memory may be still empty (whereas they are always full during training). However we see a trend of the samples usually improving towards the end.\nContext from Baby Mine by Margaret Mayo:\nCHAPTER I\nEven in college Alfred Hardy was a young man of fixed ideas and high ideals and proud of it.\nHis friend, Jimmy Jinks, had few ideas and no ideals, and was glad of it, and before half of their first college term had passed, Jimmy had ridded himself of all such worries as making up his own mind or directing his own morals. Alfred did all these things so much better, argued Jimmy, furthermore, Alfred LIKED to do them–Jimmy owed it to his friend to give him that pleasure.\nCompressive Transformer:\nHe did not like doing them in the most elegant of all possible marble walks–where freshmen and sophomores stand and talk and laugh and chatter till evening falls and look cool and still and cool and beautifully white in spite of the hot sun, but had to go there for a month before going home. It happened one day that he and Jimmy were strolling together in the green shade of St. Martin’s, and Jimmy naturally asked why he was so silent. He was very talkative as most boys are who have once got the habit of it on shore, so the need for answering was soon supplied.\n“I am wondering,” he said, “why my old friend and chum, Jimmy Jinks, is living a single life and so has no boys. Have you not noticed that many of his old friends had their own boys to follow in their footsteps and help, and often helped successfully too?”\n“Oh, yes,” said Alfred, “it was plain enough then that he liked to live by himself and got none of his present pleasures or entertainments because of the other fellow’s giving it to him.”\n“Then it IS sad,” said Jimmy, “that an old friend like the friend of Big Lights, Jimmy Reynolds, should be living the single life and having no boys.”\n“And it is pretty true, too,” said Alfred, “yet I believe it is the best thing that could happen even now.”\n“And so do I,” said Jimmy, “and I also believe that it is time someone did something to make the things worse for the other’s partner.”\n“And so do I,” said Alfred, “and so do I, and so do I.”\n“What can we do?” asked Jimmy.\n“I don’t know,” said Alfred, “but if someone will come down to the valley and go through a regular mill with the old forces at work, doing the work of the two, and doing it better than any other Miller he ever saw or ever hoped to see, I think there will be a good deal of change in the things and things to be cared for for and for driven and counter-driven.”\n“I will come and do that mill,” said Jimmy.\nContext from The Diary of Samuel Pepys:\nJune 1st. Having taken our leaves of Sir W. Batten and my Lady, who are gone this morning to keep their Whitsuntide, Sir W. Pen and I and Mr. Gauden by water to Woolwich, and there went from ship to ship to give order for and take notice of their forwardness to go forth, and then to Deptford and did the like, having dined at Woolwich with Captain Poole at the tavern there.
From Deptford we walked to Redriffe, calling at the half-way house, and there come into a room where there was infinite of new cakes placed that are made against Whitsuntide, and there we were very merry. By water home, and there did businesses of the office. Among others got my Lord’s imprest of L1000 and Mr. Creed’s of L10,000 against this voyage their bills signed. Having wrote letters into the country and read some things I went to bed.\nCompressive Transformer:\nSo I rose and did some business at the office, and then to dinner, where great abundance of good things of the kitchen I only being partaker in the feast. After dinner I went to look at the yard and found it in a very bad condition, so I went home and there finding no delight in seeing things I should make shift to get in my way and so I will leave off.\nSo I took leave of my great friend, and the next day being Sunday, my wife and I and a great number of friends went by water to Deptford, and there took the oaths of allegiance and supremacy and promised such obedience to the Lord as should hereafter turn to the advantage of our State. Being come back we found the shops and warehouses in a pitiable condition, so my wife and I and all the rest of the friends went home and there did the like and by Saturday day I being alone in my office I found myself in a very bad situation, for my Lord, and Mr. Creed, and Pett were all standing ready to receive and receive the oaths of allegiance and right to make use of the ten days’ licence I had allowed for this purpose by virtue of the law that was such a help to the servant. Having looked at the ships, I told my wife what I had seen and that I was greatly troubled at my situation and, I hope that will mend it.\nSo that is the end of a long and patient seeking after those things which in my opinion I deserve and may get if I can only by fair means get them.\nContext from The Patrol of the Sun Dance Trail by Ralph Connor:\nHigh up on the hillside in the midst of a rugged group of jack pines the Union Jack shook out its folds gallantly in the breeze that swept down the Kicking Horse Pass. That gallant flag marked the headquarters of Superintendent Strong, of the North West Mounted Police, whose special duty it was to preserve law and order along the construction line of the Canadian Pacific Railway Company, now pushed west some scores of miles.\nAlong the tote-road, which ran parallel to the steel, a man, dark of skin, slight but wiry, came running, his hard panting, his streaming face, his open mouth proclaiming his exhaustion. At a little trail that led to the left he paused, noted its course toward the flaunting flag, turned into it, then struggled up the rocky hillside till he came to the wooden shack, with a deep porch running round it, and surrounded by a rustic fence which enclosed a garden whose neatness illustrated a characteristic of the British soldier. The runner passed in through the gate and up the little gravel walk and began to ascend the steps.\nCompressive Transformer:\nAs though by mutual consent, all the three members of the Superintendent Force were standing on the veranda watching his progress, a man came strolling slowly down the path, stopping often to gaze curiously about him, then stopping once more when he saw the four standing on the veranda. 
His face was dark, his eyes prominent, and the lines that furrowed his forehead showed marks of many woes, yet his step was alert and he began to mount the steps with steady strides.\nThe man’s attention was diverted by the unexpected appearance of the two policemen; a third man came slipping up, grumbling under his breath, and stopped long enough to look doubtfully around him, then slipped away again. In the meantime, the four on the veranda continued to watch the approach of the new arrival. In spite of himself, the Superintendent Force became conscious of a curious feeling–uneasiness that was almost fear–in the presence of that strange man. He would not allow himself to admit the fact, yet he felt it in his bones. But to the watchers, the plain, everyday features of that stranger and his coming, seemed only just what the Seven White Shee owed him–their weight, their hurry, their blast.\nHad a chariot been made for the good of the three horses, and had the driver been given orders that he should speed them that he might win, they would have been heartening things in the sight of the veteran and the victor. To you they would have been unintelligible to the root of your understanding. When you gaze up in the faces of those four gray horses, you can see clearly through the clouds of dust that rise from their hoofs, and discern plainly where the banker is and where the hobo. Then you will understand why you shall not press the bitter grapes and why you shall not spurn the generous doctrines. You will understand why you shall not praise the lash or the spur, for you will know where the true would be and where the false would be. Then you will understand why you, a man with reason and heart, need not tear your hair over-bitter and why you need not laugh over the blunders of an ignorant man.\nAbout nine o’clock that morning, two buggies, drawn by powerful horses, crossed the Rubicon and turned the railroad from Sandhurst into the Hollow of the Mountains. And though the charioteers stood at their horses’ heads, and their drivers cried at their loudest, there was not a man in the four teams who did not feel that his day was worth all the toil and all the peril that he had undergone. And if there were a man in them who did not know that–who did not feel that the road through the Hollow of the Mountains is made easy by the arrival of travelers and by the coming of government, there was one who did not at that moment care whether his day’s work were worth all the toil and all the danger that he had had to endure or whether it were not worth more than all.\nAUTHOR CONTRIBUTIONS\nModel and Experiment design: JR, TL, AP, SJ Dataset creation: AP, JR, CH Text experiments: JR, AP RL experiments: SJ Speech experiments: JR" } ]
2020
COMPRESSIVE TRANSFORMERS FOR LONG-RANGE SEQUENCE MODELLING
SP:a11a05bf95d8dcd7adb929912430615c73f4b531
[ "This paper performs an empirical evaluation of generalization by TD methods with neural nets as function approximators. To quantify generalization, the paper considers the change in the loss function at similar states to the one where the update rule is being applied (where “similar” is usually defined as nearby in time). It comes to a variety of conclusions including that TD(0) does not induce much generalization, that TD(0) does not induce as much generalization as supervised learning, and that the choice of optimizer and objective changes the behavior according to their generalization criteria in various ways.", "The manuscript is analyzing the \"generalization\" in TD(lambda) methods. It includes supervised learning from trajectories, on-policy imitation learning, and basic RL setting. Moreover, memoization performance has also been measured. Main conclusion is the fact that TD(0) performs very similar to tabular learning failing to transfer inductive biases between states. There are also additional surprising results about optimization." ]
Current Deep Reinforcement Learning (DRL) methods can exhibit both data inefficiency and brittleness, which seem to indicate that they generalize poorly. In this work, we experimentally analyze this issue through the lens of memorization, and show that it can be observed directly during training. More precisely, we find that Deep Neural Networks (DNNs) trained with supervised tasks on trajectories capture temporal structure well, but DNNs trained with TD(0) methods struggle to do so, while using TD(λ) targets leads to better generalization.
[]
[ { "authors": [ "Rishabh Agarwal", "Dale Schuurmans", "Mohammad Norouzi" ], "title": "Striving for simplicity in off-policy deep reinforcement learning", "venue": "arXiv preprint arXiv:1907.04543,", "year": 2019 }, { "authors": [ "Ankesh Anand", "Evan Racah", "Sherjil Ozair", "Yoshua Bengio", "Marc-Alexandre Côté", "R Devon Hjelm" ], "title": "Unsupervised state representation learning in atari", "venue": null, "year": 1906 }, { "authors": [ "Marcin Andrychowicz", "Filip Wolski", "Alex Ray", "Jonas Schneider", "Rachel Fong", "Peter Welinder", "Bob McGrew", "Josh Tobin", "OpenAI Pieter Abbeel", "Wojciech Zaremba" ], "title": "Hindsight experience replay", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Oron Anschel", "Nir Baram", "Nahum Shimkin" ], "title": "Averaged-dqn: Variance reduction and stabilization for deep reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Devansh Arpit", "Stanisław Jastrzębski", "Nicolas Ballas", "David Krueger", "Emmanuel Bengio", "Maxinder S Kanwal", "Tegan Maharaj", "Asja Fischer", "Aaron Courville", "Yoshua Bengio" ], "title": "A closer look at memorization in deep networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Leemon Baird" ], "title": "Residual algorithms: Reinforcement learning with function approximation", "venue": "In Machine Learning Proceedings", "year": 1995 }, { "authors": [ "Marc G Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ "Richard Bellman" ], "title": "A markovian decision process", "venue": "Journal of mathematics and mechanics,", "year": 1957 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2013 }, { "authors": [ "Wesley Chung", "Somjit Nath", "Ajin Joseph", "Martha White" ], "title": "Two-timescale networks for nonlinear value function approximation", "venue": "In 7th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Karl Cobbe", "Oleg Klimov", "Chris Hesse", "Taehoon Kim", "John Schulman" ], "title": "Quantifying generalization in reinforcement learning", "venue": "arXiv preprint arXiv:1812.02341,", "year": 2018 }, { "authors": [ "Damien Ernst", "Pierre Geurts", "Louis Wehenkel" ], "title": "Tree-based batch mode reinforcement learning", "venue": "Journal of Machine Learning Research,", "year": 2005 }, { "authors": [ "Jesse Farebrother", "Marlos C Machado", "Michael Bowling" ], "title": "Generalization and regularization in dqn", "venue": "arXiv preprint arXiv:1810.00123,", "year": 2018 }, { "authors": [ "Justin Fu", "Aviral Kumar", "Matthew Soh", "Sergey Levine" ], "title": "Diagnosing bottlenecks in deep q-learning algorithms", "venue": "arXiv preprint arXiv:1902.10250,", "year": 2019 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "arXiv preprint arXiv:1801.01290,", "year": 2018 }, { "authors": [ "Moritz Hardt", "Ben Recht", "Yoram Singer" ], "title": 
"Train faster, generalize better: Stability of stochastic gradient descent", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Peter Henderson", "Joshua Romoff", "Joelle Pineau" ], "title": "Where did my optimum go?: An empirical analysis of gradient descent optimization in policy gradient methods", "venue": "arXiv preprint arXiv:1810.02525,", "year": 2018 }, { "authors": [ "Matteo Hessel", "Joseph Modayil", "Hado Van Hasselt", "Tom Schaul", "Georg Ostrovski", "Will Dabney", "Dan Horgan", "Bilal Piot", "Mohammad Azar", "David Silver" ], "title": "Rainbow: Combining improvements in deep reinforcement learning", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Geoffrey Hinton", "Nitish Srivastava", "Kevin Swersky" ], "title": "Neural networks for machine learning lecture 6a overview of mini-batch gradient descent", "venue": null, "year": 2012 }, { "authors": [ "Andrew Ilyas", "Logan Engstrom", "Shibani Santurkar", "Dimitris Tsipras", "Firdaus Janoos", "Larry Rudolph", "Aleksander Madry" ], "title": "Are deep policy gradient algorithms truly policy gradient algorithms", "venue": "arXiv preprint arXiv:1811.02553,", "year": 2018 }, { "authors": [ "Max Jaderberg", "Volodymyr Mnih", "Wojciech Marian Czarnecki", "Tom Schaul", "Joel Z Leibo", "David Silver", "Koray Kavukcuoglu" ], "title": "Reinforcement learning with unsupervised auxiliary tasks", "venue": "arXiv preprint arXiv:1611.05397,", "year": 2016 }, { "authors": [ "Diederik Kingma", "Jimmy Ba" ], "title": "Adam: a method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Yixuan Li", "Jason Yosinski", "Jeff Clune", "Hod Lipson", "John E Hopcroft" ], "title": "Convergent learning: Do different neural networks learn the same representations? 
In Advances in neural information processing systems", "venue": null, "year": 2015 }, { "authors": [ "Marlos C Machado", "Marc G Bellemare", "Erik Talvitie", "Joel Veness", "Matthew Hausknecht", "Michael Bowling" ], "title": "Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2018 }, { "authors": [ "Piotr Mirowski", "Razvan Pascanu", "Fabio Viola", "Hubert Soyer", "Andrew J Ballard", "Andrea Banino", "Misha Denil", "Ross Goroshin", "Laurent Sifre", "Koray Kavukcuoglu" ], "title": "Learning to navigate in complex environments", "venue": "arXiv preprint arXiv:1611.03673,", "year": 2016 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Alex Graves", "Ioannis Antonoglou", "Daan Wierstra", "Martin Riedmiller" ], "title": "Playing atari with deep reinforcement learning", "venue": "arXiv preprint arXiv:1312.5602,", "year": 2013 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Guido F Montufar", "Razvan Pascanu", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "On the number of linear regions of deep neural networks", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Ari S Morcos", "David GT Barrett", "Neil C Rabinowitz", "Matthew Botvinick" ], "title": "On the importance of single directions for generalization", "venue": "arXiv preprint arXiv:1803.06959,", "year": 2018 }, { "authors": [ "Rémi Munos", "Tom Stepleton", "Anna Harutyunyan", "Marc Bellemare" ], "title": "Safe and efficient off-policy reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Junhyuk Oh", "Satinder Singh", "Honglak Lee", "Pushmeet Kohli" ], "title": "Zero-shot task generalization with multi-task deep reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Samet Oymak", "Zalan Fabian", "Mingchen Li", "Mahdi Soltanolkotabi" ], "title": "Generalization guarantees for neural networks via harnessing the low-rank structure of the jacobian", "venue": "arXiv preprint arXiv:1906.05392,", "year": 2019 }, { "authors": [ "Charles Packer", "Katelyn Gao", "Jernej Kos", "Philipp Krähenbühl", "Vladlen Koltun", "Dawn Song" ], "title": "Assessing generalization in deep reinforcement learning", "venue": "arXiv preprint arXiv:1810.12282,", "year": 2018 }, { "authors": [ "Hugo Penedones", "Damien Vincent", "Hartmut Maennel", "Sylvain Gelly", "Timothy Mann", "Andre Barreto" ], "title": "Temporal difference learning with neural networks-study of the leakage propagation problem", "venue": "arXiv preprint arXiv:1807.03064,", "year": 2018 }, { "authors": [ "Joelle Pineau" ], "title": "The machine learning reproducibility checklist", "venue": null, "year": 2019 }, { "authors": [ "Maithra Raghu", "Justin Gilmer", "Jason Yosinski", "Jascha Sohl-Dickstein" ], "title": "Svcca: Singular vector canonical correlation analysis for deep learning dynamics and interpretability", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Martin Riedmiller" ], "title": "Neural fitted q iteration–first experiences with a
data efficient neural reinforcement learning method", "venue": "In European Conference on Machine Learning,", "year": 2005 }, { "authors": [ "Gavin A Rummery", "Mahesan Niranjan" ], "title": "On-line Q-learning using connectionist systems, volume 37", "venue": null, "year": 1994 }, { "authors": [ "Tom Schaul", "John Quan", "Ioannis Antonoglou", "David Silver" ], "title": "Prioritized experience replay", "venue": "arXiv preprint arXiv:1511.05952,", "year": 2015 }, { "authors": [ "John Schulman", "Philipp Moritz", "Sergey Levine", "Michael Jordan", "Pieter Abbeel" ], "title": "High-dimensional continuous control using generalized advantage estimation", "venue": "arXiv preprint arXiv:1506.02438,", "year": 2015 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Richard S Sutton" ], "title": "Learning to predict by the methods of temporal differences", "venue": "Machine learning,", "year": 1988 }, { "authors": [ "Richard S Sutton" ], "title": "On the virtues of linear learning and trajectory distributions", "venue": "In Proceedings of the Workshop on Value Function Approximation,", "year": 1995 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction", "venue": "MIT press,", "year": 2018 }, { "authors": [ "Gerald Tesauro" ], "title": "Temporal difference learning and td-gammon", "venue": null, "year": 1995 }, { "authors": [ "Pierre Thodoroff", "Audrey Durand", "Joelle Pineau", "Doina Precup" ], "title": "Temporal regularization for markov decision process", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "John N Tsitsiklis", "Benjamin Van Roy" ], "title": "Analysis of temporal-difference learning with function approximation", "venue": "In Advances in neural information processing systems,", "year": 1997 }, { "authors": [ "Oriol Vinyals", "Timo Ewalds", "Sergey Bartunov", "Petko Georgiev", "Alexander Sasha Vezhnevets", "Michelle Yeo", "Alireza Makhzani", "Heinrich Küttler", "John Agapiou", "Julian Schrittwieser" ], "title": "Starcraft ii: A new challenge for reinforcement learning", "venue": "arXiv preprint arXiv:1708.04782,", "year": 2017 }, { "authors": [ "Jason Yosinski", "Jeff Clune", "Yoshua Bengio", "Hod Lipson" ], "title": "How transferable are features in deep neural networks", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "arXiv preprint arXiv:1611.03530,", "year": 2016 }, { "authors": [ "Chiyuan Zhang", "Oriol Vinyals", "Remi Munos", "Samy Bengio" ], "title": "A study on overfitting in deep reinforcement learning", "venue": "arXiv preprint arXiv:1804.06893,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks (DNNs) trained on supervised learning tasks using i.i.d. data have shown the capacity to learn quickly even from a small amount of samples (Hardt et al., 2016). Intuitively, this is due to each sample also providing information about the estimate corresponding to other samples; research suggests that DNNs first extract structures that are informative of the modes of the data (even if later on they can also memorize see Zhang et al. (2016); Arpit et al. (2017)), and that they can transfer well (Yosinski et al., 2014; Li et al., 2015), even from relatively few samples. In contrast, in Deep Reinforcement Learning (DRL), the number of samples required for an agent to learn successfully is often very high; many modern algorithms struggle to perform well until they acquire tens of millions of samples (Mirowski et al., 2016; Vinyals et al., 2017; Hessel et al., 2018), and some even diverge to bad solutions (Anschel et al., 2017). While there are many facets to sample complexity and brittleness, we posit that a contributing factor is a lack of what we call gradient update generalization, i.e., whether performing updates at one state provides useful information about the value/policy at other states.\nGeneralization in RL is of two types: (a) generalization to unseen states–will an agent trained on a single MDP pick the optimal action for a state it has never seen before? (b) generalization to unseen tasks–will an agent trained on a distribution of MDPs know how to act in an MDP it has never seen before? Both of these facets are actively studied. For example, Farebrother et al. (2018) expose some generalization failures on the Atari domain (Bellemare et al., 2013) and study the impact of regularization, Zhang et al. (2018) study the generalization capabilities of DRL agents on randomized mazes, Packer et al. (2018) study the extrapolation capabilities of DRL agents trained on a distribution of environment parameters (e.g. pole mass in CartPole) outside of the training distribution, Cobbe et al. (2018) find that even on procedurally generated environments, DRL agents can easily overfit on their training set unless regularized, Oh et al. (2017) study the embedding regularizations necessary for agents to generalize to new instruction sequences on navigation tasks.\nIn this study, we are not interested in measuring state generalization (i.e. predictions for unseen states), nor task generalization (i.e. in terms of the quality of the behaviour), but rather generalization within the process of stochastic gradient learning. In other words, since any kind of generalization must arise through the accumulation of parameter updates, it seems useful to measure whether these parameter updates are themselves general. To this end, we propose the measure of gradient update generalization, best understood as a side-effect of neural networks sharing parameters over their entire input space. That is, updating parameters after seeing one state will change the prediction for virtually all other states; we are interested in measuring that change.\nTD methods are a broad class of RL algorithms that form a target for an update by utilizing the current estimate of the value function. They include TD(0) and TD(λ) methods for estimating the value of a fixed policy, as well as Sarsa and Q-learning algorithms for control. 
TD methods have\nachieved success in some challenging tasks (Tesauro, 1995; Mnih et al., 2013; Hessel et al., 2018), but they are also known to have problems when coupled with function approximation (Sutton, 1995; Baird, 1995; Tsitsiklis & Van Roy, 1997; Chung et al., 2018). Previous studies explicitly addressed problems such as leakage propagation in TD (Penedones et al., 2018), while others aimed to provide sampling improvements (Schaul et al., 2015; Andrychowicz et al., 2017; Fu et al., 2019), explicit temporal regularization (Thodoroff et al., 2018), or auxiliary tasks which push the agent to learn more about the temporal structure in the data (Jaderberg et al., 2016).\nTo our knowledge, no study to date has focused on the dynamics of the generalization process itself, within TD-based DRL methods1 such as deep Q-Learning (Riedmiller, 2005; Mnih et al., 2013), Sarsa (Rummery & Niranjan, 1994), and TD(λ) (Sutton, 1988; Schulman et al., 2015). For this study, we introduce the aforementioned measure of gradient update generalization, which enables us to differentiate the learning behaviours of different methods. Overall, we find that:\n1. when doing a TD(0) update for a single state, parameters change in such a way that the value prediction of other states is generally not affected, surprisingly even for states that are close either temporally or in an annotated “ground truth” state space; 2. DNNs trained with TD(0), in contrast with DNNs trained on a memorization task or using a supervised objective, do not entirely memorize their state space, yet also do not generalize in the way we would expect; 3. both the choice of optimizer and the nature of the objective impact the generalization behaviours of models; in particular, when increasing the λ parameter in TD(λ), DNNs appear to capture more temporal structure." }, { "heading": "2 TECHNICAL BACKGROUND", "text": "A Markov Decision Process (MDP) (Bellman, 1957; Sutton & Barto, 2018)M = 〈S,A,R, P, γ〉 consists of a state space S, an action space A, a reward function R : S → R and a transition probability distribution P (s′|s, a). RL agents aim to optimize the expectation of the long-term return:\nG(St) = ∞∑ k=t γk−tR(Sk). (1)\nwhere γ ∈ [0, 1) is called the discount factor. Policies π(a|s) map states to action distributions. Value functions V π and Qπ map states/states-action pairs to expected returns, and can be expressed recursively:\nV π(St) = Eπ[G(St)] = Eπ[R(St) + γV (St+1)|At ∼ π(St), St+1 ∼ P (St, At)] (2) Qπ(St, At) = Eπ[R(St) + γ ∑ a π(a|St+1)Q(St+1, a)|St+1 ∼ P (St, At)] (3)\nWhile V π could also be learned via regression to observed values of G, these recursive equations give rise to the Temporal Difference (TD) update rules for policy evaluation, relying on current estimates of V to bootstrap, e.g.:\nV (St)← V (St)− α(V (St)− (R(St) + γV (St+1))), (4) where α ∈ [0, 1) is the step-size. 
Bootstrapping also leads to algorithms such as Q-Learning (Watkins & Dayan, 1992) and fitted-Q (Ernst et al., 2005; Riedmiller, 2005):\n$L_{QL}(S_t, A_t, R_t, S_{t+1}) = [Q_\theta(S_t, A_t) - (R_t + \gamma \max_a Q_\theta(S_{t+1}, a))]^2$, (5)\nSarsa (Rummery & Niranjan, 1994):\n$L_{Sarsa}(S_t, A_t, R_t, S_{t+1}, A_{t+1}) = [Q_\theta(S_t, A_t) - (R_t + \gamma Q_\theta(S_{t+1}, A_{t+1}))]^2$ with $A_{t+1} \sim \pi(S_{t+1})$, (6)\nand TD(λ), which trades off between the unbiased target $G(S_t)$ and the biased TD(0) target (biased due to relying on the estimated $V(S_{t+1})$), using a weighted averaging of future targets called a λ-return (Sutton, 1988; Munos et al., 2016):\n$G^\lambda(S_t) = (1 - \lambda) \sum_{n=1}^{\infty} \lambda^{n-1} \Big( \gamma^n V(S_{t+n}) + \sum_{j=0}^{n-1} \gamma^j R(S_{t+j}) \Big)$ (7)\n$L_{TD(\lambda)}(S_t) = (V_\theta(S_t) - G^\lambda(S_t))^2$ (8)\n(note that the return depends implicitly on the trajectory followed from $S_t$). When $\lambda = 0$, the loss is simply $(V_\theta(S_t) - (R_t + \gamma V_\theta(S_{t+1})))^2$, leading to the algorithm called TD(0) (Sutton, 1988).\n1In contrast, policy-gradient algorithms such as PPO (Schulman et al., 2017), A3C (Mnih et al., 2016), and SAC (Haarnoja et al., 2018) are capable of learning good policies without necessarily having learned a good value function, and although interesting results have emerged to understand learning behaviours in policy-gradient methods (Ilyas et al., 2018), these methods build upon TD and analyzing them would add undesired confounders." }, { "heading": "3 UPDATE GENERALIZATION IN DEEP RL", "text": "We will now define the measure we propose in order to quantify the speed at which generalization to unseen states occurs, and to characterize the structure under which this generalization occurs. We define gradient update generalization as the expected improvement in the loss function $L : \Theta \times \mathcal{X} \to \mathbb{R}$ after updating parameters $\theta \in \Theta$ on sample $X_U \in \mathcal{X}$, using update function $U_L : \Theta \times \mathcal{X} \to \Theta$ (e.g. SGD or a semi-gradient method like TD(0)):\n$Y_L(X_U; \theta, U) = \mathbb{E}_X[L(X; \theta) - L(X; U_L(\theta, X_U))]$. (9)\nIf generalization from the samples in $X_U$ to $X$ is good, this measure of gain should be large, and intuitively fewer other samples should be needed to achieve a desired level of performance. On the other hand, if on average the loss only decreases for the samples $X_U$ used in training, then more data in $\mathcal{X} - X_U$ will have to be visited before the model can learn. Hence, this measure is related to both sample complexity and the speed of learning (see Fig. 15 for empirical confirmation of this phenomenon).\nAs computing the exact expectation is usually intractable, we empirically measure gains on different subsets $X \subset \mathcal{X}$. In particular, when $X$ is chosen to be a slice around $X_U$ in the replay buffer, we write $Y^{near}$. We also subscript $Y$ with the corresponding loss' subscript, e.g. for (5), $L_{QL}$, we write $Y_{QL}$. In this study, we are interested in TD-based methods that rely heavily on bootstrapping (Q-Learning, Sarsa, and TD(λ)), and measure $Y$ using their respective losses, (5), (6), and (8).\nStructure in DNNs A common intuition in deep learning (Zhang et al., 2016; Arpit et al., 2017; Zhang et al., 2018) is that DNNs first learn about the structure of their data, meaning the underlying (usually linear) factors of variation of the data being mapped into the hidden units' space via parameter sharing. These factors of variation are usually conceptualized as a low-dimensional space where each dimension explains part of the data (Bengio et al., 2013). It is commonly assumed that a model which generalizes well will naturally capture these factors in the configuration of its parameters, in which case the gradient of the prediction w.r.t.
all examples sharing the same latent factors of variation will be very close; updating with only one sample will change the prediction for all the related examples. Hence, a DNN which captures structure correctly should show high gradient update generalization.\nTemporal structure in RL Data used in RL algorithms usually exhibits two additional types of structure: coherence of the inputs in a trajectory over time (e.g. pixel values in adjacent frames are often similar), and smoothness of the value function in time (in the sparse-reward case with $\gamma$ close to 1, $V(S_t) \approx \gamma V(S_{t+1})$, which is smooth in time, aside from rare discontinuities upon seeing rewards). Since RL data consists of trajectories which often have strong temporal structure of both types, we hypothesize that the gain $Y^{near}$ of temporally correlated examples should increase closer in time to the sample used in the update.\nParameter sharing Another indirect measure of update generalization related to parameter sharing is the difference since last visit, which we denote as $\Delta$. At each update iteration $k$, we compute the difference between the value $V_{\theta_k}(s)$ or $Q_{\theta_k}(s, a)$ predicted from the current parameters $\theta_k$, and $V_{\theta_{last(s)}}(s)$ or $Q_{\theta_{last(s)}}(s, a)$, i.e. the prediction made the last time state $s$ was used for a gradient update.2 To illustrate, if $V_\theta$ were a lookup table, $\Delta$ would always be 0, while for a DNN, when states are aliased together, $\Delta$ should accurately reflect the effect of parameter sharing after performing sequences of updates (in contrast, (9) uses only a single update).\n2In practice, we simply cache the value prediction for all states in a replay buffer (as states in a continuous state space are unlikely to be encountered many times), and update the cache after a minibatch update (for those states only)." }, { "heading": "3.1 EXPERIMENTAL SETUP", "text": "We will now perform a series of experiments aimed at assessing the amount of generalization of various bootstrapping algorithms, compared to supervised learning, in combination with DNNs.\nFirst, we test whether DNNs have a large gradient update generalization gain when trained under ideal conditions (data generated by expert policies and labelled with correct values, which can be used in supervised learning). Then, we test the policy evaluation case (using the same input data, but bootstrapped targets instead of supervised learning). We then test the usual control case, when no expert trajectories are available. Finally, we measure the effect of TD(λ) on generalization gain in policy evaluation, as well as test Q-Learning's robustness to withheld data.\nWe perform our experiments on the Atari environment (Bellemare et al., 2013), with the stochastic setup recommended by Machado et al. (2018). We use a standard DQN architecture (Mnih et al., 2013). In order to generate expert trajectories, we use rollouts from a policy trained with Rainbow (Hessel et al., 2018); we denote $D^*$ a dataset of transitions obtained with this agent, and $\theta^*$ the parameters after training that agent. For control experiments, we use Mnih et al. (2013)'s Q-Learning setup. When measuring $Y^{near}$ we choose the nearest 60 examples in time to a given state-action pair (30 previous and 30 following on the same trajectory)." }, { "heading": "3.2 ASSESSING TEMPORAL STRUCTURE WITH SUPERVISED LEARNING", "text": "In this experiment, we will assess if temporal structure, as described above, exists and can be captured by our architecture (a sketch of how the gain measure itself can be estimated is given below).
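As a concrete (hypothetical) illustration of the instrumentation, the gain of Eq. (9) can be estimated by snapshotting the parameters, performing a single gradient update on the update sample, and re-evaluating the loss on a probe set. The helper below is our own sketch under that assumption (a generic PyTorch model and a loss_fn(model, batch) interface returning per-sample losses are assumed), not the paper's exact code:

import copy
import torch

def update_gain(model, loss_fn, optimizer, update_batch, probe_batch):
    # Loss on the probe samples before the update.
    with torch.no_grad():
        loss_before = loss_fn(model, probe_batch).mean()
    # Snapshot parameters so the measurement can be undone afterwards.
    snapshot = copy.deepcopy(model.state_dict())
    # One gradient update computed on the update sample(s) only.
    optimizer.zero_grad()
    loss_fn(model, update_batch).mean().backward()
    optimizer.step()
    # Loss on the same probe samples after the update.
    with torch.no_grad():
        loss_after = loss_fn(model, probe_batch).mean()
    # Restore the parameters (note: optimizer state is not restored in this sketch).
    model.load_state_dict(snapshot)
    return (loss_before - loss_after).item()

Choosing probe_batch to be a temporal slice around the update sample in the replay buffer yields the $Y^{near}$ variant.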
To do so, we train DNNs starting from random parameters but with “ideal” targets coming from the expert parameters $\theta^*$ and expert trajectories $D^*$; this removes all non-stationarity from the learning. We train $Q_\theta$ with 3 different objectives:\nMC: $L_{MC}(s, a; \theta) = (Q_\theta(s, a) - G^{(D^*)}(s))^2$ (10)\nReg: $L_{reg}(s, a; \theta) = (Q_\theta(s, a) - Q_{\theta^*}(s, a))^2$ (11)\nTD*: $L_{TD^*}(s, a, r, s'; \theta) = (Q_\theta(s, a) - (r + \gamma \max_{a'} Q_{\theta^*}(s', a')))^2$ (12)\nwhere by $G^{(D^*)}(s)$ we denote the Monte-Carlo return within the dataset $D^*$, as in (1). Note that since $L_{TD^*}$ “bootstraps” to $\theta^*$, this should be roughly equivalent to $L_{reg}$, the latter being plain supervised learning (or some sort of distillation, à la Hinton et al. (2012)).\nResults are visualized in Fig. 1 for experiments run on MsPacman, Asterix, and Seaquest for 10 runs each. Results are averaged over these three environments (they have similar magnitudes and variance). Learning rates are kept constant; they affect the magnitude but not the shape of these curves.\nWe draw two conclusions from these results. First, as seen in Fig. 1a & 1b, all curves tend to have large gains around x = 0 (the sample used in the update), especially from indices -10 to 10, showing that there is some amount of temporal structure captured by both objectives. Since $Q_{\theta^*}$ is a good approximation, we expect that $Q_{\theta^*}(s, a) \approx (r + \gamma \max_{a'} Q_{\theta^*}(s', a'))$, so $L_{reg}$ and $L_{TD^*}$ have similar targets and we expect them to have similar behaviours. Indeed, in Fig. 1 their curves mostly overlap. Second, there is a clear asymmetry between training on expectations (i.e. the learned $Q(s, a)$ or $\max_{a'} Q(s', a')$) and high-variance Monte-Carlo returns (red and blue curves in Fig. 1a). We hypothesize that since the returns $G$ are computed from the same state sequence that is used to measure the gain, $G$ is truly informative of the expected value of future states. Strangely, this does not seem to be the case for past states, which is surprising.3 On the other hand, while $G$ appears more informative of future expected returns, it is not particularly more informative of future sampled returns than past returns, which explains the symmetric nature of the MC gain shown in Fig. 1b.\n3A possible explanation is that, due to the exponential discounting nature of returns ($V(S_t) \approx \gamma^k V(S_{t+k})$ aside from discontinuities when $R \neq 0$), the correlation between the current and future returns simply has a larger magnitude than with past returns. This might push DNNs to prefer to “assign capacity” w.r.t. future returns.\nAnother striking distinction in these curves appears between the Adam (Kingma & Ba, 2015) and RMSProp (Hinton et al., 2012) optimizers.4 When moving far away from s, RMSProp tends to induce a negative gain, while Adam tends to induce a near-zero gain. This is seen in Fig. 1a where RMSProp's TD gain is below 0 for states more than 10 steps away from the sample used in an update. Note that similar differences appear in virtually all following experiments, which we discuss later." }, { "heading": "3.3 POLICY EVALUATION AND TD GAIN", "text": "We have seen that DNNs can capture some temporal structure and have good gradient update generalization when given good quality inputs and targets. We will now remove the expert targets generated using the pretrained $\theta^*$, but we will keep the expert inputs.
This corresponds to policy evaluation on expert trajectories, and we would expect to see slightly worse generalization than in the previous case.\nWe run policy evaluation with 2 objectives, $L_{QL}$ and $L_{Sarsa}$ as defined in (5) and (6) respectively, using a frozen target to bootstrap (Mnih et al., 2013), updated after every 10k minibatches. Experiments are run on 24 Atari environments (see A.1.1) for 10 runs each. Gain results are visualized in Fig. 2, averaged over the 24 environments.\nThe main observation from Fig. 2a is how narrow the peak around 0 is, suggesting that whenever a state's value is updated, the prediction for other states does not change much in expectation, as if the representation were almost tabular, with estimates for encountered states being memorized. The conclusion we draw is that, with a fixed data distribution, DNNs bootstrapping to an evolving target network will not properly capture temporal structure, but will still be able to learn (at least in the sense of correctly approximating the value function).\nAnother worrying observation is that RMSProp consistently has negative expected gain for nearby samples (but a large positive gain, larger than Adam's, on $X_U$, the minibatch sample), suggesting that parameters trained with this optimizer memorize input-output pairs rather than assign capacity to generalize.\n4It has been reported that Adam is less sensitive than RMSProp to hyperparameters in value-based methods (Hessel et al., 2018), although evidence suggests it doesn't help policy gradients (Henderson et al., 2018)." }, { "heading": "3.4 COMPARING MEMORIZATION BEHAVIOUR IN POLICY EVALUATION", "text": "The previous results established that some amount of memorization is done during TD-based policy evaluation. Quantifying memorization is still an open problem, but in this experiment we offer an interesting qualitative inspection to confirm that TD-based methods may lie somewhere between pure memorization (acting like a lookup table) and strong generalization (capturing all latent factors).\nIn Zhang et al. (2016), the authors compare image classifiers trained with true labels to classifiers trained with random labels (in which case the model has to simply memorize the labels), finding that, surprisingly, both can reach 0 training error. While this suggests that DNNs may also memorize when given the true labels, further studies showed many behavioural differences between the two setups, notably that DNNs first captured structure, and only afterwards fit random noise (Arpit et al., 2017).\nTaking inspiration from Zhang et al. (2016), we assign a random class in $[N]$ to every state in $D^*$, change our Q function to be a usual classifier with $N$ outputs, and introduce a new objective, $L_{rand}$, which is simply the cross-entropy between the random class and the prediction. Experiments are run on MsPacman, Breakout, and Seaquest. We use datasets of sizes 10k, 100k, and 500k, and use $N \in \{2, 10, 50\}$. Interestingly, the architecture of Mnih et al. (2013) that is reused here struggles to reach 0 error5 (for example, a model trained with 10k samples with $N = 2$ reaches 5.7% error, while a model trained with 500k and $N = 50$ totally fails at 85% error, see Table ??).\nFig. 3 shows the evolution during training of the distribution of $\Delta(S, A) = Q(S, A; \theta_{current}) - Q(S, A; \theta_{last(S)})$, where $\theta_{last(S)}$ represents the value of the parameters when $S$ was last used in a minibatch update, and $\theta_{current}$ represents the value of the parameters right before using $S$ for the most recent update.
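For reference, this difference-since-last-visit statistic can be tracked with a simple prediction cache, roughly as sketched below (our own illustration; the state keys and the single-state Q-network interface are assumptions, not the paper's implementation):

import torch

class DeltaTracker:
    # Caches Q(s, a) at the moment a state is used in an update, so that
    # Delta = Q_current(s, a) - Q_last_visit(s, a) can be measured later.
    def __init__(self):
        self.cache = {}  # state_key -> Q-value recorded at the last update

    def delta(self, q_net, state, action, state_key):
        # q_net is assumed to map a single state tensor to a vector of action-values.
        with torch.no_grad():
            q_now = q_net(state)[action].item()
        q_last = self.cache.get(state_key, q_now)  # Delta is 0 on a first visit
        return q_now - q_last

    def refresh(self, q_net, state, action, state_key):
        # Call right after `state` has been used in a minibatch update.
        with torch.no_grad():
            self.cache[state_key] = q_net(state)[action].item()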
If the parameters were those of a look-up table, $\Delta$ would always be 0. For losses other than $L_{rand}$ (Q-Learning, Sarsa, and MC) we reuse the results of the previous section (with a dataset size of 500k).\nThe difference between Fig. 3a and Fig. 3b-d is compelling, and somewhat reassuring. In Fig. 3a the log-likelihood for $\Delta = 0$ is above -2 (white), showing that it is very unlikely for the prediction at a state to have changed by more than ±0.01 when it is updated. In contrast, the distribution of $\Delta$ is more spread out in Fig. 3b-d. Combined with the fact that the memorization experiment does not reach 0 error, this allows us to confidently claim that DQN is not fully memorizing its state space. Even though the gain curve in Fig. 2 is very close to 0, except at the update sample (i.e. temporal structure is poorly captured), some structure is captured by DNNs that allows them to learn about a state without having to use it explicitly in an update.\n5This could be due to the particularly shallow architecture of Mnih et al. (2013), as architectures with fewer parameters but more layers are commonly assumed to have more effective capacity. It has indeed been shown that deeper models can distinguish between exponentially more linear regions (Montufar et al., 2014)." }, { "heading": "3.5 TD GAIN IN CONTROL", "text": "Having removed $\theta^*$ in section 3.3, we now additionally remove $D^*$ and simply perform Q-Learning from scratch on MsPacman, Asterix, and Seaquest for 10M steps.\nResults are shown in Fig. 4. Interestingly, while Q-Learning does not have as strong a gain as the regressions from Fig. 1, it has a larger gain than policy evaluation. This may have several causes, and we investigate two:\n• Initially, because of the random exploratory policy, the DNN sees little data, and may be able to capture a minimal set of factors of variation; then, upon seeing new states, the extracted features are forced to be mapped onto those factors of variation, improving them, leading to a natural curriculum. By looking at the singular values of the last hidden layer's matrix after 100k steps, we do find that there is a consistently larger spread in the policy evaluation case than the control case (see appendix A.3), showing that in the control case fewer factors are initially captured. This effect diminishes as training progresses. • Having run for 10M steps, control models could have been trained on more data and thus be forced to generalize better; this turns out not to be the case, as measuring the same quantities for only the first 500k steps yields very similar magnitudes (see appendix A.4).\nInterestingly, these results are consistent with those of Agarwal et al. (2019), who study off-policy learning. Among many other results, Agarwal et al. (2019) find that off-policy-retraining a DQN model on another DQN agent's lifetime set of trajectories yields much worse performance on average.\nWhile the authors suggest the strongly off-policy aspect of this experiment as the cause, our results still show differences between control-Q-Learning and policy-evaluation-Q-Learning, which are both done “on-policy” in our setup, suggesting there are more factors at play than only off-policyness.\nNote that we additionally run experiments with SGD and Momentum-SGD optimizers to highlight the difference between Adam, which has a momentum component, and RMSProp, which only scales per-parameter learning rates.
Predictably, Momentum-SGD's behaviour is similar to Adam's, and SGD's to RMSProp's.\n3.6 TD(λ) AND RELIANCE ON BOOTSTRAPPING\nTD(λ) trades off between the immediate biased estimates of the future values and the true return through its λ parameter. To observe the effect of this parameter we perform policy evaluation on $D^*$ with the $L_{TD(\lambda)}$ objective on MsPacman. Results are shown in Fig. 5, where we can observe that (1) increasing λ increases near gain without overly increasing update-sample gain, and (2) as for $L_{MC}$, there is an asymmetry: updating informs us more about the future than about the past, on average. Results for the distribution of $\Delta$ are shown in Fig. 3(e,f) (and appendix A.6), where we see that the closer λ is to 1, the more the TD(λ) objective creates updates that affect all states.\nThese results seem to indicate that TD(λ) better captures factors of variation. One cause could be that the more one relies on a sequence of DNN predictions (i.e. the sequence of n-step returns of the λ-return depends on the successive $V(S_{t+i})$) to build a target, the more correlation there is between states and targets (due to DNN smoothness), and the more temporal coherence there is (and thus more opportunities for DNNs to capture the temporal dimension's correlations). This is hard to verify empirically, but we can proxy the correlation measure via the similarity between gradients. We do indeed find that the closer λ is to 1, the higher the average cosine similarity between gradients is (see appendix A.5). This suggests that it may be advantageous to use λ-returns in environments where generalization is important." }, { "heading": "3.7 TESTING GENERALIZATION WITH AN INTRA-TASK TEST SET", "text": "Another way to assess whether agents fail to properly generalize in the sense of statistical inference – making predictions about states without visiting them – is to create a test set to measure generalization error. We do so on the MsPacman Atari environment, as it contains many opportunities for generalization in translational invariances (locally, the optimal action only depends on the surrounding configuration of the agent, reward pellets, and ghosts). We train our agent with the usual DQN setup (Mnih et al., 2013) but prevent the insertion of a state into the replay buffer with some probability p. More specifically, we use the RAM (ground truth) state information to exclude observations from training. We run 5 seeds for each $p \in \{0, 0.1, 0.25, 0.5\}$. Results are shown in Fig. 6, where we see that withholding only 10% of states already slightly affects agents. At 50%, performance is significantly reduced. While this is somewhat expected and consistent with the literature (Farebrother et al., 2018), it again attests that TD-based methods can struggle with generalization, as observed also by Packer et al. (2018), who study interpolation and extrapolation failures in deep RL agents." }, { "heading": "3.8 ADDITIONAL OBSERVATIONS", "text": "On other structures Our figures mostly show gradient update generalization gain as a function of “time” (temporal distance within a trajectory), but there might be structure elsewhere. We measured gain as a function of 3 different metrics: ground truth state distance by reusing the Annotated Atari RAM of Anand et al. (2019), value distance (as DNNs may alias states with the same value), and feature distance. Unfortunately, we were unable to find correlations (see appendix A.2).\nOn convergence Figures 1, 2, and 4 show values averaged over the course of training.
We find that except in the first few iterations, these curves remain constant throughout training (see figures in A.4) and show no sign of convergence. This is also consistent with previous studies, as DQN is known to not converge on Atari (Anschel et al., 2017).\nOn variance While $\mathrm{Var}(Y_L)$ tends to be large, we find that the confidence interval of the mean is always small, and would barely appear on most of our plots. Additionally, although generalization gain is typically a fraction of the magnitude of the value function, it is consistently non-zero.\nOn optimizers We find that the systematic differences we see between Adam and RMSProp also occur in behaviour, where control agents trained with RMSProp tend to get slightly more reward. An interpretation of our results is that RMSProp memorizes faster than Adam: it has much larger on-sample gain, it tends to make the singular values of the weight matrices larger, and it has negative near-sample gain, suggesting that capacity is spent memorizing on average. In Atari tasks, memorization can be an efficient strategy (although it is sensitive to noise, see Machado et al. (2018)). Hence, the better performance of RMSProp on Atari is consistent with our claims. This property may not be as desirable in more complex environments requiring generalization." }, { "heading": "4 DISCUSSION", "text": "RL is generally considered a harder problem than supervised learning. Hence, the fact that TD-style methods require more samples than supervised learning when used with deep nets is not necessarily surprising. However, with the same data and the same final targets (the “true” value function), it is not clear why TD updates lead to parameters that generalize worse than supervised learning. This could be a problem, as most RL methods rely on the TD mechanism in one way or another. In particular, our results show that both Q-Learning and Sarsa generalize poorly, leading to DNNs that memorize the training data (not unlike table lookup). Our results also suggest that TD(λ), although not widely used in recent DRL, improves generalization. Finally, we find differences between Adam and RMSProp that we initially did not anticipate. Very little work has been done to understand and improve the coupling between optimizers and TD, and our results indicate that this would be an important future work direction.\nOur work suggests that the RL community should pay special attention to the current research on generalization in DNNs, because approaching the TD bootstrapping mechanism as a supervised learning problem does not seem to leverage the full generalization potential of DNNs." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 EXTRA EXPERIMENTAL DETAILS", "text": "" }, { "heading": "A.1.1 ATARI GAMES", "text": "Some sections are only run with 3 Atari games: MsPacman, Asterix, and Seaquest. We chose these three games as they exhibit features that seem particularly amenable to generalization.\nThe full 24 games we use for policy evaluation tests are Alien, Amidar, Assault, Asterix, BankHeist, Boxing, Breakout, ChopperCommand, CrazyClimber, DemonAttack, Freeway, Frostbite, Gopher, Hero, Jamesbond, Kangaroo, Krull, KungFuMaster, MsPacman, PrivateEye, Qbert, RoadRunner, Seaquest, and UpNDown.\nAll our experiments train parameters with the following architecture, as per Mnih et al.
(2013): 3 convolutional layers with kernels of shape 4×32×8×8, 32×64×4×4, and 64×64×3×3 and with stride 4, 2, and 1 respectively, followed by two fully-connected layers of shape 9216×512 and 512×|A|, A being the legal action set for a given game. All activations are leaky ReLUs (Maas et al.) except for the last layer, which is linear (as it outputs value functions or unnormalized logits for the classification experiment)." }, { "heading": "A.1.2 FIGURE HYPERPARAMETERS", "text": "The experiments of Figures 1 and 2 run for 500k steps, measuring gains every 500 updates. A learning rate of $10^{-4}$ is used, with L2 weight regularization of $10^{-4}$. When bootstrapping to a frozen network, the frozen network is updated every 10k updates. We use $\gamma = 0.99$, a minibatch size of 32, an $\epsilon$ of 5% to generate $D^*$, and a buffer size of 500k. Each setting is run with 10 random seeds. The random seeds affect the generation of $D^*$, the weight initialization, the minibatch sampling, and the choice of actions in $\epsilon$-greedy rollouts.\nThe values of $\Delta$ in Figure 3 are measured at every update. The memorization procedure is also run for 500k steps, which is enough for the model to seemingly converge (albeit to a non-zero error).\nThe Q-Learning control experiments of Figures 4 and 6 are run for 10M steps with the same setup as previously described. Each setting is run with 10 random seeds. For every environment step, one minibatch update is done.\nThe experiments of Figure 5 are run for 500k steps, as previously described. λ-targets are computed with the forward view, using the frozen network to compute the target values – this allows us to cheaply recompute all λ-targets once every 10k steps when we update the frozen network. Each setting is run with 5 random seeds.\nNote that we did experiment with various learning rates, momentum settings, DNN regularizations, and other common tricks – our conclusions remained the same, but for simplicity of presentation we stick to commonly used hyperparameters." }, { "heading": "A.1.3 MINIBATCHES", "text": "All plots are generated from measures taken during training, either during policy evaluation or Q-Learning. While learning rates and minibatch sizes do have an influence on the results, most of the time results remain mostly the same, and for simplicity of presentation only a subset of the experiments we performed are shown. Ideally, gain should be measured for a single example, but we found that results were the same, with lower variance, for larger minibatch sizes. As such we consistently use 32 examples per minibatch, which reflects current practice in RL." }, { "heading": "A.1.4 REPRODUCIBILITY CHECKLIST", "text": "We follow the Machine Learning reproducibility checklist (Pineau, 2019), and refer to corresponding sections in the text when relevant.\nFor all models and algorithms presented, check if you include:\n• A clear description of the mathematical setting, algorithm, and/or model. We use unmodified algorithms, described in the technical background, and only analyse their behaviour. The measures we propose are straightforward to implement and only require minimal changes. • An analysis of the complexity (time, space, sample size) of any algorithm. The measures we propose only add a constant instrumentation overhead. • A link to a downloadable source code, with specification of all dependencies, including external libraries.
All code is included in supplementary materials; dependencies are documented within.\nFor any theoretical claim, check if you include:\n• A statement of the result. We make no theoretical claim. • A clear explanation of any assumptions. idem. • A complete proof of the claim. idem.\nFor all figures and tables that present empirical results, check if you include:\n• A complete description of the data collection process, including sample size. We collect data by running standard implementations of common algorithms with repeated runs. • A link to a downloadable version of the dataset or simulation environment. Included in the code available in supplementary materials. • An explanation of any data that were excluded, description of any pre-processing step. We generally chose hyperparameters that best represent state-of-the-art usage, then if necessary that best represent our findings. In most cases only minor learning rate adjustments were necessary, although they would not significantly change most plots. • An explanation of how samples were allocated for training / validation / testing. As we are only interested in the training process this is not fully applicable. • The range of hyper-parameters considered, method to select the best hyper-parameter configuration, and specification of all hyper-parameters used to generate results. See section A.1. • The exact number of evaluation runs. idem. • A description of how experiments were run. idem. • A clear definition of the specific measure or statistics used to report results. See section 3. • Clearly defined error bars. Figures with error bars compute a bootstrapped 95% confidence interval of the mean. • A description of results with central tendency (e.g. mean) & variation (e.g. stddev). idem. • A description of the computing infrastructure used. Almost all experiments were run on P100 GPUs; otherwise they were run on Intel i7 processors." }, { "heading": "A.2 LOOKING FOR OTHER STRUCTURES IN TD GAIN", "text": "On top of measuring $Y^{near}_{TD}$, i.e. TD gain for nearby examples, we also measure TD gain as a function of ground truth distance, value distance, and feature distance. In all cases we find the expected gain to be close to 0, with only a few interesting patterns in the gain curves.\nGround truth distance The Arcade Learning Environment (Bellemare et al., 2013) provides, in addition to pixel observations, the current memory (RAM) state of the game, which consists of 128 8-bit values. In Anand et al. (2019), the authors annotate the relevant individual bytes, discarding those not representing anything of value. For example, in MsPacman they identify 17 values consisting of the player's position, the ghosts' positions, etc. In Figure 7a, we plot $Y_{TD}$ as a function of the L1 RAM distance (for only the relevant bytes) between $X_U$ and $X$ (i.e. the update samples and the test samples; we use 2048 test samples each time we perform this measure, which is every 1000 minibatch updates, for a total of 1024000 samples).
There seems to be no correlation between $Y_{TD}$ and the distance.\n[Figure 7: TD gain as a function of other metrics. (a) Policy evaluation, TD gain $Y_{TD}$ as a function of annotated ground truth distance (L1 distance in ground truth values), for MC, QL, and Sarsa on MsPacman and Seaquest. (b) Policy evaluation, TD gain as a function of distance in value predictions, $|V(X_U) - V(X)|$. Shaded regions are the standard error to the mean.]\nValue distance DNNs might alias together (blend into a single point in latent space) all states that have the same value. This would mean that changing the value prediction for one of those states will change the value prediction for all other aliased states. In Figure 7b we see that this effect might occur in the [0, 0.02] bin, where TD gain tends to be positive for TD methods. For Monte-Carlo returns in MsPacman, the effect is negative on average in the [0, 0.02] bin, but otherwise positive from [0.02, 0.3]. Otherwise, there seems to be no correlation for TD methods.\nFeature distance We again perform a similar experiment on MsPacman, but using as a distance metric the cosine similarity of activations in the second to last layer. While we do find a minor correlation with distance, its magnitude is very small. In addition, this correlation is easily explained by the geometry of the shallow neural network we use. Since we use leaky ReLUs and measure distance in the second to last layer, when any two states are close in the hidden space, changing the mapping to the output for one state will change the output for the other state with high probability. Thus what we observe here is closer to how much states are entangled in the hidden space rather than a real measure of how update generalization is linked to distance (in true or hidden space). Interestingly, hidden states trained with SGD seem much more entangled: their cosine distance is often 0. This does not seem to be the case with networks trained with Adam and RMSProp." }, { "heading": "A.3 SINGULAR VALUES, CONTROL VS POLICY EVALUATION", "text": "Figure 9 shows the spread of singular values after 100k minibatch updates on MsPacman for the Q-Learning objective and Adam/RMSProp. The difference between the control case and policy evaluation supports our hypothesis that policy evaluation initially captures more factors of variation.\nIt remains unclear if the effect of the control case initially having fewer captured factors of variation leads to a form of feature curriculum.\nFigure 10 shows the spread of singular values after 500k minibatch updates for TD(λ). Interestingly, larger λ values yield larger singular values and a wider distribution. Presumably, TD(λ) having a less biased objective allows the parameters to capture all the factors of variation faster, rather than relying on bootstrapping to gradually learn them.\nNote that current literature suggests that having fewer large singular values is a sign of generalization in classifiers; see in particular Oymak et al. (2019), as well as Morcos et al. (2018) and Raghu et al. (2017).
It is not clear whether this holds for regression, nor in our case for regression to value functions, but interestingly all runs (even for TD(λ)) have a dramatic cutoff in singular values after about the 200th SV, suggesting that there may be on this order of magnitude many underlying factors in MsPacman, and that by changing the objective and the data distribution, a DNN may be able to capture them faster or slower." }, { "heading": "A.4 EVOLUTION OF TD GAIN WITH TRAINING", "text": "Figure 11 shows the evolution of TD gain during training; in relation to previous figures like Figure 1, the x axis is now Fig. 1's x axis – the distance to the update sample in the replay buffer, the y axis is now training time, and the color now encodes Fig. 1's y axis – the magnitude of the TD gain.\nAnother interesting observation with respect to the evolution of $\Delta$ shown in Fig. 3 is that the density is asymmetric: predictions tend to increase during training. This is consistent with the fact that DNNs are initialized to predict 0 in expectation, while value functions for agents that receive mostly positive rewards will tend to be positive. Also note that Fig. 3 differs very little when using Adam over RMSProp.\nA.5 COSINE SIMILARITIES IN TD(λ)\nFigures 12 and 13 show cosine similarities between gradients after 500k updates.\nA.6 DIFFERENCES SINCE LAST VISIT IN TD(λ)\nSee Figure 14, and Figure 3 for reference.\nA.7 IS NEAR TD GAIN INDICATIVE OF SPEED OF LEARNING?\nIn Figure 15 we plot, for an agent trained on MsPacman, the lifetime agent reward (i.e. the AUC of a curve as in Figure 6) as a function of the lifetime average near TD gain $Y^{near}_{TD}$. The line is a linear regression of the points, with a correlation coefficient of r = 0.4337. We vary the capacity of the agent by changing the number of hidden units of every layer, between 0.25× and 4× the original size." } ]
2019
null
SP:f6624fed0b38b3937355e3b4c9e4c1070d60dcc3
[ "This paper is a neural architecture search paper. In particular, it applies this to finding better neural architectures for video understanding, emphasizing exploring the video temporal resolutions needed and how to combine intermediate representations capturing appearance and motion. It introduces a somewhat new algorithm for connection-strength-weighted architecture evolution focused on this high-level information fusion problem of video understanding.", "This paper aims to adapt the standard neural architecture search scheme to search a two-input convolutional neural network for video representations. To this end, the paper formulates a direct acyclic graph with two input nodes (for RGB image and optical flow), where each node represents some pre-composed layers and edge represents the data flow with a trainable weight. The searching policy is a modified evolutionary algorithm, which is guided by the trainable weights on the edge, and a set of graph limitations are in-place to avoid over-complicated graphs. The best-selected model outperforms previous baselines and achieves a new state-of-the-art on two video datasets." ]
Learning to represent videos is a very challenging task both algorithmically and computationally. Standard video CNN architectures have been designed by directly extending architectures devised for image understanding to include the time dimension, using modules such as 3D convolutions, or by using a two-stream design to capture both appearance and motion in videos. We interpret a video CNN as a collection of multi-stream convolutional blocks connected to each other, and propose the approach of automatically finding neural architectures with better connectivity and spatio-temporal interactions for video understanding. This is done by evolving a population of overly-connected architectures guided by connection weight learning. Architectures combining representations that abstract different input types (i.e., RGB and optical flow) at multiple temporal resolutions are searched for, allowing different types or sources of information to interact with each other. Our method, referred to as AssembleNet, outperforms prior approaches on public video datasets, in some cases by a great margin. We obtain 58.6% mAP on Charades and 34.27% accuracy on Moments-in-Time.
[ { "affiliations": [], "name": "VIDEO ARCHITECTURES" }, { "affiliations": [], "name": "Michael S. Ryoo" }, { "affiliations": [], "name": "AJ Piergiovanni" }, { "affiliations": [], "name": "Mingxing Tan" }, { "affiliations": [], "name": "Anelia Angelova" } ]
[ { "authors": [ "Karim Ahmed", "Lorenzo Torresani" ], "title": "Connectivity learning in multi-branch networks", "venue": "In Workshop on Meta-Learning (MetaLearn),", "year": 2017 }, { "authors": [ "Gabriel Bender", "Pieter-Jan Kindermans", "Barret Zoph", "Vijay Vasudevan", "Quoc Le" ], "title": "Understanding and simplifying one-shot architecture search", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Joao Carreira", "Andrew Zisserman" ], "title": "Quo vadis, action recognition? a new model and the kinetics dataset", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "year": 2017 }, { "authors": [ "Liang-Chieh Chen", "George Papandreou", "Iasonas Kokkinos", "Kevin Murphy", "Alan L. Yuille" ], "title": "Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs", "venue": "IEEE TPAMI,", "year": 2018 }, { "authors": [ "Ali Diba", "Mohsen Fayyaz", "Vivek Sharma", "Manohar Paluri", "Jurgen Gall", "Rainer Stiefelhagen", "Luc Van Gool" ], "title": "Holistic large scale video understanding", "venue": null, "year": 1904 }, { "authors": [ "Christoph Feichtenhofer", "Axel Pinz", "Richard Wildes" ], "title": "Spatiotemporal residual networks for video action recognition", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2016 }, { "authors": [ "Christoph Feichtenhofer", "Axel Pinz", "Andrew Zisserman" ], "title": "Convolutional two-stream network fusion for video action recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Christoph Feichtenhofer", "Axel Pinz", "Richard P Wildes" ], "title": "Spatiotemporal multiplier networks for video action recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Christoph Feichtenhofer", "Haoqi Fan", "Jitendra Malik", "Kaiming He" ], "title": "Slowfast networks for video recognition", "venue": "arXiv preprint arXiv:1812.03982,", "year": 2018 }, { "authors": [ "David E. Goldberg", "Kalyanmoy Deb" ], "title": "A comparative analysis of selection schemes used in genetic algorithms", "venue": "In Foundations of Genetic Algorithms,", "year": 1991 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In International Conference on Machine Learning (ICML),", "year": 2015 }, { "authors": [ "Colin Lea", "Michael D. Flynn", "Rene Vidal", "Austin Reiter", "Gregory D. 
Hager" ], "title": "Temporal convolutional networks for action segmentation and detection", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Chenxi Liu", "Barret Zoph", "Maxim Neumann", "Jonathon Shlens", "Wei Hua", "Li-Jia Li", "Li Fei-Fei", "Alan Yuille", "Jonathan Huang", "Kevin Murphy" ], "title": "Progressive neural architecture search", "venue": "In Proceedings of European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "DARTS: Differentiable architecture seach", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Mathew Monfort", "Alex Andonian", "Bolei Zhou", "Kandan Ramakrishnan", "Sarah Adel Bargal", "Tom Yan", "Lisa Brown", "Quanfu Fan", "Dan Gutfruend", "Carl Vondrick" ], "title": "Moments in time dataset: one million videos for event understanding", "venue": "arXiv preprint arXiv:1801.03150,", "year": 2018 }, { "authors": [ "AJ Piergiovanni", "Michael S Ryoo" ], "title": "Representation flow for action recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V. Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": "In Proceedings of AAAI Conference on Artificial Intelligence (AAAI),", "year": 2019 }, { "authors": [ "Gunnar A. Sigurdsson", "Gül Varol", "Xiaolong Wang", "Ali Farhadi", "Ivan Laptev", "Abhinav Gupta" ], "title": "Hollywood in homes: Crowdsourcing data collection for activity understanding", "venue": "In Proceedings of European Conference on Computer Vision (ECCV),", "year": 2016 }, { "authors": [ "Gunnar A Sigurdsson", "Santosh Divvala", "Ali Farhadi", "Abhinav Gupta" ], "title": "Asynchronous temporal fields for action recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "year": 2017 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Two-stream convolutional networks for action recognition in videos", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2014 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Du Tran", "Lubomir D Bourdev", "Rob Fergus", "Lorenzo Torresani", "Manohar Paluri" ], "title": "C3d: generic features for video analysis", "venue": "CoRR, abs/1412.0767,", "year": 2014 }, { "authors": [ "Du Tran", "Heng Wang", "Lorenzo Torresani", "Jamie Ray", "Yann LeCun", "Manohar Paluri" ], "title": "A closer look at spatiotemporal convolutions for action recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Limin Wang", "Yuanjun Xiong", "Zhe Wang", "Yu Qiao", "Dahua Lin", "Xiaoou Tang", "Luc Van Gool" ], "title": "Temporal segment networks: Towards good practices for deep action recognition", "venue": "In Proceedings of European Conference on Computer Vision (ECCV),", "year": 2016 }, { "authors": [ "Xiaolong Wang", "Abhinav Gupta" ], "title": "Videos as space-time region graphs", "venue": "In Proceedings of European 
Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Xiaolong Wang", "Ross Girshick", "Abhinav Gupta", "Kaiming He" ], "title": "Non-local neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Chao-Yuan Wu", "Christoph Feichtenhofer", "Haoqi Fan", "Kaiming He", "Philipp Krähenbühl", "Ross Girshick" ], "title": "Long-term feature banks for detailed video understanding", "venue": "arXiv preprint arXiv:1812.05038,", "year": 2018 }, { "authors": [ "Chao-Yuan Wu", "Manzil Zaheer", "Hexiang Hu", "R Manmatha", "Alexander J Smola", "Philipp Krähenbühl" ], "title": "Compressed video action recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Saining Xie", "Chen Sun", "Jonathan Huang", "Zhuowen Tu", "Kevin Murphy" ], "title": "Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification", "venue": "In Proceedings of European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Saining Xie", "Alexander Kirillov", "Ross Girshick", "Kaiming He" ], "title": "Exploring randomly wired neural networks for image recognition", "venue": "CoRR, abs/1904.01569,", "year": 2019 }, { "authors": [ "Fisher Yu", "Vladlen Koltun" ], "title": "Multi-scale context aggregation by dilated convolutions", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2016 }, { "authors": [ "Christopher Zach", "Thomas Pock", "Horst Bischof" ], "title": "A duality based approach for realtime TV-L1 optical flow", "venue": "In Joint Pattern Recognition Symposium,", "year": 2007 }, { "authors": [ "Bolei Zhou", "Alex Andonian", "Aude Oliva", "Antonio Torralba" ], "title": "Temporal relational reasoning in videos", "venue": "In Proceedings of European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Barret Zoph", "Quoc Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V. Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Learning to represent videos is a challenging problem. Because a video contains spatio-temporal data, its representation is required to abstract both appearance and motion information. This is particularly important for tasks such as activity recognition, as understanding detailed semantic contents of the video is needed. Previously, researchers approached this challenge by designing a two-stream model for appearance and motion information respectively, combining them by late or intermediate fusion to obtain successful results: Simonyan & Zisserman (2014); Feichtenhofer et al. (2016b;a; 2017; 2018). However, combining appearance and motion information is an open problem and the study on how and where different modalities should interchange representations and what temporal aspect/resolution each stream (or module) should focus on has been very limited.\nIn this paper, we investigate how to learn feature representations across spatial and motion visual clues. We propose a new multi-stream neural architecture search algorithm with connection learning guided evolution, which focuses on finding higher-level connectivity between network blocks taking multiple input streams at different temporal resolutions. Each block itself is composed of multiple residual modules with space-time convolutional layers, learning spatio-temporal representations. Our architecture learning not only considers the connectivity between such multi-stream, multi-resolution blocks, but also merges and splits network blocks to find better multi-stream video CNN architectures. Our objective is to address two main questions in video representation learning: (1) what feature representations are needed at each intermediate stage of the network and at which resolution and (2) how to combine or exchange such intermediate representations (i.e., connectivity learning). Unlike previous neural architecture search methods for images that focus on finding a good ‘module’ of convolutional layers to be repeated in a single-stream networks (Zoph et al., 2018; Real et al., 2019), our objective is to search for higher-level connections between multiple sequential or concurrent blocks to form multi-stream architectures.\nWe propose the concept of AssembleNet, a new method of fusing different sub-networks with different input modalities and temporal resolutions. AssembleNet is a general formulation that\nallows representing various forms of multi-stream CNNs as directed graphs, coupled with an efficient evolutionary algorithm to explore the network connectivity. Specifically, this is done by utilizing the learned connection weights to guide evolution, in addition to randomly combining, splitting, or connecting sub-network blocks. AssembleNet is a ‘family’ of learnable architectures; they provide a generic approach to learn connectivity among feature representations across input modalities, while being optimized for the target task. We believe this is the first work to (i) conduct research on automated architecture search with multi-stream connections for video understanding, and (ii) introduce the new connection-learning-guided evolutionary algorithm for neural architecture search.\nFigure 1 shows an example learned AssembleNet. The proposed algorithm for learning video architectures is very effective: it outperforms all prior work and baselines on two very challenging benchmark datasets, and establishes a new state-of-the-art. 
AssembleNet models use an equivalent number of parameters to standard two-stream (2+1)D ResNet models." }, { "heading": "2 PREVIOUS WORK", "text": "A video is spatio-temporal data (i.e., image frames concatenated along the time axis), and its representation must abstract both spatial and temporal information. Full 3D space-time (i.e., XYT) convolutional layers as well as (2+1)D convolutional layers have been widely used to represent videos (Tran et al., 2014; Carreira & Zisserman, 2017; Tran et al., 2018; Xie et al., 2018). Researchers studied replacing 2D convolutional layers in standard image-based CNNs such as Inception (Szegedy et al., 2016) and ResNet (He et al., 2016), so that they can be directly used for video classification.\nTwo-stream network designs, which combine motion and appearance inputs, are commonly used (e.g., Simonyan & Zisserman, 2014; Feichtenhofer et al., 2016a; 2017; 2016b). Combining appearance information at two different temporal resolutions (e.g., 24 vs. 3 frames per second) with intermediate connections has been proposed by Feichtenhofer et al. (2018). Late fusion of the two-stream representations, as well as architectures with more intermediate connections (Diba et al., 2019), have also been explored. However, these video CNN architectures are the result of careful manual designs by human experts.\nNeural Architecture Search (NAS), the concept of automatically finding better architectures based on data, is becoming increasingly popular (Zoph & Le, 2017; Zoph et al., 2018; Liu et al., 2018). Rather than relying on human expert knowledge to design a CNN model, neural architecture search allows machines to generate better-performing models optimized for the data. The use of reinforcement learning controllers (Zoph & Le, 2017; Zoph et al., 2018) as well as evolutionary algorithms (Real et al., 2019) have been studied, and they meaningfully outperform handcrafted architectures. Most of these works focus on learning architectures of modules (i.e., groupings of layers and their connections) to be repeated within a fixed single-stream meta-architecture (e.g., ResNet) for image-based object classification. One-shot architecture search to learn differentiable connections (Bender et al., 2018; Liu et al., 2019) has also been successful for images. However, it is very challenging to directly extend such work to find multi-stream models for videos, as it requires preparing all possible layers and interactions the final architecture may consider using. In multi-stream video CNNs, there are many possible convolutional blocks with different resolutions, and fully connecting them requires a significant amount of memory and training data, which makes it infeasible.\nOur work is also related to Ahmed & Torresani (2017), which used learnable gating to connect multiple residual module branches, and to the RandWire network (Xie et al., 2019), which showed that randomly connecting a sufficient number of convolutional layers creates performant architectures. However, similar to previous NAS work, the latter focuses only on generating connections between the layers within a block. The meta-architecture is fixed as a single stream model with a single input modality. In this work, our objective is to learn high-level connectivity between multi-stream blocks for video understanding driven by data.
We confirm experimentally that in multi-stream video CNNs, where multiple types of input modalities need to be considered at various resolutions, randomly connecting blocks is insufficient and the proposed architecture learning strategy is necessary." }, { "heading": "3 ASSEMBLENET", "text": "We propose a new principled way to find better neural architectures for video representation learning. We first expand a video CNN to a multi-resolution, multi-stream model composed of multiple sequential and concurrent neural blocks, and introduce a novel algorithm to search for the optimal connectivity between the blocks for a given task.\nWe model a video CNN architecture as a collection of convolutional blocks (i.e., sub-networks) connected to each other. Each block is composed of a residual module of space-time convolutional layers repeated multiple times, while having its own temporal resolution. The objective of our video architecture search is to automatically (1) decide the number of parallel blocks (i.e., how many streams to have) at each level of the network, (2) choose their temporal resolutions, and (3) find the optimal connectivity between such multi-stream neural blocks across various levels. The highly interconnected convolutional blocks allow learning of the video representations combining multiple input modalities at various temporal resolutions. We introduce the concept of connection-learning-guided architecture evolution to enable multi-stream architecture search.\nWe name our final architecture 'AssembleNet', since it is formulated by assembling (i.e., merging, splitting, and connecting) multiple building blocks." }, { "heading": "3.1 GRAPH FORMULATION", "text": "In order to make our neural architecture evolution consider multiple different streams with different modalities at different temporal resolutions, we formulate the multi-stream model as a directed acyclic graph. Each node in the graph corresponds to a sub-network composed of multiple convolutional layers (i.e., a block), and the edges specify the connections between such sub-networks. Each architecture is denoted as $G_i = (N_i, E_i)$ where $N_i = \{n_{0i}, n_{1i}, n_{2i}, \cdots\}$ is the set of nodes and $E_i$ is the set of edges defining their connectivity.\nNodes. A node in our graph representation is a ResNet block composed of a fixed number of interleaved 2D and (2+1)D residual modules. A '2D module' is composed of a 1x1 conv. layer, one 2D conv. layer with filter size 3x3, and one 1x1 convolutional layer. A '(2+1)D module' consists of a temporal 1D convolutional layer (with filter size 3), a 2D conv. layer, and a 1x1 conv. layer. In each block, we repeat a regular 2D residual module followed by the (2+1)D residual module m times.\nEach node has its own block level, which naturally decides the directions of the edges connected to it. Similar to the standard ResNet models, we made the nodes have a total of four block levels (+ the stem level). Having multiple nodes of the same level means the architecture has multiple parallel 'streams'. Figure 1 illustrates an example. Each level has a different m value: 1.5, 2, 3, and 1.5. m = 1.5 means that there is one 2D module, one (2+1)D module, and one more 2D module. As a result, the depth of our network is 50 conv. layers. We also have a batch normalization layer followed by a ReLU after every conv. layer.\nThere are two special types of nodes with different layer configurations: source nodes and sink nodes.
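Before detailing these special nodes, the regular node structure described above can be sketched roughly as follows (an illustrative PyTorch sketch of ours, omitting striding, channel bookkeeping, and the temporal dilation introduced below):

import torch.nn as nn

def conv_bn_relu(conv):
    # Every conv. layer is followed by batch normalization and a ReLU.
    return nn.Sequential(conv, nn.BatchNorm3d(conv.out_channels), nn.ReLU())

class TwoPlusOneDModule(nn.Module):
    # '(2+1)D module': temporal 1D conv (filter size 3), 2D conv (3x3), 1x1 conv,
    # wrapped in a residual connection. Input is a 5D tensor (N, C, T, H, W).
    def __init__(self, channels, bottleneck):
        super().__init__()
        self.block = nn.Sequential(
            conv_bn_relu(nn.Conv3d(channels, bottleneck, (3, 1, 1), padding=(1, 0, 0))),
            conv_bn_relu(nn.Conv3d(bottleneck, bottleneck, (1, 3, 3), padding=(0, 1, 1))),
            conv_bn_relu(nn.Conv3d(bottleneck, channels, 1)),
        )

    def forward(self, x):
        return x + self.block(x)

A '2D module' would look the same with the temporal (3, 1, 1) convolution replaced by a 1x1 convolution, and a node with m = 1.5 would stack 2D, (2+1)D, and 2D modules in sequence.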
There are two special types of nodes with different layer configurations: source nodes and sink nodes. A source node in the graph directly takes the input and applies a small number of convolutional/pooling layers (it is often referred to as the 'stem' of a CNN model). In video CNNs, the input is a 4D tensor (XYT + channel) obtained by concatenating either RGB frames or optical flow images along the time axis. Source nodes are treated as level-0 nodes. The source node is composed of one 2D conv. layer of filter size 7x7, one 1D temporal conv. layer of filter size 5, and one spatial max pooling layer. The 1D conv. is omitted in optical flow stems. A sink node generates the final output of the model, and it is composed of one pooling, one fully connected, and one softmax layer. The sink node is also responsible for combining the outputs of multiple nodes at the highest level, by concatenating them after the pooling. More details are provided in the Appendix.

Each node in the graph also has two attributes controlling the convolutional block: its temporal resolution and the number of channels. We use temporally dilated 1D convolution to dynamically change the resolution of the temporal convolutional layers in different blocks, as discussed below. The channel size (i.e., the number of filters) of a node could take arbitrary values, but we constrain the sum of the channels of all nodes in the same block level to be a constant so that the capacity of an AssembleNet model is equivalent to a ResNet model with the same number of layers.

Temporally Dilated 1D Convolution. One of the objectives is to allow the video architectures to look at multiple possible temporal resolutions. This could be done by preparing actual videos with different temporal resolutions as in Feichtenhofer et al. (2018), or by using temporally 'dilated' convolutions as we introduce here. Dilated filters allow temporal 1D conv. layers to focus on different temporal resolutions without losing temporal granularity. This essentially is a 1D temporal version of the standard 2D dilated convolutions used in Chen et al. (2018) or Yu & Koltun (2016):

Let k be a temporal filter (i.e., a vector) with size 2d + 1. The dilated convolution operator ∗r is similar to regular convolution but has different steps for the summation, described as:

(F ∗r k)(t) = ∑_{t1 + r·t2 = t} F(t1) k(t2 + d)   (1)

where t, t1, and t2 are time indexes. r indicates the temporal resolution (or the amount of dilation), and the standard 1D temporal convolution is a special case where r = 1. In the actual implementation, this is done by inserting r − 1 zeros between the elements of k to generate k′, and then convolving the zero-inflated filter with the input: F ∗r k = F ∗ k′. Importantly, the use of the dilated convolution allows different intermediate sub-network blocks (i.e., not just input stems) to focus on very different temporal resolutions at different levels of the convolutional architecture.

Note that our temporally dilated convolution is different from the one used in Lea et al. (2017), which designed a specific layer to combine representations from different frames with various step sizes. Our layers dilate the temporal filters themselves. Our dilated convolution can be viewed as a direct temporal version of the standard dilated convolutions used in Chen et al. (2018); Yu & Koltun (2016). The zero-inflation trick is sketched below.
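As a minimal illustration of the zero-inflation implementation described above (a sketch assuming NumPy; the function and argument names are ours):

```python
import numpy as np

def dilated_conv1d(F, k, r):
    """Temporally dilated 1D convolution F *_r k (Eq. 1), implemented by
    inserting r - 1 zeros between the elements of the filter k."""
    k_inflated = np.zeros((len(k) - 1) * r + 1)
    k_inflated[::r] = k  # keep the original taps, spaced r apart
    return np.convolve(F, k_inflated, mode="same")  # r = 1 recovers the standard conv.
```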
Edges. Each directed edge specifies the connection between two sub-network blocks, and it describes how a representation is transferred from one block to another block. We constrain the direction of each edge so that it is connected from a lower-level block to a higher-level block, to avoid forming a cycle and to allow parallel streams. A node may receive inputs from any number of lower-level nodes (including skip connections) and provide its output to any number of higher-level nodes.

Our architectures use a (learnable) weighted summation to aggregate inputs given from multiple connected nodes. That is, an input to a node is computed as F_in = ∑_i sigmoid(w_i) · F_out_i, where F_out_i are the output tensors (i.e., representations) of the nodes connected to the node and w_i are their corresponding weights. Importantly, each w_i is a variable that has to be learned from training data through back propagation. This has two key advantages compared to conventional feature map concatenation: (i) The input tensor size is consistent regardless of the number of connections. (ii) We use learned connection weights to 'guide' our architecture evolution algorithm in a preferable way, which we discuss more in Section 3.2.

If the inputs from different nodes differ in their spatial size, we add spatial max pooling and striding to match their spatial size. If the inputs have different channel sizes, we add a 1x1 conv. layer to match the bigger channel size. Temporal sizes of the representations are always consistent in our graphs, as there is no temporal striding in our formulation and the layers in the nodes are fully convolutional. A sketch of this weighted aggregation follows.
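A minimal sketch of the weighted-summation aggregation F_in = ∑_i sigmoid(w_i) · F_out_i (names are illustrative; in practice w would be a learnable parameter updated by back propagation):

```python
import numpy as np

def aggregate_inputs(outputs, w):
    """Combine the outputs of all connected lower-level nodes into one input
    tensor, gated by sigmoid-squashed connection weights."""
    gates = 1.0 / (1.0 + np.exp(-np.asarray(w)))  # sigmoid(w_i) in (0, 1)
    return sum(g * f for g, f in zip(gates, outputs))
```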
" }, { "heading": "3.2 EVOLUTION", "text": "We design an evolutionary algorithm with discrete mutation operators that modify nodes and edges in architectures over iterations. The algorithm maintains a population of P different architectures, P = {G1, G2, ..., G|P|}, where each architecture G is represented with a set of nodes and their edges as described above.

The initial population is formed by preparing a fixed number of randomly connected architectures (e.g., |P| = 20). Specifically, we (1) prepare a fixed number of stems and nodes at each level (e.g., two per level), (2) apply a number of node split/merge mutation operators, which we discuss more below, and (3) randomly connect nodes with the probability p = 0.5 while discarding architectures with graph depth < 4. As mentioned above, edges are constrained so that there is no directed edge reversing the level ordering. Essentially, a set of overly-connected architectures is used as a starting point. Temporal resolutions are randomly assigned to the nodes.

We use the tournament selection algorithm (Goldberg & Deb, 1991) as the main evolution framework: At each evolution round, the algorithm updates the population by selecting a 'parent' architecture and mutating (i.e., modifying) it to generate a new 'child' architecture. The parent is selected by randomly sampling a subset of the entire population P′ ⊂ P, and then computing the architecture with the highest 'fitness': Gp = argmax_{Gi ∈ P′} f(Gi), where f(G) is the fitness function. Our fitness is defined as the video classification accuracy of the model, measured by training the model for a certain number of initial iterations and then evaluating it on the validation set as a proxy task. More specifically, we use top-1 accuracy + top-5 accuracy as the fitness function. The child is added into the population, and the model with the least fitness is discarded from the population.

A child is evolved from the parent by following two steps. First, it changes the block connectivity (i.e., edges) based on their learned weights: 'connection-learning-guided evolution'. Next, it applies a random number of mutation operators to further modify the node configuration. The mutation operators include (1) a random modification of the temporal resolution of a convolutional block (i.e., a node) as well as (2) a merge or split of a block. When splitting a node into two nodes, we make their input/output connections identical while making the number of channels in their convolutional layers half that of the node before the split (i.e., C = Cp/2 where Cp is the channel size of the parent). More details are found in the Appendix. As a result, we maintain the total number of parameters, since splitting or merging does not change the number of parameters of the convolutional blocks.

Connection-Learning-Guided Mutation. Instead of randomly adding, removing or modifying block connections to generate the child architecture, we take advantage of the learned connection weights from its parent architecture. Let Ep be the set of edges of the parent architecture. Then the edges of the child architecture Ec are inherited from Ep, by only maintaining high-weight connections while replacing the low-weight connections with new random ones. Specifically, Ec = E1c ∪ E2c:

E1c = { e ∈ Ep | We > B },   E2c = { e ∈ (E∗ − Ep) | |Ep − E1c| / |E∗ − Ep| > Xe }   (2)

where Xe ∼ unif(0, 1) and E∗ is the set of all possible edges. E1c corresponds to the edges the child architecture inherits from the parent architecture, decided based on the learned weight We of each edge. This is possible because our fitness measure involves initial proxy training of each model, providing the learned connection weight values We of the parent.

B, which controls whether or not to keep an edge from the parent architecture, could either be a constant threshold or a random variable following a uniform distribution: B = b or B = XB ∼ unif(0, 1). E2c corresponds to the new randomly added edges which were not in the parent architecture. We enumerate through each possible new edge, and randomly add it with the probability |Ep − E1c| / |E∗ − Ep|. This makes the expected total number of added edges equal to |Ep − E1c|, maintaining the size of Ec. Figure 2 shows an example of the evolution process and Figure 3 shows final architectures. The sketch below makes Equation (2) concrete.
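A small sketch of this connection-learning-guided mutation, assuming Python sets and a dict of learned edge weights (the container types and names are ours):

```python
import random

def mutate_edges(parent_edges, weight, all_edges, b=0.5):
    """Inherit high-weight edges (E1c) and replace the dropped ones with
    random new edges (E2c), keeping the expected edge count unchanged."""
    kept = {e for e in parent_edges if weight[e] > b}            # E1c
    candidates = all_edges - parent_edges                        # E* - Ep
    p_add = (len(parent_edges) - len(kept)) / max(len(candidates), 1)
    added = {e for e in candidates if random.random() < p_add}   # E2c
    return kept | added
```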
Evolution Implementation Details. Initial architectures are formed by randomly preparing either {2 or 4} stems, two nodes per level at levels 1 to 3, and one node at level 4. We then apply a random number (1∼5) of node split operators so that each initial architecture has a different number of nodes. Each node is initialized with a random temporal resolution of 1, 2, 4, or 8. As mentioned, each possible connection is then added with the probability of p = 0.5. At each evolution round, the best-performing parent architecture is selected from a random subset of 5 from the population. The child architecture is generated by modifying the connections from the parent architecture (Section 3.2). A random number (0∼4) of node split, merge, or temporal resolution change mutation operators are then applied. Evaluation of each architecture (i.e., measuring the fitness) is done by training the model for 10K iterations and then measuring its top-1 + top-5 accuracy on the validation subset. The Moments-in-Time dataset, described in the next section, is used as the proxy dataset to measure fitness. The evolution was run for ∼200 rounds, although a well-performing architecture was found within only 40 rounds (e.g., Figure 3-right). Figure 1 shows the model found at the 165th round. 10K training iterations of each model during evolution took 3∼5 hours; with our setting, evolving a model for 40 rounds took less than a day with 10 parallel workers." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 DATASETS", "text": "Charades Dataset. We first test on the popular Charades dataset (Sigurdsson et al., 2016), which is unique in the activity recognition domain as it contains long sequences. It is one of the largest public datasets with continuous action videos, containing 9848 videos of 157 classes (7985 training and 1863 testing videos). Each video is ∼30 seconds. It is a challenging dataset due to the duration and variety of the activities. Activities may temporally overlap in a Charades video, requiring the model to predict multiple class labels per video. We used the standard 'Charades v1 classify' setting for the evaluation. To comply with prior work (e.g. Feichtenhofer et al., 2018), we also report results when pre-training on Kinetics (Carreira & Zisserman, 2017), which is another large-scale dataset.

Table 1: Reported state-of-the-art action classification performances (vs. AssembleNet) on Charades. '2-stream (2+1)D ResNet-50' is the two-stream model with connection learning for level-4 fusion.

Method | pre-train | modality | mAP
2-stream (Simonyan & Zisserman, 2014) | UCF101 | RGB+Flow | 18.6
Asyn-TF (Sigurdsson et al., 2017) | UCF101 | RGB+Flow | 22.4
CoViAR (Wu et al., 2018b) | ImageNet | Compressed | 21.9
MultiScale TRN (Zhou et al., 2018) | ImageNet | RGB | 25.2
I3D (Carreira & Zisserman, 2017) | Kinetics | RGB | 32.9
I3D (from Wang et al., 2018) | Kinetics | RGB | 35.5
I3D-NL (Wang et al., 2018) | Kinetics | RGB | 37.5
STRG (Wang & Gupta, 2018) | Kinetics | RGB | 39.7
LFB-101 (Wu et al., 2018a) | Kinetics | RGB | 42.5
SlowFast-101 (Feichtenhofer et al., 2018) | Kinetics | RGB+RGB | 45.2
2-stream (2+1)D ResNet-50 (ours) | MiT | RGB+Flow | 48.7
2-stream (2+1)D ResNet-50 (ours) | Kinetics | RGB+Flow | 50.4
2-stream (2+1)D ResNet-101 (ours) | Kinetics | RGB+Flow | 50.6
AssembleNet-50 (ours) | MiT | RGB+Flow | 53.0
AssembleNet-50 (ours) | Kinetics | RGB+Flow | 56.6
AssembleNet-101 (ours) | Kinetics | RGB+Flow | 58.6

Table 2: State-of-the-art action classification accuracies on Moments in Time (Monfort et al., 2018).

Method | modality | Top-1 | Top-5
ResNet50-ImageNet | RGB | 27.16 | 51.68
TSN (Wang et al., 2016) | RGB | 24.11 | 49.10
Ioffe & Szegedy (2015) | Flow | 11.60 | 27.40
TSN-Flow (Wang et al., 2016) | Flow | 15.71 | 34.65
TSN-2Stream (Wang et al., 2016) | RGB+F | 25.32 | 50.10
TRN-Multi (Zhou et al., 2018) | RGB+F | 28.27 | 53.87
Two-stream (2+1)D ResNet-50 | RGB+F | 28.97 | 55.55
I3D (Carreira & Zisserman, 2017) | RGB+F | 29.51 | 56.06
AssembleNet-50 | RGB+F | 31.41 | 58.33
AssembleNet-50 (with Kinetics) | RGB+F | 33.91 | 60.86
AssembleNet-101 (with Kinetics) | RGB+F | 34.27 | 62.71

We note that Kinetics is shrinking in size (∼15% of videos removed from the original Kinetics-400) and the previous versions are no longer available from the official site.

Moments in Time (MiT) Dataset. The Moments in Time (MiT) dataset (Monfort et al., 2018) is a large-scale video classification dataset with more than 800K videos (∼3 seconds per video). It is a very challenging dataset, with the state-of-the-art models obtaining less than 30% accuracy. We use this dataset for the architecture evolution, and train/test the evolved models.
We chose the MiT dataset because it provides a sufficient amount of training data for video CNN models and allows stable comparison against previous models. We used its standard classification evaluation setting." }, { "heading": "4.2 RESULTS", "text": "Tables 1 and 2 compare the performance of AssembleNet against the state-of-the-art models. We denote AssembleNet more specifically as AssembleNet-50, since its depth is 50 layers and it has an equivalent number of parameters to ResNet-50. AssembleNet-101 is its 101-layer version, with an equivalent number of parameters to ResNet-101. AssembleNet outperforms prior work on both datasets, setting new state-of-the-art results for them. Its performance on MiT is the first above 34%. We also note that the performance on Charades is even more impressive at 58.6, whereas the previously known best results are 42.5 and 45.2. For these experiments, the architecture search was done on the MiT dataset, and then the found models were trained and tested on both datasets, which demonstrates that the found architectures are useful across datasets.

Table 3: Comparison between AssembleNet and architectures without evolution, but with connection weight learning. Four-stream models are reported here for the first time, and are very effective. All these models have a similar number of parameters.

Architecture | MiT | Charades
Two-stream (late fusion) | 28.97 | 46.5
Two-stream (fusion at lv. 4) | 30.00 | 48.7
Two-stream (flow→RGB inter.) | 30.21 | 49.5
Two-stream (fully, fuse at 4) | 29.87 | 50.5
Four-stream (fully, fuse at 4) | 29.98 | 50.7
Random (+ connection learning) | 29.91 | 50.1
AssembleNet-50 | 31.41 | 53.0

Table 4: Ablation comparing different AssembleNet architectures found with full vs. constrained search spaces. The models are trained from scratch.

Architecture | MiT
Baseline (random + conn. learning) | 29.91
No mutation | 30.26
RGB-only | 30.30
Without temporal dilation | 30.49
Two-stem only | 30.75
Full AssembleNet-50 | 31.41

In addition, we compare the proposed connection-learning-guided evolution with random architecture search and the standard evolutionary algorithm with random connection mutations. We made the standard evolutionary algorithm randomly modify 1/3 of the total connections at each round, as that is roughly the number of edges the connection-learning-guided evolution modifies. Figure 4 shows the results, visualizing the average fitness score of the three top-performing models in each pool. We observe that the connection-learning-guided evolution is able to find better architectures, and it is able to do so more quickly. The standard evolution performs similarly to random search and is not as effective. We believe this is due to the large search space the approach is required to handle, which is exponential in the number of possible connections. For instance, if there are N nodes, the search space complexity is 2^O(N^2) just for the connectivity search. Note that the initial ∼30 rounds are always used for random initialization of the model population, regardless of the search method." }, { "heading": "4.3 ABLATION STUDIES", "text": "We conduct an ablation study comparing the evolved AssembleNet to multiple (2+1)D two-stream (or multi-stream) architectures that are designed to match the abilities of AssembleNet but without evolution. We note that these include very strong architectures that have not been explored before, such as the four-stream model with dense intermediate connectivity.
We design competitive networks having various connections between streams, where the connection weights are also learned (see the supplementary material for detailed descriptions and visualizations). Note that all these models have equivalent capacity (i.e., number of parameters). The performance difference is due to network structure. Table 3 shows the results, demonstrating that these architectures with learnable interconnectivity are very powerful themselves and that evolution is further beneficial. The Moments in Time models were trained from scratch, and the Charades models were pre-trained on MiT. In particular, we evaluated an architecture with intermediate connectivity from the flow stream to RGB, inspired by Feichtenhofer et al. (2016b; 2018) (+ connection weight learning). It gave 30.2% accuracy on MiT and 49.5% on Charades, which are not as accurate as AssembleNet. Randomly generated models (from 50 rounds of search) are also evaluated, confirming that such architectures do not perform well.

Further, we conduct another ablation to confirm the effectiveness of our search space. Table 4 compares the models found with our full search space vs. more constrained search spaces, such as only using two stems and not using temporal dilation (i.e., fixed temporal resolution)." }, { "heading": "4.4 GENERAL FINDINGS", "text": "As a result of connection-learning-guided architecture evolution, non-obvious and non-intuitive connections are found (Figure 3). As expected, more than one possible 'connectivity' solution can yield similarly good results. Simultaneously, models with random connectivity perform poorly compared to the found AssembleNet. Our observations also include: (1) The models prefer to have only one block at the highest level, although we allow the search to consider having more than one block at that level. (2) The final block prefers simple connections gathering all outputs of the blocks in the 2nd-to-last level. (3) Many models use multiple blocks with different temporal resolutions at the same level, justifying the necessity of the multi-stream architectures. (4) Often, there are 1 or 2 blocks heavily connected to many other blocks. (5) Architectures prefer using more than 2 streams, usually using 4 at many levels." }, { "heading": "5 CONCLUSION", "text": "We present AssembleNet, a new approach for neural architecture search using connection-learning-guided architecture evolution. AssembleNet finds multi-stream architectures with better connectivity and temporal resolutions for video representation learning. Our experiments confirm that the learned models significantly outperform previous models on two challenging benchmarks." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 SUPER-GRAPH VISUALIZATION OF THE CONNECTIVITY SEARCH SPACE", "text": "Figure 5 visualizes all possible connections and channel/temporal resolution options our architecture evolution is able to consider. The objective of our evolutionary algorithm could be interpreted as finding the optimal sub-graph (of this super-graph) that maximizes the performance while maintaining the number of total parameters. Directly fitting the entire super-graph into memory was infeasible in our experiments." }, { "heading": "A.2 CHANNEL SIZES OF THE LAYERS AND NODE SPLIT/MERGE MUTATIONS", "text": "As we described in the paper, each node (i.e., a convolutional block) has a parameter C controlling the number of filters of the convolutional layers in the block.
When splitting or merging blocks, the numbers of filters are split or combined, respectively. Figure 6 provides a visualization of a block with its numbers of filters specified to the right, and of a split operation. While many designs are possible, we design the blocks and the splitting as follows. The sizes of the 1x1 convolutional layers and 1D temporal convolutional layers are strictly governed by C, having a channel size of C (some 4C). On the other hand, the number of filters of the 2D convolutional layer is fixed per level as a constant Dv, where v is the level of the block: D1 = 64, D2 = 128, D3 = 256, and D4 = 512. The layers in the stems have 64 channels if there are only two stems and 32 if there are four stems.

When a node is split into two nodes, we update the resulting two nodes' channel sizes to be half of their original node. This keeps the total number of model parameters identical before and after the node split. The first 1x1 convolutional layer will have half the parameters after the split, since its output channel size is now 1/2. The 2D convolutional layer will also have exactly half the parameters, since its input channel size is 1/2 while the output channel size stays fixed. The next 1x1 convolutional layer will have the fixed input channel size while the output channel size becomes 1/2: thus the total number of parameters would be 1/2 of the original parameters.

Merging of the nodes is done as the inverse of the way we split. When merging two nodes into one, the merged node inherits all input/output connections from the two nodes: we take a union of all the connections. The channel size of the merged node is the sum of the channel sizes of the two nodes being merged. The temporal dilation rate of the merged node is randomly chosen between the two nodes before the merge. A sketch of these two mutations is given below.
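A compact sketch of this split/merge bookkeeping (assuming a simple dict-based node representation of our own choosing, with input/output connections stored as sets):

```python
import random

def split_node(node):
    """Split one node into two: identical connections, half the channels each,
    so the total parameter count is preserved."""
    child = dict(node, channels=node["channels"] // 2)
    return child, dict(child)

def merge_nodes(a, b):
    """Merge two nodes: union of connections, summed channels, and a temporal
    dilation rate randomly chosen from the two originals."""
    return {
        "inputs": a["inputs"] | b["inputs"],
        "outputs": a["outputs"] | b["outputs"],
        "channels": a["channels"] + b["channels"],
        "dilation": random.choice([a["dilation"], b["dilation"]]),
    }
```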
" }, { "heading": "A.3 HAND-DESIGNED MODELS USED IN THE ABLATION STUDY", "text": "Figure 7 illustrates the actual architectures of the hand-designed (2+1)D CNN models used in our ablation study. We also show the final learned weights of the connections, illustrating which connections the model ended up using or not using. We note that these architectures are also very enlightening, as the connectivity within them is learned in the process. We observe that stronger connections tend to be formed later for 2-stream architectures. For 4-stream architectures, stronger connections do form early, and, not surprisingly, a connection to at least one node of a different modality is established, i.e. a node stemming from RGB will connect to at least one flow node at the next level and vice versa.

Below is a more detailed description of the networks used in the paper: 'Two-stream (late fusion)' means that the model has two separate streams at every level including level 4, and the outputs of such two level-4 nodes are combined for the final classification. 'Fusion at lv. 4' is the model that only has one level-4 node to combine the outputs of the two level-3 nodes using a weighted summation. 'Two-stream (fully)' means that the model has two nodes at each of levels 1-3 and one node at level 4, and each node is always connected to every node in the immediate next level. 'Flow→RGB' means that only the RGB stream nodes combine outputs from both RGB and flow stream nodes of the immediate lower level." }, { "heading": "A.4 ASSEMBLENET MODEL/LAYER DETAILS", "text": "We also provide the final AssembleNet model in table form in Table 5. In particular, the 2nd element of each block description lists where the inputs to that block come from (i.e., the connections). As already mentioned, 2D and (2+1)D residual modules are repeated in each block. The numbers of repetitions m are 1.5, 2, 3, and 1.5 at levels 1-4, respectively. m = 1.5 means that we have one 2D residual module, one (2+1)D module, and one more 2D module. This makes the number of convolutional layers of each block at levels 1-4 to be 9, 12, 18, and 9. In addition, a stem has at most 2 convolutional layers. The total depth of our network is 50, similar to a conventional (2+1)D ResNet-50. For AssembleNet-101, we use m = 1.5, 2, 11.5, and 1.5 at each level.

If a block has a spatial stride of 2, the striding happens at the first 2D convolutional layer of the block. In the stem, which has a spatial stride of 4, striding of size 2 happens twice: once at the 2D convolutional layer and once at the max pooling layer. As mentioned, the model has a batch normalization layer followed by ReLU after every convolutional layer regardless of its type (i.e., 2D, 1D, and 1x1). 2D conv. filter sizes are 3x3, and 1D conv. filter sizes are 3." }, { "heading": "A.5 SINK NODE DETAILS", "text": "When each evolved or baseline (2+1)D model is applied to a video, it generates a 5D (BTYXC) tensor after the final convolutional layer, where B is the size of the batch and C is the number of channels. The sink node is responsible for mapping this into the output vector, whose dimensionality is identical to the number of video classes in the dataset. The sink node first applies spatial average pooling to generate a 3D (BTC) tensor. If there are multiple level-4 nodes (which is rarely the case), the sink node combines them into a single tensor by averaging/concatenating them. Averaging or concatenating does not make much difference empirically. Next, temporal average/max pooling is applied to make the representation a 2D (BC) tensor (average pooling was used for the MiT dataset and max pooling was used for Charades), and the final fully connected layer and the softmax layer are applied to generate the final output." }, { "heading": "A.6 TRAINING DETAILS", "text": "For the Moments in Time (MiT) dataset training, 8 videos are provided per TPU core (with 16GB memory): the total batch size (for each gradient update) is 512 with 32 frames per video. The batch size used for Charades is 128 with 128 frames per video. The base framerate we used is 12.5 fps for MiT and 6 fps for Charades. The spatial input resolution is 224x224 during training. We used the standard Momentum Optimizer in TensorFlow. We used a learning rate of 3.2 (for MiT) and 25.6 (for Charades), 12k warmup iterations, and cosine decay. No dropout is used; weight decay is set to 1e-4 and label smoothing to 0.2.

Training a model for 10k iterations (during evolution) took 3∼5 hours, and fully training the model (for 50k iterations) took ∼24 hours per dataset. We used the TV-L1 optical flow extraction algorithm (Zach et al., 2007), implemented with tensor operations by Piergiovanni & Ryoo (2019), to obtain the flow input." }, { "heading": "A.7 EVALUATION DETAILS", "text": "When evaluating a model on the MiT dataset, we provide 36 frames per video. The duration of each MiT video is 3 seconds, making 36 frames roughly correspond to the entire video. For the Charades dataset, where each video's duration is roughly ∼30 seconds, the final class labels are obtained by applying the model to five random 128-frame crops (i.e., segments) of each video.
The output multi-class labels are max-pooled to get the final label, and are compared to the ground truth to measure the average precision scores. The spatial resolution used for testing is 256x256." } ]
2020
null
SP:1fcf3b2eec374cb379819564c4dbf5cfabe3ff8a
[ " The authors consider the problem of estimating average treatment effects when observed X and treatment T causes Y. Observational data for X,T,Y is available and strong ignorability is assumed. Previous work (Shalit et al 2017) introduced learning a representation that is invariant in distribution across treatment and control groups and using that with treatment to estimate Y. However, authors point out that this representation being forced to be invariant still does not drive the selection bias to zero. A follow up work (Hassanpour and Greiner 2019) - corrects for this by using additional importance weighting that estimates the treatment selection bias given the learnt representation. However, the authors point out even this is not complete in general, as X could be determined by three latent factors, one that is the actual confounder between treatment and outcome and the other that affects only the outcome and the other that affects only the treatment. Therefore, the authors propose to have three representations and enforce independence between representation that solely determines outcome and the treatment and make other appropriate terms depend on the respective latent factors. This gives a modified objective with respect to these two prior works.", "The paper proposes an algorithm that identifies disentangled representation to find out an individual treatment effect. A very specific model that tries to find out the underlying dynamics of such a problem is proposed and is learned by minimizing a suggested objective that takes the strengths of previous approaches. The method is demonstrated in a synthetic dataset and IHDP dataset and shown to outperform other previous methods by a large margin." ]
We consider the challenge of estimating treatment effects from observational data; and point out that, in general, only some factors based on the observed covariates X contribute to selection of the treatment T, and only some to determining the outcomes Y. We model this by considering three underlying sources of {X, T, Y} and show that explicitly modeling these sources offers great insight to guide designing models that better handle selection bias in observational datasets. This paper is an attempt to conceptualize this line of thought and provide a path to explore it further. In this work, we propose an algorithm to (1) identify disentangled representations of the above-mentioned underlying factors from any given observational dataset D and (2) leverage this knowledge to reduce, as well as account for, the negative impact of selection bias on estimating the treatment effects from D. Our empirical results show that the proposed method achieves state-of-the-art performance in both individual and population based evaluation measures.
[ { "affiliations": [], "name": "COUNTERFACTUAL REGRESSION" }, { "affiliations": [], "name": "Negar Hassanpour" }, { "affiliations": [], "name": "Russell Greiner" } ]
[ { "authors": [ "Peter C Austin" ], "title": "An introduction to propensity score methods for reducing the effects of confounding in observational studies", "venue": "Multivariate Behavioral Research,", "year": 2011 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE TPAMI,", "year": 2013 }, { "authors": [ "Alina Beygelzimer", "John Langford" ], "title": "The offset tree for learning with partial labels", "venue": "In ACM SIGKDD", "year": 2009 }, { "authors": [ "Léon Bottou", "Jonas Peters", "Joaquin Quinonero Candela", "Denis Xavier Charles", "Max Chickering", "Elon Portugaly", "Dipankar Ray", "Patrice Y Simard", "Ed Snelson" ], "title": "Counterfactual reasoning and learning systems: The example of computational advertising", "venue": null, "year": 2013 }, { "authors": [ "Vincent Dorie" ], "title": "NPCI: Non-parametrics for causal inference, 2016. https://github.com/ vdorie/npci", "venue": null, "year": 2016 }, { "authors": [ "Arthur Gretton", "Karsten M Borgwardt", "Malte J Rasch", "Bernhard Schölkopf", "Alexander Smola" ], "title": "A kernel two-sample test", "venue": "JMLR, 13(Mar):723–773,", "year": 2012 }, { "authors": [ "Negar Hassanpour", "Russell Greiner" ], "title": "A novel evaluation methodology for assessing off-policy learning methods in contextual bandits", "venue": "In Canadian AI,", "year": 2018 }, { "authors": [ "Negar Hassanpour", "Russell Greiner" ], "title": "Counterfactual regression with importance sampling weights", "venue": "In IJCAI, pp. 5880–5887,", "year": 2019 }, { "authors": [ "Jennifer L Hill" ], "title": "Bayesian nonparametric modeling for causal inference", "venue": "Journal of Computational and Graphical Statistics,", "year": 2011 }, { "authors": [ "Guido W Imbens" ], "title": "Nonparametric estimation of average treatment effects under exogeneity: A review", "venue": "Review of Economics and Statistics,", "year": 2004 }, { "authors": [ "Guido W. Imbens", "Donald B. 
Rubin" ], "title": "Causal Inference for Statistics, Social, and Biomedical Sciences: An Introduction", "venue": null, "year": 2015 }, { "authors": [ "Guido W Imbens", "Jeffrey M Wooldridge" ], "title": "Recent developments in the econometrics of program evaluation", "venue": "Journal of Economic Literature,", "year": 2009 }, { "authors": [ "Fredrik Johansson", "Uri Shalit", "David Sontag" ], "title": "Learning representations for counterfactual inference", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Kun Kuang", "Peng Cui", "Bo Li", "Meng Jiang", "Shiqiang Yang", "Fei Wang" ], "title": "Treatment effect estimation with data-driven variable decomposition", "venue": "In AAAI,", "year": 2017 }, { "authors": [ "Christos Louizos", "Uri Shalit", "Joris M Mooij", "David Sontag", "Richard Zemel", "Max Welling" ], "title": "Causal effect inference with deep latent-variable models", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Yishay Mansour", "Mehryar Mohri", "Afshin Rostamizadeh" ], "title": "Domain adaptation: Learning bounds and algorithms", "venue": "arXiv preprint arXiv:0902.3430,", "year": 2009 }, { "authors": [ "Paul R Rosenbaum", "Donald B Rubin" ], "title": "The central role of the propensity score in observational studies for causal effects", "venue": null, "year": 1983 }, { "authors": [ "Donald B Rubin" ], "title": "Estimating causal effects of treatments in randomized and nonrandomized studies", "venue": "Journal of Educational Psychology,", "year": 1974 }, { "authors": [ "Uri Shalit", "Fredrik D. Johansson", "David Sontag" ], "title": "Estimating individual treatment effect: Generalization bounds and algorithms", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Hidetoshi Shimodaira" ], "title": "Improving predictive inference under covariate shift by weighting the loglikelihood function", "venue": "Journal of Statistical Planning And Inference,", "year": 2000 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement Learning: An Introduction, volume 1", "venue": null, "year": 1998 }, { "authors": [ "Adith Swaminathan", "Thorsten Joachims" ], "title": "Batch learning from logged bandit feedback through counterfactual risk", "venue": "minimization. JMLR,", "year": 2015 }, { "authors": [ "Adith Swaminathan", "Thorsten Joachims" ], "title": "Counterfactual risk minimization: Learning from logged bandit feedback", "venue": "In ICML,", "year": 2015 }, { "authors": [ "In NeurIPS", "2015c. Liuyi Yao", "Sheng Li", "Yaliang Li", "Mengdi Huai", "Jing Gao", "Aidong Zhang" ], "title": "Representation learning", "venue": null, "year": 2015 } ]
[ { "heading": null, "text": "We consider the challenge of estimating treatment effects from observational data; and point out that, in general, only some factors based on the observed covariates X contribute to selection of the treatment T , and only some to determining the outcomes Y . We model this by considering three underlying sources of {X, T, Y } and show that explicitly modeling these sources offers great insight to guide designing models that better handle selection bias in observational datasets. This paper is an attempt to conceptualize this line of thought and provide a path to explore it further.\nIn this work, we propose an algorithm to (1) identify disentangled representations of the above-mentioned underlying factors from any given observational dataset D and (2) leverage this knowledge to reduce, as well as account for, the negative impact of selection bias on estimating the treatment effects from D. Our empirical results show that the proposed method achieves state-of-the-art performance in both individual and population based evaluation measures." }, { "heading": "1 INTRODUCTION", "text": "As we rely more and more on artificial intelligence (AI) to automate the decision making processes, accurately estimating the causal effects of taking different actions gains an essential role. A prominent example is precision medicine – i.e., the customization of health-care tailored to each individual patient – which attempts to identify which medical procedure t ∈ T will benefit a certain patient x the most, in terms of the treatment outcome y ∈ R. Learning such models requires answering counterfactual questions (Rubin, 1974; Pearl, 2009) such as: “Would this patient have lived longer [and by how much], had she received an alternative treatment?”.\nFor notation: a dataset D = { [xi, ti, yi] }Ni=1 used for treatment effect estimation has the following format: for the ith instance (e.g., patient), we have some context information xi ∈ X ⊆ RK (e.g., age, BMI, blood work, etc.), the administered treatment ti chosen from a set of treatment options T (e.g., {0: medication, 1: surgery}), and the respective observed outcome yi ∈ Y (e.g., survival time; Y ⊆ R+) as a result of receiving treatment ti. Note that D only contains the outcome of the administered treatment (aka observed outcome: yi), but not the outcome(s) of the alternative treatment(s) (aka counterfactual outcome(s): yti for t ∈ T \\ {ti}), which are inherently unobservable. For the binary-treatment case, we denote the alternative treatment as ¬ti = 1− ti. Pearl (2009) demonstrates that, in general, causal relationships can only be learned by experimentation (on-line exploration), or running a Randomized Controlled Trial (RCT), where the treatment assignment does not depend on the individual X – see Figure 1(a). In many cases, however, this is expensive, unethical, or even infeasible. Here, we are forced to approximate treatment effects from off-line datasets collected through Observational Studies. In such datasets, the administered treatment T depends on some or all attributes of individual X – see Figure 1(b). Here, as Pr(T |X ) 6= Pr(T ), we say these datasets exhibit selection bias (Imbens & Rubin, 2015). Figure 2 illustrates selection bias in an example (synthetic) observational dataset.\nHere, we want to accurately estimate the Individual Treatment Effect (ITE) for each instance i – i.e., to estimate ei = y1i − y0i . 
We frame the solution as learning the function f : X × T → Y that can accurately predict the outcomes (both the observed ŷi^{ti} as well as the counterfactual ŷi^{¬ti}) given the context information xi for each individual. As mentioned earlier, there are two challenges associated with estimating treatment effects:

(i) The fact that counterfactual outcomes are unobservable (i.e., not present in any training data) makes estimating treatment effects more difficult than the generalization problem in the supervised learning paradigm. This is an inherent characteristic of this task.

(ii) Selection bias in observational datasets implies having fewer instances within each treatment arm at specific regions of the domain. This sparsity, in turn, would decrease the accuracy and confidence of predicting counterfactuals at those regions.

This paper addresses the second challenge by investigating the root causes of selection bias, by dissecting and identifying the underlying factors that can generate an observational dataset D, and leveraging this knowledge to reduce, as well as account for, the negative impact of selection bias on estimating the treatment effects from D. In this work, we borrow ideas from the representation learning literature (Bengio et al., 2013) in order to reduce selection bias, and from the domain adaptation literature (Shimodaira, 2000) in order to account for the remainder selection bias that (might) still exist after its reduction.

Our analysis relies on the following assumptions: Assumption 1: Unconfoundedness (Rosenbaum & Rubin, 1983) – There are no unobserved confounders (i.e., covariates that contribute to both the treatment selection procedure as well as determination of outcomes). Formally, {Y^t}_{t∈T} ⊥ T | X. Assumption 2: Overlap (Imbens, 2004) – Every individual x should have a non-zero chance of being assigned to any treatment arm. That is, Pr(T=t | X=x) ≠ 0 ∀t ∈ T, ∀x ∈ X.

These two assumptions together are called strong ignorability (Rosenbaum & Rubin, 1983). Imbens & Wooldridge (2009) showed that strong ignorability is sufficient for ITE to be identifiable.

Without loss of generality, we assume that the random variable X follows a(n unknown) joint probability distribution Pr(X | Γ, ∆, Υ), treatment T follows Pr(T | Γ, ∆), and outcome Y^T follows Pr_T(Y^T | ∆, Υ), where Γ, ∆, and Υ represent the three underlying factors¹ that generate an observational dataset D. The respective graphical model is illustrated in Figure 3. Conforming with the statements above, note that the graphical model also suggests that selection bias is induced by factors Γ and ∆, where ∆ represents the confounding factors between T and Y.

¹ Examples: (Γ) rich patients receiving the expensive treatment while poor patients receive the cheap one – although outcomes of the possible treatments are not particularly dependent on patients' wealth status; (∆) younger patients receiving surgery while older patients receive medication; and (Υ) genetic information that determines the efficacy of a medication, where such a relationship is unknown to the attending physician.

Main contribution: We argue that explicit identification of the underlying factors {Γ, ∆, Υ} in observational datasets offers great insight to guide designing models that better handle selection bias and consequently achieve better performance in terms of estimating ITEs. In this paper, we propose a model, named Disentangled Representations for CounterFactual Regression (DR-CFR), that is optimized to do exactly that.
We also present experiments that demonstrate the advantages of this perspective, and show empirically that the proposed method outperforms state-of-the-art models in a variety of data generation scenarios with different dimensionality of factors; see below." }, { "heading": "2 RELATED WORKS", "text": "Selection bias in observational datasets is equivalent to a domain adaptation scenario where a model is trained on a "source" (observed) data distribution, but should perform well on a "target" (counterfactual) one. Learning treatment effects from observational datasets is closely related to "off-policy learning from logged bandit feedback" – cf., (Swaminathan & Joachims, 2015a), whose goal is learning an optimal policy that selects the best personalized treatment for each individual. A common statistical solution is re-weighting certain data instances to balance the source and target distributions. The majority of re-weighting approaches belong to the Inverse Propensity Weighting (IPW) family of methods – cf., (Austin, 2011; Bottou et al., 2013; Swaminathan & Joachims, 2015c). While IPW methods are unbiased, they suffer from high variance. Swaminathan & Joachims (2015b) proposed the Counterfactual Risk Minimization (CRM) principle to alleviate this issue. In summary, re-weighting is an attempt to account for the selection bias.

Johansson et al. (2016) is among the pioneering works that explored ways to use techniques from representation learning (Bengio et al., 2013) to reduce the selection bias. Shalit et al. (2017) present a refined version of (Johansson et al., 2016)'s method that learns a common representation space Φ(x) = φ by minimizing the discrepancy (Mansour et al., 2009) (hereinafter "disc") between the conditional distributions of φ given t=0 versus given t=1. That is,

disc({Φ(xi)}_{i:ti=0}, {Φ(xi)}_{i:ti=1})   (1)

which is (effectively) a regularization term that attempts to reduce selection bias in the learned representation. On top of this representation learning network, they trained two regression networks h^t(φ) – one for each treatment arm (t ∈ {0, 1}) – that predict the outcomes.

Hassanpour & Greiner (2019) argued that the learned representation cannot and should not remove all the selection bias, as the confounders not only contribute to choosing a treatment but also to determining the respective outcomes.² As a result, where there are confounders (which is a common situation), even φ would exhibit some selection bias, although less than that in the original domain x. They built on the work of (Shalit et al., 2017) by introducing context-aware importance sampling weights, which attempt to account for the above-mentioned remainder selection bias.

² While Hassanpour & Greiner (2019) presented a graphical model similar to our Figure 3, they only used it to investigate the nature of selection bias. N.b., they did not implement the idea of learning disentangled representations for counterfactual regression; instead, their method [like (Shalit et al., 2017)] learns a common representation φ that can represent only the confounders, but not the other factors. Our approach extends theirs by providing an algorithm that can learn disentangled representations of the underlying factors from observational datasets.
These weights

ωi = 1 + Pr(φi | ¬ti) / Pr(φi | ti) = 1 + [ Pr(ti) / (1 − Pr(ti)) ] · [ (1 − π(ti | φi)) / π(ti | φi) ]   (2)

are designed to enhance the performance of estimating both factual as well as counterfactual outcomes (by the 1 and Pr(φ | ¬t)/Pr(φ | t) terms, respectively), where π(ti | φi) is the probability of assigning the observed ti conditioned on the learned context φi.

Note that both (Shalit et al., 2017) and (Hassanpour & Greiner, 2019) use Φ to model the concatenation of factors ∆ and Υ (see Figure 3). Although it does make sense that there should be no discrepancy between conditional distributions of Υ, the ∆ factor should model the confounding factors, which, by definition, must embed some information about treatment assignment. This would result in a positive discrepancy between conditional distributions of ∆ that should not be minimized. Thus, minimizing Equation (1) with respect to Φ can lead to problematic results, as it discards some of the confounders.

Yao et al. (2018) proposed the Similarity preserved Individual Treatment Effect (SITE) method, which extends Shalit et al. (2017)'s framework by adding a local similarity preserving component. This component acts as a regularization term that attempts to retain the same neighbourhood relationships in the learned representation space as exhibited in the original space, by matching the propensity scores Pr(t=1 | x) and Pr(t=1 | φ). This, however, results in learning sub-optimal representations when Γ ≠ ∅, as SITE tries to keep instances whose Γs are far apart, also far apart in φ. In other words, this component penalizes reducing selection bias in φ by not discarding the irrelevant information present in Γ, even when it does not hurt the outcome estimation at all.

Our work has many similarities to (Kuang et al., 2017), who decomposed X into two subsets: confounding and adjustment variables, which are similar to our ∆ and Υ factors respectively. They then used an optimization algorithm for identifying these variables, to ultimately find an unbiased estimate of the Average Treatment Effect (ATE). We extend their work in three ways: (i) In addition to confounders and adjustment variables, we also identify the factors that determine the treatment and have no effect on the outcome (i.e., Γ). (ii) Unlike (Kuang et al., 2017), who take a linear approach by tagging the raw features as either confounders or adjustment variables, our proposed method has the capacity to learn [non-linear] representations of the underlying factors. (iii) Our method facilitates estimating both ATE as well as ITE, whereas (Kuang et al., 2017) cannot provide estimates of ITEs." }, { "heading": "3 LEARNING DISENTANGLED REPRESENTATIONS", "text": "We assume, without loss of generality, that any dataset of the form {X, T, Y} is generated from three underlying factors {Γ, ∆, Υ}, as illustrated in Figure 3.³ Observe that the factor Γ (resp., Υ) partially determines only T (resp., Y), but not the other variables; and ∆ includes the confounding factors between T and Y. This graphical model suggests that selection bias is induced by factors Γ and ∆. It also shows that the outcome depends on the factors ∆ and Υ. Inspired by this graphical model, our model architecture incorporates the following components (a minimal sketch follows the list):

• Three representation learning networks; one for each underlying factor: Γ(x), ∆(x), and Υ(x).
• Two regression networks; one for each treatment arm: h0(∆(x), Υ(x)) and h1(∆(x), Υ(x)).
• Two logistic networks: π0(t | Γ(x), ∆(x)) to model the logging policy – aka behaviour policy in Reinforcement Learning; cf., (Sutton & Barto, 1998) – and π(t | ∆(x)) to design weights that account for the confounders' impact.

³ Note that the assumption of unconfoundedness still holds; here is why: Short: Observing either X or ∆ blocks the path from T to Y, which supports the unconfoundedness assumption. Long: Once the representation networks are learned from the observational data, we can compute the latent factors {Γ, ∆, Υ} from X only. Therefore, although these factors are not explicitly observed, they are effectively observed, in that they are derived directly from the observed X, and so should not be categorized as "unobserved confounders". For example, the latent factor for "zip code" in X is "socio-economic status" (perhaps in ∆). In other words, "socio-economic status" can be inferred from "zip code", which can be viewed as a proxy for it.
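As a minimal illustration of these components (a sketch assuming PyTorch; the class, layer sizes, and single-hidden-layer choice are our own illustrative assumptions, not the authors' released code):

```python
import torch
import torch.nn as nn

class DRCFR(nn.Module):
    """Three representation networks, two outcome heads, two logistic heads."""
    def __init__(self, x_dim, rep_dim=64):
        super().__init__()
        rep = lambda: nn.Sequential(nn.Linear(x_dim, rep_dim), nn.ELU())
        self.gamma, self.delta, self.upsilon = rep(), rep(), rep()  # Γ(x), Δ(x), Υ(x)
        self.h0 = nn.Linear(2 * rep_dim, 1)   # outcome head for t = 0
        self.h1 = nn.Linear(2 * rep_dim, 1)   # outcome head for t = 1
        self.pi0 = nn.Linear(2 * rep_dim, 1)  # logging policy π0(t | Γ(x), Δ(x))
        self.pi = nn.Linear(rep_dim, 1)       # weight model π(t | Δ(x))

    def forward(self, x):
        g, d, u = self.gamma(x), self.delta(x), self.upsilon(x)
        du, gd = torch.cat([d, u], -1), torch.cat([g, d], -1)
        return (self.h0(du), self.h1(du),
                torch.sigmoid(self.pi0(gd)), torch.sigmoid(self.pi(d)), u)
```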
We therefore try to minimize the following objective function:

J(Γ, ∆, Υ, h0, h1, π0) = (1/N) ∑_{i=1}^{N} ω(ti, ∆(xi)) · L[ yi, h_{ti}(∆(xi), Υ(xi)) ]   (3)
  + α · disc({Υ(xi)}_{i:ti=0}, {Υ(xi)}_{i:ti=1})   (4)
  + β · (1/N) ∑_{i=1}^{N} −log[ π0(ti | Γ(xi), ∆(xi)) ]   (5)
  + λ · Reg(Γ, ∆, Υ, h0, h1, π0)   (6)

where ω(ti, ∆(xi)) is the re-weighting function; L[ yi, h_{ti}(∆(xi), Υ(xi)) ] is the prediction loss for observed outcomes (aka factual loss); disc({Υ(x)}_{i:ti=0}, {Υ(x)}_{i:ti=1}) calculates the discrepancy between conditional distributions of Υ given t=0 versus given t=1; −log π0(·) is the cross entropy loss of predicting the assigned treatments given the learned context; and Reg(·) is the regularization term for penalizing model complexity. The following sections elaborate on each of these terms; a compact sketch of the full objective is given below.
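A compact sketch of Equations (3)–(5), assuming real-valued outcomes (an L2 factual loss) and leaving the Reg term of Eq. (6) to ordinary weight decay; all names are illustrative, and `t` is a float tensor of observed treatments:

```python
import torch.nn.functional as F

def drcfr_objective(w, y, y_hat, pi0_prob, t, ups_t0, ups_t1, disc, alpha, beta):
    factual = (w * (y - y_hat) ** 2).mean()          # Eq. (3): weighted factual loss
    imbalance = disc(ups_t0, ups_t1)                 # Eq. (4): discrepancy on Υ(x)
    cross_ent = F.binary_cross_entropy(pi0_prob, t)  # Eq. (5): logging-policy loss
    return factual + alpha * imbalance + beta * cross_ent
```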
3.1 FACTUAL LOSS: L[y, h_t(∆(x), Υ(x))]

Similar to (Johansson et al., 2016; Shalit et al., 2017; Hassanpour & Greiner, 2019; Yao et al., 2018), we train two regression networks h0 and h1, one for each treatment arm. As guided by the graphical model in Figure 3, the inputs to these regression networks are the outputs of the ∆(x) and Υ(x) representation networks, and their outputs are the predicted outcomes for their respective treatments.

Note that the prediction loss L can only be calculated on the observed outcomes (hence the name factual loss), as counterfactual outcomes are not available in any training set. This would be an L2-loss for real-valued outcomes and a log-loss for binary outcomes. By minimizing the factual loss, we ensure that the union of the learned representations ∆(x) and Υ(x) retains enough information needed for accurate estimation of the observed outcomes.

3.2 RE-WEIGHTING FUNCTION: ω(t, ∆(x))

We follow (Hassanpour & Greiner, 2019)'s design for weights as re-stated in Equation (2), with the modification that we employ ∆ to calculate the weights instead of Φ. Although following the same design, we anticipate our weights should perform better in practice than those in (Hassanpour & Greiner, 2019) because: (i) no confounders are discarded due to minimizing the imbalance loss (because our disc is defined based on Υ, not Φ); and (ii) only the legitimate confounders are used to derive the weights (i.e., ∆), not the ones that have not contributed to treatment selection (i.e., Υ).

Notably, the weights design in Equation (2) is different from the common practice in re-weighting techniques (e.g., IPW) in that the weights are calculated based on all factors that determine T (i.e., Γ as well as ∆). However, we argue that incorporation of Γ in the weights might result in emphasizing the wrong instances. In other words, since the factual loss L is only sensitive to factors ∆ and Υ, and not Γ, re-weighting L according to Γ would yield a wrong objective function to be optimized.

3.3 IMBALANCE LOSS: disc({Υ(xi)}_{i:ti=0}, {Υ(xi)}_{i:ti=1})

According to Figure 3, Υ should be independent of T due to the collider structure at Y. Therefore,

Υ ⊥ T  ⟹  Pr(Υ | T) = Pr(Υ)  ⟹  Pr(Υ | T=0) = Pr(Υ | T=1)   (7)

We used Maximum Mean Discrepancy (MMD) (Gretton et al., 2012) to calculate the dissimilarity between the two conditional distributions of Υ given t=0 versus t=1.

By minimizing the imbalance loss, we ensure that the learned factor Υ embeds no information about T and that all the confounding factors are retained in ∆. Capturing all the confounders in ∆ and only in ∆ is the hallmark of the proposed method, as we will use it for optimal re-weighting of the factual loss term (next section). Note that this differs from Shalit et al. (2017)'s approach in that they do not distinguish between the independent factors ∆ and Υ, and minimize a loss defined on only one factor Φ, which might erroneously discard some of the confounders in ∆. A sketch of the MMD term is given below.
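A minimal sketch of an RBF-kernel MMD estimate, usable as the `disc` term above (assuming PyTorch; the kernel bandwidth is an illustrative choice, and this is the simple biased estimator):

```python
import torch

def mmd_rbf(a, b, sigma=1.0):
    """Squared MMD between {Υ(x_i)}_{i:t_i=0} (rows of a) and
    {Υ(x_i)}_{i:t_i=1} (rows of b) under an RBF kernel."""
    k = lambda x, y: torch.exp(-torch.cdist(x, y) ** 2 / (2 * sigma ** 2)).mean()
    return k(a, a) + k(b, b) - 2 * k(a, b)
```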
}, { "heading": "4.1.1 SYNTHETIC DATASETS", "text": "We generated our synthetic datasets according to the following process, which takes as input the sample size N ; dimensionalities [mΓ,m∆,mΥ] ∈ Z+(3); for each factor L ∈ {Γ,∆,Υ }, the means and covariance matrices (µL,ΣL); and a scalar ζ that determines the slope of the logistic curve.\n• For each latent factor L ∈ {Γ,∆,Υ } – Form L by drawing N instances (each of size mL) from N (µL,ΣL), – Concatenate Γ, ∆, and Υ to make the covariates matrix X [of size N×(mΓ+m∆+mΥ)] – Concatenate Γ and ∆ to make Ψ [of size N×(mΓ+m∆)] – Concatenate ∆ and Υ to make Φ [of size N×(m∆+mΥ)]\n• For treatment T : – Sample mΓ+m∆ tuple of coefficients θ from N (0, 1)mΓ+m∆ – Define the logging policy as π0( t=1 | z ) = 11+exp(−ζz) , where z = Ψ · θ – For each instance xi, sample treatment ti from the Bernoulli distribution with parameter π0( t=1 | zi )\n• For outcomes Y 0 and Y 1: – Sample m∆+mΥ tuple of coefficients ϑ0 and ϑ1 from N (0, 1)m∆+mΥ – Define y0 = (Φ◦Φ◦Φ+0.5) ·ϑ0/(m∆+mΥ)+ε and y1 = (Φ◦Φ) ·ϑ1/(m∆+mΥ)+ε,\nwhere ε is a white noise sampled from N (0, 0.1) and ◦ is the symbol for element-wise (Hadamard/Schur) product.\nWe considered all the viable datasets in a mesh generated by mΓ,m∆,mΥ ∈ {0, 4, 8}. This creates 24 scenarios4 that consider all possible situations in terms of the relative sizes of the factors Γ, ∆, and Υ. For each scenario, we synthesized five datasets with various initial random seeds." }, { "heading": "4.1.2 INFANT HEALTH AND DEVELOPMENT PROGRAM (IHDP)", "text": "The original RCT data was designed to evaluate the effect of specialist home visits on future cognitive test scores of premature infants. Hill (2011) induced selection bias by removing a non-random subset of the treated population to create a realistic observational dataset. The resulting dataset contains 747 instances (608 control, 139 treated) with 25 covariates. We run our experiments on the same benchmark (100 realizations of outcomes) provided by and used in (Johansson et al., 2016; Shalit\n4There are not 23 =27 scenarios because we removed the three tuples: (0, 0, 0), (4, 0, 0), and (8, 0, 0), as any scenario with ∆=Υ=∅ would generate outcomes that are pure noise.\n(a) Slice of the weights matrix that connects {the variables in X belonging to Γ} to {the first layer of the representation network that attempts to identify Γ}. The size of this slice is mΓ×K.\n(b) Slice of the weights matrix that connects {the variables in X not belonging to Γ} to {the first layer of the representation network that attempts to identify Γ}. The size of this slice is (m∆+mΥ)×K.\nFigure 4: Visualization of slicing the learned weights matrix in the first layer of the representation network (number of neurons: K) for identifying Γ (best viewed in color).\n(a) Identification of Γ (b) Identification of ∆ (c) Identification of Υ\nFigure 5: Radar charts that visualize the capability of DR-CFR in identifying the underlying factors Γ, ∆, and Υ. Each vertex on the polygons is identified with the factors’ dimension sequence (mΓ_m∆_mΥ) of the associated synthetic dataset. The polygons’ radii are scaled between 0:0.09 and quantify the average weights of the first slice (in dotted magenta) and the second slice (in cyan).\net al., 2017). Outcomes of this semi-synthetic benchmark were simulated according to response surfaces provided in the Non-Parametric Causal Inference (NPCI) package (Dorie, 2016)." 
}, { "heading": "4.2 RESULTS AND DISCUSSIONS", "text": "" }, { "heading": "4.2.1 EVALUATING IDENTIFICATION OF FACTORS {Γ,∆,Υ }", "text": "First, we want to determine if the proposed method is able to identify the variables that belong to each underlying factor. To do so, we look at the weight matrix in the first layer of each representation network, which is of size (mΓ+m∆+mΥ)×K, where K is the number of neurons in the first hidden layer of the respective representation network. For example, to check if Γ is identified properly, we partition the weights matrix into two slices, as shown in Figure 4, and calculate the average of each slice. The first slice [referred to as SΓ; highlighted in Figure 4(a)] pertains to “ Γ’s ground truth variables in X ” and the second slice [S¬Γ; Figure 4(b)] pertains to “variables in X that do not belong to Γ”. Constructing S∆ , S¬∆ , SΥ , and S¬Υ follow a similar procedure.\nIf the proposed method achieves a good identification, then we expect the average of the absolute values of weights in SΓ should be higher than that of S¬Γ; this same claim should hold for (S∆,S¬∆) and (SΥ,S¬Υ) as well. Note that only the relative relationships between the average weights in either of the slices matter; since this analysis is aimed at checking whether, for example, for identifying Γ, its respective representation network has indeed learned to emphasize on “Γ’s ground truth variables\nin X ” more than the other variables in X . Figure 5 illustrates the identification performance of DR-CFR according to this analysis; showing empirically that the proposed method successfully identifies all the three underlying factors, for all synthetic datasets." }, { "heading": "4.2.2 EVALUATING ESTIMATION OF TREATMENT EFFECTS", "text": "Given a synthetic dataset (that include both factual as well as counterfactual outcomes), one can evaluate treatment effect estimation methods with two types of performance measures:\n• Individual-based: “Precision in Estimation of Heterogeneous Effect” PEHE= √\n1 N ∑N i=1 (êi−ei) 2\nwhere êi = ŷ1i − ŷ0i is the predicted effect and ei = y1i − y0i is the true effect. • Population-based: “Bias of the Average Treatment Effect” ATE = ∣∣ATE − ÂTE∣∣ where ATE = 1 N ∑N i=1 y 1 i − 1N ∑N j=1 y 0 j in which y 1 i and y 0 j are the true outcomes for the respective treatments\nand ÂTE is calculated based on the estimated outcomes.\nIn this paper, we compare performances of the following treatment effect estimation methods: 5\n• CFR: CounterFactual Regression (Shalit et al., 2017). • CFR-ISW: CFR with Importance Sampling Weights (Hassanpour & Greiner, 2019). • SITE: Similarity preserved Individual Treatment Effect (Yao et al., 2018). • DR-CFR: Disentangled Representations for CFR – our proposed method.\nFigure 6 visualizes the PEHE measures in radar charts for these four methods, trained with datasets of size N=2,500 (left) and N=10,000 (right). As expected, all methods perform better with observing more training data; however, DR-CFR took the most advantage by reducing PEHE the most (by 0.15, going down from 0.60 to 0.45), while CFR, CFR-ISW, and SITE reduced PEHE by 0.07, 0.08, and 0.08 respectively.\nTable 1 summarizes the PEHE and ATE measures (lower is better) for all scenarios, in terms of mean and standard deviation of all the 24×5 datasets, in order to give a unified view on the performance.\n5Note that all four methods share the same core code-base: based on CFR (developed by Johansson et al. (2016) and Shalit et al. 
Table 1: Synthetic datasets (24×5 with N=10,000). PEHE and $\epsilon_{\mathrm{ATE}}$ are reported as "mean (standard deviation)"; lower is better.

Methods | PEHE | $\epsilon_{\mathrm{ATE}}$
CFR | 0.61 (0.05) | 0.021 (0.018)
CFR-ISW | 0.58 (0.06) | 0.017 (0.009)
SITE | 0.63 (0.05) | 0.035 (0.039)
DR-CFR | 0.45 (0.11) | 0.013 (0.006)

Table 2: IHDP datasets (100 with N=747). PEHE and $\epsilon_{\mathrm{ATE}}$ are reported as "mean (standard deviation)"; lower is better.

Methods | PEHE | $\epsilon_{\mathrm{ATE}}$
CFR | 0.81 (0.30) | 0.13 (0.12)
CFR-ISW | 0.73 (0.28) | 0.11 (0.10)
SITE | 0.73 (0.33) | 0.10 (0.09)
DR-CFR | 0.65 (0.37) | 0.03 (0.04)

DR-CFR achieves the best performance among the contending methods. These results are statistically significant based on Welch's unpaired t-test with α = 0.05. Table 2 summarizes the PEHE and $\epsilon_{\mathrm{ATE}}$ measures on the IHDP benchmark. The results are reported in terms of the mean and standard deviation over the 100 datasets with various realizations of outcomes. Again, DR-CFR achieves the best performance (statistically significant for $\epsilon_{\mathrm{ATE}}$) among the contending methods." }, { "heading": "5 FUTURE WORKS AND CONCLUSION", "text": "The majority of methods proposed to estimate treatment effects – including this work – fall under the category of discriminative approaches. A promising direction is to consider developing generative models, in an attempt to shed light on the true underlying data generating mechanism. Perhaps this could also facilitate generating new, virtual, yet realistic data instances – similar to what is done in computer vision. Louizos et al. (2017)'s method is a notable generative approach, which uses a Variational Auto-Encoder (VAE) to extract latent confounders from their observed proxies. While that work is an interesting step in that direction, it is not yet capable of addressing the problem of selection bias. We believe that our proposed perspective on the problem can be helpful in solving this open question. This is left to future work.

In this paper, we studied the problem of estimating treatment effects from observational studies. We argued that not all factors in the observed covariates X might contribute to the procedure of selecting treatment T, or, more importantly, to determining the outcomes Y. We modeled this using three underlying sources of X, T, and Y, and showed that explicit identification of these sources offers great insight to help us design models that better handle selection bias in observational datasets. We proposed an algorithm, Disentangled Representations for CounterFactual Regression (DR-CFR), that can (1) identify disentangled representations of the above-mentioned underlying sources and (2) leverage this knowledge to reduce, as well as account for, the negative impact of selection bias on estimating treatment effects from observational data. Our empirical results showed that the proposed method achieves state-of-the-art performance in both individual- and population-based evaluation measures." }, { "heading": "ACKNOWLEDGEMENTS", "text": "The authors gratefully acknowledge financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Alberta Machine Intelligence Institute (Amii). We wish to thank Dr. Pouria Ramazi and Shivam Raj for fruitful conversations, and Dr. Fredrik Johansson for publishing/maintaining the code-base for the CFR method online. We also would like to thank the ICLR 2020 anonymous reviewers, as well as Dr.
Kun Kuang and Tianle Liu, for their valuable reviews, which helped improve this paper." } ]
2020
null
SP:ec3b4ae82ca6f34505dbb909d0a705804f8eb22c
[ "This paper proposes two machine learning adaptations of the Bayesian truth serum approach to aggregating predictions from human experts. The first method proposed involves training two regression models for each classifier in the ensemble that predicts the proportion of other classifiers that assign the same label to a novel instance. The second approach is to train a binary classifier that, based on the features associated with an instance, determines whether the most common or second most common prediction made by individual ensemble members should be the prediction made by the ensemble.", "Inspired by work in ensembling human decisions, the authors propose an ensembling technique called \"Machine Truth Serum\" (based off \"Bayesian Truth Serum\"). Instead of using majority vote to ensemble the decisions of several classifiers, this paper follows the \"surprisingly popular\" algorithm; the ensembled decision is the decision whose posterior probability (based on several classifiers) most exceeds a prior probability (given by classifier(s) trained to predict the posterior predictions). It's quite a nice idea to bring this finding from human decision-making to machine learning. If it worked in machine learning, it would be quite surprising, as the surprisingly popular algorithm risks that the ensemble makes a decision against the majority vote, which is usually consider the safe/default option for ensembling." ]
Wisdom of the crowd (Surowiecki, 2005) revealed the striking fact that the majority answer from a crowd is often more accurate than any individual expert. We observe the same story in machine learning: ensemble methods (Dietterich, 2000) leverage this idea by combining multiple learning algorithms to obtain better classification performance. Among many popular examples is the celebrated Random Forest (Ho, 1995), which applies the majority voting rule in aggregating different decision trees to make the final prediction. Nonetheless, these aggregation rules would fail when the majority is more likely to be wrong. In this paper, we extend the idea proposed in Bayesian Truth Serum (Prelec, 2004), that "a surprisingly more popular answer is more likely the true answer", to classification problems. The challenge for us is to define or detect when an answer should be considered "surprising". We present two machine-learning-aided methods that aim to reveal the truth when it is the minority, instead of the majority, that holds the true answer. Our experiments over real-world datasets show that better classification performance can be obtained compared to always trusting the majority voting. Our proposed methods also outperform popular ensemble algorithms. Our approach can be generically applied as a subroutine in ensemble methods to replace the majority voting rule.
[]
[ { "authors": [ "James Bennett", "Stan Lanning" ], "title": "The netflix prize", "venue": "In Proceedings of KDD cup and workshop,", "year": 2007 }, { "authors": [ "Christopher M Bishop" ], "title": "Pattern recognition and machine learning", "venue": "springer,", "year": 2006 }, { "authors": [ "Chih-Chung Chang", "Chih-Jen Lin" ], "title": "Libsvm: A library for support vector machines", "venue": "ACM transactions on intelligent systems and technology (TIST),", "year": 2011 }, { "authors": [ "Kay-Yut Chen", "Leslie R Fine", "Bernardo A Huberman" ], "title": "Eliminating public knowledge biases in information-aggregation mechanisms", "venue": "Management Science,", "year": 2004 }, { "authors": [ "Xi Chen", "Qihang Lin", "Dengyong Zhou" ], "title": "Statistical decision making for optimal budget allocation in crowd labeling", "venue": "The Journal of Machine Learning Research,", "year": 2015 }, { "authors": [ "Thomas G Dietterich" ], "title": "Ensemble methods in machine learning", "venue": "In International workshop on multiple classifier systems,", "year": 2000 }, { "authors": [ "Yoav Freund", "Robert E Schapire" ], "title": "A decision-theoretic generalization of on-line learning and an application to boosting", "venue": "Journal of computer and system sciences,", "year": 1997 }, { "authors": [ "Pascal Germain", "Alexandre Lacasse", "Francois Laviolette", "Mario Marchand", "Jean-Francis Roy" ], "title": "Risk bounds for the majority vote: From a pac-bayesian analysis to a learning algorithm", "venue": "The Journal of Machine Learning Research,", "year": 2015 }, { "authors": [ "Tin Kam Ho" ], "title": "Random decision forests", "venue": "In Proceedings of 3rd international conference on document analysis and recognition,", "year": 1995 }, { "authors": [ "Qiang Liu", "Jian Peng", "Alexander T Ihler" ], "title": "Variational inference for crowdsourcing", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "M Granger Morgan" ], "title": "Use (and abuse) of expert elicitation in support of decision making for public policy", "venue": "Proceedings of the National Academy of Sciences,", "year": 2014 }, { "authors": [ "Bo Pang", "Lillian Lee" ], "title": "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts", "venue": "In Proceedings of the 42nd annual meeting on Association for Computational Linguistics,", "year": 2004 }, { "authors": [ "Chao-Ying Joanne Peng", "Kuk Lida Lee", "Gary M Ingersoll" ], "title": "An introduction to logistic regression analysis and reporting", "venue": "The journal of educational research,", "year": 2002 }, { "authors": [ "Dražen Prelec" ], "title": "A bayesian truth serum", "venue": "for subjective data. science,", "year": 2004 }, { "authors": [ "Dražen Prelec", "H Sebastian Seung", "John McCoy" ], "title": "A solution to the single-question crowd wisdom", "venue": "problem. 
Nature,", "year": 2017 }, { "authors": [ "Vikas C Raykar", "Shipeng Yu", "Linda H Zhao", "Gerardo Hermosillo Valadez", "Charles Florin", "Luca Bogoni", "Linda Moy" ], "title": "Learning from crowds", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Frank Rosenblatt" ], "title": "The perceptron: a probabilistic model for information storage and organization in the brain", "venue": "Psychological review,", "year": 1958 }, { "authors": [ "Joseph P Simmons", "Leif D Nelson", "Jeff Galak", "Shane Frederick" ], "title": "Intuitive biases in choice versus estimation: Implications for the wisdom of crowds", "venue": "Journal of Consumer Research,", "year": 2010 }, { "authors": [ "Push Singh", "Thomas Lin", "Erik T Mueller", "Grace Lim", "Travell Perkins", "Wan Li Zhu" ], "title": "Open mind common sense: Knowledge acquisition from the general public", "venue": "In OTM Confederated International Conferences\" On the Move to Meaningful Internet Systems\",", "year": 2002 }, { "authors": [ "Yuchen Zhang", "Xi Chen", "Dengyong Zhou", "Michael I Jordan" ], "title": "Spectral methods meet em: A provably optimal algorithm for crowdsourcing", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Dengyong Zhou", "Sumit Basu", "Yi Mao", "John C Platt" ], "title": "Learning from the wisdom of crowds by minimax entropy", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "Dengyong Zhou", "Qiang Liu", "John Platt", "Christopher Meek" ], "title": "Aggregating ordinal labels from crowds by minimax conditional entropy", "venue": "In International conference on machine learning,", "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "Wisdom of the crowd harnesses the power of aggregated opinion of a diverse group rather than a few individuals. Though initially proposed for mainly aggregating human judgements, this idea has been successfully implemented in the context of machine learning. In particular, ensemble learning was proposed and studied to improve prediction performance by combining several learning models to obtain better results compared to a single one (Dietterich, 2000). The developed ensemble techniques have shown consistent benefits in real-world machine learning applications, evidenced by the Netflix Competition (Bennett et al., 2007) and Kaggle competition. Popular ensemble methods include Boosting (e.g., AdaBoost (Freund & Schapire, 1997)), Bootstrap aggregating (bagging), Stacking (Bishop, 2006), and Random Forest (Ho, 1995).\nThe most popular, as well as simple, way to perform aggregation is via majority voting rule. The classical example is Random Forest, which outputs the majority answer from multiple trained decision trees. Inference methods (Raykar et al., 2010; Zhang et al., 2014; Liu et al., 2012; Zhou et al., 2012; 2014) have been applied to perform smarter aggregation that aims to outperform majority-voted answers. These methods often leverage homogeneous assumption of certain hidden models over a large number of data points in order to perform joint inference.\nNonetheless, all above methods rely on the assumption that the majority answer is more likely to be correct - this is also true for the more sophisticated inference models, as the inferences will mostly likely initiate based on majority-voted answers (when the algorithm has no prior information). While enjoying this assumption that majority tends to be correct, this claim is questionable in settings where special knowledge is needed to infer the truth, but it is owned by few individuals when they are not widely shared (Chen et al., 2004; Simmons et al., 2010; Prelec et al., 2017). Echoing to the above problem of aggregating human judgements, we face similar challenge when aggregating classifiers’ predictions in machine learning. For example, we have a deep learning (Goodfellow et al., 2016) classification model which performs the best among multiple models when used in the ensemble method. For some data point, the classification result of this deep learning model may be the correct minority. In this situation, applying majority voting leads to wrong answers.\nWe aim to complement the literature via studying whether we can aggregate classifiers better than majority voting even when majority opinion is wrong. We also target a method that can operate over each data point separately without assuming homogeneous assumptions across a massive dataset.\nThe question sounds unlikely to resolve at a first look, but we are inspired by the seminal work Bayesian Truth Serum (BTS) (Prelec, 2004; Prelec et al., 2017) which approached this question in the setting of incentivizing and aggregating truthful human judgements. The core idea behind BTS is simple and elegant: the correctness of an answer does not rely on its popularity, but rather whether it is “surprisingly” popular or not - here an answer that has a higher posterior (computed from reports of the crowds) than its prior is taken as being “surprisingly” popular, and should be considered as the true answer. This argument has a very intuitive Bayesian reasoning: the signal that improves over its prior is more likely to be informative. Prelec et al. 
(2017) also argued that by eliciting peer prediction information from each agent, defined as the fraction of "how many other people would agree with you", one can construct an informative prior to compare against the majority-vote posterior aggregation. BTS operates over each single question separately, without seeing a large number of similar tasks (and hence without leveraging a homogeneity assumption).

In this paper, we make a connection between these two seemingly unrelated topics, and extend the key idea of Bayesian Truth Serum to aggregating classifiers' predictions. The challenge is that we cannot elicit a belief from a classifier on "how many other classifiers would agree with it", which renders the task of computing the prior difficult. We propose two machine-learning-aided algorithms to mimic the procedure of reporting the peer prediction information, which we jointly name Machine Truth Serum (MTS). We first propose Heuristic Machine Truth Serum (HMTS). In HMTS, we pair each baseline classifier (an agent) with a regressor model, which is trained to predict the peer prediction information using a processed training dataset. With the predictions from the regressors, we can apply the idea of BTS to decide whether to adopt the minority as the answer by comparing the prior (computed using the regressors) and the posterior for each label. We then propose Discriminative Machine Truth Serum (DMTS). In DMTS, we directly train one classifier to predict whether to adopt the minority as the answer or not. As for training complexity, the training time of HMTS is linear in the number of label classes because of the extra regressors. DMTS only needs to train one additional classifier, and both its training and running time are almost the same as the basic majority voting algorithm. Therefore, our proposed methods are very practical to implement and run.

Our contributions are summarized as follows: (1) We propose Heuristic Machine Truth Serum (HMTS) and Discriminative Machine Truth Serum (DMTS) to complement ensemble methods, which can detect when the minority should be considered the final prediction instead of the majority. (2) Our experiments over 6 binary and 6 multiclass real-world classification datasets reveal promising results of our approach in improving over majority voting. Our proposed methods also outperform popular ensemble algorithms. (3) To pair with our experimental results, we also provide analytical evidence for the correctness of our proposed approaches. (4) Our approaches can be generically applied in ensemble methods to replace simple majority voting rules.

The rest of the paper is organized as follows. Section 2 introduces related work. Section 3 reviews preliminaries and BTS. Section 4 introduces our Machine Truth Serum approaches. Section 5 presents our experimental results. Section 6 concludes the paper." }, { "heading": "2 RELATED WORK", "text": "The wisdom of the crowd (Surowiecki, 2005) is often considered more accurate than a few elite individuals in applications including decision making for public policy (Morgan, 2014), answering questions on general world knowledge (Singh et al., 2002), and so on. Typical algorithms for extracting the wisdom of the crowd are based on majority voting, and on the assumption that the majority opinion is more likely to be correct (Surowiecki, 2005).
Another line of machine learning work proposes inference methods for crowdsourcing settings, including the Expectation Maximization method (Raykar et al., 2010; Zhang et al., 2014), Variational Inference (Liu et al., 2012; Chen et al., 2015), and Minimax Entropy Inference (Zhou et al., 2012; 2014), aiming to uncover the true labels from the noisy labels provided by non-expert crowdsourcing workers. Most relevant to us, (Prelec, 2004; Prelec et al., 2017) proposed the Bayesian Truth Serum method to extract the subjective judgment of a minority of experts by collecting not only people's judgements but also what percentage of the population they believe shares the same opinion.

In machine learning, ensemble methods combining multiple learning algorithms usually perform better than any single method (Dietterich, 2000). Ensemble methods consist of a rich family of algorithms; for instance, AdaBoost (Freund & Schapire, 1997) and Random Forest (Ho, 1995) are two different and commonly used ones. AdaBoost tries to optimize weighted voting outcomes, while Random Forest trains and predicts using the majority voting rule. However, these popular ensemble methods will be wrong whenever the minority holds the correct answer.

In both the setting of aggregating human judgements and that of aggregating classifiers' predictions, most works, except for (Prelec, 2004), fail when the majority opinion is likely to be wrong. However, BTS only works in the setting of aggregating human judgements, since it relies on collecting subjective judgment data. Based on the ideas proposed by Prelec (2004) and Prelec et al. (2017), we propose two machine-learning-aided algorithms to find the correct answer when it is held by the minority instead of the majority, in the setting of classifiers' predictions. As our proposed methods are machine learning algorithms, they can be trained and make predictions automatically, instead of requiring the collection of subjective judgment data as in (Prelec, 2004)." }, { "heading": "3 PRELIMINARY", "text": "In this paper, we consider both binary and multiclass classification problems. Nonetheless, for simplicity of presentation, our main exposition focuses on binary classification; a multiclass extension of our methods is presented in Section 4.3.

Suppose that we have a training dataset $D := \{(x_i, y_i)\}_{i=1}^{N}$ and a test dataset $T := \{(x_i, y_i)\}_{i=1}^{T}$, where $x_i \in \mathcal{X} \subseteq \mathbb{R}^d$ is a d-dimensional vector. We have K baseline classifiers $F := \{f_1, f_2, \dots, f_K : \mathcal{X} \to \{0, 1\}\}$ that map each feature vector to a binary classification outcome. Ensemble methods such as boosting algorithms can combine $\{f_1, f_2, \dots, f_K\}$ to get better prediction results than each single one. For instance, Random Forest first applies bootstrap aggregating to train multiple different decision trees, which corrects the overfitting problem of individual decision trees; after training, the majority rule is applied to generate the prediction.

The above dependence on the majority voting rule is ubiquitous in ensemble methods. The key assumption in using the majority rule is that the majority is more likely to be correct than random guessing. Denoting by $\mathrm{Maj}(\{f_1(x), f_2(x), \dots, f_K(x)\})$ the majority answer from the K classifiers, formally, most, if not all, methods require that $P(\mathrm{Maj}(\{f_1(x), f_2(x), \dots, f_K(x)\}) \neq y) < 0.5$. Our goal is still to construct a single aggregator $A(\{f_1, f_2, \dots, f_K\})$ that takes the classifiers' predictions on each data point as inputs and generates an accurate aggregated prediction.
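As a point of reference for the aggregation rules discussed here, the plain majority vote $\mathrm{Maj}(\cdot)$ can be written in a couple of lines (a minimal sketch; the helper name is ours):

```python
import numpy as np

def majority_vote(votes):
    """Maj({f_1(x), ..., f_K(x)}) for binary votes in {0, 1}."""
    return int(np.mean(votes) > 0.5)
```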
However, we aim to handle cases where it is possible that $P(\mathrm{Maj}(\{f_1(x), f_2(x), \dots, f_K(x)\}) \neq y) > 0.5$. The challenge is to detect when the minority population has the true answer." }, { "heading": "3.1 BAYESIAN TRUTH SERUM", "text": "(Prelec, 2004) considers the following human judgement elicitation problem: There is a set of agents denoted by $\{a_i\}_{i=1}^{K}$. The designer aims to collect a subjective judgement from each agent about an unknown event $y \in \{0, 1\}$ and to aggregate accordingly. Each agent $i$ needs to report their own predicted label $l_i \in \{0, 1\}$ for $y$, together with the fraction of other agents $p_i \in [0, 1]$ that they believe will agree with them. We also call this second piece of belief information the peer prediction information.

Denote the belief of agent $i$ as $B_i$. Then $p_i$ is defined as follows: $p_i = \mathbb{E}_{B_i}\!\left[\frac{\sum_{j \neq i} \mathbb{1}(l_j = l_i)}{K - 1}\right]$.

We, as the designer, obtain the predicted labels $\{l_i\}_{i=1}^{K}$ and the percentage information $\{p_i\}_{i=1}^{K}$ from all the agents. The posterior for each label is defined as the actual percentage of this label, which can easily be calculated from the prediction results: (for label 1)

$\mathrm{Posterior}(1) = \frac{\sum_i \mathbb{1}(l_i = 1)}{K}$ (1)

In (Prelec, 2004; Prelec et al., 2017), Prelec et al. promote the idea of using the average predicted percentage of the corresponding label as the approximation of the prior: (for label 1)

$\mathrm{Prior}(1) = \frac{\sum_{i=1}^{K} p_i^{\mathbb{1}(l_i = 1)} \cdot (1 - p_i)^{1 - \mathbb{1}(l_i = 1)}}{K}$ (2)

If $\mathrm{Posterior}(1) > \mathrm{Prior}(1)$, label 1 is taken as the surprisingly more popular answer, which should be considered the true answer $\hat{y}$, even though it might be in the minority's hands. The same rule is applied to label 0. Formally, denoting $\hat{y}$ as the aggregated answer:

$\hat{y} = \begin{cases} 1 & \text{if } \mathrm{Prior}(1) < \mathrm{Posterior}(1); \\ 0 & \text{if } \mathrm{Prior}(1) > \mathrm{Posterior}(1). \end{cases}$ (3)

The rest of the paper will focus on generalizing the above idea to aggregate classifiers' predictions."
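The surprisingly-popular decision rule of Eqns. (1)-(3) is easy to state in code. A minimal sketch follows (the function name is ours; it assumes the labels $l_i$ and the peer prediction beliefs $p_i$ are given):

```python
import numpy as np

def bts_answer(labels, peer_preds):
    """Surprisingly-popular rule: compare posterior (Eqn. 1) with prior (Eqn. 2)."""
    labels = np.asarray(labels)        # K predicted labels in {0, 1}
    p = np.asarray(peer_preds)         # K beliefs p_i in [0, 1]
    posterior_1 = np.mean(labels == 1)
    # Each agent's implied prior for label 1: p_i if it voted 1, else 1 - p_i.
    prior_1 = np.mean(np.where(labels == 1, p, 1.0 - p))
    return 1 if posterior_1 > prior_1 else 0
```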
}, { "heading": "4 MACHINE TRUTH SERUM", "text": "In this section, we introduce Machine Truth Serum (MTS). Suppose we have access to a set of baseline classifiers; each classifier can be treated as an agent. We would like to build a BTS-like aggregation method to aggregate the classifiers' predictions. The challenge is to compute the priors from the classifiers - machine-trained classifiers do not encode beliefs as human agents do, so we cannot elicit the peer prediction information from them directly. We propose two machine-learning-aided approaches to generate this peer prediction information. We first introduce the two MTS approaches for binary classification and then extend them to the multiclass case." }, { "heading": "4.1 HEURISTIC MACHINE TRUTH SERUM", "text": "We first introduce Heuristic Machine Truth Serum (HMTS). The high-level idea is to train a regression model for each classifier to predict the fraction of other classifiers that agree with its prediction on each particular data point. After obtaining the predicted labels and the predicted peer prediction information of the classifiers, we can approximate the priors using the predicted peer prediction information for each classifier, compute the average, and compare it to the posterior. In this part, HMTS for binary classification is introduced first; its multiclass extension is stated in Section 4.3.

Given the training data $D = \{(x_i, y_i)\}_{i=1}^{N}$ and multiple classifiers $\{f_j\}_{j=1}^{K}$, we first compute the j-th classifier's "belief" of the fraction of other classifiers that would "agree" with it. Denote this number by $\bar{y}_i^j$ for each training sample $(x_i, y_i)$; it can be computed as follows (note that the sum runs over the other classifiers $k \neq j$):

$\bar{y}_i^j = \frac{\sum_{k \neq j} \mathbb{1}(f_j(x_i) = f_k(x_i))}{K - 1},$ (4)

By the above, we have pre-processed the training data to obtain $D_H^j := \{(x_i, \bar{y}_i^j)\}_{i=1}^{N}$, $j = 1, \dots, K$, which serves as the training data to predict the peer prediction information of classifier $j$ (to recall, the peer prediction information is the fraction of other classifiers that classifier $j$ believes would agree with it). We then train peer prediction regression models $\{g_j\}_{j=1}^{K}$ on $D_H^j$, $j = 1, \dots, K$, respectively, to map $x_i$ to $\bar{y}_i^j$. We treat the class labels separately and first train two regression models: $g_{j,0}$ and $g_{j,1}$ are the two belief regression models of classifier $j$, trained on the examples whose predicted labels are 0 ($D_{H,0}^j := \{(x_i, \bar{y}_i^j) : f_j(x_i) = 0\}_{i=1}^{N}$) and 1 ($D_{H,1}^j := \{(x_i, \bar{y}_i^j) : f_j(x_i) = 1\}_{i=1}^{N}$), respectively. Then compute the following prior of label 1 for each $x_i$:

$g_j(x_i) = \begin{cases} g_{j,1}(x_i) & \text{if } f_j(x_i) = 1; \\ 1 - g_{j,0}(x_i) & \text{if } f_j(x_i) = 0. \end{cases}$ (5)

After obtaining these peer prediction regression models $g_j$, the prior and posterior of $(x_i, y_i) \in T$ in the test dataset are calculated by

$\mathrm{Prior}(x_i, l{=}1) := \frac{\sum_j g_j(x_i)}{K}$ (6)

$\mathrm{Posterior}(x_i, l{=}1) := \frac{\sum_j \mathbb{1}(f_j(x_i) = 1)}{K}$ (7)

Algorithm 1 Heuristic Machine Truth Serum (Binary classification)
Require:
1: Input:
2: $D = \{(x_1, y_1), \dots, (x_N, y_N)\}$: training data
3: $T = \{(x_1, y_1), \dots, (x_T, y_T)\}$: testing data
4: $F = \{f_1, \dots, f_K\}$: classifiers
Procedure:
5: Train the K classifiers (F) on the training data
6: for j = 1 to K do
7: for i = 1 to N do
8: Compute $\bar{y}_i^j$ according to Eqn. (4)
9: end for
10: Train machine beliefs $g_{j,0}$, $g_{j,1}$ on the training dataset $D_H^j := \{(x_i, \bar{y}_i^j)\}_{i=1}^{N}$
11: end for
12: for t = 1 to T do
13: Compute $\mathrm{Prior}(x_t, l{=}1)$ and $\mathrm{Posterior}(x_t, l{=}1)$ according to Eqn. (6) and Eqn. (7)
14: if $\mathrm{Prior}(x_t, l{=}1) < \mathrm{Posterior}(x_t, l{=}1)$ then
15: Output the "surprising" answer 1 as the final prediction
16: else if $\mathrm{Prior}(x_t, l{=}1) > \mathrm{Posterior}(x_t, l{=}1)$ then
17: Output the "surprising" answer 0 as the final prediction
18: end if
19: end for

If $\mathrm{Prior}(x_i, l{=}1) < \mathrm{Posterior}(x_i, l{=}1)$, the "surprising" answer 1 is considered the true answer. The decision rule is similar for label 0. The procedure is illustrated in Algorithm 1."
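A sketch of the HMTS test-time decision of Eqns. (5)-(7) for a single test point. It assumes scikit-learn-style fitted models exposing a `.predict` method; the function and variable names are ours:

```python
import numpy as np

def hmts_predict(x, classifiers, beliefs):
    """classifiers: K fitted binary classifiers f_j.
    beliefs: K pairs (g_j0, g_j1) of fitted peer-prediction regressors."""
    votes = np.array([f.predict(x[None, :])[0] for f in classifiers])
    priors = []
    for vote, (g0, g1) in zip(votes, beliefs):
        # Eqn. (5): classifier j's implied belief that label 1 is popular.
        if vote == 1:
            pj = g1.predict(x[None, :])[0]
        else:
            pj = 1.0 - g0.predict(x[None, :])[0]
        priors.append(pj)
    prior_1 = float(np.mean(priors))           # Eqn. (6)
    posterior_1 = float(np.mean(votes == 1))   # Eqn. (7)
    return 1 if prior_1 < posterior_1 else 0
```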
}, { "heading": "4.2 DISCRIMINATIVE MACHINE TRUTH SERUM", "text": "Heuristic Machine Truth Serum relies on training models to predict the peer prediction information for each classifier (which is used to compute the priors), comparing priors to posteriors, and then deciding whether to follow the minority opinion or not. We notice that this task of determining whether to follow the minority is itself a binary classification question. We can therefore utilize a classification model to directly predict, for each data point, whether the minority should be chosen as the answer.

We propose Discriminative Machine Truth Serum (DMTS). Again, DMTS for binary classification is introduced first; its multiclass extension is stated in Section 4.3. With DMTS, a new training dataset $D_D := \{(x_i, \hat{y}_i)\}_{i=1}^{N}$ about whether to consider the minority as the final answer is constructed. Each data point $(x_i, \hat{y}_i)$, for $i = 1, \dots, N$, in this new training dataset is computed as follows: for each $(x_i, y_i) \in D$,

$\hat{y}_i = \begin{cases} 1 & \text{if the majority vote of } F \text{ on } x_i \text{ is different from the true label}; \\ 0 & \text{if the majority vote of } F \text{ on } x_i \text{ is the same as the true label}. \end{cases}$ (8)

Now, with the above preparation, predicting whether the majority is correct or not becomes a standard classification problem on $D_D := \{(x_i, \hat{y}_i)\}_{i=1}^{N}$. This is readily solvable by applying standard techniques. In our experiments, we mainly use a Multi-Layer Perceptron (MLP) (Goodfellow et al., 2016), denoted $f$. $f$ is trained on this new training dataset and can directly predict whether we should adopt the minority as the answer. $f$ is not restricted to an MLP and can be any other classifier; we have tried several other methods, such as logistic regression, and obtained similar conclusions. The procedure is further illustrated in Algorithm 2 in Appendix A.3." }, { "heading": "4.3 MULTICLASS EXTENSION OF HMTS AND DMTS", "text": "HMTS and DMTS can be extended to the multiclass classification problem with the same ideas, by modifying them accordingly. In the multiclass case, $l \in C = \{0, 1, \dots, L\}$ denotes the class label of the dataset. Consider HMTS first. For each classifier $j$, we need to consider regression models $\{g_{j,l}\}$ for the different class labels $l \in C = \{0, 1, \dots, L\}$; $g_{j,l}$ is the belief regression model of classifier $j$ trained on the examples whose predicted label is $l$.

Again compute the following prior for each $x_i$:

$g^{*}_{j,l}(x_i) = \begin{cases} g_{j,l}(x_i) & \text{if } f_j(x_i) = l; \\ \big(1 - g_{j, f_j(x_i)}(x_i)\big) \cdot \mathrm{ratio}_l & \text{if } f_j(x_i) \neq l, \end{cases}$ (9)

where $\mathrm{ratio}_l = \frac{g_{j,l}(x_i)}{\sum_{c \in C : c \neq f_j(x_i)} g_{j,c}(x_i)}$ is defined as the ratio of label $l$'s belief to the sum of all the other classes' beliefs, excluding the class predicted under the majority rule.

In HMTS, Eqn. (6) and (7) are modified to the following:

$\mathrm{Prior}(x_i, l{=}c) := \frac{\sum_{j=1}^{K} g^{*}_{j,c}(x_i)}{K}$ (10)

$\mathrm{Posterior}(x_i, l{=}c) := \frac{\sum_{j=1}^{K} \mathbb{1}(f_j(x_i) = c)}{K}$ (11)

We then compute the priors and posteriors of every class label based on Eqn. (10) and (11). It is possible that more than one class label has a posterior larger than its prior. We define the set containing all these label classes as

$C_{\mathrm{sat}} = \{c \mid \mathrm{Prior}(x_i, l{=}c) < \mathrm{Posterior}(x_i, l{=}c),\ c \in C\}.$

We predict the class label with the biggest improvement from its prior to its posterior:

$\arg\max_{c \in C_{\mathrm{sat}}} \big|\mathrm{Posterior}(x_i, l{=}c) - \mathrm{Prior}(x_i, l{=}c)\big|.$

In DMTS, we first train a model that decides whether to apply the minority as the final answer, very similarly to the binary case. The difference is that we then choose a minority answer as the predicted answer, instead of using the majority, if (i) it has the most votes among the minority answers and (ii) the prediction of the classifier obtained in the training phase is 1 (i.e., we should use the minority)."
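For the binary DMTS of Section 4.2, the training-set construction of Eqn. (8) and the test-time flipping rule can be sketched as below. The MLP architecture (one hidden layer of 32 units) is an illustrative assumption, not the paper's reported configuration:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_dmts(X, y, classifiers):
    """Build the DMTS labels of Eqn. (8) and fit a classifier on them."""
    votes = np.stack([f.predict(X) for f in classifiers])   # shape (K, N)
    majority = (votes.mean(axis=0) > 0.5).astype(int)       # majority vote
    y_hat = (majority != y).astype(int)                     # Eqn. (8)
    return MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y_hat)

def dmts_predict(X, classifiers, dmts_model):
    """Flip to the minority answer wherever DMTS predicts the majority is wrong."""
    votes = np.stack([f.predict(X) for f in classifiers])
    majority = (votes.mean(axis=0) > 0.5).astype(int)
    flip = dmts_model.predict(X).astype(bool)
    return np.where(flip, 1 - majority, majority)
```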
}, { "heading": "4.4 THEORETICAL ANALYSIS", "text": "We conduct a formal analysis of the correctness of our proposed algorithms. Not surprisingly, the key ideas of the proofs are adapted from the proof for BTS (Prelec et al., 2017). For simplicity, we only present the theorems for binary classification; the proofs of the multiclass versions are similar to the binary case. The details of the proofs are left to Appendix A.1.

To set up the theorems, we restate our problem: we assume that each $x_i$ can take any value in the discrete set $\{s_1, \dots, s_m\}$, for simplicity of the proof. In practice, each feature vector can conceptually be represented by an assigned (large enough) categorical number; one can consider $s_k$ ($k = 1, 2, \dots, m$) as a code for each feature vector. The proof for continuous values can be deduced similarly.

Here we have two worlds $w_o^i$ ($o = 0$ or $1$) of different class labels for any $x_i$; one world is actual, the other is counterfactual. If we say $w_1^i$ is the actual world for $x_i$, it means the predicted answer for $x_i$ in this world is 1 and $y = 1$ is also the ground-truth label of $x_i$; $w_0^i$ is then the counterfactual world, in which the predicted answer for $x_i$ is 0. In this paper, we consider infinite samples. While finite samples are the practical setting, it is important to first analyze and draw conclusions in the ideal infinite-sample case.

Theorem 4.1. No algorithm exists for deducing the correct classification answer relying exclusively on the feature vector distribution of the true class label, $P(s_k \mid y_{o^*})$, $k = 1, \dots, m$, and the correctly computed posterior distribution over all possible classification labels given feature vectors, $P(y_o \mid s_k)$, $k = 1, \dots, m$, $o = 0, 1$, for any $x_i$. Here $o^*$ is the true class label.

Theorem 4.2. For any $x_i$, the average estimate of the prior prediction for the correct classification answer will be underestimated if not every classifier provides the correct classification prediction. Therefore, the minority should be the final answer instead of the majority if the prior (estimated prediction) is less than the posterior.

Theorem 1 indicates that methods relying exclusively on posterior probabilities, such as majority voting, cannot infer the true answer all of the time. In Theorem 2, the posterior and prior are the prediction distributions of the other classifiers for each classifier - both are provided by our proposed MTS algorithms.

Complexity. For HMTS, for example in our experiments, another $15 \cdot (L + 1)$ simple regressors (for label classes $\{0, 1, \dots, L\}$) are trained to predict others' beliefs, on top of the 15 baseline classifiers, so the total training time is linear in the number of label classes. After training the extra regressors, running the algorithm only requires taking $L + 1$ averages (each over 15 of the $15 \cdot (L + 1)$ regressors) and comparing them with the average posteriors. DMTS only needs to train one additional classifier on top of the 15 classifiers, and both its training and running time are almost the same as the basic majority voting algorithm. The above complexity analysis shows that our methods are very practical." }, { "heading": "5 EXPERIMENTS", "text": "In this section, we present our experimental results. In particular, we test our two proposed MTS algorithms on 6 binary and 6 multiclass real-world classification datasets. Experimental results show that consistently better classification accuracy can be obtained compared to always trusting the majority voting outcomes." }, { "heading": "5.1 DATASETS", "text": "In this section, 6 binary and 6 multiclass classification benchmark datasets (Pang & Lee, 2004) are used to conduct the experiments. The statistical information of these datasets is described in Table 4 in Appendix A.2. In this paper, each of the datasets we used has a small size - we chose to focus on the small-data regime, where the classifiers are likely to make mistakes. This is a better fit to our setting, in which the majority opinion can be wrong with a good chance. For the splitting of training and testing, we used the original setting for the datasets that provide separate training and testing files. For the other datasets, only one data file is given; for the statistical significance of the testing results, more data is allocated to the test set and a 50/50 training/testing split is used."
}, { "heading": "5.2 EXPERIMENTAL SETUP AND RESULTS", "text": "In our binary classification experiments, we consider 5 commonly used binary classification algorithms which are Perceptron (Rosenblatt, 1958), Logistic Regression (LR) (Peng et al., 2002), Random Forest (RF), Support Vector Machine (SVM) (Chang & Lin, 2011), and MLP. In order to test the usefulness of our methods, we experiment with a noisy environment - we flipped the true class label with three noisy rates to construct three binary classifiers for each of the 5 methods which have mediocre performance on the test datasets. We wanted to diversify our classifiers by introducing different noisy rates (varying the data distribution). Our experiments used 0.06, 0.08, and 0.1 (probability of flipping the label) for each family of classifier. We also tried other values such as 0.1, 0.2, and 0.3, and we reached similar conclusions. In total, 15 different classifiers are obtained as the baseline classifiers.\nThe experimental results on the 6 binary classification datasets are reported in Table 1. From these results, we observe that Heuristics Machine Truth Serum (HMTS) tends to have more robust and better performances than Discriminative Machine Truth Serum (DMTS) in most datasets, especially in the small-size datasets. These can be explained by the fact DMTS itself is a MLP classifier which\nneeds a larger size of data to get good results. That HMTS can improve the classification accuracy in the small size of dataset is particularly useful in some fields such as healthcare in which collecting data is very time-consuming and expensive. As for the running time, DMTS is faster than HMTS as HMTS needs to compute the peer prediction results of all the 15 classifiers and DMTS only predicts once.\nWe also tested our extension to multi-class classification problems. Experimental results on 6 multiclass classification datasets are reported in Table 2. We observe that HMTS and DMTS obtained similarly good performance in the accuracy metric because the size of multi-class classification datasets is larger and the MLP of DMTS can perform better than the binary case.\nFinally, we compare between several popular ensemble algorithms and our proposed approaches. We list the testing accuracy for Adaboost with 15 decision tree base estimators, Random Forest with 15 decision trees, Weighted Majority(Germain et al., 2015), Stacking with the same setting of 15 classifiers utilized in our two MTS algorithms and Logistic Regression or SVM as meta classifier, HMTS, and DMTS for all 12 datasets in Table 3. As shown in the table, HMTS and DMTS outperform Adaboost, Random Forest, Weighted Majority, and Stacking in 8 datasets and are very close to the best in 3 datasets. An outlier dataset is Wall-Following and we found that decision tree based methods can get perfect performance on it. Compared to other weighted methods, we’d like to note that our aggregation operates on each single task separately - this means that our method will be more robust when the difficulty levels of tasks differ drastically in the dataset. None of the other weighted methods (with fixed and learned weights) has this feature. We also find that our method is robust to a smaller number of classifiers, in contrast to, say Adaboosting." }, { "heading": "6 DISCUSSION AND CONCLUDING REMARKS", "text": "This paper proposes two machine learning aided methods HMTS and DMTS to detect when the minority should be the final answer instead of majority. 
}, { "heading": "6 DISCUSSION AND CONCLUDING REMARKS", "text": "This paper proposes two machine-learning-aided methods, HMTS and DMTS, to detect when the minority should be the final answer instead of the majority. Our experiments over 6 binary and 6 multiclass real-world datasets show that better classification performance can be obtained compared to always trusting the majority vote. Our proposed methods also outperform popular ensemble algorithms on three randomly selected datasets and can be generically applied as a subroutine in ensemble methods to replace majority voting. For future work, we plan to try more types of classifiers, especially recent deep learning models, to train the belief models for the baseline classifiers, and to apply our methods to more real-world datasets." }, { "heading": "A APPENDIX", "text": "A.1 PROOF OF THEOREMS IN SECTION 4.4

In this part, we provide the detailed proofs of the two theorems, which serve as the analytical evidence for the correctness of our proposed approaches. For simplicity, we only show the proof details for binary classification; the proof for multiclass classification is similar to the binary case. This proof is largely adapted from (Prelec et al., 2017); nonetheless, we reproduce the details for completeness.

Theorem A.1. No algorithm exists for inferring the correct classification answer relying exclusively on the feature vector distribution of the true class label, $P(s_k \mid y_{o^*})$, $k = 1, \dots, m$, and the correctly computed posterior distribution over all possible classification labels given feature vectors, $P(y_o \mid s_k)$, $k = 1, \dots, m$, $o = 0, 1$, for any $x_i$. Here $o^*$ is the true class label.

Proof. In this proof, for any arbitrarily selected class label, we construct a world model in which this selected class label is predicted as the answer and is also the ground-truth class label, and which also generates the given feature vector distribution of the true class label and the correctly computed posterior distribution over all possible classification labels given feature vectors.

By the statement of the theorem, $P(s_k \mid y_{o^*})$, $k = 1, \dots, m$, and $P(y_o \mid s_k)$, $k = 1, \dots, m$, $o = 0, 1$, are known, but we do not know which class label is the correct answer $y_{o^*}$. We can arbitrarily select any class label $y_o$ as the ground-truth class label. In the following, a corresponding world model $Q(s_k, y_o)$ that generates the known $P(s_k \mid y_{o^*})$ and $P(y_o \mid s_k)$ is constructed. The known quantities do not constrain the prior over the feature vector - these priors can model differences in the baseline classifiers. In particular, we can set the prior to

$Q(s_k) = \frac{P(s_k \mid y_{o^*})}{P(y_o \mid s_k)} \left( \sum_r \frac{P(s_r \mid y_{o^*})}{P(y_o \mid s_r)} \right)^{-1}, \quad k = 1, \dots, m.$

Because the posteriors in the constructed world model must equal the known posteriors, $Q(y_o \mid s_k) = P(y_o \mid s_k)$ for $k = 1, \dots, m$, we obtain the joint distribution of the answer $y_o$ and the feature vector $s_k$ in the constructed world model:

$Q(y_o, s_k) = Q(y_o \mid s_k)\, Q(s_k) = P(s_k \mid y_{o^*}) \left( \sum_r \frac{P(s_r \mid y_{o^*})}{P(y_o \mid s_r)} \right)^{-1}.$

Then we obtain the marginal distribution of $y_o$ in the constructed world by summing over $k$:

$Q(y_o) = \sum_k P(s_k \mid y_{o^*}) \left( \sum_r \frac{P(s_r \mid y_{o^*})}{P(y_o \mid s_r)} \right)^{-1} = \left( \sum_r \frac{P(s_r \mid y_{o^*})}{P(y_o \mid s_r)} \right)^{-1}.$

After obtaining the marginal distributions $Q(s_k)$ and $Q(y_o)$, and the matching posteriors $Q(y_o \mid s_k) = P(y_o \mid s_k)$ for $k = 1, \dots, m$, the feature vector distribution of the true class label in the constructed world, $Q(s_k \mid y_o)$, can be calculated by

$Q(s_k \mid y_o) = \frac{Q(y_o \mid s_k)\, Q(s_k)}{Q(y_o)} = P(s_k \mid y_{o^*}).$

Because $y_o$ was arbitrarily chosen, the theorem is proved.

Theorem 1 shows that any algorithm relying exclusively on the feature vector distribution of the true class label and the correctly computed posterior distribution over all possible classification labels given feature vectors (e.g., majority voting) cannot deduce the correct classification answer.
In the following, we consider the extra information, namely the estimates of other classifiers' prediction results. We use $P(v_o \mid s_k)$ to represent the percentage of classifiers that will predict $y_o$ given $s_k$. We also define the world classification function $W(s_k) = P(w_o \mid s_k)$. Two thresholds, $c_0$ and $c_1 = 1 - c_0$, are given to make the final classification decision. The classification rule is as follows:

$W(s_k) = \begin{cases} w_0 & \text{if } P(w_0 \mid s_k) > c_0; \\ w_1 & \text{if } P(w_1 \mid s_k) > c_1. \end{cases}$

Theorem A.2. For any $x_i$, the average estimate of the prior prediction for the correct classification answer will be underestimated if not every classifier provides the correct classification prediction.

Proof. We first prove that the actual percentage of correctly predicting classifiers for the true answer in the actual world exceeds the counterfactual world's percentage for the true answer, i.e., $P(v_{o^*} \mid w_{o^*}) > P(v_{o^*} \mid w_k)$, $k \neq o^*$. By the definition of $W(s_k)$, we get $P(w_{o^*} \mid v_{o^*}) > c_{o^*}$ and $P(w_{o^*} \mid v_k) < c_{o^*}$. Then we have $P(w_{o^*} \mid v_{o^*})\, P(v_k) > P(w_{o^*} \mid v_k)\, P(v_k)$, so

$P(w_{o^*} \mid v_{o^*}) > P(w_{o^*} \mid v_{o^*})\, P(v_{o^*}) + P(w_{o^*} \mid v_k)\, P(v_k) = P(w_{o^*})$ (12)

According to Bayes' rule, we have the following:

$\frac{P(v_{o^*} \mid w_{o^*})}{P(v_{o^*} \mid w_k)} = \frac{P(w_{o^*} \mid v_{o^*})\, P(w_k)}{P(w_k \mid v_{o^*})\, P(w_{o^*})} = \frac{P(w_{o^*} \mid v_{o^*})}{1 - P(w_{o^*} \mid v_{o^*})} \cdot \frac{1 - P(w_{o^*})}{P(w_{o^*})}$ (13)

Based on (12), the right-hand side of (13) is greater than one, so $P(v_{o^*} \mid w_{o^*}) > P(v_{o^*} \mid w_k)$, $k \neq o^*$, is proved. The estimate of the classification prediction given the feature value $s_j$ can be computed by marginalizing over the actual and counterfactual worlds: $P(v_{o^*} \mid s_j) = P(v_{o^*} \mid w_{o^*})\, P(w_{o^*} \mid s_j) + P(v_{o^*} \mid w_k)\, P(w_k \mid s_j)$. And we proved that $P(v_{o^*} \mid w_{o^*}) > P(v_{o^*} \mid w_k)$, $k \neq o^*$. Therefore, $P(v_{o^*} \mid s_j) \leq P(v_{o^*} \mid w_{o^*})$, with strict inequality unless $P(w_{o^*} \mid s_j) = 1$. Because some feature vectors lead to the strict inequality, the average estimate of the prior prediction is strictly underestimated. The theorem is proved.

A.2 DATASETS

In this section, the detailed information of the 6 binary and 6 multiclass classification datasets is described in Table 4. '#' of Inst. stands for the number of instances, '#' of Attr. stands for the number of attributes, and '%' of Maj stands for the percentage of the majority class.

A.3 ALGORITHM 2 IN SECTION 4.2

In this part, we provide the detailed algorithm description for DMTS in Algorithm 2.

Algorithm 2 Discriminative Machine Truth Serum (Binary classification)
Require:
1: Input:
2: $D = \{(x_1, y_1), \dots, (x_N, y_N)\}$: training data
3: $T = \{(x_1, y_1), \dots, (x_T, y_T)\}$: testing data
Procedure:
4: for i = 1 to N do
5: Compute $\hat{y}_i$ according to Eqn. (8)
6: end for
7: Train the DMTS classifier $f$ on the dataset $\{(x_i, \hat{y}_i)\}_{i=1}^{N}$
8: for t = 1 to T do
9: Compute the classification result $y_t := f(x_t)$
10: if $y_t = 0$ then
11: Stay with the majority answer
12: else if $y_t = 1$ then
13: Predict with the minority answer
14: end if
15: end for" } ]
2019
null
SP:0754d0dac07aed6b6672a6b0393087d90a5fe535
[ "This paper presents the « Behavior Suite for Reinforcement Learning » (bsuite), which is a set of RL tasks (called « experiments ») meant to evaluate an algorithm’s ability to solve various key challenges in RL. Importantly, these experiments are designed to run fast enough that one can benchmark a new algorithm within a reasonable amount of time (and money). They can thus be seen as a « test suite » for RL, limited to small toy problems but very useful to efficiently debug RL algorithms and get an overview of some of their key properties. The paper describes the motivation behind bsuite, shows detailed results from some classical RL algorithms on a couple of experiments, and gives a high-level overview of how the code is structured.", "In this paper, the authors propose a set of benchmarks for evaluating different aspects of reinforcement learning algorithms such as generalisation, exploration, and memory. The aim is to provide a set of simple environments to better understand the RL algorithms and also to provide a set of scores that summarise the performance in each respect. The code of the benchmark is also released." ]
This paper introduces the Behaviour Suite for Reinforcement Learning, or bsuite for short. bsuite is a collection of carefully-designed experiments that investigate core capabilities of reinforcement learning (RL) agents with two objectives. First, to collect clear, informative and scalable problems that capture key issues in the design of general and efficient learning algorithms. Second, to study agent behaviour through their performance on these shared benchmarks. To complement this effort, we open source github.com/deepmind/bsuite, which automates evaluation and analysis of any agent on bsuite. This library facilitates reproducible and accessible research on the core issues in RL, and ultimately the design of superior learning algorithms. Our code is Python, and easy to use within existing projects. We include examples with OpenAI Baselines, Dopamine as well as new reference implementations. Going forward, we hope to incorporate more excellent experiments from the research community, and commit to a periodic review of bsuite from a committee of prominent researchers.
[ { "affiliations": [], "name": "Ian Osband" }, { "affiliations": [], "name": "Yotam Doron" }, { "affiliations": [], "name": "Matteo Hessel" }, { "affiliations": [], "name": "John Aslanides" }, { "affiliations": [], "name": "Eren Sezener" }, { "affiliations": [], "name": "Andre Saraiva" }, { "affiliations": [], "name": "Katrina McKinney" }, { "affiliations": [], "name": "Tor Lattimore" }, { "affiliations": [], "name": "Csaba Szepesvari" }, { "affiliations": [], "name": "Satinder Singh" }, { "affiliations": [], "name": "Benjamin Van Roy" }, { "affiliations": [], "name": "Richard Sutton" }, { "affiliations": [], "name": "David Silver" }, { "affiliations": [], "name": "Hado Van Hasselt" } ]
[ { "authors": [ "Mart́ın Abadi" ], "title": "TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org", "venue": null, "year": 2015 }, { "authors": [ "Mohammad Gheshlaghi Azar", "Ian Osband", "Rémi Munos" ], "title": "Minimax regret bounds for reinforcement learning", "venue": "In Proc. of ICML,", "year": 2017 }, { "authors": [ "Peter L Bartlett", "Dylan J Foster", "Matus J Telgarsky" ], "title": "Spectrally-normalized margin bounds for neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Andrew G Barto", "Richard S Sutton", "Charles W Anderson" ], "title": "Neuronlike adaptive elements that can solve difficult learning control problems", "venue": "IEEE transactions on systems, man, and cybernetics,", "year": 1983 }, { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Siyuan Ma", "Soumik Mandal" ], "title": "Reconciling modern machine learning and the bias-variance trade-off", "venue": "arXiv preprint arXiv:1812.11118,", "year": 2018 }, { "authors": [ "Marc G Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The Arcade Learning Environment: An Evaluation Platform for General Agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ "Léon Bottou" ], "title": "Large-scale machine learning with stochastic gradient descent", "venue": "In Proceedings of COMPSTAT’2010,", "year": 2010 }, { "authors": [ "Pablo Samuel Castro", "Subhodeep Moitra", "Carles Gelada", "Saurabh Kumar", "Marc G. Bellemare" ], "title": "Dopamine: A Research Framework for Deep Reinforcement Learning. 2018", "venue": "URL http://arxiv. org/abs/1812.06110", "year": 2018 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Thomas G Dietterich" ], "title": "Hierarchical reinforcement learning with the maxq value function decomposition", "venue": "Journal of artificial intelligence research,", "year": 2000 }, { "authors": [ "Yan Duan", "Xi Chen", "Rein Houthooft", "John Schulman", "Pieter Abbeel" ], "title": "Benchmarking deep reinforcement learning for continuous control", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Richard Evans", "Jim Gao" ], "title": "Deepmind AI reduces google data centre cooling bill by 40 https: //deepmind.com/blog/deepmind-ai-reduces-google-data-centre-cooling-bill-40/, 2016", "venue": null, "year": 2016 }, { "authors": [ "Kunihiko Fukushima" ], "title": "Neural network model for a mechanism of pattern recognition unaffected by shift in position-neocognitron", "venue": "IEICE Technical Report, A,", "year": 1979 }, { "authors": [ "John C Gittins" ], "title": "Bandit processes and dynamic allocation indices", "venue": "Journal of the Royal Statistical Society: Series B (Methodological),", "year": 1979 }, { "authors": [ "Peter Henderson", "Riashat Islam", "Philip Bachman", "Joelle Pineau", "Doina Precup", "David Meger" ], "title": "Deep reinforcement learning that matters", "venue": "CoRR, abs/1709.06560,", "year": 2017 }, { "authors": [ "Alexey Grigorevich Ivakhnenko" ], "title": "The group method of data of handling; a rival of the method of stochastic approximation", "venue": "Soviet Automatic Control,", "year": 1968 }, { "authors": [ "Thomas 
Jaksch", "Ronald Ortner", "Peter Auer" ], "title": "Near-optimal regret bounds for reinforcement learning", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Kenji Kawaguchi" ], "title": "Deep learning without poor local minima", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "M. Kearns", "S. Singh" ], "title": "Near-optimal reinforcement learning in polynomial time", "venue": "Machine Learning,", "year": 2002 }, { "authors": [ "Jeannette Kiefer", "Jacob Wolfowitz" ], "title": "Stochastic estimation of the maximum of a regression function", "venue": null, "year": 1952 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2012 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Shane Legg", "Marcus Hutter" ], "title": "A collection of definitions of intelligence", "venue": "Frontiers in Artificial Intelligence and applications,", "year": 2007 }, { "authors": [ "Kurt Lewin" ], "title": "Psychology and the process of group living", "venue": "The Journal of Social Psychology,", "year": 1943 }, { "authors": [ "Xiuyuan Lu", "Benjamin Van Roy" ], "title": "Ensemble sampling", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Marlos C Machado", "Marc G Bellemare", "Erik Talvitie", "Joel Veness", "Matthew Hausknecht", "Michael Bowling" ], "title": "Revisiting the Arcade Learning Environment: Evaluation protocols and open problems for general agents", "venue": "arXiv preprint arXiv:1709.06009,", "year": 2017 }, { "authors": [ "Marvin Minsky" ], "title": "Steps towards artificial intelligence", "venue": "Proceedings of the IRE,", "year": 1961 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level Control through Deep Reinforcement Learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In Proc. 
of ICML,", "year": 2016 }, { "authors": [ "Andrew William Moore" ], "title": "Efficient memory-based learning for robot control", "venue": null, "year": 1990 }, { "authors": [ "Arun Nair", "Praveen Srinivasan", "Sam Blackwell", "Cagdas Alcicek", "Rory Fearon" ], "title": "Massively Parallel Methods for Deep Reinforcement Learning", "venue": "In ICML Workshop on Deep Learning,", "year": 2015 }, { "authors": [ "John O’Keefe", "Jonathan Dostrovsky" ], "title": "The hippocampus as a spatial map: preliminary evidence from unit activity in the freely-moving rat", "venue": "Brain research,", "year": 1971 }, { "authors": [ "Ian Osband", "Charles Blundell", "Alexander Pritzel", "Benjamin Van Roy" ], "title": "Deep exploration via bootstrapped DQN", "venue": "In Advances In Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Ian Osband", "Daniel Russo", "Zheng Wen", "Benjamin Van Roy" ], "title": "Deep exploration via randomized value functions", "venue": "arXiv preprint arXiv:1703.07608,", "year": 2017 }, { "authors": [ "Ian Osband", "John Aslanides", "Albin Cassirer" ], "title": "Randomized prior functions for deep reinforcement learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Ian Osband", "Yotam Doron", "Matteo Hessel", "John Aslanides", "Eren Sezener", "Andre Saraiva", "Katrina McKinney", "Tor Lattimore", "Csaba Szepesvari", "Satinder Singh", "Benjamin Van Roy", "Richard Sutton", "David Silver", "Hado Van Hasselt" ], "title": "Behaviour suite for reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Fernando Pérez", "Brian E. Granger" ], "title": "IPython: a system for interactive scientific computing", "venue": "Computing in Science and Engineering,", "year": 2007 }, { "authors": [ "Frank Rosenblatt" ], "title": "The perceptron: a probabilistic model for information storage and organization in the brain", "venue": "Psychological review,", "year": 1958 }, { "authors": [ "AL Samuel" ], "title": "Some studies oin machine learning using the game of checkers", "venue": "IBM Journal of Researchand Development,", "year": 1959 }, { "authors": [ "David Silver", "Aja Huang", "Chris J Maddison", "Arthur Guez", "Laurent Sifre", "George Van Den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot" ], "title": "Mastering the game of Go with deep neural networks and tree", "venue": "search. Nature,", "year": 2016 }, { "authors": [ "David Silver", "Thomas Hubert", "Julian Schrittwieser", "Ioannis Antonoglou", "Matthew Lai", "Arthur Guez", "Marc Lanctot", "Laurent Sifre", "Dharshan Kumaran", "Thore Graepel", "Timothy Lillicrap", "Karen Simonyan", "Demis Hassabis" ], "title": "A general reinforcement learning algorithm that masters chess, shogi, and go through self-play", "venue": null, "year": 2018 }, { "authors": [ "Alexander L Strehl", "Michael L Littman" ], "title": "An analysis of model-based interval estimation for markov decision processes", "venue": "Journal of Computer and System Sciences,", "year": 2008 }, { "authors": [ "Richard Sutton", "Andrew Barto" ], "title": "Reinforcement Learning: An Introduction", "venue": null, "year": 2017 }, { "authors": [ "R.S. 
Sutton" ], "title": "Learning to predict by the methods of temporal differences", "venue": "Machine learning,", "year": 1988 }, { "authors": [ "Brian Tanner", "Adam White" ], "title": "RL-Glue: Language-independent software for reinforcement-learning experiments", "venue": "Journal of Machine Learning Research,", "year": 2009 }, { "authors": [ "Gerald Tesauro" ], "title": "Temporal difference learning and TD-Gammon", "venue": "Communications of the ACM,", "year": 1995 }, { "authors": [ "T. Tieleman", "G. Hinton" ], "title": "Lecture 6.5—RmsProp: Divide the gradient by a running average of its recent magnitude", "venue": "COURSERA: Neural Networks for Machine Learning,", "year": 2012 }, { "authors": [ "Hado van Hasselt", "Arthur Guez", "David Silver" ], "title": "Deep Reinforcement Learning with Double Q-Learning", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Oriol Vinyals", "Igor Babuschkin", "Junyoung Chung", "Michael Mathieu", "Max Jaderberg" ], "title": "AlphaStar: Mastering the Real-Time Strategy Game StarCraft II. https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/, 2019", "venue": null, "year": 2019 }, { "authors": [ "Tingwu Wang", "Xuchan Bao", "Ignasi Clavera", "Jerrick Hoang", "Yeming Wen", "Eric Langlois", "Shunshi Zhang", "Guodong Zhang", "Pieter Abbeel", "Jimmy Ba" ], "title": "Benchmarking model-based reinforcement learning", "venue": "URL http://arxiv.org/abs/1907.02057", "year": 2019 } ]
[ { "heading": "1 Introduction", "text": "The reinforcement learning (RL) problem describes an agent interacting with an environment with the goal of maximizing cumulative reward through time (Sutton & Barto, 2017). Unlike other branches of control, the dynamics of the environment are not fully known to the agent, but can be learned through experience. Unlike other branches of statistics and machine learning, an RL agent must consider the effects of its actions upon future experience. An efficient RL agent must address three challenges simultaneously:\n1. Generalization: be able to learn efficiently from data it collects. 2. Exploration: prioritize the right experience to learn from. 3. Long-term consequences: consider effects beyond a single timestep.\nThe great promise of reinforcement learning is agents that can learn to solve a wide range of important problems. According to some definitions, an agent that can learn to perform at or above human level across a wide variety of tasks is an artificial general intelligence (AGI) (Minsky, 1961; Legg et al., 2007). Interest in artificial intelligence has undergone a resurgence in recent years. Part of this interest is driven by the constant stream of innovation and success on high profile challenges previously deemed impossible for computer systems. Improvements in image recognition are a clear example of these accomplishments, progressing from individual digit recognition (LeCun et al., 1998), to mastering ImageNet in only a few years (Deng et al., 2009; Krizhevsky et al., 2012). The advances in RL systems have been similarly impressive: from checkers (Samuel, 1959), to Backgammon (Tesauro, 1995), to Atari games (Mnih et al., 2015a), to competing with professional players at DOTA (Pachocki et al., 2019) or StarCraft (Vinyals et al., 2019) and beating world champions at Go (Silver et al., 2016). Outside of playing games, decision systems are increasingly guided by AI systems (Evans & Gao, 2016).\nAs we look towards the next great challenges for RL and AI, we need to understand our systems better (Henderson et al., 2017). This includes the scalability of our RL algorithms, the environments where we expect them to perform well, and the key issues outstanding in the design of a general intelligence system. We have the existence proof that a single self-learning RL agent can master the game of Go purely from self-play (Silver et al., 2018). We do not have a clear picture of whether such a learning algorithm will perform well at driving a car, or managing a power plant. If we want to take the next leaps forward, we need to continue to enhance our understanding." }, { "heading": "1.1 Practical theory often lags practical algorithms", "text": "The practical success of RL algorithms has built upon a base of theory including gradient descent (Bottou, 2010), temporal difference learning (Sutton, 1988) and other foundational algorithms. Good theory provides insight into our algorithms beyond the particular, and a route towards general improvements beyond ad-hoc tinkering. As the psychologist Kurt Lewin said, ‘there is nothing as practical as good theory’ (Lewin, 1943). If we hope to use RL to tackle important problems, then we must continue to solidify these foundations. This need is particularly clear for RL with nonlinear function approximation, or ‘deep RL’. At the same time, theory often lags practice, particularly in difficult problems. 
We should not avoid practical progress that can be made before we reach a full theoretical understanding. The successful development of algorithms and theory typically moves in tandem, with each side enriched by the insights of the other. The evolution of neural network research, or deep learning, provides a poignant illustration of how theory and practice can develop together (LeCun et al., 2015). Many of the key ideas for deep learning have been around, and with successful demonstrations, for many years before the modern deep learning explosion (Rosenblatt, 1958; Ivakhnenko, 1968; Fukushima, 1979). However, most of these techniques remained outside the scope of developed learning theory, partly due to their complex and non-convex loss functions. Much of the field turned away from these techniques in a ‘neural network winter’, focusing instead on function approximation under convex loss (Cortes & Vapnik, 1995). These convex methods were almost completely dominant until the emergence of benchmark problems, mostly for image recognition, where deep learning methods were able to clearly and objectively demonstrate their superiority (LeCun et al., 1998; Krizhevsky et al., 2012). It is only now, several years after these high profile successes, that learning theory has begun to turn its attention back to deep learning (Kawaguchi, 2016; Bartlett et al., 2017; Belkin et al., 2018). The current theory of deep RL is still in its infancy. In the absence of a comprehensive theory, the community needs principled benchmarks that help to develop an understanding of the strengths and weaknesses of our algorithms." }, { "heading": "1.2 An ‘MNIST’ for reinforcement learning", "text": "In this paper we introduce the Behaviour Suite for Reinforcement Learning (or bsuite for short): a collection of experiments designed to highlight key aspects of agent scalability. Our aim is that these experiments can help provide a bridge between theory and practice, with benefits to both sides. These experiments embody fundamental issues, such as ‘exploration’ or ‘memory’, in a way that can be easily tested and iterated. For the development of theory, they force us to instantiate measurable and falsifiable hypotheses that we might later formalize into provable guarantees. While a full theory of RL may remain out of reach, the development of clear experiments that instantiate outstanding challenges for the field is a powerful driver for progress. We provide a description of the current suite of experiments and the key issues they identify in Section 2. Our work on bsuite is part of a research process, rather than a final offering. We do not claim to capture all, or even most, of the important issues in RL. Instead, we hope to provide a simple library that collects the best available experiments, and makes them easily accessible to the community. As part of an ongoing commitment, we are forming a bsuite committee that will periodically review the experiments included in the official bsuite release. We provide more details on what makes an ‘excellent’ experiment in Section 2, and on how to engage in their construction for future iterations in Section 5.\nThe Behaviour Suite for Reinforcement Learning is not a replacement for ‘grand challenge’ undertakings in artificial intelligence, or a leaderboard to climb. Instead it is a collection of diagnostic experiments designed to provide insight into key aspects of agent behaviour. 
Just as the MNIST dataset offers a clean, sanitised test of image recognition as a stepping stone to advanced computer vision, so too bsuite aims to instantiate targeted experiments for the development of key RL capabilities. The successful use of illustrative benchmark problems is not unique to machine learning, and our work is similar in spirit to the Mixed Integer Programming Library (MIPLIB) (miplib2017). In mixed integer programming, and unlike linear programming, the majority of algorithmic advances have (so far) eluded theoretical analysis. In this field, MIPLIB serves to instantiate key properties of problems (or types of problems), and evaluation on MIPLIB is a typical component of any new algorithm. We hope that bsuite can grow to perform a similar role in RL research, at least for those parts that continue to elude a unified theory of artificial intelligence. We provide guidelines for how researchers can use bsuite effectively in Section 3." }, { "heading": "1.3 Open source code, reproducible research", "text": "As part of this project we open source github.com/deepmind/bsuite, which instantiates all experiments in code and automates the evaluation and analysis of any RL agent on bsuite. This library serves to facilitate reproducible and accessible research on the core issues in reinforcement learning. It includes:\n• Canonical implementations of all experiments, as described in Section 2. • Reference implementations of several reinforcement learning algorithms. • Example usage of bsuite with alternative codebases, including ‘OpenAI Gym’. • Launch scripts for Google cloud that automate large-scale compute at low cost (at August 2019 pricing, a full bsuite evaluation for our DQN implementation cost under $6). • A ready-made bsuite Jupyter notebook with analyses for all experiments. • Automated LaTeX appendix, suitable for inclusion in a conference submission.\nWe provide more details on code and usage in Section 4. We hope the Behaviour Suite for Reinforcement Learning, and its open source code, will provide significant value to the RL research community, and help to make key conceptual issues concrete and precise. bsuite can highlight bottlenecks in general algorithms that are not amenable to hacks, and reveal properties and scalings of algorithms outside the scope of current analytical techniques. We believe this offers an avenue towards great leaps on key issues, separate to the challenges of large-scale engineering (Nair et al., 2015). Further, bsuite facilitates clear, targeted and unified experiments across different code frameworks, something that can help to remedy issues of reproducibility in RL research (Tanner & White, 2009; Henderson et al., 2017)." }, { "heading": "1.4 Related work", "text": "The Behaviour Suite for Reinforcement Learning fits into a long history of RL benchmarks. From the beginning, research into general learning algorithms has been grounded by the performance on specific environments (Sutton & Barto, 2017). At first, these environments were typically motivated by small MDPs that instantiate the general learning problem. ‘CartPole’ (Barto et al., 1983) and ‘MountainCar’ (Moore, 1990) are examples of classic benchmarks that have provided a testing ground for RL algorithm development. Similarly, when studying specific capabilities of learning algorithms, it has often been helpful to design diagnostic environments with that capability in mind. Examples of this include ‘RiverSwim’ for exploration (Strehl & Littman, 2008) or ‘Taxi’ for temporal abstraction (Dietterich, 2000). 
Performance in these environments provides a targeted signal for particular aspects of algorithm development. As the capabilities of RL algorithms have advanced, so has the complexity of the benchmark problems. The Arcade Learning Environment (ALE) has been instrumental in driving progress in deep RL through surfacing dozens of Atari 2600 games as learning environments (Bellemare et al., 2013). Similar projects have been crucial to progress in continuous control (Duan et al., 2016; Tassa et al., 2018), model-based RL (Wang et al., 2019) and even rich 3D games (Beattie et al., 2016). Performing well in these complex environments requires the integration of many core agent capabilities. We might think of these benchmarks as natural successors to ‘CartPole’ or ‘MountainCar’. The Behaviour Suite for Reinforcement Learning offers a complementary approach to existing benchmarks in RL, with several novel components: 1. bsuite experiments enforce a specific methodology for agent evaluation beyond just the environment definition. This is crucial for scientific comparisons and something that has become a major problem for many benchmark suites (Machado et al., 2017) (Section 2). 2. bsuite aims to isolate core capabilities with targeted ‘unit tests’, rather than integrate general learning ability. Other benchmarks evolve by increasing complexity; bsuite aims to remove all confounds from the core agent capabilities of interest (Section 3). 3. bsuite experiments are designed with an emphasis on scalability rather than final performance. Previous ‘unit tests’ (such as ‘Taxi’ or ‘RiverSwim’) are of fixed size; bsuite experiments are specifically designed to vary the complexity smoothly (Section 2). 4. github.com/deepmind/bsuite has an extraordinary emphasis on ease of use, and compatibility with RL agents not specifically designed for bsuite. Evaluating an agent on bsuite is practical even for agents designed for a different benchmark (Section 4)." }, { "heading": "2 Experiments", "text": "This section outlines the experiments included in the Behaviour Suite for Reinforcement Learning 2019 release. In the context of bsuite, an experiment consists of three parts:\n1. Environments: a fixed set of environments determined by some parameters. 2. Interaction: a fixed regime of agent/environment interaction (e.g. 100 episodes). 3. Analysis: a fixed procedure that maps agent behaviour to results and plots.\nOne crucial part of each bsuite analysis defines a ‘score’ that maps agent performance on the task to [0, 1]. This score allows for agent comparison ‘at a glance’; the Jupyter notebook includes further detailed analysis for each experiment. All experiments in bsuite only measure behavioural aspects of RL agents. This means that they only measure properties that can be observed in the environment, and are not internal to the agent. It is this choice that allows bsuite to easily generate and compare results across different algorithms and codebases. Researchers may still find it useful to investigate internal aspects of their agents on bsuite environments, but it is not part of the standard analysis. Every current and future bsuite experiment should target some key issue in RL. We aim for simple behavioural experiments, where agents that implement some concept well score better than those that don’t. 
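Many of the per-experiment analyses summarised in Appendix A express this score as ‘regret normalized [random, optimal] → [0,1]’. As a minimal illustration of what such a mapping can look like (a sketch of our own, not bsuite's exact implementation):

```python
import numpy as np

def normalized_regret_score(avg_regret: float,
                            regret_random: float,
                            regret_optimal: float = 0.0) -> float:
    """Linearly map an agent's average regret onto [0, 1].

    A uniformly random policy scores 0 and an optimal policy scores 1;
    anything in between is linearly interpolated, then clipped so the
    summary score always stays in [0, 1]. Assumes the random policy has
    strictly higher regret than the optimal one.
    """
    raw = (regret_random - avg_regret) / (regret_random - regret_optimal)
    return float(np.clip(raw, 0.0, 1.0))
```

Under a convention of this kind every experiment reports on the same scale, which is what makes the ‘at a glance’ comparison across agents and experiments possible.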
For an experiment to be included in bsuite it should embody five key qualities:\n• Targeted: performance in this task corresponds to a key issue in RL. • Simple: strips away confounding/confusing factors in research. • Challenging: pushes agents beyond the normal range. • Scalable: provides insight on scalability, not performance on one environment. • Fast: iteration from launch to results in under 30 min on standard CPU.\nWhere our current experiments fall short, we see this as an opportunity to improve the Behaviour Suite for Reinforcement Learning in future iterations. We can do this both through replacing experiments with improved variants, and through broadening the scope of issues that we consider. We maintain the full description of each of our experiments through the code and accompanying documentation at github.com/deepmind/bsuite. In the following subsections, we pick two bsuite experiments to review in detail: ‘memory length’ and ‘deep sea’. By presenting these experiments as examples, we can emphasize what we think makes bsuite a valuable tool for investigating core RL issues. We provide a high-level summary of all other current experiments in Appendix A.\nTo accompany our experiment descriptions, we present results and analysis comparing three baseline algorithms on bsuite: DQN (Mnih et al., 2015a), A2C (Mnih et al., 2016) and Bootstrapped DQN (Osband et al., 2016). As part of our open source effort, we include full code for these agents and more at bsuite/baselines. All plots and analysis are generated through the automated bsuite Jupyter notebook, and give a flavour for the sort of agent comparisons that are made easy by bsuite." }, { "heading": "2.1 Example experiment: memory length", "text": "Almost everyone agrees that a competent learning system requires memory, and almost everyone finds the concept of memory intuitive. Nevertheless, it can be difficult to provide a rigorous definition for memory. Even in human minds, there is evidence for distinct types of ‘memory’ handled by distinct regions of the brain (Milner et al., 1998). The assessment of memory only becomes more difficult in the context of general learning algorithms, which may differ greatly from human models of cognition. Which types of memory should we analyse? How can we inspect belief models for arbitrary learning systems? Our approach in bsuite is to sidestep these debates through simple behavioural experiments. We refer to this experiment as memory length; it is designed to test the number of sequential steps over which an agent can remember a single bit. The underlying environment is based on a stylized T-maze (O’Keefe & Dostrovsky, 1971), parameterized by a length N ∈ ℕ. Each episode lasts N steps with observation o_t = (c_t, t/N) for t = 1, .., N and action space A = {−1, +1}. The context c_1 ∼ Unif(A) and c_t = 0 for all t ≥ 2. The reward r_t = 0 for all t < N, with r_N = +1 if a_N = c_1 and −1 otherwise. For the bsuite experiment we run the agent on sizes N = 1, .., 100 exponentially spaced and look at the average regret compared to optimal after 10k episodes. The summary ‘score’ is the percentage of runs for which the average regret is less than 75% of that achieved by a uniformly random policy.\nMemory length is a good bsuite experiment because it is targeted, simple, challenging, scalable and fast. By construction, an agent that performs well on this task has mastered some use of memory over multiple timesteps. 
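To make the environment dynamics concrete, the sketch below paraphrases the memory length T-maze directly from the description above; the class name and method signatures are our own illustration, not the bsuite source.

```python
import random

class MemoryLengthSketch:
    """T-maze of length N: remember a single bit for N steps.

    Observations are o_t = (c_t, t/N) with actions in {-1, +1}. The
    context c_1 is drawn uniformly from the action set and is 0 for
    all t >= 2; only the final action is rewarded.
    """

    def __init__(self, n: int):
        self.n = n

    def reset(self):
        self.t = 1
        self.context = random.choice([-1, +1])
        return (self.context, self.t / self.n)  # o_1 reveals the bit

    def step(self, action: int):
        done = self.t == self.n
        # r_t = 0 for t < N; r_N = +1 if a_N = c_1 and -1 otherwise.
        reward = (1.0 if action == self.context else -1.0) if done else 0.0
        self.t = min(self.t + 1, self.n)
        return (0, self.t / self.n), reward, done  # c_t = 0 for t >= 2
```

An agent can only earn the maximal return by carrying the first observation across all N steps, which is exactly the behavioural notion of memory the experiment targets.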
Our summary ‘score’ provides a quick and dirty way to compare agent performance at a high level. Our sweep over different lengths N provides empirical evidence about the scaling properties of the algorithm beyond a simple pass/fail. Figure 2a gives a quick snapshot of the performance of baseline algorithms. Unsurprisingly, actor-critic with a recurrent neural network greatly outperforms the feedforward DQN and Bootstrapped DQN. Figure 2b gives us a more detailed analysis of the same underlying data. Both DQN and Bootstrapped DQN are unable to learn anything for length > 1; they lack functioning memory. A2C performs well for all N ≤ 30 and essentially at random for all N > 30, with quite a sharp cutoff. While it is not surprising that the recurrent agent outperforms feedforward architectures on a memory task, Figure 2b gives an excellent insight into the scaling properties of this architecture. In this case, we have a clear explanation for the observed performance: the RNN agent was trained via backprop-through-time with length 30. bsuite recovers an empirical evaluation of the scaling properties we would expect from theory." }, { "heading": "2.2 Example experiment: deep sea", "text": "Reinforcement learning calls for a sophisticated form of exploration called deep exploration (Osband et al., 2017). Just as an agent seeking to ‘exploit’ must consider the long-term consequences of its actions towards cumulative rewards, an agent seeking to ‘explore’ must consider how its actions can position it to learn more effectively in future timesteps. The literature on efficient exploration broadly states that only agents that perform deep exploration can expect polynomial sample complexity in learning (Kearns & Singh, 2002). This literature has focused, for the most part, on uncovering possible strategies for deep exploration through studying the tabular setting analytically (Jaksch et al., 2010; Azar et al., 2017). Our approach in bsuite is to complement this understanding through a series of behavioural experiments that highlight the need for efficient exploration. The deep sea problem is implemented as an N × N grid with a one-hot encoding for state. The agent begins each episode in the top left corner of the grid and descends one row per timestep. Each episode terminates after N steps, when the agent reaches the bottom row. In each state there is a random but fixed mapping between actions A = {0, 1} and the transitions ‘left’ and ‘right’. At each timestep there is a small cost r = −0.01/N of moving right, and r = 0 for moving left. However, should the agent transition right at every timestep of the episode it will be rewarded with an additional reward of +1. This presents a particularly challenging exploration problem for two reasons. First, following the ‘gradient’ of small intermediate rewards leads the agent away from the optimal policy. Second, a policy that explores with actions uniformly at random has probability 2^(−N) of reaching the rewarding state in any episode. For the bsuite experiment we run the agent on sizes N = 10, 12, .., 50 and look at the average regret compared to optimal after 10k episodes. The summary ‘score’ computes the percentage of runs for which the average regret drops below 0.9 faster than the 2^N episodes expected by dithering.\nDeep Sea is a good bsuite experiment because it is targeted, simple, challenging, scalable and fast. By construction, an agent that performs well on this task has mastered some key properties of deep exploration. 
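As with memory length, the dynamics fit in a few lines of code. The sketch below is our own paraphrase of the description above; for brevity it hardcodes the action-to-direction mapping that bsuite randomizes (but fixes) per state.

```python
import numpy as np

class DeepSeaSketch:
    """N x N 'deep sea' grid with a one-hot state encoding.

    The agent starts in the top-left corner and descends one row per
    timestep; moving right costs 0.01/N per step, and an extra +1 is
    paid only if the agent moved right at every step of the episode.
    """

    def __init__(self, n: int):
        self.n = n

    def reset(self):
        self.row, self.col = 0, 0
        return self._one_hot()

    def _one_hot(self):
        obs = np.zeros((self.n, self.n), dtype=np.float32)
        obs[self.row, self.col] = 1.0
        return obs

    def step(self, action: int):  # 0 = left, 1 = right (mapping fixed here)
        reward = -0.01 / self.n if action == 1 else 0.0
        self.col = min(self.col + 1, self.n - 1) if action == 1 else max(self.col - 1, 0)
        self.row = min(self.row + 1, self.n - 1)
        done = self.row == self.n - 1  # episode ends at the bottom row
        if done and self.col == self.n - 1:
            reward += 1.0  # bonus for moving right at every step
        return self._one_hot(), reward, done
```

Because a dithering policy turns right on any given step with probability 1/2, it stumbles on the bonus only with probability 2^(-N) per episode, which is what makes deep exploration necessary here.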
Our summary score provides a ‘quick and dirty’ way to compare agent performance at a high level. Our sweep over different sizes N can help to provide empirical evidence of the scaling properties of an algorithm beyond a simple pass/fail. Figure 3 presents example output comparing A2C, DQN and Bootstrapped DQN on this task. Figure 4a gives a quick snapshot of performance. As expected, only Bootstrapped DQN, which was developed for efficient exploration, scores well. Figure 4b gives a more detailed analysis of the same underlying data. When we compare the scaling of learning with problem size N it is clear that only Bootstrapped DQN scales gracefully to large problem sizes. Although our experiment was only run to size 50, the regular progression of learning times suggests we might expect this algorithm to scale towards N > 50." }, { "heading": "3 How to use bsuite", "text": "This section describes some of the ways you can use bsuite in your research and development of RL algorithms. Our aim is to present a high-level description of some research and engineering use cases, rather than a tutorial for code installation and use. We provide examples of specific investigations using bsuite in Appendices C, D and E. Section 4 provides an outline of our code and implementation. Full details and tutorials are available at github.com/deepmind/bsuite. A bsuite experiment is defined by a set of environments and number of episodes of interaction. Since loading the environment via bsuite handles the logging automatically, any agent interacting with that environment will generate the data required for analysis through the Jupyter notebook we provide (Pérez & Granger, 2007). Generating plots and analysis via the notebook only requires users to provide the path to the logged data. The ‘radar plot’ (Figure 5) at the start of the notebook provides a snapshot of agent behaviour, based on summary scores. The notebook also contains a complete description of every experiment, summary scoring and in-depth analysis of each experiment. You can interact with the full report at bit.ly/bsuite-agents.\nIf you are developing an algorithm to make progress on fundamental issues in RL, running on bsuite provides a simple way to replicate benchmark experiments in the field. Although many of these problems are ‘small’, in the sense that their solution does not necessarily require large neural architectures, they are designed to highlight key challenges in RL. Further, although these experiments do offer a summary ‘score’, the plots and analysis are designed to provide much more information than just a leaderboard ranking. By using this common code and analysis, it is easy to benchmark your agents and provide reproducible and verifiable research. If you are using RL as a tool to crack a ‘grand challenge’ in AI, such as beating a world champion at Go, then taking on bsuite gridworlds might seem like small fry. We argue that one of the most valuable uses of bsuite is as a diagnostic ‘unit-test’ for large-scale algorithm development. Imagine you believe that ‘better exploration’ is key to improving your performance on some challenge, but when you try your ‘improved’ agent, the performance does not improve. Does this mean your agent does not do good exploration? Or maybe that exploration is not the bottleneck in this problem? Worse still, these experiments might take days and thousands of dollars of compute to run, and even then the information you get might not be targeted to the key RL issues. 
Running on bsuite, you can test key capabilities of your agent and diagnose potential improvements much faster, and more cheaply. For example, you might see that your algorithm completely fails at credit assignment beyond n = 20 steps. If this is the case, maybe this lack of credit assignment over long horizons is the bottleneck and not necessarily exploration. This can allow for much faster and much better-informed agent development, just like a good suite of tests for software development. Another benefit of bsuite is to disseminate your results more easily and engage with the research community. For example, if you write a conference paper targeting some improvement to hierarchical reinforcement learning, you will likely provide some justification for your results in terms of theorems or experiments targeted to this setting. (A notable omission from the bsuite2019 release is the lack of any targeted experiments for ‘hierarchical reinforcement learning’ (HRL); we invite the community to help us curate excellent experiments that can evaluate the quality of HRL.) However, it is typically a large amount of work to evaluate your algorithm according to alternative metrics, such as exploration. This means that some fields may evolve without realising the connections and distinctions between related concepts. If you run on bsuite, you can automatically generate a one-page Appendix, with a link to a notebook report hosted online. This can help provide a scientific evaluation of your algorithmic changes, and help to share your results in an easily-digestible format, compatible with ICML, ICLR and NeurIPS formatting. We provide examples of these experiment reports in Appendices B, C, D and E." }, { "heading": "4 Code structure", "text": "To avoid discrepancies between this paper and the source code, we suggest that you take practical tutorials directly from github.com/deepmind/bsuite. A good starting point is bit.ly/bsuite-tutorial: a Jupyter notebook where you can play with the code right from your browser, without installing anything. The purpose of this section is to provide a high-level overview of the code that we open source. In particular, we want to stress that bsuite is designed to be a library for RL research, not a framework. We provide implementations for all the environments, analysis, run loop and even baseline agents. However, it is not necessary that you make use of them all in order to make use of bsuite. The recommended method is to implement your RL agent as a class that implements a policy method for action selection, and an update method for learning from transitions and rewards. Then, simply pass your agent to our run loop, which enumerates all the necessary bsuite experiments and logs all the data automatically. If you do this, then all the experiments and analysis will be handled automatically, and your results generated via the included Jupyter notebook. We provide examples of running these scripts locally, and via Google cloud through our tutorials. If you have an existing codebase, you can still use bsuite without migrating to our run loop or agent structure. Simply replace your environment with environment = bsuite.load_and_record(bsuite_id) and add the flag bsuite_id to your code. You can then complete a full bsuite evaluation by iterating over the bsuite_ids defined in sweep.SWEEP. Since the environments handle the logging themselves, you don't need any additional logging for the standard analysis. 
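In code, this workflow amounts to a few lines. The sketch below follows the load_and_record and sweep.SWEEP names used above, with environments exposing the dm_env-style reset/step interface described below; the agent itself is a placeholder of our own, and attribute names may differ slightly from the released library.

```python
import bsuite
from bsuite import sweep

class MyAgent:
    """Skeleton of the recommended agent structure: a policy method
    for action selection and an update method for learning."""

    def policy(self, timestep):
        del timestep  # a real agent conditions on the observation
        return 0      # placeholder action

    def update(self, timestep, action, new_timestep):
        del timestep, action, new_timestep  # learning step goes here

for bsuite_id in sweep.SWEEP:
    env = bsuite.load_and_record(bsuite_id, save_path='/tmp/bsuite')
    agent = MyAgent()
    for _ in range(env.bsuite_num_episodes):  # fixed interaction regime
        timestep = env.reset()
        while not timestep.last():
            action = agent.policy(timestep)
            new_timestep = env.step(action)
            agent.update(timestep, action, new_timestep)
            timestep = new_timestep
```

Because each environment logs itself to save_path, the analysis notebook can be pointed at the results directly; and since the sweep is naturally parallel (as noted below), each bsuite_id can equally well be dispatched to its own process or machine.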
Although a full bsuite evaluation includes many separate experiments, no single bsuite environment takes more than 30 minutes to run, and the sweep is naturally parallel. As such, we recommend launching in parallel using multiple processes or multiple machines. Our examples include a simple approach using Python's multiprocessing module with Google cloud compute. We also provide examples of running bsuite from OpenAI baselines (Dhariwal et al., 2017) and Dopamine (Castro et al., 2018). Designing a single RL agent compatible with diverse environments can cause problems, particularly for specialized neural networks. bsuite alleviates this problem by specifying an observation spec that surfaces the necessary information for adaptive network creation. By default, bsuite environments implement the dm_env standard (Muldal et al., 2017), but we also include a wrapper for use through OpenAI Gym (Brockman et al., 2016). However, if your agent is hardcoded for a format, bsuite offers the option to output each environment with the observation spec of your choosing via linear interpolation. This means that, if you are developing a network suitable for Atari with a particular observation spec, you can choose to swap in bsuite without any changes to your agent." }, { "heading": "5 Future iterations", "text": "This paper introduces the Behaviour Suite for Reinforcement Learning, and marks the start of its ongoing development. With our open source effort, we chose a specific collection of experiments as the bsuite2019 release, but expect this collection to evolve in future iterations. We are reaching out to researchers and practitioners to help collate the most informative, targeted, scalable and clear experiments possible for reinforcement learning. To do this, submissions should implement a sweep that determines the selection of environments to include and logs the necessary data, together with an analysis that parses this data. In order to review and collate these submissions we will be forming a bsuite committee. The committee will meet annually during the NeurIPS conference to decide which experiments will be included in the bsuite release. We are reaching out to a select group of researchers, and hope to build a strong core formed across industry and academia. If you would like to submit an experiment to bsuite or propose a committee member, you can do this via a GitHub pull request, or via email to bsuite.committee@gmail.com. We believe that bsuite can be a valuable tool for the RL community, and particularly for research in deep RL. So far, the great success of deep RL has been to leverage large amounts of computation to improve performance. With bsuite, we hope to leverage large-scale computation for improved understanding. By collecting clear, informative and scalable experiments, and providing accessible tools for reproducible evaluation, we hope to facilitate progress in reinforcement learning research." }, { "heading": "A Experiment summary", "text": "This appendix outlines the experiments that make up the bsuite 2019 release. In the interests of brevity, we provide only an outline of each experiment here. Full documentation for the environments, interaction and analysis is kept with the code at github.com/deepmind/bsuite." }, { "heading": "A.1 Basic learning", "text": "We begin with a collection of very simple decision problems, and standard analysis that confirms an agent's competence at learning a rewarding policy within them. 
We call these experiments ‘basic’, since they are not particularly targeted at specific core issues in RL, but instead test a general base level of competence we expect all general agents to attain." }, { "heading": "A.1.1 Simple bandit", "text": "environments: Finite-armed bandit with deterministic rewards [0, 0.1, .., 1] (Gittins, 1979); 20 seeds.\ninteraction: 10k episodes, record regret vs optimal.\nscore: regret normalized [random, optimal] → [0,1].\nissues: basic." }, { "heading": "A.1.2 MNIST", "text": "environments: Contextual bandit classification of MNIST with ±1 rewards (LeCun et al., 1998); 20 seeds.\ninteraction: 10k episodes, record average regret.\nscore: regret normalized [random, optimal] → [0,1].\nissues: basic, generalization." }, { "heading": "A.1.3 Catch", "text": "environments: A 10x5 Tetris-grid with a single block falling per column; the agent can move left/right in the bottom row to ‘catch’ the block. 20 seeds.\ninteraction: 10k episodes, record average regret.\nscore: regret normalized [random, optimal] → [0,1].\nissues: basic, credit assignment." }, { "heading": "A.1.4 Cartpole", "text": "environments: Agent can move a cart left/right on a plane to keep a balanced pole upright (Barto et al., 1983); 20 seeds.\ninteraction: 10k episodes, record average regret.\nscore: regret normalized [random, optimal] → [0,1].\nissues: basic, credit assignment, generalization." }, { "heading": "A.1.5 Mountain car", "text": "environments: Agent drives an underpowered car up a hill (Moore, 1990); 20 seeds.\ninteraction: 10k episodes, record average regret.\nscore: regret normalized [random, optimal] → [0,1].\nissues: basic, credit assignment, generalization." }, { "heading": "A.2 Stochasticity", "text": "To investigate the robustness of RL agents to noisy rewards, we repeat the experiments from Section A.1 under differing levels of Gaussian noise. This time we allocate the 20 different seeds across 5 levels of Gaussian noise N(0, σ²) for σ = [0.1, 0.3, 1, 3, 10] with 4 seeds each." }, { "heading": "A.3 Problem scale", "text": "To investigate the robustness of RL agents to problem scale, we repeat the experiments from Section A.1 under differing reward scales. This time we allocate the 20 different seeds across 5 levels of reward scaling, where we multiply the observed rewards by λ = [0.01, 0.1, 1, 10, 100] with 4 seeds each." }, { "heading": "A.4 Exploration", "text": "As an agent interacts with its environment, it observes the outcomes that result from previous states and actions, and learns about the system dynamics. This leads to a fundamental tradeoff: by exploring poorly-understood states and actions the agent can learn to improve future performance, but it may attain better short-run performance by exploiting its existing knowledge. Exploration is the challenge of prioritizing useful information for learning, and the experiments in this section are designed to necessitate efficient exploration for good performance." }, { "heading": "A.4.1 Deep sea", "text": "environments: Deep sea chain environments, size N=[5..50].\ninteraction: 10k episodes, record average regret.\nscore: % of runs with average regret < 90% of random.\nissues: exploration." }, { "heading": "A.4.2 Stochastic deep sea", "text": "environments: Deep sea chain environments with stochastic transitions, N(0,1) reward noise, size N=[5..50].\ninteraction: 10k episodes, record average regret.
\nscore: % of runs with average regret < 90% of random.\nissues: exploration, stochasticity." }, { "heading": "A.4.3 Cartpole swingup", "text": "environments: Cartpole ‘swing up’ problem with sparse reward (Barto et al., 1983), height limit x=[0, 0.5, .., 0.95].\ninteraction: 1k episodes, record average regret.\nscore: % of runs with average return > 0.\nissues: exploration, generalization." }, { "heading": "A.5 Credit assignment", "text": "Reinforcement learning extends the contextual bandit problem to allow long-term consequences in decision problems. This means that actions in one timestep can affect dynamics in future timesteps. One of the challenges of this setting is that of credit assignment, and the experiments in this section are designed to highlight these issues." }, { "heading": "A.5.1 Umbrella length", "text": "environments: Stylized ‘umbrella problem’, where only the first decision matters and there is a long chain of confounding variables; vary length 1..100 logarithmically.\ninteraction: 1k episodes, record average regret.\nscore: regret normalized [random, optimal] → [0,1].\nissues: credit assignment, stochasticity." }, { "heading": "A.5.2 Umbrella features", "text": "environments: Stylized ‘umbrella problem’, where only the first decision matters and there is a long chain of confounding variables; vary features 1..100 logarithmically.\ninteraction: 1k episodes, record average regret.\nscore: regret normalized [random, optimal] → [0,1].\nissues: credit assignment, stochasticity." }, { "heading": "A.5.3 Discounting chain", "text": "environments: Experiment designed to highlight issues of discounting horizon.\ninteraction: 1k episodes, record average regret.\nscore: regret normalized [random, optimal] → [0,1].\nissues: credit assignment." }, { "heading": "A.6 Memory", "text": "Memory is the challenge that an agent should be able to curate an effective state representation from a series of observations. In this section we review a series of experiments in which agents with memory can perform much better than those that only have access to the immediate observation." }, { "heading": "A.6.1 Memory length", "text": "environments: T-maze with a single binary context, grow length 1..100 logarithmically.\ninteraction: 1k episodes, record average regret.\nscore: regret normalized [random, optimal] → [0,1].\nissues: memory." }, { "heading": "A.6.2 Memory bits", "text": "environments: T-maze with length 2, vary number of bits to remember 1..100 logarithmically.\ninteraction: 1k episodes, record average regret.\nscore: regret normalized [random, optimal] → [0,1].\nissues: memory." }, { "heading": "B bsuite report as conference appendix", "text": "If you run an agent on bsuite, and you want to share these results as part of a conference submission, we make it easy to share a single-page ‘bsuite report’ as part of your appendix. We provide a simple LaTeX file that you can copy/paste into your paper, and that is compatible out-of-the-box with ICLR, ICML and NeurIPS style files. This single-page summary displays the summary scores for experiment evaluations for one or more agents, with plots generated automatically from the included IPython notebook. In each report, two sections are left for the authors to fill in: one describing the variants of the agents examined and another to give some brief commentary on the results. 
We suggest that authors promote more in-depth analysis to their main papers, or simply link to a hosted version of the full bsuite analysis online. You can find more details on our automated reports at github.com/deepmind/bsuite. The sections that follow are example bsuite reports that give some examples of how these report appendices might be used. We believe that these simple reports can be a good complement to conference submissions in RL research, as they ‘sanity check’ the elementary properties of algorithmic implementations. An added bonus of bsuite is that it is easy to set up a like-for-like experiment between agents from different ‘frameworks’ in a way that would be extremely laborious for an individual researcher. If you are writing a conference paper on a new RL algorithm, we believe that it makes sense for you to include a bsuite report in the appendix by default." }, { "heading": "C bsuite report: benchmarking baseline agents", "text": "The Behaviour Suite for Reinforcement Learning, or bsuite for short, is a collection of carefully-designed experiments that investigate core capabilities of a reinforcement learning (RL) agent. The aim of the bsuite project is to collect clear, informative and scalable problems that capture key issues in the design of efficient and general learning algorithms and study agent behaviour through their performance on these shared benchmarks. This report provides a snapshot of agent performance on bsuite2019, obtained by running the experiments from github.com/deepmind/bsuite (Osband et al., 2019)." }, { "heading": "C.1 Agent definition", "text": "In this experiment, all implementations are taken from bsuite/baselines with default configurations. We provide a brief summary of the agents run on bsuite2019:\n• random: selects an action uniformly at random at each timestep. • dqn: Deep Q-networks (Mnih et al., 2015b). • boot_dqn: bootstrapped DQN with prior networks (Osband et al., 2016; 2018). • actor_critic_rnn: an actor-critic with a recurrent neural network (Mnih et al., 2016)." }, { "heading": "C.2 Summary scores", "text": "Each bsuite experiment outputs a summary score in [0,1]. We aggregate these scores according to key experiment type, following the standard analysis notebook. A detailed analysis of each of these experiments may be found in a notebook hosted on Colaboratory: bit.ly/bsuite-agents." }, { "heading": "C.3 Results commentary", "text": "• random performs uniformly poorly, confirming the scores are working as intended. • dqn performs well on basic tasks, and quite well on credit assignment, generalization, noise and scale. DQN performs extremely poorly across memory and exploration tasks. The feedforward MLP has no mechanism for memory, and ε=5%-greedy action selection is inefficient exploration. • boot_dqn is mostly identical to DQN, except for exploration, where it greatly outperforms. This result matches our understanding of Bootstrapped DQN as a variant of DQN designed to estimate uncertainty and use this to guide deep exploration. • actor_critic_rnn typically performs worse than either DQN or Bootstrapped DQN on all tasks apart from memory, where it is the only agent able to perform better than random, thanks to its recurrent network architecture." }, { "heading": "D bsuite report: optimization algorithm in DQN", "text": "The Behaviour Suite for Reinforcement Learning, or bsuite for short, is a collection of carefully-designed experiments that investigate core capabilities of a reinforcement learning (RL) agent. 
The aim of the bsuite project is to collect clear, informative and scalable problems that capture key issues in the design of efficient and general learning algorithms and study agent behaviour through their performance on these shared benchmarks. This report provides a snapshot of agent performance on bsuite2019, obtained by running the experiments from github.com/deepmind/bsuite (Osband et al., 2019)." }, { "heading": "D.1 Agent definition", "text": "All agents correspond to different instantiations of the DQN agent (Mnih et al., 2015b), as implemented in bsuite/baselines but with different optimizers from TensorFlow (Abadi et al., 2015). In each case we tune the learning rate over {1e-1, 1e-2, 1e-3} to optimize performance on ‘basic’ tasks, keeping all other parameters constant at their default values.\n• sgd: vanilla stochastic gradient descent with learning rate 1e-2 (Kiefer & Wolfowitz, 1952). • rmsprop: RMSProp with learning rate 1e-3 (Tieleman & Hinton, 2012). • adam: Adam with learning rate 1e-3 (Kingma & Ba, 2015)." }, { "heading": "D.2 Summary scores", "text": "Each bsuite experiment outputs a summary score in [0,1]. We aggregate these scores according to key experiment type, following the standard analysis notebook. A detailed analysis of each of these experiments may be found in a notebook hosted on Colaboratory: bit.ly/bsuite-optim." }, { "heading": "D.3 Results commentary", "text": "Both RMSProp and Adam perform better than SGD in every category. In most categories, Adam slightly outperforms RMSProp, although this difference is smaller. SGD performs particularly badly on environments that require generalization and/or scale. This is not particularly surprising, since we expect the non-adaptive SGD may be more sensitive to learning rate optimization or annealing.\nIn Figure 11 we can see that the differences are particularly pronounced on the cartpole domains. We hypothesize that this task requires more efficient neural network optimization, and the non-adaptive SGD is prone to numerical issues." }, { "heading": "E bsuite report: ensemble size in Bootstrapped DQN", "text": "The Behaviour Suite for Reinforcement Learning, or bsuite for short, is a collection of carefully-designed experiments that investigate core capabilities of a reinforcement learning (RL) agent. The aim of the bsuite project is to collect clear, informative and scalable problems that capture key issues in the design of efficient and general learning algorithms and study agent behaviour through their performance on these shared benchmarks. This report provides a snapshot of agent performance on bsuite2019, obtained by running the experiments from github.com/deepmind/bsuite (Osband et al., 2019)." }, { "heading": "E.1 Agent definition", "text": "In this experiment, all agents correspond to different instantiations of a Bootstrapped DQN with prior networks (Osband et al., 2016; 2018). We take the default implementation from bsuite/baselines. We investigate the effect of the number of models used in the ensemble, sweeping over {1, 3, 10, 30}." }, { "heading": "E.2 Summary scores", "text": "Each bsuite experiment outputs a summary score in [0,1]. We aggregate these scores according to key experiment type, following the standard analysis notebook. A detailed analysis of each of these experiments may be found in a notebook hosted on Colaboratory: bit.ly/bsuite-ensemble." }, { "heading": "E.3 Results commentary", "text": "Generally, increasing the size of the ensemble improves bsuite performance across the board. 
However, we do see significantly decreasing returns to ensemble size, so that an ensemble of size 30 does not perform much better than one of size 10. These results are not predicted by the theoretical scaling of proven bounds (Lu & Van Roy, 2017), but are consistent with previous empirical findings (Osband et al., 2017; Russo et al., 2017). The gains are most extreme in the exploration tasks, where ensemble sizes less than 10 are not able to solve large ‘deep sea’ tasks, but larger ensembles solve them reliably.\nEven for large ensemble sizes, our implementation does not completely solve every cartpole swingup instance. Further examination of learning curves suggests this may be due to some instability issues, which might be helped by using Double DQN to combat value overestimation (van Hasselt et al., 2016)." } ]
2020
Behaviour Suite for Reinforcement Learning
SP:b0c94a1ef77cbdab8af5402718ed06964148dd95
[ "This manuscript proposes a novel formulation of the MLP to address predicting a symmetric positive definite (SPD) matrix from an input vector or matrix. While the field has had methods for years to estimate SPD matrices (such as the covariance matrix estimate in the reparameterization trick), this manuscript proposes a markedly different approach based on a different layer structure and repeated normalization steps based on Mercer Kernels. Additionally, the loss used can be modified to use more PSD-specific losses, such as the symmetrized von Neumann divergence, rather than the traditional quadratic loss. This loss appears to give significantly better solutions on synthetic data.", "This paper explores the problem of deep heteroskedastic multivariate regression where the goal is to regress over symmetric positive definite matrices; that is, the deep learning model should take as input data points, and produce a conditional covariance matrix as the output. The key challenge in this setting is how to ensure the predicted matrix is positive definite (and thus follows the non-linear geometry of these matrices), how the neural network can be trained for this task, and what loss function can be used for effective training. The paper proposes a neural network with bilinear layers in this regard, and uses the von Neumann divergence as the loss function to regress the predicted covariance against a ground truth SPD matrix. The gradients of the von Neumann divergence are provided for learning via backpropagation. Experiments on several synthetic datasets and small scale datasets are provided, showcasing some benefits. " ]
Models that output a vector of responses given some inputs, in the form of a conditional mean vector, are at the core of machine learning. This includes neural networks such as the multilayer perceptron (MLP). However, models that output a symmetric positive definite (SPD) matrix of responses given inputs, in the form of a conditional covariance function, are far less studied, especially within the context of neural networks. Here, we introduce a new variant of the MLP, referred to as the matrix MLP, that is specialized at learning SPD matrices. Our construction not only respects the SPD constraint, but also makes explicit use of it. This translates into a model which effectively performs the task of SPD matrix learning even in scenarios where data are scarce. We present an application of the model in heteroscedastic multivariate regression, including convincing performance on six real-world datasets.
[]
[ { "authors": [ "Vincent Arsigny", "Pierre Fillard", "Xavier Pennec", "Nicholas Ayache" ], "title": "Geometric means in a novel vector space structure on symmetric positive-definite matrices", "venue": "SIAM Journal of Matrix Analysis Applications,", "year": 2006 }, { "authors": [ "André M. Carrington", "Paul W. Fieguth", "Helen H. Chen" ], "title": "A new Mercer sigmoid kernel for clinical data classification", "venue": "In International Conference of the IEEE Engineering in Medicine and Biology Society,", "year": 2014 }, { "authors": [ "Mohammed E. Fathy", "Azadeh Alavi", "Rama Chellappa" ], "title": "Discriminative log-Euclidean feature learning for sparse representation-based recognition of faces from videos", "venue": null, "year": 2016 }, { "authors": [ "Emily B. Fox", "David B. Dunson" ], "title": "Bayesian nonparametric covariance regression", "venue": "Journal of Machine Learning Research,", "year": 2015 }, { "authors": [ "E Gómez", "Miguel Gomez-Villegas" ], "title": "Marìn. A multivariate generalization of the power exponential family of distributions", "venue": "Communications in Statistics-theory and Methods,", "year": 1998 }, { "authors": [ "Ian Goodfellow", "Yoshua Bengio", "Aaron Courville" ], "title": "Deep Learning", "venue": null, "year": 2016 }, { "authors": [ "Mehrtash T. Harandi", "Conrad Sanderson", "Richard Hartley", "Brian C. Lovell" ], "title": "Sparse coding and dictionary learning for symmetric positive definite matrices: A kernel approach", "venue": "In ECCV,", "year": 2012 }, { "authors": [ "Mehrtash T. Harandi", "Mathieu Salzmann", "Richard Hartley" ], "title": "From manifold to manifold: Geometryaware dimensionality reduction for SPD matrices", "venue": "In ECCV,", "year": 2014 }, { "authors": [ "Peter D. Hoff", "Xiaoyue Niu" ], "title": "A covariance regression model", "venue": "Statistica Sinica,", "year": 2012 }, { "authors": [ "Zhiwu Huang", "Luc Van Gool" ], "title": "A Riemannian network for SPD matrix learning", "venue": "In AAAI,", "year": 2017 }, { "authors": [ "Zhiwu Huang", "Jiqing Wu", "Luc Van Gool" ], "title": "Building deep networks on Grassmann manifolds", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Catalin Ionescu", "Orestis Vantzos", "Cristian Sminchisescu" ], "title": "Matrix backpropagation for deep networks with structured layers", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "A. Karalic", "I. Bratko" ], "title": "First order regression", "venue": "Machine Learning,", "year": 1997 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-encoding variational Bayes", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "Jan R. Magnus" ], "title": "On the concept of matrix derivative", "venue": "Journal of Multivariate Analysis,", "year": 2010 }, { "authors": [ "Jonathan Masci", "Davide Boscaini", "Michael M. Bronstein", "Pierre Vandergheynst" ], "title": "Geodesic convolutional neural networks on Riemannian manifolds", "venue": "In ICCVW,", "year": 2015 }, { "authors": [ "Ha Quang Minh", "Marco San-Biagio", "Vittorio Murino" ], "title": "Log-Hilbert-Schmidt metric between positive definite operators on Hilbert spaces", "venue": "In NeurIPS,", "year": 2014 }, { "authors": [ "Frank Nielsen" ], "title": "A family of statistical symmetric divergences based on", "venue": "Jensen’s inequality. arXiv,", "year": 2010 }, { "authors": [ "Frank Nielsen", "Richard Nock" ], "title": "On the centroids of symmetrized Bregman divergences", "venue": null, "year": 2007 }, { "authors": [ "Michael A. Nielsen", "Isaac L. 
Chuang" ], "title": "Quantum Computation and Quantum Information", "venue": null, "year": 2000 }, { "authors": [ "Peter Meer Oncel Tuzel", "Fatih Porikli" ], "title": "Pedestrian detection via classification on Riemannian manifolds", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2008 }, { "authors": [ "Théodore Papadopoulo", "Manolis I.A. Lourakis" ], "title": "Estimating the Jacobian of the singular value decomposition: Theory and applications", "venue": "In ECCV,", "year": 2000 }, { "authors": [ "Xavier Pennec", "Pierre Fillard", "Nicholas Ayache" ], "title": "A Riemannian framework for tensor computing", "venue": "International Journal of Computer Vision,", "year": 2005 }, { "authors": [ "Mohsen Pourahmadi" ], "title": "Joint mean-covariance models with applications to longitudinal data: Unconstrained parameterisation", "venue": null, "year": 1999 }, { "authors": [ "C.E. Rasmussen", "C.K.I. Williams" ], "title": "Gaussian processes for machine learning", "venue": null, "year": 2006 }, { "authors": [ "David E. Rumelhart", "Geoffrey E. Hinton", "Ronald J. Williams" ], "title": "Learning representations by back-propagating", "venue": "errors. Nature,", "year": 1986 }, { "authors": [ "Eleftherios Spyromitros-Xioufis", "Grigorios Tsoumakas", "William Groves", "Ioannis Vlahavas" ], "title": "Multi-target regression via input space expansion: treating targets as inputs", "venue": "Machine Learning,", "year": 2016 }, { "authors": [ "Koji Tsuda", "Gunnar Rätsch", "Manfred K. Warmuth" ], "title": "Matrix exponentiated gradient updates for on-line learning and Bregman projection", "venue": "Journal of Machine Learning Research,", "year": 2005 }, { "authors": [ "logΛi", "i. B" ], "title": "THE α-DERIVATIVE: DEFINITION AND PROPERTIES Definition Let F be an m× n matrix function of an n× q matrix of variables X. The α-derivative of F(X) is defined as (Magnus", "venue": null, "year": 2010 } ]
[ { "heading": "1 INTRODUCTION", "text": "For certain applications, it is desirable to construct a conditional covariance matrix as a function of the input (the explanatory variable). The problem arises, for instance, in spatial (and spatio-temporal) statistics, in relation to the heteroscedastic multivariate regression (e.g., Pourahmadi, 1999; Hoff & Niu, 2012; Fox & Dunson, 2015), where we deal with multivariate response measurements for which the typical assumption of homoscedasticity may not be suitable. In such cases, we require models that estimate the covariance matrix that captures the spatial variations in correlations between the elements of the response vector. The covariance matrix is a symmetric positive definite (SPD) matrix which can be challenging to estimate due to its non-Euclidean geometry (Pennec et al., 2005). The central problem that this work is concerned with is learning SPD matrices using neural networks.\nTo motivate our discussion, consider solving the problem of SPD matrix learning using a multilayer perceptron (MLP) as an example of a fully connected neural network (e.g., Goodfellow et al., 2016). To meet the SPD constraint, one would need to tailor the output layer of the MLP so that the estimated covariance matrix satisfies the SPD requirement. One possible approach would be to use the Cholesky decomposition. The main concern is that this approach does not take into account the non-Euclidean geometry of the SPD matrices. Using empirical evaluations, we will show that the use of “wrong geometry” results in the poor estimation of the SPD matrices in particular where data are scarce.\nThe primary objective here is to design a nonlinear architecture using neural networks that can effectively perform the task of SPD matrix learning. More specifically, our main contribution is to show how to alter the architecture of the MLP in such a way that it not only respects the SPD constraint, but also makes an explicit use of it. We will achieve this by: 1) Explicitly taking the non-Euclidean geometry of the underlying SPD manifolds (e.g., Pennec et al., 2005) into account by designing a new loss function, and 2) by deriving a new backpropagation algorithm (Rumelhart et al., 1986) that respects the SPD nature of the matrices. This new model will be referred to as the matrix multilayer perceptron (mMLP)1. The mMLP makes use of positive-definite kernels to satisfy the SPD requirement across all layers. Hence, it provides a natural way of enabling deep SPD matrix learning.\nWe take a step-by-step approach in the development of the model. We first develop a simplified version of the resulting model that is designed for learning SPD matrices (Section 3). We then extend this model into its most general form that can be used for joint estimation of the conditional mean function and the conditional covariance function in a mean-covariance regression setting (Section 3.2). An application of the model is discussed in the context of the heteroscedastic multivariate regression.\n1An implementation of this work will be made available via GitHub." }, { "heading": "2 RELATED WORK", "text": "SPD manifold metric. Earlier approaches for analyzing SPD matrices relied on the Euclidean space. But over the past decade, several studies suggest that non-Euclidean geometries such as the Riemannian structure may be better suited (e.g., Arsigny et al., 2006; Pennec et al., 2005). 
In this work, we consider the von Neumann divergence (e.g., Nielsen & Chuang, 2000) as our choice of the SPD manifold metric which is related to the Riemannian geometry. Previously, Tsuda et al. (2005) used this divergence in derivation of the matrix exponentiated gradients. Their work suggests its effectiveness for measuring dissimilarities between positive definite (PD) matrices.\nSPD manifold learning. There are multiple approaches towards the SPD matrix learning, via the flattening of SPD manifolds through tangent space approximations (e.g., Oncel Tuzel, 2008; Fathy et al., 2016), mapping them into reproducing kernel Hilbert spaces (Harandi et al., 2012; Minh et al., 2014), or geometry-aware SPD matrix learning (Harandi et al., 2014). While these methods typically employ shallow learning, the more recent line of research aims to design a deep architecture to nonlinearly learn target SPD matrices (Ionescu et al., 2015; Huang & Gool, 2017; Masci et al., 2015; Huang et al., 2018). Our method falls in this category but differs in the problem formulation. While the previous methods address the problem where the input is an SPD matrix and the output is a vector, we consider the reverse problem where the input is a matrix with an arbitrary size and the output is an SPD matrix.\nBackpropagation. Our extension of the matrix backpropagation differs from the one introduced by Ionescu et al. (2015). In their work, the necessary partial derivatives are computed using a two-step procedure consisting of first computing the functional that describes the variations of the upper layer variables with respect to the variations of the lower layer variables, and then computing the partial derivatives with respect to the lower layer variables using properties of the matrix inner product. In contrast, we make use of the concept of α-derivatives (Magnus, 2010) and its favorable generalization properties to derive a procedure which closely mimics the standard backpropagation." }, { "heading": "3 MATRIX MULTILAYER PERCEPTRON", "text": "Preliminaries: Matrix α-derivative. Throughout this work we adopt the narrow definition of the matrix derivatives known as the α-derivative (Magnus, 2010) in favor of the broad definition, the ω-derivative. The reason for this is that the α-derivative has better generalization properties. This choice turned out to be crucial in the derivation of the mMLP’s backpropagation routine which involves derivatives of matrix functions w.r.t. the matrix of variables.\nDefinition: Let F be an m× n matrix function of an n× q matrix of variables X. The α-derivative of F(X) is defined as (Magnus, 2010, Definition 2)\nDXF := ∂ vecF(X)\n∂ (vecX)> , (1)\nwhere DXF is an mp × nq matrix which contains all the partial derivatives such that each row contains the partial derivatives of one function with respect to all variables, and each column contains the partial derivatives of all functions with respect to one variable.\nFor convenience, the α-derivative’ basic properties, including the product rule and the chain rule, are summarized in Appendix B." }, { "heading": "3.1 THE BASIC FORM OF THE MMLP", "text": "Activation matrix function. Let Z = (z1, . . . , zd) denote a matrix of variables zi ∈ Rd. The activation function K(Z) defines a matrix function in the form of [K(Z)]i,j = κ(zi, zj), ∀i, j ∈ {1, . . . , d}, where κ is some differentiable activation function outputting scalar values. In the following, we restrict ourselves to the kernel functions which form PD activation matrix functions. 
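To make the construction concrete, the following is a minimal numpy sketch (illustrative, not the released implementation) of forming [K(Z)]_{ij} = κ(z_i, z_j) from a scalar kernel; the particular kernel `kappa` below is a placeholder assumption, and any PD kernel may be substituted.

import numpy as np

def activation_matrix(Z, kappa):
    # [K(Z)]_{ij} = kappa(z_i, z_j), where z_i are the columns of Z.
    d = Z.shape[1]
    K = np.empty((d, d))
    for i in range(d):
        for j in range(d):
            K[i, j] = kappa(Z[:, i], Z[:, j])
    return K

# Placeholder kernel (an assumption for the demo); it is the inner product
# of the feature maps tanh(z), so the resulting matrix is a Gram matrix.
kappa = lambda u, v: np.tanh(u) @ np.tanh(v)

Z = np.random.default_rng(0).standard_normal((5, 4))   # columns z_i in R^5
K = activation_matrix(Z, kappa)
print(np.all(np.linalg.eigvalsh(K) >= -1e-12))         # Gram matrix is PSD

Because K is a Gram matrix of feature-mapped columns, its positive semi-definiteness holds by construction.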
For numerical stability reasons, irrespective of the functional form of κ, we normalize the resulting matrix. This can be achieved by enforcing the trace-one constraint,H(Z) = K(Z)/tr(K(Z)), where H denotes a differentiable SPD activation matrix function of trace one. Without loss of generality, throughout this work, we use the Mercer sigmoid kernel (Carrington et al., 2014) defined as\nκ(zi, zj) = tanh(αzi + β) tanh(αzj + β), (2)\nwhere α and β denote the slope and the intercept, respectively. Furthermore, denotes the dot product. In all experiments, we use default values of α = 1 and β = 0. The The α-derivative of the Mercer sigmoid kernel is computed in Appendix E.\nModel construction. Let X ∈ Rp1×p2 indicate the input matrix and Y ∈ Rd0×d0 indicate the corresponding output matrix, an SPD matrix of trace one. The mMLP of L hidden layers is shown as mMLP :X→Ŷ and constructed as Ŷ = H(Z0), Z0 = W0H1W>0 + B0, Hl = H(Zl), Zl = WlHl+1W>l + Bl, ∀ 1 ≤ l ≤ L, HL+1 = H(ZL+1), ZL+1 = WL+1vecX(WL+11p1p2)> + BL+1. (3)\nThe pair of Wl∈Rdl×dl+1 ,∀0 ≤ l≤ L, and WL+1∈RdL+1×p1p2 are the weight matrices, Bl∈Rdl×dl ,∀0≤ l ≤ L+1, are the bias matrices, Zl ∈ Rdl×dl ,∀0 ≤ l≤ L+1, are the latent input matrices, and Hl ∈ Rdl×dl ,∀1 ≤ l ≤ L+ 1, are latent output SPD matrices of trace one. Design choice. In the construction of (3), we have ensured that Hl are SPD matrices of trace one across all layers as opposed to only at the output layer. The idea behind this design choice is to propagate the nonlinearities introduced via the SPD activation matrix functions through all layers. This design choice turned out to be more effective than the alternative, and arguably simpler, design where the SPD requirement is met only at the output layer. We will discuss this further in Section 5.1.3, where we also present an illustrative numerical example.\nLoss function. We consider the normalized von Neumann divergence (e.g., Nielsen & Chuang, 2000), also commonly known as the quantum relative entropy (QRE), as the base for the loss function. For two arbitrary SPD matrices of trace one, Φ and Φ̃, the normalized von Neumann divergence is defined as:\n∆QRE(Φ̃||Φ) = tr(Φ̃logΦ̃− Φ̃logΦ), (4) where log denotes the matrix logarithm (taking Φ as an example, it is computed using logΦ = Vdiag(logλ)V>, where V and λ are the matrix of eigenvectors and the vector of eigenvalues from the eigendecomposition of Φ). The von Neumann divergence is asymmetric. However, it can be symmetrized by using the fact that the von Neumann entropy of trace one follows the class of generalized quadratic distances (Nielsen & Nock, 2007). Hence, we define the loss function as2\n`QRE(Ŷ,Y) = 1\n2 (∆QRE(Ŷ||Y) + ∆QRE(Y||Ŷ)). (5)\nTaking α-derivative of `QRE involves taking partial derivatives through the eigendecomposition. In the following, we derive a method for analytically computing the derivative of `QRE.\nThe α-derivative of the symmetrized von Neumann divergence. For the loss function defined in (5), using the α-derivative’s product rule and chain rule (refer to Appendix B), we obtain\nDŶ` = 1\n2 DŶtr((Ŷ −Y)logŶ)−\n1 2 DŶtr(ŶlogY), (6)\nwhere the above two terms are computed using\nDŶ(tr((Ŷ −Y)logŶ)) = (vec(logŶ)>)>︸ ︷︷ ︸ 1×d20 +\n vec(Ŷ> −Y>) vec( ∂ ∂Ŷ11 logŶ) vec(Ŷ> −Y>) vec( ∂ ∂Ŷ21 logŶ)\n... vec(Ŷ> −Y>) vec( ∂\n∂Ŷd0d0 logŶ)\n >\n︸ ︷︷ ︸ 1×d20\n, (7)\nDŶ(tr(ŶlogY)) = (vec(logY) >)>︸ ︷︷ ︸\n1×d20\n. (8)\n2Note that there are multiple ways of symmetrizing the von Neumann divergence. 
Our choice in (5) resembles to the α-Jensen–Shannon divergence for α = 1 (Nielsen, 2010).\nThe remaining part in the computation of (6) is to evaluate ∂ ∂Ŷij logŶ, for all i, j ∈ {1, . . . , d0}, which involves taking derivatives through the eigendecomposition. In the following, we take a similar approach as in Papadopoulo & Lourakis (2000) to compute the necessary partial derivatives.\nLet Ŷ = Υdiag(λ1, . . . , λd0)Υ > be the eigendecomposition. We can write\n∂\n∂Ŷij logŶ =\n∂\n∂Ŷij ΥΛΥ>, where Λ = diag(logλ1, . . . , logλd0),\n= ∂Υ\n∂Ŷij ΛΥ> + Υ\n∂Λ\n∂Ŷij Υ> + ΥΛ\n∂Υ> ∂Ŷij .\n(9)\nBy multiplying (9) from left and right by Υ> and Υ respectively, we obtain:\nΥ> ∂\n∂Ŷij logŶ Υ = Υ>\n∂Υ\n∂Ŷij Λ +\n∂Λ\n∂Ŷij + Λ\n∂Υ> ∂Ŷij Υ = Ξij(Υ)Λ + ∂Λ ∂Ŷij −ΛΞij(Υ), (10)\nwhere we have defined Ξij(Υ) = Υ> ∂∂Ŷij Υ and used the fact that Ξij(Υ) is an antisymmetric matrix, Ξij(Υ) + Ξ>ij(Υ) = 0, which in turn follows from the fact that Υ is an orthonormal matrix,\nΥ>Υ = Id0 ⇒ ∂Υ>\n∂Ŷij Υ + Υ>\n∂Υ\n∂Ŷij = Ξ>ij(Υ) + Ξij(Υ) = 0. (11)\nTaking the antisymmetric property of Ξij(Υ) into account in (10), we obtain ∂\n∂Ŷij logλk = ΥikΥjk, (12)\nΞij(Υkl) = ΥikΥjl + ΥilΥjk 2(logλl − logλk) , ∀l 6= k. (13)\nIt is notable that by construction, we do not have repeating eigenvalues, that is λk 6= λl, ∀k 6= l, so there exists a unique solution to (13). Once Ξij(Υ) is computed, it follows that\n∂Υ\n∂Ŷij = ΥΞij(Υ),\n∂Υ> ∂Ŷij = −Ξij(Υ)Υ>. (14)\nIn summary, the necessary partial derivatives for computing (9) is given by (12) and (14). Once (9) is computed for all i, j, we can evaluate (7) and ultimately evalaute (6).\nOptimization. The remaining steps are feed-forward computation, backpropagation, and learning, as in the standard MLP. However, here, the backpropagation requires taking derivatives with respect to the matrix functions. These steps are described in Appendix C." }, { "heading": "3.2 THE GENERAL FORM OF THE MMLP", "text": "We now discuss a general version of the mMLP which produces both a vector and an SPD matrix as outputs. An important application of this model is in heteroscedastic multivariate regression which we will discuss in Section 5.2.\nModel construction. As before, let X ∈ Rp1×p2 denote the input matrix. The corresponding outputs in this case are: an SPD matrix of trace one Y ∈ Rd0×d0 and y ∈ Rr0 . The mMLP of L hidden layers is denoted by mMLP :X→{ŷ, Ŷ} and constructed as: ŷ = h(z0), z0 = C0ŶA0h1 + b0, Ŷ = H(Z0), Z0 = W0H1W>0 + B0, hl = h(zl), zl = ClHlAlhl+1 + bl, ∀ 1 ≤ l ≤ L, Hl = H(Zl), Zl = WlHl+1W>l + Bl, ∀ 1 ≤ l ≤ L, hL+1 = h(zL+1), zL+1 = CL+1HL+1AL+11 + bL+1,\nHL+1 = H(ZL+1), ZL+1 = WL+1vecX(WL+11p1p2)> + BL+1,\n(15)\nwhere hl ∈ Rrl ,Hl ∈ Rdl×dl ,∀1 ≤ l ≤ L+ 1, zl,bl ∈ Rrl , Zl,Bl ∈ Rdl×dl , Cl∈Rrl×dl ,∀0≤ l≤L+1, Al∈Rdl×rl+1, and Wl∈Rdl×dl+1 , ∀0 ≤ l≤ L. Just as in the standard MLP, h is an activation function of choice, e.g., the hyperbolic tangent function.\nLoss function. The loss function here needs to be designed with the specific application in mind. In the case of the heteroscedastic multivariate regression, the loss can be defined based on the log-likelihood. This will be discussed in Section 4.\nOptimization. The remaining steps of feed-forward computation, backpropagation, and learning, are all described in Appendix D." 
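As a complement to the construction in (3) and the loss in (5), the following numpy sketch (an illustrative reimplementation under assumed layer sizes, not the authors' released code) runs one forward pass of the basic mMLP with the Mercer sigmoid activation matrix and evaluates the symmetrized von Neumann loss; the eigenvalue floor inside the matrix logarithm is a numerical assumption of ours.

import numpy as np

def mercer_sigmoid_gram(Z, alpha=1.0, beta=0.0):
    # [K(Z)]_{ij} = tanh(alpha*z_i + beta) . tanh(alpha*z_j + beta), eq. (2).
    F = np.tanh(alpha * Z + beta)
    return F.T @ F

def H(Z):
    # Trace-one SPD activation matrix function: H(Z) = K(Z) / tr(K(Z)).
    K = mercer_sigmoid_gram(Z)
    return K / np.trace(K)

def mmlp_forward(X, Ws, Bs):
    # One forward pass of the basic mMLP, eq. (3); Ws = [W_0, ..., W_{L+1}].
    p = X.size
    Z = Ws[-1] @ X.reshape(-1, 1) @ (Ws[-1] @ np.ones((p, 1))).T + Bs[-1]
    Hl = H(Z)                                   # H_{L+1}
    for W, B in zip(reversed(Ws[:-1]), reversed(Bs[:-1])):
        Hl = H(W @ Hl @ W.T + B)                # H_L, ..., H_1, then Y_hat
    return Hl                                   # trace-one SPD output

def von_neumann_loss(Y_hat, Y):
    # Symmetrized normalized von Neumann divergence, eqs. (4)-(5).
    def logm(S):
        lam, V = np.linalg.eigh(S)
        lam = np.clip(lam, 1e-12, None)         # numerical floor (assumption)
        return V @ np.diag(np.log(lam)) @ V.T
    div = lambda A, B: np.trace(A @ logm(A) - A @ logm(B))
    return 0.5 * (div(Y_hat, Y) + div(Y, Y_hat))

# Tiny usage example (layer sizes are illustrative assumptions):
rng = np.random.default_rng(0)
d, p = [4, 4, 4], 6                             # d_0, d_1, d_2; p = p1 * p2
Ws = [rng.standard_normal((d[0], d[1])), rng.standard_normal((d[1], d[2])),
      rng.standard_normal((d[2], p))]
Bs = [rng.standard_normal((di, di)) for di in d]
Y_hat = mmlp_forward(rng.standard_normal(p), Ws, Bs)
print(np.isclose(np.trace(Y_hat), 1.0))         # trace-one by construction
Y = np.eye(d[0]) / d[0]                         # a trace-one SPD target
print(von_neumann_loss(Y_hat, Y) >= 0)

Note how the trace-one SPD property propagates through every layer in the loop: each hidden state is re-normalized by its trace, mirroring the design choice discussed above.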
}, { "heading": "4 THE MMLP IN HETEROSCEDASTIC MULTIVARIATE REGRESSION", "text": "In this section, we discuss an application of the mMLP in relation to the heteroscedastic multivariate regression, more specifically, the joint mean-covariance regression task.\nModel construction. Let Dtrain = {y(i),x(i)}ni=1 be our training dataset consisting of a set of inputs xi ∈ Rdx and a set of responses yi ∈ Rdy . Consider the following multivariate regression problem: y(i) = f(x(i)) + e(i), where f is a nonlinear function, and e(i) is the additive noise on the i-th response measurement. The goal is estimation of the conditional mean function E[y |x∗] and the conditional covariance function Var[y |x∗] for an unseen input x∗ ∈ Dtest. We consider two likelihood models based on our choices of the noise model, namely, the multivariate Gaussian model and its generalization the multivariate power exponential model.\nMultivariate Gaussian model. We first consider the noise model to follow a zero-mean multivariate Gaussian (mG) distribution with a dense covariance matrix, that is ei ∼ N (0,Σi). Let Σi = ηiΩi, where ηi = tr(Σi) and Ωi is a trace-one matrix. The noise model can accordingly be reformulated as ei ∼ Ntr1(0,Ωi, ηi) where Ntr1 is referred to as the trace-one mG distribution3. Although these two formulations of the mG distribution are identical, we find it easier to work with the latter. This is because the output layer of the mMLP model in (15) operates under the trace-one constraint. It is of course possible to drop the trace-one constraint from the output layer, but we would then also be forced to use a different kernel function than the Mercer sigmoid kernel in that layer. Instead, we find it easier to work with this reformulated mG distribution with a trace-one covariance matrix which allows us to use the same choice of kernel function (2) across all layers.\nGiven ei ∼ N (0,Ωi, ηi), the likelihood is defined as `Ntr1 := log Ntr1(y;µ,Ω, η), mMLPθ : x→ {µ,Ω, η}, (16)\nwhere E[y |x] = µ, and Var[y |x] = ηΩ. The set θ includes all the neural network parameters. Multivariate power exponential model. We next consider the noise model to follow a zero-mean multivariate power exponential (mPE) distribution (e.g., Gómez et al., 1998) which is a generalized variant of the mG distribution, that is e(i) ∼ E(0,Σi, αi, βi) where Σi is a symmetric real dispersion matrix and the pair of αi ∈ R+ and βi ∈ R+ control the tail and the shape of the distribution. As a special case, for α = 1 and β = 1, the mPE includes the mG distribution4.\nAs in the case of the mG distribution, we find it easier to work with a reformulated variant where the dispersion matrix is of trace-one. Let Σi = ηiΩi, where ηi = tr(Σi). The noise model can accordingly be represented as e(i) ∼ Etr1(0,Ωi, αi, βi, ηi) where Etr1 is referred to as the trace-one mPE distribution5.\nGiven e(i) ∼ Etr1(0,Ωi, αi, βi, ηi), the likelihood is defined as: `Etr1 := log Etr1(y;µ,Ω, α, β, η), mMLPθ : x→ {µ,Ω, α, β, η}, (17)\nwhere E[y |x] = µ and Var[y |x] = ηαν(β)Ω, where ν(β) = 2 1/βΓ( d+12β ) d Γ( d2β ) . The set θ includes all the neural network parameters.\nOptimization. Finally, the optimization involves learning the neural network parameters θ by maximizing the likelihoods given by (16) and (17).\n3Refer to Appendix F.1 for the exact functional form of the trace-one mG distribution. 4Figure G.1.1 visualizes the probability density function of the distribution for selected values of α and β. 
5Refer to Appendix G for the exact functional form of the trace-one mPE distribution and its basic properties.

Figure 1: SPD matrix learning (refer to Example 1 in Section 5.1.1). (A) Two instances of target covariance (SPD) matrices (20 × 20). (B, C) Estimated covariance matrices by the mMLP using `QRE and `quad, respectively. (D) Estimated covariance matrices using the Cholesky-based MLP and `quad." }, { "heading": "5 EXPERIMENTS", "text": "Experiments are divided into two parts. The first part is an empirical validation of the mMLP model in a supervised task of learning SPD matrices using synthetic data. The second part discusses our results in heteroscedastic multivariate regression on real-world datasets." }, { "heading": "5.1 SPD MATRIX LEARNING", "text": "" }, { "heading": "5.1.1 THE CHOICE OF LOSS FUNCTION", "text": "Consider the problem of learning SPD matrices on synthetic data using the mMLP model of (3). The objectives are to validate the model and to evaluate the effect of the choice of the loss function on the overall performance.

The following loss functions are considered for this analysis. The first candidate is the loss function based on the normalized von Neumann divergence, `QRE(Ŷ, Y), given by (5); `QRE is related to the Riemannian geometry. The second candidate is the quadratic loss, `quad(Ŷ, Y) = tr((Ŷ − Y)(Ŷ − Y)>), which is related to the Euclidean geometry.

Example 1. Consider the set Dtrain = {x_i, Y_i}_{i=1}^{n_train} of inputs x_i ∈ R^20 and corresponding SPD matrix outputs Y_i ∈ R^{d_0 × d_0}, which are in this case dense covariance matrices (refer to Appendix H.1 for details on the data generation). The goal is to estimate the covariance matrices Ŷ associated to the input vectors from the unseen test set Dtest = {x_i}_{i=1}^{n_test}. The training size is varied between n_train ∈ {10^2, 10^4} samples. The analysis is carried out for d_0 ∈ {10, 20}. Two examples of the test target outputs Y_i for d_0 = 20 and n_train = 10^2 are visualized in Figure 1-A.

The mMLP models (3) are trained using 3 layers (20 units per layer, d_l = 20) under our two choices of loss functions, `QRE and `quad. All models share the same initialization, and the only difference here is the loss function. Refer to Appendix H.1 for additional details on the mMLP initialization.6 The performance is evaluated on the test set, n_test = 10^3, in terms of the losses as the error measures, shown as EQRE and Equad. Table 1 summarizes the results of the evaluation (also refer to Figure 1-B,C for the visual illustration). The key observation is that the quality of estimates differs considerably depending on the choice of the loss function. The loss function `QRE that takes into account the geometry of the SPD matrices clearly outperforms the one based on the Euclidean geometry, `quad. The advantage is more pronounced for small sets of training data." }, { "heading": "5.1.2 THE MMLP VS. THE MLP BASED ON CHOLESKY DECOMPOSITION", "text": "As we discussed in Section 1, one can take a heuristic approach and tailor the output layer of a vanilla MLP so that the resulting output matrix satisfies the SPD requirement.

For the sake of comparison, we solve the same task as in Example 1 using the standard MLP with 3 layers (200 units per layer) and with the quadratic loss.
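The SPD output head of this baseline, described in the next paragraph, can be sketched as follows (an illustrative numpy construction of ours; the exponential parameterization of the diagonal and the final trace-one normalization are assumptions made for the demo).

import numpy as np

def cholesky_spd_head(z, d):
    # Map a raw network output z in R^{d(d+1)/2} to an SPD matrix via an
    # LDL^T-style parameterization: unit lower-triangular L, positive D.
    L = np.eye(d)
    iu = np.tril_indices(d, k=-1)
    n_off = len(iu[0])
    L[iu] = z[:n_off]                     # strictly-lower entries of L
    D = np.diag(np.exp(z[n_off:]))        # positive diagonal via exp (assumption)
    S = L @ D @ L.T                       # SPD by construction
    return S / np.trace(S)                # trace-one normalization (our addition)

z = np.random.default_rng(0).standard_normal(4 * 5 // 2)  # d = 4 -> 10 params
S = cholesky_spd_head(z, 4)
print(np.all(np.linalg.eigvalsh(S) > 0))  # True: S is SPD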
To meet the SPD requirement, we use the Cholesky decomposition at the output layer, using the known result that every SPD matrix can be factored uniquely as LDL> for a lower triangular matrix L with ones as its diagonal entries and a diagonal matrix D with positive diagonal entries.

6Here and in general throughout this section, special care has been made to minimize the effect of overfitting through trying out various initializations and using early stopping.

Table 1 summarizes the results of the evaluation. Overall, the performance is quite poor for small sets of training data. As the size of the training data grows, the performance improves as expected (refer to Figure 1-D for the visual illustration)." }, { "heading": "5.1.3 SHALLOW VS DEEP SPD MATRIX LEARNING", "text": "The design of the mMLP model in (3) enables a mechanism for deep SPD matrix learning by satisfying the SPD constraint across all input, hidden and output layers. The simpler approach would be to consider the standard MLP architecture across input and hidden layers but make use of the activation matrix functions only at the output layer to meet the SPD requirement:

Ŷ = H(Z_0), Z_0 = W_0 h_1 (W_0 1)> + B_0,
h_l = h(z_l), z_l = W_l h_{l+1} + b_l, 1 ≤ l ≤ L,
h_{L+1} = h(z_{L+1}), z_{L+1} = W_{L+1} vecX + b_{L+1}. (18)

This amounts to a shallow design in the sense that it does not enable a mechanism for preserving the SPD constraint across all layers during the learning.

The design in (3) allows nonlinearities to pass through layers via activation matrix functions imposing the SPD constraint, whereas in the shallow design, nonlinearities are propagated across layers via activation functions without imposing any constraints. Our hypothesis is that the former has an advantage over the latter in that it captures complex dependencies which are important for SPD matrix learning. Below, we present a numerical example which indeed highlights the importance of preserving the SPD constraint across all layers when learning the SPD matrix.

Example 2. Consider an experiment similar to Example 1 for the case of n_train = 10^2 and output dimensions d_0 ∈ {10, 20}. We directly compare the performance of (3) against (18) under different numbers of hidden layers, L ∈ {2, 4, 6}. For the mMLP model, the number of hidden units at each layer is set to 20, and for the MLP model it is set to 200 units. The shallow design (18) uses the hyperbolic tangent as the activation function h(·). The same choice of the activation matrix function H(·), given by (2), is used for both models. We use `QRE as the choice of the loss function for both models (refer to Appendix H.2 for additional details on the initialization). The performance is evaluated in terms of EQRE.

Table 2 summarizes the results of the evaluation. Although the shallow design (18) performs relatively well, it underperforms in comparison to (3). Given the limited number of training samples, arbitrarily increasing the number of layers is not necessarily advantageous, which is the case for both models. However, in this regard, the design in (18) is more sensitive." }, { "heading": "5.2 EXPERIMENTAL RESULTS ON THE MULTI-OUTPUT REGRESSION DATASETS", "text": "In heteroscedastic multivariate regression, the central task is to estimate the conditional covariance matrix that captures the spatial variations in correlations between the elements of the response vector.
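Since the networks of Section 4 output (µ, Ω, η), the corresponding training objective is the trace-one Gaussian log-likelihood of (16) and Appendix F.1. The following is a short numpy sketch of ours (shapes and the random test case are assumptions), checked against scipy:

import numpy as np
from scipy.stats import multivariate_normal

def trace_one_split(Sigma):
    # Split an SPD covariance as Sigma = eta * Omega with tr(Omega) = 1.
    eta = np.trace(Sigma)
    return eta, Sigma / eta

def log_trace_one_gaussian(y, mu, Omega, eta):
    # log N_tr1(y; mu, Omega, eta) for Sigma = eta * Omega, eq. (56).
    diff = y - mu
    _, logdet = np.linalg.slogdet(2.0 * np.pi * eta * Omega)
    quad = diff @ np.linalg.solve(eta * Omega, diff)
    return -0.5 * (logdet + quad)

# Sanity check against the standard multivariate normal density:
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)); Sigma = A @ A.T + np.eye(3)
mu, y = np.zeros(3), rng.standard_normal(3)
eta, Omega = trace_one_split(Sigma)
print(np.allclose(log_trace_one_gaussian(y, mu, Omega, eta),
                  multivariate_normal(mu, Sigma).logpdf(y)))

The check confirms that the trace-one reparameterization leaves the density unchanged; it only reroutes the overall scale into the separate parameter η.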
The underlying hypothesis is that if a model can effectively capture the heteroscedasticity, then it will provide better uncertainty quantification, and the estimation of the conditional mean response should also improve, in comparison to a model that is built based on the homoscedasticity assumption.

For this purpose, we compare our mMLP-based regression models against another heteroscedastic-based mean-covariance regression model by Fox & Dunson (2015) and two homoscedastic-based mean-regression models, namely, the MLP regressor and the Gaussian process (GP). The models used in this experiment are summarized in Table 3. Real-world datasets. We compare the performance of the heteroscedastic and homoscedastic regression models on six real-world multi-output datasets. Key features of the datasets are summarized in Table 5a. The performance is evaluated in terms of the root-mean-square error (RMSE) on test sets, shown in Table 5b. The results suggest that the mMLP heteroscedastic-based regression models are capable of capturing dependencies between the output measurements, which contributes to the better estimation of the mean predictions." }, { "heading": "6 LIMITATIONS AND FUTURE WORK", "text": "The main limitation of the mMLP has to do with scalability to higher dimensions. The complexity associated with computing the α-derivative of the von Neumann loss function (5) at the output layer is O(d_0^3). Taking the symmetric nature of the SPD matrices into account, the computational complexity at the hidden layer l reduces to O(d_l^2). Our implementation of the matrix backpropagation involves the use of multiple Kronecker products. Although this facilitates the implementation, we would need access to the full Jacobian matrices (of size d_l^2 × d_l^2). However, these matrices are in fact available in the form of sparse block matrices, which means that it is possible to implement a memory-efficient computation of the tensor products without the need to actually have access to the full matrices. Future work is needed in this direction.

We believe that there are many cases in which the mMLP can prove to be useful. An interesting direction for future work is to investigate the application of the model in the context of conditional density estimation within the framework of variational autoencoders." }, { "heading": "7 DISCUSSION", "text": "We introduced a new method to learn SPD matrices, referred to as the matrix multilayer perceptron (mMLP). The mMLP takes the non-Euclidean geometry of the underlying SPD manifolds into account by making use of the von Neumann divergence as the choice of the SPD manifold metric. One key aspect of the mMLP is that it preserves the SPD constraint across all layers by exploiting PD kernel functions and a backpropagation algorithm that respects the inherent SPD nature of the matrices. We studied an application of the model in the context of heteroscedastic multivariate regression. We showed the effectiveness of the proposed model on multiple real-world datasets." }, { "heading": "A MATRIX NOTATIONS", "text": "We use > for the transpose operator, tr(·) for the trace operator, and det(·) for the matrix determinant. The symmetric part of a square matrix B is denoted by sym(B) = (B + B>)/2. The Kronecker product is denoted by ⊗, the Hadamard product by ◦, and the dot product by •. We use the vec-operator for column-by-column stacking of a matrix A, shown as vecA ≡ vec(A). Let A be an m × n matrix; the operator P(m,n) will then rearrange vecA to its matrix form: A = P(m,n)(vecA).
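A quick numpy illustration of ours of the vec operator and the rearrangement operator P(m,n) just defined:

import numpy as np

A = np.arange(6).reshape(2, 3)           # an m x n = 2 x 3 matrix
vecA = A.reshape(-1, order="F")          # column-by-column stacking, vec(A)

def P(m, n, v):
    # Rearrange vec(A) back to its m x n matrix form.
    return v.reshape(m, n, order="F")

print(np.array_equal(P(2, 3, vecA), A))  # A = P_(m,n)(vec A) -> True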
For the m× n dimensional matrix A, the commutation matrix is shown asK(m,n) which is the mn×mn matrix that transforms vecA into vecA> as: K(m,n)vecA = vecA>. An m×m identity matrix is shown as Im. If T (X) : Rd×d → R is a real-valued function on matrices, then ∇XT (X) denotes the gradient with respect to the matrix X,∇XT (X) = [ ∂T ∂xij ] i,j=1:d\n. The matrix logarithm and the matrix exponential are written as logA and expA, respectively. The matrix exponential in the case of symmetric matrices can be expressed using the eigenvalue decomposition as expA = V(expΛ)V>, where V is an orthonormal matrix of eigenvectors and Λ is a diagonal matrix with the eigenvalues on the diagonal. The matrix logarithm is the inverse of the matrix exponential if it exists. If A is symmetric and strictly positive definite (PD) it is computed using logA = V(logΛ)V>, where (logΛ)i,i = logΛi,i.\nB THE α-DERIVATIVE: DEFINITION AND PROPERTIES\nDefinition Let F be an m× n matrix function of an n× q matrix of variables X. The α-derivative of F(X) is defined as (Magnus, 2010, Definition 2)\nDXF := ∂ vecF(X)\n∂ (vecX)> , (19)\nwhere DXF is an mp × nq matrix which contains all the partial derivatives such that each row contains the partial derivatives of one function with respect to all variables, and each column contains the partial derivatives of all functions with respect to one variable.\nProduct rule Let F (m× p) and G (p× r) be functions of X (n× q). Then the product rule for the α-derivative is given by (Magnus, 2010)\nDX(FG) = (G > ⊗ Im)DXF + (Ir ⊗ F)DXG. (20)\nChain rule Let F (m× p) be differentiable at X (n× q), and G (l × r) be differentiable at Y = F(X), then the composite function H(X) = G(F(X)) is differentiable at X, and\nDXH = DYGDXF, (21)\nwhich expresses the chain rule for the α-derivative (Magnus, 2010)." }, { "heading": "C THE BASIC CASE OF THE MMLP", "text": "" }, { "heading": "C.1 FEEDFORWARD STEP", "text": "At the feed-forward computation, we compute and store the latent outputs Ŷ, Hl for all l ∈ {L+ 1, . . . , 1} using the current setting of the parameters, which are Wl, and Bl computed from the learning step, Appendix C.3." }, { "heading": "C.2 BACKPROPAGATION STEP", "text": "We first summarize the necessary α-derivatives for the backpropagation, and then write down the backpropagation procedure accordingly.\nDerivatives required for backpropagation The derivative of the activation matrix function depends on the specific choice of kernel function, and in general it is computed readily from the\ndefinition of α-derivative,\nDZlH(Zl) := ∂ vecH(Zl) ∂ (vecZl)> , l ∈ {0, . . . , L+ 1}. (22)\nFor our specific choice of activation function, the Mercer Sigmoid kernel (2), it is computed in Appendix E.\nVia repeated use of the product rule of α-derivatives (20), we obtain\nDWlZl = (Wl ⊗ Idl)(H>l+1 ⊗ Idl) + (Idl ⊗ (WlHl+1))K(dl,dl+1), l ∈ {0, . . . , L}, (23) DWL+1Zl = (WL+11p1p2 ⊗ IdL+1)((vecX)> ⊗ IdL+1) + (IdL+1 ⊗ (WL+1vecX))(IdL+1 ⊗ 1>p1p2)K(dL+1,p1p2), (24) DHl+1Zl = (Wl ⊗ Idl)(Idl+1 ⊗Wl), l ∈ {0, . . . , L}, (25)\nwhereK is the commutation matrix (refer to Appendix A for a summary of the matrix notation).\nBackpropagation In the interest of simple expressions, let H0 ≡ Ŷ. Backpropagation to the hidden layer l is computed recursively using the derivatives computed at the previous layer according to\nDZl` = DHl`DZlHl, ∀l ∈ {0, . . . , L+ 1}, (26) DWl` = DZl`DWlZl, ∀l ∈ {0, . . . , L+ 1}, (27) DHl+1` = DZl`DHl+1Zl, ∀l ∈ {0, . . . , L}, (28) DBl` = DZl`, ∀l ∈ {0, . . . , L+ 1}. 
(29)" }, { "heading": "C.3 LEARNING STEP", "text": "Learning involves updating the weights Wl and the biases Bl using derivatives computed during the backpropagation step. These are updated using derivatives DWl` and DBl` for a given learning rate η as\nWl ←Wl − ηP(dl,dl+1)(DWl`), ∀l ∈ {0, . . . , L+ 1}, (30) Bl ← Bl − ηP(dl,dl)(DBl`), ∀l ∈ {0, . . . , L+ 1}, (31)\nwhere P is the rearrangement operator introduced in Appendix A." }, { "heading": "D THE GENERAL FORM OF THE MMLP", "text": "" }, { "heading": "D.1 FEEDFORWARD STEP", "text": "The forward path involves computing and storing both hl, ŷ and Hl, Ŷ using the current settings of the parameters, for all l ∈ {0, . . . , L+ 1}." }, { "heading": "D.2 BACKPROPAGATION STEP", "text": "Most of the necessary derivatives are identical to the ones in Appendix C.2. However, there are some additional derivatives needed which we will discuss in the following. We then write down the backpropagation formula." }, { "heading": "D.2.1 REQUIRED DERIVATIVES FOR BACKPROPAGATION", "text": "The derivative of the activation function depends on the choice of the function, and it is computed using the definition of the α-derivative,\nDzlhl(zl) = ∂ h(zl)\n∂ (zl)> , l ∈ {0, . . . , L+ 1}. (32)\nThe other required derivatives are computed as\nDAlzl = ClHl, l ∈ {0, . . . , L}, (33) DAL+1zL+1 = (CL+1HL+1)(1 > rL+1 ⊗ IdL+1), (34) DClzl = (HlAlhl+1) > ⊗ Irl , l ∈ {0, . . . , L}, (35) DCL+1zL+1 = (HL+1AL+11rL+1) > ⊗ IrL+1 , (36) Dhl+1zl = ClHlAl, l ∈ {0, . . . , L}, (37) DHlzl = ((Alhl+1)\n> ⊗ Irl)(Idl ⊗Cl), l ∈ {0, . . . , L}, (38) DHL+1zL+1 = ((AL+11L+1) > ⊗ IrL+1)(IdL+1 ⊗CL+1). (39)\nBackpropagation For simplicity of expressions, let h0 ≡ ŷ and H0 ≡ Ŷ. The derivatives are recursively computed as\nDh0` ≡ Dŷ` (40) DH0` ≡ DŶ` = Dz0`DŶz0 + DŶ` (41) Dzl` = Dhl`Dzlhl, ∀l ∈ {0, . . . , L+ 1}, (42) Dhl+1` = Dzl`Dhl+1zl, ∀l ∈ {0, . . . , L}, (43) DZl` = DHl`DZlHl, ∀l ∈ {0, . . . , L+ 1}, (44) DHl+1` = Dzl+1`DHl+1zl+1 + DZl`DHl+1Zl, ∀l ∈ {0, . . . , L}, (45) DAl` = Dzl`DAlzl, ∀l ∈ {0, . . . , L+ 1}, (46) DCl` = Dzl`DClzl, ∀l ∈ {0, . . . , L+ 1}, (47) DWl` = DZl`DWlZl, ∀l ∈ {0, . . . , L+ 1}, (48) Dbl` = Dzl`, ∀l ∈ {0, . . . , L+ 1}, (49) DBl` = DZl`, ∀l ∈ {0, . . . , L+ 1}. (50)" }, { "heading": "D.3 LEARNING STEP", "text": "The learning step involves updating the weights and the biases which are computed using derivatives computed from the backpropagation step. Update rules for Wl, and Bl are the same as the ones given in Appendix C.3. The remaining parameters are learned in a similar fashion,\nAl ← Al − ηP(dl,rl+1)(DAl`), ∀l ∈ {0, . . . , L+ 1}, (51) Cl ← Cl − ηP(rl,dl)(DCl`), ∀l ∈ {0, . . . , L+ 1}, (52) bl ← bl − ηP(rl,1)(Dbl`), ∀l ∈ {0, . . . , L+ 1}. (53)\nE THE α-DERIVATIVE OF THE MERCER SIGMOID KERNEL\nThe α-derivative of the Mercer sigmoid kernel can be computed as\nDZH = · · · · · · · · · · · · ∂ ∂zi κmn trK︸ ︷︷ ︸\n1×dl\n· · ·\n· · · · · · · · · , ∀ i,m, n ∈ {1, . . . , dl}, (54) where zi indicates the ith column of Z, trK ≡ trK(Z), κmn ≡ κ(zm, zn) as defined in (2), and\n∂\n∂zi\nκmn trK =\n( α(1> − f(zi) ◦ f(zi)) (trK)2 ) ◦ ( trK f ( ∂ ∂zi (zm ◦ zn) ) − 2κmnf(zi) ) , (55)\nwhere f(zi) := tanh(αzi − β).\nUnder review as a conference paper at ICLR 2020\n13 Supplementary figures195\nFigure 1: The probability density function of a trace-one mPE distribution, Etr1(µ, M/⌘, ⌘,↵, ) for fixed values of µ, M , ⌘ = tr(M) and varying values of scale and shape parameters (↵, ). 
When ↵ = 1 and = 1, the density corresponds to the multivariate Gaussian distribution N (µ, M).\n10\n13 Supplementary figures195\nFigure 1: The probability density function of a trace-one mPE distribution, Etr1(µ, M/⌘, ⌘,↵, ) for fixed values of µ, M , ⌘ = tr(M) and varying values of scale and shape parameters (↵, ). When ↵ = 1 and = 1, the density corresponds to the multivariate Gaussian distribution N (µ, M).\n10\n13 Supplementary figures195\nFigure 1: The probability density function of a trace-one mPE distribution, Etr1(µ, M/⌘, ⌘,↵, ) for fixed values of µ, M , ⌘ = tr(M) and varying values of scale and shape parameters (↵, ). When ↵ = 1 and = 1, the density corresponds to the multivariate Gaussian distribution N (µ, M).\n10\nα\nβ\nFigure G.1: The probability density function of a trace-one mPE distribution, Etr1(µ,M/η, η, α, β) for fixed values of µ,M , η = tr(M) and varying values of scale and shape parameters (α, β). When α = 1 and β = 1, the density corresponds to the multivariate Gaussian distribution N (µ,M)." }, { "heading": "F TRACE-ONE MULTIVARIATE GAUSSIAN DISTRIBUTION", "text": "" }, { "heading": "F.1 PROBABILITY DENSITY FUNCTION", "text": "For a d-dimensional random variable ϑ ∈ Rd, we define the trace-one Gaussian distribution according to\nNtr1(ϑ;µ,Ω,η)= 1\ndet(2πηΩ) 1 2\ne− 1 2 (ϑ−µ) >(ηΩ)−1(ϑ−µ), (56)\nwhere µ ∈ Rd is the mean, η ∈ R+ is the scale parameter, and Ω ∈ Rd×d is the trace-one covariance matrix, tr(Ω) = 1.\nF.2 THE α-DERIVATIVES OF THE LOG-PDF\nThe α-derivatives of the trace-one Gaussian distribution’s log-pdf with respect to its parameters are summarized as\nDΩlogNtr1(ϑ;µ,Ω, η) = − 1 2 (vec(Ω−1))> − 1 2 (DΩt) >, (57) DΩt = −vec((ηΩ)−1(ϑ− µ)(ϑ− µ)>Ω−1), (58)\nDµlogNtr1(ϑ;µ,Ω, η) = (ϑ− µ)(ηΩ)−1, (59)\nDlogηlogNtr1(ϑ;µ,Ω, η) = − d\n2 +\n1 2 (ϑ− µ)>(ηΩ)−1(ϑ− µ). (60)" }, { "heading": "G TRACE-ONE MULTIVARIATE POWER EXPONENTIAL (MPE) DISTRIBUTION", "text": "" }, { "heading": "G.1 PROBABILITY DENSITY FUNCTION", "text": "For a random variable ϑ ∈ Rd, the trace-one mPE distribution can be expressed as\nEtr1(ϑ;µ,Ω, α, β, η)= c(α, β)\n(det(ηΩ)) 1 2\nexp { −1\n2\n( t(ϑ;µ,Ω)\nαη\n)β} ,\nc(α, β) = βΓ(d2 )\nπ d 2 Γ( d2β )2 d 2β α d 2\n, t(ϑ;µ,Ω) := (ϑ− µ)>Ω−1(ϑ− µ), tr(Ω) = 1. (61)\nHere, µ is the mean vector, and Ω is a d× d symmetric real dispersion matrix where tr(Ω) = 1. The parameter η has the same role as in the trace-one Gaussian distribution. The pair of α ∈ R+\nand β ∈ R+ control the tail and the shape of the distribution. As a special case, the mPE includes the Gaussian distribution: For α = 1 and β = 1, the trace-one mPE distribution corresponds to the trace-one multivariate Gaussian distribution. Figure G.1.1 visualizes the probability density function of the distribution for selected values of α and β, for the case of d = 2.\nRemark: Very large and very small values of α and β might be undesirable as, for one, they pose numerical challenges. In practice, these parameters can be restricted within a range by choosing suitable output activation functions. In all experiments in this paper, we choose to bound them conservatively as: 0.5 ≤ α, β ≤ 1.5, using the sigmoid function." }, { "heading": "G.2 MOMENTS", "text": "Let ϑ ∈ Rd and ϑ ∼ Etr1(µ,Ω, α, β, η). 
The mPE’s mean vector and covariance matrix are computed from:\nE[ϑ] = µ, V[ϑ] = αην(β)Ω, ν(β) := 21/βΓ(d+22β )\ndΓ( d2β ) , (62)\nwhere Γ(·) denotes the gamma function.\nG.3 THE α-DERIVATIVES OF THE LOG-PDF\nIt is straightforward to take derivatives of the mPE’s log-pdf using the favorable generalization properties of the α-derivative’s chain and product rules. These are summarized as:\nDΩlogEtr1(ϑ;µ,Ω, α, β, η) = − 1 2 (vec(Ω−>))> − β 2αη (t/αη)β−1DΩt, (63)\nDΩt = −(vec(Ω−1(ϑ− µ)(ϑ− µ)>Ω−1)>)>, (64)\nDµlogEtr1(ϑ;µ,Ω, α, β, η) = − β\n2αη (t/αη)β−1Dµt, (65)\nDµt = −2(vec((ϑ− µ)>Ω−1)>)>, (66)\nDlogηlogEtr1(ϑ;µ,Ω, α, β, η) = − d\n2 +\nβt\n2αη (t/αη)β−1, (67)\nDαlogEtr1(ϑ;µ,Ω, α, β, η) = − d\n2α +\nβt\n2ηα2 (t/αη)β−1, (68)\nDβ logEtr1(ϑ;µ,Ω, α, β, η) = Dβ logc(α, β)− 1\n2 (t/αη)β log(t/αη), (69)\nDβ logc(α, β) = 1\nβ +\nd\n2β2 (ψ(d/2β) + log2). (70)" }, { "heading": "H ADDITIONAL DETAILS ON THE EXPERIMENTS", "text": "" }, { "heading": "H.1 EXAMPLE 1", "text": "Data generation Let A ∈ Rd0×d0 be a matrix where each of its elements is generated from a standard normal distribution. The matrixA is kept fixed. The ith class covariance Yi is computed according to the following procedure:\n1. Draw 104 samples from a known Gaussian distribution N (µi,Σi) with a unique mean µi ∈ Rd0 and a unique dense covariance matrix Σi ∈ Rd0×d0 .\n2. Let tj be a random sample from this Gaussian. For this sample, compute yj = Atj . For all 104 samples, collect the results into y = {yj}10 4\nj=1. 3. Compute the sample covariance of y and normalize the resulting covariance matrix to trace\none, that is Yi ← cov(y)/tr(cov(y)).\nInitialization All models use the same batch size (equal to 5), the same choice of activation matrix function, which is given by the Mercer sigmoid kernel (2), and the same optimizer (the Adam optimizer (Kingma & Welling, 2014) with default settings)." }, { "heading": "H.2 EXAMPLE 2", "text": "Data generation See the data generation procedure in Example 1.\nInitialization Both models (18) and (3) use the same batch size (equal to 5), the same choice of loss function (5), and the same optimizer (the Adam optimizer (Kingma & Welling, 2014) with default settings). Both models use the same choice of the output activation matrix function, given by the Mercer sigmoid kernel (2). The model in (18) uses the hyperbolic tangent as the activation function across the hidden layers, while (3) makes use of the same choice of the activation matrix function as in its output layer." }, { "heading": "H.3 MULTI-OUTPUT REGRESSION DATASETS", "text": "H.3.1 oes10\nThe dataset oes10 was obtained from Spyromitros-Xioufis et al. (2016). The Occupational Employment Survey (OES) datasets contain records from the years of 2010 (OES10) of the annual Occupational Employment Survey compiled by the US Bureau of Labor Statistics. As described in (Spyromitros-Xioufis et al., 2016), \"each row provides the estimated number of full-time equivalent employees across many employment types for a specific metropolitan area\". We selected the same 16 target variables as listed in (Spyromitros-Xioufis et al., 2016, Table 5). The remaining 298 variables serve as the inputs. Data samples were randomly divided into training and test sets (refer to Table 5a).\nH.3.2 edm\nThe dataset edm was obtained from Karalic & Bratko (1997). The electrical discharge machining (EDM) dataset contain domain measurements in which the workpiece surface is machined by electrical discharges occurring in the gap between two electrodes: the tool and the workpiece. 
Given the two input variables, gap control and flow control, the aim here is to predict the other 16 target variables representing mean values and deviations of the observed quantities of the considered machining parameters. Data samples were randomly divided into training and test sets (refer to Table 5a).\nH.3.3 atp1d AND atp7d\nThe datasets atp1d and atp7d were obtained from (Spyromitros-Xioufis et al., 2016). The Airline Ticket Price (ATP) dataset includes the prediction of airline ticket prices. As described in (Spyromitros-Xioufis et al., 2016), the target variables are either the next day price, atp1d, or minimum price observed over the next seven days atp7d for 6 target flight preferences listed in (Spyromitros-Xioufis et al., 2016, Table 5). There are 411 input variables in each case. The inputs for each sample are values considered to be useful for prediction of the airline ticket prices for a specific departure date, for example, the number of days between the observation date and the departure date, or the boolean variables for day-of-the-week of the observation date. Data samples were randomly divided into training and test sets (refer to Table 5a).\nH.3.4 scm1d AND scm20d\nThe datasets scm1d and scm20d were obtained from (Spyromitros-Xioufis et al., 2016). The Supply Chain Management (SCM) datasets are derived from the Trading Agent Competition in Supply Chain Management (TAC SCM) tournament from 2010. As described in (Spyromitros-Xioufis et al., 2016), each row corresponds to an observation day in the tournament. There are 280 input variables in these datasets which are observed prices for a specific tournament day. The datasets contain 16 regression targets, where each target corresponds to the next day mean price scm1d or mean price for 20 days in the future scm20d for each product (Spyromitros-Xioufis et al., 2016, Table 5). Data samples were randomly divided into training and test sets (refer to Table 5a)." } ]
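The generation procedure of Appendix H.1 can be condensed into a few lines. The following numpy sketch is a paraphrase of steps 1-3 (the seeds and the particular Gaussian parameters are assumptions made for illustration):

import numpy as np

def make_target_covariance(A, mu_i, Sigma_i, n=10_000, seed=0):
    # Steps 1-3 of Appendix H.1: draw t ~ N(mu_i, Sigma_i), map y = A t,
    # then return the trace-one-normalized sample covariance of y.
    rng = np.random.default_rng(seed)
    T = rng.multivariate_normal(mu_i, Sigma_i, size=n)   # step 1
    Y = T @ A.T                                          # step 2: y_j = A t_j
    C = np.cov(Y, rowvar=False)                          # step 3
    return C / np.trace(C)                               # trace-one target Y_i

d0 = 20
rng = np.random.default_rng(1)
A = rng.standard_normal((d0, d0))                        # fixed across classes
B = rng.standard_normal((d0, d0))
Y1 = make_target_covariance(A, np.zeros(d0), B @ B.T + np.eye(d0))
print(np.isclose(np.trace(Y1), 1.0))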
2019
null
SP:228fd66964ccbf61d40a38bd12db78cad1401136
[ "In this paper, the authors consider stochastic optimization in the setting where a validation function is used to guide the termination of the algorithm. In more details, the algorithm terminates if the gradient of the validation function at an iterate is smaller than a threshold. In this framework, the authors consider several variants of SGD, including distributed variant and SVRG, for each of which the authors study the expected number of iterations for a prescribed accuracy under an assumption between the training and validation set.", "This paper proposes an optimization approach in which the optimizer computes the gradient on a given function yet uses another to decide a stopping time. Conceptually those functions are empirical errors on train and validation folds in the most common setting, although the authors seem to use other settings later in the paper to consider decentralized optimization schemes. The authors introduce a bound on the Wasserstein distance between the train and validation distributions in their analysis which plays a crucial role in their results. The authors use these results to motivate variants of existing optimization algorithms. " ]
This work examines the convergence of stochastic gradient algorithms that use early stopping based on a validation function, wherein optimization ends when the magnitude of a validation function gradient drops below a threshold. We derive conditions that guarantee this stopping rule is well-defined and analyze the expected number of iterations and gradient evaluations needed to meet this criterion. The guarantee accounts for the distance between the training and validation sets, measured with the Wasserstein distance. We develop the approach for stochastic gradient descent (SGD), allowing for biased update directions subject to a Lyapunov condition. We apply the approach to obtain new bounds on the expected running time of several algorithms, including Decentralized SGD (DSGD), a variant of decentralized SGD known as Stacked SGD, and the stochastic variance reduced gradient (SVRG) algorithm. Finally, we consider the generalization properties of the iterate returned by early stopping.
[]
[ { "authors": [ "A. Agarwal", "L. Bottou" ], "title": "A lower bound for the optimization of finite sums", "venue": "In Proceedings of the 32nd International Conference on International Conference on Machine Learning-Volume", "year": 2015 }, { "authors": [ "A. Agarwal", "J.C. Duchi" ], "title": "Distributed delayed stochastic optimization", "venue": "Advances in Neural Information Processing Systems", "year": 2011 }, { "authors": [ "Allen-Zhu", "Z. Natasha" ], "title": "Faster Non-Convex Optimization Than SGD", "venue": "In Proceedings of the 32nd Conference on Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Z. Allen-Zhu" ], "title": "How To Make the Gradients Small Stochastically", "venue": "In Proceedings of the 32nd Conference on Neural Information Processing Systems, NeurIPS", "year": 2018 }, { "authors": [ "Z. Allen-Zhu", "E. Hazan" ], "title": "Variance reduction for faster non-convex optimization", "venue": "In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48,", "year": 2016 }, { "authors": [ "F. Bach", "E. Moulines" ], "title": "Non-strongly-convex smooth stochastic approximation with convergence rate o (1/n)", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "D.P. Bertsekas", "J.N. Tsitsiklis" ], "title": "Gradient convergence in gradient methods with errors", "venue": "SIAM Journal on Optimization,", "year": 2000 }, { "authors": [ "J. Blanchet", "C. Cartis", "M. Menickelly", "K. Scheinberg" ], "title": "Convergence Rate Analysis of a Stochastic Trust Region Method via Submartingales", "venue": null, "year": 2016 }, { "authors": [ "A. Defazio", "F. Bach", "S. Lacoste-Julien" ], "title": "Saga: A fast incremental gradient method with support for non-strongly convex composite objectives", "venue": "Advances in Neural Information Processing Systems", "year": 2014 }, { "authors": [ "S. Dereich", "M. Scheutzow", "R. Schottstedt" ], "title": "Constructive quantization: Approximation by empirical measures", "venue": "Ann. Inst. H. Poincar Probab. Statist., 49(4):1183–1203,", "year": 2013 }, { "authors": [ "D. Duvenaud", "D. Maclaurin", "R. Adams" ], "title": "Early stopping as nonparametric variational inference", "venue": "Proceedings of the 19th International Conference on Artificial Intelligence and Statistics,", "year": 2016 }, { "authors": [ "S. Ghadimi", "G. Lan" ], "title": "Stochastic first- and zeroth-order methods for nonconvex stochastic programming", "venue": "SIAM Journal on Optimization,", "year": 2013 }, { "authors": [ "S. Ghadimi", "G. Lan" ], "title": "Accelerated gradient methods for nonconvex nonlinear and stochastic programming", "venue": "Mathematical Programming,", "year": 2016 }, { "authors": [ "M. Hardt", "B. Recht", "Y. Singer" ], "title": "Train faster, generalize better: Stability of stochastic gradient descent", "venue": "In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48,", "year": 2016 }, { "authors": [ "D. Hsu", "S. Sabato" ], "title": "Loss minimization and parameter estimation with heavy tails", "venue": "Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "R. Johnson", "T. Zhang" ], "title": "Accelerating stochastic gradient descent using predictive variance reduction", "venue": "Advances in Neural Information Processing Systems", "year": 2013 }, { "authors": [ "H. Kushner", "D. 
Clark" ], "title": "Stochastic Approximation Methods for Constrained and Unconstrained Systems", "venue": "Number v. 26 in Applied Mathematical Sciences. Springer-Verlag,", "year": 1978 }, { "authors": [ "N.L. Roux", "M. Schmidt", "F.R. Bach" ], "title": "A stochastic gradient method with an exponential convergence rate for finite training sets", "venue": "Advances in Neural Information Processing Systems", "year": 2012 }, { "authors": [ "G. Lai", "Chang", "W.-C", "Y. Yang", "H. Liu" ], "title": "Modeling long-and short-term temporal patterns with deep neural networks", "venue": "In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval,", "year": 2018 }, { "authors": [ "L. Lei", "C. Ju", "J. Chen", "M.I. Jordan" ], "title": "Non-convex finite-sum optimization via scsg methods", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "X. Lian", "Y. Huang", "Y. Li", "J. Liu" ], "title": "Asynchronous parallel stochastic gradient for nonconvex optimization", "venue": "In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2,", "year": 2015 }, { "authors": [ "X. Lian", "C. Zhang", "H. Zhang", "Hsieh", "C.-J", "W. Zhang", "J. Liu" ], "title": "Can decentralized algorithms outperform centralized algorithms? a case study for decentralized parallel stochastic gradient descent", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "J. Lin", "L. Rosasco" ], "title": "Optimal learning for multi-pass stochastic gradient methods", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "L. Ljung" ], "title": "Analysis of recursive stochastic algorithms", "venue": "IEEE transactions on automatic control,", "year": 1977 }, { "authors": [ "A. Nemirovski", "D. Yudin" ], "title": "Problem Complexity and Method Efficiency in Optimization", "venue": null, "year": 1983 }, { "authors": [ "A. Nemirovski", "A. Juditsky", "G. Lan", "A. Shapiro" ], "title": "Robust stochastic approximation approach to stochastic programming", "venue": "SIAM Journal on Optimization,", "year": 2009 }, { "authors": [ "C. Paquette", "K. Scheinberg" ], "title": "A Stochastic Line Search Method with Convergence Rate Analysis", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "A. Rakhlin", "O. Shamir", "K. Sridharan" ], "title": "Making gradient descent optimal for strongly convex stochastic optimization", "venue": "In Proceedings of the 29th International Coference on International Conference on Machine Learning,", "year": 2012 }, { "authors": [ "S. Reddi", "A. Hefny", "S. Sra", "B. Poczos", "A. Smola" ], "title": "Stochastic variance reduction for nonconvex optimization", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "S.J. Reddi", "S. Sra", "B. Pczos", "A. Smola" ], "title": "Fast incremental method for smooth nonconvex optimization", "venue": "In 2016 IEEE 55th Conference on Decision and Control (CDC),", "year": 2016 }, { "authors": [ "O. Shamir" ], "title": "A stochastic pca and svd algorithm with an exponential convergence rate", "venue": "In Proceedings of the 32Nd International Conference on International Conference on Machine Learning - Volume 37,", "year": 2015 }, { "authors": [ "C. Villani" ], "title": "Optimal transport: old and new, volume 338", "venue": "Springer Science & Business Media,", "year": 2008 }, { "authors": [ "D. 
Williams" ], "title": "Probability with Martingales", "venue": null, "year": 1991 }, { "authors": [ "H. Zhang", "S.J. Reddi", "S. Sra" ], "title": "Riemannian svrg: Fast stochastic optimization on riemannian manifolds", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "S. Zhang", "A. Choromanska", "Y. LeCun" ], "title": "Deep learning with elastic averaging sgd", "venue": "In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1,", "year": 2015 }, { "authors": [ "M. Zinkevich", "M. Weimer", "L. Li", "A.J. Smola" ], "title": "Parallelized stochastic gradient descent", "venue": "In Advances in neural information processing systems,", "year": 2010 }, { "authors": [ "Vt" ], "title": "The second line uses the Lipschitz gradient property, and the second to last line follows from Jensen’s inequality", "venue": null, "year": 1991 }, { "authors": [], "title": "ANALYSIS OF DECENTRALIZED SGD The following result is a restatement of Lemma 5 of Lian et al. (2017)", "venue": "Lemma D.1. Under Assumption 5.1,", "year": 2017 }, { "authors": [ "Reddi" ], "title": "2016a)), the only difference being that conditional expectations", "venue": null, "year": 2016 }, { "authors": [ "G V" ], "title": "EXPERIMENTAL METHODOLOGY G.1 SSGD EXPERIMENTS The neural network model used for these experiments is LSTNet Lai et al. (2018)with CUDA-aware MPI and extended to use the Stacked SGD training method. The objective function is the squared", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "This work considers the minimization of a differentiable and possible nonconvex objective function:\nmin x∈Rd f(x). (1)\nA generally accepted success criteria for algorithms that use only first-order information is that an approximate stationary point is generated. These are points x ∈ Rd at which the function f has a small gradient. In a typical machine learning scenario, f is the average loss over a dataset of training examples, and the method of choice involves using some form of stochastic gradient method, for instance, stochastic gradient descent, or SGD (see Algorithm 1). The success of SGD in machine learning problems has led to many extensions of the algorithm, including variance-reduced and distributed variants (reviewed in Section 1.1).\nA common approach to stopping optimization in practice is to use early stopping based on a validation function. In this scenario, a stopping criterion is periodically evaluated on the validation function, and the algorithm stops once this criterion is met. The validation and training sets often are disjoint (although this is not required in the present work). Although this approach is used frequently, there is little theoretical work on the runtime of nonconvex optimization using early stopping based on a validation function. In general, the runtime and performance will depend on several factors, including the relation between the validation and training functions, and the desired level of solution accuracy. In this work, the stopping criterion is that the algorithm has generated a point, which is an approximate stationary point for the validation function. Our analysis focuses on bounding the number of iterations and gradient evaluations used until the algorithm meets the stopping criteria. Formally, we consider the stopping time defined as the first time an iterate has the property of being an approximate stationary point for the validation function, and we derive upper bounds on the expected value of this stopping time. Using bounds on the Wasserstein distance between the training and validation sets, we also may derive a bound on the stationary gap of the training function at the resulting iterate. As an extension, we also describe how Wasserstein concentration bounds can be used to bound the stationarity gap with respect to the testing distribution to which both the training and validation datasets are drawn.\nWe apply our analysis to several settings, including stochastic gradient descent (SGD), stochastic variance reduced gradient (SVRG), decentralized SGD (DSGD) and stacked SGD (SSGD). The\nresult is new bounds on the expected number of Incremental First-order Oracle (IFO) calls needed to generate approximate stationary points for some known algorithms (SGD; DSGD; SVRG), as well as a new algorithm (SSGD).\nMain contributions Our main contributions include:\n– We present a non-asymptotic analysis of SGD with early stopping that leads to a bound on the expected number of gradient evaluations needed to find approximate stationary points of the training function (Corollary 3.5). 
The analysis allows for biases in the update direction, subject to a Lyapunov-type inequality on the error terms.\n– We rigorously analyze the expected running time of two distributed SGD algorithms: Decentralized SGD, and Stacked SGD (Algorithm 2), a new decentralized form of SGD designed to exploit connectivity patterns consisting of a network of parameter-server-type clusters (Corollary 4.4).\n– We apply the analysis to nonconvex SVRG to obtain a bound on the expected number of IFO calls for this algorithm (Corollary 6.2).\n– We demonstrate how Wasserstein concentration bounds can be leveraged to bound the generalization performance of parameters returned by SGD with early stopping (Corollary 7.2)." }, { "heading": "1.1 RELATED WORK", "text": "The study of stochastic optimization goes back (at least) to the pioneering efforts of Robbins and Monro (Robbins & Monro, 1951). Subsequent developments include the ordinary differential equation (ODE) method (Ljung, 1977) and stochastic approximation (Kushner & Clark, 1978), which emphasize the asymptotic behavior of the algorithms.\nThe subject of non-asymptotic performance guarantees has attracted interest as well, including lower bounds on algorithm performance (Nemirovski & Yudin, 1983). For a review of non-asymptotic guarantees for convex SGD, the reader may consult Nemirovski et al. (2009); Rakhlin et al. (2012); Bach & Moulines (2013). Extensions such as distributed (Zinkevich et al., 2010) and asynchronous (Zhang et al., 2015; Agarwal & Duchi, 2011) variants of the algorithm have also been investigated. Many machine learning optimization problems involve an objective represented as a finite sum of functions, and, for this case, variance reduction techniques lead to improved rates of convergence over SGD (Johnson & Zhang, 2013; L. Roux et al., 2012; Defazio et al., 2014).\nThe randomized stochastic gradient (RSG) method (Ghadimi & Lan, 2013) uses randomization to obtain a non-asymptotic performance guarantee for SGD. The randomization technique has become a standard tool for analyzing optimization algorithms in the nonconvex setting (Ghadimi & Lan, 2016; Lian et al., 2017; Allen-Zhu, 2018a;b; Reddi et al., 2016a;b; Zhang et al., 2016; Lei et al., 2017; Lian et al., 2015). Follow-up works have included analysis of nonconvex optimization in more sophisticated algorithmic settings, such as asynchronous (Lian et al., 2015) and decentralized (Lian et al., 2017) optimization. Analysis of variance-reduced optimization has been extended beyond convex functions, beginning with an application to principal components analysis (Shamir, 2015) and later to general nonconvex functions (Allen-Zhu & Hazan, 2016; Reddi et al., 2016a;b; Lei et al., 2017). A highlight in this area is that the IFO complexity of SVRG for nonconvex functions is superior to that of RSG (Allen-Zhu & Hazan, 2016; Reddi et al., 2016a). Algorithms with better convergence rates than SVRG have also been developed (Allen-Zhu, 2018a;b).\nWe are particularly interested in results for biased SGD, e.g., Bertsekas & Tsitsiklis (2000), which considers the asymptotic convergence of biased SGD to stationary points. In this work, we are interested in a similar scenario but instead focus on the non-asymptotic behavior of the algorithm.\nSeveral recent works have also explored the average amount of resources needed to reach a desired performance level in optimization. The expected running time of a stochastic trust region algorithm (STORM) is given in Blanchet et al. (2016).
A similar methodology has also been used to analyze stochastic line search methods (Paquette & Scheinberg, 2018). Our convergence analysis is similar in spirit to these works, as we are also interested in the expected amount of time or other resources required to meet the performance guarantee. However, the algorithms and assumptions in this work differ.\nOther recent work has analyzed theoretical aspects of early stopping. For instance, Duvenaud et al. (2016) developed an interpretation of early stopping in terms of variational Bayesian inference. Early stopping for a least squares problem in a reproducing kernel Hilbert space has been treated in Lin & Rosasco (2016). The implications of early stopping for generalization were studied in Hardt et al. (2016). However, to our knowledge, this work is the first to analyze the algorithms' runtimes when using a validation function for early stopping in nonconvex optimization." }, { "heading": "2 PRELIMINARIES", "text": "Let f : R^q × R^d → R be a loss function whose value we denote by f(y, x). Intuitively, the variable y represents an input/output pair, and x represents the model parameters. Throughout, we shall assume the gradient of the objective function with respect to x is Lipschitz continuous (defined as follows).\nAssumption 2.1. The function f : R^q × R^d → R is bounded from below by f* ∈ R, and the derivative of f with respect to x is L-Lipschitz continuous:\n∀y ∈ R^q, x_1, x_2 ∈ R^d: ‖∇_x f(y, x_1) − ∇_x f(y, x_2)‖ ≤ L‖x_1 − x_2‖.\nAt times, we will make a distinction between the training function f_T, which is used to calculate gradients, and a validation function f_V used to decide when to stop training.\nAssumption 2.2. The training function f_T is defined using a set Y_T ⊆ R^q of n_T elements as f_T(x) = (1/n_T) Σ_{y ∈ Y_T} f(y, x), and the validation function f_V is defined using a set Y_V ⊆ R^q of n_V elements as f_V(x) = (1/n_V) Σ_{y ∈ Y_V} f(y, x).\nTo guarantee that the early stopping rule leads to a well-defined algorithm, we will assume a relation between the training and validation functions. Intuitively, the functions f_T and f_V will be similar when the datasets Y_T and Y_V are similar. Formally, the datasets Y_T and Y_V determine probability measures µ_T and µ_V, defined as µ_T = (1/n_T) Σ_{y ∈ Y_T} δ_y and µ_V = (1/n_V) Σ_{y ∈ Y_V} δ_y, respectively, where δ_y is the delta measure δ_y(A) = 1_{y ∈ A} for all sets A. We can compare these measures using the Wasserstein distance, as follows.\nFor q ≥ 1, p ≥ 1, we denote by P_p(R^q) the probability measures on R^q with finite moments of order p. Recall that a coupling of probability measures µ_1 and µ_2 is a probability measure γ on R^q × R^q such that for all measurable sets A, γ(A × R^q) = µ_1(A) and γ(R^q × A) = µ_2(A). Intuitively, a coupling transforms data distributed like µ_1 into a dataset that is distributed according to µ_2. The p-Wasserstein distance on P_p(R^q), denoted by d_p, is defined as:\nd_p(µ_1, µ_2) = inf_{γ ∈ Γ(µ_1, µ_2)} ( E_{(x_1, x_2) ∼ γ} [ ‖x_1 − x_2‖^p ] )^{1/p}, (2)\nwhere Γ(µ_1, µ_2) is the set of all couplings of µ_1 and µ_2. For more details the reader is referred to Villani (2008).\nAssumption 2.3. There is a constant G ≥ 0 such that\n∀x ∈ R^d: ‖∇f_V(x) − ∇f_T(x)‖ ≤ G d_1(µ_V, µ_T).\nThere are several cases in which this assumption will be satisfied. It is trivially satisfied if the training and validation sets are the same, as then µ_T = µ_V. As a consequence of the Kantorovich duality formula (Villani (2008), Remark 6.5), it is also satisfied if the function y ↦ ∇_x f(y, x) is a G-Lipschitz function, uniformly for all x.
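To make Equation (2) concrete: for two one-dimensional empirical measures with the same number of uniformly weighted atoms, the optimal coupling matches order statistics, so d_p can be computed in closed form. The following minimal numpy sketch is our own illustration (the function name and the toy data are not from the original text):

```python
import numpy as np

def empirical_wasserstein_1d(xs, ys, p=1):
    """p-Wasserstein distance between two 1-D empirical measures with the
    same number of uniformly weighted atoms. For equal-size samples on the
    real line the optimal coupling sorts both supports, so
    d_p(mu_1, mu_2)^p = mean(|x_(i) - y_(i)|^p) over order statistics."""
    xs = np.sort(np.asarray(xs, dtype=float))
    ys = np.sort(np.asarray(ys, dtype=float))
    assert xs.shape == ys.shape, "this sketch assumes equal-size supports"
    return float(np.mean(np.abs(xs - ys) ** p) ** (1.0 / p))

# Identical datasets are at distance 0; shifting every atom by c gives distance c.
rng = np.random.default_rng(0)
y = rng.normal(size=1000)
print(empirical_wasserstein_1d(y, y))        # ~0.0
print(empirical_wasserstein_1d(y, y + 0.3))  # ~0.3
```

In the setting of Assumption 2.3, d_1(µ_V, µ_T) measures how far the validation set is from the training set; the assumption then converts this distance into a bound on the gradient mismatch between f_V and f_T.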
Consider the following example.\nExample 2.4. Suppose that g : R^q × R^d → R is a smooth function. Let h : R^d → R^d be the function that applies the hyperbolic tangent to each of its components, h(x) = (tanh(x_1), . . . , tanh(x_d)), and define f(y, x) = g(y, h(x)). Further suppose that the training data are bounded: ‖y‖ ≤ J for all y ∈ Y_T. Then Assumption 2.3 is satisfied, because y ↦ ∇_x f(y, x) is a Lipschitz function with G = sup_{‖y‖ ≤ J, ‖x‖ ≤ √d} ‖∂²g/∂x∂y (y, x)‖.\nIn our analyses, the notion of success is that an algorithm generates an approximate stationary point.\nDefinition 2.5. A point x ∈ R^d is an ε-approximate stationary point of f if ‖∇f(x)‖² ≤ ε.\nWe measure the complexity of algorithms according to how many function value and gradient queries they make. Formally, an incremental first-order oracle (IFO) is defined as follows (Agarwal & Bottou, 2015):\nDefinition 2.6. An IFO takes a parameter x and an input y and returns the pair (f(y, x), ∇_x f(y, x)).\nIn Appendix A.1, we briefly recall the notions of filtration, stopping times, and other concepts from stochastic processes that will be used in our analyses. We also refer readers to Williams (1991) for more details." }, { "heading": "3 BIASED SGD", "text": "This section details our analysis of SGD with early stopping, shown in Algorithm 1. Starting from an initial point x_1, at each iteration the parameter is updated with an approximate gradient h_n using a step-size η. The gradient norm of the validation function is evaluated every m iterations, and the algorithm ends when the norm decreases below a threshold ε.\nAlgorithm 1 SGD with early stopping\n1: input: Initial point x_1 ∈ R^d\n2: t = 1\n3: while ‖∇f_V(x_t)‖² > ε do\n4:   for n = t to t + m − 1 do\n5:     x_{n+1} = x_n − η h_n\n6:   end for\n7:   t = t + m\n8: end while\n9: return x_t\nWe assume that the update direction h_t is a sum of two components, v_t and Δ_t, that represent an unbiased gradient estimate and an error term, respectively:\nh_t = v_t + Δ_t. (3)\nLet {F_t}_{t ≥ 0} be a filtration such that x_1 is F_0-measurable and, for all t > 1, the variables v_t, Δ_t are F_t-measurable. Our assumptions on the v_t are as follows.\nAssumption 3.1. For any t ≥ 1, it holds that\nE[v_t − ∇f_T(x_t) | F_{t−1}] = 0, (4)\nE[‖v_t − ∇f_T(x_t)‖² | F_{t−1}] ≤ σ_v². (5)\nAssumption 3.1 states that the update directions v_t are valid approximations to the gradient ∇f_T(x_t), and it also bounds the error in the approximation. For the variables Δ_t we assume the following.\nAssumption 3.2. There are sequences of random variables V_1, V_2, . . . and U_1, U_2, . . . such that for all t ≥ 1 the variables V_t and U_t are F_t-measurable, ‖Δ_t‖² ≤ V_t, and the V_t satisfy the following Lyapunov-type inequality: for constants α ∈ [0, 1) and β ≥ 0,\nV_1 ≤ β, (6)\n∀t ≥ 2: V_t ≤ α V_{t−1} + U_{t−1}, (7)\n∀t ≥ 1: E[U_t | F_{t−1}] ≤ β. (8)\nAssumption 3.2 models a scenario where the gradient error dynamics is a combination of contracting and expanding behaviors. Contraction shrinks the error and is represented by the factor α. External noise, represented by the U_t terms, prevents the error from vanishing completely. Note that the assumption is satisfied in the unbiased case by simply setting V_t = 0.
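As a concrete reference point, here is a minimal Python sketch of Algorithm 1. It is our own illustration: the callables grad_estimate (producing the possibly biased direction h_t) and grad_val (the full validation gradient ∇f_V) are assumed to be supplied by the user, and max_iters is a safeguard not present in the algorithm itself.

```python
import numpy as np

def sgd_early_stopping(x1, grad_estimate, grad_val, eta, m, eps, max_iters=10**6):
    """Sketch of Algorithm 1 (SGD with early stopping).

    grad_estimate(x) -> update direction h_t (unbiased part plus error term),
    grad_val(x)      -> gradient of the validation function f_V at x,
    eta              -> step size, m -> validation check period,
    eps              -> threshold on the squared validation gradient norm.
    Returns the first checked iterate x with ||grad f_V(x)||^2 <= eps,
    together with its 1-indexed iteration count, mimicking tau(eps).
    """
    x = np.asarray(x1, dtype=float)
    for t in range(1, max_iters + 1, m):
        if float(np.sum(grad_val(x) ** 2)) <= eps:  # stopping test (Line 3)
            return x, t
        for _ in range(m):                          # m inner SGD steps (Lines 4-6)
            x = x - eta * grad_estimate(x)
    raise RuntimeError("stopping criterion not met within max_iters")
```

Note that the stopping test is only evaluated at iterations t ≡ 1 (mod m), matching the definition of the stopping time τ(ε) below.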
We can now state our result on the expected number of iterations for SGD with early stopping.\nProposition 3.3. Let {x_t}_{t ≥ 1} be as in Algorithm 1. Let Assumptions 2.1, 2.2, 2.3, 3.1, and 3.2 hold. For ε > 0, let τ(ε) be the stopping time τ(ε) = inf{n ≥ 1 | n ≡ 1 (mod m) and ‖∇f_V(x_n)‖² ≤ ε}. Suppose that η ≤ 1/L and ε − 2Lmησ_v² − 2mβ/(1−α) − G²d_1(µ_V, µ_T)² > 0. Then\nE[τ(ε)] ≤ ( η G²d_1(µ_V, µ_T)² + 2(f_T(x_1) − f*) + ηε/m + 2ηβ/(1−α) ) / ( ηε/m − 2Lη²σ_v² − 2ηβ/(1−α) − η G²d_1(µ_V, µ_T)²/m ). (9)\nFurthermore, it holds with probability 1 that ‖∇f_T(x_{τ(ε)})‖² ≤ 2ε + 2G²d_1(µ_V, µ_T)².\nThis result can be strengthened by assuming a coupling between the step-size and the expansion bound β, as demonstrated in the next corollary.\nCorollary 3.4. Let Assumptions 2.1, 2.2, 2.3, 3.1, and 3.2 hold. In the context of Proposition 3.3, let the constant β be of the form β = ηR for some R ≥ 0, and suppose that ε > G²d_1(µ_V, µ_T)². Let c ∈ (0, 1) and let the step-size be\nη = c · min{ 1/L, (ε − G²d_1(µ_V, µ_T)²) / (m(2Lσ_v² + 2R/(1−α))) }. (10)\nThen\nE[τ(ε)] = O( m²(1 + R/(1−α)) / ((1−c) c (ε − G²d_1(µ_V, µ_T)²)²) ). (11)\nSee the appendix for the complete formula, including lower-order terms.\nFrom this corollary we see that when β is proportional to the step-size η, the complexity bound is of the order O(1/ε²). As we shall see below, this coupling assumption is satisfied for Stacked SGD. For the case of using SGD to minimize a finite sum with unbiased gradient estimates, we obtain the following.\nCorollary 3.5. Let Assumptions 2.1, 2.2, and 2.3 hold. Suppose each gradient estimate is obtained by selecting a data point y_t ∈ Y_T uniformly at random and setting v_t = ∇_x f(y_t, x_t), and that there is a σ_v² ≥ 0 such that ∀ y′ ∈ Y_T, x ∈ R^d: ‖∇_x f(y′, x) − (1/n_T) Σ_{y ∈ Y_T} ∇_x f(y, x)‖² ≤ σ_v². If the step-sizes are defined according to Equation (20) using c = 1/2, then the expected number of IFO calls used by SGD before reaching an ε-approximate stationary point is\nE[IFO(ε)] = O( (m n_V + m²) / (ε − G²d_1(µ_V, µ_T)²)² + n_V ).\nSee the appendix for the complete formula, including lower-order terms.\nNote that when d_1(µ_V, µ_T) is on the order of √ε, this result states that the expected IFO complexity is O(1/ε²). This can be compared with the RSG algorithm, where O(1/ε²) iterations are sufficient for the expected (squared) norm of the gradient at a random iterate to be at most ε (Cor. 2.2 in Ghadimi & Lan (2013)).
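To get a feel for the step-size prescription of Corollary 3.4 (Equation (10), which has the same form as Equation (20) with the appropriate R), the following snippet evaluates it; all numeric values here are our own illustrative placeholders, not measurements from the paper.

```python
def early_stopping_step_size(L, sigma_v2, R, alpha, m, eps, g_d1_sq, c=0.5):
    """eta = c * min(1/L, (eps - G^2 d1^2) / (m (2 L sigma_v^2 + 2 R/(1-alpha)))).
    g_d1_sq stands for G^2 * d1(mu_V, mu_T)^2."""
    assert 0 < c < 1 and eps > g_d1_sq, "the corollary requires eps > G^2 d1^2"
    return c * min(1.0 / L,
                   (eps - g_d1_sq) / (m * (2.0 * L * sigma_v2 + 2.0 * R / (1.0 - alpha))))

# Unbiased case (R = 0, as in Corollary 3.5) with illustrative constants:
print(early_stopping_step_size(L=10.0, sigma_v2=1.0, R=0.0, alpha=0.0,
                               m=100, eps=1e-2, g_d1_sq=1e-3))
```

As ε shrinks toward G²d_1(µ_V, µ_T)², the prescribed step size, and with it the bound in Equation (11), degrades; this matches the requirement ε > G²d_1(µ_V, µ_T)².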
" }, { "heading": "4 STACKED SGD", "text": "In this section, we introduce SSGD for decentralized optimization, which involves a distributed system with two types of nodes: workers and communicators. [Figure: communication pattern for a hypothetical system of 16 workers and 4 communicators; workers (circles) are grouped into clusters, each containing a communicator (triangle).]\nAlgorithm 2 depicts the steps of SSGD. Every m epochs, the gradient of the validation function is computed at the mean of the communication-node parameters. The algorithm stops when the norm of this gradient drops below a threshold. This step is naturally carried out by one of the communication nodes, because they can store the average computed during Line 9 of the algorithm. For workers, each iteration begins on Line 6 with one step of SGD. Then, on Line 7, an averaging step is performed to partially synchronize the model parameters within the local cluster. The steps for a communication node begin on Line 9 with an averaging step among the communication nodes. On Line 10, there is a partial synchronization with the worker nodes in the cluster.\nThe key to the algorithm's efficiency is that the averaging among communication nodes can happen in parallel with the gradient descent steps occurring within the clusters. In the naive approach to parallelizing SGD, all nodes must block after each iteration to synchronize their parameters.\nWe assume there are M ≥ 1 clusters, each containing K ≥ 1 computation nodes and 1 communication node. Thus, there are M(K + 1) total nodes. Given a worker node 1 ≤ i ≤ KM, we let c(i) ∈ {1, . . . , M} denote the index of the communication node of the group containing worker i.\nAlgorithm 2 SSGD with early stopping\n1: input: Node id i, initial parameters x^i_1 (for workers) or x̂^i_1 (for communicators)\n2: t = 1\n3: while ‖∇f_V((1/M) Σ_{j=1}^M x̂^j_t)‖² > ε do\n4:   for n = t to t + m − 1 do\n5:     if node i is a computation node then\n6:       x^i_{n+1/2} = x^i_n − η v^i_n\n7:       x^i_{n+1} = (1/(K+1)) ( x̂^{c(i)}_{n+1/2} + Σ_{j=1}^K x^j_{n+1/2} )\n8:     else (node i is a communication node)\n9:       x̂^i_{n+1/2} = (1/M) Σ_{j=1}^M x̂^j_n\n10:      x̂^i_{n+1} = (1/(K+1)) ( x̂^i_{n+1/2} + Σ_{j ∈ c^{−1}(i)} x^j_{n+1/2} )\n11:    end if\n12:  end for\n13:  t = t + m\n14: end while\n15: return x̂_{t+1/2}\n[Figure: SSGD results for a climate modeling task. Top: throughput (data per second) vs. number of nodes for SGD and SSGD. Bottom: comparison of the training loss f_T(x_t) of the two algorithms.]\nFor the analysis, we define the filtration {F_t}_{t ≥ 0} as follows:\nF_0 = σ( {x^i_1 | 1 ≤ i ≤ KM} ∪ {x̂^i_1 | 1 ≤ i ≤ M} ),\n∀t ≥ 1: F_t = σ( {x^i_1, v^i_n | 1 ≤ n ≤ t, 1 ≤ i ≤ KM} ∪ {x̂^i_1 | 1 ≤ i ≤ M} ).\nWe assume that the gradient estimates used in SSGD are unbiased and have bounded variance.\nAssumption 4.1. For any t ≥ 1 and 1 ≤ i ≤ KM,\nE[v^i_t − ∇f_T(x^i_t) | F_{t−1}] = 0, (12)\nE[‖v^i_t − ∇f_T(x^i_t)‖² | F_{t−1}] ≤ σ_v². (13)\nThe first step in our analysis is a bound on the dispersion of the parameters across the system.\nProposition 4.2. Let Assumptions 2.1, 2.2, and 4.1 hold, and let the variables x̂^i_t be as defined in Algorithm 2. Suppose the step-size satisfies η < 1/(2LK). Define the variables V_1, U_1, V_2, U_2, . . . and the constants α, β as follows:\nV_t = (L²/M²) Σ_{i=1}^M Σ_{j=1}^M ‖x̂^i_t − x̂^j_t‖², (14a)\nU_t = (ηKL/(K + 5/8)) (1/M²) Σ_{i=1}^M Σ_{j=1}^M ‖ ( (1/K) Σ_{k ∈ c^{−1}(i)} v^k_t − ∇f_T(x̂^i_t) ) − ( (1/K) Σ_{k ∈ c^{−1}(j)} v^k_t − ∇f_T(x̂^j_t) ) ‖², (14b)\nα = (K + 3/4)² / (K + 1)², (14c)\nβ = η · 4Lσ_v² / (K + 5/8). (14d)\nThen, for all t ≥ 1, it holds that V_{t+1} ≤ α V_t + U_t and E[U_t | F_{t−1}] ≤ β.\nIn the preceding definitions, V_t represents the dispersion of the parameter values across different nodes. An averaging step tends to reduce the dispersion by a factor of α, while the independent gradient updates at each node may increase the parameter dispersion by an amount β. This result allows us to model SSGD as a form of biased SGD, leading to the following.\nProposition 4.3. Let Assumptions 2.1, 2.2, 2.3, and 4.1 hold. Assume that the initial parameters at every node are equal: x^i_1 = x^j_1 for all 1 ≤ i, j ≤ KM, and x̂^i_1 = x̂^j_1 for 1 ≤ i, j ≤ M. For some c < 1/(2K), suppose that the step-size η is\nη = c · min{ 1/L, (ε − G²d_1(µ_V, µ_T)²) / (m(2Lσ_v²/K + 32Lσ_v²(K + 1)/(K + 5/8))) }. (15)\nLet x̂_t be the average of the communicator states at time t: x̂_t = (1/M) Σ_{i=1}^M x̂^i_t. For ε > 0, define τ(ε) to be the stopping time τ(ε) = inf{n ≥ 1 | n ≡ 1 (mod m) and ‖∇f_V(x̂_n)‖² ≤ ε}. Then\nE[τ(ε)] = O( m² / ((1−c) c (ε − G²d_1(µ_V, µ_T)²)²) ).\nRefer to the Appendix for the complete formula, including lower-order terms.
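The round structure of Algorithm 2 is easy to simulate serially. The sketch below is our own illustration of Lines 5–11 for one iteration, with all worker parameters stored in a single (M, K, d) array; it is not the distributed MPI implementation used in the experiments.

```python
import numpy as np

def ssgd_round(X, Xhat, grads, eta):
    """One SSGD iteration (Lines 5-11 of Algorithm 2), simulated serially.

    X     : (M, K, d) worker parameters, grouped by cluster,
    Xhat  : (M, d)    communicator parameters,
    grads : (M, K, d) stochastic gradients v_n^i for the workers.
    Returns the new worker and communicator parameters.
    """
    M, K, d = X.shape
    X_half = X - eta * grads                                   # worker SGD step (Line 6)
    Xhat_half = np.broadcast_to(Xhat.mean(axis=0), (M, d))     # communicator averaging (Line 9)
    # Partial synchronization within each cluster (Lines 7 and 10): every node in
    # cluster i moves to the average of its K workers and its communicator.
    cluster_avg = (Xhat_half + X_half.sum(axis=1)) / (K + 1)
    X_new = np.repeat(cluster_avg[:, None, :], K, axis=1)
    return X_new, cluster_avg.copy()
```

Because Line 9 involves only the communicators, a distributed implementation can overlap it with the workers' gradient step; this is the source of the scalability advantage discussed above.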
Proposition 4.3 leads to a bound on the expected IFO complexity of minimizing a finite sum using SSGD.\nCorollary 4.4. Let Assumptions 2.1, 2.2, and 2.3 hold. Suppose each gradient estimate is obtained by selecting a data point y^j_t ∈ Y_T uniformly at random and setting v^j_t = ∇_x f(y^j_t, x^j_t), and that there is a σ_v² ≥ 0 such that ∀ y′ ∈ Y_T, x ∈ R^d: ‖∇_x f(y′, x) − (1/n_T) Σ_{y ∈ Y_T} ∇_x f(y, x)‖² ≤ σ_v². If the step-sizes are defined as in Equation (15) with c = 1/(4K), then the expected number of IFO calls used by SSGD before reaching an ε-approximate stationary point is\nE[IFO(ε)] = O( mK(n_V + mK) / (ε − G²d_1(µ_V, µ_T)²)² + n_V ).\nExperimental Result. SSGD has been implemented to train a neural network model as part of research into spatio-temporal data analysis for climate research. The model is LSTNet, a neural network architecture that includes a convolutional component to extract short-term local dependency patterns from spatial variables and a recurrent component to discover long-term patterns in time series trends (Lai et al., 2018). LSTNet is trained to predict solar radiation from past sensor measurements. The figure above shows results from the experiment. The upper plot compares the throughput of SGD and SSGD as the number of nodes increases. We see that the performance of SGD degrades after 64 nodes, while SSGD maintains near-linear scalability. The lower plot shows that although SSGD involves biased gradients, it does not sacrifice accuracy and yields prediction errors similar to SGD. See Appendix G.1 for more details about the experimental methodology." }, { "heading": "5 DECENTRALIZED SGD", "text": "In this section we show how our methodology can be applied to Decentralized SGD (DSGD), another variant of distributed SGD. Recently, DSGD was analyzed using randomization (Lian et al., 2017). In this section we complement that analysis by studying the expected running time of the algorithm.\nThe steps of DSGD are shown in Algorithm 3. The procedure involves M > 0 worker nodes that participate in the optimization. A communication matrix a describes the connectivity among the workers; a_{i,j} > 0 means that workers i and j communicate after each gradient descent step. At each step of the optimization, every node computes a weighted average of the parameters in its local neighborhood, as determined by the connectivity matrix. This is combined with a local gradient approximation to obtain the new parameter at the worker. The value returned by the algorithm (assuming the termination criterion is met) is the average of the parameters throughout the system, denoted x̄_t:\nx̄_t = (1/M) Σ_{i=1}^M x^i_t. (16)\nEvery m epochs, the norm of the gradient of the validation function is evaluated at the average parameter, and the algorithm terminates when this norm falls below a threshold.\nAlgorithm 3 DSGD with early stopping\n1: input: Initial parameters x^i_1 for 1 ≤ i ≤ M\n2: t = 1\n3: while ‖∇f_V(x̄_t)‖² > ε do\n4:   for n = t to t + m − 1 do\n5:     x^i_{n+1} = Σ_{j=1}^M a_{i,j} x^j_n − η v^i_n\n6:   end for\n7:   t = t + m\n8: end while\n9: return x̄_t\nThe intuitive justification for DSGD is that it may be more efficient than naive approaches to parallelizing SGD, since whenever a_{i,j} = 0 the nodes i and j need not communicate. In Lian et al. (2017) the authors offer theoretical support for the superiority of DSGD. In the present work, our goal is to analyze the expected running time of DSGD, as an example of how the abstract theory developed above may be applied in practice. We leave comparisons of the algorithms for future work. For the analysis, we define the filtration {F_t}_{t ≥ 0} as follows:\nF_0 = σ( {x^i_1 | 1 ≤ i ≤ M} ),\n∀t ≥ 1: F_t = σ( {x^i_1, v^i_n | 1 ≤ n ≤ t, 1 ≤ i ≤ M} ).
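Line 5 of Algorithm 3 is a single gossip-plus-gradient operation per node; applied to all nodes at once it is one matrix product. A minimal serial simulation of one DSGD step (our own illustration) looks as follows:

```python
import numpy as np

def dsgd_step(X, A, grads, eta):
    """One DSGD update (Line 5 of Algorithm 3) for all workers at once.

    X     : (M, d) worker parameters,
    A     : (M, M) symmetric stochastic communication matrix a,
    grads : (M, d) stochastic gradients v_n^i.
    Returns the new parameters and their system-wide average (Equation (16)).
    """
    X_new = A @ X - eta * grads  # gossip averaging plus local gradient step
    return X_new, X_new.mean(axis=0)
```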
The connectivity matrix a is subject to the same conditions as in Lian et al. (2017).\nAssumption 5.1. The M × M connectivity matrix a is symmetric and stochastic. The spectral quantity ρ, defined as ρ = (max{|λ_2(a)|, |λ_M(a)|})², is assumed to satisfy ρ < 1.\nTo make the proofs clear and concise, we assume the parameters at each node are single real numbers; that is, throughout this section we take d = 1 in the assumptions above.\nWe also assume that the gradient estimates used at each worker are unbiased and have bounded variance.\nAssumption 5.2. For any t ≥ 1 and 1 ≤ i ≤ M,\nE[v^i_t − ∇f_T(x^i_t) | F_{t−1}] = 0, (17)\nE[|v^i_t − ∇f_T(x^i_t)|² | F_{t−1}] ≤ σ_v². (18)\nFor the analysis, we show that the sequence of averages x̄_t for t = 1, 2, . . . can be modeled as being generated by a biased version of SGD, using the tools from Section 3. This involves showing that the distance between the local parameter values x^i_t and the system average can be controlled, as in the following.\nProposition 5.3. Let Assumptions 2.1, 2.2, 5.1, and 5.2 hold. Suppose the step-size satisfies η ≤ (1 − √ρ)/(4√2 L). Define the variables V_1, U_1, V_2, U_2, . . . and the constants α, β as follows:\nV_t = (L²/M) Σ_{i=1}^M |x^i_t − x̄_t|², (19a)\nU_t = (8η²L²(1 + ρ)/(M(1 − ρ))) Σ_{i=1}^M |v^i_t − ∇f_T(x^i_t)|², (19b)\nα = ((1 + ρ)/(2ρ)) ((√ρ + 1)/2)², (19c)\nβ = √2 η L σ_v² / (1 − ρ). (19d)\nThen for all t ≥ 1 it holds that V_{t+1} ≤ α V_t + U_t and E[U_t | F_{t−1}] ≤ β.\nUsing this result on the dispersion of the parameters, we can move to the main result on decentralized SGD. The result gives conditions that guarantee the expected time E[τ(ε)] is finite, and it also bounds this time in terms of the problem data. Notably, it shows a dependence on ρ, which governs the mixing rate of the connectivity matrix.\nProposition 5.4. Let Assumptions 2.1, 2.2, 2.3, 5.1, and 5.2 hold. Assume that the initial parameters at every node are equal: x^i_1 = x^j_1 for all 1 ≤ i, j ≤ M. Let c ≤ (1 − √ρ)/(4√2), define R = √2 L σ_v²/(1 − ρ), and let α be as in Equation (19c). Let the step-size be\nη = c · min{ 1/L, (ε − G²d_1(µ_V, µ_T)²) / (m(2Lσ_v² + 2R/(1 − α))) }. (20)\nFor ε > 0, define τ(ε) to be the first checked time at which the norm of the gradient of the validation function falls below the threshold; that is, τ(ε) = inf{n ≥ 1 | n ≡ 1 (mod m) and ‖∇f_V(x̄_n)‖² ≤ ε}. Then\nE[τ(ε)] = O( m²(1 + R/(1 − α)) / ((1 − c) c (ε − G²d_1(µ_V, µ_T)²)²) ). (21)\nNote that in the above result, the order of the convergence is the same as for regular SGD and Stacked SGD. An interesting avenue for future work would be to explore whether it is possible to obtain bounds in which the step-size condition does not depend on the epoch length m.
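The spectral quantity ρ in Assumption 5.1 is easy to check numerically for a given topology. The sketch below (our own illustration, with an arbitrary choice of averaging weights) builds a symmetric stochastic mixing matrix for a ring of M nodes and evaluates ρ:

```python
import numpy as np

def ring_mixing_matrix(M, self_weight=0.5):
    """Symmetric stochastic matrix for a ring of M nodes; each node keeps
    self_weight of its own parameter and splits the rest between its two
    neighbours (an illustrative choice of weights)."""
    A = np.eye(M) * self_weight
    w = (1.0 - self_weight) / 2.0
    for i in range(M):
        A[i, (i - 1) % M] += w
        A[i, (i + 1) % M] += w
    return A

def spectral_rho(A):
    """rho = (max{|lambda_2(A)|, |lambda_M(A)|})^2 from Assumption 5.1."""
    lams = np.sort(np.linalg.eigvalsh(A))[::-1]  # eigenvalues in descending order
    return float(max(abs(lams[1]), abs(lams[-1])) ** 2)

print(spectral_rho(ring_mixing_matrix(8)))  # strictly below 1, so Lemma D.1 applies
```

As M grows for a fixed sparse topology such as the ring, ρ approaches 1 and the admissible step-size in Proposition 5.4 shrinks, which is the expected price of slower mixing.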
" }, { "heading": "6 SVRG", "text": "We demonstrate how the method can be applied to a variant of SVRG (Johnson & Zhang, 2013) with early stopping, shown in Algorithm 4. Each epoch begins with a full gradient computation (Line 4), and then an inner loop runs for m steps. The first step of the inner loop is to choose a random data point (Line 7). Then, the update direction is computed (Line 8) and used to obtain the next parameter (Line 9).\nAlgorithm 4 SVRG with early stopping\n1: input: Initial point x^1_m ∈ R^d\n2: for s = 1, 2, . . . do\n3:   x^{s+1}_0 = x^s_m\n4:   g^{s+1} = (1/n_T) Σ_{y ∈ Y_T} ∇f(y, x^{s+1}_0)\n5:   if ‖g^{s+1}‖² ≤ ε then return x^{s+1}_0\n6:   for t = 0 to m − 1 do\n7:     Sample y^s_t uniformly at random from Y_T\n8:     v^s_t = ∇f(y^s_t, x^{s+1}_t) − ∇f(y^s_t, x^{s+1}_0) + g^{s+1}\n9:     x^{s+1}_{t+1} = x^{s+1}_t − η v^s_t\n10:  end for\n11: end for\nCombining some existing bounds for SVRG with our stopping time approach yields the following bound on the expected number of epochs until SVRG with early stopping terminates.\nProposition 6.1. Let Assumptions 2.1 and 2.2 hold and consider the variables x^{s+1}_t defined by Algorithm 4. Let ξ = 1/4 and suppose that the step-size is set to η = ξ/(L n_T^{2/3}) and the epoch length is m = ⌊n_T/(3ξ)⌋. For ε > 0, define τ(ε) to be the stopping time τ(ε) = inf{s ≥ 1 | ‖∇f_T(x^{s+1}_0)‖² ≤ ε}. Then E[τ(ε)] ≤ 1 + 40 L n_T^{2/3} (f_T(x^1_m) − f*)/ε.\nNote that Proposition 6.1 counts the number of epochs until an approximate stationary point is generated. A bound on the number of IFO calls can be obtained by multiplying τ(ε) by the number of IFO calls per epoch, which is n_T + 2m. This immediately leads to the following result.\nCorollary 6.2. Let Assumptions 2.1 and 2.2 hold and suppose the step-size η and epoch length m are defined as in Proposition 6.1. Then the expected number of IFO calls until SVRG returns an ε-approximate stationary point is E[IFO(ε)] = O( n_T^{5/3}/ε + n_T ).\nThis result may be compared with Cor. 4 of Reddi et al. (2016a), which concerns an upper bound on the IFO calls needed for the expected (squared) norm of the gradient at a randomly selected iterate to be less than ε. Our result concerns the expected number of IFO calls before the algorithm terminates with an iterate that is guaranteed to be an approximate stationary point with probability 1, a stronger property.\n[Figure: IFO calls to reach approximate stationarity, as a function of the threshold on ‖∇f_T‖², for MNIST (top) and CIFAR-10 (bottom); SGD vs. SVRG.]\nThe figure illustrates the expected IFO complexity of SVRG and SGD. The top plot shows the results of an experiment using a simple logistic classifier on the MNIST dataset, and the bottom shows the result of training a one-layer neural network on the CIFAR-10 dataset. The error bands represent the standard deviation of the measurements over five independent runs. Appendix G.2 includes more details about the experimental methodology. In each case, SVRG is better at obtaining accurate solutions, while for SGD the expected number of IFO calls appears to become unbounded for sufficiently small ε.
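For concreteness, here is a minimal serial sketch of Algorithm 4 (our own illustration; grad_f(y, x) is an assumed user-supplied per-example gradient, and max_epochs is a safeguard not present in the algorithm):

```python
import numpy as np

def svrg_early_stopping(x0, Y, grad_f, eta, m, eps, max_epochs=10**4, seed=0):
    """Sketch of Algorithm 4 (SVRG with early stopping).

    Y       : sequence of training points y in Y_T,
    grad_f  : grad_f(y, x) -> gradient of f(y, .) at x,
    eta, m  : step size and epoch length, e.g. as in Proposition 6.1,
    eps     : threshold on ||grad f_T||^2 at the epoch anchors.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for s in range(1, max_epochs + 1):
        anchor = x.copy()
        g = np.mean([grad_f(y, anchor) for y in Y], axis=0)  # full gradient (Line 4)
        if float(np.sum(g ** 2)) <= eps:                     # stopping test (Line 5)
            return anchor, s
        for _ in range(m):                                   # inner loop (Lines 6-10)
            y = Y[rng.integers(len(Y))]
            x = x - eta * (grad_f(y, x) - grad_f(y, anchor) + g)
    raise RuntimeError("stopping criterion not met within max_epochs")
```

Each epoch costs n_T IFO calls for the anchor gradient plus 2m calls in the inner loop, which is the n_T + 2m factor used above to derive Corollary 6.2.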
" }, { "heading": "7 GENERALIZATION PROPERTIES", "text": "Typically, the training and validation sets are made from independent samples of a test distribution µ, and it is of interest to estimate the model performance on samples from this test distribution. Define f_µ : R^d → R as f_µ(x) = E_µ[f(y, x)]. In this section, we derive a bound on E[‖∇f_µ(x_{τ(ε)})‖²], where the expectation is not only over the variates generated by the optimization, but also over the random choice of the datasets Y_V and Y_T. We show how Wasserstein concentration bounds, which concern the average distance between µ and its empirical versions, can be used for this task. Consider the following from Dereich et al. (2013).\nTheorem 7.1 (Dereich et al. (2013), special case of Theorem 1). Let d ≥ 3 and let µ be a measure on R^d such that J = E_µ[‖y‖³]^{1/3} < ∞. Then there is a constant κ_d such that E[d_2(µ, µ_V)²] ≤ κ_d J n_V^{−3/d}.\nThis leads to a bound on the generalization performance of the iterates returned by SGD with early stopping, whose proof is deferred to the Appendix.\nCorollary 7.2. Let the conditions of Proposition 3.3 hold. Further assume that J = E_µ[‖y‖³]^{1/3} < ∞, the validation set Y_V is an empirical version of µ, and y ↦ ∇_x f(y, x) is uniformly G-Lipschitz. If x_{τ(ε)} is the output of Algorithm 1, then E[‖∇f_µ(x_{τ(ε)})‖²] ≤ 2ε + 2G²κ_d J n_V^{−3/d}.\nNote that for strongly convex optimization, it is possible to obtain rates of convergence of the test error that are independent of the dimension (Hsu & Sabato, 2016). Corollary 7.2 is interesting in that it accounts for data distribution properties (via the third moment J) and does not depend on the number of iterations used in SGD. This result can be compared with Hardt et al. (2016), where the authors proved a bound on the generalization gap for function values in terms of the number of iterations T and the number of samples n_T in the training set. There, the bound is independent of d but increases with T, while our bound is independent of the number of iterations in SGD. It would be interesting to determine whether these two analyses can be combined." }, { "heading": "8 DISCUSSION", "text": "This work presents an analysis of several stochastic gradient methods that use early stopping based on a validation function. We demonstrated that by blending existing analysis techniques with some basic tools related to stopping times, it is possible to bound the expected number of iterations and gradient evaluations needed to generate approximate stationary points. We also considered decentralized optimization and introduced a new algorithm, SSGD, that proved amenable to analysis in our framework. For SSGD, we obtained a convergence rate using our results for biased SGD, and experiments showed that the algorithm has favorable scaling properties compared to basic parallel SGD. Our application to SVRG demonstrated that the theoretical approach can be applied in various settings. We also considered the generalization properties of the output of early stopping. We hope these efforts inspire other works that investigate the theoretical and practical aspects of early stopping." }, { "heading": "APPENDIX: ON THE EXPECTED RUNNING TIME OF NONCONVEX OPTIMIZATION WITH EARLY STOPPING", "text": "" }, { "heading": "A PRELIMINARIES", "text": "Our analyses make use of a quadratic bound for the training function, which follows from Assumption 2.1:\n∀x, v ∈ R^d: f_T(x + v) ≤ f_T(x) + ∇f_T(x)^T v + (L/2)‖v‖². (22)" }, { "heading": "A.1 STOCHASTIC PROCESSES", "text": "The formal setting of a stochastic optimization algorithm involves a probability space (Ω, F, P), consisting of a sample space Ω, a σ-algebra F of subsets of Ω, and a probability measure P on the subsets of Ω that are in F. The algorithm takes an initial point x_1 and defines a sequence of random variables {x_t(ω)}_{t > 1}. Intuitively, Ω represents the random data used by the algorithm, such as the indices used to define mini-batches. For ease of notation we omit the dependence of the random variates in the algorithms on ω ∈ Ω. A filtration {F_t}_{t = 0, 1, . . .} is an increasing sequence of σ-algebras, with the interpretation that F_t represents the information available to the algorithm up to and including time t. A random variable x : Ω → R^d is said to be F_t-measurable if it can be expressed in terms of the state of the algorithm up to and including time t. A rule for stopping an algorithm is represented as a stopping time, which is a random variable τ : Ω → {0, 1, . . . , ∞} with the property that the decision of whether to stop or continue at time n is made based on the information up to and including time n.\nThe following proposition is used throughout our analysis of the different algorithms.\nProposition A.1. Let τ be a stopping time with respect to a filtration {F_t}_{t = 0, 1, . . .}. Suppose there is a number c < ∞ such that τ ≤ c with probability one. Let x_1, x_2, . . .
be any sequence of random variables such that each xt is Ft-measurable and E[‖xt‖] <∞. Then\nE [ τ∑ t=1 xt ] = E [ τ∑ t=1 E [xt | Ft−1] ] . (23)\nProof. This is a consequence of the optional stopping theorem (Theorem 10.10 in Williams (1991)). Define S0 = 0 and for t ≥ 1, let St = t∑ i=1\n(xi − E[xi | Fi−1]). Then S0, S1, . . . is a martingale with respect to the filtration {Ft}t=0,1,..., and the optional stopping theorem implies E[Sτ ] = E[S0]. But E[S0] = 0, and therefore E[Sτ ] = 0, which is equivalent to Equation equation 23." }, { "heading": "B ANALYSIS OF BIASED SGD", "text": "" }, { "heading": "PROOF OF PROPOSITION 3.3", "text": "Proof. For convenience, define the random variables δt for t = 1, 2, . . . as δt = vt−∇fT (xt). From equation 22, it holds that\nfT (xt+1) ≤ fT (xt)− η∇fT (xt)T (∇fT (xt) + δt + ∆t) + L\n2 η2‖∇fT (xt) + δt + ∆t‖2.\nSumming this over t = 1, . . . ,m yields\nfT (xm+1) ≤ fT (x1)− m∑ t=1 η∇fT (xt)T (∇fT (xt) + δt + ∆t)\n+ m∑ t=1 L 2 η2‖∇fT (xt) + δt + ∆t‖2\n= fT (x1)− m∑ t=1 η ( 1− L 2 η ) ‖∇fT (xt)‖2 − m∑ t=1 η(1− Lη)∇fT (xt)T δt\n+ m∑ t=1 L 2 η2‖δt‖2 − m∑ t=1 η(1− Lη)∇fT (xt)T∆t + m∑ t=1 L 2 η2‖∆t‖2\n+ m∑ t=1 Lη2δTt ∆t.\n(24)\nNote that, in general, for any numbers a, b it is the case that |ab| ≤ 12a2 + 12b2. Then\n|δTt ∆t| ≤ ‖δt‖‖∆t‖ ≤ 1\n2 ‖δt‖2 +\n1 2 ‖∆t‖2 (25)\nand\n|∇fT (xt)T∆t| ≤ ‖∇fT (xt)‖‖∆t‖ ≤ 1\n2 ‖∇fT (xt)‖2 +\n1 2 ‖∆t‖2. (26)\nCombining Equations equation 24, equation 25, and equation 26, we obtain\nfT (xm+1) ≤ f(x1)− m∑ t=1 η ( 1− L 2 η − 1 2 (1− Lη) ) ‖∇fT (xt)‖2\n− m∑ t=1 η(1− Lη)∇fT (xt)T δt + N∑ t=1 ( L 2 η2 + 1 2 Lη2 ) ‖δt‖2\n+ N∑ t=1 ( L 2 η2 + 1 2 η(1− Lη) + Lη2 1 2 ) ‖∆t‖2\n= f(x1)− m∑ t=1 η 1 2 ‖∇fT (xt)‖2 − m∑ t=1 η(1− Lη)∇fT (xt)T δt + m∑ t=1 Lη2‖δt‖2\n+ m∑ t=1 1 2 η(1 + Lη)‖∆t‖2.\nRearranging terms and noting that fT (xm+1) ≥ f∗, this yields m∑ t=1 η 1 2 ‖∇fT (xt)‖2 ≤ fT (x1)− f∗ − m∑ t=1 η(1− Lη)∇fT (xt)T δt + m∑ t=1 Lη2‖δt‖2\n+ m∑ t=1 1 2 η(1 + Lη)‖∆t‖2\n≤ f(x1)− f∗ − m∑ t=1 η(1− Lη)∇fT (xt)T δt + m∑ t=1 Lη2‖δt‖2\n+ m∑ t=1 ηVt.\n(27)\nwhere in the second inequality we used the assumptions that η ≤ 1/L and ‖∆t‖2 ≤ Vt. Based on our assumption that ‖∇fV (x)−∇fT (x)‖ ≤ Gd1(µV , µT ), it follows that\n‖∇fV (x)‖2 ≤ 2G2d1(µV , µT )2 + 2‖∇fT (x)‖2. (28)\nAlso note that n∑ t=1 1t≡1 (mod m) = ⌈ n m ⌉ ≤ n m + 1. (29)\nCombining equation 28 and equation 29 results in\nτ( )∧n∑ t=1 1t≡1 (mod m)‖∇fV (xt)‖2 ≤ τ( )∧n∑ t=1 1t≡1 (mod m)G 2d1(µV , µT ) 2\n+ τ( )∧n∑ t=1 1t≡1 (mod m)‖∇fT (xt)‖2\n≤ G2d1(µV , µT )2 (\n(τ( ) ∧ n) m + 1\n) + τ( )∧n∑ t=1 ‖∇fT (xt)‖2.\n(30)\nFor each n ≥ 1 define τ( )∧n to be the stopping time which is the minimum of τ( ) and the constant n. Applying Proposition A.1 and Assumption 4, it holds that\nE τ( )∧n∑ t=1 ∇fT (xt)T δt = 0 (31) and using Proposition A.1 with Assumption 5 gives\nE τ( )∧n∑ t=1 ‖δt‖2 ≤ σ2vE[τ( ) ∧ n]. (32)\nNext, according to conditions equation 6, and equation 7, it holds for any m ≥ 1 and with probability one that\nm∑ t=1 Vt ≤ α m∑ t=1 Vt + m∑ t=1 Ut + β (33)\nand by equation 8 together with Proposition A.1,\nE τ( )∧n∑ t=1 Ut ≤ E[τ( ) ∧ n]β. (34) Combining equation 33 and equation 34, then\nE τ( )∧n∑ t=1 Vt ≤ αE τ( )∧n∑ t=1 Vt + (E[τ( ∧ n)] + 1)β which, upon rearranging, results in\nE τ( )∧n∑ t=1 Vt ≤ (E[τ( ) ∧ n] + 1) β 1− α. 
(35)\nFurthermore, by definition of τ ,\nE τ( )∧n∑ t=1 1t≡1 (mod m)‖∇fV (xt)‖2 ≥ E (τ( )∧n)−1∑ t=1 1t≡1 (mod m)‖∇fV (xt)‖2 \n≥ E[(τ( ) ∧ n)− 1] m\n(36)\nHere we used that n−1∑ t=1 1t≡1 (mod m) = ⌈ n− 1 m ⌉ ≥ n− 1 m\nCombining equation 27, equation 30, equation 31, equation 32, equation 35 and equation 36 results in η\n2m (E[τ( ) ∧ n]− 1) ≤η 2 G2d1(µV , µT ) 2\n( E[τ( ) ∧ n]\nm + 1\n) + fT (x1)− f∗\n+ Lη2σ2vE [τ ( ) ∧ n] + η β\n1− α (E[τ( ) ∧ n] + 1) . This can be rearranged into( η\nm − 2Lη2σ2v −\n2ηβ 1− α − η m G2d1(µV , µT ) 2\n) E[τ( ) ∧ n] ≤ ηG2d1(µV , µT )2\n+ 2(fT (x1)− f∗) + 2η β 1− α + η m .\nwhich in turn is equivalent to\nE[τ( ) ∧ n] ≤ ηG 2d1(µV , µT ) 2 + 2(fT (x1)− f∗) + η /m+ 2ηβ/(1− α) η /m− 2Lη2σ2v − 2ηβ/(1− α)− ηG2d1(µV , µT )2/m . (37)\nNote that the sequence of random variables {(τ( ) ∧ n)}n=1,2,... is monotone increasing, and converges pointwise to τ( ). Then the claimed relation equation 9 follows from equation 37 by the monotone convergence theorem.\nFinally, by applying equation 28 with the roles of fV and fT reversed, and using the definition of τ( ), it follows that\n‖∇fT (xτ( ))‖ ≤ 2‖∇fV (xτ( ))‖2 + 2G2d1(µV , µT )2\n≤ 2 + 2G2d1(µV , µT )2." }, { "heading": "PROOF OF COROLLARY 3.4", "text": "Proof. According to the assumption on the step-size η (Equation equation 20), η [ ( −G2d1(µV , µT )2)/m− η(2Lσ2v + 2R/(1− α)) ] ≥ η(1− c)( −G2d1(µV , µT )2)/m (38) and\n1 η ≤ L c + m(2Lσ2v + 2R/(1− α)) c ( −G2d1(µV , µT )2) . (39)\nCombining these inequalities with the conclusion of Proposition 3.3 (relation equation 9) yields\nE[τ( )] A ≤ ηG\n2d1(µV , µT ) 2 + 2(fT (x1)− f∗) + η /m+ 2ηβ/(1− α) η( −G2d1(µV , µT )2)/m− 2Lη2σ2v − 2ηβ/(1− α) .\nB = ηG2d1(µV , µT ) 2 + 2(fT (x1)− f∗) + η /m+ 2η2R/(1− α) η( −G2d1(µV , µT )2)/m− 2Lη2σ2v − 2η2R/(1− α) .\nC ≤ ηG\n2d1(µV , µT ) 2 + 2(fT (x1)− f∗) + η /m+ 2η2R/(1− α)\nη(1− c) ( −G2d1(µV , µT )2)/m .\n(40)\nStep A was established by Proposition 3.3. Step B uses the assumption that β = ηR. Step C is an application of equation 38. Next, we will upper-bound the final inequality in three steps. First, using Inequality equation 39, we see that\n2(fT (x1)− f∗) η(1− c) ( −G2d1(µV , µT )2)/m ≤ 2(fT (x1)− f ∗) (1− c) ( −G2d1(µV , µT )2)/m)\n× ( L\nc + m(2Lσ2v + 2R/(1− α)) c ( −G2d1(µV , µT )2) ) =\n2m(fT (x1)− f∗) (1− c) c ( −G2d1(µV , µT )2) L\n+ 2m2(fT (x1)− f∗) (1− c) c ( −G2d1(µV , µT )2)2 ( 2Lσ2v + 2R/(1− α) ) .\n(41)\nNext, 2η2R/(1− α)\nη(1− c) ( −G2d1(µV , µT )2)/m = 2ηR/(1− α) (1− c) ( −G2d1(µV , µT )2)/m\n≤ 2cR/(1− α) (1− c) ( −G2d1(µV , µT )2)/m × −G 2d1(µV , µT ) 2 m(2Lσ2v + 2R/(1− α))\n= c (1− c) × 2R/(1− α)\n(2Lσ2v + 2R/(1− α)) ≤ c\n(1− c) . (42)\nFinally, ηG2d1(µV , µT ) 2 + η /m\nη(1− c) ( −G2d1(µV , µT )2)/m =\nG2d1(µV , µT ) 2 + /m\n(1− c) ( −G2d1(µV , µT )2)/m\n= mcG2d1(µV , µT ) 2 + c\n(1− c)c( −G2d1(µV , µT )2) .\n(43)\nCombining equation 40 with equation 41, equation 42 and equation 43, we find that\nE[τ( )] ≤ 4m 2(fT (x1)− f∗)\n( Lσ2v +R/(1− α) ) (1− c) c ( −G2d1(µV , µT )2)2\n+ 2Lm(fT (x1)− f∗) +mcG2d1(µV , µT )2 + c\n(1− c) c ( −G2d1(µV , µT )2) +\nc 1− c . (44)" }, { "heading": "PROOF OF COROLLARY 3.5", "text": "Proof. If the algorithm runs until iteration τ( ), then the number of times that the full gradient of fV is calculated is dτ( )/me ≤ τ( )/m + 1, and the number of IFO calls for the training function is τ( )− 1. Therefore\nIFO( ) ≤ ( τ( )\nm + 1\n) nV + (τ( )− 1) ≤ τ( ) (nV m + 1 ) + nV . (45)\nNote that under our assumption on the gradient estimates vt, we are in the unbiased setting where R = 0. 
Combining equation 44 with R = 0 and equation 45, we obtain\nE[IFO( )] ≤ (\n4m2(fT (x1)− f∗)Lσ2v (1− c) c ( −G2d1(µV , µT )2)2\n+ 2Lm(f(x1)− f∗) +mcG2d1(µV , µT )2 + c\n(1− c) c ( −G2d1(µV , µT )2) +\nc\n1− c ) × (nV m + 1 ) + nV .\n(46)\nUsing c = 1/2 and neglecting lower-order terms, then, E[IFO( )] = O ( mnV +m 2\n( −G2d1(µV , µT )2)2 + nV\n) ." }, { "heading": "C ANALYSIS OF STACKED SGD", "text": "" }, { "heading": "PROOF OF PROPOSITION 4.2", "text": "Proof. Note that the variables inside the stacked algorithm satisfy the following identities: For all 1 ≤ i ≤MK,\nx̂ c(i) t = x i t, (47)\nand for all pairs communication nodes nodes 1 ≤ j ≤M , and 1 ≤ k ≤M , and t ≥ 1, x̂j t+ 12 = x̂kt+ 12 . (48) Let 1 ≤ i ≤M and 1 ≤ j ≤M be arbitrary indices of communication nodes. By the definitions in Algorithm 2, we can see that for any t ≥ 1,\n‖x̂it+1 − x̂jt+1‖\nA = ∥∥∥∥∥ 1K + 1 ∑ k∈c−1(i) xkt+ 12 − 1 K + 1 ∑ k∈c−1(j) xkt+ 12 ∥∥∥∥∥ B =\n∥∥∥∥∥ 1K + 1 ∑ k∈c−1(i) ( xkt − ηvkt )− 1 K + 1 ∑ k∈c−1(j) ( xkt − ηvkt )∥∥∥∥∥ C =\n∥∥∥∥∥ 1K + 1 ∑ k∈c−1(i) ( x̂it − ηvkt )− 1 K + 1 ∑ k∈c−1(j) ( x̂jt − ηvkt )∥∥∥∥∥ D =\n∥∥∥∥∥ KK + 1 (x̂it − x̂jt)− 1K + 1 ∑ k∈c−1(i) ηvkt + 1 K + 1 ∑ k∈c−1(j) ηvkt ∥∥∥∥∥ E ≤ K K + 1 ∥∥∥x̂it − x̂jt∥∥∥+ η ∥∥∥∥∥∥ 1K + 1 ∑ k∈c−1(i) vkt − 1 K + 1 ∑ k∈c−1(j) vkt ∥∥∥∥∥∥ .\n(49)\nStep A follows from the definition of x̂it+1, x̂ j t+1 in Algorithm 2 and Equation equation 48. Step B follows from the definition of xk t+ 12\nin Algo. 2. Step C follows from Equation equation 47. Step D follows from rearranging terms in the previous step, and noting that there are K workers in a cluster. Finally, Step E is simply the triangle inequality. For the second term on the right of the final inequality above,∥∥∥∥∥∥ 1K + 1 ∑\nk∈c−1(i)\nvkt − 1\nK + 1 ∑ k∈c−1(j) vkt ∥∥∥∥∥∥ = K\nK + 1 ∥∥∥∥∥∇f(x̂it)−∇f(x̂jt ) + 1 K ∑ k∈c−1(i) vkt −∇fT (x̂it) − 1 K ∑ k∈c−1(j) vkt −∇fT (x̂jt )\n∥∥∥∥∥ ≤ KL K + 1 ‖x̂it − x̂jt‖\n+ K\nK + 1 ∥∥∥∥∥ 1 K ∑ k∈c−1(i) vkt −∇fT (x̂it) − 1 K ∑ k∈c−1(j) vkt −∇fT (x̂jt ) ∥∥∥∥∥.\n(50)\nwhere the last inequality uses the Lipschitz gradient property (Assumption 2.1) and the triangle inequality. Combining equation 49 and equation 50 yields∥∥∥x̂it+1 − x̂jt+1∥∥∥ ≤ KK + 1(1 + Lη)∥∥∥x̂it − x̂jt∥∥∥\n+ ηK\nK + 1 ∥∥∥∥∥ ( 1 K ∑ k∈c−1(i) vkt −∇fT (x̂it) ) − 1 K ∑ k∈c−1(j) vkt −∇fT (x̂jt ) ∥∥∥∥∥. (51)\nNote that for any k1 > 0 and all a, b we have |a+ b|2 ≤ (1 + k1)a2 + ( 1 + 1\nk1\n) b2. (52)\nCombining equation 51 and equation 52 while using the assumption that η ≤ 1/(2LK) results in\n‖x̂it+1 − x̂jt+1‖2 ≤ (1 + k1)K\n2\n(K + 1)2\n( 1 + 1\n2K )2 ∥∥∥x̂it − x̂jt∥∥∥2 + η (1 + k1)K\nk1(K + 1)2 1 2L ∥∥∥∥∥ 1 K ∑ k∈c−1(i) vkt −∇fT (x̂it) − 1 K ∑ k∈c−1(j) vkt −∇fT (x̂jt ) ∥∥∥∥∥ 2 . (53)\nLet k1 = (K+3/4)2 (K+1/2)2 − 1. Then by the definition of α,\n(1 + k1)K 2\n(K + 1)2\n( 1 + 1\n2K\n)2 = (1 + k1)(K + 1/2) 2\n(K + 1)2 =\n(K + 3/4)2 (K + 1/2)2 = α. (54)\nFurthermore, 1 + k1 k1 ≤ (K + 1) 2 K/2 + 5/16 so\nη (1 + k1)K k1(K + 1)2 1 2L ≤ η (K + 1)\n2\n(K/2 + 5/16)\nK (K + 1)2 1 2L\n= η K\n(K/2 + 5/16)\n1\n2L\n= η K\nL(K + 5/8) .\n(55)\nMultiplying each side of equation 53 by L2/M2, summing the resulting inequality over i = 1, . . . ,M and j = 1, . . . ,M , and using the relations equation 54, equation 55 we see that for all t ≥ 1,\nVt+1 ≤ αVt + Ut. 
It remains to confirm that E[Ut | Ft−1] for all t ≥ 1.\nTaking expectations in equation 14b, while noting that x̂c(k)t = x k t and applying the variance bound equation 13 along with the inequality ‖a+ b‖2 ≤ 2 ( ‖a‖2 + ‖b‖2 ) it holds for all t ≥ 1 that\nE [Ut | Ft−1]\n≤ η · KL (K + 5/8)\n× 1 M2 M∑ i=1 M∑ j=1 E [∥∥∥∥∥ 1 K ∑ k∈c−1(i) vkt −∇fT (x̂it) − 1 K ∑ k∈c−1(j) vkt −∇fT (x̂jt ) ∥∥∥∥∥ 2∣∣∣∣∣Ft−1 ]\n= η KL\n(K + 5/8)\n1\nM2 M∑ i=1 M∑ j=1 2 ( σ2v K + σ2v K ) = β." }, { "heading": "PROOF OF PROPOSITION 4.3", "text": "Proof. Note that according to Line 9 of SSGD,\nx̂t = x̂ 1 t+ 12 = . . . = x̂Mt+ 12 (56)\nand Lines 7 and 10 mean that for all j with c(j) = i,\nx̂it = x j t (57)\nUsing these equations, together with the definitions in the SSGD algorithm, we obtain that\nx̂t+1 A =\n1\nM M∑ i=1 1 K + 1 x̂it+ 12 + ∑ j∈c−1(i) xj t+ 12 B = 1 K + 1 x̂t + 1 M M∑ i=1 1 K + 1 ∑ j∈c−1(i) ( xjt − ηvjt\n) C = 1 K + 1 x̂t + 1 M M∑ i=1 1 K + 1 ∑ j∈c−1(i) (x̂it − ηvjt )\n D = ( 1\nK + 1 x̂t +\nK\nK + 1 x̂t −\n1\nM M∑ i=1 1 K + 1 ∑ j∈c−1(i) ηvjt\n)\nE = x̂t − η 1 M M∑ i=1 1 K + 1 ∑ j∈c−1(i) vjt . For Step A, note that x̂t is the average of the x̂it, and then use definition of the x̂it from Line 9 of the SSGD algorithm. Step B follows from equation 56 and the definition of the xi\nt+ 12 from Line 5 of\nSSGD. Step C follows from equation 57. Step D follows by rearranging terms in the previous step, and again noting the definition of x̂t is the average of the x̂it. Step E follows by grouping terms.\nContinuing, we can express this recursion as an approximate form of gradient descent:\nx̂t+1 = x̂t − η 1 M M∑ i=1 1 K + 1 ∑ j∈c−1(i) vjt = x̂t − η K\nK + 1 1 M M∑ i=1 1 K ∑ j∈c−1(i) vjt = x̂t − ηK\nK + 1 (vt + ∆t) ,\nwhere vt and ∆t are\nvt = ∇fT (x̂t) + 1\nM M∑ i=1 1 K ∑ j∈c−1(i) vjt −∇fT (x̂it) (58)\nand\n∆t = 1\nM ( M∑ i=1 ∇fT (x̂it) ) −∇fT (x̂t). (59)\nBased on the definition of vt in Equation equation 58 and on Assumption 4.1, for all t ≥ 1 it holds that\nE [vt −∇fT (x̂t) | Ft−1] = 0,\nE [ ‖vt −∇fT (x̂t)‖2 | Ft−1 ] ≤ σ 2 v\nK .\nThus Assumption 3.1 is confirmed. Next we consider the Lyapunov condition of Assumption 3.2. Let the variables Ut, Vt for t ≥ 1 and the constants α, β be defined as in equation 14a-equation 14d.\nThen by Assumption 2.1, ‖∆t‖2 = ∥∥∥∥∥ 1M M∑ i=1 ∇fT (x̂it)−∇f(x̂t) ∥∥∥∥∥ 2\n≤ L2 ∥∥∥∥∥ 1M M∑ i=1 ( x̂it − x̂t )∥∥∥∥∥ 2\n= L2 ∥∥∥∥∥∥ 1M M∑ i=1 x̂it − 1M M∑ j=1 x̂jt ∥∥∥∥∥∥ 2\n= L2 ∥∥∥∥∥∥ 1M2 M∑ i=1 M∑ j=1 ( x̂it − x̂jt )∥∥∥∥∥∥ 2\n≤ L2 1 M2 M∑ i=1 M∑ j=1 ∥∥∥x̂it − x̂jt∥∥∥2 = Vt.\nThe second line uses the Lipschitz gradient property, and the second to last line follows from Jensen’s inequality (Section 6.6, Williams (1991)).\nBy our assumption that each node has the same initial state, then V1 = 0, hence Inequality equation 6 holds. The Inequalities equation 7 and equation 8 are established by Prop 4.1. According to the definition of β (Equation equation 14d, we may write β = ηR where R = 4Lσ2v/(K + 5/8).\nNote that 1/(1− α) = 2(K + 1)2/(K + 5/8). Therefore, R\n1− α = 8Lσ 2 v(K + 1) 2/(K + 5/8)2 ≤ 32Lσ2v (60)\nAccording to Corollary 3.4, then, a step-size of η = c ·min { 1\nL , −G2d1(µV , µT )2 m(2Lσ2v/K + 16Lσ 2 v(K + 1) 2/(K + 5/8)2) } leads to\nE[τ( )] ≤ 4m 2(fT (x1)− f∗)\n( Lσ2v/K + 32Lσ 2 v ) (1− c) c ( −G2d1(µV , µT )2)2\n+ 2Lm(fT (x1)− f∗) +mcG2d1(µV , µT )2 + c\n(1− c) c ( −G2d1(µV , µT )2) +\nc 1− c . (61)\nDropping the lower order terms, we see that E[τ( )] = O (\nm2 (1− c)c( −G2d1(µV , µT )2)2 ) . (62)" }, { "heading": "PROOF OF COROLLARY 4.4", "text": "Proof. 
If SSGD runs until iteration τ( ), then number of times that the full gradient of fV is calculated is dτ( )/me ≤ τ( )/m + 1, and the number of IFO calls for the training function is (τ( )− 1)K. Therefore\nIFO( ) ≤ ( τ( )\nm + 1\n) nV + (τ( )− 1)K ≤ τ( ) (nV m +K ) + nV . (63)\nNext, note that (1− c)c = (1− 14K ) 14K = 4K−1(4K)2 , which implies 1\n(1− c)c = 16K2 4K − 1 ≤ 16K 3 . (64)\nCombining equation 62, equation 63, and equation 64 we see that E [IFO( )] = O ( mK(nV +mK)\n( −G2d1(µV , µT )2)2 + nV\n) ." }, { "heading": "D ANALYSIS OF DECENTRALIZED SGD", "text": "The following result is a restatement of Lemma 5 of Lian et al. (2017). Lemma D.1. Under Assumption 5.1, the matrix limk→∞ ak is well-defined and has entries a∞i,j = 1 M for 1 ≤ i, j ≤M . Furthermore, for all k ≥ 1, bound on the spectral gap implies ‖a∞ − ak‖2 ≤ ρk." }, { "heading": "PROOF OF PROPOSITION 5.3", "text": "Proof. For ease of notation, for each t ≥ 1 let yt and zt be the M -dimensional vectors with components defined as\nyit = x i t − xt, (65)\nzit = v i t −\n1\nM M∑ j=1 vjt . (66)\nLet a∞ be the M ×M matrix with entries a∞i,j = 1M (see Lemma D.1). Then according to Line 5 of Algorithm 3, the yt satisfy the recursion yt+1 = (a− a∞)yt + ηzt. (67) Note that zt can be expressed as\nzit = ∇f(xit)−∇f(xt)\n+ vit −∇f(xit)− 1\nM M∑ j=1 (vjt −∇f(xjt )) + 1 M M∑ j=1 (∇f(xjt )−∇f(xt)) (68)\nUsing the Lipschitz property (Assumption 2.1 ) then,\n|zin| ≤ L|xin − xn|+ |vin −∇f(xin)|+ 1\nM M∑ j=1 |vjn −∇f(xjn)|+ L M M∑ j=1 |xjn − xn| (69)\nSquaring and summing Equation equation 69, M∑ i=1 |zin|2 = M∑ i=1 L|xin − xn|+ |vin −∇f(xin)|+ 1M M∑ j=1 |vjn −∇f(xjn)|+ L M M∑ j=1 |xjn − xn| 2\n= L24 M∑ i=1 |xin − xn|2 + 4 M∑ i=1 |vin −∇f(xin)|2\n+ 4\nM M∑ i=1 M∑ j=1 |vjn −∇f(xjn)|2 + 4L2 M M∑ i=1 M∑ j=1 |xjn − xn|2\n= L28‖yn‖2 + 8 M∑ i=1 |vin −∇f(xin)|2\nTaking square roots on each sides of this equation yields\n‖zn‖ ≤ √√√√L28‖yn‖2 + 8 M∑ i=1 |vin −∇f(xin)|2\n≤ √ L28‖yn‖+ √√√√8 M∑ i=1 |vin −∇f(xin)|2 (70)\nCombining equation 67 and equation 70, then,\n‖yn+1‖ ≤ ( ‖a− a∞‖+ ηL √ 8 ) ‖yn‖+ η √√√√8 M∑ i=1 |vin −∇f(xin)|2\n≤ (√ ρ+ ηL √ 8 ) ‖yn‖+ η √√√√8 M∑ i=1 |vin −∇f(xin)|2\nIn the second step we have applied Assumption equation 5.1 and Lemma D.1. Squaring this equation, for any k1 > 0 it holds that\n‖yn+1‖2 ≤ (1 + k1) (√ ρ+ ηL √ 8 )2 ‖yn‖2 + η2 ( 1 + 1\nk1\n) 8 M∑ i=1 |vin −∇f(xin)|2. (71)\nLet k1 = 1−ρ2ρ (in which case 1 + 1 k1 = 1+ρ1−ρ ). Multiplying each side of 71 by L 2/M and noting that Vt = L2 M ‖yt‖2, it follows that\nVt+1 ≤ (1 + ρ)\n2ρ\n(√ ρ+ ηL √ 8 )2 Vt + Ut (72)\nIt follows from the variance bound in Assumption 5.2 that\nE [Ut | Ft−1] ≤ 8 η2 L2 1 + ρ\n1− ρσ 2 v (73)\nUsing the assumption that η ≤ 1− √ ρ\n4L √ 2 , then equation 72 and equation 73 become, respectively,\nVt+1 ≤ (1 + ρ)\n2ρ\n(√ ρ+ 1\n2\n)2 Vt + Ut\nand\nE [Ut | Ft−1] ≤ √ 2 η(1−√ρ)L1 + ρ 1− ρσ 2 v\n≤ η L √ 2\n1− ρσ 2 v" }, { "heading": "PROOF OF PROPOSITION 5.4", "text": "Proof. To begin, note that the system average xt satisfies the recursion\nxt+1 = xt + η\nM M∑ i=1 vit (74)\nDefine the variables vt and ∆t, for t ≥ 1, as\nvt = ∇f(xt) + 1\nM M∑ i=1 ( vit −∇f(xit) ) ∆n = 1\nM M∑ i=1 ( ∇f(xit)−∇f(xt) ) Then we can express the recursion equation 74 as\nxt+1 = η (vt + ∆t)\nWe will show that this can be interpreted as a form of biased SGD and therefore we may apply Corollary 3.4. 
For the unbiased component vt, observe that\nE [vt −∇fT (xt) | Ft−1] = E [ 1\nM M∑ i=1 (vit −∇f(xit)) | Ft−1 ] = 0 (75)\nand\nE [ ‖vt −∇fT (xt)‖2 | Ft−1 ] ≤ E\n[ 1\nM M∑ i=1 |vit −∇f(xit)|2 ] = σ2v (76)\nFor the bias term, note that\n‖∆t‖2 ≤ L2\nM M∑ i=1 |xit − xt|2 = Vt\nAssumption equation 3.1 follows from equation 75 and equation 76, while Assumption equation 3.2 follows from Proposition 5.3. The result then follows from Corollary 3.4." }, { "heading": "E ANALYSIS OF SVRG", "text": "For the analysis of SVRG, define the filtration {Ft}t=0,1,... as follows. F0 = σ(x1m) and for all s ≥ 1, Fs = σ ({ x1m } ∪ { ijt ∣∣∣ 0 ≤ t ≤ m− 1, 1 ≤ j ≤ s}) . We will leverage some prior results concerning the behavior of SVRG. The following is adapted from Reddi et al. (2016a).\nProposition E.1. Let Assumptions 2.1 and 2.2 hold. Let β > 0 and define the constants cm, cm−1, . . . , c0 as follows: cm = 0, and for 0 ≤ t ≤ m−1, let ct = ct+1(1+ηβ+2η2L2)+η2L3. Define Γt for 0 ≤ t ≤ m− 1 as Γt = η − ct+1ηβ − η2L− 2ct+1η2. Suppose that the step-size η and the analysis constant β are chosen so that Γt > 0 for 0 ≤ t ≤ m − 1, and set γ = inf0≤t<m Γt. Then for all s ≥ 1,\nm−1∑ t=0 E[‖∇fT (xs+1t )‖2 | Fs−1] ≤ fT (x s m)− E[fT (xs+1m ) | Fs−1] γ . (77)\nFurthermore, if η is of the form η = ξ/(Ln2/3) for some ξ ∈ (0, 1) and if the epoch length is set to m = bn/(3ξ)c, then there is a value for β such that γ ≥ ν(ξ)\nLn2/3 where ν(ξ) is a constant dependent\nonly on ξ. In particular, if ξ = 1/4 then\nγ ≥ 1 40Ln2/3 . (78)\nProof. The proof of equation 77 follows from nearly the same reasoning used to establish Equation 10 in (Section B, Reddi et al. (2016a)), the only difference being that conditional expectations replace expectations in all of the relevant formulas. The details are left to the reader.\nFormula equation 78 is a numerical inequality whose proof can be derived from the proof of Theorem 3 given in in (Appendix B, Reddi et al. (2016a))." }, { "heading": "PROOF OF PROPOSITION 6.1", "text": "Proof. First, note that τ( ) is a well-defined stopping time with respect to the filtration {Fs}s=0,1,.... For s = 1, 2, . . . define the random variables δs as\nδs = m−1∑ t=0 ‖∇fT (xs+1t )‖2 − fT (x s m)− fT (xs+1m ) γ\nIt holds trivially that for all s ≥ 1, m−1∑ t=0 ‖∇fT (xs+1t )‖2 = fT (x s m)− fT (xs+1m ) γ + δs (79)\nand by Proposition E.1, for all s ≥ 1,\nE[δs | Fs−1] = m−1∑ t=0 E [∥∥∇fT (xs+1t )∣∣2 | Fs−1]− fT (xsm)− E[fT (xs+1m ) | Fs−1]γ\n≤ 0. (80)\nSumming Equation equation 79 over s = 1, . . . , q yields\nq∑ s=1 m−1∑ i=0 ‖∇fT (xs+1i )‖2 = fT (x 1 m)− fT (xq+1m ) γ + q∑ s=1 δs, (81)\nRearranging terms and noting that fT (xq+1m ) ≥ f∗ results in\nγ q∑ s=1 m−1∑ i=0 ‖∇fT (xs+1i )‖2 ≤ fT (x1m)− f∗ + γ q∑ s=1 δs. (82)\nIt follows that\nγ q∑ s=1 ‖∇fT (xs+10 )‖2 ≤ fT (x1m)− f∗ + γ q∑ s=1 δs. (83)\nFor r ≥ 1, let τ( ) ∧ r be the stopping time which is the minimum of τ( ) and the constant value r. Applying Proposition A.1 together with Equation 80, it holds that\nE τ( )∧r∑ s=1 δs ≤ 0 (84) Furthermore, by definition of τ ,\nE τ( )∧r∑ s=1 ‖∇fT (xs+10 )‖2 ≥ E (τ( )∧r)−1∑ s=1 ‖∇fT (xs+10 )‖2 ≥E (τ( )∧r)−1∑ s=1 = E[(τ( ) ∧ r)− 1]. (85)\nCombining equation 83, equation 84, and equation 85 yields\nγ E[(τ( ) ∧ n)− 1] ≤ fT (x1m)− f∗\nRearranging terms in the above yields\nE[τ( ) ∧ n] ≤ fT (x 1 m)− f∗ γ + 1.\nApplying the monotone convergence theorem, then,\nE[τ( )] ≤ fT (x 1 m)− f∗ γ + 1.\nNext, specialize η and m to η = ξ/(Ln2/3) and m = bn/(3ξ)c with ξ = 1/4. Then by Proposition E.1, γ ≥ 1/(40Ln2/3). 
Therefore,\nE[τ(ε)] ≤ 1 + 40 L n_T^{2/3} (f_T(x^1_m) − f*)/ε." }, { "heading": "F GENERALIZATION ANALYSIS", "text": "" }, { "heading": "PROOF OF COROLLARY 7.2", "text": "Proof. Under our assumption on the Lipschitz property of y ↦ ∇_x f(y, x), it holds that\n‖∇f_µ(x_{τ(ε)})‖ ≤ ‖∇f_V(x_{τ(ε)})‖ + G d_1(µ, µ_V).\nSquaring and taking expectations, while noting that d_1 ≤ d_2 (see Villani (2008), Remark 6.6),\nE[‖∇f_µ(x_{τ(ε)})‖²] ≤ 2E[‖∇f_V(x_{τ(ε)})‖²] + 2G²E[d_2(µ, µ_V)²].\nUsing the Wasserstein concentration bound from Theorem 7.1 and the definition of τ(ε), we obtain\nE[‖∇f_µ(x_{τ(ε)})‖²] ≤ 2ε + 2G²κ_d J n_V^{−3/d}." }, { "heading": "G EXPERIMENTAL METHODOLOGY", "text": "" }, { "heading": "G.1 SSGD EXPERIMENTS", "text": "The neural network model used for these experiments is LSTNet (Lai et al., 2018), implemented with CUDA-aware MPI and extended to use the Stacked SGD training method. The objective function is the squared error between the prediction and the true sensor measurement, averaged over the dataset of training instances.\nOur experiments compared SSGD (Algorithm 2) and SGD (Algorithm 1). For SSGD, each cluster has 4 worker nodes and 1 communication node; hence K = 4, and M is varied from 1 to 64. Since one physical node has four GPU devices, each physical node in the HPC environment is modeled as a single local cluster in SSGD. The scalability of SSGD is compared with the basic parallel implementation of SGD, where all nodes synchronize with an all-reduce collective communication call after each parameter update.\nThe experiments were conducted in a high-performance computing environment equipped with 108 nodes. Each node has a dual Intel Xeon E5-2695v4 (Broadwell) CPU, four NVIDIA K40 GPUs, SAS-based local storage, and 256 GB of memory. The nodes are interconnected with a non-blocking InfiniBand EDR fabric." }, { "heading": "G.2 SVRG EXPERIMENTS", "text": "For the first experiment in this section, we trained a neural network with no hidden layer (i.e., a logistic classifier) on the MNIST classification task. For the second experiment, we trained a neural network with one hidden layer of n = 100 nodes on the CIFAR-10 classification task. The activation function in the hidden layer was the logistic function σ(x) = (1 + e^{−x})^{−1}. In both cases, the objective function is the average cross-entropy loss.\nOur experiments compared SVRG (Algorithm 4) and SGD (Algorithm 1). The only modification to the algorithms was that mini-batches were used to form the gradient estimates. For each algorithm we determined the values of the learning rate and mini-batch size using a validation set. For the learning rate, we searched over the values η ∈ {0.001, 0.01, 0.1, 1.0}. For the mini-batch size, we considered values in {32, 64, 128}. The candidate settings were evaluated by running the training procedure for several epochs and measuring performance on a held-out portion of the training dataset; the parameters that gave the best performance on this held-out set were used for full training. Using the chosen settings, we ran five independent runs of training, and report the mean and confidence bands (representing one standard deviation) for the expected IFO calls." } ]
2019
ON THE EXPECTED RUNNING TIME OF NONCONVEX OPTIMIZATION WITH EARLY STOPPING
SP:dc8557f06ebb81345d2edeb98716e8327dcb30d8
[ "This work is focused on topological characterization of target surfaces of optimization objectives (i.e. loss functions) by computing so-called barcodes, which are lists of pairs of local minima and their connected saddle points. The authors claim that the barcodes constitute a representation of target objectives that is invariant under homeomorphisms of the input to the objectives. The authors present an algorithm for computing the barcodes from a graph-based representation of a surface, and present barcodes computed on toy examples in numerical experiments. ", "This paper introduces the notion of barcodes as a topological invariant of loss surfaces that encodes the \"depth\" of local minima by associating to each minimum the lowest index-one saddle. An algorithm is presented for the computation of barcodes, and some small-scale experiments are conducted. For very small neural networks, the barcodes are found to live at small loss values, and the authors argue that this suggests it may be hard to get stuck in a suboptimal local minimum." ]
We apply the canonical forms of gradient Morse complexes (barcodes) to explore the topology of loss surfaces. We present a new algorithm for calculating the objective function's barcodes of minima. Our experiments confirm two principal observations: (1) the barcodes of minima are located in a small lower part of the range of values of the loss function of neural networks, and (2) increasing the neural network's depth brings down the minima's barcodes. This has natural implications for neural network learning and the ability to generalize.
[]
[ { "authors": [ "Md Zahangir Alom", "Tarek M Taha", "Christopher Yakopcic", "Stefan Westberg", "Paheding Sidike", "Mst Shamima Nasrin", "Brian C Van Esesn", "Abdul A S Awwal", "Vijayan K Asari" ], "title": "The history began from alexnet: A comprehensive survey on deep learning approaches", "venue": "arXiv preprint arXiv:1803.01164,", "year": 2018 }, { "authors": [ "S. Barannikov" ], "title": "Framed Morse complexes and its invariants", "venue": "Adv. Soviet Math.,", "year": 1994 }, { "authors": [ "U. Bauer", "M. Kerber", "J. Reininghaus", "H. Wagner" ], "title": "Phat – persistent homology algorithms toolbox", "venue": "ICMS", "year": 2014 }, { "authors": [ "James Bergstra", "Yoshua Bengio" ], "title": "Random search for hyper-parameter optimization", "venue": "Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "B.T.Fasy", "J.Kim", "F.Lecci", "C.Maria" ], "title": "Introduction to the R package", "venue": "TDA. preprint arxiv:1411.1830,", "year": 2014 }, { "authors": [ "Jiezhang Cao", "Qingyao Wu", "Yuguang Yan", "Li Wang", "Mingkui Tan" ], "title": "On the flatness of loss surface for two-layered relu networks", "venue": "In Asian Conference on Machine Learning,", "year": 2017 }, { "authors": [ "P. Chaudhari", "A. Choromanska", "S. Soatto", "Y. LeCun", "C. Baldassi", "C. Borgs", "J. Chayes", "L. Sagun", "R. Zecchina" ], "title": "Entropy-sgd: Biasing gradient descent into wide valleys", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Ch. Dellago", "P.G. Bolhuis", "Ph. L. Geissler" ], "title": "Transition Path Sampling, pages 1–78", "venue": "doi: 10.1002/0471231509.ch1", "year": 2003 }, { "authors": [ "L. Dinh", "R. Pascanu", "S. Bengio", "Y. Bengio" ], "title": "Sharp minima can generalize for deep nets", "venue": "In Proceedings of the 34th International Conference on Machine Learning, Proceedings of Machine Learning Research,", "year": 2017 }, { "authors": [ "Marco Gori", "Alberto Tesi" ], "title": "On the problem of local minima in backpropagation", "venue": "IEEE Transactions on Pattern Analysis & Machine Intelligence,", "year": 1992 }, { "authors": [ "Kenji Kawaguchi" ], "title": "Deep learning without poor local minima", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "D. Le Peutrec", "F. Nier", "C. Viterbo" ], "title": "Precise Arrhenius law for p-forms: The Witten Laplacian and Morse–Barannikov complex", "venue": "Annales Henri Poincaré,", "year": 2013 }, { "authors": [ "F. Le Roux", "S. Seyfaddini", "C. Viterbo" ], "title": "Barcodes and area-preserving homeomorphisms", "venue": "arXiv preprint arXiv:1804.09028, art", "year": 2018 }, { "authors": [ "H. Li", "Zh. Xu", "G. Taylor", "Ch. Studer", "Tom Goldstein" ], "title": "Visualizing the loss landscape of neural nets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "P. Bubenik M.K. Chung", "P.T. Kim" ], "title": "Persistence diagrams of cortical surface data", "venue": "Information Processing in Medical Imaging,", "year": 2009 }, { "authors": [ "Yury A Malkov", "Dmitry A Yashunin" ], "title": "Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. IEEE transactions on pattern analysis and machine intelligence, 2018", "venue": null, "year": 2018 }, { "authors": [ "A.R. Oganov", "M. 
Valle" ], "title": "How to quantify energy landscapes of solids", "venue": "The Journal of Chemical Physics,", "year": 2009 }, { "authors": [ "Chi Seng Pun", "Kelin Xia", "Si Xian Lee" ], "title": "Persistent-homology-based machine learning and its applications – a survey", "venue": null, "year": 1811 }, { "authors": [ "Sebastian Ruder" ], "title": "An overview of gradient descent optimization algorithms", "venue": "arXiv preprint arXiv:1609.04747,", "year": 2016 }, { "authors": [ "T. Sousbie", "C. Pichon", "H. Kawahara" ], "title": "The persistent cosmic web and its filamentary structure âĂŞ II. Illustrations", "venue": "Monthly Notices of the Royal Astronomical Society, 414(1):384–403,", "year": 2011 }, { "authors": [ "Mingyang Yi", "Qi Meng", "Wei Chen", "Zhi-ming Ma", "Tie-Yan Liu" ], "title": "Positively scale-invariant flatness of relu neural networks", "venue": null, "year": 1903 } ]
[ { "heading": "1 INTRODUCTION", "text": "The learning via finding minima of objective functions is the principal strategy underlying majority of learning algorithms. For example, in Neural Network training, the objective function’s input is model parameters (weights) and the objective function’s output is the loss on training dataset. The graph of the loss function, often called loss surface, typically has complex structure (e.g. see loss surface visualisations by Li et al. (2018)): non-convexity, many local minima, flat regions, steep slopes. These obstacles harm exploration of the loss surface and complicate searching for optimal network weights.\nThe optimization of modern neural networks is based on the gradient descent algorithm. The global topological characteristics of the gradient vector field trajectories are captured by the Morse complex via decomposing the parameter space into cells of uniform flow, see Barannikov (1994); Le Roux et al. (2018) and references therein. The invariants of Morse complex called \"canonical forms\"(or barcodes) constitute the fundamental summary of the topology of the gradient vector field flow.\nThe \"canonical forms\", or barcodes, in this context are decompositions of the change of topology of the sublevel sets of objective function into simple \"birth-death\" phenomena of topological feautures of different dimensions.\nThe calculation of the barcodes for different functions constitutes the essence of the topological data analysis. The currently available software packages for the calculation of barcodes of functions, also called \"sublevel persistence\", are GUDHI, Dionysus, PHAT, and TDA package which incorporates all three previous packages B.T.Fasy et al. (2014). They are based on the algorithm, described in Barannikov (1994), see also appendix and e.g. Bauer et al. (2014) and references therein. This algorithm which has complexity of O(n3). These packages can currently handle calculations of barcodes for functions defined on a grid of up to 106 points, and in dimensions two and three. Thus all current packages have the scalability issues.\nWe describe a new algorithm for computations of the barcodes of functions in lowest degree. Our algorithm works with functions defined on randomly sampled or specifically chosen point clouds. Point cloud based methods are known to work better than grid based methods in optimization related problems (Bergstra and Bengio (2012)). We also use the fact that the definition of the barcode of lowest degree can be reformulated in geometrical terms (see definition 1 in section 2). The previously known algorithms were based on the more algebraic approach as in definition 3. Our algorithm has complexity of O(n log(n)). It was tested in dimensions up to 16 and with number of points of up to 108.\nIn this work, we develop a methodology to describe the properties of the loss surface of the neural network via topological features of local minima.\nWe emphasize that the value of the objective function at the minimum can be viewed as only a part of its topological characteristic from the “canonical form” (barcode). The second half can be described as the value of objective function at the index-one saddle, which can be naturally associated with each local minimum.\nThe difference between the values of objective function at the associated index-one saddle and at the local minimum is a topological invariant of the minimum. 
For optimization algorithms this quantity measures, in particular, the obligatory penalty for moving from the given local minimum to a lower minimum.\nThe main contributions of the paper are as follows:\nApplying the one-to-one correspondence between local minima and 1-saddles to exploration of loss surfaces. For each local minimum p there is a canonically defined 1-saddle q (see Section 2). The 1-saddle associated with p can be described as follows. The 1-saddle q is precisely the point where the connected component of the sublevel set Θf≤c = {θ ∈ Θ | f(θ) ≤ c} containing the minimum p merges with another connected component of the sublevel set whose minimum is lower. This correspondence between the local minima and the 1-saddles, killing a connected component of Θf≤c, is one-to-one. The segment [f(p), f(q)] is then the “canonical form” invariant attached to the minimum p. The set of all such segments is the barcode-of-minima (\"canonical form\") invariant of f. It is a robust topological invariant of the objective function. It is invariant in particular under the action of homeomorphisms of Θ. Full “canonical form” invariants give a concise summary of the topology of the objective function and of the global structure of its gradient flow.\nAlgorithm for calculation of the barcodes (canonical invariants) of minima. We describe an algorithm for calculation of the canonical invariants of minima. The algorithm works with a function's values on a randomly sampled or specifically chosen set of points. The local minima give birth to clusters of points in sublevel sets. The algorithm works by looking at the neighbors of each point with lower values of the function and deciding if this point belongs to one of the existing clusters, gives birth to a new cluster (minimum), or merges two or more clusters (index-one saddle). A variant of the algorithm has complexity O(n log(n)), where n is the cardinality of the set of points.\nCalculations confirming observations on the behaviour of barcodes of neural network loss functions. We calculate the canonical invariants (barcodes) of minima for small fully-connected neural networks of up to three hidden layers and verify that all segments of the minima's barcode belong to a small lower part of the total range of the loss function's values and that with the increase in the neural network depth the minima's barcodes descend lower.\nThe usefulness of our approach and algorithms is clearly not limited to optimization problems. Our algorithm permits really fast computation of the canonical form invariants (persistence barcodes) of many functions which were not accessible until now. These sublevel persistence barcodes have been successfully applied in different disciplines, to mention just a few: cognitive science (M. K. Chung and Kim (2009)), cosmology (Sousbie et al. (2011)), see e.g. Pun et al. (2018) and references therein.\nOur viewpoint should also have applications in chemistry and materials science, where 1-saddle points on potential energy landscapes correspond to transition states and minima are stable states corresponding to different materials or protein foldings (see e.g. Dellago et al. (2003), Oganov and Valle (2009)).\nThe article is structured as follows. First we describe three definitions of barcodes of minima. After that our algorithm for their calculation is described. In the last part we give examples of calculations, including the loss functions of simple neural nets." 
}, { "heading": "2 TOPOLOGY OF LOSS SURFACES VIA CANONICAL FORM INVARIANTS", "text": "The “canonical form” invariants (barcodes) give a concise summary of topological features of functions (see Barannikov (1994), Le Roux et al. (2018) and references therein). These invariants describe a decomposition of the change of topology of the function into the finite sum of “birth”– “death” of elementary features. We propose to apply these invariants as a tool for exploring topology of loss surfaces.\nIn this work we concentrate on the part of these canonical form invariants, describing the “birth”– “death” phenomena of connected components of sublevel sets of the function.\nHowever it should be stressed that this approach works similarly also for “almost minima”, i.e. for the critical points (manifolds) of small indexes, which are often the terminal points of the optimization algorithms in very high dimensions.\nWe give three definitions of the “canonical form” invariants of minima.\nDEFINITION 1: MERGING WITH CONNECTED COMPONENT OF A LOWER MINIMUM\nThe values of parameter c at which the topology of sublevel set\nΘf≤c = {θ ∈ Θ | f(θ) ≤ c} changes are critical values of f .\nLet p be one of minima of f . When c increases from f(p)− to f(p)+ , a new connected component of the set Θf≤c is born (see fig 1a, the connected components S1, S2, S3 of sublevel set are born at the blue, green and red minima correspondingly.\nIf p is a minimum, which is not global, then, when c is increased, the connected component of Θf≤c born at p merges with a connected component born at a lower minimum. Let q is the merging point where this happens. The intersection of the set Θf<f(q) with any small neighborhood of q has two connected components. This is the index-one saddle q associated with p.\nAlso these two subsets of small neighborhood of q belong to two different connected components of the whole set Θf<f(q). The 1-saddles of this type are called “+” (“plus”) or “death” type. The described correspondence between local minima and 1-saddles of this type is one-to-one.\nIn a similar way, the 1-saddle q associated with p can be described also as follows. Proposition 2.1. Consider various paths γ starting from the local minimum p and going to a lower minimum. Let mγ ∈ Θ is the maximum of the restriction of f to such path γ. Then 1-saddle q which is paired with the local minimum p is the minimum over the set of all such paths γ of the maxima mγ:\nq = arg min γ:[0,1]→Θ\nγ(0)=p, f(γ(1))<f(p)\n[ max t f ( γ(t) )]\nDEFINITION 2: NEW MINIMUM ON CONNECTED COMPONENTS OF SUBLEVEL SETS\nThe correspondence in the opposite direction can be described analogously. Let q is a 1-saddle point of such type that the two branches of the set Θf≤f(q)− near q belong to two different connected components of Θf≤f(q)− . A new connected component of the set Θf≤c is formed when c decreases from f(q) + to f(q) − . The restriction of f to each of the two connected components has its global minimum.\nProposition 2.2. Given a 1-saddle q, the minimum p which is paired with q is the new minimum of f on the connected component of the set Θf≤c which is formed when c decreases from f(q) + to f(q)− .\nThe two branches of the set Θf≤f(q)− near q can also belong to the same connected components of this set. 
Then such a saddle is of “birth” type and it is naturally paired with an index-two saddle of “death” type (see Theorem 2.3).\nDEFINITION 3: INVARIANTS OF FILTERED COMPLEXES\nA chain complex is the algebraic counterpart of the intuitive idea of representing complicated geometric objects as a decomposition into simple pieces. It converts such a decomposition into a collection of vector spaces and linear maps.\nA chain complex (C∗, ∂∗) is a sequence of finite-dimensional k-vector spaces and linear operators\n· · · → C_{j+1} −∂_{j+1}→ C_j −∂_j→ C_{j−1} → · · · → C_0,\nwhich satisfy ∂_j ◦ ∂_{j+1} = 0.\nThe j-th homology of the chain complex (C∗, ∂∗) is the quotient\nH_j = ker(∂_j)/im(∂_{j+1}).\nA chain complex C∗ is called R-filtered if C∗ is equipped with an increasing sequence of subcomplexes (an R-filtration) F_{s1}C∗ ⊂ F_{s2}C∗ ⊂ . . . ⊂ F_{smax}C∗ = C∗, indexed by a finite set of real numbers s1 < s2 < . . . < smax.\nTheorem 2.3. (Barannikov (1994)) Any R-filtered chain complex C∗ can be brought by a linear transformation preserving the filtration to a “canonical form”, a canonically defined direct sum of R-filtered complexes of two types: one-dimensional complexes with trivial differential ∂_j(e_i) = 0 and two-dimensional complexes with trivial homology ∂_j(e_{i2}) = e_{i1}. The resulting canonical form is uniquely determined.\nThe full barcode is a visualization of the decomposition of an R-filtered complex according to Theorem 2.3. Each filtered 2-dimensional complex with trivial homology ∂_j(e_{i2}) = e_{i1}, 〈e_{i1}〉 = F_{≤s1}, 〈e_{i1}, e_{i2}〉 = F_{≤s2}, describes a topological feature in dimension j which is \"born\" at s1 and which \"dies\" at s2. It is represented by the segment [s1, s2] in the degree-j barcode. And each filtered 1-dimensional complex with trivial differential, ∂_j e_i = 0, 〈e_i〉 = F_{≤r}, describes a topological feature in dimension j which is \"born\" at r and never \"dies\". It is represented by the half-line [r, +∞[ in the degree-j barcode.\nThe proof of the theorem is given in the Appendix. Essentially, one can bring an R-filtered complex to the required canonical form by induction, starting from the lowest basis elements of degree one, in such a way that the manipulation of degree-j basis elements does not destroy the canonical form in degree j − 1 and in lower filtration pieces in degree j. Let f : Θ → R be a smooth, or more generally, piece-wise smooth continuous function such that the sublevel sets Θf≤c = {θ ∈ Θ | f(θ) ≤ c} are compact. One filtered complex naturally associated with the function f, and such that the subcomplexes F_s C∗ compute the homology of the sublevel sets Θf≤s, is the gradient (Morse) complex, see e.g. Barannikov (1994); Le Peutrec et al. (2013) and references therein. Without loss of generality the function f can be assumed smooth here; otherwise one can always replace f by its smoothing. By adding a small perturbation such as a regularization term we can also assume that the critical points of f are non-degenerate.\nThe generators of the gradient (Morse) complex correspond to the critical points of f. The differential is defined by counting gradient trajectories between critical points when their number is finite.\nThe canonical form of the gradient (Morse) complex describes a decomposition of the gradient flow associated with f into standard simple pieces.\nLet p be a minimum which is not a global minimum. Then the generator corresponding to p represents a trivial homology class in the canonical form, since the homology class of its connected component is already represented by the global minimum. 
Then p is the lower generator of a two-dimensional complex with trivial homology in the canonical form, i.e. p is paired with an index-one saddle q in the canonical form. The segment [f(p), f(q)] is then the canonical invariant (barcode) corresponding to the minimum p.\nThe full canonical form of the gradient (Morse) complex of all indexes is a summary of the global structure of the objective function's gradient flow.\nThe total number of different topological features in the sublevel sets Θf≤c of the objective function can be read immediately from the barcode. Namely, the number of intersections of a horizontal line at level c with segments in the index-j barcode gives the number of independent topological features of dimension j in Θf≤c.\nThe description of the barcode of minima on a manifold Θ with nonempty boundary ∂Θ is modified in the following way. A connected component can also be born at a local minimum of the restriction of f to the boundary, f|∂Θ, if grad f points inside the manifold Θ. The merging of two connected components can also happen at an index-one critical point of f|∂Θ, if grad f points inside Θ." }, { "heading": "3 AN ALGORITHM FOR CALCULATION OF BARCODES OF MINIMA", "text": "In this section we describe the developed algorithm for calculation of the canonical form invariants of local minima. The computation exploits the first definition of barcodes (see Section 2), which is based on the evolution of the connected components of the sublevel sets.\nTo analyse the surface of the given function f : Θ → R, we first build its approximation by a finite graph-based construction. To do this, we consider a random subset of points {θ1, . . . , θN} ⊂ Θ and build a graph with these points as vertices. The edges connect close points. Thus, for every vertex θn, by comparing f(θn) with f(θn′) for neighbors θn′ of θn, we are able to understand the local topology near the point θn. At the same time, connected components of sublevel sets Θf≤c = {θ ∈ Θ | f(θ) ≤ c} will naturally correspond to connected components of the subgraph on points θn such that f(θn) ≤ c.1\nTwo technical details here are the choice of points θn and the definition of closeness, i.e. when to connect points by an edge. In our experiments, we sample points uniformly from some rectangular box of interest. To add edges, we compute the oriented k-Nearest Neighbor Graph on the given points and then drop the orientation of the edges. Thus, every node in the obtained graph has a degree in [k, 2k]. In all our experiments we use k = 2D, where D is the dimension of f's input.\nNext we describe our algorithm, which computes barcodes of a function from its graph-based approximation described above. The key idea is to monitor the evolution of the connected components of the sublevel sets of the graph, i.e. the sets Θc = {θn | f(θn) ≤ c} for increasing c. For simplicity we assume that the points θ are ordered w.r.t. the value of the function f, i.e. for n < n′ we have f(θn) < f(θn′). In this case we are interested in the evolution of connected components throughout the process of sequentially adding the vertices θ1, θ2, . . . , θN to the graph, starting from an empty graph. We denote the subgraph on vertices θ1, . . . , θn by Θn. When we add a new vertex θn+1 to Θn, there are three possibilities for the connected components to evolve:\n1. Vertex θn+1 has zero degree in Θn+1. This means that θn+1 is a local minimum of f and it forms a new connected component in the sublevel set.\n2. All the neighbors of θn+1 in Θn+1 belong to one connected component in Θn.\n3. 
All the neighbors of θn+1 in Θn+1 belong to ≥ 2 connected components s1, s2, . . . , sK ⊂ Θn. Thus, all these components will form a single connected component in Θn+1.\n1In fact we build a filtered simplicial complex, which approximates the function plot. Its degree-zero chains are spanned by the points θn, and degree-one chains are spanned by the edges between close pairs of points.\nAlgorithm 1: Barcodes of minima computation for a function on a graph.\nInput: connected undirected graph G = (V, E); function f on the graph vertices.\nOutput: Barcodes: a list of \"birth\"-\"death\" pairs.\nS ← {}; f∗ ← min f(θ) for θ ∈ V; Barcodes ← [(f∗, ∞)];\nfor θ ∈ V in increasing order of f(θ) do\n  S′ ← {s ∈ S | ∃θ′ ∈ s such that (θ, θ′) ∈ E and f(θ) > f(θ′)};\n  if S′ = ∅ then\n    S ← S ∪ {{θ}};\n  else\n    f∗ ← min f(θ′) for θ′ ∈ ⊔_{s∈S′} s;\n    for s ∈ S′ do\n      fs ← min f(θ′) for θ′ ∈ s;\n      if fs ≠ f∗ then\n        Barcodes ← Barcodes ∪ {(fs, f(θ))};\n    snew ← (⊔_{s∈S′} s) ⊔ {θ};\n    S ← (S \\ S′) ⊔ {snew};\nreturn Barcodes\nIn the third case, according to definition 1 of Section 2, the point θn+1 is a 1-saddle point. Thus, one of the components sk swallows the rest. This is the component which has the lowest minimal value. For the other components,2 this gives their barcodes: for sk the birth-death pair is (min_{θ∈sk} f(θ); f(θn+1)).\nWe summarize the procedure in Algorithm 1. Note that we assume that the input graph is connected (otherwise the algorithm can be run on separate connected components).\nIn the practical implementation of the algorithm, we precompute the values of the function f at all the vertices of G. Besides that, we use the disjoint set data structure to store and merge connected components during the process. We also keep and update the global minima in each component. We did not include these tricks into the algorithm's pseudo-code in order to keep it simple.\nThe resulting complexity of the algorithm is O(N log N) in the number of points. Here it is important to note that the procedure of graph creation may itself be time-consuming. In our case, the most time-consuming operation is nearest neighbor search. In our code, we used the efficient HNSW algorithm for approximate NN search by Malkov and Yashunin (2018)." }, { "heading": "4 EXPERIMENTS", "text": "In this section we apply our algorithm to describing the surfaces of functions. In Subsection 4.1 we apply the algorithm to toy visual examples. In Subsection 4.2 we apply the algorithm to analyse the loss surfaces of small neural networks." }, { "heading": "4.1 TOY FUNCTIONS", "text": "In this subsection we demonstrate the application of the algorithm to simple toy functions f : RD → R. For D ∈ {1, 2} we consider the following three functions:\n2Typically it merges two connected components of Θn. However, due to noise and non-dense approximation of the function by the graph in high-dimensional spaces, it may happen that it merges more than two connected components.\n1. Polynomial of a single variable of degree 4 with 2 local minima (see Fig. 2a):\nf(θ1) = θ1^4 − θ1^2 + θ1/10    (1)\n2. Camel function with 3 humps, i.e. 3 local minima (see Fig. 2b):\nf(θ1, θ2) = (2 − 1.05θ1^2 + θ1^4/6)θ1^2 + θ1θ2 + θ2^2    (2)\n3. Camel function with 6 humps, i.e. 6 local minima (see Fig. 2c):\nf(θ1, θ2) = (4 − 2.1θ1^2 + θ1^4/3)θ1^2 + θ1θ2 + (−4 + 4θ2^2)θ2^2    (3)\nFunction plots with their corresponding barcodes of minima are given in Figure 2. The barcode of the global minimum is represented by the dashed half-line which goes to infinity."
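A minimal Python sketch of Algorithm 1 follows. It is not the released code: the disjoint-set bookkeeping mirrors the implementation tricks mentioned above, and the usage example builds a brute-force kNN graph (k = 2D = 4) in place of the HNSW search.

```python
import numpy as np

class DisjointSet:
    """Union-find that also tracks the minimal f-value in each component."""
    def __init__(self, values):
        self.parent = list(range(len(values)))
        self.min_value = list(values)

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb
            self.min_value[rb] = min(self.min_value[ra], self.min_value[rb])

def barcodes_of_minima(values, edges):
    """Algorithm 1: values[i] = f(theta_i); edges is an iterable of (i, j).

    Vertices are processed in increasing order of f; a vertex whose lower
    neighbors lie in >= 2 components is a 1-saddle, and every merged
    component except the one with the lowest minimum dies there.
    Assumes a connected graph and pairwise distinct f-values.
    """
    n = len(values)
    neighbors = [[] for _ in range(n)]
    for i, j in edges:
        neighbors[i].append(j)
        neighbors[j].append(i)
    dsu = DisjointSet(values)
    active = [False] * n
    barcodes = [(min(values), float("inf"))]  # the global minimum never dies
    for v in np.argsort(values):
        roots = {dsu.find(u) for u in neighbors[v] if active[u]}
        if len(roots) >= 2:
            lowest = min(dsu.min_value[r] for r in roots)
            for r in roots:
                if dsu.min_value[r] != lowest:
                    barcodes.append((dsu.min_value[r], values[v]))
        for r in roots:
            dsu.union(r, v)
        active[v] = True
    return barcodes

if __name__ == "__main__":
    # Usage on the three-hump camel function (Eq. 2) over a random point cloud.
    rng = np.random.default_rng(0)
    pts = rng.uniform(-2.0, 2.0, size=(2000, 2))
    f = ((2 - 1.05 * pts[:, 0] ** 2 + pts[:, 0] ** 4 / 6) * pts[:, 0] ** 2
         + pts[:, 0] * pts[:, 1] + pts[:, 1] ** 2)
    dist = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    edges = [(i, j) for i in range(len(pts)) for j in np.argsort(dist[i])[1:5]]
    print(sorted(barcodes_of_minima(f.tolist(), edges)))
```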
}, { "heading": "4.2 TOPOLOGY OF NEURAL NETWORK LOSS FUNCTION", "text": "In this section we compute and analyse barcodes of small fully connected neural networks with up to three hidden layers.\nFor several architectures of the neural networks many results on the loss surface and its local minima are known (see e.g. Kawaguchi (2016) Gori and Tesi (1992) and references therein). Different geometrical and topological properties of loss surfaces were studied in Cao et al. (2017); Yi et al. (2019); Chaudhari et al. (2017); Dinh et al. (2017).\nThere is no ground truth on how should the best loss surface of a neural network looks like. Nevertheless, there exists many common opinions on this topic. First of all, from practical optimization point of view, the desired local (or global) minima should be easily reached via basic training methods such\nas Stochastic Gradient Descent, see Ruder (2016). Usually this requires more-or-less stable slopes of the surface to prevent instabilities such as gradient explosions or vanishing gradients. Secondly, the value of obtained minimum is typically desired to be close to global, i.e. attain smallest training error. Thirdly, from the generalization point of view, such minima are required to provide small loss on the testing set. Although in general it is assumed that the good local optimum is the one that is flat, some recent development provide completely contrary arguments and examples, e.g. sharp minima that generalize well.\nBesides the optimization of the weights for a given architecture, neural network training implies also a choice of the architecture of the network, as well as the loss function to be used for training. In fact, it is the choice of the architecture and the loss function that determines the shape of the loss surface. Thus, proper selection of the network’s architecture may simplify the loss surface and lead to potential improvements in the weight optimization procedure.\nWe have analyzed very tiny neural networks. However our method permits full exploration of the loss surface as opposed to stochastical exploration of higher-dimensional loss surfaces. Let us emphasize that even from practical point of view it is important to understand first the behavior of barcodes in simplest examples where all hyper-parameters optimization schemes can be easily turned off. For every analysed neural network the objective function is its mean squared error for predicting (randomly selected) function g : [−π, π]→ R given by\ng(x) = 0.31 · sin(−x)− 0.72 · sin(−2x)− 0.21 · cos(x) + 0.89 · cos(2x)\nplus l2−regularization. The error is computed for prediction on uniformly distributed inputs x ∈ {−π + 2π100k | k = 0, 1, . . . , 100}. The neural networks considered were fully connected one-hidden layer with 2, 3 and 4 neurons, two-hidden layers with 2x2, 3x2 and 3x3 neurons, and three hidden layers with 2x2x2 and 3x2x2 neurons. We have calculated the barcodes of the loss functions on the hyper-cubical sets Θ which were chosen based on the typical range of parameters of minima. The results are as shown in Figure 3.\nWe summarize our findings into two main observations:\n1. the barcodes are located in tiny lower part of the range of values; typically the maximum value of the function was around 200, and the saddles paired with minima lie below 1;\n2. 
with the increase of the neural network depth, the barcodes descend lower.\nFor example, the upper bounds of the barcodes of the one-layer (2) net are in the range [0.55, 0.65], of the two-layer (2×2) net in the range [0.35, 0.45], and of the three-layer (2×2×2) net in the range [0.1, 0.3]." }, { "heading": "5 CONCLUSION", "text": "In this work we have introduced a methodology for analysing the plots of functions, in particular, loss surfaces of neural networks. The methodology is based on computing topological invariants called canonical forms, or barcodes.\nTo compute barcodes we used a graph-based construction which approximates the function plot. Then we apply the algorithm we developed to compute the barcodes of minima on the graph. Our experimental results of computing barcodes for small neural networks lead to two principal observations.\nFirst, all barcodes sit in a tiny lower part of the total function's range. Secondly, with the increase of the depth of the neural network the barcodes descend lower. From the practical point of view, this means that gradient descent optimization cannot get stuck in high local minima, and it is also not difficult to get from one local minimum to another (with smaller value) during learning.\nThe method we developed has several further research directions. Although we tested the method on small neural networks, it is possible to apply it to large-scale modern neural networks such as convolutional networks (i.e. ResNet, VGG, AlexNet, U-Net, see Alom et al. (2018)) for image-processing based tasks. However, in this case the graph-based approximation we use requires a wise choice of representative graph vertices, which is hard in high-dimensional spaces (densely filling a region with points is computationally intractable). Another direction is to study the connections between the barcode of local minima and the generalization properties of a given minimum and of the neural network. There are clearly also connections, deserving further investigation, between the barcodes of minima and results concerning the rate of convergence during learning of neural networks." }, { "heading": "5.1 PROOF OF THE THEOREM 2.3", "text": "The theorem is similar in spirit to bringing a quadratic form to a sum of squares.\nProof. (Barannikov (1994)) Let's choose a basis in the vector spaces C_n compatible with the filtration, so that each subspace F_r C_n is the span 〈e^{(n)}_1, . . . , e^{(n)}_{i_r}〉.\nAssume that ∂e^{(n)}_l has the required form for n = j and l ≤ i, or n < j and all l, i.e. either ∂e^{(n)}_l = 0 or ∂e^{(n)}_l = e^{(n−1)}_{m(l)}, where m(l) ≠ m(l′) for l ≠ l′.\nLet ∂e^{(j)}_{i+1} = Σ_k e^{(j−1)}_k α_k.\nLet's move all the terms with e^{(j−1)}_k = ∂e^{(j)}_q, q ≤ i, from the right to the left side. We get\n∂(e^{(j)}_{i+1} − Σ_{q≤i} e^{(j)}_q α_{k(q)}) = Σ_k e^{(j−1)}_k β_k.\nIf β_k = 0 for all k, then define\nẽ^{(j)}_{i+1} = e^{(j)}_{i+1} − Σ_{q≤i} e^{(j)}_q α_{k(q)},\nso that ∂ẽ^{(j)}_{i+1} = 0, and ∂e^{(n)}_l has the required form for l ≤ i + 1 and n = j, and for n < j and all l. Otherwise let k_0 be the maximal k with β_k ≠ 0. Then\n∂(e^{(j)}_{i+1} − Σ_{q≤i} e^{(j)}_q α_{k(q)}) = e^{(j−1)}_{k_0} β_{k_0} + Σ_{k<k_0} e^{(j−1)}_k β_k, with β_{k_0} ≠ 0.\nDefine\nẽ^{(j)}_{i+1} = (e^{(j)}_{i+1} − Σ_{q≤i} e^{(j)}_q α_{k(q)})/β_{k_0}, ẽ^{(j−1)}_{k_0} = e^{(j−1)}_{k_0} + Σ_{k<k_0} e^{(j−1)}_k β_k/β_{k_0}.\nThen ∂ẽ^{(j)}_{i+1} = ẽ^{(j−1)}_{k_0}, and for n = j and l ≤ i + 1, or n < j and all l, ∂e^{(n)}_l has the required form. If the complex has been reduced to \"canonical form\" on the subcomplex ⊕_{n≤j} C_n, then reduce similarly ∂e^{(j+1)}_1 and so on. Uniqueness of the canonical form follows essentially from the uniqueness at each previous step. 
Let {a^{(j)}_i}, {b^{(j)}_i = Σ_{k≤i} a^{(j)}_k α_k} be two bases of C∗ for two different canonical forms. Assume that for all indexes p < j and all n, and for p = j and n ≤ i, the canonical forms agree. Let ∂a^{(j)}_{i+1} = a^{(j−1)}_m and ∂b^{(j)}_{i+1} = b^{(j−1)}_l with m > l.\nIt follows that\n∂ Σ_{k≤i+1} a^{(j)}_k α_k = Σ_{n≤l} a^{(j−1)}_n β_n,\nwhere α_{i+1} ≠ 0, β_l ≠ 0. Therefore\n∂a^{(j)}_{i+1} = Σ_{n≤l} a^{(j−1)}_n β_n/α_{i+1} − Σ_{k≤i} ∂a^{(j)}_k α_k/α_{i+1}.\nOn the other hand ∂a^{(j)}_{i+1} = a^{(j−1)}_m, with m > l, and the ∂a^{(j)}_k for k ≤ i are either zero or some basis elements a^{(j−1)}_n different from a^{(j−1)}_m. This gives a contradiction.\nSimilarly, if ∂b^{(j)}_{i+1} = 0, then\n∂a^{(j)}_{i+1} = −Σ_{k≤i} ∂a^{(j)}_k α_k/α_{i+1},\nwhich again gives a contradiction by the same arguments. Therefore the canonical forms must agree for p = j and n = i + 1 also." } ]
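For concreteness, the regression objective of Section 4.2 can be written down directly. A minimal sketch follows; the hidden-layer activation and the l2 coefficient are assumptions, since the text does not fix them.

```python
import numpy as np
import torch

def g(x):
    # Target function from Section 4.2.
    return (0.31 * np.sin(-x) - 0.72 * np.sin(-2 * x)
            - 0.21 * np.cos(x) + 0.89 * np.cos(2 * x))

# Inputs x = -pi + (2*pi/100)*k for k = 0, ..., 100, as in the paper.
xs = -np.pi + (2 * np.pi / 100) * np.arange(101)
X = torch.tensor(xs, dtype=torch.float32).unsqueeze(1)
Y = torch.tensor(g(xs), dtype=torch.float32).unsqueeze(1)

def loss_fn(model, l2_coeff=1e-4):  # l2_coeff is an assumed hyperparameter
    mse = torch.mean((model(X) - Y) ** 2)
    l2 = sum((p ** 2).sum() for p in model.parameters())
    return mse + l2_coeff * l2

# Example: the smallest network analysed, one hidden layer with 2 neurons
# (the ReLU activation is an assumption).
net = torch.nn.Sequential(torch.nn.Linear(1, 2), torch.nn.ReLU(),
                          torch.nn.Linear(2, 1))
print(loss_fn(net).item())
```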
2019
null
SP:ecdae30f9692bf6d23cd5a571dabb82fe782c244
[ "This paper addresses the continual learning setting, and aims to mitigate catastrophic forgetting, with results on Permuted MNIST, Split MNIST, Vision Datasets Mixture, and their own class-imbalanced version of the Permuted MNIST dataset. The authors propose to augment differentiable plastic weights - a general neural network component - with class-specific updates (similarly to prior work, such as the Hebbian softmax) at the final layer of a neural network, prior to a softmax. While well-motivated in terms of the background and methodology (indeed, this is a simple way to prevent interference in fast weights), and nicely explored experimentally with lots of examinations into the workings of the method, the weak results on the simpler continual learning settings lead me to consider this a weak reject.", "The authors introduce DIFFERENTIABLE HEBBIAN CONSOLIDATION,a new framework for continual learning that can be implemented in the usual differentiable programming setups. This framework is motivated in terms of complementary learning system (CLS) theory which features an episodic memory module. The method is shown to be easily implemented as seen in their pytorch pseudocode (authors also suggest code will be released). Additionally, authors show the method leads to significant improvements over simple baselines, and can complement other task-specific hebbian-based learning paradigms" ]
Continual learning is the problem of sequentially learning new tasks or knowledge while protecting previously acquired knowledge. However, catastrophic forgetting poses a grand challenge for neural networks performing such a learning process. Thus, neural networks that are deployed in the real world often struggle in scenarios where the data distribution is non-stationary (concept drift), imbalanced, or not always fully available, i.e., rare edge cases. We propose a Differentiable Hebbian Consolidation model, which is composed of a Differentiable Hebbian Plasticity (DHP) Softmax layer that adds a rapid learning plastic component (compressed episodic memory) to the fixed (slow changing) parameters of the softmax output layer, enabling learned representations to be retained for a longer timescale. We demonstrate the flexibility of our method by integrating well-known task-specific synaptic consolidation methods to penalize changes in the slow weights that are important for each target task. We evaluate our approach on the Permuted MNIST, Split MNIST and Vision Datasets Mixture benchmarks, and introduce an imbalanced variant of Permuted MNIST — a dataset that combines the challenges of class imbalance and concept drift. Our proposed model requires no additional hyperparameters and outperforms comparable baselines by reducing forgetting.
[]
[ { "authors": [ "Wickliffe C. Abraham", "Anthony Robins" ], "title": "Memory retention – the synaptic stability versus plasticity dilemma", "venue": "Trends in Neurosciences,", "year": 2005 }, { "authors": [ "Rahaf Aljundi", "Francesca Babiloni", "Mohamed Elhoseiny", "Marcus Rohrbach", "Tinne Tuytelaars" ], "title": "Memory aware synapses: Learning what (not) to forget", "venue": "In The European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Bernard Ans", "Stéphane Rousset", "Robert M. French", "Serban Musca" ], "title": "Self-refreshing memory in artificial neural networks: learning temporal sequences without catastrophic forgetting", "venue": "Connection Science,", "year": 2004 }, { "authors": [ "Craig Atkinson", "Brendan McCane", "Lech Szymanski", "Anthony V. Robins" ], "title": "Pseudo-recursal: Solving the catastrophic forgetting problem in deep neural networks", "venue": null, "year": 2018 }, { "authors": [ "Jimmy Ba", "Geoffrey E Hinton", "Volodymyr Mnih", "Joel Z Leibo", "Catalin Ionescu" ], "title": "Using fast weights to attend to the recent past", "venue": "In Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Craig H. Bailey", "Eric R. Kandel", "Kristen M. Harris" ], "title": "Structural components of synaptic plasticity and memory consolidation", "venue": "Cold Spring Harbor Perspectives in Biology,", "year": 2015 }, { "authors": [ "Marcus K Benna", "Stefano Fusi" ], "title": "Computational principles of synaptic memory consolidation", "venue": "Nature Neuroscience,", "year": 2016 }, { "authors": [ "Gail A. Carpenter", "Stephen Grossberg" ], "title": "A massively parallel architecture for a self-organizing neural pattern recognition machine", "venue": "Computer Vision, Graphics, and Image Processing,", "year": 1987 }, { "authors": [ "Martin J. Chadwick", "Demis Hassabis", "Nikolaus Weiskopf", "Eleanor A. Maguire" ], "title": "Decoding individual episodic memory traces in the human hippocampus", "venue": "Current Biology,", "year": 2010 }, { "authors": [ "R French" ], "title": "Catastrophic forgetting in connectionist networks", "venue": "Trends in Cognitive Sciences,", "year": 1999 }, { "authors": [ "A.R. Gardner-Medwin" ], "title": "Doubly modifiable synapses: A model of short and long term autoassociative memory", "venue": "Proceedings of the Royal Society B: Biological Sciences,", "year": 1989 }, { "authors": [ "D.O. Hebb" ], "title": "The organization of behavior; a neuropsychological theory", "venue": null, "year": 1949 }, { "authors": [ "Geoffrey E. Hinton", "David C. Plaut" ], "title": "Using fast weights to deblur old memories", "venue": "In Proceedings of the 9th Annual Conference of the Cognitive Science Society,", "year": 1987 }, { "authors": [ "John G. Howland", "Yu Tian Wang" ], "title": "Chapter 8 synaptic plasticity in learning and memory: Stress effects in the hippocampus", "venue": "In Essence of Memory,", "year": 2008 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil C. Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A. 
Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka Grabska-Barwinska", "Demis Hassabis", "Claudia Clopath", "Dharshan Kumaran", "Raia Hadsell" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "Proceedings of the National Academy of Sciences (PNAS),", "year": 2017 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Yann LeCun", "Leon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "In IEEE Intelligent Signal Processing,", "year": 2001 }, { "authors": [ "Z. Li", "D. Hoiem" ], "title": "Learning without forgetting", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2018 }, { "authors": [ "David Lopez-Paz", "Marc’Aurelio Ranzato" ], "title": "Gradient episodic memory for continual learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "J.L. McClelland", "B.L. McNaughton", "R.C. O’Reilly" ], "title": "Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory", "venue": "Psychological Review,", "year": 1995 }, { "authors": [ "Michael McCloskey", "Neil J. Cohen" ], "title": "Catastrophic interference in connectionist networks: The sequential learning problem", "venue": "The Psychology of Learning and Motivation,", "year": 1989 }, { "authors": [ "Thomas Miconi" ], "title": "Learning to learn with backpropagation of hebbian plasticity", "venue": "CoRR, abs/1609.02228,", "year": 2016 }, { "authors": [ "Thomas Miconi", "Kenneth O. Stanley", "Jeff Clune" ], "title": "Differentiable plasticity: training plastic neural networks with backpropagation", "venue": "In Proceedings of the 35th International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Thomas Miconi", "Aditya Rawal", "Jeff Clune", "Kenneth O. Stanley" ], "title": "Backpropamine: training self-modifying neural networks with differentiable neuromodulated plasticity", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Tsendsuren Munkhdalai", "Adam Trischler" ], "title": "Metalearning with hebbian fast", "venue": "weights. CoRR,", "year": 2018 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y. Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "In NIPS Workshop on Deep Learning and Unsupervised Feature Learning,", "year": 2011 }, { "authors": [ "German I. Parisi", "Ronald Kemker", "Jose L. Part", "Christopher Kanan", "Stefan Wermter" ], "title": "Continual lifelong learning with neural networks: A review", "venue": "Neural Networks,", "year": 2019 }, { "authors": [ "Ole Paulsen", "Terrence J Sejnowski" ], "title": "Natural patterns of activity and long-term synaptic plasticity", "venue": "Current Opinion in Neurobiology,", "year": 2000 }, { "authors": [ "Jack W. Rae", "Chris Dyer", "Peter Dayan", "Timothy P. Lillicrap" ], "title": "Fast parametric learning with activation memorization", "venue": "In Proceedings of the 35th International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Sylvestre-Alvise Rebuffi", "Alexander Kolesnikov", "Georg Sperl", "Christoph H. 
Lampert" ], "title": "icarl: Incremental classifier and representation learning", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition CVPR,", "year": 2017 }, { "authors": [ "Mark Bishop Ring" ], "title": "Continual Learning in Reinforcement Environments", "venue": "PhD thesis, Austin, TX, USA,", "year": 1994 }, { "authors": [ "Hippolyt Ritter", "Aleksandar Botev", "David Barber" ], "title": "Online structured laplace approximations for overcoming catastrophic forgetting", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Anthony Robins" ], "title": "Catastrophic forgetting, rehearsal and pseudorehearsal", "venue": "Connection Science,", "year": 1995 }, { "authors": [ "Anna C. Schapiro", "Nicholas B. Turk-Browne", "Matthew M Botvinick", "Kenneth A. Norman" ], "title": "Complementary learning systems within the hippocampus: a neural network modelling approach to reconciling episodic memory with statistical learning", "venue": "Philosophical transactions of the Royal Society of London. Series B, Biological sciences,", "year": 2017 }, { "authors": [ "Jonathan Schwarz", "Wojciech Czarnecki", "Jelena Luketina", "Agnieszka Grabska-Barwinska", "Yee Whye Teh", "Razvan Pascanu", "Raia Hadsell" ], "title": "Progress & compress: A scalable framework for continual learning", "venue": "In Proceedings of the 35th International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Hanul Shin", "Jung Kwon Lee", "Jaehong Kim", "Jiwon Kim" ], "title": "Continual learning with deep generative replay", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Sen Song", "Kenneth D. Miller", "L.F. Abbott" ], "title": "Competitive hebbian learning through spike-timingdependent synaptic plasticity", "venue": "Nature Neuroscience,", "year": 2000 }, { "authors": [ "T. Takeuchi", "A.J. Duszkiewicz", "R.G.M. Morris" ], "title": "The synaptic plasticity and memory hypothesis: encoding, storage and persistence", "venue": "Philosophical Transactions of the Royal Society B: Biological Sciences,", "year": 2013 }, { "authors": [ "Sebastian Thrun" ], "title": "Lifelong Learning Algorithms, pp. 181–209", "venue": null, "year": 1998 }, { "authors": [ "Sebastian Thrun", "Tom M. Mitchell" ], "title": "Lifelong robot learning", "venue": "Robotics and Autonomous Systems,", "year": 1995 }, { "authors": [ "Gido M. van de Ven", "Andreas S. Tolias" ], "title": "Three scenarios for continual learning", "venue": null, "year": 1904 }, { "authors": [ "Johannes von Oswald", "Christian Henning", "João Sacramento", "Benjamin F Grewe" ], "title": "Continual learning with hypernetworks", "venue": "arXiv preprint arXiv:1906.00695,", "year": 2019 }, { "authors": [ "Chenshen Wu", "Luis Herranz", "Xialei Liu", "Yaxing Wang", "Joost van de Weijer", "Bogdan Raducanu" ], "title": "Memory replay gans: Learning to generate new categories without forgetting", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning", "venue": "algorithms. 
CoRR,", "year": 2017 }, { "authors": [ "Friedemann Zenke", "Wulfram Gerstner", "Surya Ganguli" ], "title": "The temporal paradox of hebbian learning and homeostatic plasticity", "venue": "Current Opinion in Neurobiology,", "year": 2017 }, { "authors": [ "Friedemann Zenke", "Ben Poole", "Surya Ganguli" ], "title": "Continual learning through synaptic intelligence", "venue": "In Proceedings of the 34th International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Chen Zeno", "Itay Golan", "Elad Hoffer", "Daniel Soudry" ], "title": "Task agnostic continual learning using online variational bayes", "venue": null, "year": 2018 }, { "authors": [ "Zenke" ], "title": "2017b). First, the network was trained on the full CIFAR-10 dataset (Task Tn=1) and sequentially on 5 additional tasks each corresponding to 10 consecutive classes from the CIFAR-100 dataset (Tasks Tn=2:6). The test accuracies of CIFAR-10 and the CIFAR-100 splits are reported after having learned the final task in this sequence. The DHP Softmax (purple) alone significantly outperforms Finetune", "venue": null, "year": 2017 }, { "authors": [ "von Oswald" ], "title": "class-incremental learning setup. On some tasks, DHP Softmax alone performs as well or better than when training from scratch (light green). The test accuracies of Finetune, when training from scratch and SI (turquoise", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "A key aspect of human intelligence is the ability to continually adapt and learn in dynamic environments, a characteristic which is challenging to embed into artificial intelligence. Recent advances in machine learning (ML) have shown tremendous improvements in various problems, by learning to solve one complex task very well, through extensive training on large datasets with millions of training examples or more. However, most of the ML models that are used during deployment in the real-world are exposed to non-stationarity where the distributions of acquired data changes over time. Therefore, after learning is complete, and these models are further trained with new data, responding to distributional changes, performance degrades with respect to the original data. This phenomenon known as catastrophic forgetting or catastrophic interference (McCloskey & Cohen, 1989; French, 1999) presents a crucial problem for deep neural networks (DNNs) that are tasked with continual learning (Ring, 1994), also called lifelong learning (Thrun & Mitchell, 1995; Thrun, 1998). In continual learning, the goal is to adapt and learn consecutive tasks without forgetting how to perform well on previously learned tasks, enabling models that are scalable and efficient over long timescales.\nIn most supervised learning methods, DNN architectures require independent and identically distributed (iid) samples from a stationary training distribution. However, for ML systems in realworld applications that require continual learning, the iid assumption is easily violated when: (1) There is concept drift in the training data distribution. (2) There are imbalanced class distributions and concept drift occuring simultaneously. (3) Data representing all scenarios in which the learner is expected to perform are not initially available. In such situations, learning systems face the “stability-plasticity dilemma” which is a well-known problem for artificial and biological neural networks (Carpenter & Grossberg, 1987; Abraham & Robins, 2005). This presents a continual learning challenge for an ML system where the model needs to provide a balance between its plasticity (to integrate new knowledge) and stability (to preserve existing knowledge).\nIn biological neural networks, synaptic plasticity has been argued to play an important role in learning and memory (Howland & Wang, 2008; Takeuchi et al., 2013; Bailey et al., 2015) and two major\ntheories have been proposed to explain a human’s ability to perform continual learning. The first theory is inspired by synaptic consolidation in the mammalian neocortex (Benna & Fusi, 2016) where a subset of synapses are rendered less plastic and therefore preserved for a longer timescale. The general idea for this approach is to consolidate and preserve synaptic parameters that are considered important for the previously learned tasks. This is normally achieved through task-specific updates of synaptic weights in a neural network. 
The second is the complementary learning system (CLS) theory (McClelland et al., 1995; Kumaran et al., 2016), which suggests that humans extract high-level structural information and store it in different brain areas while retaining episodic memories.\nRecent work on differentiable plasticity has shown that neural networks with “fast weights” that leverage Hebbian learning rules (Hebb, 1949) can be trained end-to-end through backpropagation and stochastic gradient descent (SGD) to optimize the standard “slow weights”, as well as the amount of plasticity in each synaptic connection (Miconi, 2016; Miconi et al., 2018). These works use slow weights to refer to the weights normally used to train vanilla neural networks, which are updated slowly and are often associated with long-term memory. The fast weights represent the weights that are superimposed on the slow weights and change quickly from one time step to the next based on input representations. These fast weights behave as a form of short-term memory that enables “reactivation” of long-term memory traces in the slow weights. Miconi et al. (2018) showed that simple plastic networks with learned plasticity outperform networks with uniform plasticity on various problems. Moreover, there have been several approaches proposed recently for overcoming the catastrophic forgetting problem in fixed-capacity models by dynamically adjusting the plasticity of each synapse based on its importance for retaining past memories (Parisi et al., 2019).\nHere, we extend the work on differentiable plasticity to the task-incremental continual learning setting (van de Ven & Tolias, 2019), where tasks arrive in a batch-like fashion and have clear boundaries. We develop a Differentiable Hebbian Consolidation1 model that is capable of adapting quickly to changing environments as well as consolidating previous knowledge by selectively adjusting the plasticity of synapses. We modify the traditional softmax layer and propose to augment the slow weights in the final fully-connected (FC) layer (softmax output layer) with a set of plastic weights implemented using Differentiable Hebbian Plasticity (DHP). Furthermore, we demonstrate the flexibility of our model by combining it with recent task-specific synaptic consolidation-based approaches to overcoming catastrophic forgetting such as elastic weight consolidation (Kirkpatrick et al., 2017; Schwarz et al., 2018), synaptic intelligence (Zenke et al., 2017b) and memory aware synapses (Aljundi et al., 2018). Our model unifies core concepts from Hebbian plasticity, synaptic consolidation and CLS theory to enable rapid adaptation to new unseen data, while consolidating synapses and leveraging compressed episodic memories in the softmax layer to remember previous knowledge and mitigate catastrophic forgetting. We test our proposed method on established benchmark problems including the Permuted MNIST (Goodfellow et al., 2013), Split MNIST (Zenke et al., 2017b) and Vision Datasets Mixture (Ritter et al., 2018) benchmarks. We also introduce the Imbalanced Permuted MNIST problem and show that plastic networks with task-specific synaptic consolidation methods outperform networks with uniform plasticity." 
}, { "heading": "2 RELEVANT WORK", "text": "Neural Networks with Non-Uniform Plasticity: One of the major theories that have been proposed to explain a human’s ability to learn continually is Hebbian learning (Hebb, 1949), which suggests that learning and memory are attributed to weight plasticity, that is, the modification of the strength of existing synapses according to variants of Hebb’s rule (Paulsen & Sejnowski, 2000; Song et al., 2000; Oja, 2008). It is a form of activity-dependent synaptic plasticity where correlated activation of pre- and post-synaptic neurons leads to the strengthening of the connection between the two neurons. According to the Hebbian learning theory, after learning, the related synaptic strength are enhanced while the degree of plasticity decreases to protect the learned knowledge (Zenke et al., 2017a).\nRecent approaches in the meta-learning literature have shown that we can incorporate fast weights into a neural network to perform one-shot and few-shot learning (Munkhdalai & Trischler, 2018; Rae et al., 2018). Munkhdalai & Trischler (2018) proposed a model that augments FC layers preceding the softmax with a matrix of fast weights to bind labels to representations. Here, the fast weights were implemented with non-trainable Hebbian learning-based associative memory. Rae et al. (2018)\n1Code is available at:\nproposed a Hebbian Softmax layer that can improve learning of rare classes by interpolating between Hebbian learning and SGD updates on the output layer using an engineered scheduling scheme.\nMiconi et al. (2018) proposed differentiable plasticity, which uses SGD to optimize the plasticity of each synaptic connection, in addition to the standard fixed (slow) weights. Here, each synapse is composed of a slow weight and a plastic (fast) weight that automatically increases or decreases based on the activity over time. Although this approach served to be a powerful new method for training neural networks, it was mainly demonstrated on recurrent neural networks (RNNs) for solving pattern memorization tasks and maze exploration with reinforcement learning. Also, these approaches were only demonstrated on meta-learning problems and not the continual learning challenge of overcoming catastrophic forgetting. Our work also augments the slow weights in the FC layer with a set of plastic (fast) weights, but implements these using DHP. We only update the parameters of the softmax output layer in order to achieve fast learning and preserve knowledge over time.\nOvercoming Catastrophic Forgetting: This work leverages two strategies to overcome the catastrophic forgetting problem: 1) Task-specific Synaptic Consolidation — Protecting previously learned knowledge by dynamically adjusting the synaptic strengths to consolidate and retain memories. 2) CLS Theory — A dual memory system where, the neocortex (neural network) gradually learns to extract structured representations from the data while, the hippocampus (augmented episodic memory) performs rapid learning and individuated storage to memorize new instances or experiences.\nThere have been several notable works inspired by task-specific synaptic consolidation for overcoming catastrophic forgetting (Kirkpatrick et al., 2017; Zenke et al., 2017b; Aljundi et al., 2018) and they are often categorized as regularization strategies in the continual learning literature (Parisi et al., 2019). 
All of these regularization approaches estimate the importance of each parameter or synapse, Ωk, where the least plastic synapses can retain memories for long timescales and the more plastic synapses are considered less important. The parameter importance and network parameters θk are updated either in an online manner or after learning task Tn. Therefore, when learning a new task Tn+1, a regularizer is added to the original loss function Ln(θ), so that we dynamically adjust the plasticity w.r.t. Ωk and prevent any changes to important parameters of previously learned tasks:\nL̃n(θ) = Ln(θ) + λ Σ_k Ω_k (θ_k^n − θ_k^{n−1})^2    (1)\nwhere the second term is the regularizer, θ_k^{n−1} are the learned network parameters after training on the previous n − 1 tasks, and λ is a hyperparameter for the regularizer to control the amount of forgetting (old versus new memories).\nThe main difference between these regularization strategies is the method used to compute the importance of each parameter, Ωk. In Elastic Weight Consolidation (EWC), Kirkpatrick et al. (2017) used the values given by the diagonal of an approximated Fisher information matrix for Ωk, and this was computed offline after training on a task was completed. An online variant of EWC was proposed by Schwarz et al. (2018) to improve EWC's scalability by ensuring that the computational cost of the regularization term does not grow with the number of tasks. Zenke et al. (2017b) proposed an online method called Synaptic Intelligence (SI) for computing the parameter importance, where Ωk is the cumulative change in individual synapses over the entire training trajectory on a particular task. Memory Aware Synapses (MAS) from Aljundi et al. (2018) is an online method that measures Ωk by the sensitivity of the learned function to a perturbation in the parameters, instead of measuring the change in the loss caused by changes in the parameters, as in SI and EWC.\nOur work draws inspiration from CLS theory, which is a powerful computational framework for representing memories with a dual memory system via the neocortex and hippocampus. There have been numerous approaches based on CLS principles involving pseudo-rehearsal (Robins, 1995; Ans et al., 2004; Atkinson et al., 2018), exact or episodic replay (Lopez-Paz & Ranzato, 2017; Li & Hoiem, 2018) and generative replay (Shin et al., 2017; Wu et al., 2018). Exact replay methods require storage of the data from previous tasks, which are later replayed. Generative replay methods train a separate generative model to generate images to be replayed. iCaRL (Rebuffi et al., 2017) performs rehearsal and regularization, where an external memory is used to store exemplar patterns from old task data and rehearse the model via distillation. However, in our work, we are primarily interested in neuroplasticity techniques inspired by CLS theory for alleviating catastrophic forgetting. Earlier work from Hinton & Plaut (1987); Gardner-Medwin (1989) showed how each synaptic connection can be composed of a fixed weight, where slow learning stores long-term knowledge, and a fast-changing weight for temporary associative memory. This approach involving slow and fast weights is analogous to properties of CLS theory to overcome catastrophic forgetting during continual learning. 
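Before moving on, a minimal PyTorch sketch of the generic consolidation loss in Eq. 1 above is given here; the omega and old_params dictionaries are assumed to be filled in by whichever importance estimator (EWC, SI or MAS) is in use.

```python
import torch

def consolidated_loss(task_loss, model, omega, old_params, lam):
    """Eq. 1: task loss plus lambda * sum_k Omega_k (theta_k - theta_k^{n-1})^2.

    omega / old_params map parameter names to tensors; filling them in after
    each task is the job of the chosen importance estimator (EWC, SI, MAS).
    """
    reg = torch.zeros((), device=task_loss.device)
    for name, p in model.named_parameters():
        if name in omega:
            reg = reg + (omega[name] * (p - old_params[name]) ** 2).sum()
    return task_loss + lam * reg
```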
Our work draws inspiration from CLS theory, which is a powerful computational framework for representing memories with a dual memory system via the neocortex and hippocampus. There have been numerous approaches based on CLS principles involving pseudo-rehearsal (Robins, 1995; Ans et al., 2004; Atkinson et al., 2018), exact or episodic replay (Lopez-Paz & Ranzato, 2017; Li & Hoiem, 2018) and generative replay (Shin et al., 2017; Wu et al., 2018). Exact replay methods require storage of the data from previous tasks, which are later replayed. Generative replay methods train a separate generative model to generate images to be replayed. iCaRL (Rebuffi et al., 2017) performs rehearsal and regularization, where an external memory is used to store exemplar patterns from old task data and rehearse the model via distillation. However, in our work, we are primarily interested in neuroplasticity techniques inspired by CLS theory for alleviating catastrophic forgetting. Earlier work from Hinton & Plaut (1987) and Gardner-Medwin (1989) showed how each synaptic connection can be composed of a fixed weight, where slow learning stores long-term knowledge, and a fast-changing weight for temporary associative memory. This approach involving slow and fast weights is analogous to properties of CLS theory for overcoming catastrophic forgetting during continual learning.
Recent research in this vein has included replacing the soft attention mechanism with fast weights in RNNs (Ba et al., 2016), the Hebbian Softmax layer (Rae et al., 2018), augmenting the slow weights in the FC layer with a fast weights matrix (Munkhdalai & Trischler, 2018), differentiable plasticity (Miconi, 2016; Miconi et al., 2018) and neuromodulated differentiable plasticity (Miconi et al., 2019).
We did not evaluate and compare against neuroplasticity-inspired CLS methods as baselines because they were designed for meta-learning problems, and it would be unfair to evaluate their performance on continual learning benchmark problems given some of their limitations. All of these methods were designed for rapid learning on simple tasks or meta-learning over a distribution of tasks or datasets, where a small number of examples from a class are seen by the network when training on different tasks to perform one-shot and few-shot learning. For instance, the Hebbian Softmax layer modifies its parameters by annealing between Hebbian and SGD updates based on an engineered scheduling scheme, which achieves fast binding for rarer classes. However, when a large number of examples are observed frequently from the same class, the annealing function switches completely to SGD updates. Thus, when evaluating this model in continual learning setups, the effect of the fast weights memory storage becomes non-existent as the network learns from a large number of examples per class on each task. With a focus on continual learning, the goal of our work is to meta-learn a local learning rule for the fast weights via the fixed (slow) weights and an SGD optimizer." }, { "heading": "3 DIFFERENTIABLE HEBBIAN CONSOLIDATION", "text": "In our model, each synaptic connection in the softmax layer has two weights: 1) The slow weights, θ ∈ R^{m×d}, where m is the number of units in the final hidden layer and d is the number of outputs of the last layer. 2) A Hebbian plastic component of the same cardinality as the slow weights, composed of the plasticity coefficient, α, and the Hebbian trace, Hebb. The α is a scaling parameter for adjusting the magnitude of the Hebb. The Hebbian traces accumulate the mean hidden activations of the final hidden layer h for each target label in the mini-batch {y1:B} of size B, which are denoted by h̃ ∈ R^{1×m} (refer to Algorithm 1). Given the pre-synaptic activations of neurons i in h, we can formally compute the post-synaptic activations of neurons j using Eq. 2 and obtain the unnormalized log probabilities (softmax pre-activations) z. The softmax function is then applied on z to obtain the desired predicted probabilities ŷ; thus, ŷ = softmax(z). The η parameter in Eq. 3 is a scalar value that dynamically learns how quickly to acquire new experiences into the plastic component, and thus behaves as the learning rate for the plastic connections. The η parameter also acts as a decay term for the Hebb to prevent instability caused by a positive feedback loop in the Hebbian traces.
$$z_j = \sum_{i=1}^{m} \Big( \underbrace{\theta_{i,j}}_{\text{slow}} + \underbrace{\alpha_{i,j}\,\mathrm{Hebb}_{i,j}}_{\text{plastic (fast)}} \Big)\, h_i \quad (2)$$
$$\mathrm{Hebb}_{i,j} \leftarrow (1-\eta)\,\mathrm{Hebb}_{i,j} + \eta\,\tilde{h}_{i,j} \quad (3)$$
The network parameters αi,j, η and θi,j are optimized by gradient descent as the model is trained sequentially on different tasks in the continual learning setup. In standard neural networks, the weight connection has only fixed (slow) weights, which is equivalent to setting the plasticity coefficients α = 0 in Eq. 2.
Algorithm 1 Batch update of the Hebbian traces.
1: Input: h1:B (hidden activations of penultimate layer), y1:B (target labels), Hebb (Hebbian trace)
2: Output: z1:B (softmax pre-activations)
3: for each target label c ∈ {y1:B} do
4:   s ← Σ_{b=1}^{B} [y_b = c]  /* Count total occurrences of c in y. */
5:   if s > 0 then
6:     h̃ ← (1/s) Σ_{b=1}^{B} h[y_b = c]  /* Mean activation of class c. */
7:     Hebb_{:,c} ← (1 − η) Hebb_{:,c} + η h̃  /* Update Hebb for class c. */
8:   end if
9: end for
10: z ← (θ + α Hebb) h  /* Compute softmax pre-activations. */
Hebbian Update Rule: The Hebbian traces are initialized to zero only at the start of learning the first task T1, and during training the Hebb is automatically updated in the forward pass using Algorithm 1. Specifically, the Hebbian update for a corresponding class c in y1:B is computed on line 6. This Hebbian update, $\tilde{h} = \frac{1}{s}\sum_{b=1}^{B} h[y_b = c]$, is analogous to another formulaic description of the Hebbian learning update rule, $w_{i,j} = \frac{1}{N}\sum_{k=1}^{N} a_i^k a_j^k$ (Hebb, 1949), where $w_{i,j}$ is the change in weight at connection i, j and $a_i^k$, $a_j^k$ denote the activation levels of neurons i and j, respectively, for the k-th input. Therefore, in our model, w = h̃ is the Hebbian weight update, a_i = h are the hidden activations of the last hidden layer, a_j = y is the corresponding target class in y1:B, and N = s is the number of inputs for the corresponding class in y1:B (see Algorithm 1). Across the model’s lifetime, we only update the Hebbian traces during training as it learns tasks in a continual manner. Therefore, during test time, we maintain and use the most recent Hebb traces to make predictions.
Our model explores an optimization scheme where hidden activations are accumulated directly into the softmax output layer weights when a class has been seen by the network. This results in better initial representations and can also retain these learned deep representations for a much longer timescale. This is because memorized activations for one class are not competing for space with activations from other classes. Fast learning, enabled by a highly plastic weight component, improves test accuracy for a given task. Between tasks this plastic component decays to prevent interference, but selective consolidation into a stable component protects old memories, effectively enabling the model to learn to remember by modelling plasticity over a range of timescales to form a learned neural memory (see the ablation study in Section 4.1). In comparison to an external memory, the advantage of DHP Softmax is that it is simple to implement, requiring no additional space or computation, which allows it to scale easily with an increasing number of tasks.
The plastic component learns rapidly and performs sparse parameter updates to quickly store memory traces for each recent experience without interference from other similar recent experiences. Furthermore, the hidden activations corresponding to the same class, c, are accumulated into one vector h̃, thus forming a compressed episodic memory in the Hebbian traces to reflect individual episodic memory traces (similar to the hippocampus in biological neural networks (Chadwick et al., 2010; Schapiro et al., 2017)). As a result, this method improves learning of rare classes and speeds up binding of class labels to deep representations of the data without introducing any additional hyperparameters.
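For concreteness, a compact functional sketch of Eq. 2–3 and Algorithm 1 follows; tensor shapes follow the text (h: B×m, Hebb, θ, α: m×d) and the function signature is an illustrative assumption rather than an exact interface.

import torch

def dhp_softmax_forward(h, y, hebb, theta, alpha, eta):
    """h: (B, m) activations, y: (B,) class labels, hebb/theta/alpha: (m, d)."""
    for c in torch.unique(y):
        mask = (y == c)                      # occurrences of class c in y
        if mask.sum() > 0:
            h_bar = h[mask].mean(dim=0)      # mean activation for class c
            # Hebb_{:,c} <- (1 - eta) Hebb_{:,c} + eta h_bar   (Eq. 3)
            hebb[:, c] = (1 - eta) * hebb[:, c].clone() + eta * h_bar
    # z = (theta + alpha * Hebb) h   (Eq. 2), computed for the whole batch
    z = h @ (theta + alpha * hebb)
    return z, hebb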
In Appendix B, we provide a sample implementation of the DHP Softmax using PyTorch.
Hebbian Synaptic Consolidation: Following the existing regularization strategies such as EWC (Kirkpatrick et al., 2017), Online EWC (Schwarz et al., 2018), SI (Zenke et al., 2017b) and MAS (Aljundi et al., 2018), we regularize the loss L(θ) as in Eq. 1 and update the synaptic importance parameters of the network in an online manner. We rewrite Eq. 1 to obtain the updated quadratic loss for Hebbian Synaptic Consolidation in Eq. 4 and note that the network parameters θi,j are the weights of the connections between the pre- and post-synaptic activities of neurons i and j, as seen in Eq. 2.
$$\tilde{L}_n(\theta, \alpha, \eta) = L_n(\theta, \alpha, \eta) + \lambda \sum_{i,j} \Omega_{i,j} \left(\theta_{i,j}^{n} - \theta_{i,j}^{n-1}\right)^2 \quad (4)$$
We adapt the existing task-specific consolidation approaches to our model and do not compute the synaptic importance parameters on the plastic component of the network; hence, we only regularize the slow weights of the network. Furthermore, when training the first task Tn=1, the synaptic importance parameter Ωi,j in Eq. 4 was set to 0 for all of the task-specific consolidation methods we tested, except for SI. This is because SI is the only method we evaluated that estimates Ωi,j while training, whereas Online EWC and MAS compute Ωi,j after learning a task. The plastic component of the softmax layer in our model can alleviate catastrophic forgetting of consolidated classes by allowing gradient descent to optimize how plastic the connections should be (i.e., less plastic to preserve old information or more plastic to quickly learn new information)." }, { "heading": "4 EXPERIMENTS", "text": "In our experiments, we compare our approach to vanilla neural networks with Online EWC, SI and MAS. Since our approach increases the capacity of the DNN due to the addition of plastic weights, we add an extra set of slow weights to the softmax output layer of the standard neural network to match the capacity. We do this to show that it is not the increased model capacity from the plastic weights that helps mitigate forgetting when performing sequential task learning, thus ensuring a fair evaluation. We tested our model on the Permuted MNIST, Split MNIST and Vision Datasets Mixture benchmarks, and we also introduce the Imbalanced Permuted MNIST problem.
For all of the benchmarks, we evaluated the model based on the average classification accuracy on all previously learned tasks as a function of n, the number of tasks trained so far. To determine the memory retention and flexibility of the model, we are particularly interested in the test performance on the first task and on the most recent one. We also measure forgetting using the backward transfer metric, $\mathrm{BWT} = \frac{1}{T-1}\sum_{i=1}^{T-1} \left(R_{T,i} - R_{i,i}\right)$ (Lopez-Paz & Ranzato, 2017), which indicates how much learning new tasks has influenced the performance on previous tasks (a sketch of this computation is given below). $R_{T,i}$ is the test classification accuracy on task i after sequentially finishing learning the T-th task. While BWT < 0 directly reports catastrophic forgetting, BWT > 0 indicates that learning new tasks has helped the preceding tasks. To establish a baseline for comparison with well-known task-specific consolidation methods, we trained neural networks with Online EWC, SI and MAS, respectively, on all tasks in a sequential manner. The hyperparameters of the consolidation methods (i.e., EWC, SI and MAS) remain the same with and without DHP Softmax, and the plastic components are not regularized.
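A minimal sketch of the BWT metric referenced above, assuming R is a T×T array where R[i, j] is the test accuracy on task j after training on task i (zero-indexed here; the name is illustrative):

import numpy as np

def backward_transfer(R):
    """BWT = 1/(T-1) * sum_{i=1}^{T-1} (R_{T,i} - R_{i,i})."""
    T = R.shape[0]
    return np.mean([R[T - 1, i] - R[i, i] for i in range(T - 1)])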
Descriptions of the hyperparameters and other details for all benchmarks can be found in Appendix A." }, { "heading": "4.1 PERMUTED MNIST", "text": "In this benchmark, all of the MNIST pixels are permuted differently for each task with a fixed random permutation. Although the output domain is constant, the input distributions change between tasks and are mostly independent of each other; thus, there exists concept drift. In the Permuted MNIST and Imbalanced Permuted MNIST benchmarks we use a multi-layered perceptron (MLP) network with two hidden layers consisting of 400 ReLU nonlinearities each, and a cross-entropy loss. The η of the plastic component was initialized to 0.001, and we emphasize that we spent little to no effort on tuning the initial value of this parameter (see Appendix A.5 for a sensitivity analysis).
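For concreteness, a minimal sketch of how the fixed per-task pixel permutations can be generated and applied is given below; the helper names and the choice to keep the identity permutation for the first task are illustrative assumptions, not taken from our released code.

import torch

def make_permutations(num_tasks, num_pixels=784, seed=0):
    g = torch.Generator().manual_seed(seed)
    perms = [torch.arange(num_pixels)]                 # task 1: identity
    perms += [torch.randperm(num_pixels, generator=g)  # tasks 2..n: fixed shuffles
              for _ in range(num_tasks - 1)]
    return perms

def apply_permutation(x, perm):
    # x: (B, 1, 28, 28) MNIST batch -> (B, 784) with permuted pixels
    return x.view(-1, perm.numel())[:, perm]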
We first compare the performance of our network with DHP Softmax against a fine-tuned vanilla MLP network, referred to as Finetune, in Figure 2a, with no task-specific consolidation methods involved. The network with DHP Softmax alone showed an improved ability to alleviate catastrophic forgetting across all tasks compared to the baseline network. We then compared the performance with and without DHP Softmax using the same task-specific consolidation methods.
Figure 2a shows the average test accuracy as new tasks are learned, using the best hyperparameter combination for each task-specific consolidation method. We find that DHP Softmax with consolidation maintains a higher test accuracy throughout the sequential training of tasks than consolidation without DHP Softmax.
Ablation Study: We further examine the structural parameters of the network and the Hebb traces to provide further interpretability into the behaviour of our proposed model. The left plot in Figure 8 shows the behaviour of η during training as the 10 tasks in the Permuted MNIST benchmark are learned continually. Initially, in task T1, η increases very quickly from 0.001 to 0.024, suggesting that the synaptic connections become more plastic to quickly acquire new information. Eventually, η decays after the 3rd task to reduce the degree of plasticity and prevent interference between the learned representations. We also observe that within each task from T4 to T10, η initially increases and then decays. The Frobenius norm of the Hebb trace (middle plot in Figure 8) suggests that Hebb grows without runaway positive feedback every time a new task is learned, maintaining a memory of which synapses contributed to recent activity. The Frobenius norm of α (right plot in Figure 8) indicates that the plasticity coefficients grow within each task, indicating that the network is leveraging the structure in the plastic component. It is important to note that gradient descent and backpropagation are used as a meta-learning mechanism to tune the structural parameters in the plastic component." }, { "heading": "4.2 IMBALANCED PERMUTED MNIST", "text": "We introduce the Imbalanced Permuted MNIST problem, which is identical to the Permuted MNIST benchmark except that each task is now an imbalanced distribution, where training samples in each class were artificially removed based on some random probability (see Appendix A.2). This benchmark was motivated by the fact that class imbalance and concept drift can hinder predictive performance, and the problem becomes particularly challenging when they occur simultaneously. Appendix A.6, Figure 5 shows the average test accuracy for the best hyperparameters of each method. We see that DHP Softmax achieves 80.85% after learning 10 tasks with imbalanced class distributions in a sequential manner, thus providing a significant 4.41% improvement over the standard neural network baseline of 76.44%. The significance of the compressed episodic memory mechanism in the Hebbian traces is more apparent in this benchmark because the plastic component allows rare classes that are encountered infrequently to be remembered for a longer period of time. We find that DHP Softmax with MAS achieves a 0.04 decrease in BWT, resulting in an average test accuracy of 88.80% and a 1.48% improvement over MAS alone, also outperforming all other methods across all tasks." }, { "heading": "4.3 SPLIT MNIST", "text": "We split the original MNIST dataset (LeCun et al., 2001) into a sequence of 5 binary classification tasks: T1 = {0/1}, T2 = {2/3}, T3 = {4/5}, T4 = {6/7} and T5 = {8/9}. The output spaces are disjoint between tasks, unlike in the previous two benchmarks. Similar to the network used by Zenke et al. (2017b), we use an MLP network with two hidden layers of 256 ReLU nonlinearities each, and a cross-entropy loss. The initial η value was set to 0.001, as in the previous benchmark experiments. We found that different values of η yielded very similar final test performance after learning all five tasks (see Appendix A.5). We observed that DHP Softmax alone achieves 98.23%, thus providing a 7.80% improvement in test performance compared to a fine-tuned MLP network (Figure 2b). Also, combining DHP Softmax with task-specific consolidation consistently decreases BWT, leading to a higher average test accuracy across all tasks, especially the most recent one, T5." }, { "heading": "4.4 VISION DATASETS MIXTURE", "text": "Following previous works (Ritter et al., 2018; Zeno et al., 2018), we perform continual learning on a sequence of 5 vision datasets: MNIST, notMNIST1, FashionMNIST (Xiao et al., 2017), SVHN (Netzer et al., 2011) and CIFAR-10 (Krizhevsky, 2009) (see Appendix A.4 for dataset details). The MNIST, notMNIST and FashionMNIST datasets are zero-padded to be of size 32×32 and are replicated 3 times to create grayscale images with 3 channels, thus matching the resolution of the SVHN and CIFAR-10 images. Here, we use a CNN architecture similar to the one used in (Ritter et al., 2018; Zeno et al., 2018) (more details in Appendix A.4). The initial η parameter value was set to 0.0001. We train the network with mini-batches of size 32, optimized using plain SGD with a fixed learning rate of 0.01 for 50 epochs per task.
We found that DHP Softmax plus MAS decreases BWT by 0.04, resulting in a 2.14% improvement in average test accuracy over MAS on its own (see Table 1 and Appendix A.6, Figure 6). Also, SI with DHP Softmax outperforms other competitive methods with an average test performance of 81.75% and a BWT of -0.04 after learning all five tasks. In Table 1, we present a summary of the final average test performance after learning all tasks in the respective continual learning problems. Here, we summarize the average test accuracy and BWT across ten trials for each of the benchmarks." }, { "heading": "5 DISCUSSION AND CONCLUSION", "text": "We have shown that the problem of catastrophic forgetting in continual learning environments can be alleviated by adding compressed episodic memory in the softmax layer through DHP and performing task-specific updates on synaptic parameters based on their individual importance for solving previously learned tasks.
The compressed episodic memory allows new information to be learned in individual traces without overlapping representations, thus avoiding interference when added to the structured knowledge in the slow-changing weights and allowing the model to generalize across experiences. The α parameter in the plastic component automatically learns to scale the magnitude of the plastic connections in the Hebbian traces, effectively choosing when to be less plastic (to protect old knowledge) or more plastic (to acquire new information quickly). The neural network with DHP Softmax showed noticeable improvement across all benchmarks when compared to a neural network with a traditional softmax layer that had an extra set of slow-changing weights. The DHP Softmax does not introduce any additional hyperparameters, since all of the structural parameters of the plastic part, α and η, are learned, and setting the initial η value required very little tuning effort.
1Originally published at http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html and downloaded from https://github.com/davidflanagan/notMNIST-to-MNIST.
We demonstrated the flexibility of our model where, in addition to DHP Softmax, we can perform Hebbian Synaptic Consolidation by regularizing the slow weights using EWC, SI or MAS to improve a model’s ability to alleviate catastrophic forgetting after sequentially learning a large number of tasks with limited model capacity. DHP Softmax combined with SI outperforms other consolidation methods on the Split MNIST and 5-Vision Datasets Mixture benchmarks. The approach where we combine DHP Softmax and MAS consistently leads to overall superior results compared to other baseline methods on the Permuted MNIST and Imbalanced Permuted MNIST benchmarks. This is interesting because the local variant of MAS computes the synaptic importance parameters of the slow weights θi,j layer by layer based on Hebb’s rule, and therefore synaptic connections i, j that are highly correlated would be considered more important for the given task than connections with less correlation. Furthermore, our model consistently exhibits lower negative BWT across all benchmarks, leading to higher average test accuracy than the methods without DHP. This gives a strong indication that Hebbian plasticity enables neural networks to learn continually and remember distant memories, thus reducing catastrophic forgetting when learning from sequential datasets in dynamic environments. Furthermore, continual synaptic plasticity can play a key role in learning from limited labelled data while being able to adapt and scale over long timescales. We hope that our work will open new investigations into gradient-descent-optimized Hebbian consolidation for learning and memory in DNNs to enable continual learning." }, { "heading": "A DETAILS ON EXPERIMENTAL SETUP AND HYPERPARAMETER SETTINGS", "text": "In the continual learning setup, we train a neural network model on a sequence of tasks T1:nmax, where nmax is the maximum number of tasks the model is to learn in the respective benchmarks. Unlike the conventional supervised learning setup, continual learning trains a model on data that is fetched in sequential chunks enumerated by tasks. Therefore, in a continual learning sequence, the model receives a sequence of tasks T1:nmax to be learned, each with its associated training data (Xn, Yn), where Xn is the input data and Yn denotes the corresponding label data.
Each task Tn has its own task-specific loss Ln that will be combined with a regularizer loss term (refer to Eq. 4) to prevent catastrophic forgetting. After training is complete, the model will have learned an approximated mapping f to the true underlying function f̄. The learned f maps a new input X to the target outputs Y1:n for all T1:n tasks the network has learned so far. Also, it is to be noted that the set of classes contained in each task can differ from the others, as is the case in the Split MNIST and Vision Datasets Mixture benchmarks. All experiments were run on either an Nvidia Titan V or an Nvidia RTX 2080 Ti.
A.1 PERMUTED MNIST
We train the network on a sequence of tasks Tn=1:10 with mini-batches of size 64, optimized using plain SGD with a learning rate of 0.01. We train for at least 10 epochs and perform early stopping once the validation error does not improve for 5 epochs. If the validation error increases for more than 5 epochs, we terminate training on the task Tn, reset the network weights and Hebbian traces to the values that had the lowest test error, and proceed to the next task.
Hyperparameters: For the Permuted MNIST experiments shown in Figure 2a, the regularization hyperparameter λ for each of the task-specific consolidation methods is set to λ = 100 for Online EWC (Schwarz et al., 2018), λ = 0.1 for SI (Zenke et al., 2017b) and λ = 0.1 for MAS (Aljundi et al., 2018). We note that for the SI method, λ refers to the parameter c in the original work (Zenke et al., 2017b), but we use λ to keep the notation consistent across the task-specific consolidation methods. In SI, the damping parameter, ξ, was set to 0.1. To find the best hyperparameter combination for each of these synaptic consolidation methods, we performed a grid search using a task sequence determined by a single seed. For Online EWC, we tested values of λ ∈ {10, 20, 50, ..., 400}; for SI, λ ∈ {0.01, 0.05, ..., 0.5, 1.0}; and for MAS, λ ∈ {0.01, 0.5, ..., 1.5, 2.0}.
A.2 IMBALANCED PERMUTED MNIST
For each task in the Imbalanced Permuted MNIST problem, we artificially removed training samples from each class in the original MNIST dataset (LeCun et al., 2001) based on some random probability. For each class and each task, we draw a different removal probability from a standard uniform distribution U(0, 1), and then remove each sample from that class with that probability. The distribution of classes in each dataset corresponding to tasks Tn=1:10 is given in Table 2.
For the Imbalanced Permuted MNIST experiments shown in Figure 5, the regularization hyperparameter λ for each of the task-specific consolidation methods is λ = 400 for Online EWC (Schwarz et al., 2018), λ = 1.0 for SI (Zenke et al., 2017b) and λ = 0.1 for MAS (Aljundi et al., 2018). In SI, the damping parameter, ξ, was set to 0.1. Similar to the Permuted MNIST benchmark, to find the best hyperparameter combination for each of these synaptic consolidation methods, we performed a grid search using a task sequence determined by a single seed. For Online EWC, we tested values of λ ∈ {50, 100, ..., 1×10^3}; for SI, λ ∈ {0.1, 0.5, ..., 2.5, 3.0}; and for MAS, λ ∈ {0.01, 0.05, ..., 1.5, 2.0}. Across all experiments, we maintained the same random removal probabilities, determined by a single seed, to artificially remove training samples from each class.
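A minimal sketch of this per-class subsampling: for every class we draw a removal probability from U(0, 1) and drop each sample of that class with that probability. The function name and the use of NumPy's random generator are illustrative assumptions.

import numpy as np

def imbalance_indices(labels, num_classes=10, seed=0):
    """labels: integer array of class ids; returns indices of retained samples."""
    rng = np.random.default_rng(seed)        # single fixed seed, as in A.2
    remove_prob = rng.uniform(0.0, 1.0, size=num_classes)
    keep = rng.uniform(0.0, 1.0, size=len(labels)) >= remove_prob[labels]
    return np.where(keep)[0]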
A.3 SPLIT MNIST
Hyperparameters: For the Split MNIST experiments shown in Figure 2b, the regularization hyperparameter λ for each of the task-specific consolidation methods is λ = 400 for Online EWC (Schwarz et al., 2018), λ = 1.0 for SI (Zenke et al., 2017b) and λ = 1.5 for MAS (Aljundi et al., 2018). In SI, the damping parameter, ξ, was set to 0.001. To find the best hyperparameter combination for each of these synaptic consolidation methods, we performed a grid search using the 5-task binary classification sequence (0/1, 2/3, 4/5, 6/7, 8/9). For Online EWC, we tested values of λ ∈ {1, 25, 50, 100, ..., 1×10^3, 2×10^3}; for SI, λ ∈ {0.1, 0.5, 1.0, ..., 5.0}; and for MAS, λ ∈ {0.01, 0.05, 1.0, ..., 4.5, 5.0}. We train the network on a sequence of Tn=1:5 tasks with mini-batches of size 64, optimized using plain SGD with a fixed learning rate of 0.01 for 10 epochs.
A.4 VISION DATASETS MIXTURE
Dataset Details: The Vision Datasets Mixture benchmark consists of a sequence of 5 tasks where each task is a different image classification dataset: MNIST, notMNIST, FashionMNIST, SVHN and CIFAR-10. The notMNIST dataset consists of font glyphs corresponding to the letters ‘A’ to ‘J’. The original dataset has 500,000 and 19,000 grayscale images of size 28×28 for training and testing, respectively. However, similar to MNIST, we only use 60,000 images for training and 10,000 for testing. FashionMNIST consists of 10 categories of various articles of clothing, and there are 60,000 and 10,000 grayscale images sized 28×28 for training and testing, respectively. SVHN consists of digits ‘0’ to ‘9’ from Google Street View images, and there are 73,257 and 26,032 colour images of size 32×32 for training and testing, respectively. CIFAR-10 consists of 50,000 and 10,000 colour images of size 32×32 from 10 different categories for training and testing, respectively.
Architecture: The CNN architecture consists of 2 convolutional layers with 20 and 50 channels, respectively, and a kernel size of 5. Each convolutional layer is followed by a LeakyReLU nonlinearity (negative slope of 0.3) and a 2×2 max-pooling operation with stride 2. The two convolutional layers are followed by an FC layer of size 500 before the final softmax output layer (refer to Table 3). Similar to (Ritter et al., 2018; Zeno et al., 2018), a multi-headed approach was used because the class definitions differ between datasets.
In the other benchmark problems, we use a single η across all connections. In this benchmark, our model has a trainable η value for each connection in the final output layer; thus, η ∈ R^{m×d}, and we set the initial η value to 0.0001. We found that using separate η parameters for each connection improved the stability of optimization and the convergence to optimal test performance. This allows each plastic connection to modulate its own rate of plasticity when learning new experiences. It was observed that using a single η value across all connections led to instability of optimization on the SVHN and CIFAR-10 tasks.
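The per-connection variant amounts to a one-line change in how η is declared; the sketch below contrasts the two choices (the sizes are illustrative examples), while the elementwise update in Eq. 3 is otherwise unchanged.

import torch
import torch.nn as nn

m, d = 500, 10  # e.g., 500 hidden units, 10 output classes (illustrative sizes)

eta_scalar = nn.Parameter(0.0001 * torch.ones(1))     # shared across connections
eta_matrix = nn.Parameter(0.0001 * torch.ones(m, d))  # one eta per connection

# With the matrix variant, the Hebbian update for class c becomes:
#   hebb[:, c] = (1 - eta_matrix[:, c]) * hebb[:, c] + eta_matrix[:, c] * h_bar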
Hyperparameters: For the 5-Vision Datasets Mixture experiments shown in Figure 6, the regularization hyperparameter λ for each of the task-specific consolidation methods is λ = 100 for Online EWC (Schwarz et al., 2018), λ = 0.1 for SI (Zenke et al., 2017b) and λ = 1.0 for MAS (Aljundi et al., 2018). In SI, the damping parameter, ξ, was set to 0.1. To find the best hyperparameter combination for each of these synaptic consolidation methods, we performed a random search using the same task sequence ordering (MNIST, notMNIST, FashionMNIST, SVHN and CIFAR-10). For Online EWC, we tested values of λ ∈ {10, 50, 100, ..., 500}; for SI, λ ∈ {0.01, 0.05, 0.1, ..., 1.0}; and for MAS, λ ∈ {0.01, 0.05, 1.0, ..., 4.5, 5.0}.
A.5 SENSITIVITY ANALYSIS
We provide a summary of the sensitivity analysis performed on the Hebb decay term η and show its effect on the final average test performance after learning a sequence of tasks in the continual learning setup. The plots on the left and in the center of Figure 4 show the effect of the initial η value on the final test performance after learning tasks Tn=1:10 in a sequential manner for the Permuted MNIST and Imbalanced Permuted MNIST benchmarks, respectively. We swept through a range of values η ∈ {0.1, 0.01, 0.001, 0.0005, 0.0001} and found that setting η to low values led to the best performance in terms of being able to alleviate catastrophic forgetting. Similarly, we also performed a sensitivity analysis on the η parameter for the Split MNIST problem (see the rightmost plot in Figure 4). Table 4 presents the average test accuracy across 5 trials for the MNIST-variant benchmarks, corresponding to the sensitivity analysis plots in Figure 4.
A.6 ADDITIONAL FIGURES FOR SPLIT MNIST AND VISION DATASETS MIXTURE" }, { "heading": "B EXAMPLE PYTORCH IMPLEMENTATION OF DHP SOFTMAX LAYER", "text": "
import math
import torch
import torch.nn as nn
from torch.nn import Parameter


class DHP_Softmax_Layer(nn.Module):
    """Applies a linear transformation to the hidden activations of the
    last hidden layer, with an additional plastic component implemented
    using Differentiable Hebbian Plasticity (DHP): z = (w + alpha * Hebb) h.

    Args:
        in_features: size of each input in the last hidden layer.
        out_features: number of classes.
        eta_rate: initial learning rate value of the plastic connections.

    Returns (from forward):
        z: the softmax pre-activations (unnormalized log probabilities).
        hebb: the updated Hebbian traces for the next iteration.
    """

    def __init__(self, in_features, out_features, eta_rate=0.001):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.eta_rate = eta_rate

        # Initialize fixed (slow) weights with He initialization.
        self.weight = Parameter(torch.Tensor(self.in_features,
                                             self.out_features))
        nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))

        # Initialize alpha scaling coefficients for plastic connections.
        self.alpha = Parameter(0.01 * torch.rand(self.in_features,
                                                 self.out_features),
                               requires_grad=True)

        # Initialize the learning rate of plastic connections.
        self.eta = Parameter(self.eta_rate * torch.ones(1),
                             requires_grad=True)

    def forward(self, h, y, hebb):
        if self.training:
            for c in torch.unique(y):
                # Get the mask of the corresponding class, c, in y.
                mask = (y == c)
                # Count total occurrences of class c in y.
                s = mask.sum()
                if s > 0:
                    # Perform the Hebbian update (lines 6-7 in Algorithm 1).
                    h_bar = h[mask].mean(dim=0)
                    hebb[:, c] = ((1 - self.eta) * hebb[:, c].clone()
                                  + self.eta * h_bar)

        # Compute softmax pre-activations with plastic (fast) weights.
        z = torch.mm(h, self.weight + self.alpha * hebb)
        return z, hebb

    def initial_zero_hebb(self):
        return torch.zeros(self.in_features, self.out_features,
                           requires_grad=False)

Listing 1: PyTorch implementation of the DHP Softmax model, which adds a compressed episodic memory to the final output layer of a neural network through plastic connections, as described in Algorithm 1. We want to emphasize the simplicity of implementation using popular ML frameworks." }, { "heading": "C DIFFERENTIABLE HEBBIAN CONSOLIDATION", "text": "" }, { "heading": "D ADDITIONAL EXPERIMENTS", "text": "" } ]
2019
null
SP:1e1a6d0bb0dc9352227b3cade1e3b88096a544b5
[ "The paper proposes a method for noise robustness based on scaling gradients of examples. By choosing the proper scaling parameters (alpha and beta), the method recovers standard losses such as CCE, MAE, and GCE, while also recovering other losses. The method is strongly related to reweighting training examples, where alpha and beta define the shape of this weighting as a function of the model's prediction (i.e., p_i). Experiments show that the proposed method achieves competitive results on several standard benchmarks for noisy-labelled data.", "This paper presents Gradient Rescaling (GR) for robust learning to combat label noise. They propose to treat each data sample with different significance scores: some samples are important to learning, and some examples are insignificant (or even detrimental) to learning. So they desire to weight each samples according to their significance. They propose the notion of emphasis focus (When learning, whether we should put emphasis on learning “hard” examples or “easy” examples) and emphasis spread (the variance of these significance weights). The authors propose that this “difficulty” of samples are proportional to their network output logit values." ]
It is fundamental and challenging to train robust and accurate Deep Neural Networks (DNNs) when semantically abnormal examples exist. Although great progress has been made, there is still one crucial research question which is not thoroughly explored yet: What training examples should be focused on, and how much more should they be emphasised to achieve robust learning? In this work, we study this question and propose gradient rescaling (GR) to solve it. GR modifies the magnitude of the logit vector's gradient to emphasise relatively easier training data points when noise becomes more severe, which functions as explicit emphasis regularisation to improve the generalisation performance of DNNs. Apart from regularisation, we connect GR to examples weighting and to designing robust loss functions. We empirically demonstrate that GR is highly anomaly-robust and outperforms the state-of-the-art by a large margin, e.g., increasing accuracy by 7% on CIFAR-100 with 40% noisy labels. It is also significantly superior to standard regularisers in both clean and abnormal settings. Furthermore, we present comprehensive ablation studies to explore the behaviours of GR under different cases, which is informative for applying GR in real-world scenarios.
[]
[ { "authors": [ "Guillaume Alain", "Alex Lamb", "Chinnadhurai Sankar", "Aaron Courville", "Yoshua Bengio" ], "title": "Variance reduction in sgd by distributed importance sampling", "venue": "In ICLR Workshop,", "year": 2016 }, { "authors": [ "Devansh Arpit", "Stanisław Jastrzębski", "Nicolas Ballas", "David Krueger", "Emmanuel Bengio", "Maxinder S. Kanwal", "Tegan Maharaj", "Asja Fischer", "Aaron Courville", "Yoshua Bengio", "Simon Lacoste-Julien" ], "title": "A closer look at memorization in deep networks", "venue": null, "year": 2017 }, { "authors": [ "Dapeng Chen", "Hongsheng Li", "Tong Xiao", "Shuai Yi", "Xiaogang Wang" ], "title": "Video person reidentification with competitive snippet-similarity aggregation and co-attentive snippet embedding", "venue": null, "year": 2018 }, { "authors": [ "Aritra Ghosh", "Himanshu Kumar", "PS Sastry" ], "title": "Robust loss functions under label noise for deep neural networks", "venue": "In AAAI,", "year": 2017 }, { "authors": [ "Jacob Goldberger", "Ehud Ben-Reuven" ], "title": "Training deep neural-networks using a noise adaptation layer", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Siddharth Gopal" ], "title": "Adaptive sampling for sgd by exploiting side information", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Sheng Guo", "Weilin Huang", "Haozhi Zhang", "Chenfan Zhuang", "Dengke Dong", "Matthew R Scott", "Dinglong Huang" ], "title": "Curriculumnet: Weakly supervised learning from large-scale web", "venue": null, "year": 2018 }, { "authors": [ "Bo Han", "Jiangchao Yao", "Gang Niu", "Mingyuan Zhou", "Ivor Tsang", "Ya Zhang", "Masashi Sugiyama" ], "title": "Masking: A new perspective of noisy supervision", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Bo Han", "Quanming Yao", "Xingrui Yu", "Gang Niu", "Miao Xu", "Weihua Hu", "Ivor Tsang", "Masashi Sugiyama" ], "title": "Co-teaching: Robust training of deep neural networks with extremely noisy labels", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Duncan Wilson", "Kevin Gimpel" ], "title": "Using trusted data to train deep networks on labels corrupted by severe noise", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Geoffrey E Hinton" ], "title": "To recognize shapes, first learn to generate images", "venue": "Progress in brain research,", "year": 2007 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In ICML,", "year": 2015 }, { "authors": [ "Yangqing Jia", "Evan Shelhamer", "Jeff Donahue", "Sergey Karayev", "Jonathan Long", "Ross Girshick", "Sergio Guadarrama", "Trevor Darrell" ], "title": "Caffe: Convolutional architecture for fast feature embedding", "venue": "In ACMMM,", "year": 2014 }, { "authors": [ "Lu Jiang", "Zhengyuan Zhou", "Thomas Leung", "Li-Jia Li", "Li Fei-Fei" ], "title": "Mentornet: Learning datadriven curriculum for very deep neural networks on corrupted labels", "venue": null, "year": 2018 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In NeurIPS,", "year": 2012 }, { "authors": [ 
"David Krueger", "Nicolas Ballas", "Stanislaw Jastrzebski", "Devansh Arpit", "Maxinder S Kanwal", "Tegan Maharaj", "Emmanuel Bengio", "Asja Fischer", "Aaron Courville" ], "title": "Deep nets don’t learn via memorization", "venue": "In ICLR Workshop,", "year": 2017 }, { "authors": [ "M Pawan Kumar", "Benjamin Packer", "Daphne Koller" ], "title": "Self-paced learning for latent variable models", "venue": "In NeurIPS,", "year": 2010 }, { "authors": [ "Jan Larsen", "L Nonboe", "Mads Hintz-Madsen", "Lars Kai Hansen" ], "title": "Design of robust neural network classifiers", "venue": "In ICASSP,", "year": 1998 }, { "authors": [ "Marc T Law", "Raquel Urtasun", "Richard S Zemel" ], "title": "Deep spectral clustering learning", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Kuang-Huei Lee", "Xiaodong He", "Lei Zhang", "Linjun Yang" ], "title": "Cleannet: Transfer learning for scalable image classifier training with label noise", "venue": null, "year": 2018 }, { "authors": [ "Junnan Li", "Yongkang Wong", "Qi Zhao", "Mohan S Kankanhalli" ], "title": "Learning to learn from noisy labeled data", "venue": null, "year": 2019 }, { "authors": [ "Shuang Li", "Slawomir Bak", "Peter Carr", "Xiaogang Wang" ], "title": "Diversity regularized spatiotemporal attention for video-based person re-identification", "venue": null, "year": 2018 }, { "authors": [ "Yuncheng Li", "Jianchao Yang", "Yale Song", "Liangliang Cao", "Jiebo Luo", "Li-Jia Li" ], "title": "Learning from noisy labels with distillation", "venue": null, "year": 2017 }, { "authors": [ "Tsung-Yi Lin", "Priya Goyal", "Ross Girshick", "Kaiming He", "Piotr Dollar" ], "title": "Focal loss for dense object detection", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Yu Liu", "Junjie Yan", "Wanli Ouyang" ], "title": "Quality aware network for set to set recognition", "venue": null, "year": 2017 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Online batch selection for faster training of neural networks", "venue": "In ICLR Workshop,", "year": 2016 }, { "authors": [ "Xingjun Ma", "Yisen Wang", "Michael E Houle", "Shuo Zhou", "Sarah M Erfani", "Shu-Tao Xia", "Sudanthi Wijewickrema", "James Bailey" ], "title": "Dimensionality-driven learning with noisy labels", "venue": null, "year": 2018 }, { "authors": [ "Eran Malach", "Shai Shalev-Shwartz" ], "title": "Decoupling \"when to update\" from \"how to update", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Takeru Miyato", "Shin-ichi Maeda", "Shin Ishii", "Masanori Koyama" ], "title": "Virtual adversarial training: a regularization method for supervised and semi-supervised learning", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 1979 }, { "authors": [ "Yair Movshovitz-Attias", "Alexander Toshev", "Thomas K Leung", "Sergey Ioffe", "Saurabh Singh" ], "title": "No fuss distance metric learning using proxies", "venue": null, "year": 2017 }, { "authors": [ "Nagarajan Natarajan", "Inderjit S Dhillon", "Pradeep K Ravikumar", "Ambuj Tewari" ], "title": "Learning with noisy labels", "venue": "In NeurIPS,", "year": 2013 }, { "authors": [ "Giorgio Patrini", "Alessandro Rozza", "Aditya Krishna Menon", "Richard Nock", "Lizhen Qu" ], "title": "Making deep neural networks robust to label noise: A loss correction approach", "venue": null, "year": 2017 }, { "authors": [ "Scott Reed", "Honglak Lee", "Dragomir Anguelov", "Christian Szegedy", "Dumitru Erhan", "Andrew Rabinovich" ], "title": "Training deep neural networks on noisy labels with bootstrapping", 
"venue": "In ICLR Workshop,", "year": 2015 }, { "authors": [ "Mengye Ren", "Wenyuan Zeng", "Bin Yang", "Raquel Urtasun" ], "title": "Learning to reweight examples for robust deep learning", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International Journal of Computer Vision,", "year": 2015 }, { "authors": [ "Abhinav Shrivastava", "Abhinav Gupta", "Ross Girshick" ], "title": "Training region-based object detectors with online hard example mining", "venue": null, "year": 2016 }, { "authors": [ "Richard Socher", "Cliff C Lin", "Chris Manning", "Andrew Y Ng" ], "title": "Parsing natural scenes and natural language with recursive neural networks", "venue": "In ICML,", "year": 2011 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting", "venue": "The Journal of Machine Learning Research,", "year": 1929 }, { "authors": [ "Sainbayar Sukhbaatar", "Rob Fergus" ], "title": "Learning from noisy labels with deep neural networks", "venue": "arXiv preprint arXiv:1406.2080,", "year": 2014 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Daiki Tanaka", "Daiki Ikami", "Toshihiko Yamasaki", "Kiyoharu Aizawa" ], "title": "Joint optimization framework for learning with noisy labels", "venue": null, "year": 2018 }, { "authors": [ "Sunil Thulasidasan", "Tanmoy Bhattacharya", "Jeff Bilmes", "Gopinath Chennupati", "Jamal MohdYusof" ], "title": "Combating label noise in deep learning using abstention", "venue": null, "year": 2019 }, { "authors": [ "Arash Vahdat" ], "title": "Toward robustness against label noise in training deep discriminative neural networks", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Brendan Van Rooyen", "Aditya Menon", "Robert C Williamson" ], "title": "Learning with symmetric label noise: The importance of being unhinged", "venue": "NeurIPS", "year": 2015 }, { "authors": [ "Andreas Veit", "Neil Alldrin", "Gal Chechik", "Ivan Krasin", "Abhinav Gupta", "Serge Belongie" ], "title": "Learning from noisy large-scale datasets with minimal supervision", "venue": null, "year": 2017 }, { "authors": [ "Xinshao Wang", "Yang Hua", "Elyor Kodirov", "Guosheng Hu", "Neil M. 
Robertson" ], "title": "Deep metric learning by online soft mining and class-aware attention", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Xinshao Wang", "Elyor Kodirov", "Yang Hua", "Neil M Robertson" ], "title": "Improving MAE against CCE under label noise", "venue": "arXiv preprint arXiv:1903.12141,", "year": 2019 }, { "authors": [ "Yisen Wang", "Weiyang Liu", "Xingjun Ma", "James Bailey", "Hongyuan Zha", "Le Song", "Shu-Tao Xia" ], "title": "Iterative learning with open-set noisy labels", "venue": null, "year": 2018 }, { "authors": [ "Yisen Wang", "Xingjun Ma", "Zaiyi Chen", "Yuan Luo", "Jinfeng Yi", "James Bailey" ], "title": "Symmetric cross entropy for robust learning with noisy labels", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Tong Xiao", "Tian Xia", "Yi Yang", "Chang Huang", "Xiaogang Wang" ], "title": "Learning from massive noisy labeled data for image classification", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": null, "year": 2018 }, { "authors": [ "Xu Zhang", "Felix Xinnan Yu", "Svebor Karaman", "Wei Zhang", "Shih-Fu Chang" ], "title": "Heated-up softmax embedding", "venue": "arXiv preprint arXiv:1809.04157,", "year": 2018 }, { "authors": [ "Zhilu Zhang", "Mert R Sabuncu" ], "title": "Generalized cross entropy loss for training deep neural networks with noisy labels", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Liang Zheng", "Zhi Bie", "Yifan Sun", "Jingdong Wang", "Chi Su", "Shengjin Wang", "Qi Tian" ], "title": "Mars: A video benchmark for large-scale person re-identification", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "DNNs have been successfully applied in diverse applications (Socher et al., 2011; Krizhevsky et al., 2012; LeCun et al., 2015). However, their success is heavily reliant on the quality of training data, especially accurate semantic labels for learning supervision. Unfortunately, on the one hand, maintaining the quality of semantic labels as the scale of training data increases is expensive and almost impossible when the scale becomes excessively large. On the other hand, it has been demonstrated that DNNs are capable of memorising the whole training data even when all training labels are random (Zhang et al., 2017). Therefore, DNNs struggle to discern meaningful data patterns and ignore semantically abnormal examples1 simultaneously (Krueger et al., 2017; Arpit et al., 2017). Consequently, it becomes an inevitable demand for DNNs to hold robustness when training data contains anomalies (Larsen et al., 1998; Natarajan et al., 2013; Sukhbaatar & Fergus, 2014; Xiao et al., 2015; Patrini et al., 2017; Vahdat, 2017; Veit et al., 2017; Li et al., 2017).\nRecently, great progress has been made towards robustness against anomalies when training DNNs (Krueger et al., 2017). There are three appealing perspectives in terms of their simplicity and effectiveness: 1) Examples weighting. For example, knowledge distilling from auxiliary models is popular for heuristically designing weighting schemes. However, it is challenging to select and train reliable auxiliary models in practice (Li et al., 2017; Malach & Shalev-Shwartz, 2017; Jiang et al., 2018; Ren et al., 2018; Han et al., 2018b). 2) Robust loss functions (Van Rooyen et al., 2015; Ghosh et al., 2017; Zhang & Sabuncu, 2018; Wang et al., 2019b); 3) Explicit regularisation techniques (Arpit et al., 2017; Zhang et al., 2018a). Although designing robust losses or explicit regularisation is easier and more flexible in practice, the performance is not the optimal yet.\n1One training example is composed of an input and its corresponding label. A semantically abnormal example means the input is semantically unrelated to its label, which may come from corrupted input or label. For example, in Figure 3 in the supplementary material: 1) Out-of-distribution anomalies: An image may contain only background or an object which does not belong to any training class; 2) In-distribution anomalies: An image of class a may be annotated to class b or an image may contain more than one semantic object.\nRegarding examples weighting, there is a core research question which is not well answered yet: What training examples should be focused on and how large the emphasis spread should be?\nIn this work, we present a thorough study of this practical question under different settings. For better analysis, we propose two basic and necessary concepts: emphasis focus and spread with explicit definition in Sec. 3.2. They are conceptually introduced as follows:\nEmphasis focus. It is a common practice to focus on harder instances when training DNNs (Shrivastava et al., 2016; Lin et al., 2017). When a dataset is clean, it achieves faster convergence and better performance to emphasise on harder examples because they own larger gradient magnitude, which means more information and a larger update step for model’s parameters. However, when severe noise exists, as demonstrated in (Krueger et al., 2017; Arpit et al., 2017), DNNs learn simple meaningful patterns first before memorising abnormal ones. 
In other words, anomalies are harder to fit and own larger gradient magnitudes in the later stage of training. Consequently, if we use the default sample weighting in categorical cross entropy (CCE), where harder samples obtain higher weights, anomalies tend to be fitted well, especially when a network has large enough capacity. That is why we need to move the emphasis focus towards relatively easier ones, which serves as emphasis regularisation.
Emphasis spread. We term the weighting variance of training examples the emphasis spread. The key concept is that we should not treat all examples equally, nor should we let only a few be emphasised and contribute to the training. Therefore, when the emphasis focus changes, the emphasis spread should be adjusted accordingly.
We integrate emphasis focus and spread into a unified example weighting framework. Emphasis focus defines which training examples own higher weights, while emphasis spread indicates how large the variance over their weights is. Specifically, we propose gradient rescaling (GR), which modifies the magnitude of the logit vector's gradient. The logit vector is the output of the last fully connected (FC) layer of a network. We remark that we do not design the weighting scheme heuristically from scratch. Instead, it is naturally motivated by the gradient analysis of several loss functions.
Interestingly, GR can be naturally connected to examples weighting, robust losses and explicit regularisation: 1) The gradient magnitude of the logit vector can be regarded as a weight assignment that is built into loss functions (Gopal, 2016; Alain et al., 2016; Zhang et al., 2018b). Therefore, rescaling the gradient magnitude equals adjusting the weights of examples; 2) A specific loss function owns a fixed gradient derivation. Adjusting the gradient can be treated as a more direct and flexible way of modifying optimisation objectives; 3) Instead of focusing on harder examples2 by default, we can adjust the emphasis focus to relatively easier ones when noise is severe. GR serves as emphasis regularisation and is different from standard regularisers, e.g., L2 weight decay, which puts constraints on weight parameters, and Dropout, which samples neural units randomly (Srivastava et al., 2014).
GR is simple yet effective. We demonstrate its effectiveness on diverse computer vision tasks using different network architectures: 1) Image classification with clean training data; 2) Image classification with synthetic symmetric label noise, which is more challenging than the asymmetric noise evaluated by (Vahdat, 2017; Ma et al., 2018); 3) Image classification with real-world unknown anomalies, which may contain open-set noise (Wang et al., 2018), e.g., images with only background, outliers, etc.; 4) Video person re-identification, a video retrieval task containing diverse anomalies. Beyond that, we show that GR is notably better than other standard regularisers, e.g., L2 weight decay and dropout. Besides, to comprehensively understand GR's behaviours, we present extensive ablation studies.
Main contribution. Intuitively and in principle, we claim that two basic factors, emphasis focus and spread, should be babysat simultaneously when it comes to examples weighting. To the best of our knowledge, we are the first to thoroughly study and analyse them together in a unified framework."
}, { "heading": "2 RELATED WORK", "text": "Aside from examples weighting, robust losses minimisation and explicit regularisation techniques, there are another two main perspectives for training robust and accurate DNNs when anomalies exist:\n2 An example’s difficulty can be indicated by its loss (Shrivastava et al., 2016; Loshchilov & Hutter, 2016; Hinton, 2007), gradient magnitude (Gopal, 2016; Alain et al., 2016), or input-to-label relevance score (Lee et al., 2018). The input-to-label relevance score means the probability of an input belonging to its labelled class predicted by a current model. The difficulty of an example may change as the model learns. In summary, higher difficulty, larger loss, larger gradient magnitude, and lower input-to-label relevance score are equal concepts.\n1) Robust training strategies (Miyato et al., 2018; Guo et al., 2018; Li et al., 2019; Thulasidasan et al., 2019); 2) Noise-aware modelling, and alternative label and parameter optimisation are popular when only label noise exists. Some methods focus on noise-aware modelling for correcting noisy labels or empirical losses (Larsen et al., 1998; Natarajan et al., 2013; Sukhbaatar & Fergus, 2014; Xiao et al., 2015; Vahdat, 2017; Veit et al., 2017; Goldberger & Ben-Reuven, 2017; Han et al., 2018a). However, it is non-trivial and time-consuming to learn a noise-aware model, which also requires prior extra information or some specific assumptions. For example, Masking (Han et al., 2018a) is assisted by human cognition to speculate the noise structure of noise-aware matrix while (Veit et al., 2017; Li et al., 2017; Lee et al., 2018; Hendrycks et al., 2018) exploit an extra clean dataset, which is a hyper-factor and hard to control in practice. Some other algorithms iteratively train the model and infer latent true labels (Wang et al., 2018; Tanaka et al., 2018). Those methods have made great progress on label noise. But they are not directly applicable to unknown diverse semantic anomalies, which covers both out-of-distribution and in-distribution cases." }, { "heading": "2.1 REMARKS ON ROBUSTNESS THEOREMS CONDITIONED ON SYMMETRIC LOSSES", "text": "We note that (Ghosh et al., 2017) proposed some theorems showing that empirical risk minimization is robust when the loss function is symmetric and the noise type is label noise. However, they are not applicable for deep learning under arbitrary unknown noise: 1) We remark that we target at the problem of diverse or arbitrary abnormal examples, where an input may be out-of-distribution, i.e., not belonging to any training class. As a result, the symmetric losses custom-designed for label noise are not applicable. 2) GR is independent of empirical loss expressions as presented in Table 1. Therefore, one specific loss is merely an indicator of how far we are away from a specific minimisation objective. It has no impact on the robustness of the learning process since it has no direct influence on the gradient back-propagation. Similar to the prior work of rethinking generalisation (Zhang et al., 2017), we need to rethink robust training under diverse anomalies, where the robustness theorems conditioned on symmetric losses and label noise are not directly applicable." }, { "heading": "3 EMPHASIS REGULARISATION BY GRADIENT RESCALING", "text": "Notation. We are given N training examples X = {(xi, yi)}Ni=1, where (xi, yi) denotes i−th sample with input xi ∈ RD and label yi ∈ {1, 2, ..., C}. C is the number of classes. 
Let us consider a deep neural network z composed of an embedding network f(·) : R^D → R^K and a linear classifier g(·) : R^K → R^C, i.e., z_i = z(x_i) = g(f(x_i)) : R^D → R^C. Generally, the linear classifier is the last FC layer, which produces the final output of z, i.e., the logit vector z_i ∈ R^C. To obtain the probabilities of a sample belonging to different classes, the logit vector is normalised by a softmax function:\np(j|x_i) = exp(z_ij) / Σ_{m=1}^{C} exp(z_im). (1)\np(j|x_i) is the probability of x_i belonging to class j. A sample's input-to-label relevance score is defined by p_i = p(y_i|x_i). In what follows, we uncover the sample weighting built into three popular losses: CCE, Mean Absolute Error (MAE) and Generalised Cross Entropy (GCE) (Zhang & Sabuncu, 2018)." }, { "heading": "3.1 ANALYSING INTRINSIC SAMPLE WEIGHTING IN CCE, MAE AND GCE", "text": "CCE. The CCE loss with respect to (x_i, y_i), and its gradient with respect to z_ij, are defined as:\nL_CCE(x_i, y_i) = −log p(y_i|x_i) and ∂L_CCE/∂z_ij = { p(y_i|x_i) − 1, if j = y_i; p(j|x_i), if j ≠ y_i }. (2)\nTherefore, we have ||∂L_CCE/∂z_i||_1 = 2(1 − p(y_i|x_i)) = 2(1 − p_i). Here we choose the L1 norm to measure the magnitude of the gradient because of its simpler statistics and computation.\nSince we back-propagate ∂L_CCE/∂z_i to update the model's parameters, an example's gradient magnitude determines how much impact it has, i.e., its weight w_i^CCE = ||∂L_CCE/∂z_i||_1 = 2(1 − p_i). In CCE, more difficult examples with smaller p_i get higher weights.\nMAE. When it comes to MAE, the loss of (x_i, y_i) and its gradient with respect to z_ij are:\nL_MAE(x_i, y_i) = 2(1 − p(y_i|x_i)) and ∂L_MAE/∂z_ij = { 2p(y_i|x_i)(p(y_i|x_i) − 1), if j = y_i; 2p(y_i|x_i)p(j|x_i), if j ≠ y_i }. (3)\nTherefore, w_i^MAE = ||∂L_MAE/∂z_i||_1 = 4p(y_i|x_i)(1 − p(y_i|x_i)) = 4p_i(1 − p_i). In MAE, those images whose input-to-label relevance scores are 0.5 become the emphasis focus.\nGCE. In GCE, the loss of (x_i, y_i) and its gradient with respect to the logit vector z_i are:\nL_GCE(x_i, y_i) = (1 − p(y_i|x_i)^q)/q and ∂L_GCE/∂z_ij = { p(y_i|x_i)^q (p(y_i|x_i) − 1), if j = y_i; p(y_i|x_i)^q p(j|x_i), if j ≠ y_i }, (4)\nwhere q ∈ [0, 1]. Therefore, w_i^GCE = ||∂L_GCE/∂z_i||_1 = 2p(y_i|x_i)^q (1 − p(y_i|x_i)) = 2p_i^q (1 − p_i). In this case, the emphasis focus can be adjusted from 0 to 0.5 as q ranges from 0 to 1. However, in their practice (Zhang & Sabuncu, 2018), instead of this naive version, a truncated one is applied:\nL_GCEtrunc(x_i, y_i) = { L_q(p_i), if p_i > 0.5; L_q(0.5), if p_i ≤ 0.5 }, where L_q(γ) = (1 − γ^q)/q. (5)\nThe loss of an example with p_i ≤ 0.5 is constant, so its gradient is zero, which means it is dropped and does not contribute to the training. The main drawback is that at the initial stage the model is not well learned, so that the predicted p_i of most samples are smaller than 0.5. To address this, alternative convex search is exploited for iterative data pruning and parameter optimisation, making the approach quite complex and less appealing in practice.\nThe derivation details of Eq. (2), (3), (4) are presented in Section B of the supplementary material.
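To make the intrinsic weighting of Eqs. (2)-(4) concrete, the following is a minimal PyTorch-style sketch (our own illustrative code, not the paper's implementation; function and variable names are assumptions) that computes w_i^CCE, w_i^MAE and w_i^GCE from a batch of logits:

```python
import torch
import torch.nn.functional as F

def intrinsic_weights(logits, labels, q=0.7):
    """Per-sample weights w_i = ||dL/dz_i||_1 implied by CCE, MAE and GCE.

    logits: (N, C) raw outputs of the last FC layer; labels: (N,) class ids.
    q is the GCE hyper-parameter from Eq. (4).
    """
    p = F.softmax(logits, dim=1)                    # Eq. (1)
    p_i = p[torch.arange(len(labels)), labels]      # input-to-label relevance scores
    w_cce = 2.0 * (1.0 - p_i)                       # emphasis focus at p_i = 0
    w_mae = 4.0 * p_i * (1.0 - p_i)                 # emphasis focus at p_i = 0.5
    w_gce = 2.0 * p_i.pow(q) * (1.0 - p_i)          # focus moves from 0 to 0.5 with q
    return w_cce, w_mae, w_gce
```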
" }, { "heading": "3.2 GRADIENT RESCALING FOR EMPHASIS REGULARISATION", "text": "A loss function provides supervision information through its derivative with respect to a network's output. Therefore, there are two perspectives for improving the supervision information: 1) modifying the loss format to improve its corresponding derivative; 2) manipulating the gradient directly. In this work, we choose to control the gradient, which is more direct and flexible.\nAccording to Eq. (2), (3), (4), the gradients of CCE, MAE and GCE share the same direction. Our proposal, GR, unifies them from the gradient perspective. Being independent of the loss formula, a sample's gradient is rescaled linearly so that its weight is w_i^GR:\nw_i^GR = g(β p_i^λ (1 − p_i)) ⟹ ∂L/∂z_i = (∂L_CCE/∂z_i) (w_i^GR / w_i^CCE) = (∂L_MAE/∂z_i) (w_i^GR / w_i^MAE) = (∂L_GCE/∂z_i) (w_i^GR / w_i^GCE), (6)\nwhere λ and β are hyper-parameters controlling the emphasis focus and spread, respectively. By choosing a larger λ when more anomalies exist, GR regularises example weighting by moving the emphasis focus toward relatively easier training data points, thus embracing noise-robustness.\nFor clarification, we explicitly define the emphasis focus and spread over training examples: Definition 1 (Emphasis Focus ψ). The emphasis focus refers to the examples that receive the largest weight. Since an example's weight is determined by its input-to-label relevance score p_i, for simplicity we define the emphasis focus to be the input-to-label score to which the largest weight is assigned, i.e., ψ = argmax_{p_i} w_i^GR ∈ [0, 1).\nDefinition 2 (Emphasis Spread σ). The emphasis spread is the weight variance over all training instances in a mini-batch, i.e., σ = E((w_i^GR − E(w_i^GR))^2), where E(·) denotes the expectation of a variable.\nWith these definitions, we differentiate GR from other methods in Table 1. We show the sample weighting curves of GR with different settings in Figure 1. As shown in Figure 1c, the emphasis spread declines as λ increases. Therefore, we choose larger β values when λ is larger in Sec. 4.2.1. In principle, the transformation g could be any monotonically increasing function. Because the non-linear exponential mapping can change the overall weight variance and the relative weights between any two examples, we choose g(·) = exp(·), which works well in our practice. By integration, the exact loss corresponding to this gradient is an error function (non-elementary). We summarise several existing cases as follows (the ellipsis refers to other potential options which can be explored in the future):\nw_i^GR = { w_i^CCE, with β = 2, λ = 0, g = identity; w_i^MAE, with β = 4, λ = 1, g = identity; w_i^GCE, with β = 2, λ = q ∈ [0, 1], g = identity; exp(β p_i^λ (1 − p_i)), with β ≥ 0, λ ≥ 0, g = exp; ... } (7)" }, { "heading": "3.3 WHY DOES GR CONTRIBUTE TO ROBUST LEARNING?", "text": "Let us regard a deep network z as a black box that produces C logits, where C is the number of classes. During gradient back-propagation, an example's impact on the update of z is determined by its gradient with respect to the logit vector. The impact can be decomposed into two factors, i.e., gradient direction and magnitude. To reduce the impact of a noisy sample, we can either reduce its gradient magnitude or amend its gradient direction. In this work, we are inspired by the analysis of CCE, MAE and GCE, which differ only in gradient magnitude yet perform quite differently; this naturally motivates the view that gradient magnitude matters. That is why we explore rescaling the gradient magnitude, as illustrated in Figure 1. Amending gradient directions is worth studying in the future.
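As a concrete illustration of the GR scheme of Eq. (6) with g = exp, here is a minimal PyTorch-style sketch (our own code, not the authors' release): scaling each sample's CCE loss by the detached ratio w_i^GR / w_i^CCE rescales that sample's logit gradient to have L1 magnitude w_i^GR, without letting the weight itself receive gradients.

```python
import torch
import torch.nn.functional as F

def gr_loss(logits, labels, beta=8.0, lam=0.5):
    """Gradient rescaling (GR) as a per-sample reweighted CCE, cf. Eq. (6).

    The detached scale factor makes d(loss)/d(logits) equal to the CCE gradient
    direction with L1 magnitude w_gr = exp(beta * p^lam * (1 - p)).
    """
    p = F.softmax(logits, dim=1)
    p_i = p[torch.arange(len(labels)), labels].clamp(min=1e-12)
    w_gr = torch.exp(beta * p_i.pow(lam) * (1.0 - p_i))
    w_cce = 2.0 * (1.0 - p_i) + 1e-12               # intrinsic CCE weight, Eq. (2)
    scale = (w_gr / w_cce).detach()                 # rescale only; do not differentiate
    ce = F.cross_entropy(logits, labels, reduction="none")
    return (scale * ce).mean()
```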
" }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 IMAGE CLASSIFICATION WITH CLEAN TRAINING DATA", "text": "Datasets. We test on CIFAR-10 and CIFAR-100 (Krizhevsky, 2009), which contain 10 and 100 classes, respectively. In CIFAR-10, the training data contains 5k images per class while the test set includes 1k images per class. In CIFAR-100, there are 500 images per class for training and 100 images per class for testing.\nImplementation details. On CIFAR-10, following (He et al., 2016), we adopt ResNet-20 and ResNet-56 as backbones so that we can compare fairly with their reported results. On CIFAR-100, we follow D2L (Ma et al., 2018) and choose ResNet-44 to compare with its reported results. We use an SGD optimiser with momentum 0.9 and weight decay 10^{-4}. The learning rate is initialised to 0.1 and multiplied by 0.1 every 5k iterations. We apply the standard data augmentation as in (He et al., 2016; Ma et al., 2018): the original images are padded with 4 pixels on every side, followed by a random 32 × 32 crop and a horizontal flip. The batch size is 128.\nResults. Our purpose is to show that GR achieves performance competitive with CCE on clean data, demonstrating its general applicability. As reported in D2L, all noise-tolerant proposals (Patrini et al., 2017; Reed et al., 2015; Ma et al., 2018) perform similarly to CCE when training labels are clean, so we do not present other related competitors here. Our reimplemented results are shown in Table 2. For reference, the results reported in (He et al., 2016) on CIFAR-10 with CCE are 91.3% for ResNet-20 and 93.0% for ResNet-56. In D2L, the result on CIFAR-100 with ResNet-44 is 68.2%. Our reimplemented performance of CCE is only slightly different. For GR, we observe the best performance when the emphasis focus is 0, i.e., λ = 0. Furthermore, GR is insensitive to a wide range of emphasis spreads according to our observations in Figure 5 in the supplementary material.\nTreating training examples equally. As shown in Table 2, we obtain competitive performance by treating all training examples equally when β = 0. This is quite interesting and suggests that sample differentiation and reweighting are mainly beneficial when noise exists." }, { "heading": "4.2 IMAGE CLASSIFICATION WITH SYNTHETIC SYMMETRIC LABEL NOISE", "text": "Symmetric noise generation. Given a probability r, the original label of an image is changed to one of the other class labels uniformly, following (Tanaka et al., 2018; Ma et al., 2018); r denotes the noise rate. Symmetric label noise generally exists in large-scale real-world applications, where the dataset is so large that label quality is hard to guarantee. It is also demonstrated in (Vahdat, 2017) that symmetric noise is more challenging than asymmetric noisy labels (Reed et al., 2015; Patrini et al., 2017), which assume that label errors only exist within a predefined set of similar classes. All augmented training examples share the same label as the original one.
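The symmetric noise generation described above admits a simple sketch (our own illustrative code; array names are assumptions):

```python
import numpy as np

def corrupt_labels_symmetric(labels, num_classes, r, seed=0):
    """Symmetric label noise: with probability r, replace a label by one of
    the other num_classes - 1 labels, chosen uniformly (cf. Sec. 4.2)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    flip = rng.random(len(labels)) < r
    offsets = rng.integers(1, num_classes, size=len(labels))   # in {1, ..., C-1}
    labels[flip] = (labels[flip] + offsets[flip]) % num_classes
    return labels
```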
" }, { "heading": "4.2.1 EMPIRICAL ANALYSIS OF GR ON CIFAR-10", "text": "To understand GR empirically, we explore its behaviour on CIFAR-10 with r = 20%, 40%, 60%, 80%, respectively. We use ResNet-56, which has larger capacity than ResNet-20.\nDesign choices. We mainly analyse the impact of different emphasis focuses under different noise rates. We explore 5 emphasis focuses by setting β = 0 or different λ: 1) None: β = 0; there is no emphasis focus since all examples are treated equally; 2) 0: λ = 0; 3) 0∼0.5: λ = 0.5; 4) 0.5: λ = 1; 5) 0.5∼1: λ = 2. We remark that when λ is larger, the emphasis focus is higher, so that relatively easier training data points are emphasised. As shown in Figure 1, when the emphasis focus changes, the emphasis spread changes accordingly. Therefore, to set a proper spread for each emphasis focus, we try 4 emphasis spreads and choose the best one3 to compare the impact of the emphasis focus.\nResults analysis. We show the results in Table 3. The intact training set serves as a validation set, and we observe that its accuracy is always consistent with the final test accuracy. This suggests that in practice we can choose the model's hyper-parameters β, λ via a validation set. We display the training dynamics in Figure 2 and summarise our observations as follows: Fitting and generalisation. We observe that CCE always achieves the best accuracy on corrupted training sets, which indicates that CCE has a strong data fitting ability even under severe noise (Zhang et al., 2017). As a result, CCE has much worse final test accuracy than most models. Emphasising harder examples. When abnormal training examples exist, we obtain the worst final test accuracy if the emphasis focus is 0, i.e., with CCE and with GR at λ = 0. This shows that in applications where we have to learn from noisy training data, using CCE or simply focusing on harder training data points hurts the model's generalisation dramatically.\nEmphasis focus. When the noise rate is 0, 20%, 40%, 60%, and 80%, we obtain the best final test accuracy with λ = 0, λ = 0.5, λ = 1, λ = 2, and λ = 2, respectively. This demonstrates that when the noise rate is higher, we can improve a model's robustness by moving the emphasis focus towards relatively less difficult examples with a larger λ, which is informative in practice.\n3Since there is a large interval between the different β values in our four trials, we deduce that the chosen one may not be optimal. The focus of this work is not to optimise the hyper-parameters.\nEmphasis spread. As displayed in Table 3 and Figures 7-10 in the supplementary material, the emphasis spread also matters a lot when the emphasis focus, i.e., λ, is fixed. For example, in Table 3, when λ = 0, although it focuses on harder examples similarly to CCE, GR can outperform CCE by modifying the emphasis spread. As shown in Figures 7-10, some models even collapse and cannot converge if the emphasis spread is not reasonable." }, { "heading": "4.2.2 COMPETING WITH THE STATE-OF-THE-ART ON CIFAR-10", "text": "Implementation details. We follow the same settings as MentorNet (Jiang et al., 2018) to compare fairly with its reported results. The optimiser and data augmentation are described in Section 4.1.\nCompetitors. FullModel is the standard CCE trained using L2 weight decay and dropout (Srivastava et al., 2014). Forgetting (Arpit et al., 2017) searches the dropout parameter in the range of (0.2-0.9). Self-paced (Kumar et al., 2010), Focal Loss (Lin et al., 2017), and MentorNet (Jiang et al., 2018) are representatives of example reweighting algorithms. Reed Soft (Reed et al., 2015) is a weakly-supervised learning method. All methods use GoogLeNet V1 (Szegedy et al., 2015).\nResults. We compare the results under different noise rates in Table 4. GR with fixed hyper-parameters β = 8, λ = 0.5 outperforms the state-of-the-art GCE by a large margin, especially when label noise becomes severe. Better results can be expected when optimising the hyper-parameters for each case. We remark that FullModel (naive CCE) (Jiang et al., 2018) was trained with L2 weight decay and dropout. However, GR's regularisation effect is much better in both clean and noisy cases." }, { "heading": "4.2.3 COMPETING WITH THE STATE-OF-THE-ART ON CIFAR-100", "text": "Implementation details. Most baselines have been reimplemented in (Ma et al., 2018) with the same settings. Therefore, for direct comparison, we follow their experimental configurations exactly and use ResNet-44 (He et al., 2016).
The optimiser and data augmentation are described in Section 4.1. We repeat training and evaluation 5 times, using different random seeds for generating noisy labels and for the model's initialisation. The mean test accuracy and standard deviation are reported.\nCompetitors. We compare with D2L (Ma et al., 2018), GCE (Zhang & Sabuncu, 2018), and other baselines reimplemented in D2L: 1) Standard CCE (Ma et al., 2018); 2) Forward (Patrini et al., 2017) uses a noise-transition matrix to multiply the network's predictions for label correction; 3) Backward (Patrini et al., 2017) applies the noise-transition matrix to multiply the CCE losses for loss correction; 4) Bootstrapping (Reed et al., 2015) trains models with new labels generated by a convex combination of the original ones and their predictions. The convex combination can be soft (Boot-soft) or hard (Boot-hard); 5) D2L (Ma et al., 2018) achieves noise-robustness from a novel perspective of restricting the dimensionality expansion of learned subspaces during training and is the state-of-the-art; 6) since GCE outperforms MAE (Zhang & Sabuncu, 2018), we only reimplement GCE for comparison; 7) SL (Wang et al., 2019c) boosts CCE symmetrically with a noise-robust counterpart, i.e., reverse cross entropy.\nResults. We compare the results of GR and other algorithms in Table 5. GR outperforms the other competitors by a large margin, especially when label noise is severe, e.g., r = 40% and 60%. More importantly, we highlight that GR is much simpler and requires no extra information. Compared with Forward and Backward, GR does not need any prior knowledge about the noise-transition matrix. Bootstrapping targets label correction and is time-consuming. D2L estimates the local intrinsic dimensionality every b mini-batches and checks the turning point for dimensionality expansion every e epochs. However, b and e are difficult to choose, and iterative monitoring is time-consuming." }, { "heading": "4.3 IMAGE CLASSIFICATION WITH REAL-WORLD UNKNOWN NOISE", "text": "Dataset. Clothing 1M (Xiao et al., 2015) contains 1 million images. It is an industrial-level dataset and its noise structure is unknown. According to (Xiao et al., 2015), around 61.54% of the training labels are reliable, i.e., the noise rate is about 38.46%. There are 14 classes from several online shopping websites. In addition, there are 50k, 14k, and 10k images with clean labels for training, validation, and testing, respectively. Here, we follow and compare with existing methods that learn only from the noisy training data, since we would like to avoid exploiting auxiliary information.\nImplementation details. We train ResNet-50 (He et al., 2016) and follow exactly the same settings as (Patrini et al., 2017; Tanaka et al., 2018): 1) Initialisation: ResNet-50 is initialised by a publicly available model pretrained on ImageNet (Russakovsky et al., 2015); 2) Optimisation: an SGD optimiser with a momentum of 0.9 and a weight decay of 10^{-3} is applied. The learning rate starts at 10^{-3} and is divided by 10 after 5 epochs. Training terminates at 10 epochs; 3) Standard data augmentation: we first resize a raw input image to 256 × 256, and then crop it randomly at 224 × 224, followed by random horizontal flipping. The batch size is 64 due to memory limitations. Since the noise rate is around 38.46%, we simply set λ = 1, β = 16, following Table 3 for a noise rate of 40%.\nCompetitors.
We compare with other noise-robust algorithms that have been evaluated on Clothing 1M with similar settings: 1) Standard CCE (Patrini et al., 2017); 2) since Forward outperforms Backward on Clothing 1M (Patrini et al., 2017), we only present the result of Forward; 3) S-adaptation applies an additional softmax layer to estimate the noise-transition matrix (Goldberger & Ben-Reuven, 2017); 4) Masking is a human-assisted approach that uses human cognition to speculate about the structure of the noise-transition matrix (Han et al., 2018a); 5) Label optimisation (Tanaka et al., 2018) learns latent true labels and the model's parameters iteratively. Two regularisation terms are added for label optimisation and adjusted in practice.\nResults. The results are compared in Table 6. Under real-world unknown noise, GR also outperforms the state-of-the-art. It is worth mentioning that the burden of noise-transition matrix estimation in Forward and S-adaptation is heavy due to the alternating optimisation steps, and such estimation is non-trivial without sufficiently large data. Masking exploits human cognition of a structural prior and reduces the burden of estimation; nonetheless, its performance is not competitive. Similarly, Label Optimisation requires alternating optimisation steps and is time-consuming." }, { "heading": "4.4 VIDEO RETRIEVAL WITH DIVERSE ANOMALIES", "text": "Dataset and evaluation settings. MARS contains 20,715 videos of 1,261 persons (Zheng et al., 2016), with 1,067,516 frames in total. Because the person videos are collected by tracking and detection algorithms, abnormal examples exist, as shown in Figure 3 in the supplementary material. We remark that some anomalies contain only background or an out-of-distribution person; the exact noise type and rate are unknown. Following standard settings, we use 8,298 videos of 625 persons for training and 12,180 videos of the other 636 persons for testing. We report the cumulated matching characteristics (CMC) and mean average precision (mAP) results.\nImplementation details. Following (Liu et al., 2017; Wang et al., 2019a), we train GoogleNet V2 (Ioffe & Szegedy, 2015) and treat a video as an image set, which means we use only appearance information without exploiting latent temporal information. A video's representation is simply the average fusion of its frames' representations. The learning rate starts from 0.01 and is divided by 2 every 10k iterations. We stop training at 50k iterations. We apply an SGD optimiser with a weight decay of 0.0005 and a momentum of 0.9. The batch size is 180. We use standard data augmentation: a 227 × 227 crop is randomly sampled and flipped after resizing an original image to 256 × 256. Training settings are the same for each method. We implement GCE with its reported best settings. At testing, following (Wang et al., 2019a; Movshovitz-Attias et al., 2017; Law et al., 2017), we first L2-normalise the videos' features and then calculate the cosine similarity between every two of them.
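The testing protocol just described reduces to a matrix of dot products between normalised features; a minimal sketch (our own code, with assumed tensor names):

```python
import torch
import torch.nn.functional as F

def retrieval_similarity(query_feats, gallery_feats):
    """Testing protocol of Sec. 4.4: L2-normalise video features, then score
    every query-gallery pair by cosine similarity."""
    q = F.normalize(query_feats, dim=1)
    g = F.normalize(gallery_feats, dim=1)
    return q @ g.t()   # (num_queries, num_gallery) cosine similarities
```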
\nResults. The results are displayed in Table 7. Although DRSA (Li et al., 2018) and CAE (Chen et al., 2018) exploit extra temporal information by incorporating attention mechanisms, GR is superior to them in terms of both effectiveness and simplicity. OSM+CAA (Wang et al., 2019a) is the only comparable method. However, OSM+CAA combines CCE and a weighted contrastive loss to address anomalies, and is thus more complex than GR. In addition, we highlight that one query may have multiple matching instances in the MARS benchmark. Consequently, mAP is a more reliable and accurate performance assessment, and GR is the best in terms of mAP." }, { "heading": "4.5 BEATING STANDARD REGULARISERS UNDER LABEL NOISE", "text": "In Table 8, we compare our proposed regulariser GR with other standard ones, i.e., L2 weight decay and Dropout (Srivastava et al., 2014). We set the dropout rate to 0.2 and the L2 weight decay rate to 10^{-4}. For GR, as mentioned in Section 4.2.3, we fix β = 8, λ = 0.5. Interestingly, Dropout+L2 achieves 52.8% accuracy, which is even better than the state-of-the-art in Table 5, i.e., D2L with 52.0% accuracy. However, GR is significantly better than those standard regularisers and their combinations. GR works best when combined with L2 weight decay." }, { "heading": "5 CONCLUSION", "text": "In this work, we present three main contributions: 1) We analyse and answer a core research question: what training examples should be focused on and how large should the emphasis spread be? 2) We uncover and analyse that two basic factors, emphasis focus and spread, should be considered simultaneously when it comes to example weighting. Consequently, we propose a simple yet effective gradient rescaling framework serving as emphasis regularisation. 3) Extensive experiments on different tasks using different network architectures are reported for a better understanding and demonstration of GR's effectiveness, and are also valuable for applying GR in practice." }, { "heading": "A DISPLAY OF SEMANTICALLY ABNORMAL TRAINING EXAMPLES", "text": "" }, { "heading": "B DERIVATION DETAILS OF SOFTMAX, CCE, MAE AND GCE", "text": "B.1 DERIVATION OF SOFTMAX NORMALISATION\nBased on Eq. (1), we have\np(y_i|x_i)^{-1} = 1 + Σ_{j≠y_i} exp(z_ij − z_iy_i). (8)\nFor the left and right sides of Eq. (8), we calculate their derivatives with respect to z_ij simultaneously.\nIf j = y_i: −p(y_i|x_i)^{-2} ∂p(y_i|x_i)/∂z_iy_i = −Σ_{j≠y_i} exp(z_ij − z_iy_i) ⟹ ∂p(y_i|x_i)/∂z_iy_i = p(y_i|x_i)(1 − p(y_i|x_i)). (9)\nIf j ≠ y_i: −p(y_i|x_i)^{-2} ∂p(y_i|x_i)/∂z_ij = exp(z_ij − z_iy_i) ⟹ ∂p(y_i|x_i)/∂z_ij = −p(y_i|x_i)p(j|x_i). (10)\nIn summary, the derivative of the softmax layer is:\n∂p(y_i|x_i)/∂z_ij = { p(y_i|x_i)(1 − p(y_i|x_i)), if j = y_i; −p(y_i|x_i)p(j|x_i), if j ≠ y_i }. (11)\nB.2 DERIVATION OF CCE\nAccording to Eq. (2), we have\nL_CCE(x_i; f_θ, W) = −log p(y_i|x_i). (12)\nTherefore, we obtain (the parameters are omitted for brevity):\n∂L_CCE/∂p(j|x_i) = { −p(y_i|x_i)^{-1}, if j = y_i; 0, if j ≠ y_i }. (13)\nB.3 DERIVATION OF MAE\nAccording to Eq. (3), we have\nL_MAE(x_i; f_θ, W) = 2(1 − p(y_i|x_i)). (14)\nTherefore, we obtain\n∂L_MAE/∂p(j|x_i) = { −2, if j = y_i; 0, if j ≠ y_i }. (15)\nB.4 DERIVATION OF GCE\nAccording to Eq. (4), we have\nL_GCE(x_i; f_θ, W) = (1 − p(y_i|x_i)^q)/q. (16)\nTherefore, we obtain\n∂L_GCE/∂p(j|x_i) = { −p(y_i|x_i)^{q−1}, if j = y_i; 0, if j ≠ y_i }. (17)\nB.5 DERIVATIVES W.R.T. LOGITS z_i\nB.5.1 ∂L_CCE/∂z_i\nThe calculation is based on Eq. (13) and Eq. (11). By the chain rule, ∂L_CCE/∂z_ij = Σ_{m=1}^{C} (∂L_CCE/∂p(m|x_i)) (∂p(m|x_i)/∂z_ij), in which only the m = y_i term is non-zero.\nIf j = y_i: ∂L_CCE/∂z_iy_i = (∂L_CCE/∂p(y_i|x_i)) (∂p(y_i|x_i)/∂z_iy_i) = p(y_i|x_i) − 1. (18)\nIf j ≠ y_i: ∂L_CCE/∂z_ij = (∂L_CCE/∂p(y_i|x_i)) (∂p(y_i|x_i)/∂z_ij) = p(j|x_i). (19)\nIn summary, ∂L_CCE/∂z_i can be represented as:\n∂L_CCE/∂z_ij = { p(y_i|x_i) − 1, if j = y_i; p(j|x_i), if j ≠ y_i }. (20)\nB.5.2 ∂L_MAE/∂z_i\nThe calculation is analogous to that of ∂L_CCE/∂z_i. According to Eq. (15) and Eq. (11), if j = y_i: ∂L_MAE/∂z_iy_i = −2p(y_i|x_i)(1 − p(y_i|x_i)); (21) otherwise (j ≠ y_i): ∂L_MAE/∂z_ij = 2p(y_i|x_i)p(j|x_i). (22)\nIn summary, ∂L_MAE/∂z_i is:\n∂L_MAE/∂z_ij = { 2p(y_i|x_i)(p(y_i|x_i) − 1), if j = y_i; 2p(y_i|x_i)p(j|x_i), if j ≠ y_i }. (23)\nB.5.3 ∂L_GCE/∂z_i\nThe calculation is based on Eq. (17) and Eq. (11).\nIf j = y_i: ∂L_GCE/∂z_iy_i = p(y_i|x_i)^q (p(y_i|x_i) − 1). (24)\nIf j ≠ y_i: ∂L_GCE/∂z_ij = p(y_i|x_i)^q p(j|x_i). (25)\nIn summary, ∂L_GCE/∂z_i can be represented as:\n∂L_GCE/∂z_ij = { p(y_i|x_i)^q (p(y_i|x_i) − 1), if j = y_i; p(y_i|x_i)^q p(j|x_i), if j ≠ y_i }. (26)" }, { "heading": "C SMALL-SCALE FINE-GRAINED VISUAL CATEGORISATION OF VEHICLES", "text": "How does GR perform on small datasets, for example, when the number of data points is no more than 5,000? We have tested GR on CIFAR-10 and CIFAR-100 in the main paper; however, both of them contain a training set of 50,000 images.\nWe answer this question from different perspectives as follows:\n1. The problem of label noise we study on CIFAR-10 and CIFAR-100 in Section 4.2 is of a similar scale. For example:\n• In Table 4, when the noise rate is 80% on CIFAR-10, the number of clean training examples is around 50,000 × 20% = 5,000 × 2. Therefore, this clean set is only twice as large as 5,000. Moreover, the learning process may be disturbed by the other, noisy data points.\n• In Table 5, when the noise rate is 60% on CIFAR-100, the number of clean training data points is about 50,000 × 40% = 5,000 × 4, i.e., four times as large as 5,000.\n2. We compare GR with other standard regularisers on a small-scale fine-grained visual categorisation problem in Table 9.\nVehicles-10 Dataset. In CIFAR-100 (Krizhevsky, 2009), there are 20 coarse classes, including vehicles 1 and 2. Vehicles 1 contains 5 fine classes: bicycle, bus, motorcycle, pickup truck, and train. Vehicles 2 includes another 5 fine classes: lawn-mower, rocket, streetcar, tank, and tractor. We build a small-scale vehicle classification dataset composed of these 10 vehicle classes from CIFAR-100. Specifically, the training set contains 500 images per vehicle class while the testing set has 100 images per class. Therefore, the number of training data points is 5,000 in total." }, { "heading": "D TRAINING UNDER ASYMMETRIC LABEL NOISE", "text": "We evaluate on CIFAR-100, whose 100 classes are grouped into 20 coarse classes. Every coarse class has 5 fine classes. Within each coarse class, an image's label is flipped to one of the other four labels uniformly with a probability r, where r represents the noise rate. We set r = 0.2. The results are displayed in Table 10. When GR is used, the performance is better than that of its counterparts without GR." }, { "heading": "E THE EFFECTIVENESS OF LABEL CORRECTION", "text": "The results are shown in Table 11." }, { "heading": "F MORE EMPIRICAL RESULTS", "text": "F.1 REVIEW\nQuestion: What training examples should be focused on, and how much more should they be emphasised, when training DNNs under label noise?\nProposal: Gradient rescaling incorporates emphasis focus (centre/focal point) and emphasis spread, and serves as explicit regularisation in terms of sample reweighting/emphasis.\nFinding: When the noise rate is higher, we can improve a model's robustness by moving the emphasis focus towards relatively less difficult examples.\nF.2 DETAILED RESULTS ON CIFAR-100\nMore detailed results on CIFAR-100 are shown in Table 12, which supplements Table 5 in the main text.\nF.3 DETAILED TRAINING DYNAMICS\nMore detailed training dynamics are displayed in Figures 4-10." } ]
2019
null
SP:d74f95781e1b0f164644b7e30247791eae6afc79
[ "This paper proposes a novel data augmentation method, untied MixUp (UMixUp), which is a general case of both MixUp and Directional Adversarial Traning (DAT). DAT is referred to in this paper as a scheme that only input feature vectors are mixed, while MixUp also incorporates their corresponding labels. The authors provide a theoretical discussion that both DAT and UMixUp converges to be equivalent to each other when the number of training samples becomes infinity. Experimental results on Cifar 10, Cifar 100, MNIST, and Fashion MNIST show quantitative comparisons among the baseline, MixUp, and UMixUp.", "This paper introduces directional adversarial training (DAT) and UMixUP, which are extension methods of MixUp. DAT and UMixUp use the same method of MixUp for generating samples but use different label mixing ratios where DAT retains the sample's original label. In contrast, UMixUp uses a function of the input mixing ratio. This paper shows that UMixUp and DAT are equivalent when the number of samples tends to infinity. In the experiments, UMixUp provides an improvement over MixUp." ]
MixUp is a data augmentation scheme in which pairs of training samples and their corresponding labels are mixed using linear coefficients. Without label mixing, MixUp becomes a more conventional scheme: input samples are moved but their original labels are retained. Because samples are preferentially moved in the direction of other classes, we refer to this method as directional adversarial training, or DAT. We show that under two mild conditions, MixUp asymptotically converges to a subset of DAT. We define untied MixUp (UMixUp), a superset of MixUp wherein training labels are mixed with different linear coefficients to those of their corresponding samples. We show that under the same mild conditions, untied MixUp converges to the entire class of DAT schemes. Motivated by the understanding that UMixUp is both a generalization of MixUp and a form of adversarial training, we experiment with different datasets and loss functions to show that UMixUp provides improved performance over MixUp. In short, we present a novel interpretation of MixUp as belonging to a class highly analogous to adversarial training, and on this basis we introduce a simple generalization which outperforms MixUp.
[]
[ { "authors": [ "Devansh Arpit", "Stanislaw K. Jastrzebski", "Nicolas Ballas", "David Krueger", "Emmanuel Bengio", "Maxinder S. Kanwal", "Tegan Maharaj", "Asja Fischer", "Aaron C. Courville", "Yoshua Bengio", "Simon Lacoste-Julien" ], "title": "A closer look at memorization in deep networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "arXiv preprint arXiv:1802.00420,", "year": 2018 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "venue": "CoRR, abs/1810.04805,", "year": 2018 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Ian J. Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In ICLR 2015,", "year": 2015 }, { "authors": [ "Hongyu Guo", "Yongyi Mao", "Richong Zhang" ], "title": "Mixup as locally linear out-of-manifold regularization", "venue": "arXiv preprint arXiv:1809.02499,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Warren He", "Bo Li", "Dawn Song" ], "title": "Decision boundary analysis of adversarial examples", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Anders Krogh", "John A. Hertz" ], "title": "A simple weight decay can improve generalization", "venue": "In Advances in Neural Information Processing Systems 4, NIPS Conference,", "year": 1991 }, { "authors": [ "Jan Kukacka", "Vladimir Golkov", "Daniel Cremers" ], "title": "Regularization for deep learning: A taxonomy", "venue": null, "year": 2017 }, { "authors": [ "Alex Lamb", "Vikas Verma", "Juho Kannala", "Yoshua Bengio" ], "title": "Interpolated Adversarial Training: Achieving Robust Neural Networks without Sacrificing Too Much Accuracy", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Colin McDiarmid" ], "title": "On the method of bounded differences", "venue": "Surveys in combinatorics,", "year": 1989 }, { "authors": [ "Takeru Miyato", "Andrew M Dai", "Ian Goodfellow" ], "title": "Adversarial training methods for semisupervised text classification", "venue": "arXiv preprint arXiv:1605.07725,", "year": 2016 }, { "authors": [ "Uri Shaham", "Yutaro Yamada", "Sahand Negahban" ], "title": "Understanding adversarial training: Increasing local stability of supervised models through robust optimization", "venue": null, "year": 2018 }, { "authors": [ "Nitish Srivastava", "Geoffrey E. 
Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting", "venue": "Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Vikas Verma", "Alex Lamb", "Christopher Beckham", "Amir Najafi", "Aaron Courville", "Ioannis Mitliagkas", "Yoshua Bengio" ], "title": "Manifold mixup: Learning better representations by interpolating hidden states", "venue": null, "year": 2018 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "In Proceedings of the British Machine Vision Conference 2016,", "year": 2016 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "arXiv preprint arXiv:1710.09412,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep learning applications often require complex networks with a large number of parameters (He et al., 2016; Zagoruyko & Komodakis, 2016; Devlin et al., 2018). Although neural networks perform so well that their ability to generalize is an area of study in itself (Zhang et al., 2017a; Arpit et al., 2017), their high complexity nevertheless causes them to overfit their training data (Kukacka et al., 2017). For this reason, effective regularization techniques are in high demand.\nThere are two flavors of regularization: complexity curtailing and data augmentation 1. Complexity curtailing methods constrain models to learning in a subset of parameter space which has a higher probability of generalizing well. Notable examples are weight decay (Krogh & Hertz, 1991) and dropout (Srivastava et al., 2014).\nData augmentation methods add transformed versions of training samples to the original training set. Conventionally, transformed samples retain their original label, so that models effectively see a larger set of data-label training pairs. Commonly applied transformations in image applications include flips, crops and rotations.\nA recently devised family of augmentation schemes called adversarial training has attracted active research interest (Szegedy et al., 2013; Goodfellow et al., 2014; Miyato et al., 2016; Athalye et al., 2018; Shaham et al., 2018; He et al., 2018). Adversarial training seeks to reduce a model’s propensity to misclassify minimally perturbed training samples, or adversarials. While attack algorithms used for testing model robustness may search for adversarials in unbounded regions of input space, adversarial training schemes generally focus on perturbing training samples within a bounded region, while retaining the sample’s original label (Goodfellow et al., 2015; Shaham et al., 2018).\nAnother recently proposed data augmentation scheme is MixUp (Zhang et al., 2017b), in which new samples are generated by mixing pairs of training samples using linear coefficients. Despite its well established generalization performance (Zhang et al., 2017b; Guo et al., 2018; Verma et al., 2018), the working mechanism of MixUp is not well understood. Guo et al. (2018) suggest viewing MixUp as imposing local linearity on the model using points outside of the data manifold. While this\n1Some authors describe these flavors as “data independent” and “data-dependent” (Guo et al., 2018).\nperspective is insightful, we do not believe it paints a full picture of how MixUp operates. A recent study (Lamb et al., 2019) provides empirical evidence that MixUp improves adversarial robustness, but does not present MixUp as a form of adversarial training.\nWe build a framework to understand MixUp in a broader context: we argue that adversarial training is a central working principle of MixUp. To support this contention, we connect MixUp to a MixUplike scheme which does not perform label mixing, and we relate this scheme to adversarial training.\nWithout label mixing, MixUp becomes a conventional augmentation scheme: input samples are moved, but their original labels are retained. Because samples are moved in the direction of other samples – which are typically clustered in input space – we describe this method as ‘directional’. Because this method primarily moves training samples in the direction of adversarial classes, this method is analogous to adversarial training. We thus refer to MixUp without label mixing as directional adversarial training (DAT). 
We show that MixUp converges to a subset of DAT under mild conditions, and we thereby argue that adversarial training is a working principle of MixUp.\nInspired by this new understanding of MixUp as a form of adversarial training, and upon realizing that MixUp is (asymptotically) a subset of DAT, we introduce Untied MixUp (UMixUp), a simple enhancement of MixUp which converges to the entire family of DAT schemes, as depicted in Figure 1. Untied MixUp mixes data-label training pairs in a similar way to MixUp, with the distinction that the label mixing ratio is an arbitrary function of the sample mixing ratio. We perform experiments to show that UMixUp's classification performance improves upon MixUp.\nIn short, this research is motivated by a curiosity to better understand the working mechanism of MixUp. In so doing, we aim to:\n1. Establish DAT as analogous to adversarial training. This is discussed in section 4.\n2. Establish UMixUp as a superset of MixUp, and as converging to the entire family of DAT schemes. In so doing, a) establish MixUp's convergence to a subset of DAT, and thereby that it operates analogously to adversarial training; and b) establish UMixUp as a broader class of MixUp-like schemes that operate analogously to adversarial training. This is discussed in section 5.\n3. Establish empirically that UMixUp's classification performance improves upon MixUp. This is discussed in section 6.\nFinally, we note that this paper has another contribution. Conventionally, MixUp is only applicable to baseline models that use the cross-entropy loss. All analytical results we develop in this paper are applicable to a wider family of models using any loss function that we term target-linear. We define target-linearity and experiment with a new loss function called the negative-cosine loss to show its potential.\nEssential proofs of theoretical results are given in the Appendix." }, { "heading": "2 PRELIMINARIES", "text": "Column vectors are denoted by bold letters such as m, and sets are denoted by calligraphic uppercase letters such as M. The components of a vector are denoted by a bracketed index. For example, m[i] denotes the i-th component of m.\nRegular (non-calligraphic) capitalized letters such as X will denote random variables, and their lowercase counterparts, e.g., x, will denote realizations of a random variable. Any sequence (a_1, a_2, . . . , a_n) will be denoted by a_1^n. Likewise, (A_1, A_2, . . . , A_n) will be denoted by A_1^n, and a sequence of sample pairs ((x_1, x'_1), (x_2, x'_2), . . . , (x_n, x'_n)) by (x, x')_1^n.\nFor any value a ∈ [0, 1], we will use ā as a short notation for 1 − a.\nClassification Setting. Consider a standard classification problem, in which one wishes to learn a classifier that predicts the class label for a sample.\nFormally, let X be a vector space in which the samples of interest live and let Y be the set of all possible labels associated with these samples. The set of training samples will be denoted by D, a subset of X. We will use t(x) to denote the true label of x. Let F be a neural network function, parameterized by θ, which maps X to another vector space Z. Let ϕ : Y → Z be a function that maps a label in Y to an element in Z such that for any y, y' ∈ Y, if y ≠ y', then ϕ(y) ≠ ϕ(y'). In the space Z, we refer to F(x) as the model's prediction. With slight abuse of language, we will occasionally refer to both t(x) and ϕ(t(x)) as the "label" of x. 
Let ℓ : Z × Z → R be a loss function, using which one defines an overall loss function as\nL := (1/|D|) Σ_{x∈D} ℓ(F(x), ϕ(t(x))) (1)\nHere we have adopted the notational convention that the first argument of ℓ represents the model's prediction and the second represents the target label. In this setting, the learning problem is formulated as minimizing L with respect to its characterizing parameters θ.\nTarget-Linear Loss Functions. We say that a loss function ℓ(z, z') is target-linear if for any scalars α and β,\nℓ(z, αz_1 + βz_2) = α ℓ(z, z_1) + β ℓ(z, z_2)\nTarget-linear loss functions arise naturally in many settings, for which we now provide two examples. For convenience, we define the vectors v = F(x) and y = ϕ(t(x)).\nCross-Entropy Loss. The conventional cross-entropy loss function, written in our notation, is defined as:\nℓ_CE(F(x), ϕ(t(x))) = ℓ_CE(v, y) := −Σ_{i=1}^{dim(Z)} y[i] log v[i]\nwhere v and y are constrained to being probability vectors. We note that in conventional applications, dim(Z) = |Y|, and the target label y is a one-hot vector where y[i] = 1 if i = t(x) and y[i] = 0 otherwise. Constraining v to being a probability vector is achieved using a softmax output layer.\nNegative-Cosine Loss. The "negative-cosine loss", usually used in its negated version, i.e., as the cosine similarity, can be defined as follows:\nℓ_NC(F(x), ϕ(t(x))) = ℓ_NC(v, y) := −v^T y\nwhere v and y are constrained to being unit-length vectors. For v this can be achieved by simple division at the output layer, and for y by limiting the range of ϕ to an orthonormal basis (making it a conventional label embedding function).\nIt is clear that the cross-entropy loss ℓ_CE and the negative-cosine loss ℓ_NC are both target-linear, directly following from the definition of target-linearity.\nAssumptions. The theoretical development of this paper relies on two fundamental assumptions, which we call "axioms".\nAxiom 1 (Target linearity). The loss function ℓ used for the classification setting is target-linear.\nThat is, the study of MixUp in this paper in fact goes beyond the standard MixUp, which uses the cross-entropy loss.\nMuch of the development in this paper concerns drawing sample pairs (x, x') from D × D. Suppose that (x, x')_1^n is a length-n sequence of sample pairs drawn from D × D. A sequence (x, x')_1^n is said to be symmetric if for every (a, b) ∈ D × D, the number of occurrences of (a, b) in the sequence is equal to that of (b, a). A distribution Q on D × D will be called exchangeable, or symmetric, if for any (x, x') ∈ D × D, Q((x, x')) = Q((x', x)).\nAxiom 2 (Symmetric pair-sampling distribution). Whenever a sample pair (x, x') is drawn from a distribution Q, Q is assumed to be symmetric.\nIn the standard MixUp, two samples are drawn independently from D to form a pair, making this condition satisfied.
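As a concrete illustration of a target-linear loss, the following is a minimal sketch of the negative-cosine setup later used in Sec. 6.1 (our own code, not the authors' implementation; the random unit-length class embeddings follow the experimental description, and all names are assumptions). Target-linearity holds because the loss is linear in its second argument, so it accepts mixed targets directly.

```python
import torch
import torch.nn.functional as F

class NegativeCosineLoss(torch.nn.Module):
    """Target-linear negative-cosine loss l_NC(v, y) = -v^T y, with v the
    unit-normalised network output and phi(y) a fixed random unit vector
    per class (a sketch; dimensions are assumptions)."""

    def __init__(self, num_classes, dim):
        super().__init__()
        phi = F.normalize(torch.randn(num_classes, dim), dim=1)
        self.register_buffer("phi", phi)     # label embedding, fixed during training

    def forward(self, features, targets):
        v = F.normalize(features, dim=1)     # constrain v to unit length
        y = self.phi[targets]                # phi(t(x))
        return -(v * y).sum(dim=1).mean()
```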
 }, { "heading": "3 MIXUP, DAT, UNTIED MIXUP", "text": "" }, { "heading": "3.1 INFORMAL SUMMARY", "text": "We first provide a summary of each scheme for the reader's convenience, and then describe each scheme more systematically. For concision of the equations to follow, we define\ny = ϕ(t(x)) and y' = ϕ(t(x'))\nMixUp is a data augmentation scheme in which samples are linearly combined using some mixing ratio λ ∈ [0, 1]:\nx_g = λx + (1 − λ)x' (2)\nwhere λ ∼ P^Mix. A target label is generated using the same mixing ratio λ:\ny_g = λy + (1 − λ)y' (MixUp)\nDAT and UMixUp use the same method (2) for generating samples, but use different λ distributions (P^DAT and P^uMix respectively). DAT and UMixUp also differ from MixUp in their target labels. DAT retains the sample's original label:\ny_g = y (DAT)\nwhereas UMixUp's label mixing ratio is a function of λ:\ny_g = γ(λ)y + (1 − γ(λ))y' (UMixUp)\nIn Untied MixUp, the label mixing ratio is "untied" from the sample mixing ratio, and can be any γ(λ). We will refer to γ as the weighting function. An Untied MixUp scheme is thus specified both by its mixing policy P^uMix and a weighting function γ." }, { "heading": "3.2 FORMAL DEFINITIONS", "text": "To draw comparisons between MixUp, DAT, and Untied MixUp schemes, we establish a framework for characterizing their optimization problems. To that end, we define each model's loss function ℓ^m in terms of its baseline target-linear loss function ℓ_b, where the superscript m is replaced with a model identifier (i.e., ℓ^Mix, ℓ^DAT, ℓ^uMix). Each model's overall loss function L^m is defined in terms of its loss function ℓ^m as per equation 1 (where equation 1's ℓ is ℓ^m). We denote by L^m_E the expected value of each scheme's overall loss with respect to its mixing ratio Λ.\nLet n be a positive integer. In every scheme, a sequence (x, x')_1^n := ((x_1, x'_1), (x_2, x'_2), . . . , (x_n, x'_n)) of sample pairs is drawn i.i.d. from Q, and a sequence λ_1^n := (λ_1, λ_2, . . . , λ_n) of values is drawn i.i.d. from P^m, where P^m is a distribution over [0, 1] unique to each model.\nMixUp. For any x, x' ∈ D and any λ ∈ [0, 1], denote\nℓ^Mix(x, x', λ) := ℓ_b(F(λx + λ̄x'), λy + λ̄y')\nLet P^m be P^Mix; in other words, a sequence λ_1^n := (λ_1, λ_2, . . . , λ_n) of values is drawn i.i.d. from P^Mix. We denote the overall loss L^Mix((x, x')_1^n, λ_1^n) and the expected overall loss L^Mix_E((x, x')_1^n):\nL^Mix((x, x')_1^n, λ_1^n) := (1/n) Σ_{k=1}^{n} ℓ^Mix(x_k, x'_k, λ_k) (3)\nL^Mix_E((x, x')_1^n) := E_{λ_1^n ~iid P^Mix} L^Mix((x, x')_1^n, λ_1^n)\nIn MixUp, we refer to P^Mix as the mixing policy.\nDirectional Adversarial Training (DAT). For any x, x' ∈ D and any λ ∈ [0, 1], we denote\nℓ^DAT(x, x', λ) := ℓ_b(F(λx + λ̄x'), y)\nLet P^m be P^DAT, such that the members of λ_1^n are drawn i.i.d. from P^DAT. We denote the overall loss L^DAT((x, x')_1^n, λ_1^n) and the expected overall loss L^DAT_E((x, x')_1^n):\nL^DAT((x, x')_1^n, λ_1^n) := (1/n) Σ_{k=1}^{n} ℓ^DAT(x_k, x'_k, λ_k) (4)\nL^DAT_E((x, x')_1^n) := E_{λ_1^n ~iid P^DAT} L^DAT((x, x')_1^n, λ_1^n)\nIn DAT, we refer to P^DAT as the adversarial policy.\nUntied MixUp (UMixUp). Let γ be a function mapping [0, 1] to [0, 1]. For any x, x' ∈ D and any λ ∈ [0, 1], we denote\nℓ^uMix(x, x', λ, γ) := ℓ_b(F(λx + λ̄x'), γ(λ)y + γ̄(λ)y')\nLet P^m be P^uMix, and denote the overall and expected overall loss functions L^uMix((x, x')_1^n, λ_1^n, γ) and L^uMix_E((x, x')_1^n, γ) respectively:\nL^uMix((x, x')_1^n, λ_1^n, γ) := (1/n) Σ_{k=1}^{n} ℓ^uMix(x_k, x'_k, λ_k, γ)\nL^uMix_E((x, x')_1^n, γ) := E_{λ_1^n ~iid P^uMix} L^uMix((x, x')_1^n, λ_1^n, γ)\nAt this point, it is apparent that MixUp is a special case of Untied MixUp, where the function γ(λ) takes the simple form γ(λ) = λ.
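To make the three schemes concrete, here is a minimal PyTorch-style sketch (our own illustrative code, not the authors' implementation) that builds one mixed batch; with γ(λ) = λ it reduces to MixUp, and with γ ≡ 1 to DAT. The soft target y_mix is meant to be consumed by a target-linear loss.

```python
import torch
import torch.nn.functional as F

def umixup_batch(x, x2, y, y2, num_classes, alpha=1.0, beta=1.0, gamma=lambda l: l):
    """One UMixUp batch: inputs mixed with per-sample lam ~ Beta(alpha, beta),
    one-hot labels mixed with gamma(lam). gamma must map a tensor of mixing
    ratios to a tensor of the same shape."""
    lam = torch.distributions.Beta(alpha, beta).sample((x.size(0),))
    lam_x = lam.view(-1, *([1] * (x.dim() - 1)))   # broadcast over input dims
    x_mix = lam_x * x + (1.0 - lam_x) * x2
    g = gamma(lam).view(-1, 1)
    y1 = F.one_hot(y, num_classes).float()
    y2h = F.one_hot(y2, num_classes).float()
    return x_mix, g * y1 + (1.0 - g) * y2h
```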
" }, { "heading": "4 DAT AS ANALOGOUS TO ADVERSARIAL TRAINING", "text": "The main theoretical result of this paper is the relationship established between DAT and UMixUp, and by extension MixUp. Both MixUp and UMixUp will be shown to converge to DAT as the number of mixed sample pairs, n, tends to infinity. Prior to developing these results, we provide insight into DAT, in terms of its similarity to adversarial training and its regularization mechanisms.\nConventional adversarial training schemes augment the original training dataset by searching for approximations of true adversarials within bounded regions around each training sample. For a training sample x, a bounded region U known as an Lp ball is defined as U = {x + η : ||η||_p < ε}, for some ε > 0. Over this region, the loss function with respect to the true label of x is maximized. A typical loss function for an adversarial scheme is\nℓ(F(x), y) = max_{x̃∈U} ℓ_b(F(x̃), y)\nwhere ℓ_b is the baseline loss function. Simply put, baseline training serves to learn correct classification over the training data, whereas adversarial training moves the classification boundary to improve generalization.\nDAT, on the other hand, combines intra-class mixing (mixing two samples of the same class) and inter-class mixing (mixing samples of different classes). Intra-class mixing serves to smooth the classification boundaries of inner-class regions, while inter-class mixing perturbs training samples in the direction of adversarial classes, which improves generalization. Inter-class mixing dwarfs intra-class mixing by volume of generated samples seen by the learning model in most many-class learning problems (by a 9-1 ratio in balanced 10-class problems, for instance). DAT, which primarily consists of inter-class mixing, can therefore be seen as analogous to adversarial training.\nThe key distinction between conventional adversarial training and inter-class mixing is that MixUp movement is determined probabilistically within a bounded region, while adversarial movement is deterministic.\nFigure 2 illustrates the connection between standard adversarial training and DAT. Consider the problem of classifying the blue points and the black points in Figure 2a), where the dashed curve is a ground-truth classifier and the black curve indicates the classification boundary of F(x), which overfits the training data. In adversarial training, a training sample x is moved to a location within an Lp-ball around x while keeping its label to further train the model; the location, denoted by x̂1 in Figure 2b), is chosen to maximize the training loss.\nIn DAT, a second sample x' governs the direction in which x is perturbed. If x' is chosen from a different class, as shown in Figure 2c), then the generated sample x̂2 is used to further train the model. If x' is chosen from the same class, as shown in Figure 2d), then the sample x̂3 is used in further training. Note that the inter-class mixed sample x̂2 pushes the model's classification boundary closer to the ground-truth classifier, thus connecting adversarial training and DAT. The intra-class sample x̂3, on the other hand, mainly helps to smooth inner parts of the class region. The latter behaviour is an additional feature of DAT and MixUp, which distinguishes these schemes from adversarial training." }, { "heading": "5 UNTIED MIXUP AS ASYMPTOTICALLY EQUIVALENT TO DAT", "text": "We now show that Untied MixUp and DAT are equivalent when n tends to infinity. A consequence of this equivalence is that it infuses both MixUp and UMixUp with the intuition of adversarial training. 
To that end, we relate the Untied MixUp loss function ℓ^uMix to the DAT loss function ℓ^DAT.\nLemma 1. For any (x, x') ∈ D × D and any λ ∈ [0, 1],\nℓ^uMix(x, x', λ, γ) = γ(λ) ℓ^DAT(x, x', λ) + γ̄(λ) ℓ^DAT(x', x, λ̄)\nThis result follows directly from the target-linearity of the loss function.\nThe next two lemmas show that as n tends to infinity, the overall loss of both DAT and UMixUp converges in probability to the respective overall expected loss.\nLemma 2. As n increases, L^DAT((x, x')_1^n, Λ_1^n) converges to L^DAT_E((x, x')_1^n) in probability.\nLemma 3. As n increases, L^uMix((x, x')_1^n, Λ_1^n, γ) converges to L^uMix_E((x, x')_1^n, γ) in probability.\nThese two lemmas have similar proofs, so only the proof of Lemma 2 is given, in Section A.1.\nNext we show that as n tends to infinity, UMixUp converges in probability to a subset of DAT, and DAT converges in probability to a subset of UMixUp. In other words, we show that as n increases, UMixUp converges to being equivalent to the entire class of DAT schemes.\nFor that purpose, let P denote the space of mixing policies, i.e., distributions on [0, 1], and let F denote the space of all functions mapping [0, 1] to [0, 1]. Each configuration in P × F defines an Untied MixUp scheme. We now define U, which maps a DAT scheme to an Untied MixUp scheme. Specifically, U is a map from P to P × F such that for any p ∈ P, U(p) is a configuration (p', g) ∈ P × F, where\np'(λ) := (p(λ) + p(1 − λ))/2 and g(λ) := p(λ)/(p(λ) + p(1 − λ)) (5)\nLemma 4. Let (x, x')_1^n be a sequence of sample pairs on which an Untied MixUp scheme specified by (P^uMix, γ) and a DAT scheme with policy P^DAT are applied independently. If (x, x')_1^n is symmetric and (P^uMix, γ) = U(P^DAT), then L^uMix_E((x, x')_1^n, γ) = L^DAT_E((x, x')_1^n).\nWe now define another map Du that maps an Untied MixUp scheme to a DAT scheme. Specifically, Du is a map from P × F to P such that for any (p, g) ∈ P × F, Du(p, g) is a configuration p' ∈ P, where\np'(λ) := g(λ)p(λ) + ḡ(λ̄)p(λ̄) = g(λ)p(λ) + (1 − g(1 − λ))p(1 − λ)\nIt is easy to verify that ∫_0^1 p'(λ)dλ = 1. Thus p' is indeed a distribution in P and Du is well defined.\nLemma 5. Let (x, x')_1^n be a sequence of sample pairs on which an Untied MixUp scheme specified by (P^uMix, γ) and a DAT scheme with policy P^DAT are applied independently. If (x, x')_1^n is symmetric and P^DAT = Du(P^uMix, γ), then L^uMix_E((x, x')_1^n, γ) = L^DAT_E((x, x')_1^n).\nLemmas 2, 3, 4 and 5 provide the building blocks for Theorem 1, which we state hereafter. As n increases, both DAT and UMixUp converge in probability toward their respective expected losses (Lemmas 2 and 3). Since, as n increases, the empirical distribution of the sequence (x, x')_1^n becomes arbitrarily close to the symmetric sampling distribution Q, by Lemma 4 the family of DAT schemes converges in probability to a subset of UMixUp schemes. Lemma 5 proves the converse, i.e., that as n increases the family of UMixUp schemes converges in probability to a subset of DAT schemes. As n increases, the family of UMixUp schemes therefore converges in probability to the entire family of DAT schemes.\nTheorem 1. Let (X, X')_1^∞ be drawn i.i.d. from Q. To this sample-pair data, an Untied MixUp scheme specified by (P^uMix, γ) and a DAT scheme specified by P^DAT are applied. In the Untied MixUp scheme, let Λ_1^∞ be drawn i.i.d. from P^uMix; in the DAT scheme, let Υ_1^∞ be drawn i.i.d. from P^DAT. If P^DAT = Du(P^uMix, γ) or (P^uMix, γ) = U(P^DAT), then\n|L^uMix((X, X')_1^n, Λ_1^n, γ) − L^DAT((X, X')_1^n, Υ_1^n)| → 0 in probability, as n → ∞\nThe equivalence between the two families of schemes also indicates that there are DAT schemes that do not correspond to a MixUp scheme. These DAT schemes correspond to Untied MixUp schemes beyond the standard MixUp. The relationship between MixUp, DAT and Untied MixUp is shown in Figure 1.
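For intuition, the map U of Eq. (5) is easy to compute for a Beta adversarial policy, matching the U(B(α, β)) policies used later in Sec. 6.1; a small sketch (our own code, for illustration only):

```python
from scipy.stats import beta

def dat_to_umixup(a, b):
    """Map a DAT policy p = Beta(a, b) to the equivalent Untied MixUp scheme
    (p', g) via Eq. (5): p'(l) = (p(l) + p(1-l))/2, g(l) = p(l)/(p(l) + p(1-l)).
    Returns the density p' and weighting function g as plain callables."""
    p = lambda lam: beta.pdf(lam, a, b)
    p_prime = lambda lam: 0.5 * (p(lam) + p(1.0 - lam))
    g = lambda lam: p(lam) / (p(lam) + p(1.0 - lam))
    return p_prime, g

# Sanity check on interior points: g(l) + g(1-l) == 1 by construction, and a
# symmetric DAT policy (a == b) gives g(l) = 0.5 everywhere.
p_prime, g = dat_to_umixup(2.0, 5.0)
print(g(0.3), g(0.7))
```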
" }, { "heading": "6 UMIXUP AS A USEFUL GENERALIZATION OF MIXUP: EXPERIMENTS", "text": "" }, { "heading": "6.1 EXPERIMENT SETUP AND IMPLEMENTATION", "text": "We consider an image classification task on the Cifar10, Cifar100, MNIST and Fashion-MNIST datasets. The baseline classifier chosen is PreActResNet18 (see Liu (2017)); we note that the same choice is made by the authors of MixUp (Zhang et al., 2017b).\nBoth MixUp and Untied MixUp are considered in the experiments. The MixUp policies are chosen as Beta distributions B(α, β). The Untied MixUp policy is taken as U(B(α, β)).\nTwo target-linear loss functions are evaluated: the cross-entropy (CE) loss and the negative-cosine (NC) loss, as defined earlier. We implement the CE loss similarly to previous works, which use CE loss for the baseline model. In our implementation of the NC loss model, for each label y, ϕ(y) is mapped to a randomly selected unit-length vector of dimension d and fixed during training; the feature map of the original PreActResNet18 is linearly transformed to a d-dimensional vector. The dimension d is chosen as 300 for Cifar10, MNIST and Fashion-MNIST (which have one black-and-white channel) and 700 for Cifar100 (which has 3 colored channels).\nOur implementation of MixUp and Untied MixUp improves upon the published implementation from the original authors of MixUp (Zhang et al., 2017b). For example, the original authors' implementation samples only one λ per mini-batch, giving rise to unnecessarily high stochasticity of the gradient signal; our implementation samples λ independently for each sample. Additionally, the original code combines inputs by mixing a mini-batch of samples with a shuffled version of itself. This approach introduces a dependency between sampled pairs and again increases the stochasticity of training. Our implementation creates two shuffled copies of the entire training dataset prior to each epoch, pairs them up, and then splits them into mini-batches. This gives a closer approximation to i.i.d. sampling and makes training smoother. While these implementation improvements have merit on their own, they do not provide a theoretical leap in understanding, and so we do not quantify their impact in our results analysis.\nAll models examined are trained using mini-batched backpropagation for 200 epochs.
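The pairing strategy described above can be sketched in a few lines (our own illustrative code; names are assumptions):

```python
import numpy as np

def epoch_pairs(num_samples, batch_size, rng):
    """Pair two independently shuffled copies of the training set before each
    epoch and split into mini-batches, approximating i.i.d. pair sampling
    from a symmetric Q (cf. the implementation notes in Sec. 6.1)."""
    first = rng.permutation(num_samples)
    second = rng.permutation(num_samples)
    for start in range(0, num_samples, batch_size):
        yield first[start:start + batch_size], second[start:start + batch_size]

rng = np.random.default_rng(0)
for idx_a, idx_b in epoch_pairs(num_samples=50000, batch_size=128, rng=rng):
    pass  # mix x[idx_a] with x[idx_b] using per-sample lambda drawn from the policy
```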
The 95%-confidence interval (“ConfInt”) for the estimated mean performance is also computed and shown in the table.\nFrom these results, we see that the Untied MixUp schemes each outperform their MixUp counterparts. Specifically, in 6 of the 8 cases (those printed in bold font), the confidence interval of Untied MixUp is completely disjoint from that of the corresponding MixUp scheme; and in some cases, the separation of confidence intervals is by a large margin. Note that the baseline model (PreActResNet18) has been designed with highly focused inductive bias for image classification tasks. Under such an inductive bias, one expects that the room for regularization (or the “amount of overfitting”) isn’t abundant. As such, we consider the improvement of Untied MixUp over MixUp rather significant.\nThe results show empirically that MixUp and Untied MixUp both work on the NC loss models. This validates our generalization of MixUp (and Untied MixUp) to models built with target linear losses." }, { "heading": "7 CONCLUDING REMARKS", "text": "This paper establishes a connection between MixUp and adversarial training. This connection allows for a better understanding of the working mechanism of MixUp as well as a generalization of MixUp to a wider family, namely Untied MixUp. Despite the development in this work, it is the authors’ belief that the current designs of MixUp and Untied MixUp are far from optimal. In particular, we believe a better design should allow individualized policy for each training pair. How this can be done remains open at this time." }, { "heading": "A APPENDIX", "text": "A.1 PROOF OF LEMMA 2:\nFor any fixed infinite sequence (x, x′)∞1 of samples drawn i.i.d. from Q and any infinite sequence of i.i.d. random variables Λ∞1 drawn from P\nDAT, let LDAT ((x, x′)n1 ,Λn1 ) be defined according to (4), with the first n elements of (x, x′)∞1 and the first n elements of Λ∞1 as input. Define\nδDAT := max (x,x′)∈ D×D sup (λ,λ′)∈\n[0,1]×[0,1]\n|`DAT(x, x′, λ)− `DAT(x, x′, λ′)|\nFor any given λn1 ∈ [0, 1]n and any of its modified version un1 ∈ [0, 1]n which differs from λn1 in exactly one location, it can be verified, following the definition of δDAT, that∣∣LDAT ((x, x′)n1 , λn1 )− LDAT ((x, x′)n1 , un1 )∣∣ ≤ δDAT/n Since Λ1,Λ2, . . .ΛK are independent and by McDiarmid Inequality McDiarmid (1989), it follows that for any > 0,\nPr [ LDAT ((x, x′)n1 ,Λn1 )− LDATE ( (x, x′)n1 ) ≥ ] < 2 exp ( − 2 2\nn · (δDAT/n)2 ) which proves the lemma\nA.2 PROOF OF LEMMA 4:\nLuMixE ((x, x′)n1 , γ) := 1\nn n∑ k=1 Eλ∼PMix { γ(λ)`DAT(xk, x′k, λ) + γ(λ)` DAT(x′k, xk, λ) }\n= 1\nn n∑ k=1 ∫ ( γ(λ)PMix(λ)`DAT(xk, x′k, λ) +γ(λ)P Mix(λ)`DAT(x′k, xk, λ) ) dλ\n= 1\nn n∑ k=1 ∫ ( 1 2 PDAT(λ)`DAT(xk, x′k, λ) + 1 2 PDAT(λ)`DAT(x′k, xk, λ) ) dλ\n= 1\n2K ( n∑ k=1 ∫ PDAT(λ)`DAT(xk, x′k, λ)dλ+ n∑ k=1 ∫ PDAT(λ)`DAT(x′k, xk, λ)dλ ) (a) = 1\n2K ( n∑ k=1 ∫ PDAT(λ)`DAT(xk, x′k, λ)dλ+ n∑ k=1 ∫ PDAT(λ)`DAT(x′k, xk, λ)dλ ) (b) = 1\n2K ( n∑ k=1 ∫ PDAT(λ)`DAT(xk, x′k, λ)dλ+ n∑ k=1 ∫ PDAT(λ)`DAT(xk, x′k, λ)dλ )\n= 1\nn n∑ k=1 ∫ PDAT(λ)`DAT(xk, x′k, λ)dλ\n=LDATE ((x, x′)n1 )\nwhere (a) is due to a change of variable in the integration, (b) is due to the symmetry of (x, x′)n1 . Note that in equation 5 g(λ) is undefined at values of λ for which the denominator is zero. 
But the lemma holds true because the denominator is only zero when p(λ) = 0, so those λ for which g(λ) is undefined never get drawn in the DAT scheme.\nA.3 PROOF OF LEMMA 5:\nLuMixE ((x, x′)n1 , γ) = 1\nn Eλ∼PMix n∑ k=1 ( γ(λ)`DAT(xk, x′k, λ) + γ(λ)` DAT(x′k, xk, λ) )\n= 1\nn\n( Eλ∼PMix\nn∑ k=1 γ(λ)`DAT(xk, x′k, λ) + Eλ∼PMix n∑ k=1 γ(λ)`DAT(x′k, xk, λ) ) (a) = 1\nn\n( Eλ∼PMix\nn∑ k=1 γ(λ)`DAT(xk, x′k, λ) + Eλ∼PMix n∑ k=1 γ(λ)`DAT(xk, x′k, λ) ) (b) = 1\nn\n( Eλ∼PMix\nn∑ k=1 γ(λ)`DAT(xk, x′k, λ) + Eλ∼PMix n∑ k=1 γ(λ)`DAT(xk, x′k, λ)\n)\n= 1\nn n∑ k=1 ∫ ( γ(λ)PMix(λ)`DAT(xk, x′k, λ) + γ(λ)P Mix(1− λ)`DAT(xk, x′k, λ) ) dλ\n= 1\nn n∑ k=1 ∫ `DAT(xk, x′k, λ) ( γ(λ)PMix(λ) + γ(λ)PMix(1− λ) ) ︸ ︷︷ ︸\nDu(PMix,γ)\ndλ\n= 1\nn n∑ k=1 Eλ∼PDAT`DAT(xk, x′k, λ)\n= LDATE ((x, x′)n1 ) .\nwhere (a) is due to the symmetry of (x, x′)n1 , and (b) is by a change of variable in the second term (renaming 1− λ as λ)." } ]
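As a concrete illustration, the map U of Eq. (5) can be evaluated numerically. The sketch below assumes policies are represented by their densities on a grid over [0, 1]; the function name, the grid and the Beta shape parameters are illustrative rather than taken from the paper.

import numpy as np
from scipy.stats import beta

def U(p_pdf, lam):
    # Eq. (5): p'(l) = (p(l) + p(1-l)) / 2 and g(l) = p(l) / (p(l) + p(1-l)),
    # evaluated on a grid of lambda values.
    p, p_rev = p_pdf(lam), p_pdf(1.0 - lam)
    denom = p + p_rev
    # g is undefined where the denominator vanishes; such lambdas satisfy
    # p(l) = 0 and are never drawn under the DAT policy, so any fill value
    # (here 0.5) is harmless.
    g = np.divide(p, denom, out=np.full_like(p, 0.5), where=denom > 0)
    return 0.5 * denom, g

# Example with a Beta-shaped DAT policy (shape parameters are illustrative).
lam = np.linspace(0.01, 0.99, 99)
p_prime, g = U(lambda x: beta.pdf(x, 2.0, 5.0), lam)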
2019
null
SP:a527aca3ea9653acb0d0a07eada4483414fd82e3
[ "This paper deals with complete formal verification of Neural Network, based on the Branch and Bound framework. The authors focus on branching strategies, which have been shown to be a critical design decision in order to obtain good performance. The tactic employed here is to learn a Graph Neural Network (which allows to transfer the heuristic from small networks to large networks), using supervised training to imitate strong branching. The authors also discuss fallback mechanism to prevent bad failures case, as well as an online fine-tuning strategy that provide better performance.", "The paper proposes learning a branching heuristic to be used inside a branch-and-bound algorithm used for solving integer programming problems corresponding to neural network verification. The heuristic is parameterized as a neural network and trained to imitate an existing heuristic called Strong Branching which is computationally expensive but produces smaller branch-and-bound trees than other heuristics. A graph neural network architecture is used to take the neural network being verified as input, and a message passing schedule that follows a forward pass and a backward pass along the network being verified is used. An online learning variant is also considered that fine tunes the learned heuristic at test time as a problem instance is being solved. Results for verifying large convolutional neural networks on CIFAR-10 show approximately 2x improvement in average running time of the branch-and-bound algorithm." ]
Formal verification of neural networks is essential for their deployment in safetycritical areas. Many available formal verification methods have been shown to be instances of a unified Branch and Bound (BaB) formulation. We propose a novel framework for designing an effective branching strategy for BaB. Specifically, we learn a graph neural network (GNN) to imitate the strong branching heuristic behaviour. Our framework differs from previous methods for learning to branch in two main aspects. Firstly, our framework directly treats the neural network we want to verify as a graph input for the GNN. Secondly, we develop an intuitive forward and backward embedding update schedule. Empirically, our framework achieves roughly 50% reduction in both the number of branches and the time required for verification on various convolutional networks when compared to the best available hand-designed branching strategy. In addition, we show that our GNN model enjoys both horizontal and vertical transferability. Horizontally, the model trained on easy properties performs well on properties of increased difficulty levels. Vertically, the model trained on small neural networks achieves similar performance on large neural networks. Code for all experiments is available at https://github.com/oval-group/GNN_branching.
[ { "affiliations": [], "name": "Jingyue Lu" }, { "affiliations": [], "name": "M. Pawan Kumar" } ]
[ { "authors": [ "Alejandro Marcos Alvarez", "Quentin Louveaux", "Louis Wehenkel" ], "title": "A machine learning-based approximation of strong branching", "venue": "INFORMS Journal on Computing,", "year": 2017 }, { "authors": [ "Greg Anderson", "Shankara Pailoor", "Isil Dillig", "Swarat. Chaudhuri" ], "title": "Optimization and abstraction: a synergistic approach for analyzing neural network robustness", "venue": "ACM SIGPLAN Conference on Programming Language Design and Implementation,", "year": 2019 }, { "authors": [ "Irwan Bello", "Hieu Pham", "Quoc V Le", "Mohammad Norouzi", "Samy Bengio" ], "title": "Neural combinatorial optimization with reinforcement learning", "venue": "arXiv preprint arXiv:1611.09940,", "year": 2016 }, { "authors": [ "Rudy Bunel", "Ilker Turkaslan", "Philip H.S Torr", "Pushmeet Kohli", "M. Pawan Kumar" ], "title": "A unified view of piecewise linear neural network verification", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Rudy Bunel", "Jingyue Lu", "Ilker Turkaslan", "Philip H.S Torr", "Pushmeet Kohli", "M. Pawan Kumar" ], "title": "Branch and bound for piecewise linear neural network verification", "venue": null, "year": 1909 }, { "authors": [ "Hanjun Dai", "Elias B. Khalil", "Yuyu Zhang", "Bistra Dilkina", "Le Song" ], "title": "Learning combinatorial optimization algorithms over graphs", "venue": "Conference on Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ruediger Ehlers" ], "title": "Formal verification of piece-wise linear feed-forward neural networks. Automated Technology for Verification and Analysis, 2017", "venue": null, "year": 2017 }, { "authors": [ "Maxime Gasse", "Didier Chételat", "Nicola Ferroni", "Laurent Charlin", "Andrea Lodi" ], "title": "Exact combinatorial optimization with graph convolutional neural networks", "venue": null, "year": 1906 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "The International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Christoph Hansknecht", "Imke Joormann", "Sebastian Stiller" ], "title": "Cuts, primal heuristics, and learning to branch for the time-dependent traveling salesman problem", "venue": "arXiv preprint arXiv:1805.01415,", "year": 2018 }, { "authors": [ "Guy Katz", "Clark Barrett", "David Dill", "Kyle Julian", "Mykel Kochenderfer" ], "title": "Reluplex: An efficient smt solver for verifying deep neural networks", "venue": "International Conference on Computer Aided Verification,", "year": 2017 }, { "authors": [ "Guy Katz", "Derek A. Huang", "Duligur Ibeling", "Kyle Julian", "Christopher Lazarus", "Rachel Lim", "Parth et al. Shah" ], "title": "The marabou framework for verification and analysis of deep neural networks", "venue": "International Conference on Computer Aided Verification,", "year": 2019 }, { "authors": [ "Elias Boutros Khalil", "Pierre Le Bodic", "Le Song", "George Nemhauser", "Bistra Dilkina" ], "title": "Learning to branch in mixed integer programming", "venue": "Thirtieth AAAI Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Vicenc Rubies Royo", "Roberto Calandra", "Dusan M Stipanovic", "Claire Tomlin" ], "title": "Fast neural network verification via shadow prices", "venue": null, "year": 1902 }, { "authors": [ "Gagandeep Singh", "Timon Gehr", "Markus Pschel", "Martin. 
Vechev" ], "title": "Boosting robustness certification of neural networks", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Vincent Tjeng", "Kai Xiao", "Russ Tedrake" ], "title": "Evaluating robustness of neural networks with mixed integer programming", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Shiqi Wang", "Kexin Pei", "Justin Whitehouse", "Junfeng Yang", "Suman Jana" ], "title": "Efficient formal safety analysis of neural networks", "venue": "Conference on Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Bunel" ], "title": "2019), methods including MIPplanet, BaBSR, planet (Ehlers, 2017), reluBaB and reluplex (Katz et al., 2017) are compared on a small convolutional MNIST network. Among them, BaBSR and MIPplanet significantly outperform other methods", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Despite their outstanding performances on various tasks, neural networks are found to be vulnerable to adversarial examples (Goodfellow et al., 2015; Szegedy et al., 2013). The brittleness of neural networks can have costly consequences in areas such as autonomous driving, finance and healthcare. When one requires robustness to adversarial examples, traditional model evaluation approaches, which test the trained model on a hold-out set, do not suffice. Instead, formal verification of properties such as adversarial robustness becomes necessary. For instance, to ensure self-driving cars make consistent correct decisions even when the input image is slightly perturbed, the required property to verify is that the underlying neural network outputs the same correct prediction for all points within a norm ball whose radius is determined by the maximum perturbation allowed.\nSeveral methods have been proposed for verifying properties on neural networks (NN). Bunel et al. (2018) showed that many of the available methods can be viewed as instances of a unified BaB framework. A BaB algorithm consists of two key components: branching strategies and bounding methods. Branching strategies decide how the search space is recursively split into smaller spaces. Bounding methods compute bounds of each subspace to tighten the bounds of the final objective function over the whole search space. In this work, we focus on improving the branching strategies. By directly working with a general framework, our identified algorithmic improvements can be combined with any bounding method, leading to potential performance improvement for BaB based verification algorithms.\nBranching strategies have significant impacts on the overall problem-solving process, as it directly decides the total number of steps, and consequently the total time, required to solve the problem at hand. The quality of a branching strategy is even more important when NN verification problems are considered, which generally have a very large search space. Each input dimension or each activation unit can be a potential branching option and neural networks of interest often have high dimensional inputs and thousands of hidden activation units. With such a large search space, an effective branching strategy could mean a large reduction of the total number of branches required, and consequently of the time required to solve a problem. Developing an effective strategy is thus of significant importance to the success of BaB based NN verification.\nSo far, to the best of our knowledge, branching rules adopted by BaB based verification methods are either random selection (Katz et al., 2017; Ehlers, 2017) or hand-designed heuristics (Wang et al.,\n2018b; Bunel et al., 2018; Royo et al., 2019; Bunel et al., 2019). Random selection is generally inefficient as the distribution of the best branching decision is rarely uniform. In practice, this strategy often results in exhaustive search to make a verification decision. On the other hand, hand designed heuristics often involve a trade-off between effectiveness and computational cost. For instance, strong branching is generally one of the best performing heuristics for BaB methods in terms of the number of branches, but it is computationally prohibitive as each branching decision requires an expensive exhaustive search over all possible options. 
The heuristics that are currently used in practice are either inspired by the corresponding dual problem when verification is formulated as an optimization problem (Bunel et al., 2018; Royo et al., 2019) or incorporating the gradient information of the neural network (Wang et al., 2018b). These heuristics normally have better computational efficiency. However, given the complex nature of the search space, it is unlikely that any hand-designed heuristic is able to fully exploit the structure of the problem and the data distribution encountered in practice. As mentioned earlier, for large size NN verification problems, a slight reduction in the quality of the branching strategy could lead to substantial increase in the total number of branches required to solve the problem. A computationally cheap but high quality branching strategy is thus much needed.\nIn order to exploit the inherent structure of the problem and the data, we propose a novel machine learning framework for designing a branching strategy. Our framework is both computationally efficient and effective, giving branching decisions that are of a similar quality to that of strong branching. Specifically, we make the following contributions:\n• We use a graph neural network (GNN) to exploit the structure of the neural network we want to verify. The embedding vectors of the GNN are updated by a novel schedule, which is both computationally cheap and memory efficient. In detail, we mimic the forward and backward passes of the neural network to update the embedding vectors. In addition, the proposed GNN allows a customised schedule to update embedding vectors via shared parameters. That means, once training is done, the trained GNN model is applicable to various verification properties on different neural network structures.\n• We train GNNs via supervised learning. We provide ways to generate training data cheaply but inclusive enough to represent branching problems at different stages of a BaB process for various verification properties. With the ability to exploit the neural network structure and a comprehensive training data set, our GNN is easy to train and converges fast.\n• Our learned GNN also enjoys transferability both horizontally and vertically. Horizontally, although trained with easy properties, the learned GNN gives similar performance on medium and difficult level properties. More importantly, vertically, given that all other parts of BaB algorithms remain the same, the GNN trained on small networks performs well on large networks. Since the network size determines the total cost for generating training data and is positively correlated with the difficulty of learning, this vertical transferability allows our framework to be readily applicable to large scale problems.\n• We further enhance our framework via online learning. For a learned branching strategy, it is expected that the strategy can fail to output satisfactory branching decisions from time to time. To deal with this issue, we provide an online scheme for fine-tuning the GNN along the BaB process in order to best accommodate the verification property at hand.\n• Finally, we supply a dataset on convolutional NN verification problems, covering problems at different difficulty levels over neural networks of different sizes. 
We hope that by providing a large problem dataset it could allow easy comparisons among existing methods and additionally encourage the development of better methods.\nSince most verification methods available work on ReLU-based deep neural networks, we focus on neural networks with ReLU activation units in this paper. However, we point out that our framework is applicable to any neural network architecture." }, { "heading": "2 RELATED WORKS", "text": "Learning has already been used in solving combinatorial optimization problems (Bello et al., 2016; Dai et al., 2017) and mixed integer linear programs (MILP) (Khalil et al., 2016; Alvarez et al., 2017; Hansknecht et al., 2018; Gasse et al., 2019). In these areas, instances of the same underlying structure are solved multiple times with different data values, which opens the door for learning. Among them, Khalil et al. (2016), Alvarez et al. (2017), Hansknecht et al. (2018), and Gasse et al. (2019) proposed learned branching strategies for solving MILP with BaB algorithms. These meth-\nods imitate the strong branching strategy. Specifically, Khalil et al. (2016) and Hansknecht et al. (2018) learn a ranking function to rank potential branching decisions while Alvarez et al. (2017) uses regression to assign a branching score to each potential branching choice. Apart from imitation, Anderson et al. (2019) proposed utilizing Bayesian optimization to learn verification policies. There are two main issues with these methods. Firstly, they rely heavily on hand-designed features or priors and secondly, they use a generic learning structure which is unable to exploit the neural network architecture.\nThe approach most relevant to ours is the concurrent work by Gasse et al. (2019). They managed to reduce feature reliance by exploiting the bipartite structure of an MILP through a GNN. The bipartite graph is capable of capturing the network architecture, but cannot exploit it effectively. Specifically, it treats all the constraints the same and updates them simultaneously using the same set of parameters. This limited expressiveness can result in a difficulty in learning and hence in a high generalization error for NN verification problems. Our proposed framework is specifically designed for NN verification problems. By exploiting the neural network structure, and designing a customized schedule for embedding updates, our framework is able to scale elegantly both in terms of computation and memory. Finally, we mention that the recently proposed verification methods (Katz et al., 2019; Singh et al., 2018; Anderson et al., 2019) are not explicitly formulated as BaBs. Since our focus is on branching, we mainly use the methods in Bunel et al. (2019) for comparison." }, { "heading": "3 BACKGROUND", "text": "Formal verification of neural networks refers to the problem of proving or disproving a property over a bounded input domain. Properties are functions of neural network outputs. When a property can be expressed as a Boolean expression over linear forms, we can modify the neural network in a suitable way so that the property can be simplified to checking the sign of the neural network output (Bunel et al., 2018). Note that all the properties studied in previous works satisfy this form, thereby allowing us to use the aforementioned simplification. Mathematically, given the modified neural network f , a bounded input domain C, formal verification examines the truthfulness of the following statement: ∀x ∈ C, f(x) ≥ 0. 
(1) If the above statement is true, the property holds. Otherwise, the property does not hold." }, { "heading": "3.1 BRANCH AND BOUND", "text": "Verification tasks are often treated as a global optimization problem. We want to find the minimum of f(x) over C in order to compare it with the threshold 0. Specifically, we consider an L layer feedforward neural network, f : R|x| → R, with ReLU activation units such that for any x0 ∈ C ⊂ R|x|, f(x0) = x̂L ∈ R where\nx̂i+1 =W i+1xi + b i+1, for i = 0, . . . , L− 1, (2a) xi = max(x̂i, 0), for i = 1, . . . , L− 1. (2b)\nThe termsW i and bi refer to the weights and biases of the i-th layer of the neural network f. Domain C can be an `p norm ball with radius . Finding the minimum of f is a challenging task, as the optimization problem is generally NP hard (Katz et al., 2017). To deal with the inherent difficulty of the optimization problem itself, BaB (Bunel et al., 2018) is generally adopted. In detail, BaB based methods divide C into sub-domains, each of which defines a new sub-problem (branching). They then compute a relaxed lower bound of the minimum on each sub-problem (bounding). The minimum of the lower bounds of all the generated sub-domains constitutes a valid global lower bound of the global minimum over C. As a recursive process, BaB keeps partitioning the subdomains to tighten the global lower bound. The process terminates when the computed global lower bound is above zero (property is true) or when an input with a negative output is found (property is false). A detailed description of the BaB is provided in the appendices. In what follows, we provide a brief description of the two components, bounding methods and branching strategies, that is necessary for the understanding of our novel learning framework." }, { "heading": "3.2 BOUNDING", "text": "For NN verification problems, bounding consists of finding upper and lower bounds for the final output, the minimum of f(x) over C. An effective technique to compute a lower bound is to transform the original optimization problem into a linear program (LP) by introducing convex relaxations over ReLU activation units. As we can see in Eq. (2b), ReLU activation units do not define a convex feasible set, and hence, relaxations are needed. Denote the j-th element of the vector xi as xi[j].\nPossible convex relaxations for a hidden node xi[j] that have been introduced so far are shown in Figure 1. We replace ReLU with the shaded green area. The tighter the convex relaxation introduced, the more computational expensive it is to compute a bound but the tighter the bound is going to be. From Figure 1, we note that in order to introduce a convex relaxation, we need intermediate bounds li[j] and ui[j]. Thus intermediate bounds are required for building the LP for the final output lower bound. Given their purpose and the large number of intermediate bound computations, rough estimations are mainly used. On the other hand, the final output lower bound is directly used in making the pruning decision and hence a tighter lower bound is preferred as it avoids further unnecessary splits on the sub-problem." }, { "heading": "3.3 BRANCHING", "text": "Branching is of equal importance as bounding in the BaB framework. Especially for large scale networks f , each branching step has a large number of putative choices. In these cases, the effectiveness of a branching strategy directly determines the possibility of verifying properties over these networks within a given time limit. 
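For reference, the BaB procedure of Section 3.1 (Algorithm 1 in the appendices) can be sketched in a few lines of Python, specialised to the satisfiability form of Eq. (1). Here compute_LB, compute_UB and split are the plug-in components named in Algorithm 1, the smallest-lower-bound pick_out rule is one common choice rather than a prescription, and the branching strategy enters through split.

def bab(net, problem, eps=1e-4):
    # Satisfiability specialisation: the global upper bound starts at 0,
    # sub-problems whose lower bound is >= 0 are pruned, and any negative
    # upper bound certifies a counter-example.
    global_ub = 0.0
    probs = [(compute_LB(net, problem), problem)]
    while probs:
        probs.sort(key=lambda t: t[0])
        global_lb, prob = probs.pop(0)           # pick_out
        if global_ub - global_lb <= eps:
            return True                          # property holds (up to eps)
        for subprob in split(prob):              # the branching strategy
            if compute_UB(net, subprob) < global_ub:
                return False                     # counter-example found
            sub_lb = compute_LB(net, subprob)
            if sub_lb < global_ub:
                probs.append((sub_lb, subprob))  # otherwise the sub-problem is pruned
    return True                                  # every sub-problem was pruned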
On neural networks, two types of branching decisions are used: input domain split and hidden activation unit split.\nAssume we want to split a parent domain D. Input domain split selects an input dimension and then makes a cut on the selected dimension while the rest of the dimensions remain the same. The common choice is to cut the selected dimension in half and the dimension to cut is decided by the branching strategy used. Available input domain split strategies are Bunel et al. (2018) and Royo et al. (2019). Royo et al. (2019)’s is based on sensitivity test of the LP onD while Bunel et al. (2018) use the formula provided in Wong & Kolter (2018) to estimate final output bounds for sub-domains after splitting on each input dimension and selects the dimension that results in the highest output lower bound estimates.\nIn our setting, we refer to a ReLU activation unit xi[j] = max(x̂i[j], 0) as ambiguous over D if the upper bound ui[j] and the lower bound li[j] for x̂i[j] have different signs. Activation unit split chooses among ambiguous activation units and then divides the original problem into cases of different activation phase of the chosen activation unit. If a branching decision is made on xi[j], we divide the ambiguous case into two determinable cases: {xi[j] = 0, li[j] ≤ x̂i[j] ≤ 0} and {xi[j] = x̂i[j], 0 ≤ x̂i[j] ≤ ui[j]}. After the split, the originally introduced convex relaxation is removed, since the above sets are themselves convex. We expect large improvements on the output lower bounds of the newly generated sub-problems if a good branching decision is made. Apart from random selection, employed in Ehlers (2017) and Katz et al. (2017), available ReLU split heuristics are Wang et al. (2018a) and Bunel et al. (2019). Wang et al. (2018a) compute scores based on gradient information to prioritise ambiguous ReLU nodes. Similarly, Bunel et al. (2019) use scores to rank ReLU nodes but scores are computed with a formula developed on the estimation equations in Wong & Kolter (2018). We note that for both branching strategies, after the split, intermediate bounds are updated accordingly on each new sub-problem. For NN verification problems, either domain split or ReLU split can be used at each branching step. When compared with each other, ReLU split is a more effective choice for large scale networks, as shown in Bunel et al. (2019).\nAll the aforementioned existing branching strategies use hand-designed heuristics. In contrast, we propose a new framework for branching strategies by utilizing a GNN to learn to imitate strong branching heuristics. This allows us to harness the effectiveness of strong branching strategies while retaining the efficiency of GPU computing power." }, { "heading": "4 GNN FRAMEWORK", "text": "Overview We begin with a brief overview of our overall framework, followed by a detailed description of each of its components. A graph neural network G is represented by two components: a set of nodes V and a set of edges E, such that G = (V,E). Each node and each edge has its set of features. A GNN uses the graph structure and node and edge features to learn a representation vector (embedding vector) for each node v ∈ V . The GNN is a key component of our framework, in which we treat the neural network f as a graph Gf . A GNN takes Gf as an input and initializes an embedding vector for each node in V . The GNN updates each node’s embedding vector by aggregating its own node features and all its neighbours’ embedding vectors. 
After several rounds of updates, we obtain a learned representation (an updated embedding vector) of each node. To make a branching decision, we treat the updated embedding vectors as inputs to a score function, which assigns a score for each node that constitutes a potential branching option. A branching decision is made based on the scores of potential branching decision nodes. Our framework is visualised in Figure 2. We now describe each component in detail." }, { "heading": "Neural Network Graph Neural Network", "text": "Nodes Given a neural network f , V consists of all input nodes v0[j], all hidden activation nodes vi[j] and an output node vL. In our framework, we combine every pre-activation variable and its associated post-activation variable and treat them as a single node. Pre- and post-activation nodes together contain the information about the amount of convex relaxation introduced at this particular activation unit, so dealing with the combined node simplifies the learning process. In terms of the Eq. (2), let x′i[j] denote the combined node of x̂i[j] and xi[j]. The nodes v0[j], vi[j] and vL are thus in one-to-one correspondence with x0[j], x′i[j] and xL. We note that V is larger than the set of all potential branching decisions as it includes unambiguous activation nodes and output nodes.\nNode features Different types of nodes have different sets of features. In particular, input node features contain the corresponding domain lower and upper bounds and the primal solution. For activation nodes, the node features consist of associated intermediate lower and upper bounds, the layer bias, primal and dual solutions and new terms computed using previous features. Finally, the output node has features including the associated output lower and upper bounds, the layer bias and the primal solution. Other types of features could be used and some features could be excluded if they are expensive to compute. We denote input node features as z0[j], activation node features as zi[j] and output node features as zL. Our framework uses simple node features and does not rely on extensive feature engineering. Nonetheless, by relying on the powerful GNN framework, it provides highly accurate branching decisions.\nEdges E consists of all edges connecting nodes in V , which are exactly the connecting edges in f . Edges are characterized by the weight matrices that define the parameters of the network f such that for an edge eijk connecting x ′ i[j] and x ′ i+1[k], we assign e i jk =W i jk.\nEmbeddings We associate a p-dimensional embedding vector µv for each node v ∈ V . All embedding vectors are initialised as zero vectors.\nForward and Backward embedding updates In general, a graph neural network learns signals from a graph by acting as a function of two inputs: a feature matrix X ∈ R|V |×p, where each row is the embedding vector µv for a node v ∈ V , and an adjacency matrix A representing the graph structure. Under this formulation, all node embedding vectors are updated at the same time and there is no particular order between nodes. In this work, instead, we propose an update scheme where only the nodes corresponding to the same layer of the network f are updated at the same time, so embedding vector updates are carried out in a layer-by-layer forward-backward way.\nWe argue that the forward-backward updating scheme is a natural fit for our problem. 
In more detail, for a given problem D, each branching decision (an input node or an ambiguous activation node) will generate two sub-problems s1 and s2, with each sub-domain having an output lower bound lLs1 and l L s2 respectively, equal to or higher than l L D the lower bound that of D. Strong branching heuristic uses a predetermined function to measure the combined improvement of lLs1 and l L s2 over lLD and makes the final branching decision by selecting the node that gives the largest improvement. Thus, to maximise the performance of a graph neural network, we want a node embedding vector to maximally capture all information related to the computation of lLs1 and l L s2 . For estimating l L s1 , l L s2 of splitting on a potential branching decision node v, we note that these values are closely related to two factors. The first factor is the amount of convex relaxations introduced at a branching decision node v, when v corresponds to an ambiguous activation node. The second factor considers that the impact that splitting node v will have on the convex relaxations introduced to nodes on layers after that of v. Recall that, if there are no ambiguous activation nodes, the neural network f is simply a linear operator, whose minimum value can be easily obtained. When ambiguous activation nodes are present, the total amount of relaxation introduced determines the tightness of the lower bound to f . We thus treat embedding vectors as a measure of local convex relaxation and its contribution to other nodes’ convex relaxation.\nAs shown in Figure 1, at each ambiguous activation node x′i[j], the area of convex relaxation introduced is determined by the lower and upper bounds of the pre-activate node x̂i[j]. We observe that intermediate lower and upper bounds of a node x̂i[j] are significantly affected by the layers prior to it and have to be computed in a layer-by-layer fashion. Based on the observation, we utilise a forward layer-by-layer update on node embedding vectors. This should allow these embedding vectors to capture the local relaxation information. In terms of the impact of local relaxation change to that of other nodes, we note that by splitting an ambiguous node into two fixed cases, all intermediate bounds of nodes on later layers will be affected, leading to relaxation changes at those nodes. We thus employ a backward layer-by-layer update to account for the impact the local change has over other nodes. Theoretically, by fixing an ambiguous ReLU node, intermediate bounds of nodes at previous layers and on the same layer might change as well. For a naturally trained neural network, the changes for these nodes should be relatively small compared to nodes on the later layers. To account for these changes, we rely on multiple rounds of forward-and-backward updates.\nIn summary, during the forward update, for i = 1, . . . , L− 1, we have, for all possible j,\nµ0[j] ←− Finp(z0[j];θ0), if µ0[j] = 0, (3)\nµi[j] ←− Fact(zi[j],µi−1, ei;θ1), (4) µL ←− Fout(zL,µL−1, eL;θ2). (5)\nDuring the backward update, for i = L− 1, . . . , 1, we have\nµi[j] ←− Bact(zi[j],µi+1, ei+1;θ3), (6) µ0[j] ←− Binp(z0[j],µ1, e1;θ4). (7)\nUpdate functions F andB take the form of multi-layered fully-connected networks with ReLU activation functions or composites of these simple update networks. The terms θi denote the parameters of the networks. 
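A minimal sketch of this forward-backward schedule is given below, assuming fully connected layers and dense per-layer feature tensors; the class name, the shared feature dimension per node type and the two-round default are assumptions, and convolutional layers would additionally require the averaged backward messages described in Appendix B.

import torch
import torch.nn as nn

class ForwardBackwardGNN(nn.Module):
    # Layer-by-layer embedding updates following Eqs. (3)-(7); F_inp, F_act,
    # F_out, B_act and B_inp are small fully connected networks.
    def __init__(self, p, inp_d, act_d, out_d):
        super().__init__()
        self.F_inp = nn.Sequential(nn.Linear(inp_d, p), nn.ReLU(), nn.Linear(p, p))
        self.F_act = nn.Sequential(nn.Linear(act_d + p, p), nn.ReLU(), nn.Linear(p, p))
        self.F_out = nn.Sequential(nn.Linear(out_d + p, p), nn.ReLU(), nn.Linear(p, p))
        self.B_act = nn.Sequential(nn.Linear(act_d + p, p), nn.ReLU(), nn.Linear(p, p))
        self.B_inp = nn.Sequential(nn.Linear(inp_d + p, p), nn.ReLU(), nn.Linear(p, p))

    def forward(self, z, W, rounds=2):
        # z[i]: features of the nodes of layer i, shape (n_i, d_i), i = 0..L;
        # W[i]: weight connecting layer i to layer i + 1, shape (n_i, n_{i+1}).
        L = len(W)
        mu = [self.F_inp(z[0])] + [None] * L               # Eq. (3), first round only
        for _ in range(rounds):
            for i in range(1, L):                          # forward pass, Eq. (4)
                mu[i] = self.F_act(torch.cat([z[i], W[i - 1].t() @ mu[i - 1]], dim=1))
            mu[L] = self.F_out(torch.cat([z[L], W[L - 1].t() @ mu[L - 1]], dim=1))  # Eq. (5)
            for i in range(L - 1, 0, -1):                  # backward pass, Eq. (6)
                mu[i] = self.B_act(torch.cat([z[i], W[i] @ mu[i + 1]], dim=1))
            mu[0] = self.B_inp(torch.cat([z[0], W[0] @ mu[1]], dim=1))              # Eq. (7)
        return mu

Because each pass touches one layer at a time and keeps only the per-layer embedding matrices, the memory footprint of this schedule grows with the widest layer rather than with the whole network.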
A detailed description of update functions is provided in the appendices.\nWe point out that our forward-backward update scheme does not depend on the underlying neural network structure and thus should be generalizable to network architectures that differ from the one we use for training. However, it does rely on the information used to compute convex relaxations, so underlying data distribution, features and bounding methods are assumed to be the same when the trained model is applied to different networks. Furthermore, our forward-backward update is memory efficient, as we are dealing with one layer at a time and only the updated embedding vectors of the layer are used to update the embedding vectors in the next (forward-pass) and the previous (backward-pass) layer. This makes it readily applicable to large networks.\nScores At the end of the forward-backward updates, embedding vectors for potential branching decision nodes (all input nodes and ambiguous activation nodes) are gathered and treated as inputs of a score function gs(·;θ5) : Rp → R, which takes the form of a fully-connected network with parameters θ5. It assigns a scalar score for each input embedding vector. The final branching decision is determined by picking the node with the largest score." }, { "heading": "5 PARAMETER ESTIMATION", "text": "Training We train a GNN via supervised learning. To estimate Θ := (θ0,θ1,θ2,θ3,θ4,θ5), we propose a new hinge rank loss function that is specifically designed for our framework. Before we give details of the loss, we introduce a relative improvement measure m first. Given a domain D, for each branching decision node v, the two generated sub-problems have output lower bounds lLs1 and lLs2 . We measure the relative improvement of splitting at the node v over the output lower bound lLD as follows\nmv := (min(l L s1 , 0) + min(l L s2 , 0)− 2 · l L D)/(−2 · lLD). (8)\nIntuitively, m (0 ≤ m ≤ 1) measures the average relative sub-problem lower bound improvement to the maximum improvement possible, that is −lLD. Any potential branching decision node v can be compared and ranked via its relative improvement value mv . Since we are only interested in branching nodes with large improvement measures, ranking loss is a natural choice. A direct pairwise rank loss might be difficult to learn for NN verification problems, given the large number of branching decision nodes on each domain D. In addition, many branching decisions may give similar performance, so it is redundant and potentially harmful to the learning process if we learn a ranking among these similar nodes. To deal with these issues, we develop our loss by first dividing all potential branching nodes into M classes (M is much smaller than the total number of branching decision nodes) through the improvement value mv of a node. We denote the class label as Yv for a node v. Labels are assigned in an ascending order such that Yv >= Yv′ if mv > mv′ . We then compute the pairwise hinge-rank loss on these newly assigned labels as\nlossD(Θ) = 1\nK N∑ i=1 ( N∑ j=1 φ(gs(µj ;Θ)− gs(µi;Θ)) · 1Yj>Yi ) , (9)\nwhere φ(z) = (1 − z)+ is the hinge function, N is the total number of branching decision nodes and K is the total number of pairs where Yj > Yi for any branching decision nodes vi, vj . The loss measures the average hinge loss on score difference (gs(µj ;Θ) − gs(µi;Θ)) for all pairs of branching decision nodes vi, vj such that Yj > Yi. 
Finally, we evaluate Θ by solving the following optimization problem:\nΘ = argmin Θ\nλ 2 ‖Θ‖2 + 1 n n∑ i lossDi(Θ), (10)\nwhere the lossDi is the one introduced in Eq. (9) and n is the number of training samples.\nFail-safe Strategy We introduce a fail-safe strategy employed by our framework to ensure that consistent high-quality branching decisions are made throughout a BaB process. The proposed framework uses a GNN to imitate the behavior of the strong branching heuristic. Although computationally cheap, in some cases, the output decision by the learned graph neural network might be suboptimal. When this happens, it could lead to considerably deteriorated performance for two reasons. Firstly, we observed that for certain problems, which requires multiple splits to reach a conclusion on this problem, if a few low-quality branching decisions are made at the beginning or the middle stage of the branching process, the total number of splits required might increase substantially. The total BaB path is thus, to some extent, sensitive to the quality of each branching decision apart from those made near the end of the BaB process. Secondly, once a low-quality decision is made on a given problem, a decision of similar quality is likely to be made on the two newly generated sub-problems, leading to exponential decrease in performance. Features for newly generated sub-problems are normally similar to those of the parent problem, especially in the cases where the branching decision of the parent problem is made on the later layers and loose intermediate bounds are used. Thus, it is reasonable to expect the GNN fails again on the resulting sub-problems.\nTo deal with this issue, we keep track of the output lower bound improvement for each branching decision, as introduced in Eq. (8). We then set a pre-determined threshold parameter. If the improvement is below the threshold, a computationally cheap heuristic is called to make a branching decision. Generally, the back-up heuristic is able to give an above-threshold improvement and generate sub-problems sufficiently different from the parent problem to allow the learned GNN to recover from the next step onwards.\nOnline Learning Online learning is a strategy to fine-tune the network for a particular property after we have learnt Θ. It can be seen as an extension of the fail-safe strategy employed. Every time a heuristic branching decision node vh is used instead of the node vgnn chosen by the GNN, we can use vh and vgnn to update the GNN accordingly. Since a correct GNN model should output an embedding vector µh resulting in a higher score gs(µh;Θ) for the heuristic decision, a loss can be developed based on the two scores gs(µh;Θ) and gs(µgnn;Θ) to generate optimization signals for correcting the GNN behaviour. For example, the loss used in our experimental setting is:\nlossonline(Θ) = gs(µgnn;Θ)− gs(µh;Θ) + γ · ((mh −mgnn) > t). (11)\nThe last term is used to amplify (γ > 0) the loss if the relative improvement made by the heuristic decision is more than t percent higher than that by the GNN. We update Θ of the GNN by taking one gradient step with a small learning rate of the following minimization problem.\nΘ = argmin Θ\nλ 2 ‖Θ‖2 + lossonline(Θ). (12)\nOnline learning is property specific: it uses the decisions made by heuristics to fine tune the GNN model so it can best accommodate the property at hand. As will be shown in our experiments, a small but significant improvement in performance is achieved when online learning is used." 
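The per-domain quantities of Eqs. (8) and (9) are short enough to sketch directly. The PyTorch fragment below assumes the scores gs(µv) and the bucketed labels Yv of one domain are given as 1-D tensors; the bucketing of mv into M classes is omitted.

import torch

def improvement(l_s1, l_s2, l_D):
    # Relative lower bound improvement m_v of Eq. (8); l_D < 0 on any
    # unpruned domain, so the normaliser -2 * l_D is positive.
    return (min(l_s1, 0.0) + min(l_s2, 0.0) - 2.0 * l_D) / (-2.0 * l_D)

def hinge_rank_loss(scores, labels):
    # Pairwise hinge rank loss of Eq. (9): phi(z) = max(0, 1 - z) applied to
    # the score difference of every pair with Y_j > Y_i.
    diff = scores.unsqueeze(0) - scores.unsqueeze(1)        # diff[i, j] = s_j - s_i
    mask = (labels.unsqueeze(0) > labels.unsqueeze(1)).float()
    K = mask.sum().clamp(min=1.0)                           # number of ranked pairs
    return (torch.relu(1.0 - diff) * mask).sum() / K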
}, { "heading": "6 EXPERIMENTS", "text": "We now validate the effectiveness of our proposed framework through comparative experiments against other available NN verification methods. A comprehensive study of NN verification methods has been done in Bunel et al. (2019). We thus design our experiments based on the results presented in Bunel et al. (2019)." }, { "heading": "6.1 SETUP", "text": "We are interested in verifying properties on large network architectures with convolutional layers. In Bunel et al. (2019), existing NN methods are compared on a robustly trained convolutional network on MNIST. We adopt a similar network structure but use a more challenging dataset, namely CIFAR-10, for an increased difficulty level. We compare against the following two methods: (i) MIPplanet, a mixed integer solver backed by the commercial solver Gurobi; and (ii) BaBSR, a BaB based method utilising a ReLU-split heuristic. Our choice is motivated by their superior performance over other methods for MNIST verification problems in the previous work (Bunel et al., 2019).\nWe provide the detailed experimental setup from four perspectives: bounding methods, branching strategies, network structures, and verification properties tested. (Bounding methods) We compute intermediate bounds using linear bound relaxations (Figure 1(b)). For the output lower bound, we use the Planet relaxation (Figure 1(c)) and solve the corresponding LP with Gurobi. For the output upper bound, we compute it by directly evaluating the network value at the input provided by the LP solution. (Branching strategy) We focus on ReLU split only in our experiments. As shown in Bunel et al. (2019), domain split outperforms ReLU split only on low input dimensional and small scale networks. Also, since one of the baseline methods, BaBSR, employs a ReLU-split heuristic, we consider ReLU split only for a fair comparison. However, we emphasize that our framework is readily applicable to a combined domain and ReLU split strategy. (Network structures) Three neural network structures will be studied. The base one is of a similar structure and size to the one used in Bunel et al. (2019). It has two convolutional layers, followed by two fully connected layers, and is trained robustly using the method provided in Wong & Kolter (2018). This particular choice of network size is made because the time required for solving each LP increases substantially with the size of the network. To best evaluate the performance of the branching strategy, we have to work with a medium sized network so that within the given timeout, a sufficient number of branching decisions can be made to allow effective comparisons. When testing the transferability of the framework, two larger networks will be tested, but their sizes are still restricted by the LP bottleneck. A detailed description of the network architecture is provided in the appendices. (Verification properties) Finally, we consider the following verification properties. Given an image x for which the model correctly predicted the label yc, we randomly choose a label yc′ such that for a given ε, we want to prove (e(c) − e(c′))T f′(x′) > 0, ∀x′ s.t. ‖x − x′‖∞ ≤ ε. Here, f′ is the original neural network, and e(c) and e(c′) are one-hot encoding vectors for the labels yc and yc′. We want to verify that for a given ε, the trained network will not make a mistake by labelling the image as yc′.\nSince BaBSR is claimed to be the best performing method on convolutional networks, we use it to determine the ε
Since BaBSR is claimed to be the best performing method on convolutional networks, we use it to determine the\nvalues, which govern the difficulty level of verification properties. Small values mean that most ReLU activation units are fixed so their associated verification properties are easy to prove while large values could lead to easy detection of counter-examples. The most challenging values are those at which a large number of activation units are ambiguous. We use binary search with BaBSR method to find suitable values. We only consider values that result in true properties and timed out properties. Binary search process is simplified by our choice of robustly trained models. Since these models are trained to be robust over a δ ball, the predetermined value δ can be used as a starting value for binary search." }, { "heading": "6.2 TRAINING DATASET", "text": "In order to generate training data, we firstly pick 565 random images and for each image, we randomly select an incorrect class. For each property, the value is determined by running binary search with BaBSR and 800 seconds timeout, so the final set of properties consists of mainly easily solvable properties and a limited number of timed out properties.\nWe collect training data along a BaB process for solving a verification property. At each given domain, given the large number of potential branching decisions, we perform the strong branching heuristic on a selected subset of all potential branching decisions. The subset consists of branching decisions that are estimated to be of high quality by the BaBSR heuristic and randomly selected ones, which ensure a minimum 5% coverage on each layer.\nTo construct a training dataset that is representative enough of the whole problem space, we need to cover a large number of properties. In addition, within a BaB framework, it is important to include training data at different stages of a BaB process. However, running a complete BaB process with the strong branching heuristic for hundreds of properties is computationally expensive and considerably time consuming. We thus propose the following procedure for generating a training dataset to guarantee a wide coverage both in terms of the verification properties and BaB stages. For generated verification properties, we randomly select 25% of non-timeout property to conduct a complete BaB process with the strong branching heuristic. For the rest of the properties, we try to generate at least B = 20 training data for each verification property. Given the maximum number of branches q = 10 and an effective and computationally cheap heuristic, we first generate a random integer k from [0, q]. Then, we run a BaB process with the selected cheap heuristic for k steps. Finally, we call the strong branching heuristic to generate a training sample. We repeat the process until B training samples are generated or the BaB process terminated. A detailed algorithm is provided in the appendices." }, { "heading": "6.3 BASE MODEL", "text": "We test our learned model on the same model structure but on properties of three different difficulty levels. Testing verification properties are generated by binary search with BaBSR and 3600s timeout. We categorise verification properties solved within 800s as easy, which is consistent with training data generated, between 800s and 2400s as medium and more than 2400s as hard. In total, we generated 467 easy properties, 773 medium properties and 426 hard properties.\nResults are given in the Table 1. 
Methods are compared from three perspectives: the average time over all properties, the average number of branches required over the properties that are solved by all methods (we exclude timed out properties), and the ratio of timed out properties. Since the properties are generated based on BaBSR, the timed out ratios of BaBSR on easy and medium properties are not comparable with those of other methods. All other numbers should give a fair evaluation of the effectiveness of our branching strategy. BaBSR, GNN and GNN-online only differ in the branching strategy used.\nOn all three sets of properties, we see that our learned branching strategy has led to a more than 50% reduction in the total average number of branches required for a property. As a direct result, the average time required achieves at least a 50% reduction as well. Our framework is thus an effective scheme and enjoys horizontal transferability. A further performance improvement is obtained through instance-specific online learning. Among all 1666 tested verification properties, GNN with online learning solves 61.52% of properties with a smaller number of branches and 60.20% of properties in less time when compared to the standard GNN.\nWe also provide a time cactus plot (Figure 3a) for all properties on the Base model. Time cactus plots for each category of properties can be found in the appendices. All these time cactus plots look similar. Although BaBSR performs better than the commercial-solver-based method MIPplanet overall, MIPplanet wins on a subset of properties. The learned model GNN, however, is capable of giving consistently high quality performance over all properties tested." }, { "heading": "6.4 TRANSFERABILITY: LARGER MODELS", "text": "We also robustly trained two larger networks. One has the same layer structure as the Base model but has more hidden units on each layer, which we refer to as the Wide model. The other has a similar number of hidden units on each layer but more layers. We refer to it as the Deep model. The detailed network architecture is provided in the appendices. Apart from the network structure, everything else is kept the same as for the Base model experiments. We use BaBSR and a timeout of 7200s to generate 300 properties for the Wide model and 250 properties for the Deep model. For these two models, each LP called for solving a sub-problem output lower bound is much more time consuming, especially for the Deep model. This is the reason that the average number of branches considered is much smaller than that of the Base model within the given time limit.\nThe model learned on the Base network is tested on verification properties of the large networks. Experimental results are given in Table 2 and time cactus plots (Figures 3b, 3c) are also provided. All results are similar to what we observed on the Base model, which shows that our framework enjoys vertical transferability." }, { "heading": "7 DISCUSSION", "text": "The key observation of our work is that the neural network we wish to verify can be used to design a GNN to improve branching strategies. This observation can be used to enhance the performance of other aspects of BaB. Possible future work includes employing GNNs to find fast-converging starting values for solving LPs on a neural network and utilising GNNs to develop a lazy verifier that only solves the corresponding LP on a domain when it could lead to pruning." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work is supported by a Clarendon Fund Scholarship.
We thank Rudy Bunel for useful discussions. We thank Florian Jaeckle for proofreading the article." }, { "heading": "APPENDIX A. BRANCH AND BOUND ALGORITHM", "text": "The following generic Branch and Bound Algorithm is provided in Bunel et al. (2019). Given a neural network net and a verification property problem we wish to verify, the BaB procedure examines the truthfulness of the property through an iterative procedure. During each step of BaB, we first use the pick out function (line 6) to choose a problem prob to branch on. The split function (line 7) determines the branching strategy and splits the chosen problem prob into sub-problems. We compute output upper and lower bounds on each sub-problem with functions compute UB and compute LB respectively. Newly computed output upper bounds are used to tighten the global upper bound, which allows more sub-problems to be pruned. We prune a sub-problem if its output lower bound is greater than or equal to the global upper bound, so the smaller the global upper bound the better it is. Newly calculated output lower bounds are used to tighten the global lower bound, which is defined as the minimum of the output lower bounds of all remained sub-problems after pruning. We consider the BaB procedure converges when the difference between the global upper bound and the global lower bound is smaller than .\nIn our case, our interested verification problem Eq. (1) is a satisfiability problem. We thus can simplify the BaB procedure by initialising the global upper bound global ub as 0. As a result, we prune all sub-problems whose output lower bounds are above 0. In addition, the BaB procedure is terminated early when a below 0 output upper bound of a sub-problem is obtained, which means a counterexample exits.\nAlgorithm 1 Branch and Bound 1: function BAB(net, problem, ) 2: global lb← compute LB(net, problem) 3: global ub← compute UB(net, problem) 4: probs← [(global lb, problem)] 5: while global ub− global lb > do 6: ( , prob)← pick out(probs) 7: [subprob 1, . . . , subprob s]← split(prob) 8: for i = 1 . . . s do 9: sub lb← compute LB(net, subprob i) 10: sub ub← compute UB(net, subprob i) 11: if sub ub < global ub then 12: global ub← sub ub 13: prune probs(probs, global ub) 14: end if 15: if sub lb < global ub then 16: probs.append((sub lb, subprob i)) 17: end if 18: end for 19: global lb← min{lb | (lb, prob) ∈ probs} 20: end while 21: return global ub 22: end function" }, { "heading": "APPENDIX B. IMPLEMENTATION OF FORWARD AND BACKWARD PASSES", "text": "We give implementation details of forward and backward updates for embedding vectors for the model used in the experiments section. Choices of forward and backward update functions are based on the bounding methods used. In our experiments, we used linear bound relaxations for computing intermediate bounds and Planet relaxation for computing the final output lower bound. We start with a graph neural network mimicking the structure of the network we want to verify. We denote domain lower and upper bounds as l0 and u0 respectively. Similarly, we denote the intermediate bounds (pre-activation) for layers i = 1, . . . , L − 1 as li and ui. Since an LP solver is called for the final output lower bound, we have primal values for all nodes of V and dual values for all ambiguous nodes of V . Finally, let W 1, . . . ,WL be the layer weights and b1, . . . , bL be the layer biases of the network f , which we wish to verify." 
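Before detailing the passes, it may help to make explicit the relaxation geometry that these quantities feed. The sketch below lists the constraints the Planet relaxation of Figure 1(c) introduces for one ambiguous unit; the coefficient-triple encoding is illustrative and not the Gurobi model used in the experiments.

def planet_relaxation(l, u):
    # Constraints replacing x = max(x_hat, 0) for an ambiguous unit with
    # pre-activation bounds l < 0 < u: x >= 0, x >= x_hat and
    # x <= u * (x_hat - l) / (u - l).  The upper face has slope u / (u - l)
    # and its intercept at x_hat = 0 is the beta of Eq. (14).
    # Each triple (a, b, c) encodes a * x_hat + b * x <= c.
    assert l < 0.0 < u
    slope = u / (u - l)
    return [
        (0.0, -1.0, 0.0),           # x >= 0
        (1.0, -1.0, 0.0),           # x >= x_hat
        (-slope, 1.0, -slope * l),  # upper face of the relaxation triangle
    ]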
}, { "heading": "B.1 FORWARD PASS", "text": "Unless otherwise stated, all functions F∗ are 2-layer fully connected networks with ReLU activation units.\nB.1.1 INPUT NODES\nWe update the embedding vectors of input nodes only during the first round of the forward pass. That is, we update µ0[j] when it is zero for all j. After that, input node embedding vectors are updated only in the backward pass. For each input node, we form the feature vector z0[j] as a vector of l0[j], u0[j] and its associated primal solution. The input node embedding vectors are computed as\nµ0[j] = Finp(z0[j]; θ0). (13)" }, { "heading": "B.1.2 ACTIVATION NODES", "text": "The update function Fact can be broken down into three parts: 1) compute information from local features, 2) compute information from neighbourhood embedding vectors, and 3) combine the information from 1) and 2) to update the current layer's embedding vectors.\nInformation from local features Since we compute the final lower bound with the Planet relaxation (Figure 1(c)), we introduce a new feature related to the relaxation: the intercept of the relaxation triangle, shown in Figure 4. We denote an intercept as β and compute it as\nβi[j] = −(li[j] · ui[j])/(ui[j] − li[j]). (14)\nThe intercept of a relaxation triangle can be used as a measure of the amount of relaxation introduced at the current ambiguous node.\nTherefore, the local feature vector zi[j] of an ambiguous node x′i[j] consists of li[j], ui[j], βi[j], its associated layer bias value, primal values (one for the pre-activation variable and one for the post-activation variable) and dual values. We obtain information from local features via\nRi[j] = Fact−lf(zi[j]; θ^0_1) if x′i[j] is ambiguous, and Ri[j] = 0 otherwise, (15)\nwhere Ri[j] ∈ Rp.\nInformation from neighbourhood embedding vectors During the forward pass, we focus on embedding vectors of the previous layer only. To update an embedding vector on layer i, we first combine embedding vectors of the previous layer with edge weights via\nEi[j] = ∑k W^i_{kj} · µi−1[k]. (16)\nTo compute the information from neighbourhood embedding vectors to an arbitrary activation node x′i[j], we consider each activation unit as a gate. We observe that the amount of information from neighbourhood embedding vectors that remains after passing through a gate depends on its lower bound li[j] and upper bound ui[j]. When li[j] and ui[j] are of different signs, x′i[j] is an ambiguous node. With relaxation, for any input value between li[j] and ui[j], the maximum output achievable after passing the activation unit is shown by the red slope in Figure 5(a). The red slope si[j] is computed as\nsi[j](x̂i[j]) = (ui[j]/(ui[j] − li[j])) · x̂i[j] + βi[j]. (17)\nThus, the amount of information from neighbourhood embedding vectors that remains after passing through an ambiguous gate is related to the ratio α := ui[j]/(ui[j] − li[j]). When ui[j] is no greater than zero, the activation node x′i[j] completely blocks all information. For any input value, the output value is zero after passing the activation unit, as shown by the red line in Figure 5(b). We have α = 0 in this case. Finally, when li[j] is no less than 0, the activation node x′i[j] allows a complete passing of information and α = 1. It is shown by the red line in Figure 5(c). We incorporate these observations into our evaluations and compute the information from neighbourhood embedding vectors as\nNi[j] = Fact−nb([α · Ei[j], α′ · Ei[j]]; θ^1_1), (18)\nwhere α′ = 1 − α when 0 < α < 1 and α′ = α otherwise.
Here, we use [a, b] to denote the concatenation of two vectors a, b ∈ Rp into a vector of R2p. We introduce α′ to be more informative. For the sake of simplicity, we do not consider the information that relates to the intercept βi[j] in the ambiguous case. Improved performance could be expected if the βi[j]-related information were incorporated as well.
Combine previous information Finally, we combine the information from local features and the information from neighbourhood embedding vectors to update the embedding vectors of activation nodes. Specifically,
µi[j] = Fact−com([Ri[j], Ni[j]]; θ21). (19)" }, { "heading": "B.1.3 OUTPUT NODE", "text": "Embedding vectors of output nodes are updated in a similar fashion to those of activation nodes. We first compute information from local features:
RLj = Fout−lf (zLj; θ02). (20)
For output nodes, the vector of local features zL consists of the output lower bound, output upper bound, primal solution and layer bias. Fout−lf is a one-layer fully-connected network with ReLU activation units. We then compute information from neighbourhood embedding vectors. Since the output node does not have an activation unit associated with it, we directly compute the information from neighbourhood embedding vectors as
EL[j] = ∑k WLkj · µL−1[k]. (21)
Finally, we update the embedding vector of the output node as µLj = Fout−com([RL[j], EL[j]]; θ12). (22)" }, { "heading": "B.2 BACKWARD PASS", "text": "During backward message passing, for i = L−1, . . . , 1, we update the embedding vectors of activation nodes and input nodes. Again, all functions B∗ are 2-layer fully-connected networks unless specified otherwise." }, { "heading": "B.2.1 ACTIVATION NODES", "text": "Similar to the updates of embedding vectors carried out for activation nodes in the forward pass, we update the embedding vectors of activation nodes using the same three steps in the backward pass, but with minor modifications.
Information from local features We use the same feature vector zi[j] as the one used in the forward pass and compute the information from local features as
Rbi[j] = Bact−lf1(zi[j]; θ03) if x′i[j] is ambiguous, and Rbi[j] = 0 otherwise. (23)
We recall that a dual value indicates how the final objective function is affected if its associated constraint is relaxed by a unit. To better measure the importance of each relaxation to the final objective function, we further update the information from local features by
Rb′i[j] = Bact−lf2([di[j] ⊙ Rbi[j], Rbi[j]]; θ13) if Rbi[j] ≠ 0, and Rb′i[j] = 0 otherwise. (24)
Here, di[j] is the vector of dual values corresponding to the activation node x′i[j]. We use ⊙ to mean that we multiply Rbi[j] by each element value of di[j] and concatenate the results as a single vector.
Information from neighbourhood embedding vectors During the backward pass, we focus on the embedding vectors of the next layer only. In order to update an embedding vector on layer i, we compute the neighbourhood embedding vectors as
Ebi[j] = ∑k W i+1 jk · µi+1[k]. (25)
We point out that there might be an issue with computing Ei[j] if layer i + 1 is a convolutional layer in the backward pass. For a convolutional layer, depending on the padding number, stride number and dilation number, each node x′i[j] may connect to a different number of nodes on layer i + 1. Thus, to obtain a consistent measure of Ei[j], we divide Ei[j] by the number of connecting nodes on layer i + 1, denote the result as Eb′i[j], and use the averaged Eb′i[j] instead. Let
Eb∗i[j] = Eb′i[j] if layer i + 1 is convolutional, and Eb∗i[j] = Ebi[j] otherwise. (26)
The following steps are the same as in the forward pass. We first evaluate
Nbi[j] = Bact−nb([α · Eb∗i[j], α′ · Eb∗i[j]]; θ23), (27)
and then update the embedding vectors as
µi[j] = Bact−com([Rb′i[j], Nbi[j]]; θ33). (28)
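The gating view behind Eqs. (16)–(18) and their backward counterparts (25)–(27) can be sketched in a few lines. This is an illustrative PyTorch fragment, not the authors' code; f_nb is a hypothetical stand-in for the small fully-connected network Fact−nb / Bact−nb.

```python
import torch

def neighbourhood_info(W, mu_prev, l, u, f_nb):
    # W: (n_prev, n_cur) edge weights, mu_prev: (n_prev, p) embeddings,
    # l, u: (n_cur,) pre-activation bounds of the current layer.
    E = W.t() @ mu_prev                          # Eq. (16)/(25): combine neighbours
    # each ReLU acts as a gate: blocked (alpha=0), passing (alpha=1),
    # or ambiguous (alpha = u / (u - l)); u - l > 0 on the ambiguous branch
    alpha = torch.where(u <= 0, torch.zeros_like(u),
            torch.where(l >= 0, torch.ones_like(l), u / (u - l)))
    ambiguous = (l < 0) & (u > 0)
    alpha_p = torch.where(ambiguous, 1 - alpha, alpha)
    # Eq. (18)/(27): gate the aggregated embeddings and mix with a small net
    return f_nb(torch.cat([alpha[:, None] * E, alpha_p[:, None] * E], dim=1))
```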
B.2.2 INPUT NODES
Finally, we update the input nodes. We use the feature vector zb0, which consists of the domain upper bound and domain lower bound. Information from local features is evaluated as
R0j = Binp−lf (zb0[j]; θ04). (29)
We compute the information from neighbourhood embedding vectors in the same manner as we do for activation nodes in the backward pass, shown in Eq (26). Denote the computed information as Eb∗0[j]. The embedding vectors of input nodes are updated by
µ0[j] = Binp−com([Rb′0[j], Eb∗0[j]]; θ14). (30)" }, { "heading": "APPENDIX C. ALGORITHM FOR GENERATING TRAINING DATASET", "text": "Algorithm 2 outlines the procedure for generating the training dataset. The algorithm ensures that the generated training data have a wide coverage both in terms of the verification properties and the BaB stages, while at the same time being computationally efficient. Specifically, we randomly pick 25% of all properties that do not time out and run a complete BaB procedure on each of them with the strong branching heuristic to generate training samples (lines 3-5). For the remaining properties, we attempt to generate B training samples for each of them. To cover different stages of the BaB process of a property, we use a computationally cheap heuristic together with the strong branching heuristic. Given a property, we first use the cheap heuristic for k steps (lines 10-15) to reach a new stage of the BaB procedure and then call the strong branching heuristic to generate a training sample (line 16). We repeat the process until B training samples are generated or the BaB process terminates.
Algorithm 2 Generating Training Dataset
1: Provided: total P properties; minimum B training data for each property; a maximum q branches between strong branching decisions
2: for p = 1, . . . , P do:
3: α ←− random number from [0, 1]
4: if p is not a timed out property and α ≤ 0.25 then
5: Run a complete BaB process with the strong branching heuristic
6: else
7: b = 0
8: while b ≤ B do
9: k ←− random integer from [0, q]
10: while k > 0 do
11: Call a computationally cheap heuristic
12: if BaB process terminates then return
13: end if
14: k = k − 1
15: end while
16: Call the strong branching heuristic and generate a training sample
17: if BaB process terminates then return
18: end if
19: b = b + 1
20: end while
21: end if
22: end for" }, { "heading": "APPENDIX D. EXPERIMENT DETAILS", "text": "All the hyper-parameters used in the experiments were determined by testing a small set of values over the validation set. Due to the limited number of tests, we believe better sets of hyper-parameters could be found." }, { "heading": "D.1 TRAINING DETAILS", "text": "Training dataset To generate a training dataset, 565 random images are selected. Binary search with BaBSR and an 800-second timeout is used to determine ε on the Base model. Among the 565 verification properties determined, we use 430 properties to generate 17958 training samples and the rest of the properties to generate 5923 validation samples. Training samples and validation samples are generated using Algorithm 2 with B = 20 and q = 10.
For a typical epsilon value, each sub-domain generally contains 1300 ambiguous ReLU nodes. Among them, approximately 140 ReLU nodes are chosen for the strong branching heuristic, which leads to roughly 200 seconds for generating a training sample. We point out that the total amount of time required for generating a training sample equals 2 × (per-LP solve time) × (number of ambiguous ReLU nodes chosen). Although both the second and the third terms increase with the size of the model used for generating the training dataset, the vertical transferability of our GNN enables us to efficiently generate the training dataset by working with a small substitute of the model we are interested in. In our case, we trained on the Base model and generalised to the Wide and Deep models.
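As a rough illustration of how Algorithm 2 interleaves the cheap heuristic with strong branching for a single property, consider the following sketch. cheap_step and strong_branching_sample are hypothetical placeholders for the two heuristics applied to a running BaB process; the real pipeline operates on the LP-based state described above.

```python
import random

def generate_samples(bab_state, cheap_step, strong_branching_sample, B=20, q=10):
    # cheap_step(bab_state) advances BaB by one cheap branching step and
    # returns False once the process terminates; strong_branching_sample
    # records one (features, score) pair, or None on termination.
    samples = []
    while len(samples) < B:
        for _ in range(random.randint(0, q)):   # k cheap steps to a new stage
            if not cheap_step(bab_state):
                return samples
        sample = strong_branching_sample(bab_state)
        if sample is None:
            return samples
        samples.append(sample)
    return samples
```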
Training We initialise the GNN by assigning each node a 64-dimensional zero embedding vector. The GNN updates embedding vectors through two rounds of forward and backward updates. To train the GNN, we use the hinge rank loss (Eq. (9)) with M = 10. The parameters Θ are updated with the Adam optimizer with weight decay rate λ = 1e−4 and learning rate 1e−4. If the validation loss does not decrease for 10 consecutive epochs, we decrease the learning rate by a factor of 5. If the validation loss does not decrease for 20 consecutive epochs, we terminate the learning procedure. The batch size is set to 2. In our experiments, each training epoch took less than 400 seconds and the GNN converged within 60 epochs.
In terms of training accuracy, we first evaluate each branching decision using the metric defined by Eq. (8).1 Since there are several branching choices that give similar performance at each subdomain, we considered all branching choices that have mv above 0.9 as correct decisions. Under this assumption, our trained GNN achieves 85.8% accuracy on the training dataset and 83.1% accuracy on the validation dataset.
1We have tried various other metrics, including picking the minimum of the two subdomain lower bounds and the maximum of the two lower bounds. Among these metrics, the metric defined by Eq. (8) performs best.
D.2 VERIFICATION EXPERIMENT DETAILS
We ran all verification experiments in parallel on 16 CPU cores, with one property being verified on one CPU core. We observed that although we specifically set the thread number to one for MIPplanet (backed by the commercial solver Gurobi), the time required for solving a property depends on the total number of CPUs used. For a machine with 20 CPU cores, MIPplanet requires much less time on average for proving the same set of properties on fewer (say 4) CPU cores in parallel than on many (say 16) CPU cores in parallel (the rest of the CPU cores remain idle). Since BaBSR, GNN and GNN-online all use Gurobi for the bounding problems, similar time variations, depending on the number of CPU cores used, are observed. We ran each method in the same setting and on 16 CPUs in parallel, so our reported results and times are comparable. However, we remind readers to take this time variation into consideration when replicating our experiments or using our results for comparison.
Fail-safe strategy Since, to the best of our knowledge, the branching heuristic of BaBSR is the best performing one on convolutional neural networks so far, we choose it for our fail-safe strategy. The threshold is set to 0.2. Whenever the relative improvement mgnn of a GNN branching decision vgnn is less than 0.2, we call the heuristic to make a new branching decision vh. We solve the corresponding LPs for the new branching decision and compute its relative improvement mh. The node with the higher relative improvement is chosen as the final branching decision.
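A schematic version of this fail-safe logic is given below. The relative_improvement helper (which would solve the two LPs of a split) is a hypothetical placeholder, not the paper's implementation.

```python
def fail_safe_decision(gnn_decision, heuristic_decision,
                       relative_improvement, threshold=0.2):
    # Trust the GNN whenever its decision improves the bounds enough.
    m_gnn = relative_improvement(gnn_decision)
    if m_gnn >= threshold:
        return gnn_decision, None
    # Otherwise compare against the heuristic's decision; a win for the
    # heuristic also flags a "failed" GNN decision for online learning.
    m_h = relative_improvement(heuristic_decision)
    if m_h > m_gnn:
        return heuristic_decision, gnn_decision
    return gnn_decision, None
```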
Online learning We take a conservative approach to online learning. We refer to a GNN decision as a failed decision if the relative improvement offered by heuristic branching is better than the one offered by the GNN. We record all failed GNN decisions and only update the GNN model online when the same failed decision is made at least twice. To update the GNN model, we use the Adam optimizer with weight decay rate λ = 1e−4 and learning rate 1e−4. The GNN model is updated with one gradient step only, with respect to the optimization problem of Eq. (12), where γ = 1 and t = 0.1 in the loss function lossonline, defined in Eq. (11)." }, { "heading": "D.3 BASELINES", "text": "We chose our baselines based on the experimental results of Bunel et al. (2019). In Bunel et al. (2019), methods including MIPplanet, BaBSR, planet (Ehlers, 2017), reluBaB and reluplex (Katz et al., 2017) are compared on a small convolutional MNIST network. Among them, BaBSR and MIPplanet significantly outperform the other methods. We thus evaluate our methods against these two methods only in the experiments section. In order to strengthen our baseline, we compare against two additional methods here.
Neurify (Wang et al., 2018a) Similar to BaBSR, Neurify splits on ReLU activation nodes. It makes a branching decision by computing gradient scores to prioritise ReLU nodes. Since the updated version of Neurify's released code supports verification, we conducted a comparison experiment between Neurify and BaBSR for completeness.
Neurify does not support the CIFAR dataset. To evaluate the performance of Neurify, we obtained the trained ROBUST MNIST model and the corresponding verification properties from Bunel et al. (2019). We ranked all verification properties in terms of BaBSR solving time and selected the first 200 properties, which are solved by BaBSR within one minute, as our test properties. For a fair comparison, we restricted Neurify to use one CPU core only and set the timeout limit to two minutes. Among all test properties, Neurify timed out on 183 out of 200 properties. BaBSR thus outperforms Neurify significantly. Combined with the results of Bunel et al. (2019), BaBSR is indeed a fairly strong baseline to compare against.
MIP based algorithm (Tjeng et al., 2019) We also compared our MIPplanet baseline against a new MIP based algorithm (Tjeng et al., 2019), published in ICLR 2019. To test these two methods, we randomly selected 100 verification properties from the CIFAR Base experiment with timeout 3600s. In terms of solving time, MIPplanet requires 1732.18 seconds on average while the new MIP algorithm requires 2736.60 seconds. Specifically, MIPplanet outperforms the new MIP algorithm on 78 out of 100 properties. MIPplanet is therefore a strong baseline for comparison.
As a caveat, we mention that the main difference between MIPplanet and the algorithm of Tjeng et al. (2019) is the intermediate bound computation, which is complementary to our focus. If better intermediate bounds are shown to help verification, we can still use our approach to get better branching decisions corresponding to those bounds." }, { "heading": "D.4 MODEL ARCHITECTURE", "text": "We provide the architecture details of the neural networks verified in the experiments in the following table." }
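For concreteness, here is a sketch of the conservative online update described in D.2. loss_online stands in for the loss of Eq. (11) (with γ = 1, t = 0.1), which is defined earlier in the paper and not spelled out here; the bookkeeping of failed decisions assumes they can be used as dictionary keys.

```python
import torch

failed_counts = {}

def maybe_update(gnn, optimizer, failed_decision, loss_online):
    # A failed decision triggers a single Adam step only after it has
    # been observed at least twice (conservative online learning).
    failed_counts[failed_decision] = failed_counts.get(failed_decision, 0) + 1
    if failed_counts[failed_decision] < 2:
        return
    optimizer.zero_grad()
    loss = loss_online(gnn, failed_decision)   # objective of Eq. (12)
    loss.backward()
    optimizer.step()                           # one gradient step only

# optimizer = torch.optim.Adam(gnn.parameters(), lr=1e-4, weight_decay=1e-4)
```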
, { "heading": "APPENDIX E. ADDITIONAL EXPERIMENT RESULTS", "text": "" }, { "heading": "E.1 FAIL-SAFE HEURISTIC DEPENDENCE", "text": "In all our experiments, we have compared against BaBSR, which employs only the fail-safe heuristic for branching. In other words, removing the GNN and using only the fail-safe heuristic is equivalent to BaBSR. The fact that the GNN significantly outperforms BaBSR demonstrates that the GNN is doing most of the work. To better evaluate the GNN's reliance on the fail-safe heuristic, we study the ratio of times that a GNN branching decision is used for each verification property of a given model. Results are listed in Table 4. On all three models, the GNN accounts for more than 90% of the branching decisions employed on average, confirming the effectiveness of our GNN framework." }, { "heading": "E.2 GNN FEATURE ANALYSIS", "text": "We evaluate the importance of the different features used in the GNN. We note that two types of features are used in the GNN. The first type (including intermediate bounds, network weights and biases) can be collected at negligible cost. The other type is LP features (primal and dual values), which are acquired by solving a strong LP relaxation; these are expensive to compute but potentially highly informative. To evaluate their effect, we trained a new GNN with the LP features removed and tested the new GNN on 260 randomly selected verification properties on the Base model. Among the selected properties, 140 are categorised as easy, 70 as medium and 50 as hard. We denote the model trained on all features as GNN and the newly trained model as GNN-R (we use R to indicate reduced features).
From Table 5, we observe that removing primal and dual information deteriorates the GNN performance, but GNN-R still outperforms the baseline heuristic BaBSR. We believe the cheap features are the most important. Depending on the cost of the LPs, potential users can either remove the expensive LP features or train a GNN with a smaller architecture." }, { "heading": "E.3 MIPPLANET BRANCHING NUMBER", "text": "MIPplanet is implemented with the commercial solver Gurobi. Since Gurobi outputs its internal branch number, we recorded the MIPplanet branch number for a subset of verification properties for each model. In detail, we randomly selected 120 properties of various difficulty levels for the Base model and 27 properties each for the Wide and Deep models. Results are summarised in Table 6.
One key observation we made is that the Gurobi branch number is not positively related to the solving time. For instance, on timed out properties of the Wide model, the MIPplanet branch number varies between 1 and 7479. We suspect Gurobi performs cutting before branching, so the time spent on branching varies between properties, leading to inconsistent branch numbers and solving times. As a result, the MIPplanet branch number is not comparable with those of BaBSR, GNN and GNN-online. This is also the reason we did not include the MIPplanet branch number in Table 1 and Table 2." }, { "heading": "E.4 LP SOLVING TIME AND GNN COMPUTING TIME", "text": "We mention that LP solving time is the main bottleneck for branch-and-bound based verification methods. Although both the GNN evaluation time and the LP solving time increase with the size of the network, LP solving time grows at a significantly faster rate. For instance, in the CIFAR experiments, the GNN requires on average 0.02, 0.03 and 0.08 seconds to make a branching decision on the Base, Wide and Deep models respectively, but the corresponding average times for solving one LP are roughly 1.1, 4.9 and 9.6 seconds.
GNN evaluation is almost negligible for large neural networks when compared to LP solving time." }, { "heading": "E.5 GEOMETRIC MEAN", "text": "For all our experiments, we based our analyses on the statistics of average solving time and branching number. To ensure that the reported numbers are not biased by potential outliers, we also measure the methods' performance with the geometric mean and summarise the results in Table 7 and Table 8. The statistics of the geometric mean are consistent with those of the arithmetic mean, validating the analyses of the main paper." }, { "heading": "E.6 ADDITIONAL PLOTS", "text": "We provide cactus plots for the Base model on easy, medium and hard difficulty level properties respectively." }, { "heading": "APPENDIX F. MNIST DATASET", "text": "We replicate the CIFAR experiments on the MNIST dataset to test the generalization ability of our GNN framework." }, { "heading": "F.1 MODEL ARCHITECTURE AND VERIFICATION PROPERTIES", "text": "We trained three different networks on MNIST with the method provided in Wong & Kolter (2018). The Base model is mainly used for generating the training dataset and testing the horizontal generalization ability of the trained GNN. The Wide and Deep models are used for evaluating vertical generalization. Verification properties are found via binary search with BaBSR. We set the binary search time limit to 1800 seconds for the Base model and 3600 seconds for the other two models." }, { "heading": "F.2 TRAINING DETAILS", "text": "Training dataset The training dataset is generated on the Base model. We point out that we explicitly choose a Base model of small network size for efficient and fast training data generation.
To generate a training dataset, 538 random MNIST images are selected. Binary search with BaBSR and a 600-second timeout is used to determine ε on the Base model. Among the 538 verification properties determined, we use 403 properties to generate 18231 training samples and the rest of the properties to generate 5921 validation samples. Training samples and validation samples are generated using Algorithm 2 with B = 20 and q = 10. For a typical epsilon value, each sub-domain generally contains 480 ambiguous ReLU nodes. Among them, approximately 80 ReLU nodes are chosen for the strong branching heuristic, which leads to roughly 45 seconds for generating a training sample.
Training The same set of parameters and the same training procedure are used for training a GNN on the MNIST dataset. The GNN converges in 70 epochs, with each epoch taking less than 400 seconds. The trained GNN reached 86.5% accuracy on the training dataset and 83.1% accuracy on the validation dataset." }, { "heading": "F.3 EXPERIMENT RESULTS", "text": "We first note that we use verification properties with a timeout of 1800 seconds on the Base model to allow for an integrated evaluation of the GNN on both its performance and its horizontal transferability. Vertical transferability is tested on the Wide and Deep models.
We observe that MIPplanet outperforms all BaB based methods on verification properties of the Base model. Given that the network size of the Base model is particularly small (1226 hidden units only), we believe that MIP algorithms backed by commercial solvers could be the most effective tool on verification problems of small size. Our conjecture is further confirmed by the fact that MIPplanet timed out on almost all properties of both the Wide and Deep models. On all three models, GNN consistently outperforms BaBSR, demonstrating the transferability of our framework.
Finally, when online learning is considered, we found it effective in fine-tuning the trained GNN and enabling further performance improvements, especially on the Wide model." }, { "heading": "F.4 FAIL-SAFE HEURISTIC DEPENDENCE", "text": "The results of Table 11 confirm that the trained GNN indeed accounts for most of the branching decisions." }, { "heading": "F.5 GEOMETRIC MEAN", "text": "The consistency between the results of Table 12 and Table 10 confirms that our analyses based on the arithmetic mean are not biased by outliers." }, { "heading": "F.6 TRANSFERABILITY BETWEEN DATASETS", "text": "To evaluate whether our framework can generalize further to support transferring between datasets, we tested the CIFAR-trained GNN on MNIST verification properties. In detail, we tested the GNN on 20 randomly picked verification properties of the MNIST Base model. We found that BaBSR outperforms the CIFAR-trained GNN on all properties, so the CIFAR-trained GNN model does not transfer to the MNIST dataset. This is expected, as MNIST and CIFAR images differ significantly from each other." } ]
2,020
NEURAL NETWORK BRANCHING FOR NEURAL NETWORK VERIFICATION
SP:bbde4c0910f4ef9ef433f0349f7ff3edd569b63f
[ "This work presents an encoding approach for unordered set input to neural networks. The authors base their approach on weighted finite automata, where in order to absorb unordered sets, they enforce multiplicative commutativity on transition matrices by approximating them as complex diagonal matrices. The authors furthermore provide mathematical references and results to derive bounds for their approximation. They show that positional encoding in Transformer network can be seen as a special case of their multiset encoding scheme, which also generalizes DeepSets encoding from real to complex numbers.", "This paper proposed a complex weights based multiset automata designed to represent unordered data. The main idea of multiset automata is that the transition matrices of the automata is pairwise commutative. To achieve this property, the authors proposed to restrict the transition matrices to be diagonal and shows that the latter is a close approximation of the former. The authors proceed to give two practical applications of the multiset automata: position encoding of the transformer and deepset networks. For the former, the authors showed that the position encodings from Vaswani et al. can be written as a weighted unary automaton and therefore it is a generalization of the original position encodings. For the latter, the authors extended the classical deepset networks into its complex domain, allowing more efficient representation of the data. " ]
Unordered, variable-sized inputs arise in many settings across multiple fields. The ability of set- and multiset-oriented neural networks to handle this type of input has been the focus of much work in recent years. We propose to represent multisets using complex-weighted multiset automata and show how the multiset representations of certain existing neural architectures can be viewed as special cases of ours. Namely, (1) we provide a new theoretical and intuitive justification for the Transformer model's representation of positions using sinusoidal functions, and (2) we extend the DeepSets model to use complex numbers, enabling it to outperform the existing model on an extension of one of their tasks.
[]
[ { "authors": [ "Justin DeBenedetto", "David Chiang" ], "title": "Algorithms and training for weighted multiset automata and regular expressions", "venue": "In Cezar Câmpeanu, editor, Implementation and Application of Automata,", "year": 2018 }, { "authors": [ "Jonas Gehring", "Michael Auli", "David Grangier", "Denis Yarats", "Yann N. Dauphin" ], "title": "Convolutional sequence to sequence learning, 2017", "venue": null, "year": 2017 }, { "authors": [ "Roger A. Horn", "Charles A. Johnson" ], "title": "Matrix Analysis", "venue": null, "year": 2012 }, { "authors": [ "Haggai Maron", "Heli Ben-Hamu", "Nadav Shamir", "Yaron Lipman" ], "title": "Invariant and equivariant graph networks", "venue": "In Proc. ICLR,", "year": 2019 }, { "authors": [ "Ryan L. Murphy", "Balasubramaniam Srinivasan", "Vinayak A. Rao", "Bruno Ribeiro" ], "title": "Janossy pooling: Learning deep permutation-invariant functions for variable-size inputs, 2018", "venue": null, "year": 1900 }, { "authors": [ "K.C. O’Meara", "C. Vinsonhaler" ], "title": "On approximately simultaneously diagonalizable matrices", "venue": "Linear Algebra and its Applications,", "year": 2006 }, { "authors": [ "Tomás Pevný", "Petr Somol" ], "title": "Using neural network formalism to solve multiple-instance problems", "venue": "In Proc. ISNN,", "year": 2016 }, { "authors": [ "Andrew M. Saxe", "James L. McClelland", "Surya Ganguli" ], "title": "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks", "venue": null, "year": 2013 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Proc. NeurIPS,", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Samy Bengio", "Manjunath Kudlur" ], "title": "Order matters: Sequence to sequence for sets", "venue": null, "year": 2015 }, { "authors": [ "Edward Wagstaff", "Fabian B. Fuchs", "Martin Engelcke", "Ingmar Posner", "Michael A. Osborne" ], "title": "On the limitations of representing functions on sets, 2019", "venue": null, "year": 1901 }, { "authors": [ "Bo Yang", "Sen Wang", "Andrew Markham", "Niki Trigoni" ], "title": "Robust attentional aggregation of deep feature sets for multi-view 3D reconstruction", "venue": "International Journal of Computer Vision,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural networks which operate on set-structured input have been gaining interest for their ability to handle unordered and variable-sized inputs (Vinyals et al., 2015; Wagstaff et al., 2019). They have been applied to various tasks, such as processing graph nodes (Murphy et al., 2018), hypergraphs (Maron et al., 2019), 3D image reconstruction (Yang et al., 2019), and point cloud classification and image tagging (Zaheer et al., 2017). Similar network structures have been applied to multiple instance learning (Pevný and Somol, 2016).\nIn particular, the DeepSets model (Zaheer et al., 2017) computes a representation of each element of the set, then combines the representations using a commutative function (e.g., addition) to form a representation of the set that discards ordering information. Zaheer et al. (2017) provide a proof that any function on sets can be modeled this way, by encoding sets as base-4 fractions and using the universal function approximation theorem, but their actual proposed model is far simpler than the model constructed by the theorem.\nIn this paper, we propose to compute representations of multisets using weighted multiset automata, a variant of weighted finite-state (string) automata in which the order of the input symbols does not affect the output. In some sense, this is the most general representation of a multiset that can be computed incrementally using only a finite amount of memory, and it can be directly implemented inside a neural network. We show how to train these automata efficiently by approximating them with string automata whose weights form complex, diagonal matrices.\nOur representation generalizes DeepSets slightly, and it also turns out to be a generalization of the Transformer’s position encodings (Vaswani et al., 2017). In Sections 4 and 5, we discuss the application of our representation in both cases.\n• The Transformer (Vaswani et al., 2017) models the absolute position of a word within a sentence. This position can be thought of as a multiset over a single element, and indeed the Transformer uses a position encoding involving sinusoidal functions that turns out to be a special case of our representation. So weighted multiset automata provide a new theoretical and intuitive justification for sinusoidal position encodings. We also experiment with several variations on position encodings\ninspired by this justification, and although they do not yield any improvement, we do find that learned position encodings in our representation do better than learning a different vector for each absolute position.\n• We extend the DeepSets model to use our representation, which amounts to upgrading it from real to complex numbers. On an extension of one of their tasks (adding a sequence of one-digit numbers and predicting the units digit), our model is able to reach perfect performance, whereas the original DeepSets model does no better than chance." }, { "heading": "2 WEIGHTED MULTISET AUTOMATA", "text": "We define weighted finite automata below using a matrix formulation. Throughout, let K be either R or C. Definition 1. A K-weighted finite automaton (WFA) over Σ is a tuple M = (Q,Σ, λ, µ, ρ), where Q = {1, . . . 
, d} is a finite set of states, Σ is a finite alphabet, λ ∈ K1×d is a row vector of initial weights, µ : Σ → Kd×d assigns a transition matrix to every symbol, and ρ ∈ Kd×1 is a column vector of final weights.
(We do not use the final weights ρ in this paper, but include them for completeness.)
We extend the mapping µ to strings: If w = w1 · · · wn ∈ Σ∗, then µ(w) = µ(w1) · · · µ(wn). Then, the vector of forward weights of a string w is fwM(w) = λ (∏_{p=1}^{n} µ(wp)).
Note that, unlike many definitions of weighted automata, this definition does not allow ε-transitions, and there may be more than one initial state. (Throughout this paper, we use ε to stand for a small real number.)
The analogue of finite automata for multisets is the special case of the above definition where multiplication of the transition matrices µ(a) does not depend on their order.
Definition 2. A K-weighted multiset finite automaton is one whose transition matrices commute pairwise. That is, for all a, b ∈ Σ, we have µ(a)µ(b) = µ(b)µ(a).
Our proposal, then, is to represent a multiset w by the vector of forward weights, fwM(w), with respect to some weighted multiset automaton M. In the context of a neural network, the transition weights µ(a) can be computed by any function as long as it does not depend on the ordering of symbols, and the forward weights can be used by the network in any way whatsoever." }, { "heading": "3 TRAINING", "text": "Definition 2 does not lend itself well to training, because parameter optimization needs to be done subject to the commutativity constraint. Previous work (DeBenedetto and Chiang, 2018) suggested approximating the training of a multiset automaton by training a string automaton while using a regularizer to encourage the weight matrices to be close to commuting. However, this strategy cannot make them commute exactly, and the regularizer, which has O(|Σ|^2) terms, is expensive to compute. Here, we pursue a different strategy, which is to restrict the transition matrices µ(a) to be diagonal. This guarantees that they commute. As a bonus, diagonal matrices are computationally less expensive than full matrices. Furthermore, we show that if we allow complex weights, we can learn multisets with diagonal matrices almost as well as with full matrices. We show this first for the special case of unary automata (§3.1) and then for general multiset automata (§3.2)." }, { "heading": "3.1 UNARY AUTOMATA", "text": "Call an automaton unary if |Σ| = 1. Then, for brevity, we simply write µ instead of µ(a), where a is the only symbol in Σ.
Let ‖ · ‖ be the Frobenius norm; by equivalence of norms (Horn and Johnson, 2012, 352), the results below should carry over to any other matrix norm, as long as it is monotone, that is: if A ≤ B elementwise, then ‖A‖ ≤ ‖B‖. As stated above, our strategy for training a unary automaton is to allow µ to be complex but restrict it to be diagonal. The restriction does not lose much generality, because any matrix can be approximated by a complex diagonal matrix in the following sense (Horn and Johnson, 2012, 116):
Proposition 1. For any complex square matrix A and ε > 0, there is a complex matrix E such that ‖E‖ ≤ ε and A + E is diagonalizable in C.
Proof. Form the Jordan decomposition A = PJP−1. We can choose a diagonal matrix D such that ‖D‖ ≤ ε/κ(P) (where κ(P) = ‖P‖‖P−1‖) and the diagonal entries of J + D are all different. Then J + D is diagonalizable. Let E = PDP−1; then ‖E‖ ≤ ‖P‖‖D‖‖P−1‖ = κ(P)‖D‖ ≤ ε, and A + E = P(J + D)P−1 is also diagonalizable.
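A small NumPy illustration of this approximation (ours, not from the paper): perturbing a unary automaton's transition matrix so that it generically has distinct eigenvalues, diagonalising it, and comparing the forward weights of the two automata.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
lam = rng.normal(size=(1, d))          # initial weights lambda
mu = rng.normal(size=(d, d)) * 0.5     # transition matrix mu

eps = 1e-6
mu_pert = mu + eps * np.diag(rng.normal(size=d))  # generically distinct eigenvalues
evals, Q = np.linalg.eig(mu_pert)                 # mu_pert = Q diag(evals) Q^-1

lam_diag = lam @ Q                     # lambda' = lambda Q
for n in [1, 5, 20]:
    fw_full = lam @ np.linalg.matrix_power(mu, n)          # forward weights of M
    fw_diag = (lam_diag * evals ** n) @ np.linalg.inv(Q)   # via the diagonal form
    print(n, np.max(np.abs(fw_full - fw_diag)))            # stays small for all n
```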
Thus, for a unary automaton M with transition matrix µ, we can choose Qµ′Q−1 close to µ such that µ′ is diagonal. So M is close to the automaton with initial weights λ′ = λQ and transition weights µ′ ≈ Q−1µQ. This means that in training, we can directly learn complex initial weights λ′ and a complex diagonal transition matrix µ′, and the resulting automaton (M′) should be able to represent multisets almost as well as a general unary automaton (M) can.
It might be thought that even if µ′ approximates µ well, perhaps the forward weights, which involve possibly large powers of µ, will not be approximated well. As some additional assurance, we have the following error bound on the powers of µ:
Proposition 2. For any complex square matrix A, ε > 0, and 0 < r < 1, there is a complex matrix E such that A + E is diagonalizable in C and, for all n ≥ 0,
‖(A + E)^n − A^n‖ ≤ ε r^n if A is nilpotent,
‖(A + E)^n − A^n‖ / ‖A^n‖ ≤ nε otherwise.
For the proof, please see Appendix A." }, { "heading": "3.2 GENERAL CASE", "text": "In this section, we allow Σ to be of any size. Proposition 1 unfortunately does not hold in general for multiple matrices (O'Meara and Vinsonhaler, 2006). That is, it may not be possible to perturb a set of commuting matrices so that they are simultaneously diagonalizable.
Definition 3. Matrices A1, . . . , Am are simultaneously diagonalizable if there exists an invertible matrix P such that PAiP−1 is diagonal for all i ∈ {1, · · · , m}. We say that A1, · · · , Am are approximately simultaneously diagonalizable (ASD) if, for any ε > 0, there are matrices E1, . . . , Em such that ‖Ei‖ ≤ ε and A1 + E1, . . . , Am + Em are simultaneously diagonalizable.
O'Meara and Vinsonhaler (2006) give examples of sets of matrices that are commuting but not ASD. However, if we are willing to add new states to the automaton (that is, to increase the dimensionality of the weight matrices), we can make them ASD.
Proposition 3. Any weighted multiset automaton is close to an automaton that can be converted to a complex-weighted diagonal automaton, possibly with more states.
Proof. First we start with a fact from O'Meara and Vinsonhaler (2006).
Lemma 4. Suppose A1, . . . , Ak are commuting n × n matrices over an algebraically closed field F. Then there exists an invertible matrix C such that C−1A1C, . . . , C−1AkC are block diagonal matrices with matching block structures and each diagonal block has only a single eigenvalue (ignoring multiplicities). That is, there is a partition n = n1 + · · · + nr of n such that
C−1AiC = Bi = diag(Bi1, Bi2, . . . , Bir), (1)
where each Bij is an nj × nj matrix having only a single eigenvalue, for i = 1, . . . , k and j = 1, . . . , r. Moreover, if B1j , . . . , Bkj are ASD for j = 1, . . . , r, then A1, . . . , Ak are ASD.
Furthermore, O'Meara and Vinsonhaler (2006) observe that each block can be written as Bij = λijI + Nij where Nij is nilpotent, so A1, . . . , Ak are ASD iff N1j , . . . , Nkj are for all j.
So the transition matrices of the automaton can be rewritten in the above form, and the problem of converting an automaton to one that is ASD is reduced to the problem of converting an automaton with nilpotent transition matrices (equivalently, an automaton recognizing a finite language) to one that is ASD (possibly with more states). 
See Appendix B for one such construction.
This means that if we want to learn representations of multisets over a finite alphabet Σ, it suffices to constrain the transition matrices to be complex diagonal, possibly with more states. Unfortunately, the best construction we know of (Appendix B) increases the number of states considerably. But this does not in any way prevent the use of our representation; we can choose however many states we want, and it is an empirical question whether the number of states is enough to learn good representations.
The following two sections look at two practical applications of our representation." }, { "heading": "4 POSITION ENCODINGS", "text": "One of the distinguishing features of the Transformer network for machine translation (Vaswani et al., 2017), compared with older RNN-based models, is its curious-looking position encodings,
e_{p,2j−1} = sin(10000^{−2(j−1)/d} (p − 1))
e_{p,2j} = cos(10000^{−2(j−1)/d} (p − 1)) (2)
which map word positions p (ranging from 1 to n, the sentence length) to points in the plane and are the model's sole source of information about word order.
In this section, we show how these position encodings can be interpreted as the forward weights of a weighted unary automaton. We also report on some experiments on some extensions of position encodings inspired by this interpretation." }, { "heading": "4.1 AS A WEIGHTED UNARY AUTOMATON", "text": "Consider a diagonal unary automaton M in the following form:
λ = [exp iφ1  exp −iφ1  exp iφ2  exp −iφ2  · · ·]
µ = diag(exp iθ1, exp −iθ1, exp iθ2, exp −iθ2, · · ·).
In order for a complex-weighted automaton to be equivalent to some real-weighted automaton, the entries must come in conjugate pairs like this, so this form is fully general.
By a change of basis, this becomes the following unary automaton M′ (this is sometimes called the real Jordan form):
λ′ = [cos φ1  sin φ1  cos φ2  sin φ2  · · ·]
µ′ = [  cos θ1   sin θ1   0        0       · · ·
       −sin θ1   cos θ1   0        0       · · ·
        0        0        cos θ2   sin θ2  · · ·
        0        0       −sin θ2   cos θ2  · · ·
        ...      ...      ...      ...     . . . ]  (3)
Then, for any string prefix u (making use of the angle sum identities):
fwM′(u) = [cos(φ1 + |u|θ1)  sin(φ1 + |u|θ1)  cos(φ2 + |u|θ2)  sin(φ2 + |u|θ2)  · · ·].
If we let
φi = π/2
θj = −10000^{−2(j−1)/d},
this becomes exactly equal to the position encodings defined in (2). Thus, the Transformer's position encodings can be reinterpreted as follows: it runs automaton M′ over the input string and uses the forward weights of M′ just before position p to represent p. This encoding, together with the embedding of word wp, is used as the input to the first self-attention layer." }
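This equivalence is easy to check numerically. The following NumPy snippet (ours, for illustration) verifies that running the block-rotation automaton M′ for p − 1 steps reproduces the sinusoidal encodings of Eq. (2):

```python
import numpy as np

d = 8
j = np.arange(1, d // 2 + 1)
theta = -10000.0 ** (-2 * (j - 1) / d)   # rotation angles of M'
phi = np.full(d // 2, np.pi / 2)         # initial-weight angles

def position_encoding(p):                # Eq. (2)
    e = np.empty(d)
    e[0::2] = np.sin(10000.0 ** (-2 * (j - 1) / d) * (p - 1))
    e[1::2] = np.cos(10000.0 ** (-2 * (j - 1) / d) * (p - 1))
    return e

def forward_weights(p):                  # fw_{M'} after p - 1 steps
    ang = phi + (p - 1) * theta
    return np.stack([np.cos(ang), np.sin(ang)], axis=1).ravel()

for p in [1, 2, 10]:
    assert np.allclose(position_encoding(p), forward_weights(p))
```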
, { "heading": "4.2 EXPERIMENTS", "text": "This reinterpretation suggests some potential extensions to position encodings: 1. Using the diagonal, polar form of the transition matrix (3), learn the φi and θi instead of keeping them fixed. 2. Learn all the initial weights and the full transition matrix directly.
We carried out some experiments to see whether these methods perform better or worse than the original. We used an open-source implementation of the Transformer, Witwicky.1 The settings used were the default settings, except that we used 8k joint BPE operations and d = 512 embedding dimensions. We tested the following variations on position encodings.
1https://github.com/tnq177/witwicky
Table 1: case-insensitive BLEU
Model           Training        En-Vi∗  Uz-En  Ha-En  Hu-En  Ur-En  Ta-En  Tu-En
diagonal polar  fixed           32.6    25.7   24.4   34.2   11.5   13.4   25.7
                learned angles  32.7    25.8   25.4   34.0   11.1   14.1   25.7
full matrix     random          32.6    25.9   25.8   34.1   10.9   12.8   26.1
                learned         32.5    24.5   23.6   33.5   11.4   14.5   23.8
per position    random          32.6    24.9   24.6   34.1   11.0   14.1   24.4
                learned         32.1    22.6   21.2   33.0   11.7   14.4   21.1
∗tokenized references
Table 1 shows that no method is clearly the best. The only method that appears to be worse than the others is “per position, learned,” which, although best on Urdu-English, does much worse than the other methods on several tasks. By contrast, the learned embeddings based on multiset automata (“diagonal polar, learned angles” and “full matrix, learned”) are usually close to the best, lending some support to our interpretation." }, { "heading": "5 COMPLEX DEEPSETS", "text": "In this section, we incorporate a weighted multiset automaton into the DeepSets (Zaheer et al., 2017) model, extending it to use complex numbers." }, { "heading": "5.1 MODELS", "text": "The DeepSets model computes a vector representation for each input symbol and sums them to discard ordering information. We may think of the elementwise layers as computing the log-weights of a diagonal multiset automaton, and the summation layer as computing the forward log-weights of the multiset. (The logs are needed because DeepSets adds, whereas multiset automata multiply.) However, DeepSets uses only real weights, whereas our multiset automata use complex weights. Thus, DeepSets can be viewed as using a multiset representation which is a special case of ours.
We conduct experiments comparing the DeepSets model (Zaheer et al., 2017), a GRU model, an LSTM model, and our complex multiset model. The code and layer sizes for the three baselines come from the DeepSets paper.2 See Figure 1 for the layer types and sizes of the three baseline models.
In our system, to avoid underflow when multiplying many complex numbers, we store each complex number as e^r(a + bi), where r, a, and b are real and a and b are normalized such that a^2 + b^2 = 1 prior to multiplication. Thus, for each complex-valued parameter, we have three scalars (r, a, and b) to learn. To this end, each input is fed into three separate embedding layers of size 50 (for r, a, and b). (While the DeepSets code uses a dense layer at this point, in our network, we found that we could feed the embeddings directly into a complex multiplication layer to discard ordering information. This reduced the number of parameters for our model and did not affect performance.) The output of this is then a new r, a, and b which are concatenated and fed into a final dense layer as before to obtain the output. Since our diagonalized automata have complex initial weights (λ′), we also tried learning a complex initial weight vector λ′, but this had no effect on performance.
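A NumPy sketch of the order-invariant pooling step described above (illustrative only; the real model learns r, a, and b end-to-end and operates batched):

```python
import numpy as np

def complex_pool(r, a, b):
    # r, a, b: arrays of shape (set_size, dim) representing e^r (a + bi)
    # with a^2 + b^2 = 1. Multiplying the set's complex numbers amounts to
    # summing the r's and multiplying the unit parts, so order cannot matter.
    r_out = r.sum(axis=0)
    z = (a + 1j * b).prod(axis=0)        # product of unit-modulus numbers
    return r_out, z.real, z.imag         # still satisfies a^2 + b^2 = 1

# Invariance check on a random "set" of 5 elements:
rng = np.random.default_rng(0)
r = rng.normal(size=(5, 3))
ang = rng.normal(size=(5, 3))
a, b = np.cos(ang), np.sin(ang)
perm = rng.permutation(5)
out1 = complex_pool(r, a, b)
out2 = complex_pool(r[perm], a[perm], b[perm])
assert all(np.allclose(x, y) for x, y in zip(out1, out2))
```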
The total number of parameters for each model was 4,161 parameters for the DeepSets model, 31,351 parameters for the LSTM model, 44,621 parameters for the GRU model, and 1,801 parameters for our model. To eliminate the number of parameters as a difference between our model and the DeepSets model, we also tried the DeepSets model without the first dense layer and with embedding sizes of 150 to exactly match the number of parameters of our model; the results on the test tasks were not significantly different from the baseline DeepSets model.
For all experiments, we used mean squared error loss, a learning rate decay of 0.5 after the validation loss does not decrease for 2 epochs, and early stopping after the validation loss does not decrease for 10 epochs.
2https://github.com/manzilzaheer/DeepSets/blob/master/DigitSum/text_sum.ipynb" }, { "heading": "5.2 EXPERIMENTS", "text": "Task 1: Sum of digits In this task, taken from Zaheer et al. (2017), the network receives a set of single-digit integers as input and must output the sum of those digits. The output is rounded to the nearest integer to measure accuracy. The training set consisted of 100k randomly generated sequences of digits 1–9 with lengths from 1 to 50. They were fed to each network in the order in which they were generated (which only affects the GRU and LSTM). This was then split into training and dev sets with approximately a 99/1 split. The test set consisted of randomly generated sequences of lengths that were multiples of 5, from 5 to 95. Figure 2 shows that both our model and DeepSets obtain perfect accuracy on the test data, while the LSTM and GRU fail to generalize to longer sequences.
Task 2: Returning units digit of a sum The second task is similar to the first, but only requires returning the units digit of the sum. The data and evaluation are otherwise the same as task 1. Here, random guessing within the output range 0–9 achieves approximately 10% accuracy. Figure 2 shows that DeepSets, LSTM, and GRU are unable to achieve performance better than random guessing on the test data. Our method is able to return the units digit perfectly for all test lengths, because it effectively learns to use the cyclic nature of complex multiplication to produce the units digit." }, { "heading": "6 CONCLUSION", "text": "We have proven that weighted multiset automata can be approximated by automata with (complex) diagonal transition matrices. This formulation permits simpler elementwise multiplication instead of matrix multiplication, and requires fewer parameters when using the same number of states. We show that this type of automaton naturally arises within existing neural architectures, and that this representation generalizes two existing multiset representations, the Transformer's position encodings and DeepSets. Our results provide new theoretical and intuitive justification for these models, and, in one case, lead to a change in the model that drastically improves its performance." }, { "heading": "A PROOF OF PROPOSITION 2", "text": "Lemma 5. If J is a Jordan block with nonzero eigenvalue, then the bound of Proposition 2 holds for J.
Proof. Note that for any δ, ε ≥ 0, we have
(1 − δ)(1 − ε) ≥ 1 − δ − ε ≥ 1 − 2 max{δ, ε}
(1 − ε)^n ≥ 1 − nε.
The powers of J look like
J^n = [ (n 0)λ^n  (n 1)λ^{n−1}  (n 2)λ^{n−2}  · · ·
                  (n 0)λ^n     (n 1)λ^{n−1}  · · ·
                               (n 0)λ^n      · · ·
                                             . . . ]
More concisely, for k ≥ j,
[J^n]jk = (n k−j) λ^{n−k+j}.
Let D be a diagonal matrix whose elements are in [−ελ, 0) and are all different.
The powers of (J + D) are
[(J + D)^n]jk = cjk [J^n]jk,
where
cjk ≥ (1 − ε)^{n−k+j} ≥ 1 − (n − k + j)ε ≥ 1 − nε.
Finally, form their difference:
[(J + D)^n − J^n]jk = (cjk − 1)[J^n]jk
|[(J + D)^n − J^n]jk| ≤ nε |[J^n]jk|
‖(J + D)^n − J^n‖ ≤ nε ‖J^n‖
‖(J + D)^n − J^n‖ / ‖J^n‖ ≤ nε.
Lemma 6. If J is a Jordan block with zero eigenvalue, then for any ε > 0, r > 0, there is a complex matrix E such that J + E is diagonalizable in C and
‖(J + E)^n − J^n‖ ≤ ε r^n.
Proof. In this case, we have to perturb the diagonal elements to nonzero values. For any δ ≤ 1/2, let D be a diagonal matrix whose elements are in (0, δ] and are all different. Then the elements of ((J + D)^n − J^n) satisfy:
[(J + D)^n − J^n]jk ≤ (n k−j) δ^{n−k+j} (0 ≤ k − j < min{n, d})
< 2^n δ^{min{0,n−d}+1},
so the error is at most 2^{d−1} d (2δ)^{min{0,n−d}+1}. Let δ = min{1/2, εr, (εr/2)^d}.
Now we can prove Proposition 2.
Proof. Form the Jordan decomposition M = PJP−1, where
J = diag(J1, J2, . . . , Jp).
We begin with the non-nilpotent case. Let κ(P) = ‖P‖‖P−1‖ be the Frobenius condition number of P. For each Jordan block Jj:
• If Jj has a nonzero eigenvalue, perturb it so that the absolute error of the nth power is at most (nε / κ(P)^2) · ‖J^n_j‖ / (2p), by Lemma 5.
• If Jj has zero eigenvalue, perturb it so that the absolute error is at most (nε / κ(P)^2) · ρ(J)^n / (2p), by Lemma 6.
Then the total absolute error of all the blocks with nonzero eigenvalue is at most (nε / κ(P)^2) · ‖J^n‖ / 2. And since ρ(J)^n ≤ ‖J^n‖, the total absolute error of all the blocks with zero eigenvalue is also at most (nε / κ(P)^2) · ‖J^n‖ / 2. So the combined total is
‖(J + E)^n − J^n‖ ≤ (nε / κ(P)^2) ‖J^n‖.
Finally,
‖(M + E)^n − M^n‖ = ‖P((J + D)^n − J^n)P−1‖
≤ κ(P) ‖(J + D)^n − J^n‖
≤ (nε / κ(P)) ‖J^n‖
≤ (nε / κ(P)) ‖P−1 M^n P‖
≤ nε ‖M^n‖,
so ‖(M + E)^n − M^n‖ / ‖M^n‖ ≤ nε.
If M is nilpotent, the above argument does not go through, because ρ(J) = 0. Instead, use Lemma 6 to bound the absolute error of each block by ε r^n / p, so that the total absolute error is at most ε r^n." }, { "heading": "B MAKING AUTOMATA ASD", "text": "In this section, we give a construction for converting a multiset automaton to one that is equivalent, but possibly has more states.
Let ⊕ stand for the direct product of vector spaces, ⊗ for the Kronecker product, and define the shuffle product A ⧢ B = A ⊗ I + I ⊗ B. (This is known as the Kronecker sum and is sometimes notated ⊕, but we use that for the direct product.) These operations extend naturally to weighted multiset automata and correspond roughly to union and concatenation, respectively:
λ_{A⊕B} = λA ⊕ λB   µ_{A⊕B}(a) = µA(a) ⊕ µB(a)   ρ_{A⊕B} = ρA ⊕ ρB
λ_{A⧢B} = λA ⊗ λB   µ_{A⧢B}(a) = µA(a) ⧢ µB(a)   ρ_{A⧢B} = ρA ⊗ ρB
They are of interest here because they preserve the ASD property:
Proposition 7. If M1 and M2 are multiset automata with ASD transition matrices, then M1 ⊕ M2 has ASD transition matrices, and M1 ⧢ M2 has ASD transition matrices.
Proof. First consider the ⊕ operation. Let µ1(a) (for all a) be the transition matrices of M1. For any ε > 0, let E1(a) be perturbations of the µ1(a) such that ‖E1(a)‖ ≤ ε/2 and the µ1(a) + E1(a) (for all a) are simultaneously diagonalizable. Similarly for M2. Then the matrices (µ1(a) + E1(a)) ⊕ (µ2(a) + E2(a)) (for all a) are simultaneously diagonalizable, and
‖(µ1(a) + E1(a)) ⊕ (µ2(a) + E2(a)) − µ1(a) ⊕ µ2(a)‖ = ‖E1(a) ⊕ E2(a)‖ ≤ ‖E1(a)‖ + ‖E2(a)‖ ≤ ε.
Next, we consider the ⧢ operation. Let d1 and d2 be the number of states in M1 and M2, respectively. This time, we choose ‖E1(a)‖ ≤ ε/(2d2) and ‖E2(a)‖ ≤ ε/(2d1). 
Then the matrices (µ1(a) + E1(a)) ⧢ (µ2(a) + E2(a)) (for all a) are simultaneously diagonalizable, and
(µ1(a) + E1(a)) ⧢ (µ2(a) + E2(a)) = (µ1(a) + E1(a)) ⊗ I + I ⊗ (µ2(a) + E2(a))
= µ1(a) ⊗ I + E1(a) ⊗ I + I ⊗ µ2(a) + I ⊗ E2(a)
= (µ1(a) ⧢ µ2(a)) + (E1(a) ⧢ E2(a)),
so
‖(µ1(a) + E1(a)) ⧢ (µ2(a) + E2(a)) − µ1(a) ⧢ µ2(a)‖ = ‖E1(a) ⧢ E2(a)‖
= ‖E1(a) ⊗ I + I ⊗ E2(a)‖
≤ ‖E1(a) ⊗ I‖ + ‖I ⊗ E2(a)‖
≤ ‖E1(a)‖ d2 + d1 ‖E2(a)‖
≤ ε.
Proposition 8. If M is a weighted multiset automaton with d states recognizing a finite language (that is, all of its transition matrices are nilpotent), there exists an equivalent automaton with O(d^{2|Σ|+1}) states whose transition matrices are ASD.
Proof. Because any set of commuting matrices can be simultaneously triangularized by a change of basis, assume without loss of generality that M's transition matrices are upper triangular, that is, there are no transitions from state q to state r where q > r.
The idea is that M′ should simulate a run of M in which the symbols are read in lexicographic order. It does so by building up partial runs, one for each symbol in Σ, and then stitching them together.
Let Q be the states of M, and let a1, . . . , am be the symbols of Σ. For all a ∈ Σ and q, r ∈ Q, define Mq,a,r to be the automaton which simulates M starting from state q, reading only a's, and ending in state r. Then
M′ = ⊕_{q0∈Q} · · · ⊕_{qm∈Q} λ_{q0} M_{q0,a1,q1} ⧢ · · · ⧢ M_{qm−1,am,qm}
(where the multiplication by the scalar λ_{q0} means scaling the initial weight vector by λ_{q0}). By Proposition 7, M′ is ASD, and because each of the Mq,a,r has no more than d states, M′ has at most d^{2|Σ|+1} states." } ]
2,019
null
SP:942e9e4be427dd59ec333c2a3073288c4c418cdc
[ "The paper deals with robustness against adversarial attacks. It proposes to blank out large parts of the early convolution layers in a CNN, in an attempt to shift the focus from \"texture\" to \"shape\" features. This does seem to improve robustness against adversarial examples, with only a small decrease in general classification performance. The explanation for this, on the other hand, is not really convincing.", "The paper proposes defective convolutional layers as a measure of defense against adversarial attacks on deep neural networks. This layer sets the outputs of a randomly sampled but *fixed* set of neurons in the convolutional layers to zero during training and testing. The authors claim that defective convolutional layers encourage the model to pick up features other than local textures, e.g. shape information. The shape-vs-texture tradeoff is supported by experiments showing that defective CNNs perform worse than normal CNNs on images with permuted patches and that adversarial examples with larger epsilons exhibit more semantic shapes. The detailed experiment section evaluates the method on transfer-based, gray-box and black-box adversarial attacks,\tincluding Gaussian noise. Additionally, it provides ablation studies on the keep-probability and position of the defective layer." ]
Robustness of convolutional neural networks has recently been highlighted by adversarial examples, i.e., inputs with well-designed perturbations added that are imperceptible to humans but can cause the network to give incorrect outputs. Recent research suggests that the noise in adversarial examples breaks the textural structure, which eventually leads to wrong predictions by convolutional neural networks. To help a convolutional neural network make predictions that rely less on textural information, we propose defective convolutional layers which contain defective neurons whose activations are set to be a constant function. As the defective neurons contain no information and are far different from the standard neurons in their spatial neighborhood, the textural features cannot be accurately extracted and the model has to seek other features for classification, such as shape. We first show that predictions made by the defective CNN are less dependent on textural information but more on shape information, and further find that adversarial examples generated by the defective CNN appear to have semantic shapes. Experimental results demonstrate that the defective CNN has a stronger defense ability than the standard CNN against various types of attacks. In particular, it achieves state-of-the-art performance against transfer-based attacks without applying any adversarial training.
[]
[ { "authors": [ "Nicholas Baker", "Hongjing Lu", "Gennady Erlikhman", "Philip J Kellman" ], "title": "Deep convolutional networks do not classify based on global object shape", "venue": "PLoS computational biology,", "year": 2018 }, { "authors": [ "Battista Biggio", "Igino Corona", "Davide Maiorca", "Blaine Nelson", "Nedim Šrndić", "Pavel Laskov", "Giorgio Giacinto", "Fabio Roli" ], "title": "Evasion attacks against machine learning at test time", "venue": "In Joint European conference on machine learning and knowledge discovery in databases,", "year": 2013 }, { "authors": [ "Wieland Brendel", "Jonas Rauber", "Matthias Bethge" ], "title": "Decision-based adversarial attacks: Reliable attacks against black-box machine learning models", "venue": "arXiv preprint arXiv:1712.04248,", "year": 2017 }, { "authors": [ "Jacob Buckman", "Aurko Roy", "Colin Raffel", "Ian Goodfellow" ], "title": "Thermometer encoding: One hot way to resist", "venue": null, "year": 2018 }, { "authors": [ "Nicholas Carlini", "David A. Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "CoRR, abs/1608.04644,", "year": 2016 }, { "authors": [ "Jianlong Chang", "Jie Gu", "Lingfeng Wang", "Gaofeng Meng", "Shiming Xiang", "Chunhong Pan" ], "title": "Structure-aware convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jifeng Dai", "Haozhi Qi", "Yuwen Xiong", "Yi Li", "Guodong Zhang", "Han Hu", "Yichen Wei" ], "title": "Deformable convolutional networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Yinpeng Dong", "Fangzhou Liao", "Tianyu Pang", "Hang Su", "Xiaolin Hu", "Jianguo Li", "Jun Zhu" ], "title": "Boosting adversarial attacks with momentum", "venue": "arXiv preprint arXiv:1710.06081,", "year": 2017 }, { "authors": [ "Nic Ford", "Justin Gilmer", "Nicolas Carlini", "Dogus Cubuk" ], "title": "Adversarial examples are a natural consequence of test error in noise", "venue": null, "year": 1901 }, { "authors": [ "Robert Geirhos", "David HJ Janssen", "Heiko H Schütt", "Jonas Rauber", "Matthias Bethge", "Felix A Wichmann" ], "title": "Comparing deep neural networks against humans: object recognition when the signal gets weaker", "venue": "arXiv preprint arXiv:1706.06969,", "year": 2017 }, { "authors": [ "Robert Geirhos", "Patricia Rubisch", "Claudio Michaelis", "Matthias Bethge", "Felix A Wichmann", "Wieland Brendel" ], "title": "Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness", "venue": "arXiv preprint arXiv:1811.12231,", "year": 2018 }, { "authors": [ "Golnaz Ghiasi", "Tsung-Yi Lin", "Quoc V Le" ], "title": "Dropblock: A regularization method for convolutional networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Hossein Hosseini", "Sreeram Kannan", "Radha Poovendran" ], "title": "Dropping pixels for adversarial robustness", "venue": "arXiv preprint arXiv:1905.00180,", "year": 2019 }, 
{ "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Ruitong Huang", "Bing Xu", "Dale Schuurmans", "Csaba Szepesvári" ], "title": "Learning with a strong adversary", "venue": "arXiv preprint arXiv:1511.03034,", "year": 2015 }, { "authors": [ "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Logan Engstrom", "Brandon Tran", "Aleksander Madry" ], "title": "Adversarial examples are not bugs, they are features", "venue": null, "year": 1905 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial machine learning at scale", "venue": "arXiv preprint arXiv:1611.01236,", "year": 2016 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Fangzhou Liao", "Ming Liang", "Yinpeng Dong", "Tianyu Pang", "Jun Zhu", "Xiaolin Hu" ], "title": "Defense against adversarial attacks using high-level representation guided denoiser", "venue": "arXiv preprint arXiv:1712.02976,", "year": 2017 }, { "authors": [ "Mengchen Liu", "Shixia Liu", "Hang Su", "Kelei Cao", "Jun Zhu" ], "title": "Analyzing the noise robustness of deep neural networks", "venue": "arXiv preprint arXiv:1810.03913,", "year": 2018 }, { "authors": [ "Yanpei Liu", "Xinyun Chen", "Chang Liu", "Dawn Song" ], "title": "Delving into transferable adversarial examples and black-box attacks", "venue": "arXiv preprint arXiv:1611.02770,", "year": 2016 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Arunesh Sinha", "Michael Wellman" ], "title": "Towards the science of security and privacy in machine learning", "venue": "arXiv preprint arXiv:1611.03814,", "year": 2016 }, { "authors": [ "Nicolas Papernot", "Patrick D. McDaniel", "Ian J. Goodfellow" ], "title": "Transferability in machine learning: from phenomena to black-box attacks using adversarial samples", "venue": "CoRR, abs/1605.07277,", "year": 2016 }, { "authors": [ "Jonas Rauber", "Wieland Brendel", "Matthias Bethge" ], "title": "Foolbox: A python toolbox to benchmark the robustness of machine learning models", "venue": "arXiv preprint arXiv:1707.04131,", "year": 2017 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks. 
"venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Pouya Samangouei", "Maya Kabkab", "Rama Chellappa" ], "title": "Defense-gan: Protecting classifiers against adversarial attacks using generative models", "venue": "arXiv preprint arXiv:1805.06605,", "year": 2018 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Yang Song", "Taesup Kim", "Sebastian Nowozin", "Stefano Ermon", "Nate Kushman" ], "title": "Pixeldefend: Leveraging generative models to understand and defend against adversarial examples", "venue": "arXiv preprint arXiv:1710.10766,", "year": 2017 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting", "venue": "The Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Bo Sun", "Nian-hsuan Tsai", "Fangchen Liu", "Ronald Yu", "Hao Su" ], "title": "Adversarial defense by stratified convolutional sparse coding", "venue": "arXiv preprint arXiv:1812.00037,", "year": 2018 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian J. Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "CoRR, abs/1312.6199,", "year": 2013 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Jonathan Tompson", "Ross Goroshin", "Arjun Jain", "Yann LeCun", "Christoph Bregler" ], "title": "Efficient object localization using convolutional networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Dimitris Tsipras", "Shibani Santurkar", "Logan Engstrom", "Alexander Turner", "Aleksander Madry" ], "title": "Robustness may be at odds with accuracy", "venue": "arXiv preprint arXiv:1805.12152,", "year": 2018 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Matthew D Zeiler", "Rob Fergus" ], "title": "Visualizing and understanding convolutional networks", "venue": "In European conference on computer vision,", "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep learning (LeCun et al., 2015), especially deep Convolutional Neural Network (CNN) (LeCun et al., 1998), has led to state-of-the-art results spanning many machine learning fields (He et al., 2016; Ren et al., 2015). Despite the great success in numerous applications, recent studies show that deep CNNs are vulnerable to some well-designed input samples named as Adversarial Examples (Szegedy et al., 2013; Biggio et al., 2013). Take the task of image classification as an example, for almost every commonly used well-performed CNN, attackers are able to construct a small perturbation on an input image. The perturbation is almost imperceptible to humans but can make the model give a wrong prediction. The problem is serious as some designed adversarial examples can be transferred among different kinds of CNN architectures (Papernot et al., 2016b), which means a machine learning system can be easily attacked even if the attacker does not have access to the model parameters.\nThere is a rapidly growing body of work on how to obtain a robust convolutional neural network, mainly based on adversarial training (Szegedy et al., 2013; Madry et al., 2017; Goodfellow et al., 2015; Huang et al., 2015). However, those methods need lots of extra computation to obtain adversarial examples at each time step and may tend to overfit the attacking method used in training (Buckman et al., 2018). In this paper, different from most existing methods, we tackle the problem from another perspective. In particular, we explore the possibility of designing new CNN architectures which can be trained using standard optimization methods on standard benchmark datasets but by themselves enjoy robustness, without appealing to other techniques. Recently, studies (Geirhos et al., 2017; 2018; Baker et al., 2018) show that the predictions of CNNs mainly depend on the texture of objects but not the shape. Also, Liu et al. (2018) finds attack methods usually perturb patches to contain textural features of incorrect classes. They suggest that the wrong prediction by CNNs for adversarial examples comes from the change on the texture-level information. The small perturbation of adversarial examples will change the textures and eventually affect the features extracted by the CNNs. Therefore, a natural way to avoid adversarial examples is to let the CNN make prediction\nrelying less on textures but more about other information which will not be severely affected by small perturbations, such as shape.\nIn real practice, sometimes a camera might have mechanical failures which cause the output image to have many defective pixels (such pixels are always black in all images). Nonetheless, humans can still recognize objects in the image with defective pixels but have to classify the objects by other information as some local textural information is missing. Motivated by this, we introduce the concept of defectiveness into the convolutional neural networks: We call a neuron a defective neuron if its output value is fixed to zero no matter what input signal is received, and a convolutional layer a defective convolutional layer if it contains defective neurons. Before training, we replace the standard convolutional layers with the defective version on a standard CNN and train the network in the standard way. 
As the defective neurons of a defective convolutional layer contain no information and are very different from their spatial neighbors, textural information cannot be accurately passed from the bottom defective layers to the top layers. Therefore, we destroy local textural information to a certain extent and prompt the neural network to learn other information for classification. We call an architecture deployed with defective convolutional layers a Defective CNN.\nWe find that applying the defective convolutional layers to the bottom1 layers of the network and introducing various patterns for the arrangement of defective neurons across channels are crucial for robustness." }, { "heading": "2 RELATED WORK", "text": "Various methods have been proposed to defend against adversarial examples. One line of research is to derive a meaningful optimization objective and optimize the model by adversarial training (Szegedy et al., 2013; Madry et al., 2017; Goodfellow et al., 2015; Huang et al., 2015). The high-level idea of these works is that if we can predict the potential attack on the model during optimization, then we\n1In this paper, bottom layer means the layer close to the input and top layer means the layer close to the output prediction.\ncan give the attacked sample a correct signal and use it during training. Another line of research is to adjust the input image before letting it go through the deep neural network (Liao et al., 2017; Song et al., 2017; Samangouei et al., 2018; Sun et al., 2018). The basic intuition behind this is that if we can remove the adversarial perturbation to a certain extent, then such attacks can be defended against. Although these methods achieve some success, a major difficulty is that they need a large extra cost to collect adversarial examples and are hard to apply to large-scale datasets.\nSeveral studies (Geirhos et al., 2017; 2018; Baker et al., 2018) show that the predictions of CNNs depend mainly on the texture of objects rather than their shape. Moreover, Liu et al. (2018) found that adversarial examples usually perturb a patch of the original image so that the perturbed patch looks like the texture of an incorrect class. For example, the adversarial example of the panda image is misclassified as a monkey because a patch of the panda skin is perturbed adversarially so that it alone looks like the face of a monkey (see Figure 11 in Liu et al. (2018)). All the previous works above suggest that CNNs learn textural information more than shape, and that adversarial attacks might come from texture-level perturbations. This is also related to robust features (Tsipras et al., 2018; Ilyas et al., 2019; Hosseini et al., 2019), which have attracted more interest recently. Pixels that encode textural information contain high redundancy and may easily be perturbed toward the distribution of incorrect classes. However, shape information is more compact and might be a more robust feature." }, { "heading": "3 DEFECTIVE CONVOLUTIONAL NEURAL NETWORK", "text": "" }, { "heading": "3.1 DESIGN OF DEFECTIVE CONVOLUTIONAL LAYERS", "text": "In this subsection, we introduce our proposed defective convolutional neural networks and discuss the differences between our proposed method and related techniques.\nFirst, we briefly introduce the notation. For one convolutional layer, denote x as the input and z as the output of the neurons in the layer. Note that x may be the input image or the output of the previous convolutional layer. 
The input x is usually an M × N × K tensor in which M and N are the height and width of a feature map, and K is the number of feature maps, or equivalently, channels. Denote w and b as the parameters (i.e., the weights and biases) of the convolutional kernel. Then a standard convolutional layer can be mathematically defined as below." }, { "heading": "Standard convolutional layer:", "text": "x′ = w ⊗conv x + b, (1) z = f(x′), (2)\nwhere f(·) is a non-linear activation function such as ReLU2 and ⊗conv is the convolutional operation. The convolutional filter receives signals in a patch and extracts local textural information from the patch. As mentioned in the introduction, recent works suggest that the predictions of standard CNNs strongly depend on such textural information, and noise imposed on the texture may lead to wrong predictions. Therefore, we hope to learn a feature extractor that does not rely solely on textural features but also considers other information. To achieve this goal, we introduce the defective convolutional layer, in which some neurons are purposely designed to be corrupted. Define Mdefect to be a binary tensor of size M × N × K. Our defective convolutional layer is defined as follows." }, { "heading": "Defective convolutional layer:", "text": "x′ = w ⊗conv x + b, (3) z′ = f(x′), (4) z = Mdefect ∗ z′, (5)\nwhere ∗ denotes the element-wise product. Mdefect is fixed and is not learnable during training and testing. A simple visualization of a defective convolutional layer is shown in Figure 2. From the figure, we can see that Mdefect plays the role of “masking” out the values of some neurons in the layer. This disturbs the distribution of local textural information and decouples the correlation among neurons. With the masked output z as input, the feature extractor of the next convolutional layer cannot accurately capture the local textural features of x. As a consequence, textural information is hard to pass through the defective CNN from bottom to top. To produce accurate predictions, the deep neural network has to find relevant signals other than texture, e.g., shape. The corrupted neurons have no severe impact on the extraction of shape information, since the neighbors of those neurons in the same filter are still capable of passing shape information to the next layer.\n2Batch normalization is popularly used on x′ before computing z.\nIn this paper, we find that simply setting Mdefect by random initialization is already helpful for learning a robust CNN. Before training, we sample each entry in Mdefect from a Bernoulli distribution with keep probability p and then fix Mdefect during training and testing. More discussions and ablation studies are provided in Section 4.\nAs can be seen from Equations (3)-(5), the implementation of our defective convolutional layer is similar to the dropout operation (Srivastava et al., 2014). To demonstrate the relationship and differences, we mathematically define dropout as below.\nStandard convolutional layer + dropout:\nMdp ∼ Bernoulli(p), (6) x′ = w ⊗conv x + b, (7) z′ = f(x′), (8) z = Mdp ∗ z′. (9)\nThe shape of Mdp is the same as that of Mdefect, and the value of each entry in Mdp is re-sampled in each batch using some sampling strategy at each step during training. Generally, entries in Mdp are independently and identically sampled in an online fashion from a Bernoulli distribution with keep probability p. DropBlock (Ghiasi et al., 2018) drops out a contiguous block in each channel. In SpatialDropout (Tompson et al., 2015), the dropout masks apply to whole channels. Note that the masked unit of our proposed method is a single neuron.
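To make Equations (3)-(5) concrete, the following is a minimal PyTorch sketch of a defective convolutional layer (our own illustration, not the authors' released code; the module name, arguments, and the assumption that the feature-map size is known in advance are ours). Unlike a dropout mask, Mdefect is sampled once from a Bernoulli distribution with keep probability p and registered as a non-trainable buffer, so it stays fixed during both training and testing.

import torch
import torch.nn as nn

class DefectiveConv2d(nn.Module):
    # Conv -> ReLU -> fixed binary mask, following Eqs. (3)-(5).
    def __init__(self, in_channels, out_channels, feature_size, keep_prob=0.1,
                 kernel_size=3, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding)
        self.relu = nn.ReLU(inplace=True)
        # M_defect: independent Bernoulli(keep_prob) per neuron, different across channels.
        mask = torch.bernoulli(torch.full((1, out_channels, feature_size, feature_size), keep_prob))
        self.register_buffer("defect_mask", mask)  # fixed in both training and testing

    def forward(self, x):
        z = self.relu(self.conv(x))
        return z * self.defect_mask  # element-wise product with the fixed mask (Eq. (5))

# Usage on a 32x32 CIFAR-10-sized feature map with keep probability 0.1:
layer = DefectiveConv2d(3, 64, feature_size=32, keep_prob=0.1)
out = layer(torch.randn(8, 3, 32, 32))  # shape (8, 64, 32, 32)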
There are several differences between dropout and the defective convolutional layer. First, the motivations behind the two methods are quite different. Dropout tries to reduce overfitting by preventing co-adaptation on the training data. As the neurons in the feature maps still have full access to local textural features at test time, the model does not have to learn shape features. In contrast, in our proposed architecture, defective neurons are fixed to be corrupted, and such neurons cannot contribute to local features. Second, the binary mask Mdp is sampled online during training and is removed during testing, while the binary mask Mdefect in defective convolutional layers is predefined and stays fixed in both training and testing. Third, the two methods differ in where they are applied and in the value of the keep probability p. Dropout methods are usually applied to top layers, and p is set to be large (e.g., 0.9) (Tompson et al., 2015; Ghiasi et al., 2018). For the defective convolutional layer, we find that using a small p (e.g., 0.1) and applying it to the bottom layers is more effective." }, { "heading": "3.2 DEFECTIVE CNN RELIES LESS ON TEXTURE BUT MORE ON SHAPE", "text": "In the defective CNN, some neurons are set to be corrupted during both training and testing, and we argue that this design can help the CNN make predictions relying less on textural information and more on shape information. In this subsection, we provide some empirical analyses to verify this idea.\nWe design a particular image manipulation in which the local texture of the object in an image is preserved while its shape is destroyed. In detail, we divide an image into a k × k grid of patches and randomly relocate those patches to form a new image. A typical example is shown in Figure 3. By relocating the patches, it is hard even for a human to recognize the object in the picture when k is large.\nWe manipulate a set of images and test whether a defective CNN and a standard CNN can make correct predictions. The experimental details are as follows. First, we construct a defective CNN by applying defective convolutional layers to the bottom layers of a standard ResNet-18. Then, we train the defective CNN along with a standard ResNet-18 on the ImageNet dataset and sample 4000 images from the validation set that are predicted correctly with more than 99% confidence by both CNNs. We manipulate the sampled images with k ∈ {2, 4, 8}, feed these images to the networks, and check their classification accuracy. The results in Table 1 show that when the shape information is destroyed but the local textural information is preserved, the defective CNN performs consistently worse than the standard CNN, verifying that the defective CNN makes predictions relying less on local textural information and more on shape information." }
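As an illustration of the patch-relocation manipulation above, a possible implementation (our own sketch; the function name and the grid reading of k are ours) is:

import numpy as np

def shuffle_patches(image, k, rng=None):
    # Split an HxWxC image into a k x k grid of patches and randomly relocate them.
    # Assumes H and W are divisible by k.
    rng = rng or np.random.default_rng()
    h, w = image.shape[0] // k, image.shape[1] // k
    patches = [image[i*h:(i+1)*h, j*w:(j+1)*w] for i in range(k) for j in range(k)]
    order = rng.permutation(len(patches))
    rows = [np.concatenate([patches[order[r*k + c]] for c in range(k)], axis=1) for r in range(k)]
    return np.concatenate(rows, axis=0)

# e.g., a 224x224x3 ImageNet-sized image split into a 4x4 grid
shuffled = shuffle_patches(np.zeros((224, 224, 3), dtype=np.uint8), k=4)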
, { "heading": "4 EXPERIMENTS", "text": "In this section, we provide extensive experimental analyses of the performance of defective CNNs. We mainly test our models against black-box attacks. There are two reasons why considering black-box attacks is meaningful. First, the black-box setting is more practical for real-world tasks. Second, for the white-box setting, the adversarial examples generated against defective CNNs appear to have semantic shapes and may even fool humans as well. This indicates that the small perturbations can actually change the semantic meaning of images for humans. Such samples should not be categorized as adversarial examples and should not be used to evaluate adversarial robustness.\nTo better evaluate the defense ability against black-box attacks, we propose a black-box defense evaluation protocol that examines the target model against the transfer-based attack, the decision-based attack, and additive Gaussian noise. In real-world tasks, attackers usually cannot access the parameters of the target models and thus need to transfer adversarial examples generated by their own models. This setting is referred to as a transfer-based attack (Liu et al., 2016). Sometimes, attackers can obtain the final model decision and mount the more powerful decision-based attack (Brendel et al., 2017). Both types of black-box attack are available in most real-world scenarios. Recently, Ford et al. (2019) bridge adversarial robustness and corruption robustness, and point out that a successful adversarial defense method should also effectively defend against images with additive Gaussian noise. Additive Gaussian noise is also a type of black-box attack to some extent, since the noise distribution has nothing to do with the parameters of the target models. Therefore, we also test the performance of the target models against additive Gaussian noise.\nWe first test the robustness of the defective CNN against transfer-based attacks, and then conduct ablation studies on possible design choices of the defective CNN. Due to space limitations, more results, including transfer-based attacks from ensemble models, decision-based attacks, additive Gaussian noise, white-box attacks and gray-box attacks, are listed in Appendix A." }, { "heading": "4.1 TRANSFER-BASED ATTACK", "text": "" }, { "heading": "4.1.1 EXPERIMENTAL SETTINGS", "text": "We compare our proposed method with state-of-the-art defense methods (Buckman et al., 2018; Madry et al., 2017). For fair comparisons, we follow Buckman et al. (2018) to generate adversarial examples using wide residual networks (Zagoruyko & Komodakis, 2016) with a depth of 32 and a width factor of 4. The 4-block structure of ResNet-32 is shown in Appendix C. The blocks are labeled 0, 1, 2, 3, and the 0th block is the first convolutional layer. Both FGSM (Goodfellow et al., 2015) and PGD (Kurakin et al., 2016) attacks are run on the entire validation set of the CIFAR-10 dataset. Both methods use an ℓ∞ perturbation scale of 8, and PGD runs for 7 gradient descent steps with step size 2. The generated adversarial examples are used to attack the target networks. For the target network, we use the same structure but apply defective convolutional layers to the 0th and 1st blocks with keep probability p = 0.1, and train the model using a standard optimization method. As mentioned in Section 3, our proposed method is essentially different from dropout, and thus we also take dropout methods as baselines. More specifically, we test SpatialDropout and DropBlock. For both methods, we follow the instructions from Ghiasi et al. (2018) to apply dropout to the 3rd block with p = 0.9. The block of DropBlock is set to be a 3 × 3 square. Training details can be found in Appendix D.
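A minimal sketch of this transfer-based evaluation protocol (our own illustration in PyTorch; pgd_attack below is a standard ℓ∞ PGD, not the authors' exact code, and the data pipeline is omitted):

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps_step=2/255, scale=8/255, steps=7):
    # L_inf PGD: gradient-sign steps of size eps_step, projected to a ball of radius `scale`.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = x_adv.detach() + eps_step * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + scale), x - scale).clamp(0, 1)
    return x_adv

@torch.no_grad()
def successful_defense_rate(source_model, target_model, loader):
    defended, total = 0, 0
    for x, y in loader:
        with torch.enable_grad():
            x_adv = pgd_attack(source_model, x, y)
        # count only examples whose clean version the target model classifies correctly
        keep = target_model(x).argmax(1) == y
        defended += (target_model(x_adv[keep]).argmax(1) == y[keep]).sum().item()
        total += keep.sum().item()
    return defended / max(total, 1)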
Second, we test our proposed method with different architectures on the CIFAR-10 dataset. We apply defective convolutional layers, in a way similar to the experiment above, to five popular network architectures: ResNet-18 (He et al., 2016), ResNet-50, DenseNet-121 (Huang et al., 2017), SENet-18 (Hu et al., 2017b) and VGG-19 (Simonyan & Zisserman, 2014). For each architecture, we replace the standard convolutional layers with the defective version on the bottom layers. We then test the black-box defense performance against transfer-based attacks on 5000 samples from the validation set. Adversarial examples are generated by PGD, which runs for 20 steps with step size 1, and the ℓ∞ perturbation scale is set to 16. More results on the MNIST dataset and illustrations of where to apply the defective layers can be found in Appendices A and C." }, { "heading": "4.1.2 EXPERIMENTAL RESULTS", "text": "First, we compare with two strong defense methods (Madry et al., 2017; Buckman et al., 2018) and two dropout methods (Tompson et al., 2015; Ghiasi et al., 2018). The results are listed in Table 2. Madry et al. (2017) propose an adversarial training method that directly optimizes on adversarial examples in an online way. Based on adversarial training, Buckman et al. (2018) proposed a method that discretizes inputs and achieved higher accuracy against transfer-based attacks. The results show the strengths of our proposed method in both robustness and generalization, even though our model is only trained on clean data. In addition, from the results of the two dropout methods, we can conclude that SpatialDropout and DropBlock do not improve the robustness of standard CNNs.\nSecond, we list the black-box defense results of applying defective convolutional layers to various architectures in Table 3. The results show that defective convolutional layers consistently improve the robustness of various network architectures against transfer-based attacks. In this paper, all successful defense rates except those in Table 2 are calculated on the adversarial examples whose corresponding original images can be classified correctly by the tested model." }, { "heading": "Architecture ResNet-18 ResNet-50 DenseNet-121 SENet-18 VGG-19 Test Accuracy", "text": "" }, { "heading": "4.2 ABLATION STUDIES", "text": "There are several design choices for the defective CNN, including the appropriate positions to apply defective convolutional layers, the benefit of breaking symmetry, the diversity introduced by randomness, and the extensibility of defective layers via structural adjustment. In this subsection, we conduct a series of comparative experiments and use black-box defense performance against transfer-based attacks as the evaluation criterion. In our experiments, we found that the performance is not sensitive to the choice of the source model used to attack and the target model used to defend. Without loss of generality, we only list the performance using DenseNet-121 as the source model and ResNet-18 as the target model on the CIFAR-10 dataset, and leave more experimental results to Appendix A.8. The results are listed in Table 4.\nDefective Layers on Bottom Layers vs. Top Layers. We apply defective layers with different keep probabilities to the bottom layers and the top layers of the original CNN, respectively. Comparing the results of the models with the same keep probability but different parts being masked, we find that applying defective layers to the bottom layers yields significantly higher successful defense rates. 
Moreover, applying defective layers only to the bottom layers achieves better performance than applying defective layers to both the bottom and top layers. The bottom layers mainly contribute to detecting edges and shape, while the receptive fields of neurons in the top layers are too large to respond to location-sensitive information. This corroborates the phenomena shown in Zeiler & Fergus (2014); Mordvintsev et al. (2015). Also, we find that the defense accuracy monotonically increases as the test accuracy decreases along with the keep probability (see the trend map in Appendix A.1). The appropriate value for the keep probability mainly depends on the relative importance of generalization and robustness.\nDefective Neuron vs. Defective Channel. As our method independently selects defective neurons on different channels in a layer, we break the symmetry of the original CNN structure. To see whether this asymmetric structure helps, we directly mask whole channels instead of neurons, using the same keep probability as the defective layer, and train the result to see the performance. This defective-channel method does not hurt the symmetry while leading to the same decrease in the number of convolutional operations. Table 4 shows that although our defective CNN suffers a small drop in test accuracy due to the low keep probability, it obtains a large gain in robustness compared with the defective-channel CNN.\nDefective Masks are Shared Among Channels or Not. The randomness in generating masks in different channels and layers allows each convolutional filter to focus on different input patterns. It also naturally involves various topological structures for local feature extraction instead of learning them (Dai et al., 2017; Chang et al., 2018). We show that generating various masks per layer is essential via experiments comparing against a method that randomly generates only one mask per layer and uses it in every channel. Table 4 shows that applying the same mask to each channel decreases the test accuracy. This may result from limited expressivity due to identical masks at every channel of the defective layer.\nIncrease the Number of Channels at Defective Layers. Although masking neurons does not reduce the parameters of the CNN, it reduces the number of convolutional operations and may decrease the expressive capacity of the CNN. To compensate for these defective positions, we increase the number of neurons at the defective layers by increasing the number of channels. Table 4 shows that increasing the channels does help the network with defective layers to obtain higher test accuracy while maintaining good robustness performance." }, { "heading": "5 CONCLUSION", "text": "In this paper, we introduce and experiment with defective CNNs, a modified version of existing CNNs that makes CNNs capture information beyond local textures, especially shape. We show that defective CNNs can achieve much better robustness while maintaining high test accuracy. More specifically, by using defective convolutional layers, we reach state-of-the-art performance against two transfer-based attack methods. Another insight resulting from our experiments is that the adversarial perturbations generated against defective CNNs can actually change the semantic information of images and may even fool humans. We hope that these findings bring more understanding of adversarial examples and the robustness of neural networks." 
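Before the appendices, we recap the mask variants compared in Section 4.2 with a small sketch (our own illustration; the variant names follow the table legend in Appendix A.8): per-neuron masks (the default), per-channel masks (DC), and a single spatial mask shared by all channels (SM).

import torch

def make_defective_mask(c, h, w, keep_prob, variant="neuron"):
    # Returns a fixed (1, C, H, W) binary mask for one defective layer.
    if variant == "neuron":   # independent Bernoulli per neuron, different across channels
        return torch.bernoulli(torch.full((1, c, h, w), keep_prob))
    if variant == "channel":  # DC: mask whole channels with the same keep probability
        ch = torch.bernoulli(torch.full((1, c, 1, 1), keep_prob))
        return ch.expand(1, c, h, w).clone()
    if variant == "shared":   # SM: one spatial mask reused by every channel of the layer
        m = torch.bernoulli(torch.full((1, 1, h, w), keep_prob))
        return m.expand(1, c, h, w).clone()
    raise ValueError(f"unknown variant: {variant}")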
}, { "heading": "A MORE EXPERIMENTAL RESULTS", "text": "" }, { "heading": "A.1 BLACK-BOX ATTACK WITH DIFFERENT KEEP PROBABILITIES", "text": "In this subsection, we show the trade-off between robustness and generalization performance in defective CNNs with different keep probabilities. We use DenseNet-121 (Huang et al., 2017) as the source model to generate adversarial examples from CIFAR-10 with PGD (Kurakin et al., 2016), which runs for 20 steps with step size 1 and perturbation scale 16. The defective convolutional layers are applied to the bottom layers of ResNet-18 (He et al., 2016). Figure 4 shows that the defense accuracy monotonically increases as the test accuracy decreases along with the keep probability. We can see the trade-off between robustness and generalization." }, { "heading": "A.2 TRANSFER-BASED ATTACK FROM ENSEMBLE MODELS ON CIFAR-10", "text": "In this subsection, we evaluate the defense performance of networks with defective convolutional layers against transfer-based attack from ensemble models on the CIFAR-10 dataset. We apply defective convolutional layers to five popular network architectures ResNet-18, ResNet-50 (He et al., 2016), DenseNet-121, SENet-18 (Hu et al., 2017a), VGG-19 (Simonyan & Zisserman, 2014), and test the black-box defense performance against transfer-based attacks from ensemble models on the CIFAR-10 dataset. For each architecture, we replace the standard convolutional layer with the defective version on the bottom layers of different architectures. Illustrations of defective layers applied to these network architectures can be found in Appendix C. We test the black-box defense performance against transfer-based attacks on 5000 samples from the validation set. Adversarial examples are generated by PGD, which runs for 7 steps with step size 2 and the `∞ perturbation scale is set to 8. We generate five ensemble models as the source model by fusing every four models in all five models.\nThe results can be found in Table 5. These results show that defective convolutional layers can consistently improve the black-box defense performance of various network architectures against transfer-based attacks from ensemble models on the CIFAR-10 dataset." }, { "heading": "A.3 TRANSFER-BASED ATTACK ON MNIST", "text": "In this subsection, we evaluate the defense performance of networks with defective convolutional layers against trasfer-based attack on the MINST dataset. We apply defective convolutional layers to five popular network architectures ResNet-18, ResNet-50, DenseNet-121, SENet-18, VGG-19, and test the black-box defense performance against transfer-based attacks on MNIST dataset. For each architecture, we replace the standard convolutional layer with the defective version on bottom layers of different architectures. Illustrations of defective layers applied to these network architectures can be found in Appendix C. We test the black-box defense performance against transfer-based attacks on 5000 samples from the validation set. Adversarial examples are generated by PGD which runs for 40 steps with step size 0.01× 255 and perturbation scale 0.3× 255. The results can be found in Table 6. These results show that defective convolutional layers can consistently improve the black-box defense performance of various network architectures against transfer-based attacks on the MNIST dataset." 
}, { "heading": "Architecture ResNet-18 ResNet-50 DenseNet-121 SENet-18 VGG-19 Test Accuracy", "text": "" }, { "heading": "A.4 DECISION-BASED ATTACK", "text": "In this subsection, we evaluate the defense performance of networks with defective convolutional layers against the decision-based attack. Decision-based attack performs based on the prediction of the model. It needs less information from the model and has the potential to perform better against adversarial defenses based on gradient masking. Boundary attack (Brendel et al., 2017) is one effective decision-based attack. The attack will start from a point that is already adversarial by applying a large scale perturbation to the original image and keep decreasing the distance between the original image and the adversarial example by random walks. After iterations, we will get the final perturbation, which has a relatively small scale. The more robust the model is, the larger the final perturbation will be.\nIn our experiments, we use the implementation of boundary attack in Foolbox (Rauber et al., 2017). It finds the adversarial initialization by simply adding large scale uniform noises on input images. We perform our method on ResNet-18 and test the performance on CIFAR-10 with 500 samples from the validation set. The 5-block structure of ResNet-18 is shown in Appendix Figure 2. The blocks are labeled 0, 1, 2, 3, 4 and the 0th block is the first convolution layer. We apply the defective layer structure with keep probability p = 0.1 to the bottom blocks (the 0th, 1st, 2nd blocks). For comparison, we implement label smoothing (Szegedy et al., 2016) with smoothing parameter = 0.1 on a standard ResNet-18. We compare with both standard CNN and label smoothing (Szegedy et al., 2016) which is known to be a gradient masking method (Papernot et al., 2016a). The median squared `2-distance of final perturbation across all samples proposed in Brendel et al. (2017) is used as our evaluation criterion. The score S(M) is defined in Equation (10), where PMi ∈ RN is the final perturbation that the Boundary attack finds on model M for the ith image. Before computing PMi , the images are normalized into [0, 1]N .\nS(M) = Median i\n( 1\nN ‖PMi ‖22\n) (10)\nFrom the results in Table 7, we point out that the gradient masking method does not increase the robustness against boundary attack. Our proposed method achieves significant improvement over the standard CNN." }, { "heading": "A.5 ADDITIVE GAUSSIAN NOISE", "text": "In this subsection, we evaluate the defense performance of networks with defective convolutional layers against additive Gaussian noise. Recently, Ford et al. (2019) bridge the adversarial robustness and corruption robustness, and points out that a successful adversarial defense method should also effectively defense against images with additive Gaussian noise. Also the Gaussian noises usually do not change the shape of objects, our models should have better defense performance. To see whether our structure is more robust in this setting, we feed input images with additive Gaussian noises to both standard and defective CNNs.\nTo obtain noises of scales similar to the adversarial perturbations, we generate i.i.d. Gaussian random variables x ∼ N(0, σ2), where σ ∈ {1, 2, 4, 8, 12, 16, 20, 24, 28, 32}, clip them to the range [−2σ, 2σ] and then add them to every pixel of the input image. For CIFAR-10, we add Gaussian noises to 5000 samples which are drawn randomly from the validation set and can be classified correctly by all the tested models. 
For CIFAR-10, we add Gaussian noise to 5000 samples that are drawn randomly from the validation set and can be classified correctly by all the tested models. We place the defective layers with keep probability p = 0.1 on ResNet-18 in the same way as we did in Section A.4.\nThe experimental results are shown in Figure 5. Standard ResNet-18 is still robust to small-scale Gaussian noise such as σ ≤ 8. After that, the performance of standard ResNet-18 begins to drop sharply as σ increases. In contrast, defective CNNs show far better robustness than the standard version. The defective ResNet-18 with keep probability 0.1 can maintain high accuracy until σ increases to 16 and shows a much slower downward trend as σ increases." }, { "heading": "A.6 WHITE-BOX ATTACK", "text": "In this subsection, we evaluate the defense performance of ResNet-18 with defective convolutional layers against white-box attacks on the CIFAR-10 dataset. With neither obfuscated gradients nor gradient masking, defective convolutional layers can still improve defense performance under various kinds of white-box attacks (see Table 8). The results on other network architectures are similar." }, { "heading": "A.7 GRAY-BOX ATTACK", "text": "In this subsection, we show the gray-box defense performance of defective CNNs on the CIFAR-10 dataset. We use gray-box attacks in the following two ways. One way is to generate adversarial examples against one trained neural network and test those images on a network with the same structure but different initialization. The other way is specific to our defective models. We generate adversarial examples on one trained defective CNN and test them on a network with the same keep probability but a different sampling of defective neurons. In both of these two ways, the adversary knows some information about the structure of the network but does not know its specific parameters.\nFrom the results listed in Table 9, we find that defective CNNs have similar performance on adversarial examples generated by our two kinds of gray-box attacks. This phenomenon indicates that defective CNNs with the same keep probability capture similar information that is insensitive to the selection of defective neurons. Also, compared with the gray-box performance of standard CNNs (see Table 10), defective CNNs show stronger defense ability." }, { "heading": "Architecture ResNet-18 DenseNet-121", "text": "" }, { "heading": "A.8 FULL INFORMATION ON EXPERIMENTS MENTIONED IN SECTION 4.2", "text": "In this subsection, we show more experimental results on defective CNNs using different adversarial examples, different attack methods, and different mask settings on ResNet-18. The networks used to generate adversarial examples include ResNet-18, ResNet-50, DenseNet-121, SENet-18, and VGG-19. More specifically, we choose 5000 samples to generate adversarial examples via FGSM and PGD, and 1000 samples for the CW attack. All samples are drawn from the validation set of the CIFAR-10 dataset and can be classified correctly by the model used to generate adversarial examples.\nFor FGSM, we try step sizes ∈ {8, 16, 32}, namely FGSM8, FGSM16, FGSM32, to generate adversarial examples. For PGD, we have tried more extensive settings. Let {ε, T, α} be the PGD setting with step size ε, number of steps T, and perturbation scale α; then we have tried PGD settings (1, 8, 4), (2, 4, 4), (4, 2, 4), (1, 12, 8), (2, 6, 8), (4, 3, 8), (1, 20, 16), (2, 10, 16), (4, 5, 16), (1, 40, 32), (2, 20, 32), (4, 10, 32) to generate PGD adversarial examples. From the experimental results, we observe the following phenomena. 
First, we find that the larger the perturbation scale is, the stronger the adversarial examples are. Second, for a fixed perturbation scale, the smaller the step size is, the more successful the attack is, as it searches for adversarial examples more carefully around the original image. Based on these observations, we only show strong PGD attack results in the Appendix, namely the settings (1, 20, 16) (PGD16), (2, 10, 16) (PGD2,16) and (1, 40, 32) (PGD32). Nonetheless, our models also perform much better against weak PGD attacks. For the CW attack, we have also tried different confidence parameters κ. However, we find that for large κ, it is hard for the algorithm to find adversarial examples for some neural networks, such as VGG, because of their logit scale. For smaller κ, the adversarial examples have weak transferability, which means they can be easily defended against even by standard CNNs. Therefore, in order to balance these two factors, we choose κ = 40 (CW40) for DenseNet-121, ResNet-50, SENet-18 and κ = 20 (CW20) for ResNet-18 to compare our models with standard ones. The step number for choosing the parameter c is set to 30.\nNote that the noise of FGSM and PGD is measured in the sense of the ℓ∞ norm and the noise of CW is measured in the sense of the ℓ2 norm. All adversarial examples used for evaluation can fool the original network. Tables 11, 12, 13, 14 and 15 list our experimental results. DC means we replace defective neurons with defective channels in the corresponding blocks to achieve the same keep probability. SM means we use the same defective mask on all the channels in a layer. ×n means we multiply the number of channels in the defective blocks by n. EN means we ensemble five models with different defective masks of the same keep probability." }, { "heading": "B ADVERSARIAL EXAMPLES GENERATED BY DEFECTIVE CNNS", "text": "" }, { "heading": "B.1 ADVERSARIAL EXAMPLES THAT CAN FOOL HUMANS", "text": "In this subsection, we show more adversarial examples generated by defective CNNs. Figure 6 shows some adversarial examples generated on the CIFAR-10 dataset along with the corresponding original images. These examples are generated from CIFAR-10 against a defective ResNet-18 with keep probability 0.2 on the 0th, 1st, 2nd blocks, a defective ResNet-18 with keep probability 0.1 on the 1st, 2nd blocks, and a standard ResNet-18. We use the MIFGSM attack with perturbation scales α = 16 and α = 32. We also show some adversarial examples generated from Tiny-ImageNet3 along with the corresponding original images in Figure 7. These examples are generated from Tiny-ImageNet against a defective ResNet-18 with keep probability 0.1 on the 1st, 2nd blocks and a standard ResNet-18. The attack methods are MIFGSM with scales 64 and 32, step size 1, and step numbers 40 and 80, respectively.\nThe adversarial examples generated by defective CNNs exhibit more of the semantic shapes of their fooled classes, such as the mouth of the frog in Figure 6. This also corroborates the point made in Tsipras et al. (2018) that more robust models are more aligned with human perception.\n3https://tiny-imagenet.herokuapp.com/" }, { "heading": "B.2 RANDOMLY SELECTED ADVERSARIAL EXAMPLES", "text": "See Figure 15 for a randomly sampled set of images from Tiny-ImageNet along with the corresponding adversarial examples generated against standard and defective ResNet-18 under the same attack setting." 
}, { "heading": "C ARCHITECTURES", "text": "In this subsection, we briefly introduce the network architectures used in our experiments. Generally, we apply defective convolutional layers to the bottom layers of the networks and we have tried six different architectures, namely ResNet-18, ResNet-50, DenseNet-121, SENet-18, VGG-19 and WideResNet-32. We next illustrate these architectures and show how we apply defective convolutional layers to them. In our experiments, applying defective convolutional layers to a block means randomly selecting defective neurons in every layer of the block.\nC.1 RESNET-18\nResNet-18 (He et al., 2016) contains 5 blocks: the 0th block is one single 3× 3 convolutional layer, and each of the rest contains four 3 × 3 convolutional layers. Figure 8 shows the whole structure of ResNet-18. In our experiments, we apply defective convolutional layers to the 0th, 1st, 2nd blocks which are the bottom layers.\nC.2 RESNET-50\nSimilar to ResNet-18, ResNet-50 (He et al., 2016) contains 5 blocks and each block contains several 1 × 1 and 3 × 3 convolutional layers (i.e. Bottlenecks). In our experiment, we apply defective convolutional layers to the 3× 3 convolutional layers in the first three bottom blocks. The defective layers in the 1st block are marked by the red arrows in Figure 9." }, { "heading": "C.3 DENSENET-121", "text": "DenseNet-121 (Huang et al., 2017) is another popular network architecture in deep learning researches. Figure 10 shows the whole structure of DenseNet-121. It contains 5 Dense-Blocks, each of which contains several 1× 1 and 3× 3 convolutional layers. Similar to what we do for ResNet-50, we apply defective convolutional layers to the 3 × 3 convolutional layers in the first three “bottom” blocks. The growth rate is set to 32 in our experiments.\nC.4 SENET-18\nSENet (Hu et al., 2017a), a network architecture which won the first place in ImageNet contest 2017, is shown in Figure 11. Note that here we use the pre-activation shortcut version of SENet-18 and we apply defective convolutional layers to the convolutional layers in the first 3 SE-blocks.\nC.5 VGG-19\nVGG-19 (Simonyan & Zisserman, 2014) is a typical neural network architecture with sixteen 3× 3 convolutional layers and three fully-connected layers. We slightly modified the architecture by replacing the final 3 fully connected layers with 1 fully connected layer as is suggested by recent architectures. Figure 12 shows the whole structure of VGG-19. We apply defective convolutional layers on the first four 3× 3 convolutional layers." }, { "heading": "C.6 WIDERESNET-32", "text": "Based on residual networks, Zagoruyko & Komodakis (2016) proposed a wide version of residual networks which have much more channels. In our experiments, we adopt the network with a width factor of 4 and apply defective layers on the 0th and 1st blocks. Figure 13 shows the whole structure of WideResNet-32. D TRAINING PROCESS ON CIFAR-10 AND MNIST\nTo guarantee our experiments are reproducible, here we present more details on the training process in our experiments. When training models on CIFAR-10, we first subtract per-pixel mean. Then we apply a zero-padding of width 4, a random horizontal flip and a random crop of size 32× 32 on train data. No other data augmentation method is used. We apply SGD with momentum parameter 0.9, weight decay parameter 5× 10−4 and mini-batch size 128 to train on the data for 350 epochs. The learning rate starts from 0.1 and is divided by 10 when the number of epochs reaches 150 and 250. 
When training models on MNIST, we first subtract the per-pixel mean. Then we apply a random horizontal flip to the training data. We apply SGD with momentum parameter 0.9, weight decay parameter 5 × 10−4, and mini-batch size 128 to train on the data for 50 epochs. The learning rate starts from 0.1 and is divided by 10 when the number of epochs reaches 20 and 40. Figure 14 shows the train and test curves of standard and defective ResNet-18 on CIFAR-10 and MNIST. Different network structures share a similar tendency in their train and test curves." }, { "heading": "E ATTACK APPROACHES", "text": "In this subsection, we describe the attack approaches used in our experiments. We first give an overview of how to attack a neural network in mathematical notation. Let x be the input to the neural network and fθ be the function that represents the neural network with parameters θ. The output label of the network for the input can be computed as c = argmax_i fθ(x)_i. In order to perform an adversarial attack, we add a small perturbation δx to the original image and get an adversarial image x_adv = x + δx. The new input x_adv should look visually similar to the original x. Here we use the commonly used ℓ∞-norm metric to measure similarity, i.e., we require that ||δx||∞ ≤ ε. The attack is considered successful if the predicted label of the perturbed image, c_adv = argmax_i fθ(x_adv)_i, is different from c.\nGenerally speaking, there are two types of attack methods: Targeted Attack, which aims to change the output label of an image to a specific (and different) one, and Untargeted Attack, which only aims to change the output label and does not restrict which specific label the network should output for the modified example.\nIn this paper, we mainly use the following gradient-based attack approaches. J denotes the loss function of the neural network and y denotes the ground-truth label of x.\n• Fast Gradient Sign Method (FGSM). FGSM (Goodfellow et al., 2015) is a one-step untargeted method that generates the adversarial example x_adv by adding the sign of the gradients, multiplied by a step size ε, to the original benign image x. Note that FGSM controls the ℓ∞-norm between the adversarial example and the original one by the parameter ε.\nx_adv = x + ε · sign(∇_x J(x, y)).\n• Basic iterative method (PGD). PGD (Kurakin et al., 2016) is a multiple-step attack method that applies FGSM multiple times. To make the adversarial example stay close to the original image, the image is projected to the ℓ∞-ball centered at the original image after every step. The radius of the ℓ∞-ball is called the perturbation scale and is denoted by α.\nx_adv^0 = x,  x_adv^{k+1} = Clip_{x,α}[ x_adv^k + ε · sign(∇_x J(x_adv^k, y)) ].\n• Momentum Iterative Fast Gradient Sign Method (MIFGSM). MIFGSM (Dong et al., 2017) is a recently proposed multiple-step attack method. It is similar to PGD, but it computes the update direction using a momentum term instead of the raw gradients. The radius of the ℓ∞-ball is also called the perturbation scale and is denoted by α.\ng^{k+1} = µ · g^k + ∇_x J(x_adv^k, y) / ||∇_x J(x_adv^k, y)||_1,\nx_adv^0 = x,  g^0 = 0,  x_adv^{k+1} = Clip_{x,α}[ x_adv^k + ε · sign(g^{k+1}) ]." } ]
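The MIFGSM update in Appendix E translates directly into code; a minimal sketch (our own, with ε as the step size and α as the perturbation scale, matching the notation above):

import torch
import torch.nn.functional as F

def mifgsm_attack(model, x, y, eps=1/255, alpha=16/255, steps=40, mu=1.0):
    # Momentum Iterative FGSM: L1-normalized gradients are accumulated into a momentum
    # term g, and sign(g) steps of size eps are clipped to an L_inf ball of radius alpha.
    x_adv, g = x.clone().detach(), torch.zeros_like(x)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
        x_adv = x_adv.detach() + eps * g.sign()
        x_adv = torch.max(torch.min(x_adv, x + alpha), x - alpha).clamp(0, 1)
    return x_adv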
2019
null
SP:ff1a7f2310f3d3c647ede8e418dcc104b9da3e2b
[ "This paper proposes a pretraining technique for question generation, where an answer candidate is chosen beforehand, and the objective is to predict the answer containing sentence given a paragraph excluding this sentence and the target answer candidate. The intuition of this method is that question generation requires to generate a sentence which contains the information about the answer while being conditioned on the given paragraph. In particular, the paper compares its approach to Devlin’s presentation (https://nlp.stanford.edu/seminar/details/jdevlin.pdf according to the references; is it not a published work?) which uses next sentence generation for pretraining, that is less related to the downstream question generation task.", "The paper in the field of machine reading comprehension. The authors address the issue of generating labeled data of question-answer tuples, without the need of manual annotation. Specifically, the authors propose a method that dynamically generates K answers given a paragraph in order to generate diverse questions and, secondly, pre-training the question generator on answers in a sentence generation task. The authors then show that this method is superior to existing baseline methods." ]
Most machine reading comprehension (MRC) datasets involve manual annotation, requiring enormous human effort; hence the size of MRC data remains significantly smaller than that of the corpora available for unsupervised learning, limiting generalization capability. To overcome this issue, a new approach, which can generate synthetic question-and-answer data from large corpora such as Wikipedia, has been recently proposed. Such synthetic data can be utilized as additional data to pre-train the main MRC model before fine-tuning it using real, existing MRC data. However, the quality of generated questions and answers is still far from satisfactory, so previous work introduced a pre-training technique for the question generator based on generating the next sentence that follows a paragraph. However, the next sentence that follows a paragraph may have little relevance to the questions or answers from within the paragraph, and thus it is not the ideal candidate for pre-training question generation. In response, we propose a novel method called Answer-containing Sentence Generation (ASGen). Our approach is composed of multiple stages, involving two advanced techniques: (1) dynamically determining K answers from a given document and (2) pre-training the question generator on the task of generating the answer-containing sentence. We evaluate the question generation capability of our method by comparing the BLEU score with existing methods, and we test our method by fine-tuning the MRC model on the downstream MRC data after training on synthetic data. Experimental results show that our approach outperforms existing methods, achieving new state-of-the-art results on SQuAD question generation, and increases the performance of state-of-the-art MRC models across a range of datasets such as SQuAD-v1.1, SQuAD-v2.0, KorQuAD, and QUASAR-T with no architectural modifications to the original MRC model.
[ { "affiliations": [], "name": "QUESTION ANSWERING" } ]
[ { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "In Proceedings of the conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT),", "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "Bert: Pretraining of deep bidirectional transformers for language understanding", "venue": "URL https://nlp.stanford.edu/seminar/details/jdevlin.pdf,", "year": 2019 }, { "authors": [ "Bhuwan Dhingra", "Kathryn Mazaitis", "William W Cohen" ], "title": "Quasar: Datasets for question answering by search and reading", "venue": "arXiv preprint arXiv:1707.03904,", "year": 2017 }, { "authors": [ "Li Dong", "Nan Yang", "Wenhui Wang", "Furu Wei", "Xiaodong Liu", "Yu Wang", "Jianfeng Gao", "Ming Zhou", "Hsiao-Wuen Hon" ], "title": "Unified language model pre-training for natural language understanding and generation", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Xinya Du", "Junru Shao", "Claire Cardie" ], "title": "Learning to ask: Neural question generation for reading comprehension", "venue": "In Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL),", "year": 2017 }, { "authors": [ "David Golub", "Po-Sen Huang", "Xiaodong He", "Li Deng" ], "title": "Two-stage synthesis networks for transfer learning in machine comprehension", "venue": "In Proceedings of the conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2017 }, { "authors": [ "Yanghoon Kim", "Hwanhee Lee", "Joongbo Shin", "Kyomin Jung" ], "title": "Improving neural question generation using answer separation", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI),", "year": 2019 }, { "authors": [ "Taku Kudo" ], "title": "Mecab: Yet another part-of-speech and morphological analyzer", "venue": "http://mecab. sourceforge. 
jp,", "year": 2006 }, { "authors": [ "Seungyoung Lim", "Myungji Kim", "Jooyoul Lee" ], "title": "Korquad1.0: Korean qa dataset for machine reading comprehension", "venue": "arXiv preprint arXiv:1909.07005,", "year": 2019 }, { "authors": [ "Matthew E Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": "In Proceedings of the conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT),", "year": 2018 }, { "authors": [ "Alec Radford", "Karthik Narasimhan", "Tim Salimans", "Ilya Sutskever" ], "title": "Improving language understanding by generative pre-training", "venue": "URL https://s3-us-west-2.amazonaws.com/openaiassets/researchcovers/languageunsupervised/language understanding paper.pdf,", "year": 2018 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang" ], "title": "Squad: 100,000+ questions for machine comprehension of text", "venue": "In Proceedings of the conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2016 }, { "authors": [ "Pranav Rajpurkar", "Robin Jia", "Percy Liang" ], "title": "Know what you don’t know: Unanswerable questions for squad", "venue": "In Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL),", "year": 2018 }, { "authors": [ "Linfeng Song", "Zhiguo Wang", "Wael Hamza" ], "title": "A unified query-based generative model for question generation and question answering", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI),", "year": 2018 }, { "authors": [ "Sandeep Subramanian", "Tong Wang", "Xingdi Yuan", "Saizheng Zhang", "Yoshua Bengio", "Adam Trischler" ], "title": "Neural models for key phrase extraction and question generation", "venue": "In Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL),", "year": 2018 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2017 }, { "authors": [ "Yao Zhao", "Xiaochuan Ni", "Yuanyuan Ding", "Qifa Ke" ], "title": "Paragraph-level neural question generation", "venue": null, "year": 2019 }, { "authors": [ "Zhao" ], "title": "BLEU-4 score on Test-Split3 by 1.3 (w.r.t the reproduced score). C STANDARD ERRORS OF EVALUATION IN DOWNSTREAM MRC TASKS As shown in Table 9, in the case of downstream MRC results (EM/F1) which we dicussed in Section 4, for SQuAD v1.1 and SQuAD v2.0, we selected 5 model checkpoints from the same pre", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Machine reading comprehension (MRC), which finds an answer to a given question from given paragraphs called context, is an essential task in natural language processing. With the use of high-quality human-annotated datasets for this task, such as SQuAD-v1.1 (Rajpurkar et al., 2016), SQuAD-v2.0 (Rajpurkar et al., 2018), and KorQuAD (Lim et al., 2019), researchers have proposed MRC models, often surpassing human performance on these datasets. These datasets commonly involve finding a short snippet within a paragraph as an answer to a given question.\nHowever, these datasets require a significant amount of human annotation to create pairs of a question and its relevant answer from a given context. Often the size of the annotated data is relatively small compared to that of data used in other unsupervised tasks such as language modeling. Hence, researchers often rely on the two-phase training method of transfer learning, i.e., pre-training the model using large corpora from another domain in the first phase, followed by fine-tuning it using the main MRC dataset in the second phase.\nMost state-of-the-art models for MRC tasks involve such pre-training methods. Peters et al. (2018) present a bidirectional contextual word representation method called ELMo, which is pre-trained on a large corpus, and its learned contextual embedding layer has been widely adapted to many\nother MRC models. Devlin et al. (2019a) show that pre-training with a masked language model on a large corpus and then fine-tuning on a downstream dataset results in significant performance improvements.\nHowever, pre-training on another domain task and then fine-tuning on a downstream task may suffer from performance degradation, depending on which pre-training task is used in the first phase. For example, Yang et al. (2019) show that the pre-training task of next sentence classification decreases performance on the downstream MRC tasks. To handle this problem, generating synthetic data similar to the those of a downstream task is crucial to obtain a properly pre-trained model. Recently, researchers have studied a model for generating synthetic MRC data from large corpora such as Wikipedia. This is essentially a form of transfer learning, by training a generation model and using this model to create synthetic data for training the MRC model, before fine-tuning on the downstream MRC dataset.\nGolub et al. (2017) suggest a two-stage synthesis network that decomposes the process of generating question-answer pairs into two steps, generating a fixed number (K) of answers conditioned on the paragraph, and question generation conditioned on the paragraph and the generated answer. Devlin et al. (2019b) introduced a pre-training technique for the question generator of this method by pretraining on the generation of next-sentence that follows the paragraph.\nHowever, choosing a fixed number (K) of candidate answers from each paragraph will lead to missing candidates if K is too small, and will lead to having lower-quality candidates if K is too big. Moreover, the next sentence generation task is not conditioned on the answer, despite the answer being a strong conditional restriction for question generation task. 
Also, the next sentence that follows a paragraph may have little relevance to the questions or answers from within the paragraph, and hence is not the ideal candidate for pre-training question generation.\nTo address these issues, we propose Answer-containing Sentence Generation (ASGen), a novel method for a synthetic data generator with two novel processes: (1) dynamically predicting K answers to generate diverse questions, and (2) pre-training the question generator on an answer-containing sentence generation task. We evaluate the question generation capability of our method by comparing the BLEU score with existing methods, and we test our method by fine-tuning the MRC model on downstream MRC datasets after training on the generated data. Experimental results show that our approach outperforms existing generation methods, increasing the performance of state-of-the-art MRC models across a wide range of MRC datasets such as SQuAD-v1.1, SQuAD-v2.0, KorQuAD, and QUASAR-T (Dhingra et al., 2017) without any architectural modifications to the MRC model." }, { "heading": "2 PROPOSED METHOD", "text": "This section discusses the details of our proposed ASGen method. ASGen consists of a BERT-based generative model (BertGen) and answer-containing sentence generation pre-training (AS). First, we describe how the BertGen model generates synthetic data from Wikipedia. Next, we explain the novel components of our method and how we pre-train the question generator in BertGen based on them. BertGen encodes paragraphs in Wikipedia with two separate generation networks, the answer generator and the question generator.\nAnswer Generator. As shown in Fig. 2-(1), we generate the number of answer candidates K for a given context, without the question, by applying a fully connected feed-forward layer on the contextual embedding of the classification token "[CLS]". To produce the contextual embeddings and to predict answer spans, we utilize a BERT (Devlin et al., 2019a) encoder (Fig. 2-BERT Encoder-A). Depending on the predicted number K, we select the K top candidate answer spans from the context. As shown in Fig. 2-(2), we use the K selected candidate answer spans as input to the question generator. (A code sketch of this top-K span selection is given at the end of this section.)\nQuestion Generator. Next, as shown in Fig. 2-(2), we generate a question conditioned on each answer predicted by the answer generator. Specifically, we pass as input to a BERT encoder the context and an indicator for the answer span location in the context (Fig. 2-BERT Encoder-Q). Next, a Transformer decoder (Vaswani et al., 2017) generates the question word-by-word based on the encoded representation of the context and the answer span. For pre-training such a question generator on an answer-containing sentence generation task, we exclude the answer-containing sentence from the original context and train the model to generate the excluded sentence given the modified context and the answer span as input.\nFinally, we generate questions and answers from a large corpus, e.g., all the paragraphs in Wikipedia in this paper. After generating such data, we train the MRC model on the generated data in the first phase and then fine-tune it on the downstream MRC dataset (such as SQuAD) in the second phase. In this paper, we use BERT as the default MRC model, since it exhibits state-of-the-art performance on many MRC datasets." 
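To make the answer generator's selection of the K top candidate spans concrete (it is formalized in Section 2.1 below), here is a minimal sketch. The NumPy interface, the score-array names, and the maximum span length are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def top_k_answer_spans(start_scores, end_scores, k, max_span_len=10):
    """Select the k highest-scoring (start, end) answer spans.

    start_scores: (T,) array, score s_i for a span starting at token i.
    end_scores:   (T, T) array, score e_{i,j} for a span (i, j); kept 2-D
                  because in the paper e_{i,j} is conditioned on i.
    Returns a list of k (start, end) index pairs, best first.
    Illustrative sketch only, not the authors' code.
    """
    T = start_scores.shape[0]
    candidates = []
    for i in range(1, T):                        # position 0 is [CLS]
        for j in range(i, min(i + max_span_len, T)):
            candidates.append((start_scores[i] + end_scores[i, j], i, j))
    candidates.sort(key=lambda c: c[0], reverse=True)
    return [(i, j) for _, i, j in candidates[:k]]

# Toy usage: 6 tokens, request the 2 best spans.
rng = np.random.default_rng(0)
s = rng.normal(size=6)
e = rng.normal(size=(6, 6))
print(top_k_answer_spans(s, e, k=2))
```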
}, { "heading": "2.1 DYNAMIC ANSWER PREDICTION", "text": "The most natural method for humans to create a question-answer pair from a given context is to select the answer first and then create a corresponding question. In this situation, we conjecture that a human is more likely to choose as an answer a phrase that is “answer-like”, such as keyphrases, nouns, dates, names, etc. There may be several answers in the context that are likely to be selected by humans as answers, especially if the context is lengthy or if it contains multiple nouns, dates, names, etc.\nFor example, the context “Barack Hussein Obama II is an American attorney and politician who served as the 44th president of the United States from 2009 to 2017” can have as possible answers “Barack Hussein Obama”, “44th”, “United States”, “2009 to 2017”, etc. As shown in Fig. 4, to see these characteristics, we examine the distribution of the number of answers in the SQuAD dataset and hypothesize that there exists an underlying pattern in the number of answers that occur in a context. The conventional method to generate multiple answers from a context is to draw a fixed number (K) of answers. However, this approach can generate low-quality answers if K is too big, and it can impact the number and diversity of the generated answers if K is too small.\nTherefore, we predict the number of answers K in a given context W = {wt}T0 using regression as,\n{wenct }Tt=0 = BERT Encoder-A(W)t, K = bfk(wenc0 )c,\nwhere T is the number of word tokens in the context with position 0 reserved for classification token ‘[CLS]’, and fk represents a fully connected unit with two hidden layers that have hidden dimensions equal to H and 1, respectively, where H is the hidden dimension of BERT Encoder-A.\nTo calculate the score si for start index i of a predicted answer span, we compute the dot product of the encoder output with a trainable start vector S. For each start index i, we calculate the span end index score ei,j for end index j in a similar manner with a trainable end vector E, but conditioned on i, i.e.,\n{si}Ti=0 = S ◦wenci {ei,j}T,Ti,j=0 = E ◦ fs(w enc j ⊕wenci ),\nwhere fs represents a fully connected layer with hidden dimension H and ⊕ indicates the concatenation operation. For training, we use the mean squared error loss between K and ground-truth number of answers. We also use cross-entropy loss on the si,ei,j and ground truth start/end of the answer span for each token. Predicting the number of answers and predicting the span are jointly trained to minimize the sum of their respective losses.\nDuring inference, we choose the K top answer spans with the highest score summation of start index score and end index score, i.e.,\nAspan = {(i, j) | 1 ≤ i < T and i ≤ j < T}, ak = max({a | #{(i, j) | (i, j) ∈ Aspan and si + ei,j ≥ a} = K}), Aspank = {(i, j) | (i, j) ∈ A span and si + ei,j ≥ ak}.\nThe K selected answer spans Aspank are then given to the question generator as input in the form of an indication of the answer span location." }, { "heading": "2.2 PRE-TRAINING QUESTION GENERATOR", "text": "In order to generate questions conditioned on different answers that may arise in a context, we generate a question for each of the K answers. Devlin et al. (2019b) previously proposed to pre-train this generation model with an unsupervised task that generates the next sentence following a given paragraph to improve generation performance. We identify several issues with this approach. 
The final question generation task has the form of sentence generation given an answer and a context, while the next-sentence generation task has no answer component. The next-sentence generation task is not conditioned on the answer, despite the answer being a strong conditional constraint for the question generation task. Also, the next sentence that follows a paragraph may have little relevance to the questions or answers from within the paragraph, and hence is not the ideal candidate for pre-training question generation.\nTo address these issues, we modify the context to exclude the sentence containing our previously generated answer and pre-train our generator on the task of generating this excluded answer-containing sentence, conditioned on the answer and the modified context.\nSpecifically, we exclude the answer-containing sentence $S^{ans}$ while leaving the answer, and modify the original context D to $D^{ans}$ as\n$$S^{start} = \{p \mid p \text{ is a sentence start index}\},$$\n$$S^{ans} = \{(p, q, i, j) \mid p = \max(\{p' \in S^{start} \mid p' \le i\}),\; q = \min(\{q' \in S^{start} \mid q' \ge j\}),\; (i, j) \in A_k^{span}\},$$\n$$D^{ans} = [D[:p];\, D[i:j];\, D[q:]], \quad (p, q, i, j) \in S^{ans}.$$\nNote that we change $S^{ans}$ so that it does not exclude the answer-containing sentence in the case of fine-tuning on question generation, i.e.,\n$$S^{ans} = \{(p, q, i, j) \mid p = i,\; q = j\}.$$\nAfterwards, we pass the previously generated answer to the sequence-to-sequence generation model as a segmentation encoding $M^{ans}$ that identifies the answer part within the context, i.e.,\n$$M^{ans} = [m_0 * p;\; m_1 * (j - i);\; m_0 * (T - q)], \quad (p, q, i, j) \in S^{ans},$$\nwhere $m_0$ and $m_1$ indicate trainable vectors corresponding to segmentation ids 0 and 1, respectively. Here we tag the segmentation id for each word in the context as 0 and each word in the answer as 1. $A * B$ indicates the operation of concatenating vector A for B many times. Next, we generate the answer-containing sentence embedding $W^g = \{w_t^g\}_{t=0}^{T}$ using a Transformer sequence-to-sequence model (the encoder part is initialized with BERT) as\n$$w_t^g = \text{Transformer Decoder}\big(\{w_i^g\}_{i=0}^{t-1},\; \text{BERT Encoder-Q}(D^{ans}, M^{ans})\big).$$\nFinally, we calculate the loss of the generation model with cross-entropy over the generated sentence words, i.e.,\n$$\{w_t^o\}_{t=0}^{T} = \{\text{Softmax}(w_t^g E)\}_{t=0}^{T},$$\n$$L = -\Big( \sum_{t=1}^{T} \sum_{i=1}^{D} y_{t,i} \log(w_{t,i}^o) + (1 - y_{t,i}) \log(1 - w_{t,i}^o) \Big) / T,$$\nwhere y indicates a ground-truth one-hot vector of the answer-containing sentence word (the question word in the case of fine-tuning), D is the vocabulary size, and $E \in \mathbb{R}^{d \times D}$ represents a word embedding matrix shared between BERT Encoder-Q and the Transformer decoder.\nIn this manner, we pre-train the question generation model using a task similar to the final task of conditionally generating the question from a given answer and a context." }, { "heading": "3 EXPERIMENTAL SETUP", "text": "Pre-training Dataset. To build the dataset for the answer-containing sentence generation task (AS) and the synthetic MRC data for pre-training the downstream MRC model, we collect all paragraphs from the entire English Wikipedia dump (the Korean Wikipedia dump for KorQuAD) and synthetically generate questions and answers on these paragraphs. We apply extensive filtering and cleanup to only retain high-quality collected paragraphs from Wikipedia. Detailed pre-processing steps for obtaining the final Wikipedia dataset can be found in the supplemental material.\nUsing the answer generator in ASGen (BertGen+AS), we generate 43M answer-paragraph pairs (Full-Wiki) from the final Wikipedia dataset for pre-training on answer-containing sentence generation. 
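Concretely, each of these AS pre-training examples pairs a modified context (with the answer-containing sentence removed but the answer kept, together with segmentation ids marking the answer) with the excluded sentence as the generation target, as described in Section 2.2. The following is a minimal sketch of this construction; the toy tokenization, sentence-boundary handling, and all names are assumptions, not the authors' code.

```python
def make_as_example(tokens, sent_starts, ans_start, ans_end):
    """Build one answer-containing sentence generation (AS) example.

    tokens:      list of word tokens for the context D.
    sent_starts: sorted token indices where sentences begin (S^start).
    (ans_start, ans_end): answer span (i, j), end exclusive.
    Returns (modified_context D^ans, segment_ids M^ans, target sentence).
    Illustrative sketch only.
    """
    # p: start of the answer-containing sentence; q: start of the next one.
    p = max(s for s in sent_starts if s <= ans_start)
    q = min([s for s in sent_starts if s >= ans_end] + [len(tokens)])
    # D^ans: drop the answer-containing sentence but keep the answer itself.
    modified = tokens[:p] + tokens[ans_start:ans_end] + tokens[q:]
    # M^ans: segment id 1 marks the kept answer tokens, 0 everything else.
    segment_ids = [0] * p + [1] * (ans_end - ans_start) + [0] * (len(tokens) - q)
    target = tokens[p:q]          # the excluded sentence to be generated
    return modified, segment_ids, target

# Toy usage.
toks = "He was born in 1990 . He lives in Paris . He codes .".split()
starts = [0, 6, 11]               # token indices of sentence starts
ctx, segs, tgt = make_as_example(toks, starts, ans_start=4, ans_end=5)
print(ctx, segs, tgt, sep="\n")
```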
For ablation studies on pre-training approaches, we also sample 2.5M answer-paragraph pairs (Small-Wiki) from Full-Wiki and 25K answer-paragraph pairs (Test-Wiki) to evaluate the pre-training method. Finally, using the question generator in ASGen (BertGen+AS), we generate one question for each answer-paragraph pair in Full-Wiki and create the final synthetic MRC data containing 43M triples of a paragraph, its question, and its answer.\nBenchmark Datasets. In most MRC datasets, a question and a context are represented as a sequence of words, and the answer span (indices of start and end words) is annotated from the context words based on the question. Among these datasets, we choose SQuAD as the primary benchmark dataset for question generation, since it is the most popular human-annotated MRC dataset. SQuAD-v1.1 (Rajpurkar et al., 2016) consists of crowd-sourced questions and answers based on contexts from Wikipedia articles. We compare our question generation capability with existing question generation methods such as UniLM (Dong et al., 2019). For fair comparison, we split the training set of SQuAD-v1.1 data into our own training and test sets, and keep the original development set as our dev set, as previously done in Du et al. (2017), Kim et al. (2019), and Dong et al. (2019). We call this dataset Test Split1. We also evaluate on the reversed dev-test split, called Test Split2.\nTo evaluate the effect of generated synthetic MRC data, we evaluate the fine-tuned MRC model on the downstream MRC dataset after training on the generated synthetic data. We perform this on SQuAD-v1.1 and SQuAD-v2.0 (Rajpurkar et al., 2018). We also evaluate on KorQuAD (Lim et al., 2019), which is another dataset created with the same procedure as SQuAD-v1.1 for the Korean language. To show that our generated data is useful for other MRC datasets, we fine-tune and test the MRC model on QUASAR-T (Dhingra et al., 2017), a large-scale MRC dataset, after training on the synthetic data generated from SQuAD-v1.1.\nImplementation Details. For the answer generator, we use BERT (Devlin et al., 2019a) and two fully connected layers to predict the number of answers K. For the BertGen model, we use pre-trained uncased BERT (Base) as the encoder and 12 layers of Transformer as the decoder. For the generation of unanswerable questions as in SQuAD-v2.0, we separate unanswerable cases and answerable cases and train separate generation models. For the final MRC model, we use BERT (Large), which is the state-of-the-art model on multiple datasets, with all official hyper-parameters. We use the MeCab (Kudo, 2006) tokenizer for Korean to separate postposition words, which do not exist in English.\nComparison of the Pre-training Method. We compare our question generation pre-training method, which pre-trains on the answer-containing sentence generation task (AS), with a method from Devlin et al. (2019b), which pre-trains on the next-sentence generation task (NS), and with a method from Golub et al. (2017), which only trains question generation on the final MRC dataset. We reproduced these methods on BertGen as they were described in their original work for comparison. Note that 'BertGen+AS' is equivalent to 'ASGen'. We generate synthetic data from Wikipedia using these approaches, which are trained on the target downstream MRC datasets except for QUASAR-T. In the case of QUASAR-T, we use synthetic data generated by ASGen trained on SQuAD-v1.1. 
To check the effectiveness of our method on downstream MRC tasks, we evaluate our generated data on SQuAD-v1.1, SQuAD-v2.0, KorQuAD, and QUASAR-T by training state-of-the-art models (BERT and BERT+CLKT) on the generated data, followed by fine-tuning on the train set for each dataset. (The 'BERT+CLKT with ASGen' model can be found as 'BERT-CLKT-MIDDLE' on the leaderboard, https://korquad.github.io/KorQuad%201.0.) The structure of the 'BERT+CLKT' model is the same as that of the original BERT except that the model is pre-trained for the Korean language. Due to the absence of a common pre-trained BERT for Korean, we used this model as a baseline to demonstrate the effectiveness of our method." }, { "heading": "4 QUANTITATIVE RESULTS", "text": "Dynamic Answer Prediction. We conducted an experiment to demonstrate the performance of our method in generating the number of answers in a given context. As shown in Table 1, in the case of fixed K, the mean absolute error from the ground-truth K_gt is the smallest at K_pred = 5, and the values are 1.92 and 0.99 for Test Split1 and Test Split2, respectively. (We use the identical splitting of SQuAD data provided by UniLM from its publicly available website, https://github.com/microsoft/unilm.) Thresholding on the sum of the start and end logits with a fixed threshold value that minimizes the mean absolute error results in an error of 2.31 and 1.12, respectively, in the two splits. In contrast, our answer generator generates a more appropriate number of answers than the fixed-K approach, reducing the mean absolute error between the ground-truth K_gt and the prediction K_pred to 1.24 and 0.76, respectively, for the two splits.\nQuestion Generation. To evaluate our question generator, we fine-tune the model on both Test Split1 and Test Split2, after pre-training answer-containing sentence generation on Full-Wiki. As shown in Table 2, ASGen outperforms existing methods by 0.9 BLEU-4 score on Split2, 24.7 for ASGen vs. 23.8 for UniLM. Moreover, our final question generation model, ASGen (Large), outperforms existing methods by a large margin in BLEU-4 score on both splits, 25.4 for ASGen (Large) vs. 22.1 for UniLM on Split1 and 28.0 for ASGen (Large) vs. 23.8 for UniLM on Split2.\nTo show the effectiveness of our answer-containing sentence pre-training task (AS), we compare various pre-training tasks. As shown in Table 3, AS is shown to perform better than NS, e.g., 21.5 vs. 18.2 and 24.7 vs. 19.7 in the two splits, respectively. Note that conditioning on a given answer has only a small effect on AS, e.g., 19.4 vs. 19.5. This implies that the performance gain is largely due to pre-training on the answer-containing sentence generation task rather than conditioning on a given answer.\nWe also compare the BLEU-4 scores before and after applying AS on other existing question generation models. We reproduce Zhao et al. (2018) and use the official code of Dong et al. (2019). As shown in Table 4, AS consistently improves the performance of other question generation models with no architecture changes or parameter tuning.\nDownstream Task Performance. We conduct experiments by training MRC models on the synthetic data generated by ASGen from Wikipedia before fine-tuning the model on the downstream dataset, to show the effectiveness of our synthetic data generation. For each dataset, the MRC model is pre-trained on the corresponding generated synthetic data and fine-tuned on the downstream data. 
As shown in Table 5, the MRC model pre-trained on the synthetic data generated by ASGen shows an improvement of 1.9 F1 score on SQuAD-v1.1, 4.0 F1 score on SQuAD-v2.0, and 0.5 F1 score on KorQuAD over the state-of-the-art baseline models. Moreover, using the synthetic data generated by ASGen shows better performance than using the synthetic data generated by 'BertGen+NS' on both SQuAD-v1.1 and SQuAD-v2.0 downstream data.\nEffects of MRC and Synthetic Data Size. Fig. 5 shows the effects of synthetic data with respect to the size of the synthetic and real MRC data. In Fig. 5-(a), where we fix the size of the synthetic data at 43M, the F1 score of the MRC model pre-trained on the synthetic data generated by ASGen consistently outperforms that of BertGen+NS. In particular, the performance difference becomes apparent for a small size of real MRC data, while the performance gap diminishes for a large size. Such a gap may become insignificant for a sufficient size of real MRC data, but for the current size of SQuAD data (87K in total) AS still improves the performance.\nAs shown in Fig. 5-(b), we also conducted experiments by training the MRC model using different amounts of generated synthetic data for the same number of iterations, while using the full size of real SQuAD data. The total number of training steps for all data sizes is kept the same as that of 10M synthetic data. A larger size of generated data consistently gives better performance.\nTransfer Learning to Other Datasets. In this experiment, we first fine-tune ASGen using SQuAD-v1.1, and using synthetic data generated by this ASGen, we train the BERT MRC model. Afterwards, we fine-tune BERT for the downstream MRC task using QUASAR-T, in order to verify that the data generated in this manner is useful for other MRC datasets. QUASAR-T has two separate datasets, one with short snippets as context, and the other with long paragraphs as context. As shown in Table 6, training with our synthetic data is shown to improve the F1 score by 2.2 and 1.7 for the two cases, respectively." }, { "heading": "5 QUALITATIVE RESULTS", "text": "Comparison of Question Generation. We qualitatively compare the generated questions after pre-training with NS and AS to demonstrate the effectiveness of our method. For the correct answer “49.6%”, as shown in the first sample in Table 7, NS omitted “Fresno”, a critical word for making the question specific, while AS's question does not suffer from this issue. Note that the word “Fresno” occurs in the answer-containing sentence. This issue also occurs in the second sample, where NS uses the word “available” rather than the more relevant words from the answer-containing sentence, but AS uses many of these words such as “most” and “popular” to generate contextually rich questions. Also, the question from NS asks about “two” libraries, while the answer has “three” libraries, showing the lack of sufficient conditioning on the answer. In a similar vein, the third sample shows that AS produces more context-related questions than NS by including the exact subject “TARDIS” for the corresponding answer." }, { "heading": "6 RELATED WORK", "text": "Machine Reading Comprehension. For MRC tasks, a large number of datasets have been proposed, most often focused on finding an answer span for a question from a given paragraph. Popular and fully human-annotated datasets include SQuAD-v1.1 (Rajpurkar et al., 2016), SQuAD-v2.0 (Rajpurkar et al., 2018), KorQuAD (Lim et al., 2019), and HotpotQA (Yang et al., 2018). 
However, these datasets are relatively small, with around 100K samples each, which is far smaller than the datasets used for unsupervised tasks such as language modeling.\nQuestion Generation. Question generation methods have been actively studied for various purposes, including data augmentation in question answering. Du et al. (2017) proposed an attention-based model for question generation by encoding sentence-level as well as paragraph-level information. Song et al. (2018) introduced a query-based generative model to jointly solve question generation and answering tasks. Kim et al. (2019) separately encoded the answer and the rest of the paragraph for proper question generation. Zhao et al. (2018) utilized a gated self-attention encoder with a max-out unit to handle long paragraphs. Our proposed method (AS) can further improve the question generation quality of these methods by pre-training them with an answer-containing sentence generation task.\nTransfer Learning. Pre-training methods have been increasingly popular in natural language processing for obtaining contextualized word representations. Open-GPT (Radford et al., 2018), BERT (Devlin et al., 2019a), XLNet (Yang et al., 2019), and UniLM (Dong et al., 2019) use a Transformer module (Vaswani et al., 2017) to learn different styles of language models on a large dataset, followed by fine-tuning on the downstream task. While our approach is similar to these approaches, our pre-training task for the question generator generates answer-containing sentences to learn better representations for the question generation task.\nSynthetic Data Generation. Subramanian et al. (2018) show that neural models generate better answers than using off-the-shelf tools for selecting named entities and noun phrases. Golub et al. (2017) proposed to separate the answer generation and the question generation. Their model generates questions conditioned on generated answers, and they evaluate the quality of the synthetic data by training an MRC model with it before fine-tuning on SQuAD. Inspired by the observations from previous studies, we improved the performance of answer generation and question generation by using newly designed models as well as a novel pre-training technique." }, { "heading": "7 CONCLUSIONS", "text": "We propose two advanced training methods for generating high-quality and diverse synthetic data for MRC. First, we dynamically choose the K top answer spans from an answer generator, and then we generate the sentence containing the corresponding answer span as a pre-training task for the question generator. Using the proposed methods, we generate 43M synthetic training samples and train the MRC model before fine-tuning on the downstream MRC dataset. Our proposed method outperforms existing question generation methods, achieving new state-of-the-art results on SQuAD question generation, and consistently improves the performance of state-of-the-art models on the SQuAD-v1.1, SQuAD-v2.0, KorQuAD, and QUASAR-T datasets without any architectural modification to the MRC model." }, { "heading": "A DETAILS OF WIKIPEDIA PREPROCESSING", "text": "To build the answer-containing sentence generation data and the synthetic MRC data, we collect all paragraphs from all articles of the entire English Wikipedia dump (the Korean Wikipedia dump for KorQuAD) and generate questions and answers on these paragraphs. 
We apply extensive filtering and cleanup to only retain the highest-quality paragraphs from Wikipedia.\nTo filter out low-quality obscure pages, we remove all pages that received less than 200 cumulative page-views, including all re-directions, in a 2-month period. In order to calculate the number of page-views, official Wikipedia page-view dumps were used. Of the 5.4M original Wikipedia articles, filtering by page-views leaves 2.8M articles.\nWe also remove all pages with less than 500 characters, as these pages are often low-quality stub articles, which removes a further 16% of the articles. We remove all “meta” namespace pages such as talk, disambiguation, user pages, portals, etc., as these often contain irrelevant text or casual conversations between editors.\nIn order to extract usable text from the wiki-markup format of the Wikipedia articles, we remove extraneous entities from the markup, including tables of contents, headers, footers, links/URLs, image captions, IPA double parentheticals, category tables, math equations, unit conversions, HTML escape codes, section headings, double-brace templates such as info-boxes, image galleries, HTML tags, HTML comments, and all other tables.\nWe then split the cleaned text from the pages into paragraphs, and remove all paragraphs with less than 150 characters or more than 3500 characters. Paragraphs with between 150 and 500 characters were sub-sampled such that these paragraphs make up 16.5% of the final dataset, as originally done for the SQuAD dataset. Since the majority of the paragraphs in Wikipedia are rather short, of the 60M paragraphs from the final 2.4M articles, our final Wikipedia dataset contains 8.3M paragraphs." }, { "heading": "B ADDITIONAL EXPERIMENT ON ANOTHER SQUAD SPLIT", "text": "We also evaluate the question generation model from Zhao et al. (2018) on another data split, which we call Test Split3. Test Split3 is obtained by randomly dividing the original development set in SQuAD-v1.1 into two equal halves and choosing one of them as the development set and the other as the test set, while retaining the train set of SQuAD-v1.1. As shown in Table 8, AS improves the BLEU-4 score of the question generation model from Zhao et al. (2018) on Test Split3 by 1.3 (w.r.t. the reproduced score)." }, { "heading": "C STANDARD ERRORS OF EVALUATION IN DOWNSTREAM MRC TASKS", "text": "As shown in Table 9, in the case of the downstream MRC results (EM/F1) that we discussed in Section 4, for SQuAD-v1.1 and SQuAD-v2.0 we selected 5 model checkpoints from the same pre-training at varying numbers of pre-training steps. We then fine-tuned each of these models on the final downstream data 3 times each, picked the best-performing model, and reported its score. For KorQuAD, only one fine-tuning was performed with the final pre-trained model." } ]
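As an illustrative addendum to Appendix A, the paragraph-level filters described there can be summarized in a short sketch. The numeric thresholds follow the text above; the input schema, function name, and sub-sampling mechanism are our assumptions, not the authors' pipeline.

```python
import random

def filter_paragraphs(pages, min_views=200, min_page_chars=500,
                      min_par_chars=150, max_par_chars=3500,
                      short_keep_frac=0.165, seed=0):
    """Apply Appendix-A style paragraph filters.

    pages: iterable of dicts with keys 'views' (int) and 'text' (cleaned
    article text); this schema is an assumption for illustration.
    Returns a list of retained paragraphs.
    """
    rng = random.Random(seed)
    kept = []
    for page in pages:
        if page["views"] < min_views or len(page["text"]) < min_page_chars:
            continue                       # drop obscure or stub pages
        for par in page["text"].split("\n\n"):
            n = len(par)
            if n < min_par_chars or n > max_par_chars:
                continue                   # drop too-short/too-long paragraphs
            # Sub-sample short (150-500 char) paragraphs; in practice the
            # keep fraction would be tuned so that short paragraphs end up
            # as roughly 16.5% of the final dataset, as done for SQuAD.
            if n < 500 and rng.random() > short_keep_frac:
                continue
            kept.append(par)
    return kept
```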
2019
null
SP:ab4fbcfc2199b778ff071e8ccff33efc8c37e351
[ "This work proposes an unsupervised hierarchical graph representation learning method, named BayesPool. The method learns a coarsening sequence of graphs together with the corresponding node representations. The coarsening sequence is learned using the method in Loukas (2019). The node representations are learned using an encoder-decoder structure, where the encoder encodes a graph to coarsened node representations, and the decoder decodes the node representations to a coarsened graph. The adopted objective function is analogous to VAE, except that the decoder does not aims to reconstruct an identical graph. Experiments on graph classification is performed on 5 different datasets, and competitive accuracy is achieved.", "The authors propose in this paper a new unsupervised graph representation learning method. The method leverages recent advances in graph coarsening, mainly Loukas' method. The key idea of the method consists in using a reconstruction target that is not the classical one in an auto-encoder setting. More precisely, the encoder takes as an input the original adjacency matrix and node features but the decode only aims at reconstructing the coarse adjacency matrix (obtained via Loukas' method). " ]
Hierarchical graph representation learning is an emerging subject owing to the increasingly popular adoption of graph neural networks in machine learning and applications. Loosely speaking, work under this umbrella falls into two categories: (a) use a predefined graph hierarchy to perform pooling; and (b) learn the hierarchy for a given graph through differentiable parameterization of the coarsening process. These approaches are supervised; a predictive task with ground-truth labels is used to drive the learning. In this work, we propose an unsupervised approach, BAYESPOOL, with the use of variational Bayes. It produces graph representations given a predefined hierarchy. Rather than relying on labels, the training signal comes from the evidence lower bound of encoding a graph and decoding the subsequent one in the hierarchy. Node features are treated as latent in this variational machinery, so that they are produced as a byproduct and are used in downstream tasks. We demonstrate a comprehensive set of experiments to show the usefulness of the learned representation in the context of graph classification.
[]
[ { "authors": [ "Joan Bruna", "Wojciech Zaremba", "Arthur Szlam", "Yann LeCun" ], "title": "Spectral networks and locally connected networks on graphs", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "Jie Chen", "Ilya Safro" ], "title": "Algebraic distance on graphs", "venue": "SIAM Journal on Scientific Computing,", "year": 2011 }, { "authors": [ "Jie Chen", "Tengfei Ma", "Cao Xiao" ], "title": "FastGCN: Fast learning with graph convolutional networks via importance sampling", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Jingsheng Jason Cong", "Joseph R Shinnerl" ], "title": "Multilevel optimization in VLSICAD, volume 14", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In NIPS,", "year": 2016 }, { "authors": [ "Inderjit Dhillon", "Yuqiang Guan", "Brian Kulis" ], "title": "A fast kernel-based multilevel algorithm for graph clustering", "venue": "In Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining,", "year": 2005 }, { "authors": [ "Inderjit S Dhillon", "Yuqiang Guan", "Brian Kulis" ], "title": "Weighted graph cuts without eigenvectors a multilevel approach", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 1944 }, { "authors": [ "Paul D Dobson", "Andrew J Doig" ], "title": "Distinguishing enzyme structures from non-enzymes without alignments", "venue": "Journal of molecular biology,", "year": 2003 }, { "authors": [ "Florian Dorfler", "Francesco Bullo" ], "title": "Kron reduction of graphs with applications to electrical networks", "venue": "IEEE Transactions on Circuits and Systems I: Regular Papers,", "year": 2012 }, { "authors": [ "David Duvenaud", "Dougal Maclaurin", "Jorge Aguilera-Iparraguirre", "Rafael Gómez-Bombarelli", "Timothy Hirzel", "Alán Aspuru-Guzik", "Ryan P. Adams" ], "title": "Convolutional networks on graphs for learning molecular fingerprints", "venue": "In NIPS,", "year": 2015 }, { "authors": [ "Hongyang Gao", "Shuiwang Ji" ], "title": "Graph U-Nets", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Matan Gavish", "Boaz Nadler", "Ronald R Coifman" ], "title": "Multiscale wavelets on trees, graphs and high dimensional data: theory and applications to semi supervised learning", "venue": "In Proceedings of the 27th International Conference on International Conference on Machine Learning,", "year": 2010 }, { "authors": [ "William L. 
Hamilton", "Rex Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "David Harel", "Yehuda Koren" ], "title": "A fast multi-scale method for drawing large graphs", "venue": "In International symposium on graph drawing,", "year": 2000 }, { "authors": [ "Mikael Henaff", "Joan Bruna", "Yann LeCun" ], "title": "Deep convolutional networks on graph-structured data", "venue": null, "year": 2015 }, { "authors": [ "B Hendrickson", "R Leland" ], "title": "A multi-level algorithm for partitioning graphs", "venue": "Proceedings of the 1995 ACM/IEEE Conference on Supercomputing,", "year": 1995 }, { "authors": [ "YF Hu", "Jennifer A Scott" ], "title": "A multilevel algorithm for wavefront reduction", "venue": "SIAM Journal on Scientific Computing,", "year": 2001 }, { "authors": [ "George Karypis", "Vipin Kumar" ], "title": "A fast and high quality multilevel scheme for partitioning irregular graphs", "venue": "SIAM Journal on scientific Computing,", "year": 1998 }, { "authors": [ "Kristian Kersting", "Nils M. Kriege", "Christopher Morris", "Petra Mutzel", "Marion Neumann" ], "title": "Benchmark data sets for graph kernels, 2016", "venue": "URL http://graphkernels.cs.tu-dortmund. de", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "Thomas N. Kipf", "Max Welling" ], "title": "Variational graph auto-encoders", "venue": "In NIPS Workshop on Bayesian Deep Learning,", "year": 2016 }, { "authors": [ "Thomas N. Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Stephane Lafon", "Ann B Lee" ], "title": "Diffusion maps and coarse-graining: A unified framework for dimensionality reduction, graph partitioning, and data set parameterization", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2006 }, { "authors": [ "Junhyun Lee", "Inyeop Lee", "Jaewoo Kang" ], "title": "Self-attention graph pooling", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Yujia Li", "Oriol Vinyals", "Chris Dyer", "Razvan Pascanu", "Peter Battaglia" ], "title": "Learning deep generative models of graphs", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Renjie Liao", "Zhizhen Zhao", "Raquel Urtasun", "Richard Zemel" ], "title": "Lanczosnet: Multi-scale deep graph convolutional networks", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Andreas Loukas" ], "title": "Graph reduction with spectral and cut guarantees", "venue": null, "year": 2019 }, { "authors": [ "Andreas Loukas", "Pierre Vandergheynst" ], "title": "Spectrally approximating large graphs with smaller graphs", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Tengfei Ma", "Jie Chen", "Cao Xiao" ], "title": "Constrained generation of semantically valid graphs via regularizing variational autoencoders", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Henning Meyerhenke", "Burkhard Monien", "Thomas Sauerwald" ], "title": "A new diffusion-based multilevel algorithm for computing graph partitions of very high quality", "venue": "IEEE International Symposium on Parallel and Distributed Processing,", "year": 2008 }, { "authors": [ "Ankur Moitra" ], "title": "Vertex sparsification and universal rounding algorithms", "venue": "PhD thesis, Massachusetts Institute of Technology,", "year": 2011 }, { "authors": [ "Francesco Orsini", 
"Paolo Frasconi", "Luc De Raedt" ], "title": "Graph invariant kernels", "venue": "In Twenty-Fourth International Joint Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Dorit Ron", "Ilya Safro", "Achi Brandt" ], "title": "Relaxation-based coarsening and multiscale graph organization", "venue": "Multiscale Modeling & Simulation,", "year": 2011 }, { "authors": [ "Peter Sanders", "Christian Schulz" ], "title": "Engineering multilevel graph partitioning algorithms", "venue": "In European Symposium on Algorithms,", "year": 2011 }, { "authors": [ "Eitan Sharon", "Achi Brandt", "Ronen Basri" ], "title": "Fast multiscale image segmentation", "venue": "In Proceedings IEEE Conference on Computer Vision and Pattern Recognition", "year": 2000 }, { "authors": [ "Martin Simonovsky", "Nikos Komodakis" ], "title": "Dynamic edge-conditioned filters in convolutional neural networks on graphs", "venue": null, "year": 2017 }, { "authors": [ "Martin Simonovsky", "Nikos Komodakis" ], "title": "GraphVAE: Towards generation of small graphs using variational autoencoders", "venue": "In ICANN,", "year": 2018 }, { "authors": [ "Shashanka Ubaru", "Yousef Saad" ], "title": "Sampling and multilevel coarsening algorithms for fast matrix approximations", "venue": "Numerical Linear Algebra with Applications,", "year": 2019 }, { "authors": [ "Petar Velic̆ković", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Liò", "Yoshua Bengio" ], "title": "Graph attention networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Nikil Wale", "Ian A Watson", "George Karypis" ], "title": "Comparison of descriptor spaces for chemical compound retrieval and classification", "venue": "Knowledge and Information Systems,", "year": 2008 }, { "authors": [ "Chris Walshaw", "Mark Cross" ], "title": "Jostle: parallel multilevel graph-partitioning software–an overview", "venue": null, "year": 2007 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Rex Ying", "Ruining He", "Kaifeng Chen", "Pong Eksombatchai", "William L. Hamilton", "Jure Leskovec" ], "title": "Graph convolutional neural networks for web-scale recommender systems", "venue": "In KDD,", "year": 2018 }, { "authors": [ "Rex Ying", "Jiaxuan You", "Christopher Morris", "Xiang Ren", "William L. Hamilton", "Jure Leskovec" ], "title": "Hierarchical graph representation learning with differentiable pooling", "venue": "In NIPS,", "year": 2018 }, { "authors": [ "Muhan Zhang", "Zhicheng Cui", "Marion Neumann", "Yixin Chen" ], "title": "An end-to-end deep learning architecture for graph classification", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Muhan Zhang", "Shali Jiang", "Zhicheng Cui", "Roman Garnett", "Yixin Chen" ], "title": "D-VAE: A variational autoencoder for directed acyclic graphs", "venue": "NeurIPS,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Graph representation learning has attracted a surge of interest recently, inspired by the widespread success of representation learning in the image and language domains through the use of deep neural networks for parameterization. A substantial number of graph neural network (GNN) architectures (Bruna et al., 2014; Henaff et al., 2015; Duvenaud et al., 2015; Defferrard et al., 2016; Kipf & Welling, 2017; Hamilton et al., 2017; Chen et al., 2018; Velic̆ković et al., 2018; Ying et al., 2018a; Liao et al., 2019; Xu et al., 2019) extend the convolution filters for a regular grid of data (e.g., image pixels, time series, and sequences) to irregularly connected graph neighborhoods. This extension naturally stimulates the quest of also extending the pooling operation in convolutional neural networks (CNN) to graphs. The challenge lies in the irregular connections as opposed to a regular grid structure, whereby partitioning is straightforward.\nGraph pooling is used in at least two scenarios. One is global pooling: it pools the vector representations of all nodes to a single vector as the graph representation. Simple operators such as max or mean are applied. A slightly more complex operator is the weighted sum, wherein the weights are computed through attention (Velic̆ković et al., 2018; Lee et al., 2019). A recently proposed operator is top-k pooling (Zhang et al., 2018), whereby a fixed number of node representations at the top of a sorted list is retained so that convolutions or feed-forward transformations are applied.\nThe second use of pooling is the creation of a graph hierarchy. In this scenario, pooling is local, more similar to that in CNNs. It is interfaced with graph coarsening (also called graph reduction or graph compression), a generic form of which is to cluster the nodes of the original graph into a node of the coarse graph. Then, the representations of the nodes in the cluster are pooled. The clustering may be obtained by using existing graph coarsening or graph clustering approaches, as in Bruna et al. (2014); Defferrard et al. (2016); Simonovsky & Komodakis (2017); or learned through parameterization as in Ying et al. (2018b); Gao & Ji (2019); Lee et al. (2019). In either approach, the result include both a hierarchy of graphs and the accompanying node representations.\nRepresentation learning in these local pooling approaches is supervised, with the training signal coming from labels of the downstream task. In this work, we propose an unsupervised learning approach, named BAYESPOOL, through the use of variational Bayes. We use an existing coarsening\nmethod to obtain the graph hierarchy, an example of which is shown in Figure 1. Then, our contribution is the learning of node representations for all graphs in this sequence. The high-level idea is to employ an encoder-decoder architecture: the encoder takes the graph and its node features as input and produces node features for the next graph in the hierarchy, whereas the decoder uses these produced node features to construct the graph. The objective is to obtain a decoding result as close to the given next graph as possible. The tool we use is variational Bayes. It is, however, slightly different from variational autoencoders (Kingma & Welling, 2014), because our decoder does not intend to reconstruct the input graph.\nA clear benefit of unsupervised learning is that the learned representation is not tailored to a specific downstream task and hence may be more generalizable. 
Moreover, the coarsening method we adopt is a recent development that holds a spectral guarantee (Loukas & Vandergheynst, 2018; Loukas, 2019) on the quality of the coarse graphs. We demonstrate the effectiveness of such a combination of hierarchy production and node representation learning in the context of graph classification. In particular, the classification performance is rather competitive with state-of-the-art supervised GNN approaches." }, { "heading": "2 RELATED WORK", "text": "This work is in part based on graph coarsening, which produces a hierarchy for a given graph. Denote by G = (V, E) a graph with vertex set V and edge set E. Graph coarsening is concerned with computing a smaller (coarse) graph Gc = (Vc, Ec) with |Vc| < |V| that retains the structure of G. A multilevel coarsening technique recursively coarsens the graph, yielding a hierarchy. Graph coarsening has been studied in the context of graph partitioning (Karypis & Kumar, 1998; Dhillon et al., 2007), graph visualization (Harel & Koren, 2000), machine learning (Lafon & Lee, 2006; Gavish et al., 2010; Ubaru & Saad, 2019), and pooling in graph neural networks (Bruna et al., 2014; Defferrard et al., 2016; Simonovsky & Komodakis, 2017). A variety of heuristic coarsening techniques have been proposed in different disciplines, including matching (Hendrickson & Leland, 1995; Ubaru & Saad, 2019), first choice (Cong & Shinnerl, 2013), contraction-based schemes (Dhillon et al., 2005; Sanders & Schulz, 2011), and algebraic multigrid (AMG)-inspired schemes (Sharon et al., 2000; Hu & Scott, 2001; Ron et al., 2011; Chen & Safro, 2011). Many well-known software packages exist for graph coarsening, e.g., Jostle (Walshaw & Cross, 2007), Metis (Karypis & Kumar, 1998), and DiBaP (Meyerhenke et al., 2008).\nRecently, a few graph coarsening techniques achieving certain theoretical guarantees were presented (Moitra, 2011; Dorfler & Bullo, 2012; Loukas & Vandergheynst, 2018). Loukas (2019) presented variational approaches for graph coarsening with spectral guarantees. In particular, it was shown that the coarse graphs preserve the top eigenspace (whose dimension is an input to the method) within a predefined error tolerance. Here, we use this variational approach to obtain the graph hierarchy.\nThis work is concerned with unsupervised graph representation learning. Recent literature has focused on generative models to achieve the same; see, e.g., Kipf & Welling (2016); Li et al. (2018); Ma et al. (2018); Simonovsky & Komodakis (2018); Zhang et al. (2019). For learning hierarchical representations of graphs, most of the works that we are aware of are based on supervised learning, including Bruna et al. (2014); Defferrard et al. (2016); Simonovsky & Komodakis (2017); Ying et al. (2018b); Gao & Ji (2019); Lee et al. (2019). Methods most relevant to our work include: DIFFPOOL (Ying et al., 2018b), where the coarsening matrices are learned in an end-to-end fashion; GRAPH U-NET (Gao & Ji, 2019), where graph pooling is achieved using a learnable vector and node ranking; and SAGPOOL (Lee et al., 2019), which is similar to GRAPH U-NET but uses graph self-attention to compute the ranking." }, { "heading": "3 METHOD", "text": "The proposed method BAYESPOOL is an extension of variational autoencoders. As the name suggests, the goal of an autoencoder is to reconstruct the original input object after encoding it in the latent space. 
Our approach does not reconstruct the original input, but rather, aims at decoding an output faithful to another prescribed object. To this end, we first revisit variational Bayes and justify the use of the variational lower bound for learning. Then, the machinery is applied to the graph context." }, { "heading": "3.1 VARIATIONAL BAYES", "text": "Let x be the observed (data) variable and z be the unobserved (latent) variable. A core subject of Bayesian inference is concerned with estimating the posterior distribution p(z|x). It is related to the prior p(z) and the likelihood p(x|z) through the Bayes theorem\n$$p(z|x) = \frac{p(x|z)\,p(z)}{\int p(x, z)\, dz}.$$\nThe challenge lies in the marginalization over z in the denominator, which is generally computationally intractable. Hence, various approximations were developed. Typically one adopts a surrogate model q(z) independent of data; and recently in the context of VAEs, the data-dependent distribution q(z|x) is often used. In our setting, we introduce a new variable x̃ and consider q(z|x̃). The difference, in terms of the Kullback–Leibler divergence, between the surrogate (variational) posterior q(z|x̃) and the true posterior p(z|x) may be decomposed as\n$$D_{KL}\big(q(z|\tilde{x}) \,\|\, p(z|x)\big) = \int q(z|\tilde{x}) \log \frac{q(z|\tilde{x})}{p(z|x)}\, dz = \underbrace{\int q(z|\tilde{x}) \log \frac{q(z|\tilde{x})}{p(z)}\, dz}_{D_{KL}(q(z|\tilde{x}) \,\|\, p(z))} + \underbrace{\int q(z|\tilde{x}) \log p(x)\, dz}_{\log p(x)} - \underbrace{\int q(z|\tilde{x}) \log p(x|z)\, dz}_{\mathbb{E}_{q(z|\tilde{x})}[\log p(x|z)]}.$$\nIt consists of three terms: the KL divergence between the variational posterior and the prior p(z), the log-evidence log p(x), and the marginal log-likelihood log p(x|z) under the surrogate distribution. Because the KL divergence is nonnegative, the log-evidence is lower bounded by the combination of the other two terms:\n$$\log p(x) \ge \mathbb{E}_{q(z|\tilde{x})}\big[\log p(x|z)\big] - D_{KL}\big(q(z|\tilde{x}) \,\|\, p(z)\big). \quad (1)$$\nThe better the surrogate, the tighter the lower bound.\nOne sees that the right-hand side of (1) is almost the same as the usual log-evidence lower bound (ELBO), except that the surrogate q(z|x̃) appears in place of q(z|x). This observation is not surprising, because the marginalization is over the latent variable z and has nothing to do with x and x̃. We thus conclude that the usual machinery of VAE applies, with only a notational change of the variational posterior. In the usual VAE setting, the first term of the right-hand side of (1) is considered the decoding accuracy, whereas the second term is a regularization in the latent space. Our setting follows this interpretation." }, { "heading": "3.2 GRAPH REPRESENTATION LEARNING WITH VARIATIONAL BAYES", "text": "In our setting, a pair of graphs (the original one and the coarse one) is given. Let $A \in \mathbb{R}^{n \times n}$ and $A^c \in \mathbb{R}^{m \times m}$ be the corresponding graph adjacency matrices, respectively. Similarly, denote by $X \in \mathbb{R}^{n \times d}$ and $X^c \in \mathbb{R}^{m \times d'}$ the corresponding node feature matrices. We apply the encoder-decoder formalism, whereby the encoder encodes A and X into the coarse graph features X^c that we seek, such that the decoder can use X^c to decode a coarse graph as similar to A^c as possible. See Figure 2 for an illustration.\nSpecifically, in the language of generative modeling, the encoder is the parameterized inference model that produces the parameters of q(X^c|A,X), and the decoder is the parameterized generative model that produces the parameters of p(A^c|X^c). Following (1), A^c plays the role of x, X^c plays the role of z, and (A,X) plays the role of x̃. The variational lower bound for model learning is thus\n$$\text{ELBO} = \mathbb{E}_{q(X^c|A,X)}\big[\log p(A^c|X^c)\big] - D_{KL}\big(q(X^c|A,X) \,\|\, p(X^c)\big). \quad (2)$$\nMaximizing the ELBO amounts to maximizing the likelihood of decoding the coarse A^c (given the coarse node features X^c resulting from the encoder), while minimizing a regularization term that penalizes the departure of the variational posterior q(X^c|A,X) from the latent distribution p(X^c)." }, { "heading": "3.3 MODELING AND PARAMETERIZATION", "text": "Generally, the latent space may be kept simple and unparameterized, with more emphasis placed on the encoder and the decoder. Thus, we let the prior p(X^c) be the standard matrix normal $\mathcal{MN}(X^c \mid 0_{m \times d'}, I_m, I_{d'})$. Occasionally, specifying simple Gaussian structures on p(X^c) may improve performance, such as letting p(X^c) be a factored Gaussian, with the mean and the diagonal variance being parameters to learn. We have not yet, however, obtained strong empirical evidence of the benefit of using a parameterized factored Gaussian in this case.\nFor the decoder (generative model), a natural choice is to treat each element of the coarse graph adjacency matrix as a Bernoulli variable (scaled to the magnitude of the corresponding edge weight), with the success probability parameterized by a function of the corresponding coarse node features. For notational convenience, let the column vector $x^c_i \equiv X^c(i,:)^T$ and let $A^c_{ij} \equiv A^c(i,j)$. Then, p(A^c|X^c) is the product of independent Bernoulli distributions:\n$$p(A^c|X^c) = \prod_{i \ne j} p(A^c_{ij} \mid x^c_i, x^c_j) = \prod_{i \ne j} \text{Bernoulli}(\mathbb{1}_{A^c_{ij}} \mid p_{ij}),$$\nwhere $\mathbb{1}_{A^c_{ij}}$ is the indicator function that returns 1 if $A^c_{ij} \ne 0$ and 0 otherwise, and $p_{ij}$ is a parameterized function that computes the success probability. A simple choice of the probability is an (unparameterized) dot product:\n$$p_{ij} = \text{sigmoid}(\langle x^c_i, x^c_j \rangle), \quad (3)$$\nbut there exist several other straightforward parameterized variants. For example,\n$$p_{ij} = \text{sigmoid}(\langle W^T x^c_i, W^T x^c_j \rangle) \quad \text{and} \quad p_{ij} = \text{sigmoid}(w^T (x^c_i \odot x^c_j)), \quad (4)$$\nwhere W and w are a parameter matrix and vector, respectively. One may also replace them by an MLP. Note that all these functions are symmetric with respect to i and j because the graph is undirected.\nFor the encoder (inference model), we treat the variational posterior q(X^c|A,X) as a factored Gaussian: each coarse node $x^c_i$ is an independent Gaussian with vector mean $\mu_i$ and diagonal variance $\text{diag}(\sigma^2_i)$. Then,\n$$q(X^c|A,X) = \prod_i q(x^c_i|A,X) = \prod_i \mathcal{N}(x^c_i \mid \mu_i, \text{diag}(\sigma_i^2)),$$\nwhere $\mu_i$ and $\sigma_i$ are parameterized functions of A and X. By doing so, the KL term in the ELBO (2) admits a closed form. We transpose the column vectors $\mu_i$ and stack them to form a matrix M. Similarly, we proceed with the $\sigma_i$'s and form a matrix S. In what follows, we model the parameterized function for M; the one for S is analogous.\nLet C be the set of coarse nodes; hence A(C,:) keeps only the rows of A that correspond to the coarse nodes. We let\n$$M = \sigma(HXW_1), \quad (5)$$\nwhere $W_1$ is a parameter matrix, H has the same nonzero pattern as A(C,:), and $\sigma$ is an activation function. This expression is similar in form to one graph convolution layer in GCN (Kipf & Welling, 2017), except that the square normalized adjacency matrix is replaced by a fat rectangular matrix H. Rather than basing H on the original adjacency matrix, we designate its nonzero elements to be self-attention weights computed from the node features. Specifically, the nonzero elements of the i-th row of H are computed as\n$$\text{softmax}_{j \in \text{neighbor}(i)}\big(w_2^T \tanh(W_3 x_i + W_4 x_j)\big), \quad (6)$$\nwhere $w_2$ is a parameter vector and $W_3$ and $W_4$ are parameter matrices.
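Before turning to alternative parameterizations, the pieces introduced so far fit together as in the following minimal PyTorch-style sketch of one BAYESPOOL level: a factored-Gaussian encoder in the spirit of (5), the dot-product decoder (3), and the negative ELBO (2) with the closed-form Gaussian KL. For brevity, the sketch replaces the learned attention weights of (6) with plain row normalization of A(C,:), uses dense tensors, and keeps the diagonal terms in the reconstruction; these simplifications and all names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesPoolLayer(nn.Module):
    """One coarsening level: encode (A, X) -> X_c, decode X_c -> edge probs.

    Sketch only: H is a row-normalized A(C, :) instead of the learned
    attention in (6), and everything is dense for brevity.
    """
    def __init__(self, d_in, d_out):
        super().__init__()
        self.w_mu = nn.Linear(d_in, d_out, bias=False)      # W_1 for M in (5)
        self.w_logvar = nn.Linear(d_in, d_out, bias=False)  # analogue for S

    def forward(self, A, X, coarse_idx):
        H = A[coarse_idx]                                   # rows of coarse nodes
        H = H / H.sum(dim=1, keepdim=True).clamp(min=1e-8)  # avoid division by 0
        mu = torch.tanh(self.w_mu(H @ X))                   # mean matrix M
        logvar = self.w_logvar(H @ X)                       # log diagonal variance
        Xc = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        P = torch.sigmoid(Xc @ Xc.T)                        # decoder (3)
        return Xc, P, mu, logvar

def neg_elbo(P, Ac, mu, logvar):
    """Negative ELBO (2): Bernoulli reconstruction of A_c plus the closed-form
    KL between N(mu, diag(sigma^2)) and the standard normal prior.
    (The paper restricts the product to i != j; the diagonal is kept here.)"""
    target = (Ac != 0).float()
    recon = F.binary_cross_entropy(P, target, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Toy usage: a 6-node graph coarsened to 3 nodes (indices chosen arbitrarily;
# in BAYESPOOL both the coarse nodes and A_c come from the coarsening method).
A = torch.bernoulli(torch.full((6, 6), 0.4)); A = ((A + A.T) > 0).float()
X = torch.randn(6, 4)
idx = torch.tensor([0, 2, 4])
layer = BayesPoolLayer(4, 2)
Xc, P, mu, logvar = layer(A, X, idx)
loss = neg_elbo(P, A[idx][:, idx], mu, logvar)
```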
As an alternative, one may replace the attention calculation by that in GAT (Velic̆ković et al., 2018):\n$$\text{softmax}_{j \in \text{neighbor}(i)}\big(\text{LeakyReLU}(w_2^T [W_3 x_i; W_3 x_j])\big). \quad (7)$$\nTo further enhance the representational power, in the parameterization (5) one may replace the original feature matrix X by the node embedding matrix Z output from a GCN:\n$$M = \sigma(HZW_1) \quad \text{with} \quad Z = \text{GCN}(A, X; W_5). \quad (8)$$\nThe GCN introduces additional parameters $W_5$ that may be useful for large data sets.\nIn Section 4, we experiment with the different variants and suggest a default choice that works generally well." }, { "heading": "3.4 MULTILEVEL LEARNING", "text": "Coarsening may be done recursively, forming a sequence of increasingly coarse graphs. Let the adjacency matrices of this sequence be $A_0, A_1, \ldots, A_L$, where $A_0 = A$ corresponds to the initial given graph. Given this sequence and the initial feature matrix $X_0 = X$, the goal is to obtain the subsequent feature matrices $X_1, \ldots, X_L$.\nTo this end, model learning is conducted through maximizing the evidence of observing $A_1, \ldots, A_L$, treated independently. That is, we want to optimize $\log p(A_1) + \cdots + \log p(A_L)$. Following the same argument as before, the actual quantity to optimize is the evidence lower bound. Inserting the layer index $\ell$ into (2), we have\n$$\text{ELBO}_\ell = \mathbb{E}_{q(X_\ell|A_{\ell-1},X_{\ell-1})}\big[\log p(A_\ell|X_\ell)\big] - D_{KL}\big(q(X_\ell|A_{\ell-1},X_{\ell-1}) \,\|\, p(X_\ell)\big).$$\nThen, the log-evidence lower bound is the sum\n$$\text{ELBO} = \sum_{\ell=1}^{L} \text{ELBO}_\ell.$$\nThe encoder and decoder parameters across coarsening levels differ, but they are jointly learned through maximizing the ELBO." }, { "heading": "3.5 DOWNSTREAM TASKS", "text": "The learned features $X_1, \ldots, X_L$, together with the original $X_0$, may be used for predictive tasks through learning a separate predictive model. We follow a common practice and define the model as\n$$y^p = \text{MLP}\big(\text{concat}\big(\text{readout}(X_0), \text{readout}(X_1), \ldots, \text{readout}(X_L)\big)\big),$$\nwhere readout is a global pooling across graph nodes (e.g., a concatenation of the max pooling and the mean pooling), concat denotes vector concatenation, MLP is self-explanatory, and $y^p$ is the class probability vector. In this paper, we consider the graph classification task." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we evaluate the performance of BAYESPOOL through the task of graph classification. Note again that BAYESPOOL is an unsupervised method; but as we will see, it is rather competitive with recently proposed supervised methods, even outperforming them on several data sets. We first present the details of the experimented data sets, the compared methods, and the training procedure. Then, we compare the graph classification accuracies. We also perform a sensitivity analysis regarding the number of coarsening levels and compare the performance of several variants in the implementation of BAYESPOOL.\nData sets: We consider the same data sets used by Lee et al. (2019). They are standard benchmarks publicly available from Kersting et al. (2016). Table 1 summarizes the information.\nThe first two data sets are related to protein structures. The graphs in DD (Dobson & Doig, 2003) have different amino acids as nodes; the edges correspond to the distance between the nodes. Labels indicate if the protein is an enzyme or not. The PROTEINS (Dobson & Doig, 2003) graphs have secondary structure elements of proteins as nodes. The edges indicate whether the nodes are in amino acids. NCI1 and NCI109 are biological data sets popularly used for anticancer activity classification (Wale et al., 2008). 
Here, the graphs correspond to chemical compounds, with the atoms and the chemical bonds represented as the nodes and edges, respectively. The FRANKENSTEIN (Orsini et al., 2015) data set contains molecular graphs with 780 node features. The labels indicate if a molecule is a mutagen or a non-mutagen. Data sets DD, NCI1, and NCI109 do not come with node attributes for use as features. Hence, we employ transformations of node degrees as features, following the practice of DIFFPOOL.\nCompared methods: We compare with three supervised hierarchical graph representation learning methods, namely DIFFPOOL, GRAPH U-NET, and SAGPOOL. We quote the test accuracies of these methods as reported by Lee et al. (2019).\nDIFFPOOL (Ying et al., 2018b) computes hierarchical representations of graphs by using end-to-end trainable pooling based on soft clustering. This method is expensive because of the computation of the dense projection matrix. Lee et al. (2019) report that the method ran out of memory for a pooling ratio greater than 0.5. GRAPH U-NET (Gao & Ji, 2019) uses a learnable scoring vector to rank the graph nodes and selects top-ranked nodes for pooling. SAGPOOL (Lee et al., 2019) uses a similar approach as GRAPH U-NET for pooling, but incorporates a self-attention layer for learning the scoring vector. In both methods, the pooling ratio was set to 0.5.\nTraining procedure: We follow Lee et al. (2019) and perform several rounds of random splits with 80% training, 10% validation and 10% test. The learning rate is tuned over the range [1e-2, 5e-2, 1e-3, 5e-3, 1e-4]. The hidden dimensions are tuned over [10, 20, 32, 48]. For LeakyReLU, the slope coefficient is set to 0.01. The coarsening ratio ρ is experimented with over the values [0.25, 0.5, 0.75]; see Section 4.2 for sensitivity analysis. The dimension of the top eigenspace to be preserved by the coarsening procedure is set to K = 5. We implement the method in PyTorch and use Adam as the optimizer. The training employs early stopping with a patience of 50 epochs. The training settings for the other methods are reported in Lee et al. (2019).\nFor a fair comparison, we use the same MLP classifier as in Lee et al. (2019). For readout, we use mean pooling and max pooling and concatenate the two outputs. We then use 3 feedforward layers along with softmax for classification. The classifier is trained for 150 epochs.\nFor architecture variation, the following combination consistently achieves the best results: the decoder uses an unparameterized dot product (3); the encoder uses parameterization (5) with the attention matrix H computed by using the GAT form (7). The classification results reported in Section 4.1 below follow this choice. The results of other variants are reported in the subsections that follow.\nThe code is available at https://anonymous.4open.science/r/a50d6411-55f7-4e24-8f6c-6eecee118ea0/." }, { "heading": "4.1 GRAPH CLASSIFICATION", "text": "We compare the performance of BAYESPOOL with several high-performing supervised methods recently proposed. These methods are all hierarchical methods. Table 2 lists the average test accuracies. The pooling/coarsening ratio is 0.5 in all cases.\nBAYESPOOL outperforms other methods on three out of five data sets. Its performance is also on par with DIFFPOOL on the other two data sets, although inferior to GRAPH U-NET and SAGPOOL. The graphs in DD and PROTEINS are relatively large; a coarsening ratio of 50% does not seem to cause information loss. Hence, BAYESPOOL works rather appealingly.
On the other hand, although the graphs are smaller in FRANKENSTEIN, the data set contains a large number (780) of features, which possibly dwarf the graph structure information. Therefore, all methods yield similar results (with ours slightly outperforming the others). Encouragingly, BAYESPOOL is an unsupervised method; hence, these results show that the method is highly competitive for downstream tasks such as graph classification. It enables incorporating sophisticated coarsening techniques such as that of Loukas (2019) for graph representation learning.\nNote that the adjacency matrices of the graphs are typically sparse. BAYESPOOL leverages sparse matrix computation, as opposed to DIFFPOOL, where the projection matrix does not have an a priori sparsity structure. The coarsening procedure used by BAYESPOOL is implemented in sparse matrix format, along with the calculations of the neural network. This implementation results in lower time and space complexity." }, { "heading": "4.2 EFFECT OF COARSENING RATIO", "text": "One of the key factors that affects the performance of BAYESPOOL is the amount of graph reduction (the ratio of the coarse graph size to the initial size), or equivalently, the number of coarsening levels. This ratio is an input parameter to the coarsening method of Loukas (2019) that we use. In Table 3, we evaluate the performance of BAYESPOOL on two data sets with respect to the coarsening ratio. These data sets have relatively large graphs so that aggressive coarsening is possible.\nIn the table, we report the results for three different levels of coarsening with ratio ρ = m/n, where m is the number of nodes in the coarse graph and n that of the original graph. We observe that the performance of BAYESPOOL is relatively stable when ρ ≥ 0.5, but degrades as ρ becomes smaller. As we lower ρ, more and more nodes and edges are removed, causing significant loss of information. However, even for ρ = 0.25, BAYESPOOL still yields comparable results to DIFFPOOL according to Table 2. We conclude that ρ = 0.5 appears to be the right tradeoff." }, { "heading": "4.3 VARIANTS OF ARCHITECTURE", "text": "As discussed in Section 3, a few parameterizations of the encoder and the decoder are possible. In this subsection, we comprehensively study the different variants, with the aim of obtaining a combination that generally works well. The results are reported in Table 4.\nIn the top part of Table 4 we compare the performance of four variants of the encoder output M (as well as S). The variants include (i) the attention calculation and (ii) whether or not to apply GCN before attention. The former contains two versions (6) and (7) and the latter also admits two versions (5) and (8), hence four combinations in total.\nFrom the table, we observe that the GAT approach with LeakyReLU yields a better performance; and interestingly, additionally using GCN for parameterization lowers the performance. The introduction of the additional parameters inside GCN does not seem helpful.\nIn the bottom part of Table 4 we compare the performance of four variants of the decoder output p_{ij}. Along with the three variants discussed in Section 3.3 (unparameterized (3) and matrix/vector-parameterized (4)), we also consider replacing the matrix W in (4) by a 2-layer MLP.\nFrom the table, we observe that the plain dot product performs the best for both data sets. Again, the introduction of additional parameters does not seem helpful; rather, the accuracies deteriorate.
This observation is consistent for both the encoder and the decoder. It is possible that the use of many parameters adversely affects the performance on data sets of the scale experimented with here." }, { "heading": "5 CONCLUSION", "text": "We have presented an unsupervised approach, BAYESPOOL, for hierarchical graph representation learning. Compared with supervised approaches, a clear benefit is that the learned representations are generalizable to different downstream tasks. BAYESPOOL consists of an encoder-decoder architecture and adopts variational Bayes for training, but it is different from standard VAEs in that it does not attempt to reconstruct the input graph; rather, the decoder aims at producing the next graph in the hierarchy. Together with the use of the graph coarsening approach of Loukas (2019), we perform empirical evaluations that show that the learned representations yield classification accuracies competitive with state-of-the-art supervised GNN methods." } ]
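To make the multilevel objective of Section 3.4 concrete, a minimal PyTorch-style sketch follows; the module interfaces, the reparameterized sampling, and passing the posterior mean upward as the next level's features are illustrative assumptions rather than the released implementation.

```python
import torch

def level_elbo(A_next, mu, logvar, decode_probs):
    # Reparameterized sample X_l ~ q(X_l | A_{l-1}, X_{l-1}) = N(mu, diag(sigma^2)).
    X = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
    p = decode_probs(X).clamp(1e-6, 1 - 1e-6)            # p_ij, e.g. Eq. (3)
    tgt = (A_next != 0).float()                          # Bernoulli targets 1_{A_ij != 0}
    off_diag = 1.0 - torch.eye(A_next.size(0))           # product runs over i != j
    recon = (off_diag * (tgt * p.log() + (1 - tgt) * (1 - p).log())).sum()
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over coarse nodes.
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum()
    return recon - kl

def total_elbo(adjacencies, encoders, decoders, X0):
    # adjacencies: [A_0, ..., A_L]; one encoder/decoder pair per level l = 1..L.
    elbo, X = 0.0, X0
    for l in range(1, len(adjacencies)):
        mu, logvar = encoders[l - 1](adjacencies[l - 1], X)
        elbo = elbo + level_elbo(adjacencies[l], mu, logvar, decoders[l - 1])
        X = mu     # assumption: use the posterior mean as the next level's features
    return elbo    # maximize, i.e. minimize -total_elbo
```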
2,019
null
SP:f5783ca08d51aa886277c95f86438981e3e74810
[ "This paper focuses on topic of searching for the optimal architecture for the deep network. Building on the split linearized bregman iteration strategy, the authors propose two practical algorithms to boost network, namely GT-filters Alg and GT-layers Alg. The proposed algorithms can simultaneously grow and train a network by progressively adding both convolutional filters and layers. The experiments conducted on VGG and ResNets display the comparable accuracies between the BoN and the standard big models, but with much more compact representations and balanced computational cost.", "This paper proposes an architecture search method for deep convolutional neural network models that progressively increases the number of filters per layer as well as the number of layers, and the authors refer to this general approach as boosting networks. The algorithm for increasing the number of filters is based on split linear Bregman iteration, and the algorithm for increasing the number of layers proceeds block by block, increasing the layers per block until the accuracy does not increase. The experiments convincingly demonstrate gains in performance and smaller network sizes compared to baseline models, naive boosting methods, and a related method called Autogrow." ]
Network structures are important to learning good representations of many tasks in computer vision and machine learning communities. These structures are either manually designed, or searched by Neural Architecture Search (NAS) in previous works, which however requires either expert-level efforts or prohibitive computational cost. In practice, it is desirable to efficiently and simultaneously learn both the structures and parameters of a network from arbitrary classes with budgeted computational cost. We identify it as a new learning paradigm – Boosting Network, where one starts from simple models, delving into complex trained models progressively. In this paper, by virtue of an iterative sparse regularization path, Split Linearized Bregman Iteration (SplitLBI), we propose a simple yet effective boosting network method that can simultaneously grow and train a network by progressively adding both convolutional filters and layers. Extensive experiments with VGG and ResNets validate the effectiveness of our proposed algorithms.
[]
[ { "authors": [ "Reze Abbasi-Asl", "Bin Yu" ], "title": "Structural compression of convolutional neural networks based on greedy filter pruning", "venue": "In arxiv,", "year": 2017 }, { "authors": [ "Gabriel Bender", "Pieter-Jan Kindermans", "Barret Zoph", "Vijay Vasudevan", "Quoc V. Le" ], "title": "Understanding and simplifying one-shot architecture search", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "H. Cai", "T. Chen", "W. Zhang", "Y. Yu", "J. Wang" ], "title": "Efficient architecture search by network transformation", "venue": "Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "H. Cai", "J. Yang", "W. Zhang", "S. Han", "Y. Yu" ], "title": "Path-level network transformation for efficient architecture search", "venue": "arXiv preprint arXiv:1806.02639,", "year": 2018 }, { "authors": [ "Han Cai", "Tianyao Chen", "Weinan Zhang", "Yong Yu", "Jun Wang" ], "title": "Efficient architecture search by network transformation", "venue": "AAAI, 2018c", "year": 2018 }, { "authors": [ "T. Chen", "I. Goodfellow", "J. Shlens" ], "title": "Net2net: Accelerating learning via knowledge transfer", "venue": "arXiv preprint arXiv:1511.05641,", "year": 2015 }, { "authors": [ "T. Elsken", "J.-H. Metzen", "F. Hutter" ], "title": "Simple and efficient architecture search for cnns", "venue": "NIPS, 2017", "year": 2017 }, { "authors": [ "Thomas Elsken", "Jan Hendrik Metzen", "Frank Hutter" ], "title": "Neural architecture search: A survey", "venue": "In arxiv:1808.05377,", "year": 2018 }, { "authors": [ "Yanwei Fu", "Chen Liu", "Donghao Li", "Jingsan Zeng", "Yuan Yao" ], "title": "Parsimonious deep learning: A differential inclusion approach with global convergence", "venue": null, "year": 2019 }, { "authors": [ "Song Han", "Huizi Mao", "William J Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "CVPR, 2016a", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "In NIPS 2014 Deep Learning Workshop,", "year": 2014 }, { "authors": [ "Andrew G. 
Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications", "venue": "arxiv,", "year": 2017 }, { "authors": [ "Chendi Huang", "Xinwei Sun", "Jiechao Xiong", "Yuan Yao" ], "title": "Split lbi: An iterative regularization path with structural sparsity", "venue": "Advances In Neural Information Processing Systems (NIPS),", "year": 2016 }, { "authors": [ "Chendi Huang", "Xinwei Sun", "Jiechao Xiong", "Yao Yuan" ], "title": "Boosting with structural sparsity: A differential inclusion approach", "venue": "Applied and Computational Harmonic Analysis,", "year": 2018 }, { "authors": [ "Forrest N. Iandola", "Song Han", "Matthew W. Moskewicz", "Khalid Ashraf", "William J. Dally", "Kurt Keutzer" ], "title": "Squeezenet: Alexnet-level accuracy with 50x fewer parameters and ¡0.5mb model size", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E. Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": null, "year": 2012 }, { "authors": [ "Yann LeCun", "Leon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Zhizhong Li", "Derek Hoiem" ], "title": "Learning without forgetting", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Ningning Ma", "Xiangyu Zhang", "Hai-Tao Zheng", "Jian Sun" ], "title": "Shufflenet v2: Practical guidelines for efficient cnn architecture design", "venue": "In arXiv:1807.11164v1,", "year": 2018 }, { "authors": [ "Pavlo Molchanov", "Stephen Tyree", "Tero Karras", "Timo Aila", "Jan Kautz" ], "title": "Pruning convolutional neural networks for resource efficient transfer learning", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Anastasia Pentina", "Christoph H. Lampert" ], "title": "Lifelong learning with non-i.i.d", "venue": "tasks. In NIPS", "year": 2015 }, { "authors": [ "Hieu Pham", "Melody Y Guan", "Barret Zoph", "Quoc V Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameter sharing", "venue": null, "year": 2018 }, { "authors": [ "George Philipp", "Jaime G Carbonell" ], "title": "Nonparametric neural networks", "venue": "arXiv preprint arXiv:1712.05440,", "year": 2017 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Xinwei Sun", "Lingjing Hu", "Yuan Yao", "Yizhou Wang" ], "title": "Gsplit lbi: Taming the procedural bias in neuroimaging for disease prediction", "venue": "In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). Springer,", "year": 2017 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR),", "year": 2015 }, { "authors": [ "Sebastian Thrun", "Tom M. 
Mitchell" ], "title": "Lifelong robot learning", "venue": "Robotics and Autonomous Systems,", "year": 1995 }, { "authors": [ "Yuxiong Wang", "Deva Ramanan", "Martial Hebert" ], "title": "Growing a brain: Fine-tuning by increasing model capacity", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "T. Wei", "C. Wang", "C.W. Chen" ], "title": "Modularized morphing of neural networks", "venue": "arXiv preprint arXiv:1701.03281,", "year": 2017 }, { "authors": [ "W. Wen", "F. Yan", "H. Li" ], "title": "Autogrow: Automatic layer growing in deep convolutional networks", "venue": "arXiv preprint arXiv:1906.02909,", "year": 1906 }, { "authors": [ "Catherine Wong", "Neil Houlsby", "Yifeng Lu", "Andrea Gesmundo" ], "title": "Transfer learning with neural automl", "venue": "In NIPS,", "year": 2018 }, { "authors": [ "Bo Zhao", "Xinwei Sun", "Yanwei Fu", "Yuan Yao", "Yizhou Wang" ], "title": "Msplit lbi: Realizing feature selection and dense estimation simultaneously in few-shot and zero-shot learning", "venue": "In Proceedings of the 35th International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Barret Zoph", "Quoc V Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "arXiv preprint arXiv:1611.01578,", "year": 2016 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V. Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR),", "year": 2018 } ]
[ { "heading": null, "text": "Network structures are important to learning good representations of many tasks in computer vision and machine learning communities. These structures are either manually designed, or searched by Neural Architecture Search (NAS) in previous works, which however requires either expert-level efforts, or prohibitive computational cost. In practice, it is desirable to efficiently and simultaneously learn both the structures and parameters of a network from arbitrary classes with budgeted computational cost. We identify it as a new learning paradigm – Boosting Network, where one starts from simple models, delving into complex trained models progressively. In this paper, by virtue of an iterative sparse regularization path -- Split Linearized Bregman Iteration (SplitLBI), we propose a simple yet effective boosting network method that can simultaneously grow and train a network by progressively adding both convolutional filters and layers. Extensive experiments with VGG and ResNets validate the effectiveness of our proposed algorithms." }, { "heading": "1 INTRODUCTION", "text": "In recent years, deep convolution neural networks have made remarkable achievements in compute vision and machine learning communities in addressing many important tasks, such as image classification, and segmentation. Researchers had designed many successful Deep Neural Network (DNN) architectures, from just have a few convolution layers like LeNet (LeCun et al., 1998) and AlexNet (Krizhevsky et al., 2012), to have more than 10 layers, e.g., VGG (Simonyan & Zisserman, 2014) and GoogleLeNet (Szegedy et al., 2015), and even have hundreds and thousands of layers like ResNet (He et al., 2016a). Designing a neural network architecture requires expert-level efforts to specify the key network hyper-parameters, such as type of layers, number of filters and layers (i.e., network width and depth) and so on. Since the capacity of over-parameterized networks largely depends on the number of total parameters, the number of filters and layers of networks are the key hyper-parameters that shape the expressive power of neural networks.\nIn machine learning communities, most researchers resort to AutoML methods, e.g., Neural Architecture Search (NAS), in automating architecture engineering. Critically, NAS methods indeed surpass the manually designed architectures on many tasks, such as image classification and object detection (Zoph & Le, 2016; Zoph et al., 2018). To search a good architecture, various search strategies have been employed, such as random search, Bayesian optimization, reinforcement learning, and so on. Most of them require significant amount of computational cost, which is normally orders of magnitude higher than training a network. Furthermore, some of the found architectures by NAS have much more parameters than manually designed ones on the same dataset.\nAs the field of representation learning moves closer towards artificial intelligence, it becomes important to efficiently and simultaneously learn both the structures and parameters of a network from arbitrary classes on mobile devices or even Internet of Things (IoT) devices. This requires more flexible strategies in dynamically handling the network width and depth, according to the scale of dataset. To this end, this paper studies a new paradigm – Boosting network (BoN), where one starts from simple models, delving into complex trained models progressively. 
Specifically, BoN could gradually grow the structure and train the parameters, from a simple initialized network to complex ones, on the data. Formally, we demand the following properties of an algorithm qualified as BoN:\n• It should incorporate both architecture growth (including filters and layers) and parameter learning simultaneously, in which the width and depth of the network are gradually updated while the parameters of the network are updated at the same time;\n• It should provide a classifier for prediction tasks that is comparable to state-of-the-art handcrafted architectures on the same dataset;\n• Its computational requirements, the total number of parameters of the final boosted network, and memory footprint should remain bounded, ideally in the same order of magnitude as training a manually engineered architecture on the same dataset.\nThe first two criteria express the essence of boosting networks; the third criterion identifies the key difference from NAS and other trivial or brute-force solutions, such as random search.\nThis paper proposes a method for the BoN task based on the Split Linearized Bregman Iteration (SplitLBI) (Huang et al., 2016; Fu et al., 2019), which was originally proposed by Huang et al. (2016) to learn high dimensional sparse linear models and has found applications in medical image classification (Sun et al., 2017), computer vision (Zhao et al., 2018), and training neural networks (Fu et al., 2019). Particularly, based on differential inclusions of inverse scale spaces (Huang et al., 2018), SplitLBI has the merit of learning both an over-parameterized model weight set (Over-Par set), as Stochastic Gradient Descent (SGD) does, and a structural sparsity model weight set (Stru-Spa set) in a coupled inverse scale space. Essentially, SplitLBI optimizes the Stru-Spa set as a sparse approximation of the Over-Par set, by gradually selecting the important filters and weights from the Over-Par set along the training epochs.\nEquipped with SplitLBI, our key idea of BoN comes from progressively growing networks by checking the parameters within the Over-Par and Stru-Spa sets. Essentially, along the training epochs, if enough parameters in the Over-Par set have been selected into the Stru-Spa set, it would be more advisable to increase the capacity of the Over-Par set by adding new parameters.\nFormally, to boost a network, we introduce a Growing and Training Network Algorithm (GT-Net Alg), consisting of two parts that grow filters and layers respectively: the Growing and Training filters algorithm (GT-filters Alg) and the Growing and Training layers algorithm (GT-layers Alg). Given an initial network, the GT-filters Alg can effectively grow the filters of each layer and train the network parameters at the same time. Furthermore, the GT-layers Alg first employs the GT-filters Alg to compute the filter configuration for the layers of each block, and then periodically checks whether to add a new layer to the block along the training procedure. We conduct extensive experiments on several benchmark datasets, including MNIST, Cifar-10, and Cifar-100. The results show that our GT-Net Alg can achieve comparable or even better performance than the competitors, with much less computational cost and a smaller found network. This indicates the effectiveness of our proposed algorithms. To our knowledge, this is the first time that a BoN-type algorithm addressing all three aspects above is presented in the literature.\nContributions. 
We summarize the contributions: (1) A novel learning paradigm, Boosting Network (BoN), is studied for the first time in this paper: one starts from simple models, delving into complex trained models progressively. (2) We propose a novel GT-filters Alg, which simultaneously and effectively grows the filters of each layer and trains the network parameters, given a simple initial network. (3) We present the GT-layers Alg, which grows and trains layers by exploring the over-parameterized model weight set and the structural sparsity model weight set." }, { "heading": "2 RELATED WORK", "text": "To explore a good deep learning structure, recent research focuses on employing Network Architecture Search (NAS) (Elsken et al., 2018; Zoph & Le, 2016; Zoph et al., 2018), e.g., by using reinforcement learning to search the network structures, such as the number of filters, filter size, layer depth, and so on. Despite the promising performance achieved, the computational cost of NAS algorithms themselves is prohibitively expensive, e.g., 800 GPUs running concurrently at any time to train the algorithms in Zoph & Le (2016). Several approaches improve NAS by accelerating it, including weight sharing/inheritance methods, or by decreasing the search space to a specific setting (Elsken et al., 2017; Pham et al., 2018; Cai et al., 2018c; Bender et al., 2018). But they still require a significant amount of computation. (Nonparametric Networks (Philipp & Carbonell, 2017) use group lasso to update the network structure.) In contrast, our BoN aims at growing a network while balancing computational cost, model size, and network performance.\nNetwork pruning algorithms (Han et al., 2015; Abbasi-Asl & Yu, 2017; Molchanov et al., 2017) introduce additional computational cost in fine-tuning/updating the networks. In addition, some works study manually designed compact and lightweight small DNNs (e.g., ShuffleNet (Ma et al., 2018), MobileNet (Howard et al., 2017), and SqueezeNet (Iandola et al., 2017)), which may still be tailored only to some specific tasks rather than boosting a network as in this work.\nSeveral recent works also consider adding layers to networks. Network Morphism (Chen et al., 2015; Wei et al., 2016; 2017; Cai et al., 2018a;b) aims at accelerating the training of deep networks by adding layers to a shallower net while preserving the parameters of the shallower net. One recent arXiv paper, Autogrow (Wen et al., 2019), also explores adding new layers in an automatic way. But none of these works can dynamically add filters to an existing layer as our GT-filters Alg does. Additionally, these methods still require significant computational cost and training resources." }, { "heading": "3 METHODOLOGY", "text": "" }, { "heading": "3.1 BACKGROUND: SPLIT LINEARIZED BREGMAN ITERATIONS (SPLITLBI)", "text": "Our whole algorithm is built upon the SplitLBI algorithm. The basic spirit of this algorithm (Huang et al., 2018) lies in two coupled spaces: the weight parameter W (Over-Par set) to explore over-parameterized models by gradient descent, and the structural sparsity parameter Γ (Stru-Spa set) to explore important subnetwork architectures by an inverse scale space, where those important parameters become nonzero faster than others. 
The SplitLBI algorithm can be described as follows:\nW^{t+1} = W^t − κα ∇_W L(W^t, Γ^t) (1)\nZ^{t+1} = Z^t − α ∇_Γ L(W^t, Γ^t) (2)\nΓ^{t+1} = κ · Prox(Z^{t+1}) (3)\nwhere Z^0 = Γ^0 = 0; and L(W^t, Γ^t) = L_task(W^t) + (1/(2ν)) ‖W^t − Γ^t‖_2^2 indicates the loss function at iteration t, with task-specific loss L_task(W^t) (e.g., cross-entropy loss). W is initialized as in He et al. (2015); Γ is learned to approximate W here. And Prox is the proximal mapping function of the following form:\nProx(Z) = argmin_Γ { (1/2) ‖Γ − Z‖_2^2 + ‖Γ‖_{1,2} } = max(0, 1 − 1/‖Z‖_{1,2}) Z (4)\nwhere ‖·‖_{1,2} is the group Lasso (ℓ1-ℓ2) norm for convolutional filters (or simply the Lasso (ℓ1) norm for weights), and the thresholding in Eq. (4) is applied group-wise. The hyper-parameters of SplitLBI are: α, the learning rate; and κ and ν, which control the sparsity of the learned model.\nAs in Huang et al. (2016), assuming P := (W, Γ), the SplitLBI in Eqs. (1-3) can be rewritten as a standard Linearized Bregman Iteration:\nP_{k+1} = argmin_P { ⟨P − P_k, α∇L̄(P_k)⟩ + B_Ψ^{p_k}(P, P_k) }, (5)\nwhere\nΨ(P) = Ω_λ(Γ) + (1/(2κ)) ‖P‖_2^2 = Ω_λ(Γ) + (1/(2κ)) ‖W‖_2^2 + (1/(2κ)) ‖Γ‖_2^2, (6)\np_k ∈ ∂Ψ(P_k), and B_Ψ^q is the Bregman divergence associated with the convex function Ψ, defined by\nB_Ψ^q(P, Q) := Ψ(P) − Ψ(Q) − ⟨q, P − Q⟩, for some q ∈ ∂Ψ(Q). (7)\nOne can see that Eq. (1) is essentially a gradient descent step over the primal parameter W^t. However, in Eqs. (2-3), SplitLBI lifts the original network parameters W to a coupled parameter set (W, Γ), where a sparse proximal gradient descent (or Linearized Bregman Iteration, Mirror Descent) runs over the dual parameter Γ, which enforces structural sparsity on network models. Along the training path, the Γ set learns a sparse approximation of the parameter set W; and the important filters and parameters gradually become non-zero in Γ." }, { "heading": "3.2 GROWING AND TRAINING FILTERS ALGORITHM (GT-FILTERS ALG)", "text": "Built upon SplitLBI, we further propose a GT-filters Algorithm that learns to expand the conv filters and train the network parameters simultaneously. Specifically, starting from very few filters in each conv layer, the GT-filters algorithm requires not only efficiently optimizing the parameters of the filters, but also adding more filters if the existing filters do not have enough capacity to model the distribution of the training data.\nRemarkably, our boosting network setting is very different from previous tasks, including AutoML (Wong et al., 2018), life-long learning (Wang et al., 2017; Thrun & Mitchell, 1995; Pentina & Lampert, 2015; Li & Hoiem, 2016), and knowledge distillation (Hinton et al., 2014). In general, these existing works do not allow additional expanding or fine-tuning algorithms, which are very computationally expensive in practice. In our GT-filters Alg, we define a projection of the conv filters, used to grow and train filters, as the projection of W^t onto the support set of Γ^t:\nW̃^t = Proj_{supp(Γ^t)}(W^t). (8)\nThis equation means W^t is projected onto the support of Γ^t, and the selected subset W̃^t includes the parameters present in both W^t and Γ^t. The basic idea is to monitor the gap between W^t and its projection W̃^t along the training iterations: when the gap becomes small, we are going to expand the network by adding new filters.\nFundamentally, the expressive power of recent deep convolutional neural networks is largely attributed to model over-parameterization. As in Eq. (1), the parameter set Γ sparsely approximates the weight set W. Thus intuitively, we can employ Eq.
(8) to indicate whether the network is over-parameterized: if the set W̃^t is much smaller than W^t, the model is well over-parameterized and has enough capacity for the task at the current iteration step; otherwise, if we have\n|W̃^t| / |W^t| > τ, (9)\nthen it would be more advisable to enlarge the model capacity by adding filters. Here |W̃^t| indicates the number of filters of W̃^t. More specifically, as shown in Fig. 1(a), the GT-filters Alg dynamically expands the filters from an initial small network into a reasonably large one. Starting from a small number of filters (e.g., 2) in each conv layer, more and more filters tend to take non-zero values as the algorithm iterates. Every J epochs, we can compute the ratio in Eq. (9): if this ratio passes a pre-set threshold τ, we add the same number of new filters as the existing filters into W (see Footnote 1); otherwise, we do not grow any filters in this epoch. Then we continue optimizing all the weights on the training data; this process is repeated until the loss does not change much or the maximum number of epochs is reached.\nRemarks. We highlight several insights of our GT-filters Alg: (1) As a trivial case, our GT-filters Alg can be directly utilized to boost neurons in fully connected layers. (2) The GT-filters Alg can be implemented in parallel to boost each individual layer simultaneously." }, { "heading": "3.3 GROWING AND TRAINING LAYERS ALGORITHM (GT-LAYERS ALG)", "text": "The GT-filters Alg is designed to dynamically add and train filters in one conv layer, rather than adding new conv layers to the whole network. To overcome this limitation, we further propose the GT-layers Alg, which can learn to boost the layers of a network. We assume the network (e.g., VGG or ResNet) is composed of several blocks (e.g., VGG blocks or Residual blocks) with necessary transition layers (e.g., pooling layers) between two blocks; each block has several conv layers with conv filters of the same size. The total number of blocks is fixed in the network; only layers are boosted in the GT-layers Alg.\nThe GT-layers Alg has two key steps: (1) learning the filter configuration of each conv layer of each block; and (2) boosting the layers of each block. Specifically, in Step (1): given an initial network of B blocks, where each block has only one conv layer (for a plain net) or one BasicBlock (for ResNet) as in He et al. (2016a), which has 2 conv layers, we apply the GT-filters Alg to boost the filters of each conv layer of each block one by one. The GT-filters Alg finds the final number of filters of each conv layer as M_i (i = 1, · · ·, B). In Step (2): we initialize a network in which each block has one conv layer (for a plain net) or one BasicBlock (for ResNet) as in He et al. (2016a) with 2 conv layers, each layer (or each layer of the BasicBlock) consisting of M_i (i = 1, · · ·, B) filters. We train the network from scratch by boosting the layers from the bottom blocks to the upper blocks of the network, following the data streaming from input to output, as in Fig. 1(b).\nAlong the training path, if Eq. (9) holds, i.e., the ratio passes the threshold τ for block b, we denote the training accuracy as Acc_before; we then add another conv layer or BasicBlock with M_b filters in each layer to this block, where each filter is initialized as in He et al. (2015), with zero initialization for the corresponding dimensions in Z and Γ. We continue the training process for J epochs, and denote the training accuracy as Acc_after.
If\n|Acc_after − Acc_before| < ε, (10)\nthis indicates that block b has enough capacity, and we will not add layers or filters to block b. We continue the training process until the model converges or the maximum budget (in epochs) is reached.\nRemarks. We make several reasonable simplifications in the GT-layers Alg. (1) The number of filters of each conv layer in the same block should be the same, since this is standard practice in most state-of-the-art manually designed structures, e.g., the VGG and ResNet families. (2) We still utilize Eq. (9) as the metric to control the capacity of networks. Critically, by introducing the sparse set Γ, the learned model is still over-parameterized in general, yet with a controllable number of total parameters. Thus the boosted network can enjoy the best of both worlds. (3) We have to boost layers from the bottom to the upper blocks of the network, since we rely on Eq. (10) to judge whether to stop boosting layers for each block." }, { "heading": "4 EXPERIMENTS", "text": "Dataset and Implementation. We conduct experiments to evaluate our algorithms on the MNIST and CIFAR10/100 datasets. Unless otherwise specified, the hyper-parameters of SplitLBI are κ = 1, ν = 100, α = 0.01, with batch size 128. To validate the GT-filters Alg, the initial network used has 20 filters in each conv layer and 100 neurons in each FC layer by default. For the GT-layers Alg, the initial VGG-like network has one input conv layer and 4 blocks, each with 1 conv layer of 10 filters; and the initial ResNet-like network for the GT-layers Alg has one input conv layer and 4 blocks, where each block has 1 BasicBlock of He et al. (2016a) and each BasicBlock has 2 conv layers with 20 filters in each layer. We set the hyper-parameters as J = 40, ε = 0.3, and τ = 0.4 by default. After finishing adding filters/layers, we decrease the learning rate by 1/10, continue training for 70 epochs, then decrease the learning rate by 1/10 again and train for another 30 epochs. (Footnote 1: Z and Γ add corresponding dimensions, initialized as zeros; the newly added parameters of W are randomly initialized as in He et al. (2015).)" }, { "heading": "4.1 EXPERIMENTS ON GROWING FILTERS BY GT-FILTERS ALGORITHM", "text": "Boosting Shallow Networks. We explore how the GT-filters Algorithm boosts shallow networks with one conv layer into much wider ones on the MNIST and CIFAR10/100 datasets. Given a network with initially a small number of filters (denoted as Seed Net), our GT-filters Alg adds and trains filters to produce a network with a large number of filters (denoted as Boosted Net). Here, we introduce two competitors: (1) Lower Bound (LB): directly training Seed Net by SplitLBI from scratch; (2) Upper Bound (UB): directly training a network having the same structure as Boosted Net by SplitLBI from scratch. Essentially, LB and UB serve as the lower and upper bound performance for the network learned by our GT-filters Algorithm. All models are trained by SplitLBI for 1000 epochs. We report the network structures and results in Tab. 1. Our GT-filters Alg boosts the filters of the network, denoted as Boosted-Net, which performs almost the same as UB and much better than LB in all cases in Tab. 1; this indicates that our GT-filters Alg indeed successfully boosts the filters of networks.\nBoosting Deep Networks. We further explore our GT-filters Alg boosting deep neural networks with more filters on the CIFAR10 dataset. We employ the VGG and ResNet families as the backbones, since they are the most typical models of plain nets and skip-connection nets. 
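Before describing the baselines, a minimal NumPy sketch of one SplitLBI step (Eqs. (1)-(4)) and the growth test (Eq. (9)) that drive these experiments may be useful. It is written for a single weight matrix whose rows are treated as groups (filters); gradients are taken as given, and the grouping and default hyper-parameter values are illustrative assumptions, not the authors' code.

```python
import numpy as np

def prox_group(Z, eps=1e-12):
    # Eq. (4), applied group-wise: block soft-thresholding
    # max(0, 1 - 1/||z_g||_2) z_g for each group (row) g.
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - 1.0 / np.maximum(norms, eps))
    return scale * Z

def splitlbi_step(W, Z, Gamma, grad_task_W, alpha=0.01, kappa=1.0, nu=100.0):
    # L(W, Gamma) = L_task(W) + (1 / (2 nu)) * ||W - Gamma||_2^2
    grad_W = grad_task_W + (W - Gamma) / nu        # gradient for Eq. (1)
    grad_Gamma = (Gamma - W) / nu                  # gradient for Eq. (2)
    W = W - kappa * alpha * grad_W                 # Eq. (1)
    Z = Z - alpha * grad_Gamma                     # Eq. (2)
    Gamma = kappa * prox_group(Z)                  # Eq. (3)
    return W, Z, Gamma

def should_grow(Gamma, tau=0.4):
    # Eq. (9): grow filters when the fraction of filters selected in the
    # support of Gamma (nonzero rows) exceeds the threshold tau.
    selected = (np.linalg.norm(Gamma, axis=1) > 0).sum()
    return selected / Gamma.shape[0] > tau
```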
Our algorithm is compared against several naive ways of gradually boosting the filters in Tab. 2. (1) Random-layers-adding-filters (Random): After training every J epochs, we randomly select half of all layers, double the filters of the selected layers, initialize the newly added filters, and continue training by SGD. These steps are repeated until the stopping condition is met. (2) Ordering-layers-adding-filters (Order): We equally divide all layers into bottom and upper layer groups. After training every J epochs, we double the filters of each layer in each group in turn and continue training by SGD. These steps are repeated until the stopping condition is met.\nWe set J = 30 and τ = 0.5 for our GT-filters Alg. For the competitors, we adopt the following policy for stopping filter growth: after growing filters and training for J epochs, growth stops when the validation accuracy increase, by reference to Eq. (10), is less than 1%. The maximum number of training epochs is set to 300.\nFigure 2 shows the growing and training process. In general, the training processes of Random-layers-adding-filters, Ordering-layers-adding-filters and our GT-filters Alg are very close to each other. The first time filters are grown, the performance of the networks drops sharply upon adding filters, partly because the initialization of the networks after adding filters is far from any optimum. The results are given in Table 2. The two baselines and our GT-filters Alg achieved nearly the same accuracy, but with a much larger model size than ours. Interestingly, our GT-filters Alg found a sparse network with a small number of filters in each layer and with only 1/7 of the parameters compared to the two baselines. This experiment suggests that our GT-filters Alg indeed can boost filters for deep networks.\nAblation study of GT-filters Alg. We conduct an ablation study to validate the efficacy of the hyper-parameters J and τ in Tab. 3 and Tab. 4, trained on the CIFAR10 dataset. Our model is compared against a VGG-16 network trained for 350 epochs. We found that the higher the J value, the larger the
The results are shown in Tab. 5. We find that on CIFAR10, our B-ResNet-18 has much less parameters, about 1/5 comparing to S-ResNet-18, but achieve comparable performance to S-ResNet-18 and even higher than TFB-ResNet-18. On CIFAR100, our boosted network performed a little worse than S-ResNet-18 but we performed much better than TFB-ResNet18 and we use less than 2/3 number of parameters comparing to S-ResNet-18. The most important is that total FLOPs we used in boosting are not enough for training a ResNet18 network, which demonstrates the effectiveness of our algorithms." }, { "heading": "4.2 EXTENDING TO GROWING LAYERS BY GT-LAYERS ALGORITHM", "text": "Section 4.1 explores our GT-filers Alg and shows that our method achieved good results in boosting filter for fixed deep networks. In this section, we study our GT-layers Alg in boosting both filters and layers for a shallow ‘seed’ network on CIFAR10 and CIFAR100 datasets. Here we use the initial plain net, and initial residual net referring to VGG net and ResNet, individually. The structures are: (1) (ResNet) the same architecture as ResNet in He et al. (2016b): it has 4 blocks, and each block has several BasicBlocks and 2 convolutional layers in each BasicBlock. We initialize each convolutional layer with 20 filters. (2) (PlainNet) a VGG-like plain net: it has 4 blocks divided by pooling layers, and each block has several conv layers with 10 filters in each conv layer. The processes of GT-layer Alg. have two parts: firstly we grow filters for seed net and get the configuration of filter number of all blocks, then we start growing layers. Note that we keep same filter number of conv layers inside of a block, so we first search the filter number configuration.\nWe also compare two types of DNNs of Autogrow (Wen et al., 2019): (1) (Basic4ResNet) a variant of ResNet with basic residual blocks 3 used for ImageNet in He et al. (2016b); (2) (Plain4Net) a VggNet-like plain net by removing shortcuts in Basic4ResNet.\nTable 6 compares the growing results of Autogrow and our boosting results using GT-layers Alg. Autogrow is one of the most efficiency methods in growing layers of networks. Specifically, Autogrow can grow layers from a seed network, but their approach does not explore the filter configuration of each block. If compared aganist our GT-layers Alg, the results networks have much deeper with a large number of parameters. On CIFAR10 dataset, the Boosted-Net by our GT-layers Alg performs as good as Plain4Net and Basic4Net models by Autogrow. However, our boosted networks are much shallower than the found nets of Autogrow but having nearly the same performance. For\nexample, on CIFAR10, our GT-layers Alg found a 16 layer VGG-like network with 2.58M parameters, Autogrow found a 138 layer network with approximate 105.06M parameters. On CIAFR100, by using plain net, our algorithm not only boosts much shallower networks and small number of parameters, but also performs much better than the models found by Autogrow. In general, our GTLayer Alg could not only efficiently boost networks from shallow to properly deep, but also achieve very good performance.\nWe also conduct ablation study of the hyper-parameter . We compare the results of different , and the results of standard ResNet18 and ResNet34 trained for 350 and 300 epochs, respectively. Table 7 shows the boosting results of different and standard models. As expected, smaller will find a deeper network. The accuracy of boosted models using different is not so much difference from each other. 
This indicates that our GT-layers Alg is not sensitive to small ε. Besides, all of our found networks performed equally well or better compared to standard networks. This suggests that our algorithm can perform very well in boosting layers." }, { "heading": "5 CONCLUSION", "text": "In this paper, we study the novel task of boosting networks and propose an approach that simultaneously grows and trains filters and layers: the GT-filters Alg and the GT-layers Alg. In experiments on VGG and ResNets, these algorithms efficiently boost fixed networks from a small number of filters in each layer and boost shallow seed networks, respectively, with accuracies comparable to big models but with remarkably more economical representations." } ]
2,019
null
SP:03b7bce7c88de2434b54fc0483d8905aa04203e9
[ "The paper introduces a framework for quantifying information about one random variable, given another random variable (“side information”) and, importantly, a function class of allowed transformations that can be applied to the latter. This matches the typical scenario in machine learning, where observations (playing the role of side information) can be transformed (with a restricted class of transformation-functions) such that they become maximally predictive about another random variable of interest (“labels”). Using this framework, the paper defines the notion of conditional F-entropy and F-entropy (by conditioning on an empty set). Interestingly, both entropic quantities are shown to have many desirable properties known from Shannon entropy - and when allowing the function class of transformations to include all possible models F-entropies are equivalent to Shannon entropies. The paper then further defines “predictive F-information” which quantifies the increase in predictability about one random variable when given side information, under a restricted function-class of allowed transformations of the side information. Importantly, transformations of side information can increase predictive F-information (which is the basis for the notion of “usable” information), which is in contrast to the data processing inequality that applies to Shannon information and states that no transformation of a variable can increase predictability of another variable further than the un-transformed variable (information cannot be generated by transforming random variables). The paper highlights interesting properties of the F-quantities, most notably a PAC bound on F-information estimation from data, which gives reason to expect F-information estimation to be more data-efficient than estimating Shannon-information (particularly in the high-dimensional regime). This finding is confirmed by four types of interesting experiments, some of which make use of a modified version of a tree-structure learning algorithm proposed in the paper (using predictive F-information instead of Shannon mutual information).", "The paper presents a generalization of classical definitions of entropy and mutual information that can capture computational constraints. Intuitively, information theoretic results assume infinite computational resources, so they may not correspond to how we treat \"information\" in practice. One example is public-key encryption. An adversary that has infinite time will eventually break the code so the decrypted message conveys the same amount of information (in a classical sense) as the plaintext message. In practice, this depends on computational time. " ]
We propose a new framework for reasoning about information in complex systems. Our foundation is based on a variational extension of Shannon’s information theory that takes into account the modeling power and computational constraints of the observer. The resulting predictive V-information encompasses mutual information and other notions of informativeness such as the coefficient of determination. Unlike Shannon’s mutual information and in violation of the data processing inequality, V-information can be created through computation. This is consistent with deep neural networks extracting hierarchies of progressively more informative features in representation learning. Additionally, we show that by incorporating computational constraints, V-information can be reliably estimated from data even in high dimensions with PAC-style guarantees. Empirically, we demonstrate predictive V-information is more effective than mutual information for structure learning and fair representation learning.
[ { "affiliations": [], "name": "Yilun Xu" }, { "affiliations": [], "name": "Shengjia Zhao" }, { "affiliations": [], "name": "Jiaming Song" }, { "affiliations": [], "name": "Stefano Ermon" } ]
[ { "authors": [ "Peter L. Bartlett", "Shahar Mendelson" ], "title": "Rademacher and gaussian complexities: Risk bounds and structural results", "venue": "J. Mach. Learn. Res.,", "year": 2001 }, { "authors": [ "Roberto Battiti" ], "title": "Using mutual information for selecting features in supervised neural net learning", "venue": "IEEE Transactions on neural networks,", "year": 1994 }, { "authors": [ "Mohamed Ishmael Belghazi", "Aristide Baratin", "Sai Rajeshwar", "Sherjil Ozair", "Yoshua Bengio", "R. Devon Hjelm", "Aaron C. Courville" ], "title": "Mutual information neural estimation", "venue": null, "year": 2018 }, { "authors": [ "C Chow", "Cong Liu" ], "title": "Approximating discrete probability distributions with dependence trees", "venue": "IEEE transactions on Information Theory,", "year": 1968 }, { "authors": [ "C.K. Chow", "Terry J. Wagner" ], "title": "Consistency of an estimate of tree-dependent probability distributions (corresp.)", "venue": "IEEE Trans. Information Theory,", "year": 1973 }, { "authors": [ "Yau Chu", "T. Liu" ], "title": "On the shortest arborescence of a directed graph", "venue": "Scientia Sinica,", "year": 1965 }, { "authors": [ "Georges A. Darbellay", "Igor Vajda" ], "title": "Estimation of the information by an adaptive partitioning of the observation space", "venue": "IEEE Trans. Information Theory,", "year": 1999 }, { "authors": [ "Morris H DeGroot" ], "title": "Uncertainty, information, and sequential experiments", "venue": "The Annals of Mathematical Statistics,", "year": 1962 }, { "authors": [ "John Duchi", "Khashayar Khosravi", "Feng Ruan" ], "title": "Multiclass classification, information, divergence and surrogate risk", "venue": "The Annals of Statistics,", "year": 2018 }, { "authors": [ "Harrison A Edwards", "Amos J. Storkey" ], "title": "Censoring representations with an adversary", "venue": "CoRR, abs/1511.05897,", "year": 2015 }, { "authors": [ "Wei Gao", "Zhi-Hua Zhou" ], "title": "Dropout rademacher complexity of deep neural networks", "venue": "Science China Information Sciences,", "year": 2016 }, { "authors": [ "Weihao Gao", "Sreeram Kannan", "Sewoong Oh", "Pramod Viswanath" ], "title": "Estimating mutual information for discrete-continuous mixtures", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Peter D Grünwald", "A Philip Dawid" ], "title": "Game theory, maximum entropy, minimum discrepancy and robust bayesian decision theory", "venue": "Annals of Statistics,", "year": 2004 }, { "authors": [ "Edwin T Jaynes" ], "title": "On the rationale of maximum-entropy methods", "venue": "Proceedings of the IEEE,", "year": 1982 }, { "authors": [ "Sham M. Kakade", "Karthik Sridharan", "Ambuj Tewari" ], "title": "On the complexity of linear prediction: Risk bounds, margin bounds, and regularization", "venue": "In NIPS,", "year": 2008 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "CoRR, abs/1312.6114,", "year": 2013 }, { "authors": [ "Alexander Kraskov", "Harald Stögbauer", "Peter Grassberger" ], "title": "Estimating mutual information", "venue": "Phys. Rev. 
E,", "year": 2004 }, { "authors": [ "Michel Ledoux", "Michel Talagrand" ], "title": "Probability in Banach Spaces: isoperimetry and processes", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "EK Lenzi", "RS Mendes", "LR Da Silva" ], "title": "Statistical mechanics based on renyi entropy", "venue": "Physica A: Statistical Mechanics and its Applications,", "year": 2000 }, { "authors": [ "Christos Louizos", "Kevin Swersky", "Yujia Li", "Max Welling", "Richard S. Zemel" ], "title": "The variational fair autoencoder", "venue": "CoRR, abs/1511.00830,", "year": 2015 }, { "authors": [ "David Madras", "Elliot Creager", "Toniann Pitassi", "Richard S. Zemel" ], "title": "Learning adversarially fair and transferable", "venue": "representations. ArXiv,", "year": 2018 }, { "authors": [ "Daniel Marbach", "James C. Costello", "Robert Küffner", "N. Vega", "Robert J. Prill", "Diogo M Camacho", "Kyle R. Allison", "Manolis Kellis", "James J. Collins", "Gustavo Stolovitzky" ], "title": "Wisdom of crowds for robust gene network inference", "venue": "In Nature Methods,", "year": 2012 }, { "authors": [ "Patrick E. Meyer", "Kevin Kontos", "Frédéric Lafitte", "Gianluca Bontempi" ], "title": "Information-theoretic inference of large transcriptional regulatory networks", "venue": "In EURASIP J. Bioinformatics and Systems Biology,", "year": 2007 }, { "authors": [ "XuanLong Nguyen", "Martin J. Wainwright", "Michael I. Jordan" ], "title": "Estimating divergence functionals and the likelihood ratio by convex risk minimization", "venue": "IEEE Transactions on Information Theory,", "year": 2010 }, { "authors": [ "Liam Paninski", "Masanao Yajima" ], "title": "Undersmoothed kernel entropy estimators", "venue": "IEEE Transactions on Information Theory,", "year": 2008 }, { "authors": [ "Rafael Pass", "Abhi Shelat" ], "title": "A course in cryptography", "venue": null, "year": 2010 }, { "authors": [ "Judea Pearl" ], "title": "Causality: Models, reasoning, and inference", "venue": null, "year": 2000 }, { "authors": [ "Jonas Peters", "Dominik Janzing", "Bernhard Schölkopf" ], "title": "Elements of causal inference: foundations and learning algorithms", "venue": "MIT press,", "year": 2017 }, { "authors": [ "Ben Poole", "Sherjil Ozair", "Aaron van den Oord", "Alexander A Alemi", "George Tucker" ], "title": "On variational bounds of mutual information", "venue": null, "year": 1905 }, { "authors": [ "Rajesh Ranganath", "Sean Gerrish", "David M. Blei" ], "title": "Black box variational inference", "venue": "In AISTATS,", "year": 2013 }, { "authors": [ "Tim Salimans", "Andrej Karpathy", "Xi Chen", "Diederik P. Kingma" ], "title": "Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other", "venue": "modifications. ArXiv,", "year": 2017 }, { "authors": [ "Claude E. Shannon", "Warren Weaver" ], "title": "The mathematical theory of communication", "venue": null, "year": 1948 }, { "authors": [ "Jiaming Song", "Stefano Ermon" ], "title": "Understanding the limitations of variational mutual information estimators", "venue": "arXiv preprint arXiv:1910.06222,", "year": 2019 }, { "authors": [ "Jiaming Song", "Pratyusha Kalluri", "Aditya Grover", "Shengjia Zhao", "Stefano Ermon" ], "title": "Learning controllable fair representations", "venue": "In AISTATS,", "year": 2018 }, { "authors": [ "Zoltán Szabó" ], "title": "Information theoretical estimators toolbox", "venue": "J. Mach. Learn. 
{ "authors": [ "Leslie G Valiant" ], "title": "A theory of the learnable", "venue": "Communications of the ACM,", "year": 1984 }, { "authors": [ "Aäron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "ArXiv,", "year": 2018 } ]
[ { "heading": null, "text": "We propose a new framework for reasoning about information in complex systems. Our foundation is based on a variational extension of Shannon’s information theory that takes into account the modeling power and computational constraints of the observer. The resulting predictive V-information encompasses mutual information and other notions of informativeness such as the coefficient of determination. Unlike Shannon’s mutual information and in violation of the data processing inequality, V-information can be created through computation. This is consistent with deep neural networks extracting hierarchies of progressively more informative features in representation learning. Additionally, we show that by incorporating computational constraints, V-information can be reliably estimated from data even in high dimensions with PAC-style guarantees. Empirically, we demonstrate predictive V-information is more effective than mutual information for structure learning and fair representation learning." }, { "heading": "1 INTRODUCTION", "text": "Extracting actionable information from noisy, possibly redundant, and high-dimensional data sources is a key computational and statistical challenge at the core of AI and machine learning. Information theory, which lies at the foundation of AI and machine learning, provides a conceptual framework to characterize information in a mathematically rigorous sense (Shannon & Weaver, 1948; Cover & Thomas, 1991). However, important computational aspects are not considered in information theory. To illustrate this, consider a dataset of encrypted messages intercepted from an opponent. According to information theory, these encrypted messages have high mutual information with the opponent’s plans. Indeed, with infinite computation, the messages can be decrypted and the plans revealed. Modern cryptography originated from this observation by Shannon that perfect secrecy is (essentially) impossible if the adversary is computationally unbounded (Shannon & Weaver, 1948). This motivated cryptographers to consider restricted classes of adversaries that have access to limited computational resources (Pass & Shelat, 2010). More generally, it is known that information theoretic quantities can be expressed in terms of betting games (Cover & Thomas, 1991). For example, the (conditional) entropy of a random variable X is directly related to how predictable X is in a certain betting game, where an agent is rewarded for correct guesses. Yet, the standard definition unrealistically assumes agents are computationally unbounded, i.e., they can employ arbitrarily complex prediction schemes.\nLeveraging modern ideas from variational inference and learning (Ranganath et al., 2013; Kingma & Welling, 2013; LeCun et al., 2015), we propose an alternative formulation based on realistic computational constraints that is in many ways closer to our intuitive notion of information, which we term predictive V-information. Without constraints, predictive V-information specializes to classic mutual information. Under natural restrictions, V-information specializes to other well-known notions of predictiveness, such as the coefficient of determination (R2). A consequence of this new formulation is that computation can “create usable information” (e.g., by decrypting the intercepted messages), invalidating the famous data processing inequality. 
This generalizes the idea that clever feature extraction enables prediction with extremely simple (e.g., linear) classifiers, a key notion in modern representation and deep learning (LeCun et al., 2015).
As an additional benefit, we show that predictive V-information can be estimated with statistical guarantees using the Probably Approximately Correct framework (Valiant, 1984). This is in sharp contrast with Shannon information, which is well known to be difficult to estimate for high dimensional or continuous random variables (Battiti, 1994). Theoretically, we show that the statistical guarantees for estimating V-information translate to statistical guarantees for a variant of the Chow-Liu algorithm for structure learning. In practice, when the observer employs deep neural networks as a prediction scheme, V-information outperforms methods that approximate Shannon information in various applications, including Chow-Liu tree construction in high dimensions and gene regulatory network inference." }, { "heading": "2 DEFINITIONS AND NOTATIONS", "text": "To formally define the predictive V-information, we begin with a formal model of a computationally bounded agent trying to predict the outcome of a real-valued random variable Y; the agent is either provided another real-valued random variable X as side information, or provided no side information ∅. We use X and Y to denote the sample spaces of X and Y respectively (while assuming they are separable), and use P(X) to denote the set of all probability measures over the Borel algebra on X (P(Y) is defined similarly for Y). Definition 1 (Predictive Family).¹ Let Ω = {f : X ∪ {∅} → P(Y)}. We say that V ⊆ Ω is a predictive family if it satisfies
∀f ∈ V, ∀P ∈ range(f), ∃f′ ∈ V s.t. ∀x ∈ X : f′[x] = P, f′[∅] = P. (1)
A predictive family is a set of predictive models the agent is allowed to use, e.g., due to computational or statistical constraints. We refer to the additional condition in Eq. (1) as optional ignorance. Intuitively, it means that the agent can, in the context of the prediction game we define next, ignore the side information if she chooses to. Definition 2 (Predictive conditional V-entropy). Let X, Y be two random variables taking values in X × Y, and let V be a predictive family. Then the predictive conditional V-entropy is defined as
HV(Y | X) = inf_{f∈V} E_{x,y∼X,Y}[−log f[x](y)],
HV(Y | ∅) = inf_{f∈V} E_{y∼Y}[−log f[∅](y)].
We additionally call HV(Y | ∅) the V-entropy, and also denote it as HV(Y).
In our notation f is a function X ∪ {∅} → P(Y), so f[x] ∈ P(Y) is a probability measure on Y chosen based on the received side information x (we use f[·] instead of the more conventional f(·)); and f[x](y) ∈ R is the value of the density evaluated at y ∈ Y. Intuitively, the (conditional) V-entropy is the smallest expected negative log-likelihood that can be achieved when predicting Y given the observation (side information) X (or no side information ∅), using models from V. Eq. (1) means that whenever the agent can use P to predict Y's outcomes, it has the option to ignore the input and use P no matter whether X is observed or not.
Definition 2 generalizes several known definitions of uncertainty. For example, as shown in Proposition 1, if V is the largest possible predictive family that includes all possible models, i.e. V = Ω, then Definition 2 reduces to Shannon entropy: HΩ(Y | X) = H(Y | X) and HΩ(Y | ∅) = HΩ(Y) = H(Y).
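To see why the unrestricted family Ω recovers Shannon entropy, recall the standard cross-entropy decomposition; the following display restates the argument proved formally in Appendix A.1:

```latex
\mathbb{E}_{x,y \sim X,Y}\bigl[-\log f[x](y)\bigr]
  \;=\; \mathbb{E}_{x \sim X}\bigl[\mathrm{KL}\bigl(P_{Y|x}\,\|\,f[x]\bigr)\bigr] + H(Y \mid X)
  \;\geq\; H(Y \mid X),
```

with equality when f[x] equals the true conditional P_{Y|x}; since Ω contains this choice, the infimum over f ∈ Ω attains H(Y | X).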
By choosing more restrictive families V, we recover several other notions of uncertainty, such as the trace of the covariance, as will be shown in Proposition 1.
Shannon mutual information is a measure of changes in entropy when conditioning on new variables:
I(X;Y) = H(Y) − H(Y | X) = HΩ(Y) − HΩ(Y | X) (2)
Here, we will use predictive V-entropy to define an analogous quantity, IV(X → Y), to represent the change in predictability of an output variable Y when given side information X.
¹ Regularity Conditions: To minimize technical overhead we restrict our discussion only to distributions with probability density functions (PDF) or probability mass functions (PMF) with respect to the underlying measure. Also, ∅ ∉ X.
Definition 3 (Predictive V-information). Let X, Y be two random variables taking values in X × Y, and let V be a predictive family. The predictive V-information from X to Y is defined as
IV(X → Y) = HV(Y | ∅) − HV(Y | X) (3)" }, { "heading": "2.1 IMPORTANT SPECIAL CASES", "text": "Several important notions of uncertainty and predictiveness are special cases of our definition. Note that when we are defining the V-entropy of a random variable Y with sample space Y = Rd (without side information), for convenience we can assume X is empty, X = ∅ (this does not violate our requirement that ∅ ∉ X). Proposition 1. For V-entropy and V-information, we have
1. Let Ω be as in Def. 1. Then HΩ(Y) is the Shannon entropy, HΩ(Y | X) is the Shannon conditional entropy, and IΩ(Y → X) is the Shannon mutual information.
2. Let Y = Rd and V = {f : {∅} → Pµ | µ ∈ Rd}, where Pµ is the distribution with density y ↦ (1/Z) e^(−‖y−µ‖₂) and Z = ∫ e^(−‖y−µ‖₂) dy; then the V-entropy of a random variable Y equals its mean absolute deviation, up to an additive constant.
3. Let Y = Rd and V = {f : {∅} → N(µ, Σ) | µ ∈ Rd, Σ = (1/2) I_{d×d}}; then the V-entropy of a random variable Y equals the trace of its covariance, tr(Cov(Y)), up to an additive constant.
4. Let V = {f : {∅} → Q_{t,θ} | θ ∈ Θ}, where Q_{t,θ} is a distribution in a minimal exponential family with sufficient statistics t : Y → Rd and set of natural parameters Θ. For a random variable Y with expected sufficient statistics µ_Y = E[t(Y)], the V-entropy of Y is the maximum Shannon entropy over all random variables Ŷ with identical expected sufficient statistics, i.e. E[t(Ŷ)] = µ_Y.
5. Let Y = Rd, let X be any vector space, and let V = {f : x ↦ N(φ(x), Σ) for x ∈ X, ∅ ↦ N(µ, Σ) | µ ∈ Rd, Σ = (1/2) I_{d×d}, φ ∈ Φ}, where Φ is the set of linear functions {φ : X → Rd}; then the V-information IV(X → Y) equals the (unnormalized) maximum coefficient of determination R² · tr(Cov(Y)) for linear regression.
The trace of the covariance represents a natural notion of uncertainty: for example, a random variable with zero variance (when d = 1, tr(Cov(Y)) = Var(Y)) is trivial to predict. Proposition 1.3 shows that the trace of the covariance corresponds to a notion of surprise (in the Shannon sense) for an agent restricted to make predictions using certain Gaussian models. More broadly, a similar analogy can be drawn for other exponential families of distributions. In the same spirit, the coefficient of determination, also known as the fraction of variance explained, represents a natural notion of informativeness for computationally bounded agents. Also note that in the case of Proposition 1.4, the V-entropy is invariant if the expected sufficient statistics remain the same."
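As a concrete illustration of Propositions 1.3 and 1.5, the following minimal sketch (ours, not code from the paper; it assumes numpy and scikit-learn and uses synthetic data) estimates V-information for the fixed-covariance linear-Gaussian family, where it reduces to comparing total and residual variance:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic data: Y is a noisy linear function of X.
n, dx, dy = 5000, 3, 2
X = rng.normal(size=(n, dx))
Y = X @ rng.normal(size=(dx, dy)) + rng.normal(size=(n, dy))

# V-entropy under the Gaussian family of Proposition 1.3 (Sigma = I/2):
# H_V(Y) = tr(Cov(Y)) up to an additive constant, attained at mu = E[Y].
h_y = np.trace(np.cov(Y, rowvar=False, bias=True))

# Conditional V-entropy under the linear-Gaussian family of Proposition 1.5:
# the infimum over linear phi is attained by ordinary least squares.
resid = Y - LinearRegression().fit(X, Y).predict(X)
h_y_given_x = np.trace(np.cov(resid, rowvar=False, bias=True))

# Predictive V-information (Definition 3); the shared additive constant
# in the two V-entropies cancels in the difference.
iv = h_y - h_y_given_x
print(f"I_V(X -> Y) = {iv:.3f}  (R^2 = {iv / h_y:.3f})")
```

The printed value is exactly the explained variance R² · tr(Cov(Y)), which is why this family makes V-information both interpretable and trivially estimable.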
}, { "heading": "3 PROPERTIES OF V -INFORMATION", "text": "" }, { "heading": "3.1 ELEMENTARY PROPERTIES", "text": "We first show several elementary properties of V-entropy and V-information. In particular, Vinformation preserves many properties of Shannon information that are desirable in a machine learning context. For example, mutual information (and V-information) should be non-negative as conditioning on additional side information X should not reduce an agent’s ability to predict Y . Proposition 2. Let Y and X be any random variables on Y and X , and V and U be any predictive families, then we have\n1. Monotonicity: If V ⊆ U , then HV(Y ) ≥ HU (Y ), HV(Y | X) ≥ HU (Y | X).\n2. Non-Negativity: IV(X → Y ) ≥ 0.\n3. Independence: If X is independent of Y , IV(X → Y ) = IV(Y → X) = 0.\nThe optional ignorance requirement in Eq.(1) is a technical condition needed for these properties to hold. Intuitively, it guarantees that conditioning on side information does not restrict the class of densities the agent can use to predict Y . This property is satisfied by many existing machine learning models, often by setting some weights to zero so that an input is effectively ignored." }, { "heading": "3.2 ON THE PRODUCTION OF INFORMATION THROUGH PREPROCESSING", "text": "The Data Processing Inequality guarantees that computing on data cannot increase its mutual information with other random variables. Formally, letting t : X → X be any function, t(X) cannot have higher mutual information with Y than X: I(t(X);Y ) ≤ I(X;Y ). But is this property desirable? In analyzing optimal communication, yes - it demonstrates a fundamental limit to the number of bits that can be transmitted through a communication channel. However, we argue that in machine learning settings this property is less appropriate.\nConsider an RSA encryption scheme where the public key is known. Given plain text and its corresponding encrypted text X , if we have infinite computation, we can perfectly compute one from the other. Therefore, the plain text and the encrypted text should have identical Shannon mutual information with respect to any label Y we want to predict. However, to any human (or machine learning algorithm), it is certainly easier to predict the label from the plain text than the encrypted text. In other words, decryption increases a human’s ability to predict the label: processing increases the “usable information”. More formally, denoting t as the decryption algorithm and V as a class of natural language processing functions, we have that: IV(t(X)→ Y ) > IV(X → Y ) ≈ 0. As another example, consider the mutual information between an image’s pixels and its label. Due to data processing inequality, we cannot expect to use a function to map raw pixels to “features” that have higher mutual information with the label. However, the fundamental principle of representation learning is precisely the ability to learn predictive features — functions of the raw inputs that enable predictions with higher accuracy. Because of this key difference between V-information and Shannon information, machine learning practices such as representation learning can be justified in the information theoretic context." }, { "heading": "3.3 ON THE ASYMMETRY OF PREDICTIVE V -INFORMATION", "text": "V-information also captures the intuition that sometimes, it is easy to predict Y from X but not vice versa. 
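Staying with the cryptographic running example, here is a toy sketch (our own construction, assuming scikit-learn; it is not an experiment from the paper) of how preprocessing creates usable information: a one-bit one-time pad makes the label invisible to a logistic-regression family V, while the decryption map t makes it fully predictable for the same family.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)
n = 20000
p = rng.integers(0, 2, size=n)   # plaintext bit, also the label Y
k = rng.integers(0, 2, size=n)   # key, known to the observer
c = p ^ k                        # ciphertext: one-time pad

def v_info(features, y):
    """Empirical I_V(X -> Y) (Definition 4) for V = logistic models, in nats."""
    h_y = log_loss(y, np.full(len(y), y.mean()))  # best "ignore X" model
    probs = LogisticRegression().fit(features, y).predict_proba(features)
    return h_y - log_loss(y, probs)

X = np.stack([c, k], axis=1)     # observing (ciphertext, key): Y = c XOR k
tX = (c ^ k).reshape(-1, 1)      # after decryption, t(X) = plaintext
print(v_info(X, p))              # ~ 0 nats: XOR is not logistically decodable
print(v_info(tX, p))             # ~ log 2 nats: t created usable information
```

Shannon information cannot distinguish the two cases: the pair (c, k) determines p exactly, so I(X; Y) = I(t(X); Y) = log 2; only the V-information changes, which is exactly the violation of the data processing inequality discussed above.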
In fact, modern cryptography is founded on the assumption that certain functions h : X → Y are one-way, meaning that there exists a polynomial algorithm to compute h(x) but no polynomial algorithm to compute h−1(y). This means that if V contains all polynomial-time computable functions, then IV(X → h(X)) ≫ IV(h(X) → X). This property is also reasonable in the machine learning context. For example, several important methods for causal discovery (Peters et al., 2017) rely on this asymmetry: if X causes Y, then usually it is easier to predict Y from X than vice versa; another commonly used assumption is that Y | X can be accurately modeled by a Gaussian distribution, while X | Y cannot (Pearl, 2000)." }, { "heading": "4 PAC GUARANTEES FOR V -INFORMATION ESTIMATION", "text": "For many practical applications of mutual information (e.g., structure learning), we do not know the joint distribution of X, Y, so we cannot directly compute the mutual information. Instead, we only have samples {(xi, yi)}Ni=1 ∼ X, Y and need to estimate mutual information from data. Shannon information is notoriously difficult to estimate for high dimensional random variables. Although non-parametric estimators of mutual information exist (Kraskov et al., 2004; Darbellay & Vajda, 1999; Gao et al., 2017), these estimators do not scale to high dimensions. Several variational estimators for Shannon information have been proposed recently (van den Oord et al., 2018; Nguyen et al., 2010; Belghazi et al., 2018), but they have two shortcomings: due to their variational assumptions, their bias/variance tradeoffs are poorly understood, and they are still not efficient enough for high dimensional problems. For example, the CPC estimator suffers from large bias, since its estimates saturate at log N, where N is the batch size (van den Oord et al., 2018; Poole et al., 2019); the NWJ estimator suffers from large variance that grows at least exponentially in the ground-truth mutual information (Song & Ermon, 2019). Please see Appendix B for more details and proofs.
On the other hand, V-information is explicit about its assumptions (as a feature instead of a bug). V-information is also easy to estimate with guarantees if we can bound the complexity of V (such as its Rademacher or covering-number complexity). As we will show, bounds on the complexity of V directly translate to PAC (Valiant, 1984) bounds for V-information estimation. In practice, we can efficiently optimize over V, e.g., via gradient descent. In this paper we will present the Rademacher complexity version; other complexity measures (such as the covering number) can be derived similarly. Definition 4 (Empirical V-information). Let X, Y be two random variables taking values in X × Y, let D = {(xi, yi)}Ni=1 ∼ X, Y denote the set of samples drawn from the joint distribution over X and Y, and let V be a predictive family. The empirical V-information (under D) is the following V-information under the empirical distribution defined via D:
ÎV(X → Y; D) = inf_{f∈V} (1/|D|) Σ_{yi∈D} log(1/f[∅](yi)) − inf_{f∈V} (1/|D|) Σ_{xi,yi∈D} log(1/f[xi](yi)) (4)
Then we have the following PAC bound for the empirical V-information: Theorem 1. Assume that ∀f ∈ V, x ∈ X, y ∈ Y: log f[x](y) ∈ [−B, B].
Then for any δ ∈ (0, 0.5), with probability at least 1 − 2δ, we have:
|IV(X → Y) − ÎV(X → Y; D)| ≤ 4R_{|D|}(GV) + 2B √(2 log(1/δ) / |D|) (5)
where we define the function family GV = {g | g(x, y) = log f[x](y), f ∈ V}, and R_N(G) denotes the Rademacher complexity of G with sample number N.
Typically, the Rademacher complexity term satisfies R_{|D|}(GV) = O(|D|^{−1/2}) (Bartlett & Mendelson, 2001; Gao & Zhou, 2016). It is worth noting that a complex function family V (i.e., one with large Rademacher complexity) could lead to overfitting. On the other hand, an overly simple V may not be expressive enough to capture the relationship between X and Y. As an example of the theorem, we provide a concrete estimation bound when V is chosen to be the set of linear functions mapping X to the mean of a Gaussian distribution. This was shown in Proposition 1 to lead to the coefficient of determination.
Corollary 1.1. Assume X = {x ∈ R^{dx} : ‖x‖₂ ≤ kx} and Y = {y ∈ R^{dy} : ‖y‖₂ ≤ ky}. If
V = {f : f[x] = N(Wx + b, I), f[∅] = N(c, I), W ∈ R^{dy×dx}, b, c ∈ R^{dy}, ‖(W, b)‖₂ ≤ 1},
then, denoting M = (kx + ky)² + log 2π, for all δ ∈ (0, 0.5), with probability at least 1 − 2δ:
|IV(X → Y) − ÎV(X → Y; D)| ≤ (M/√(4|D|)) (1 + 4√(2 log(1/δ)))
Similar results can be obtained using other classes of machine learning models with known (Rademacher) complexity." }, { "heading": "5 STRUCTURE LEARNING WITH V -INFORMATION", "text": "Among the many possible applications of V-information, we show how to use it to perform structure learning with provable guarantees. The goal of structure learning is to learn a directed graphical model (Bayesian network) or undirected graphical model (Markov network) that best captures the (conditional) independence structure of an underlying data generating process. Structure learning is difficult in general, but if we restrict ourselves to a certain set of graphs G, there are efficient algorithms. In particular, the Chow-Liu algorithm (Chow & Liu, 1968) can efficiently learn tree graphs (i.e., G is the set of trees). Chow & Liu (1968) show that the problem can be reduced to:
g∗ = argmax_{g∈G_tree} Σ_{(Xi,Xj)∈edge(g)} I(Xi, Xj) (6)
where I(Xi, Xj) is the Shannon mutual information between variables Xi and Xj. In other words, it suffices to construct the maximal weighted spanning tree where the weight between two vertices is their Shannon mutual information. Chow & Wagner (1973) show that the Chow-Liu algorithm is consistent, i.e., it recovers the true solution as the dataset size goes to infinity. However, the finite sample behavior of the Chow-Liu algorithm for high dimensional problems is much less studied, due to the difficulty of estimating mutual information. In fact, we show in our experiments that the empirical performance is often poor, even with state-of-the-art estimators. Additionally, methods based on mutual information cannot take advantage of intrinsically asymmetric relationships, which are common, for example, in gene regulatory networks (Meyer et al., 2007).
To address these issues, we propose a new structure learning algorithm based on V-information instead of Shannon information. The idea is that we can associate to each directed edge in G (i.e., each pair of variables) a suitable predictive family Vi,j (cf. Def. 1). The main challenge is that we cannot simply replace mutual information with V-information in Eq. (6)
because V-information is asymmetric – we now have to optimize over directed trees:
g∗ = argmax_{g∈G_d-tree} Σ_{i=2}^{m} I_{V_{t(g)(i),i}}(X_{t(g)(i)} → Xi) (7)
where G_d-tree is the set of directed trees, t(g) : N → N is the function mapping each non-root node of the directed tree g to its parent, and Vi,j is the predictive family for random variables Xi and Xj. After estimating the V-information on each edge, we use the Chu-Liu algorithm (Chu & Liu, 1965) to construct the maximal directed spanning tree. This allows us to solve (7) exactly, even though there is a combinatorially large number of trees to consider. Pseudocode is summarized in Algorithm 1 in the Appendix. Denoting C(g) = Σ_{i=2}^{m} I_{V_{t(g)(i),i}}(X_{t(g)(i)} → Xi), we show in the following theorem that, unlike the original Chow-Liu algorithm, our algorithm has guarantees in the finite-sample regime, even in continuous settings: Theorem 2. Let {Xi}mi=1 be a set of m random variables, and let Di,j (resp. Dj) be the set of samples drawn from P(Xi, Xj) (resp. P(Xj)). Denote by g∗ the optimal directed tree with maximum expected sum of edge weights C(g), and by ĝ the optimal directed tree constructed on the dataset D. Then, under the assumption of Theorem 1, for any δ ∈ (0, 1/(2m(m−1))), with probability at least 1 − 2m(m−1)δ, we have:
C(ĝ) ≥ C(g∗) − 2(m−1) max_{i,j} { 2R_{Di,j}(G_{Vi,j}) + 2R_{Dj}(G_{Vj}) + B√(2 log(1/δ)) (|Dj|^{−1/2} + |Di,j|^{−1/2}) } (8)
Theorem 2 shows that the total edge weight of the maximal directed spanning tree constructed by Algorithm 1 is close to the optimal total edge weight if the Rademacher term is small. Although a larger C(g) does not necessarily lead to better Chow-Liu trees, empirically we find that the optimal tree in the sense of equation (7) is consistent with the optimal tree in equation (6) under commonly used V." }, { "heading": "6 EXPERIMENTAL RESULTS", "text": "" }, { "heading": "6.1 STRUCTURE LEARNING WITH CONTINUOUS HIGH-DIMENSIONAL DATA", "text": "We generate synthetic data using various ground-truth tree structures g∗ with between 7 and 20 variables, where each variable is 10-dimensional. We use Gaussians, Exponentials, and Uniforms as ground-truth edge-conditionals. We use V-information(Gaussian) and V-information(Logistic) to denote Algorithm 1 with two different V families. Please refer to Appendix D.1 for more details. We compare with the original Chow-Liu algorithm equipped with state-of-the-art mutual information estimators: CPC (van den Oord et al., 2018), NWJ (Nguyen et al., 2010), and MINE (Belghazi et al., 2018), with the same neural network architecture as the V-families for a fair comparison. All experiments are repeated 10 times. As a performance metric, we use the wrong-edges ratio (the ratio of edges that differ from the ground truth) as a function of the amount of training data.
We show two illustrative experiments in Figure 1a; please refer to Appendix D.1 for all simulations. We can see that although the two V-families used are misspecified with respect to the true underlying (conditional) distributions, the estimated Chow-Liu trees are much more accurate across all data regimes, with CPC (blue) being the best alternative. Surprisingly, V-information(Gaussian) works consistently well in all cases and only requires about 100 samples to recover the ground-truth Chow-Liu tree in simulation-A." }, { "heading": "6.2 GENE REGULATORY NETWORK INFERENCE", "text": "Mutual information between pairs of gene expressions is often used to construct gene regulatory networks.
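Concretely, this pairwise-scoring pipeline can be sketched as follows (our illustration, not the paper's code; it assumes numpy, scikit-learn, and networkx, and uses the third-order-polynomial Gaussian family described in the next paragraph). The resulting score matrix can either be ranked directly, as in the AUC evaluation below, or fed to a maximum spanning arborescence as in Algorithm 1:

```python
import numpy as np
import networkx as nx
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

def v_info_pair(xi, xj, degree=3):
    """Empirical I_V(X_i -> X_j) for a Gaussian family whose mean is a
    polynomial of x_i (sigma^2 = 1/2): reduces to explained variance."""
    feats = PolynomialFeatures(degree).fit_transform(xi.reshape(-1, 1))
    resid = xj - LinearRegression().fit(feats, xj).predict(feats)
    return np.var(xj) - np.var(resid)

def v_chow_liu(data):
    """Algorithm 1 in miniature: score all ordered pairs, then extract the
    maximum-weight directed spanning tree (Chu-Liu/Edmonds, via networkx)."""
    m = data.shape[1]
    g = nx.DiGraph()
    for i in range(m):
        for j in range(m):
            if i != j:
                g.add_edge(i, j, weight=v_info_pair(data[:, i], data[:, j]))
    return nx.maximum_spanning_arborescence(g)
```

Because ordinary least squares with an intercept never increases residual variance, the edge weights are non-negative, matching Proposition 2.2.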
We evaluate V-information on the in-silico dataset from the DREAM5 challenge (Marbach et al., 2012) and use the setup of Gao et al. (2017), where 20 genes with 660 datapoints are used to evaluate all methods. We compare with state-of-the-art non-parametric Shannon mutual information estimators in this low dimensional setting: KDE, the traditional kernel density estimator; the KSG estimator (Kraskov et al., 2004); the Mixed KSG estimator (Gao et al., 2017); and Partitioning, an adaptive partitioning estimator (Darbellay & Vajda, 1999) implemented by Szabó (2014). For a fair comparison with these low dimensional estimators, we select V = {f : f[x] = N(g(x), 1/2), x ∈ X; f[∅] = N(µ, 1/2) | µ ∈ range(g)}, where g is a third-order polynomial. The task is to predict whether a directed edge between genes exists in the ground-truth gene network. We use the estimated mutual information and V-information for gene pairs as the test statistic to obtain the AUC for the various methods. As shown in Figure 1b, our method outperforms all other methods in network inference under different fractions of data used for estimation. The natural information measure in this task is asymmetric, since the goal is to find the pairs of genes (Ai, Bi) in which Ai regulates Bi; thus V-information is more suitable than mutual information in this case." }, { "heading": "6.3 RECOVERING THE ORDER OF VIDEO FRAMES", "text": "Let X1, · · · , X20 be random variables each representing a frame in videos from the Moving-MNIST dataset, which contains 10,000 sequences, each of length 20, showing two digits moving with stochastic dynamics. Can Algorithm 1 be used to recover the natural (causal) order of the frames? Intuitively, predictability should be inversely related to frame distance, thus enabling structure learning. Using a conditional PixelCNN++ (Salimans et al., 2017) as the predictive family V, we show in Figure 1c that predictive V-information does indeed decrease with frame distance, despite some fluctuations when the frame distances are large. Using Algorithm 1 to construct a Chow-Liu tree, we find that the tree perfectly recovers the relative order of the frames.
We also generate a Deterministic-Moving-MNIST dataset, where digits move according to deterministic dynamics. From the perspective of Shannon mutual information, every pair of frames has the same mutual information. Hence, the standard Chow-Liu tree learning algorithm would fail to discover the natural ordering of the frames (causal structure). In contrast, once we constrain the observer to PixelCNN++ models, Algorithm 1 with predictive V-information can still recover the order of different frames when the frame distances are relatively small (less than 9). Compared to the stochastic dynamics case, the V-information is more irregular with increasing frame distance, since the PixelCNN++ tends to overfit." }, { "heading": "6.4 INFORMATION THEORETIC APPROACHES TO FAIRNESS", "text": "The goal of fair representation learning is to map inputs X ∈ X to a feature space Z ∈ Z such that the mutual information between Z and some sensitive attribute U ∈ U (such as race or gender) is minimized. The motivation is that when using Z (instead of X) as input, we can no longer use the sensitive attribute U to make decisions, thus ensuring some notion of fairness. Existing methods obtain fair representations by optimizing against an “adversarial” discriminator so that the discriminator cannot predict U from Z (Edwards & Storkey, 2015; Louizos et al., 2015; Madras et al., 2018; Song et al., 2018).
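Schematically, that adversarial recipe looks roughly as follows (a PyTorch sketch of our own; the architectures, dimensions, and task_loss are placeholders, and U is assumed discrete with class-index labels so that HV(U) is a constant, as in Appendix D.2):

```python
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 16))   # z = enc(x)
disc = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))   # f in V
opt_enc = torch.optim.Adam(enc.parameters(), lr=1e-3)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)
nll = nn.CrossEntropyLoss()   # -log f[z](u) for discrete U

def fairness_step(x, u, task_loss):
    # Inner problem: train the discriminator toward inf_f E[-log f[z](u)],
    # i.e. toward the conditional V-entropy H_V(U | Z).
    opt_disc.zero_grad()
    nll(disc(enc(x).detach()), u).backward()
    opt_disc.step()
    # Outer problem: the encoder maximizes H_V(U | Z). Since H_V(U) = H(U)
    # is fixed for discrete U, this minimizes I_V(Z -> U) plus the task loss.
    opt_enc.zero_grad()
    (task_loss(enc(x)) - nll(disc(enc(x)), u)).backward()
    opt_enc.step()
```

Here the discriminator's cross-entropy plays the role of HV(U | Z) for its particular family V, which is exactly what makes the following observation possible.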
Under some assumptions on U and V, we show in Appendix D.2 that these works actually use V-information minimization as part of their objective, where V depends on the functional form of the discriminator.
However, it is clear from the V-information perspective that features trained with VA-information minimization might not generalize to VB-information and vice versa. To illustrate this, we use a function family Vj as the attacker to extract information from features trained with IVi(Z → U) minimization, where all the V's are neural nets. On three datasets commonly used in the fairness literature (Adult, German, Heritage), previous methods work well at preventing information “leak” against the class of adversary they have been trained on, but fail when we consider different ones. As shown in Figure 3b in the Appendix, the diagonal elements of the matrix are usually the smallest in their rows, indicating that an attacker with function family Vi extracts more information from features trained with Vj-information minimization for j ≠ i. This challenges the generalizability of the fair representations in previous works. Please refer to Appendix D.2 for details." }, { "heading": "7 RELATED WORK", "text": "Alternative definitions of information Several alternative definitions of mutual information are available in the literature. Rényi entropy and Rényi mutual information (Lenzi et al., 2000) extend Shannon information by replacing the KL divergence with f-divergences. However, they suffer from the same difficulties as Shannon information when applied to high dimensional problems.
The line of work most related to ours is the H entropy and H mutual information (DeGroot, 1962; Grünwald & Dawid, 2004), which associate a definition of entropy to every prediction loss. However, there are two key differences. First, the literature on H entropy considers only a few special types of prediction functions that serve unique theoretical purposes; for example, Duchi et al. (2018) consider the set of all functions on a feature space to prove surrogate risk consistency, and Grünwald & Dawid (2004) only consider the H entropy to prove the duality between maximum entropy and worst-case loss minimization. In contrast, our definition takes a completely different perspective, emphasizing bounded computation and intuitive properties of “usable” information. Furthermore, H entropy still suffers from the difficulty of estimation in high dimensions, because the definitions do not restrict to functions with small complexity (e.g., Rademacher complexity).
Mutual information estimation In machine learning, mutual information usually has to be estimated for continuous underlying distributions. Among non-parametric estimators, many methods exploit the 3H principle to calculate the mutual information, such as the kernel density estimator (Paninski & Yajima, 2008), the k-nearest-neighbor estimator, and the KSG estimator (Kraskov et al., 2004). However, these non-parametric estimators usually do not scale to high dimensions. Recently, several works have utilized variational lower bounds on MI to design deep-neural-network-based MI estimators for high dimensional continuous random variables (Nguyen et al., 2010; van den Oord et al., 2018; Belghazi et al., 2018)." }, { "heading": "8 CONCLUSION", "text": "We defined and investigated V-information, a variational extension to classic mutual information that incorporates computational constraints.
Unlike Shannon mutual information, V-information attempts to capture usable information, and has very different properties, such as invalidating the data processing inequality. In addition, V-information can be provably estimated, and can thus be more effective for structure learning and fair representation learning." }, { "heading": "ACKNOWLEDGEMENTS", "text": "This research was supported by AFOSR (FA9550-19-1-0024), NSF (#1651565, #1522054, #1733686), ONR, and FLI." }, { "heading": "A PROOFS", "text": "A.1 PROOF OF PROPOSITION 1\nProposition 1. For V-entropy and V-information, we have\n1. Let Ω be as in Def. 1. Then HΩ(Y ) is the Shannon entropy, HΩ(Y | X) is the Shannon conditional entropy, and IΩ(Y → X) is the Shannon mutual information.\n2. Let Y = Rd and V = {f : {∅} → Pµ | µ ∈ Rd}, where Pµ is the distribution with density y 7→ 1Z e −‖y−µ‖2 where Z = ∫ e−‖y−µ‖2dy, then the V-entropy of a random variable Y\nequals its mean absolute deviation, up to an additive constant.\n3. Let Y = Rd and V = {f : {∅} → N (µ,Σ) | µ ∈ Rd,Σ = 1/2Id×d}, then the V-entropy of a random variable Y equals the trace of its covariance tr (Cov(Y )), up to an additive constant.\n4. Let V = {f : {∅} → Qt,θ, θ ∈ Θ}, where Qt,θ is a distribution in a minimal exponential family with sufficient statistics t : Y → Rd and set of natural parameters Θ. For a random variable Y with expected sufficient statistics µY = E[t(Y )], the V-entropy of Y is the maximum Shannon entropy over all random variables Ŷ with identical expected sufficient statistics, i.e. E[t(Ŷ )] = µY .\n5. Let Y = Rd, X be any vector space, and V = {f : x 7→ N (φ(x),Σ), x ∈ X ;∅ 7→ N (µ,Σ)|µ ∈ Rd; Σ = 1/2Id×d, φ ∈ Φ}, where Φ is the set of linear functions {φ : X → Rd}, then V-information IV(X → Y ) equals the (unnormalized) maximum coefficient of determination R2 · tr (Cov(Y )) for linear regression.\nProof. (1)\nLet PY |x denote the density function of random variable Y conditioned on X = x (we denote this random variable as Y | x).\nHΩ(Y |X) = inf f∈Ω\nEx,y∼X,Y [ log\n1\nf [x](y) ] = inf f∈Ω Ex∼XEy∼Y |x [ log\nPY |x(y)\nf [x](y)PY |x(y) ] = inf f∈Ω Ex∼X [ KL(PY |x‖f [x]) +H(Y |x)\n] = Ex∼X [H(Y |x)] = H(Y |X) (9)\nwhere infimum is achieved for f where f [x] = PY |x and H is the Shannon (conditional) entropy. The same proof technique can be used to show that HΩ(Y ) = H(Y ), with the infimum achieved by f where f [∅] = PY . Hence we have\nIΩ(Y → X) = HΩ(Y )−HΩ(Y |X) = H(Y )−H(Y |X) = I(Y ;X) (10)\n(2)\nHV(Y ) = inf f∈V Ey∼Y [− log f [∅](y)] = inf µ∈Rd\nEy∼Y [ − log 1\nZ e−‖y−µ‖2 ] = inf µ∈Rd Ey∼Y [‖ y − µ ‖2] + logZ\n= MAD(Y ) + logZ (11)\nwhere MAD denotes mean absolute deviation Ey∼Y [‖ y − E[Y ] ‖2].\n(3) HV(Y ) = inf\nf∈V Ey∼Y [− log f [∅](y)]\n= inf µ∈Rd\nEy∼Y [ − log 1\n(2π) d 2 |Σ| 12\ne− 1 2 (y−µ) TΣ−1(y−µ)\n]\n= inf µ∈Rd\nEy∼Y [(y − µ)T (y − µ)] + d\n2 log π\n= inf µ∈Rd\nEy∼Y [tr ( (y − µ)(y − µ)T ) ] + d\n2 log π (Cyclic property of trace)\n= tr (Cov(Y )) + d\n2 log π (Linearity of trace)\n(4) The density function of an exponential family distribution with sufficient statistics t is y 7→ exp (θ · t(y)−A(θ)) where A(θ) is the partition function.\nHV(Y ) = inf f∈V Ey∼Y [− log f [∅](y)] = inf θ∈Θ Ey∼Y [− log exp (θ · Ey∼Y [t(y)]−A(θ))]\n=− sup θ∈Θ (θ · Ey∼Y [t(y)]−A(θ))]\n=−A∗(Ey∼Y [t(y)]) (12) where A∗ is the Fenchel dual of the log-partition function A(θ). 
Under mild conditions (Wainwright et al., 2008) −A∗(µ) = H(Pµ) where Pµ is the maximum entropy distribution out of all distributions satisfying Ey∼Pµ [t(y)] = µ (Jaynes, 1982), and H(·) is the Shannon entropy.\n(5) Assume random variable Y ∈ Rd, V = {f : x 7→ N (φ(x),Σ), x ∈ X ;∅ 7→ N (µ,Σ)|µ ∈ Rd; Σ = 12Id×d;φ ∈ Φ}. Then the V-information from X to Y is IV(X → Y ) = HV(Y )−HV(Y |X)\n= inf µ∈Rd\nEy∼Y [ − log 1\n(2π) d 2 |Σ| 12\ne−‖y−µ‖ 2 2 ] − inf φ∈Φ Ex,y∼X,Y [ − log 1\n(2π) dy 2 |Σ| 12\ne−‖y−φ(x)‖ 2 2 ] = inf µ∈Rd Ex,y∼X,Y [ ‖ y − µ ‖22 ] − inf φ∈Φ Ex,y∼X,Y [ ‖ y − φ(x) ‖22\n] = tr (Cov(Y )) 1− infφ∈ΦEx,y∼X,Y [ ‖ y − φ(x) ‖22 ] tr (Cov(Y ))\n = tr (Cov(Y ))R2 (13)\nA.2 PROOF OF PROPOSITION 2\nProposition 2. Let Y and X be any random variables on Y and X , and V and U be any predictive families, then we have\n1. Monotonicity: If V ⊆ U , then HV(Y ) ≥ HU (Y ), HV(Y | X) ≥ HU (Y | X).\n2. Non-Negativity: IV(X → Y ) ≥ 0.\n3. Independence: If X is independent of Y , IV(X → Y ) = IV(Y → X) = 0.\nProof. (1)\nHV(Y ) = inf f∈V\nEy∼Y [ log\n1\nf [∅](y) ] ≥ inf f∈U Ey∼Y [ log\n1\nf [∅](y)\n] = HU (Y ) (14)\nHV(Y |X) = inf f∈V\nEx,y∼X,Y [ log\n1\nf [x](y) ] ≥ inf f∈U Ex,y∼X,Y [ log\n1\nf [x](y)\n] = HU (Y |X) (15)\nThe inequalities (14) and (15) are because we are taking the infimum over a larger set.\n(2)\nDenote V∅ ⊂ V as the subset of f that satisfy f [x] = f [∅], ∀x ∈ X .\nHV(Y ) = inf f∈V\nEx,y∼X,Y [− log f [∅](y)]\n= inf f∈V∅\nEx,y∼X,Y [− log f [∅](y)] (By Optional Ignorance)\n= inf f∈V∅\nEx,y∼X,Y [− log f [x](y)]\n≥ inf f∈V Ex,y∼X,Y [− log f [x](y)] = HV(Y | X)\nTherefore IV(Y → X) = HV(Y )−HV(Y |X) ≥ 0\n(3)\nDenote V∅ ⊂ V as the subset of f that satisfy f [x] = f [∅], ∀x ∈ X .\nHV(Y | X) = inf f∈V Ex,y∼X,Y [− log f [x](y)]\n= inf f∈V\nEx∼XEy∼Y [− log f [x](y)] (Independence) ≥ Ex∼X [\ninf f∈V\nEy∼Y [− log f [x](y)] ]\n(Jensen)\n= Ex∼X [\ninf f∈V∅\nEy∼Y [− log f [x](y)] ]\n(Optional Ignorance)\n= inf f∈V∅\nEy∼Y [− log f [∅](y)] (No dependence on x)\n≥ inf f∈V Ey∼Y [− log f [∅](y)] = HV(Y )\nTherefore IV(Y → X) = HV(Y ) − HV(Y |X) ≤ 0. Combined with the Proposition 2.2 that IV(X → Y ) must be non-negative, IV(X → Y ) must be 0.\nA.3 PROOF OF THEOREM 1\nTheorem 1. Assume ∀f ∈ V, x ∈ X , y ∈ Y, log f [x](y) ∈ [−B,B]. Then for any δ ∈ (0, 0.5), with probability at least 1− 2δ, we have:∣∣∣IV(X → Y )− ÎV(X → Y ;D)∣∣∣ ≤ 4R|D|(GV) + 2B √ 2 log 1δ |D| (5)\nwhere we define the function family GV = {g|g(x, y) = log f [x](y), f ∈ V}, and RN (G) denotes the Rademacher complexity of G with sample number N .\nBefore proving theorem 1, we introduce two lemmas. Proofs for these Lemmas follow the same strategy as theorem 8 in Bartlett & Mendelson (2001): Lemma 3. LetX,Y be two random variables taking values in X ,Y andD denotes the set of samples drawn from the joint distribution overX×Y . Assume ∀f ∈ V, x ∈ X , y ∈ Y, log f [x](y) ∈ [−B,B]. Take f̂ = arg min\nf∈V\n1 |D| ∑ xi,yi∈D − log f [xi](yi), then ∀δ ∈ (0, 1), with probability at least 1 − δ, we\nhave: ∣∣∣∣∣∣HV(Y |X)− 1|D| ∑ xi,yi∈D − log f̂ [xi](yi) ∣∣∣∣∣∣ ≤ 2R|D|(GV) + 2B √ 2 log 1δ |D|\n(16)\nProof. 
We apply McDiarmid’s inequality to the function Φ defined for any sample D by\nΦ(D) = sup f∈V ∣∣∣∣∣∣Ex,y [− log f [x](y)]− 1|D| ∑ xi,yi∈D − log f [xi](yi) ∣∣∣∣∣∣ (17) Let D and D′ be two samples differing by exactly one point, then since the difference of suprema does not exceed the supremum of the difference and ∀f ∈ V, x ∈ X , y ∈ Y, log f [x](y) ∈ [−B,B], we have:\nΦ(D)− Φ(D′)\n≤ sup f∈V ∣∣∣∣∣∣ 1|D| ∑ xi,yi∈D log f [xi](yi)− Ex,y [log f [x](y)] ∣∣∣∣∣∣− ∣∣∣∣∣∣ 1|D′| ∑ xi,yi∈D′ log f [xi](yi)− Ex,y [log f [x](y)] ∣∣∣∣∣∣ \n≤ sup f∈V ∣∣∣∣∣∣ 1|D| ∑ xi,yi∈D − log f [xi](yi)| − 1 |D′| ∑ xi,yi∈D′ − log f [xi](yi) ∣∣∣∣∣∣ ≤ 2B |D|\nthen by McDiarmid’s inequality, for any δ ∈ (0, 1), with probability at least 1 − δ, the following holds:\nΦ(D) ≤ ED[Φ(D)] +B √ 2 log 1δ |D|\n(18)\nThen we bound the ED[Φ(D)] term:\nED[Φ(D)] = ED sup f∈V ∣∣∣∣∣∣Ex,y [− log f [x](y)]− 1|D| ∑ xi,yi∈D − log f [xi](yi) ∣∣∣∣∣∣ (19)\n= ED sup f∈V ∣∣∣∣∣∣ED′ 1 |D′| ∑ x′i,y ′ i∈D′ log f [x′i](y ′ i) − 1 |D| ∑ xi,yi∈D log f [xi](yi) ∣∣∣∣∣∣ (20)\n≤ ED sup f∈V ED′ ∣∣∣∣∣∣ 1|D′| ∑\nx′i,y ′ i∈D′\nlog f [x′i](y ′ i)| −\n1 |D| ∑\nxi,yi∈D log f [xi](yi) ∣∣∣∣∣∣ (21)\n≤ ED,D′ sup f∈V ∣∣∣∣∣∣ 1|D′| ∑\nx′i,y ′ i∈D′\nlog f [x′i](y ′ i)| −\n1 |D| ∑\nxi,yi∈D log f [xi](yi) ∣∣∣∣∣∣ (22)\n= ED,D′ sup f∈V ∣∣∣∣∣∣ 1|D| |D|∑ i=1 (log f [x′i](y ′ i)− log f [xi](yi)) ∣∣∣∣∣∣ (23)\n≤ ED,D′,σ sup f∈V ∣∣∣∣∣∣ 1|D| |D|∑ i=1 σi(log f [x ′ i](y ′ i)− log f [xi](yi)) ∣∣∣∣∣∣ (24)\n≤ ED,σ sup f∈V ∣∣∣∣∣∣ 1|D| |D|∑ i=1 σi log f [xi](yi) ∣∣∣∣∣∣ + ED′,σ sup f∈V ∣∣∣∣∣∣ 1|D| |D|∑ i=1 σi log f [x ′ i](y ′ i) ∣∣∣∣∣∣ (25)\n= 2ED,σ sup f∈V ∣∣∣∣∣∣ 1|D| |D|∑ i=1 σi log f [xi](yi) ∣∣∣∣∣∣ (26)\n= 2ED,σ sup g∈G ∣∣∣∣∣∣ 1|D| |D|∑ i=1 σig(xi, yi) ∣∣∣∣∣∣ = 2R|D|(GV) (27)\nwhere σis are Rademacher variables that is uniform in {−1,+1}. Inequality (22) follows from the convexity of sup, inequality (24) follows from the symmetrization argument for `1 norm for Radermacher random variables (Ledoux & Talagrand (2013), Section 6.1), inequality (21) follows from the convexity of |x− c|. (27) follows from the definition of G and Rademacher complexity. Finally, combining inequality (18) and (27) yields for all f ∈ V , with probability at least 1− δ∣∣∣∣∣∣Ex,y[− log f [x](y)]− 1|D| ∑ xi,yi∈D − log f [xi](yi) ∣∣∣∣∣∣ ≤ 2R|D|(GV) +B √ 2 log 1δ |D| (28)\nIn particular, the inequality holds for f̂ = arg min f∈V 1 |D| ∑ xi,yi∈D − log f [xi](yi) and f̃ = arg min f∈V Ex,y∼X,Y [− log f [x](y)]. Then we have:\nEx,y∼X,Y [ − log f̃ [x](y) ] − 1 |D| ∑ xi,yi∈D − log f̃ [xi](yi) ≤ HV(Y |X)− 1 |D| ∑ xi,yi∈D − log f̂ [xi](yi)\n≤ Ex,y∼X,Y [ − log f̂ [x](y) ] − 1 |D| ∑ xi,yi∈D − log f̂ [xi](yi)\nHence the bound (16) holds.\nSimilar bounds can be derived for HV(Y ) when we choose the domain of x to be X = {∅}: Lemma 4. Let Y be random variable taking values in Y and D denotes the set of samples drawn from the underlying distribution P (Y ). Assume ∀f ∈ V, y ∈ Y, log f [∅](y) ∈ [−B,B]. Take f̂ = arg min\nf∈V\n1 |D| ∑ xi,yi∈D − log f [∅](yi), then for any δ ∈ (0, 1), with probability at least 1− δ, we\nhave: ∣∣∣∣∣∣HV(Y )− 1|D| ∑ yi∈D − log f̂ [∅](yi) ∣∣∣∣∣∣ ≤ 2R|D|(GV∅) +B √ 2 log 1δ |D|\n(29)\n≤ 2R|D|(GV) +B √ 2 log 1δ |D|\n(30)\nwhere GV∅ = {g|g(y) = log f [∅](y), f ∈ V}.\nProof. The first inequality (29) can be derived similarly as Lemma 3. 
Since V is a predictive family, hence there exits a function h : V → V , such that h(f) = f ′ and ∀x ∈ X , f ′[x] = f [∅].\nR|D|(GV∅) = ED,σ sup f∈V ∣∣∣∣∣∣ 1|D| |D|∑ i=1 σi log f [∅](yi) ∣∣∣∣∣∣ \n= ED,σ sup f∈V ∣∣∣∣∣∣ 1|D| |D|∑ i=1 σi log h(f)[xi](yi) ∣∣∣∣∣∣ \n≤ ED,σ sup f∈V ∣∣∣∣∣∣ 1|D| |D|∑ i=1 σi log f [xi](yi) ∣∣∣∣∣∣ (31)\n= R|D|(GV)\nThe inequality (31) holds because of h(V) ⊆ V .\nNow we prove theorem 1:\nTheorem 1. Assume ∀f ∈ V, x ∈ X , y ∈ Y, log f [x](y) ∈ [−B,B], for any δ ∈ (0, 0.5), with probability at least 1− 2δ, we have:∣∣∣IV(X → Y )− ÎV(X → Y ;D)∣∣∣ ≤ 4R|D|(GV) + 2B √ 2 log 1δ |D|\nProof. Define f̂ = arg min f∈V ∑ xi,yi∈D − log f [xi](yi) and f̂∅ = arg min f∈V ∑ yi∈D − log f [∅](yi). Using the triangular inequality we have:∣∣∣IV(X → Y )− ÎV(X → Y ;D)∣∣∣ = ∣∣∣∣∣∣(HV(Y )−HV(Y |X))− 1 |D| ∑ yi∈D − log f̂∅[∅](yi)− 1 |D| ∑ xi,yi∈D − log f̂ [xi](yi)\n∣∣∣∣∣∣ ≤ ∣∣∣∣∣∣ HV(Y )− 1|D| ∑ yi∈D − log f̂∅[∅](yi) − HV(Y |X)− 1|D| ∑ xi,yi∈D − log f̂ [xi](yi)\n∣∣∣∣∣∣ ≤ ∣∣∣∣∣∣HV(Y |X)− 1|D| ∑ xi,yi∈D − log f̂ [xi](yi) ∣∣∣∣∣∣+ ∣∣∣∣∣∣HV(Y )− 1|D| ∑ yi∈D − log f̂∅[∅](yi)\n∣∣∣∣∣∣ (32) For simplicity let\nDY |X = ∣∣∣∣∣∣HV(Y |X)− 1|D| ∑ xi,yi∈D − log f̂ [xi](yi) ∣∣∣∣∣∣ and\nDY = ∣∣∣∣∣∣HV(Y )− 1|D| ∑ yi∈D − log f̂∅[∅](yi) ∣∣∣∣∣∣ With inequality (32), Lemma 3 and Lemma 4, we have:\nPr ∣∣∣IV(X → Y )− ÎV(X → Y ;D)∣∣∣ > 4R|D|(GV) + 2B √\n2 log 1δ |D| ≤ Pr DY |X +DY > 4R|D|(GV) + 2B √\n2 log 1δ |D| (Inequality (32)) ≤ Pr DY |X > 2R|D|(GV) +B √\n2 log 1δ |D|\n ∨ DY > 2R|D|(GV) +B √ 2 log 1δ |D| ≤ Pr DY |X > 2R|D|(GV) +B √\n2 log 1δ |D|\n+ Pr DY > 2R|D|(GV) +B √ 2 log 1δ |D| (Union bound)\n≤ 2δ (Lemma 3 and Lemma 4) Hence we have:\nPr ∣∣∣IV(X → Y )− ÎV(X → Y ;D)∣∣∣ ≤ 4R|D|(GV) + 2B √\n2 log 1δ |D| ≥ 1− 2δ which completes the proof.\nA.4 PROOF OF COROLLARY 1.1\nCorollary 1.1. Assume X = {x ∈ Rdx , ‖x‖2 ≤ kx} and Y = {y ∈ Rdy , ‖y‖2 ≤ ky}. If\nV = {f : f [x] = N (Wx+ b, I), f [∅] = N (c, I),W ∈ Rdy×dx , b, c ∈ Rdy , ‖(W, b)‖2 ≤ 1}\nDenote M = (kx + ky)2 + log 2π, then ∀δ ∈ (0, 0.5), with probability at least 1− 2δ:∣∣∣IV(X → Y )− ÎV(X → Y ;D)∣∣∣ ≤ M√ 4|D| ( 1 + 4 √ 2 log 1 δ )\nThe proof is an adaptation of the proof for theorem 3 in Kakade et al. (2008).\nProof. 
From theorem 1 we have:∣∣∣IV(X → Y )− ÎV(X → Y ;D)∣∣∣ ≤ 4R|D|(GV) + 2B √\n2 log 1δ |D|\nIn the following ‖(W, b)‖2 is the matrix 2-norm of (W, b), then the Rademacher term can be bounded as follows:\nR|D|(GV) = 1\n|D| Eσ sup W,b,‖(W,b)‖2≤1 ∣∣∣∣∣∣ |D|∑ i=1 σi ( log 1√ 2π − 1 2 ‖yi −Wxi − b‖22 )∣∣∣∣∣∣ \n≤ 1 |D| Eσ sup W,b,‖(W, b)‖\n2 ≤1\n∣∣∣∣∣∣ |D|∑ i=1 σi ( −1 2 ‖yi −Wxi − b‖22 )∣∣∣∣∣∣ + 1 |D| Eσ ∣∣∣∣∣∣ |D|∑ i=1 σi log 1√ 2π ∣∣∣∣∣∣ \n(33)\nThe second term in RHS can be bounded as follows:\n1\n|D| Eσ ∣∣∣∣∣∣ |D|∑ i=1 σi log 1√ 2π ∣∣∣∣∣∣ ≤ 1 |D| √√√√√√Eσ |D|∑ i=1 σi log 1√ 2π 2 (concavity of x 12 )\n= 1\n|D|\n√ |D| ∗ (log 1√\n2π )2 (Independence of σis)\n=\n√ (log 1√\n2π )2\n|D| (34)\nThe first term in RHS can be bounded as follows:\n1\n|D| ED,σ sup W,b,‖(W, b)‖2≤1 ∣∣∣∣∣∣ |D|∑ i=1 σi ( −1 2 ‖yi −Wxi − b‖2 )∣∣∣∣∣∣ \n= 1\n2|D| ED,σ sup W,b,‖(W, b)‖2≤1 ∣∣∣∣∣∣ |D|∑ i=1 σi ( ‖yi −Wxi − b‖2 )∣∣∣∣∣∣ \n≤ maxi‖yi‖ 2 2\n2\n√ 1\n|D| + max i ‖xi‖2 √ maxi‖yi‖2 |D|\n+ 1\n2|D| ED,σ sup W,b,‖(W, b)‖2≤1 ∣∣∣∣∣∣ |D|∑ i=1 σi ( ‖Wxi + b‖2 )∣∣∣∣∣∣ (35)\n≤ maxi‖yi‖ 2 2\n2\n√ 1\n|D| + max i ‖xi‖2 √ maxi‖yi‖2 |D|\n+ maxi‖xi‖2\n2|D| ED,σ sup W,b,‖(W, b)‖2≤1 ∣∣∣∣∣∣ |D|∑ i=1 σi (‖Wxi + b‖) ∣∣∣∣∣∣ (36)\n≤ maxi‖yi‖ 2 2\n2\n√ 1\n|D| + max i ‖xi‖2 √ maxi‖yi‖22 |D| + maxi‖xi‖2 2 √ maxi‖xi‖22 |D|\n(37)\n≤ M√ 4|D|\nThe inequalities (36) and (35) follow the same proof in (34).\nHence we have:\nR|D|(GV) ≤ M√ 4|D|\n(38)\nIn this example, we can bound the upper bound of functions g ∈ GV by\nB = sup x∈X ,y∈Y,‖(W, b)‖2≤1\n∣∣∣∣(log 1√2π − 12‖y −Wx− b‖22 )∣∣∣∣\n≤ sup x∈X ,y∈Y,‖(W, b)‖2≤1\nlog 1√ 2π + 1 2\n( ‖y‖22 + ‖Wx+ b‖22 + 2‖y‖‖Wx+ b‖ ) ≤ log 1√\n2π +\n1 2 (kx + ky) 2 < M\nCombining inequality (38) we arrive at the theorem.\nA.5 PROOF OF THEOREM 2\nTheorem 2. Let {Xi}mi=1 be the set of m random variables, Di,j (resp. Dj) be the set of samples drawn from P (Xi, Xj) (resp. P (Xj)). Denote the optimal directed tree with maximum expected edge weights sum C(g) as g∗ and the optimal directed tree constructed on the dataset D as ĝ. Then with the assumption in theorem 1, for any δ ∈ (0, 12m(m−1) ), with probability at least 1− 2m(m− 1)δ, we have:\nC(ĝ) ≥ C(g∗)− 2(m− 1) max i,j\n{ 2RDi,j (GVi,j ) + 2RDj (GVj ) +B √ 2 log 1\nδ (|Dj |− 1 2 + |Di,j |− 1 2 ) } (8)\nProof. Let CD(g∗) be the estimated sum of edge weights on dataset D of the tree g∗, i.e.,\nC(g∗) = m∑ i=2 ÎVt(g∗)(i),i(Xt(g)(i) → Xi;D).\nwhere t(g) : N→ N is the function mapping each non-root node of directed tree g to its parent. The same notation for tree ĝ. Let\n= max i,j {∣∣∣IV(Xi → Xj)− ÎV(Xi → Xj ;D)∣∣∣} be the maximum absolute estimation error of single edge weight. 
By the definition of we have ∀g, |C(ĝ)− CD(ĝ)| ≤ (m− 1) , then:\nC(ĝ) + (m− 1) ≥ CD(ĝ) ≥ CD(g∗) ≥ C(g∗)− (m− 1) (39)\nFrom lemma 4 and lemma 3 we have:\nPr ( > max\ni,j\n{ 2RDi,j (Gi,j) + 2RDj (Gj) +B √ 2 log 1\nδ (|Dj |− 1 2 + |Di,j |− 1 2 )\n})\n≤ Pr ( ∃i, j, ∣∣∣IVi,j (Xi → Xj)− ÎVi,j (Xi → Xj ;D)∣∣∣ > 2RDi,j (Gi,j) + 2RDj (Gj) +B√2 log 1δ (|Dj |− 12 + |Di,j |− 12 ) )\n≤ Pr ∃i, j, ∣∣∣∣∣∣ HVj (Xj)− 1|Dj | ∑\nxj∈Dj\n− log f̂∅[∅](xj) − HVi,j (Xj |Xi)− 1|Di,j | ∑\nxi,xj∈Di,j\n− log f̂ [xi](xj) ∣∣∣∣∣∣ > 2RDi,j (Gi,j) + 2RDj (Gj) +B √ 2 log 1\nδ (|Dj |− 1 2 + |Di,j |− 1 2 )\n)\n≤ Pr ∃i, j, ∣∣∣∣∣∣HVj (Xj)− 1|Dj | ∑ xj∈Dj − log f̂∅[∅](xj) ∣∣∣∣∣∣ > 2RDj (Gj) +B √ 2 log 1 δ |Dj |− 1 2 ∨ ∣∣∣∣∣∣HVi,j (Xj |Xi)− 1|Di,j | ∑\nxi,xj∈Di,j\n− log f̂ [xi](xj) ∣∣∣∣∣∣ > 2RDi,j (Gi,j) +B √ 2 log 1 δ |Di,j |− 1 2 ≤ m(m− 1)2δ (By lemma 3, 4 and union bound)\nHence\nPr ( ≤ max\ni,j\n{ 2RDi,j (Gi,j) + 2RDj (Gj) +B √ 2 log 1\nδ (|Dj |− 1 2 + |Di,j |− 1 2 )\n}) ≥ 1−m(m− 1)2δ\n(40)\nThen combining inequality (39) and (40) we arrive at the result." }, { "heading": "B ANALYSIS OF APPROXIMATE ESTIMATORS FOR SHANNON INFORMATION", "text": "We consider two approximate estimators for Shannon information. The first is the CPC (or InfoNCE in Poole et al. (2019)) estimator (ICPC) proposed by van den Oord et al. (2018):\nICPC = E\n[ 1\nN N∑ i=1 log fθ(xi, yi) 1 N ∑N j=1 fθ(xi, yj)\n] ≤ I(X;Y ) (41)\nwhere the expectation is over N independent samples form the joint distribution ∏ i p(xi, yi).\nThe second is the NWJ estimator (INWJ) proposed by Nguyen et al. (2010): INWJ = Ex,y∼p(x,y) [fθ(x, y)]− e−1Ex,y∼p(x)p(y) [ efθ(x,y) ] ≤ I(X;Y ) (42)\nIn both cases, fθ is a parameterized function, and the objectives are to maximize these lower bounds parameterized by θ to approximate mutual information. Ideally, with sufficiently flexible models and data, we would be able recover the true mutual information. However, these ideal cases does not carry over to practical scenarios.\nFor ICPC, van den Oord et al. (2018) show that ICPC is no larger than logN , where N is the batch size. This means that the ICPC estimator will incur large bias when I(X;Y ) ≥ logN . We provide a proof for completeness as follows.\nProposition 3. ∀fθ : X × Y → R+,\nICPC ≤ logN. (43)\nProof. We have:\nICPC := E\n[ 1\nN N∑ i=1 log fθ(xi, yi) 1 N ∑N j=1 fθ(xi, yj)\n] (44)\n≤ E\n[ 1\nN N∑ i=1 log fθ(xi, yi) 1 N fθ(xi, yi)\n] ≤ E [ 1\nN N∑ i=1 logN\n] = logN (45)\nwhich completes the proof. For NWJ, we note that the INWJ involves a term denoted as Ex,y∼p(x)p(y) [ efθ(x,y) ] /e, which could be dominated by rare data-points that have high fθ values. Intuitively, this would make it a poor mutual information estimator by optimizing θ. The NWJ estimator may suffer from high variance when the estimator is optimal (Song & Ermon, 2019), this is also empirically observed in Poole et al. (2019). We provide a proof for completeness as follows.\nProposition 4. Assume that fθ achieves the optimum value for INWJ. Then the variance of the empirical NWJ estimator satisfies Var ( ÎNWJ ) ≥ e I(X;Y )−1 N , where\nÎNWJ = 1\nN N∑ i=1 [fθ(xi, yi)]− e−1 N N∑ i=1 [ efθ(x̄i,ȳi) ] is the empirical NWJ estimator with N i.i.d. samples {(xi, yi)}Ni=1 from p(x, y) and N i.i.d. samples {(x̄i, ȳi)}Ni=1 from p(x)p(y).\nProof. Let us denote zi = p(xi,yi) p(xi)p(yi) . Clearly Ep(x)p(y) [zi] = 1. 
Then we have:\nVar(zi) = Ep(x)p(y) [ z2i ] − (Ep(x)p(y) [zi])2\n= Ep(x)p(y) [ z2i ] − 1\n= Ep(x)p(y)\n[( p(xi, yi)\np(xi)p(yi)\n)2] − 1\n= Ep(x,y) [ p(xi, yi)\np(xi)p(yi)\n] − 1 (46)\n≥ eEp(x,y) [ log p(xi,yi) p(xi)p(yi) ] − 1 = eI(X;Y ) − 1 (47)\nwhere we use Jensen’s inequality for log at the last step.\nFrom Nguyen et al. (2010), we have:\nfθ(x, y) = 1 + log p(x, y)\np(x)p(y) . (48)\nfor all x, y. Since {(xi, yi)}Ni=1 (resp. {(x̄i, ȳi)}Ni=1) are N datapoints independently sampled from the distribution p(x, y) (resp. p(x)p(y)), we have\nVar ( ÎNWJ ) = Var\n( 1\nN N∑ i=1 [fθ(xi, yi)]− e−1 N N∑ i=1 [ efθ(x̄i,ȳi)\n])\n≥ Var\n( e−1\nN N∑ i=1 [ efθ(x̄i,ȳi)\n])\n= Var\n( 1\nN N∑ i=1 zi\n) ≥ e\nI(X;Y ) − 1 N\n(49)\nwhich completes the proof.\nAlgorithm 1 Construct Chow-Liu Trees with V-Information\nRequire: D = {X̂i}mi=1, with each X̂i being a set of datapoints sampled from the underlying distribution of random variable Xi. The set of function families {Vi,j}mi,j=1,i6=j between all the nodes.\n1: for i = 1, . . . ,m do 2: for j = 1, . . . ,m do 3: if i 6= j then 4: Calculate the edge weight: ei→j = ÎVi,j (Xi → Xj ; {X̂i, X̂j}). 5: end if 6: end for 7: end for 8: Construct the fully connected graph G = (V,E), with node set V = (X1, . . . , Xm) and edge\nset E = {ei→j}mi,j=1,i6=j . 9: Construct the maximal directed spanning tree g on G by Chow-Liu algorithm, where mutual\ninformation is replaced by V-information. 10: return g" }, { "heading": "C THE NEW ALGORITHM FOR CHU-LIU TREE CONSTRUCTION", "text": "See Algorithm 1; ÎVi,j (Xi → Xj ; {X̂i, X̂j}) denotes the empirical V-information." }, { "heading": "D DETAILED EXPERIMENTS SETUP", "text": "D.1 CHU-LIU TREE CONSTRUCTION\nFigure 2 shows the Chu-Liu tree construction of Simulation-1∼Simulation-6. The Simulation-A and Simulation-B in the main body correspond to Simulation-1 and Simulation-4.\nSimulation-1 ∼ Simulation-3 : The ground-truth Chu-Liu tree is a star tree (i.e. all random variables are conditionally independent given X1). We conduct all experiments for 10 times, each time with random simulated orthogonal matrices {Wi}20i=2. Simulation-1: X1 ∼ U(0, 10) and Xi | X1 ∼ N (WiX1, 6I), (2 ≤ i ≤ 20); Simulation-2: X1 ∼ U(0, 10) andXi | X1 ∼WiE(X1 + i), (2 ≤ i ≤ 20), i ∼ E(0.1); Simulation3 is a mixed version:X1 ∼ U(0, 10), Xi | X1 ∼ 12N (WiX1, 6I) + 1 2WiE(X1 + 1), (2 ≤ i ≤ 20).\nSimulation-4 ∼ Simulation-6 : The ground-truth Chu-Liu tree is a tree of depth two. We conduct all experiments for 10 times, each time with random simulated orthogonal matrices {Wi}7i=2. Simulation-4: X1 ∼ U(0, 10),Xi | X1 ∼ N (WiX1, 2I)(i = 2, 3), Xi | X2 ∼ N (WiX2, 2I)(i = 4, 5), Xi | X3 ∼ N (WiX3, 2I)(i = 6, 7); Simulation-5: X1 ∼ U(0, 10),Xi | X1 ∼ E(X1+ i)(i = 2, 3),Xi | X2 ∼WiE(X2+ i)(i = 4, 5), Xi | X3 ∼ WiE(X3 + i)(i = 6, 7), i ∼ E(0.1); Simulation-6 is a mixed version: X1 ∼ U(0, 10), Xi | X1 ∼ WiE(X1 + i)(i = 2, 3), Xi | X2 ∼ N (WiX2, 2I)(i = 4, 5), Xi | X3 ∼ N (WiX3, 2I)(i = 6, 7), i ∼ E(0.1).\nD.2 FAIRNESS\nWe can adapt the V-information perspective to fairness. Denote the random variable that represents sensitive data and the representation as U and Z respectively. Assume U is discrete and V belongs to preditive family 1. Then we have HV(U) = H(U) as long as V has softmax on the top and belongs to predictive family. In this case, minimizing IV(Z → U) equals to minimize −HV(Y |X). Let the joint distribution of Z and U be paramterized by φ. 
Hence the final objective is:\nmin φ {IV(u; z)} = min φ ( sup f∈V Ez,u∼qφ(z,u)[logPf (z|u)] )\nIn Edwards & Storkey (2015); Madras et al. (2018); Louizos et al. (2015); Song et al. (2018), functions in V are parameterized by a discriminator.\nFor the (Fi, Fj) elements described in the main body, please refer to figure 3b. The three datasets are: the UCI Adult dataset2 which has gender as the sensitive attribute; the UCI German credit dataset3 which has age as the sensitive attribute and the Heritage Health dataset4 which has the 18 configurations of ages and gender as the sensitive attribute.\nThe models in the figure are:\nVA = {f : Z → P(U)|f [z](u) = ∑\n(zi,ui)∈D\ne‖zi−z‖ 2 2/h∑\n(zi,ui)∈D e‖zi−z‖\n2 2/h ∗ I(ui = u), h ∈ R}, where D is\nthe training set.\nVB = {f : f [z] = softmax(g(z))}, where g is a two-layer MLP with Relu as the activation function. VC = {f : f [z] = softmax(g(z))}, where g is a three-layer MLP with LeakyRelu as the activation function.\nWe further visualize a special case of the (VA,VB) pair in figure 3a, where the Vi = {f : Z → P(U)|f [z](u) = ∑ (zi,ui)∈D e‖zi−z‖ 2 2/h∑ (zi,ui)∈D e‖zi−z‖ 2 2/h ∗ I(ui = u), h ∈ R} explicitly makes the features of different sensitivity attributes more evenly spread, and functions in VB is a simple two layers MLP with softmax at the top. The leaned features by VA-information minimization appear more evenly spread as expected, however, the attacker functions in VB can still achieve a high AUC of 0.857. The (i, j) elements of tables in Figure 3b stand for using function family Vi to attack features trained with Vj-information minimization. The diagonal elements in the matrix are usually the smallest in rows, indicating that the attacker function family Vi extracts more information on featured trained with Vj(j 6=i)-information minimization.\n2https://archive.ics.uci.edu/ml/datasets/adult 3https://archive.ics.uci.edu/ml/datasets 4https://www.kaggle.com/c/hhp" }, { "heading": "E MINIMALITY OF PREDICTIVE FAMILY", "text": "Define VX→P(Y) = {g : X → P(Y)|∃f ∈ V,∀x ∈ X , g[x] = f [x]}. Similarly define V∅→P(Y) = {g : ∅→ P(Y)|∃f ∈ V, g[∅] = f [∅]}. Intuitively, VX→P(Y) (resp. V∅→P(Y)) restricts the domain of functions in V to X (resp. ∅).\nNon-Negativity As we demonstrated in Proposition 2, optional-ignorance guarantees that information will be non-negative for anyX and Y . Conversely, given any discreteX , Z, V∅→P(Y), VX→P(Y) that does not satisfy optional-ignorance, there exists distribution X , Y such that IV(X → Y ) < 0. Choose Y ∼ f∗[∅] where f∗ is the function that has no corresponding g ∈ VX→P(Y) that can ignore its inputs. Pick X as the uniform distribution, and note that for all g ∈ G, there exists some measurable subset X ′ ⊂ X on which g will produce a distribution unequal to f∗[∅], and therefore having higher cross entropy. The expected cross entropy expressed in HVX→P(Y)(Y |X) is thus higher than in HV∅→P(Y)(Y ), and IV(X → Y ) < 0. Thus, if the function class does not satisfy optional ignorance, then the V-information could be negative.\nIndependence Given any discrete X , Y , V∅→P(Y), VX→P(Y) that does not satisfy optionalignorance, there exists an independent X , Y such that IV(X → Y ) > 0. Choose Y such that the distribution PY can be expressed as g[x] for some x ∈ X, g ∈ VX→P(Y), but cannot be expressed by any f ∈ V∅→P(Y). 
Let X be the distribution with all its mass on x; note that the cross entropy of PY with g[x] will be zero, and is less than that of the function f [∅] (because f [∅] and PY differs on a measurable subset, the cross entropy will be positive). Thus, if the function class does not satisfy optional ignorance, then the V-information does not take value 0 when the two distributions are independent." }, { "heading": "F LIMITATIONS AND FUTURE WORK", "text": "V-information is empirically useful, has several intuitive theoretical properties, but exhibits certain limitations. For example, Shannon information can be manipulated with certain additive algebra (e.g. H(X,Y ) = H(X) +H(Y | X)), while the same does not hold true for general V-Information. However, this could be possible if we choose V to be a mathematically simple set, such as the set of polynomial time computable functions. It would be interesting to find special classes of V-Information where additional theoretical development is possible. Another interesting direction is better integration of V-Information with machine learning. The production of usable information (representation learning), acquisition of usable information (active learning) and exploitation of usable information (classification and reinforcement learning) could potentially be framed in a similar V-information-theoretic manner. It is interesting to see whether fruitful theories can arise from these analyses." } ]
2020
null
SP:f3287af29c0148119a9b84df7681dbccb4884ef7
[ "The paper addresses the challenge of intrinsically-driven exploration in tasks with sparse or delayed rewards. First, the authors try to bridge the gap between the objectives of intrinsically-motivated goal generation and maximum state entropy exploration. Then, they propose a new exploration method, called novelty-pursuit, that prescribes the following receipt: first, reach the exploration boundary through a goal-conditioned policy, then take random actions to explore novel states. Finally, the authors compare their approach to a curiosity-driven method based on Random Network Distillation in a wide range of experiments: from toy domains to continuous control, to hard-exploration video games.", "This paper proposes novelty-pursuit for exploration in large state space. In theory, novelty-pursuit is motivated by connecting intrinsically motivated goal exploration process (IMGEP) and the maximum state entropy exploration (MSEE), showing that exploring least visited state can increase state distribution entropy most. In practice, novelty-pursuit works in two stages: First, it selects a goal (with largest value prediction error) to train a goal reaching policy to reach the boundary of explored and unexplored states. Second, after reaching goal states, it uses a randomly policy to explore, hopefully can get to unexplored states. Experiments on Empty Room show that the novelty-pursuit with perfect goal reaching policy and visit count information can maximize state distribution entropy. Experiments on Empty Room, Four Rooms, FetchReach and SuperMarioBros show that the proposed method can achieve better performance than vanilla (policy gradient?) and bonus (exploration bonus using Random Network Distillation)." ]
Efficient exploration is essential to reinforcement learning in a huge state space. Recent approaches to address this issue include the intrinsically motivated goal exploration process (IMGEP) and maximum state entropy exploration (MSEE). In this paper, we disclose that goal-conditioned exploration behaviors in IMGEP can also maximize the state entropy, which bridges IMGEP and MSEE. From this connection, we propose a maximum entropy criterion for goal selection in goal-conditioned exploration, which results in the new exploration method novelty-pursuit. Novelty-pursuit performs the exploration in two stages: first, it selects a goal for the goal-conditioned exploration policy to reach the boundary of the explored region; then, it takes random actions to explore the non-explored region. We demonstrate the effectiveness of the proposed method in environments ranging from simple mazes and Mujoco tasks to the long-horizon video game SuperMarioBros. Experimental results show that the proposed method outperforms state-of-the-art approaches that use curiosity-driven exploration.
[]
[ { "authors": [ "Marcin Andrychowicz", "Dwight Crow", "Alex Ray", "Jonas Schneider", "Rachel Fong", "Peter Welinder", "Bob McGrew", "Josh Tobin", "Pieter Abbeel", "Wojciech Zaremba" ], "title": "Hindsight experience replay", "venue": "In Proceedings of the 30th Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Adrien Baranes", "Pierre-Yves Oudeyer" ], "title": "R-IAC: robust intrinsically motivated exploration and active learning", "venue": "IEEE Transactions on Autonomous Mental Development,", "year": 2009 }, { "authors": [ "Marc G. Bellemare", "Sriram Srinivasan", "Georg Ostrovski", "Tom Schaul", "David Saxton", "Rémi Munos" ], "title": "Unifying count-based exploration and intrinsic motivation", "venue": "In Proceedings of the 29th Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Yuri Burda", "Harrison Edwards", "Deepak Pathak", "Amos J. Storkey", "Trevor Darrell", "Alexei A. Efros" ], "title": "Large-scale study of curiosity-driven learning", "venue": "In Proceedings of the 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yuri Burda", "Harrison Edwards", "Amos J. Storkey", "Oleg Klimov" ], "title": "Exploration by random network distillation", "venue": "In Proceedings of 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Maxime Chevalier-Boisvert", "Lucas Willems", "Suman Pal" ], "title": "Minimalistic gridworld environment for openai gym", "venue": "https://github.com/maximecb/gym-minigrid,", "year": 2018 }, { "authors": [ "Adrien Ecoffet", "Joost Huizinga", "Joel Lehman", "Kenneth O. Stanley", "Jeff Clune" ], "title": "Go-explore: a new approach for hard-exploration problems", "venue": null, "year": 1901 }, { "authors": [ "Carlos Florensa", "David Held", "Xinyang Geng", "Pieter Abbeel" ], "title": "Automatic goal generation for reinforcement learning agents", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Sébastien Forestier", "Yoan Mollard", "Pierre-Yves Oudeyer" ], "title": "Intrinsically motivated goal exploration processes with automatic curriculum", "venue": "learning. CoRR,", "year": 2017 }, { "authors": [ "Meire Fortunato", "Mohammad Gheshlaghi Azar", "Bilal Piot", "Jacob Menick", "Matteo Hessel", "Ian Osband", "Alex Graves", "Volodymyr Mnih", "Rémi Munos", "Demis Hassabis", "Olivier Pietquin", "Charles Blundell", "Shane Legg" ], "title": "Noisy networks for exploration", "venue": "In Proceedings of the 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Scott Fujimoto", "David Meger", "Doina Precup" ], "title": "Off-policy deep reinforcement learning without exploration", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Anirudh Goyal", "Philemon Brakel", "William Fedus", "Soumye Singhal", "Timothy P. 
Lillicrap", "Sergey Levine", "Hugo Larochelle", "Yoshua Bengio" ], "title": "Recall traces: Backtracking models for efficient reinforcement learning", "venue": "In Proceedings of the 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Tuomas Haarnoja", "Haoran Tang", "Pieter Abbeel", "Sergey Levine" ], "title": "Reinforcement learning with deep energy-based policies", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Elad Hazan", "Sham M. Kakade", "Karan Singh", "Abby Van Soest" ], "title": "Provably efficient maximum entropy exploration", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Christian Kauten" ], "title": "Super Mario Bros for OpenAI Gym. GitHub, 2018", "venue": "URL https://github. com/Kautenja/gym-super-mario-bros", "year": 2018 }, { "authors": [ "Michael J. Kearns", "Satinder P. Singh" ], "title": "Near-optimal reinforcement learning in polynomial time", "venue": "Machine Learning,", "year": 2002 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Proceedings of the 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "J. Zico Kolter", "Andrew Y. Ng" ], "title": "Near-bayesian exploration in polynomial time", "venue": "In Proceedings of the 26th Annual International Conference on Machine Learning,", "year": 2009 }, { "authors": [ "Tor Lattimore", "Marcus Hutter" ], "title": "Near-optimal PAC bounds for discounted mdps", "venue": "Theoretical Computer Science,", "year": 2014 }, { "authors": [ "Timothy P. Lillicrap", "Jonathan J. Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "In Proceedings of the 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A. Rusu", "Joel Veness", "Marc G. Bellemare", "Alex Graves", "Martin A. Riedmiller", "Andreas Fidjeland", "Georg Ostrovski", "Stig Petersen", "Charles Beattie", "Amir Sadik", "Ioannis Antonoglou", "Helen King", "Dharshan Kumaran", "Daan Wierstra", "Shane Legg", "Demis Hassabis" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Adrià Puigdomènech Badia", "Mehdi Mirza", "Alex Graves", "Timothy P. Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In Proceedings of the 33nd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Andrew Y. Ng", "Daishi Harada", "Stuart J. 
Russell" ], "title": "Policy invariance under reward transformations: Theory and application to reward shaping", "venue": "In Proceedings of the 16th International Conference on Machine Learning,", "year": 1999 }, { "authors": [ "Brendan O’Donoghue", "Rémi Munos", "Koray Kavukcuoglu", "Volodymyr Mnih" ], "title": "PGQ: combining policy gradient and q-learning", "venue": null, "year": 2016 }, { "authors": [ "Junhyuk Oh", "Yijie Guo", "Satinder Singh", "Honglak Lee" ], "title": "Self-imitation learning", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Ian Osband", "Charles Blundell", "Alexander Pritzel", "Benjamin Van Roy" ], "title": "Deep exploration via bootstrapped DQN", "venue": "In Proceedings of the 29th Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Georg Ostrovski", "Marc G. Bellemare", "Aäron van den Oord", "Rémi Munos" ], "title": "Count-based exploration with neural density models", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Deepak Pathak", "Pulkit Agrawal", "Alexei A. Efros", "Trevor Darrell" ], "title": "Curiosity-driven exploration by self-supervised prediction", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Alexandre Péré", "Sébastien Forestier", "Olivier Sigaud", "Pierre-Yves Oudeyer" ], "title": "Unsupervised learning of goal spaces for intrinsically motivated goal exploration", "venue": "In Proceedings of the 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Matthias Plappert", "Marcin Andrychowicz", "Alex Ray", "Bob McGrew", "Bowen Baker", "Glenn Powell", "Jonas Schneider", "Josh Tobin", "Maciek Chociej", "Peter Welinder", "Vikash Kumar", "Wojciech Zaremba" ], "title": "Multi-goal reinforcement learning: Challenging robotics environments and request for research", "venue": "CoRR, abs/1802.09464,", "year": 2018 }, { "authors": [ "Matthias Plappert", "Rein Houthooft", "Prafulla Dhariwal", "Szymon Sidor", "Richard Y. Chen", "Xi Chen", "Tamim Asfour", "Pieter Abbeel", "Marcin Andrychowicz" ], "title": "Parameter space noise for exploration", "venue": "In Proceedings of the 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Tom Schaul", "Daniel Horgan", "Karol Gregor", "David Silver" ], "title": "Universal value function approximators", "venue": "In Proceedings of the 32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": null, "year": 2017 }, { "authors": [ "Bradly C. Stadie", "Sergey Levine", "Pieter Abbeel" ], "title": "Incentivizing exploration in reinforcement learning with deep predictive models", "venue": "CoRR, abs/1507.00814,", "year": 2015 }, { "authors": [ "Alexander L. Strehl", "Michael L. Littman" ], "title": "An analysis of model-based interval estimation for markov decision processes", "venue": "Journal of Computer and System Sciences,", "year": 2008 }, { "authors": [ "Richard S. Sutton", "Andrew G. 
Barto" ], "title": "Introduction to reinforcement learning", "venue": null, "year": 1998 }, { "authors": [ "Tijmen Tieleman", "Geoffrey Hinton" ], "title": "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude", "venue": null, "year": 2012 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Ziyu Wang", "Victor Bapst", "Nicolas Heess", "Volodymyr Mnih", "Rémi Munos", "Koray Kavukcuoglu", "Nando de Freitas" ], "title": "Sample efficient actor-critic with experience replay", "venue": "In Proceedings of the 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Brian D. Ziebart", "Andrew L. Maas", "J. Andrew Bagnell", "Anind K. Dey" ], "title": "Maximum entropy inverse reinforcement learning", "venue": "In Proceedings of the 33nd AAAI Conference on Artificial Intelligence,", "year": 2008 } ]
[ { "heading": "1 INTRODUCTION", "text": "Efficient exploration is important to learn a (near-) optimal policy for reinforcement learning (RL) in huge state space (Sutton & Barto, 1998). Dithering strategies like epsilon-greedy, Gaussian action noise, and Boltzmann exploration are inefficient and require exponential interactions to explore the whole state space. In contrast, deep exploration (Osband et al., 2016) overcomes this dilemma via temporally extended behaviors with a long-term vision. Recently, proposed methods include the intrinsically motivated goal exploration process (IMGEP) (Forestier et al., 2017), and maximum state entropy exploration (MSEE) (Hazan et al., 2019). In particular, IMGEP selects interesting states from the experience buffer as goals for a goal-conditioned exploration policy. In this way, exploration behaviors are naturally temporally-extended via accomplishing self-generated goals. On the other hand, MSEE aims to search for a policy such that it maximizes the entropy of state distribution. In this way, the agent can escape from the local optimum caused by insufficient state exploration.\nIn this paper, we show that the target of maximizing the support of state distribution (discovering new states) and maximizing the entropy of state distribution (unifying visited state distribution) can be both achieved by the goal-conditioned policy. From this connection, we propose an exploration method called novelty-pursuit. Abstractly, our method performs in two stages: first, it selects a visited state with the least visitation counts as the goal to reach the boundary of the explored region; then, it takes random actions to explore the non-explored region. An illustration can be seen in Figure 1. Intuitively, this process is efficient since the agent avoids exploring within the explored region. Besides, the exploration boundary will be expanded further as more and more new states are discovered. Finally, the agent will probably explore the whole state space to find the optimal policy.\nA naive implementation of the above strategies can lead to inefficient exploration and exploitation on complex environments. First, to tackle the problem of the curse of dimension and exhaustive storage when selecting the least visited states, we approximate the visitation counts via prediction errors given by Random Network Distillation (Burda et al., 2019b). Besides, we observe that previous methods used in IMGEP (Forestier et al., 2017) are inefficient to train the goal-conditioned exploration policy. We employ training techniques based on reward shaping (Ng et al., 1999) and HER (Andrychowicz et al., 2017) to accelerate training the goal-conditioned policy. Finally, we additionally train an unconditioned exploitation policy to utilize samples collected by the goal-conditioned\nexploration policy with environment-specific rewards. Thus, the exploration and exploitation is decoupled in our method.\nOur contributions are summarized as follows: (1) We disclose that goal-conditioned behaviors can also maximize the state entropy, which bridges the intrinsically motivated goal exploration process and the maximum state entropy explore. (2) We propose a method called novelty-pursuit from this connection and give practical implementations. (3) We demonstrate the exploration efficiency of the proposed method and achieve better performance on environments from the maze, Mujoco tasks, to long-horizon video games of SuperMarioBros." }, { "heading": "2 BACKGROUND", "text": "Reinforcement Learning. 
In the standard reinforcement learning framework (Sutton & Barto, 1998), a learning agent interacts with a Markov Decision Process (MDP). The sequential decision process is characterized as follows: at each time t, the agent receives a state st from the environment and selects an action at from its policy π(s, a) = Pr{at = a | st = s}; that decision is sent back to the environment, which gives a reward signal r(st, at) and transitions to the next state st+1 based on the state transition probability p^a_{ss′} = Pr{st+1 = s′ | st = s, at = a}. This process repeats until the agent encounters a terminal state, after which the process restarts. The main target of reinforcement learning is to maximize the expected discounted return E_π[Σ_{t=0}^{∞} γ^t r_t] in an unknown environment, where γ ∈ (0, 1] is a factor that balances the importance of future rewards. Without information about the environment dynamics and task-specific rewards in advance, the agent needs exploration to discover potentially valuable states. Apparently, the learned policy may be sub-optimal if the exploration strategy cannot lead the agent to explore the whole state space.\nIntrinsically Motivated Goal Exploration Process. The intrinsically motivated goal exploration process (IMGEP) (Baranes & Oudeyer, 2009; Forestier et al., 2017) relies on a goal-conditioned (or goal-parameterized) policy πg for unsupervised exploration. It involves the following steps: 1) selecting an intrinsic or interesting state from the experience buffer as the desired goal; 2) exploring with a goal-conditioned policy πg(s, a, g) = Pr{at = a | st = s, gt = g}; 3) reusing the experience for an exploitation policy πe(s, a) = Pr{at = a | st = s} to maximize the external reward. Note that the performance of the exploitation policy πe relies on the samples collected by the goal-conditioned exploration policy πg. Thus, the criterion of goal selection is crucial for IMGEP.\nMaximum State Entropy Exploration. Maximum state entropy exploration (Hazan et al., 2019) aims to search for an exploration policy π* that maximizes the entropy of the induced state distribution (or minimizes the KL-divergence between the uniform distribution and the induced state distribution) among the class of stationary policies (i.e., π* ∈ argmax_π H[d_π], where d_π is the state distribution induced by π). Without any information about the task given by the environment, we regard maximum state entropy exploration as a safe objective for subsequent exploitation." }, { "heading": "3 IMGEP WITH MAXIMUM STATE ENTROPY EXPLORATION", "text": "In this section, we bridge the intrinsically motivated goal exploration process and maximum state entropy exploration. We begin with practical considerations when maximizing the state entropy, and then analyze the exploration characteristics of the proposed goal-selection method for IMGEP.\nIn practice, an exact density estimator for a high-dimensional state space is intractable, and the state space is unknown, which leads to an empirical state distribution over visited states. This distinction is important. For example, directly optimizing the entropy of the empirical state distribution over visited states is not what we want, because it ignores the non-visited states outside the support of the empirical state distribution (see the top row in Fig. 2). Instead, we need to first maximize the support of the induced state distribution (i.e., discover new states), and then maximize the entropy of the induced state distribution with full support (see the bottom row in Fig. 2). 
In the following, we demonstrate that selecting the visited states with the least visitation counts as goals can achieve the above objectives under some assumptions.\nLet the set {1, 2, · · · , |S|} denote the state space S, let π_{1:t} denote the set of policies {π_1, π_2, · · · , π_t} over previous iterations, let π_{t+1} denote the policy of the next iteration, let x^i_t denote the cumulative visitation count of state i induced by the history policies π_{1:t}, and let N_t = Σ_{i=1}^{|S|} x^i_t denote the sum of all state visitation counts. Hence, the entropy of the empirical state distribution induced by the policies π_{1:t} is H[d_{π_{1:t}}(s)] = −Σ_{i=1}^{|S|} (x^i_t / N_t) log(x^i_t / N_t) (H_t for short), and the counting measure of the empirical state distribution support induced by the policies π_{1:t} is µ[d_{π_{1:t}}(s)] = Σ_{i=1}^{|S|} I(x^i_t ≥ 1) (µ_t for short), where I is the indicator function.\nThe theoretical analysis starts from the situation in which, at each iteration, the goal-conditioned exploration policy can only select a single state to visit, without consideration of the trajectory towards the goal. The question is which state to visit gives the most benefit in terms of maximizing the state entropy. This question is closely related to goal generation in IMGEP. To facilitate the analysis, let the unit vector e = [0, · · · , 1, · · · ] ∈ R^{|S|} denote a choice (i.e., e(i) = 1 indicates that the policy selects the i-th state to visit). Note that x_{t+1} = x_t + e_t under this assumption.\nProposition (Max Counting Measure of Support) For any state i ∈ {1, · · · , |S|} with x^i_t ≥ 0: unless the set of unvisited states K = {i | x^i_t = 0} is empty, any choice e_t such that e_t(i) = 1 for some i with x^i_t = 0 gives µ_{t+1} = µ_t + 1.\nThis proposition states that visiting non-visited states maximizes the counting measure of the induced state distribution support. The agent improves its policy by discovering new valuable states. In practical applications, we do not have access to non-visited states in advance. In other words, we cannot select these non-visited states as goals, since they are not contained in the experience buffer. To deal with this problem, we assume that the chance of discovering non-visited states is high when the agent performs random actions around the exploration boundary. The exploration boundary can be understood as the set of visited states with the least visitation counts (see Figure 1 for an illustration). This assumption is based on the fact that the total visitation counts of the visited region are large while the total visitation counts of the non-visited region are small. In conclusion, the goal-conditioned exploration policy is asked to reach the exploration boundary, and then it performs random actions to discover new states, thereby maximizing the counting measure.\nTheorem 1 (Max Entropy) For any state i ∈ {1, · · · , |S|} with x^i_t ≥ 1: for any choice e*_t such that e*_t(i) = 1 with i ∈ argmin_j x^j_t, we have e*_t ∈ argmax_{e_t} H_{t+1}.\nWe provide the proof in Appendix A.1. Theorem 1 characterizes the behavior of visiting the states with the least visitation counts once the whole state space has been explored (i.e., the stage after maximizing the counting measure of the induced state distribution support). Since Theorem 1 still suggests selecting the states with the least visitation counts as goals, the above method can also be applied to maximize the entropy of the induced state distribution. Actually, it is easy to unify the two stages via a smoothed entropy H_σ(d_π) = −E_{d_π}[log(d_π + σ)] (Hazan et al., 2019). 
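As a quick sanity check on Theorem 1, the following self-contained NumPy sketch brute-forces all single-state choices e_t and confirms that adding one visit to a least-visited state maximizes the next-step empirical entropy H_{t+1}; the counts are illustrative.

```python
import numpy as np

def entropy(counts):
    p = counts / counts.sum()
    return -(p * np.log(p)).sum()

# Illustrative visitation counts x_t over a small state space (all >= 1).
x = np.array([9.0, 4.0, 2.0, 7.0])
H_next = []
for i in range(len(x)):
    e = np.zeros_like(x)
    e[i] = 1.0                      # the choice e_t(i) = 1
    H_next.append(entropy(x + e))   # H_{t+1} after visiting state i

best = int(np.argmax(H_next))
assert best == int(np.argmin(x))    # Theorem 1: least-visited state wins
print("visiting state", best, "gives H_{t+1} =", round(H_next[best], 4))
```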
For our problem, the definition of entropy remains proper if we assign non-visited states a “dummy” visitation count between 0 and 1. In that case, Theorem 1 still holds and suggests first selecting these non-visited states and subsequently selecting the states with the least visitation counts to maximize the smoothed state entropy.\nThe proposed exploration method is called novelty-pursuit. We note that the above analysis neglects the influence of the trajectories towards the exploration boundary. However, from a practical standpoint, the fluctuation of the state distribution entropy caused by these trajectories is less significant. In fact, the goal-conditioned policy should be trained to reach the exploration boundary quickly and to spend more effort discovering new states around the exploration boundary, as our experimental results in Section 5.1 indicate." }, { "heading": "4 METHOD", "text": "In this section, we present practical implementations of the proposed method. How to approximate visitation counts in a high-dimensional space and how to estimate the exploration boundary are described in Section 4.1. We describe the training techniques for the goal-conditioned policy in Section 4.2. Finally, we introduce an exploitation policy that learns from the experience collected by the goal-conditioned exploration policy in Section 4.3. We outline the proposed exploration method in Algorithm 1." }, { "heading": "4.1 APPROXIMATING EXPLORATION BOUNDARY IN HIGH-DIMENSIONAL SPACE", "text": "Generally, computing visitation counts in a high-dimensional space is intractable. However, it is possible to build quantities related to the visitation counts. For example, Burda et al. (2019b) show that the prediction errors given by two randomly initialized networks have a strong relationship to the number of training samples on the MNIST dataset. Thus, we can use the prediction errors to sort visited states. Other approaches like pseudo-counts (Bellemare et al., 2016; Ostrovski et al., 2017) can also be applied, but we find that RND is easy to scale up.\nRND consists of two randomly initialized neural networks: a fixed network called the target network f(x; ω_t), and a trainable network called the predictor network f̂(x; ω_p). Both networks take a state s as input and output a vector of the same dimension. Each time, a batch of data is fed into the predictor network to minimize the difference between the predictor network and the target network with respect to the predictor network's parameters, as shown in Equation 1.\nmin_{ω_p} (1/K) Σ_{i=1}^{K} ‖f(s_i; ω_t) − f̂(s_i; ω_p)‖² (1)\nIn practice, we employ an online learning setting to train RND and maintain a priority queue to store the states with the highest prediction errors. In particular, after the goal-conditioned policy collects a mini-batch of transitions, this data is fed to the predictor network for training. Also, a state with a high prediction error is stored in the priority queue, and the state with the least prediction error is removed from the priority queue when it is full. This process repeats, and no historical data is reused to train the predictor network. Besides, at each iteration, a state is selected from the priority queue as a goal for the goal-conditioned policy. After achieving the goal, the exploration policy performs random actions to discover new states. 
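A minimal PyTorch sketch of the RND novelty estimator of Equation 1, together with softmax goal sampling over stored prediction errors, is given below; network sizes, the optimizer choice, and the sampling details are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class RND(nn.Module):
    """Random Network Distillation: prediction error ~ visitation novelty."""
    def __init__(self, obs_dim, out_dim=64):
        super().__init__()
        self.target = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                    nn.Linear(128, out_dim))
        self.predictor = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                       nn.Linear(128, out_dim))
        for p in self.target.parameters():   # target stays fixed (Eq. 1)
            p.requires_grad_(False)

    def error(self, s):
        # Per-state squared error ||f(s; w_t) - f_hat(s; w_p)||^2
        return ((self.target(s) - self.predictor(s)) ** 2).sum(dim=-1)

rnd = RND(obs_dim=8)
opt = torch.optim.Adam(rnd.predictor.parameters(), lr=5e-4)

batch = torch.randn(32, 8)            # a mini-batch of visited states
loss = rnd.error(batch).mean()        # Equation 1, one online update
opt.zero_grad(); loss.backward(); opt.step()

# Goal sampling: softmax over prediction errors of stored candidate states.
candidates = torch.randn(100, 8)
with torch.no_grad():
    probs = torch.softmax(rnd.error(candidates), dim=0)
goal = candidates[torch.multinomial(probs, 1)]
```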
Considering the bias due to this approximation, we sample goals from a distribution based on their prediction errors (e.g., a softmax distribution).\nAlgorithm 1 Exploration by novelty-pursuit Input: predictor network update interval K; goal-conditioned policy update interval M; mini-batch size of samples for the goal-conditioned policy N; Initialize parameter θ for the goal-conditioned exploration policy πg(s, g, a; θ). Initialize parameter ω_t for the target network f(x; ω_t), and ω_p for the predictor network f̂(x; ω_p). Initialize a buffer D_g for πg, and a priority queue Q to store states with the least visitation counts. for each iteration do\nReset the environment and get the observation o_0; Choose a goal g from the priority queue Q, and set goal success = False; for each timestep t do\nif goal success == True then Choose a random action a_t; # Explore around the exploration boundary else Choose an action a_t from πg(s_t, g, ·; θ); # Go to the exploration boundary end if Send a_t to the environment and get r^e_t, s_{t+1}; Update goal success based on (s_{t+1}, g); # Store new states and update the predictor network if t % K == 0 then\nStore transitions {s_k, g, a_k, r^e_k}_{k=t−K}^{t} into the replay buffer D_g; Calculate prediction errors for {s_k}_{k=t−K}^{t} and store them into the priority queue Q; Update the predictor network f̂(x; ω_p) using {s_k}_{k=t−K}^{t};\nend if # Update πg with reward shaping if t % M == 0 then\nUpdate πg with {s_k, g_k, a_k, r^i_k}_{k=1}^{N} sampled from D_g; end if\nend for end for" }, { "heading": "4.2 TRAINING GOAL-CONDITIONED POLICY EFFICIENTLY", "text": "Before we describe the training techniques for the goal-conditioned policy, we emphasize that training this policy doesn't require the external reward signal from the environment. However, we additionally use the external reward for the goal-conditioned policy to reduce the behavior mismatch between the goal-conditioned policy πg and the exploitation policy πe.\nFollowing multi-goal reinforcement learning (Andrychowicz et al., 2017; Plappert et al., 2018a), we manually extract goal information from the state space. Specifically, each state s is associated with an achieved goal ag, and the desired goal is denoted g. To avoid ambiguity, a goal-conditioned policy πg(s, a, g; θ)1 is asked to accomplish a desired goal g. In our settings, the achieved goal is coordinate information.\nr(ag_t, g_t) = { 1 if d(ag_t, g_t) < ϵ; 0 otherwise } (2)\nA proper reward function for the goal-conditioned policy is an indicator function with some tolerance, as shown in Equation 2. With a slight abuse of notation, we use d(ag, g) to denote some “distance” (e.g., the L1 or L2 norm) between the achieved goal ag and the desired goal g. If the distance is less than some threshold ϵ, the goal-conditioned policy receives a positive reward; otherwise, it receives zero. Note that this function is also used to judge whether the agent reaches the exploration boundary. However, training the goal-conditioned policy is slow with this sparse reward function. Next, we introduce some techniques to deal with this problem (a code sketch of both reward functions is given below). r(ag_t, g_t) = d(ag_{t−1}, g_t) − d(ag_t, g_t) (3) Reward shaping introduces additional training rewards to guide the agent, and it is invariant to the optimal policy if the shaping reward is derived from a potential function (Ng et al., 1999).\n1With respect to the input of the goal-conditioned policy, s contains ag to keep the notation simple.
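The two goal rewards just introduced can be sketched as follows, assuming a Euclidean distance d and coordinate-valued goals (both illustrative choices): the sparse indicator of Equation 2 and the shaping reward of Equation 3.

```python
import numpy as np

def sparse_goal_reward(ag, g, eps=0.5):
    """Equation 2: +1 inside the tolerance ball around the goal, else 0."""
    return float(np.linalg.norm(ag - g) < eps)

def shaped_goal_reward(ag_prev, ag, g):
    """Equation 3: decrease in distance to the goal over one step."""
    return np.linalg.norm(ag_prev - g) - np.linalg.norm(ag - g)

g = np.array([3.0, 4.0])                      # desired goal
ag_prev, ag = np.array([1.0, 1.0]), np.array([2.0, 2.0])
print(sparse_goal_reward(ag, g), shaped_goal_reward(ag_prev, ag, g))
```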
Since shaping reward function is dense, it can lead to substantial reductions in learning time. Verification of the optimal goalconditioned policy is invariant between this function and the indicator reward function is given in Appendix A.2. Alternatively, one can use also Hindsight Experience Replay (HER) (Andrychowicz et al., 2017) to train the goal-conditioned policy via replacing each episode with an achieved goal rather than one that the agent was trying to achieve. But one should be careful since HER changes the goal distribution for learning. Besides, one can also utilize past trajectories to accelerate training, which we discuss in Appendix A.3." }, { "heading": "4.3 EXPLOITING EXPERIENCE FROM EXPLORATION POLICY", "text": "Parallel to the goal-conditioned exploration, we additionally train an unconditioned exploitation policy πe, which only takes the state as input. This policy learns from experience collected by the exploration policy πg in an off-policy learning fashion. At the same time, the exploitation policy also interacts with the environment to mitigate the side effect of exploration error (Fujimoto et al., 2019), a phenomenon that off-policy learning degenerates when data from the exploration policy is not correlated to the experience generated by the exploitation policy. Note that exploitation policy is trained with an RL objective to maximize expected discounted external return. Therefore, the exploration and exploitation are naturally decoupled, which turns out to help escape the local optimum on SuperMarioBros environments. From this perspective, our method is distinguished from Go-Explore Ecoffet et al. (2019), which employs exploration followed by exploitation." }, { "heading": "5 EXPERIMENT", "text": "In this section, we aim to answer the following research questions: 1) Does novelty-pursuit effectively maximize the state entropy? 2) Do the proposed goal-selection criterion and training techniques improve performance for IMGEP? 3) How does the performance of novelty-pursuit compare with the state-of-the-art approaches in complex environments? We conduct experiments from the simple maze environments, Mujoco tasks, to long-horizon video games of SuperMarioBros to evaluate the proposed method. Detailed policy network architecture and hyperparameters are given in Appendix A.6 and A.7, respectively.\nHere we briefly describe the environment settings (see Figure 3 for illustrations). Detailed settings are given in the Appendix A.5.\nEmpty Room & Four Rooms. An agent navigates in the maze of 17×17 to find the exit (ChevalierBoisvert et al., 2018). The agent receives a time penalty until it finds the exit and receives a positive reward. The maximum return for both two environments is +1, and the minimum total reward is −1. Note that the observation is a partial image of shape (7, 7, 3). FetchReach. A 7-DOF Fetch Robotics arm (simulated in the Mujoco (Todorov et al., 2012)) is asked to grip spheres above a table. There are 4 spheres on the table, and the robot receives a positive reward of +1 when its gripper catches a sphere (the sphere will disappear after being caught) otherwise it receives a time penalty. The maximum total reward is +4, and the minimum total reward is −1. SuperMarioBros. A Mario agent with raw image observation explores to discover the flag. The reward is based on the score given by the NES simulator (Kauten, 2018) and is clipped into −1 and +1 except +50 when getting a flag. There are 24 stages in the game, but we only focus on the 1-1, 1-2, and 1-3." 
}, { "heading": "5.1 COMPARISON OF EXPLORATION EFFICIENCY", "text": "In this section, we study the exploration efficiency in terms of the state distribution entropy. We focus on the Empty Room environment because it is tractable to calculate the state distribution entropy. Note that we don’t use any external reward the observation for RND is a local-view image.\nWe consider the following baselines: 1) random: uniformly selecting actions; 2) bonus: a policy receiving exploration bonus based on the prediction errors of RND (Burda et al., 2019b); 3) novelty-\npursuit: the proposed method. We also consider three variants of our method: 4) novelty-pursuitplanning oracle: the proposed method with a perfect goal-conditioned policy; 5) novelty-pursuitcounts-oracle: the proposed method with selecting goals based on true visitation counts; 6) noveltypursuit-oracles: the proposed method with both two oracles. The results are summarized in Table 1. Note that the maximum state distribution entropy for this environment is 5.666.\nFirst, we can see that novelty-pursuit achieves a higher entropy than the random and bonus method. Though exploration bonus via prediction errors of RND may help makes an exploration-exploitation trade-off (Burda et al., 2019b), but is inefficient to a maximum state entropy exploration. We attribute this to delayed and indirect feedbacks of the exploration bonus. Second, when the planning oracle and visitation counts oracle are available, the entropy of our method roughly improves by 0.228 and 0.124, respectively. We notice that the planning-oracle avoids exploration towards the exploration boundary and spends more meaningful steps to explore around the exploration boundary, thus greatly improves the entropy. Based on this observation, we think accelerating goal-conditioned policy training is more important for our method. Actually, we find the proposed method can satisfy our need to approximate the exploration boundary via prediction errors of RND (See Appendix A.4 for more results). Third, the combination of two oracles gives a near-perfect performance (the gap between the maximum state entropy is only 0.039). This result demonstrates that goal-condition exploration behaviors presented by novelty-pursuit can maximize the state entropy and validates the analysis in Section 3." }, { "heading": "5.2 ABLATION STUDY OF GOAL-SELECTION AND TRAINING TECHNIQUES", "text": "In this section, we study the factors that contribute to our method by ablation experiments. Firstly, we focus on the criterion of goal-section in IMGEP. We compare novelty-pursuit with two other goalselection methods: 1) random-selection: selecting states randomly from the experience buffer; 2) learning-progress: selecting a feasible state (goal success rate is between 0.3 and 0.7) with probability of 0.8 and an arbitrary visited state with the probability of 0.2, which is adopted from (Forestier et al., 2017). Results on the Empty Room are shown in Figure 4. Secondly, we study how goalconditioned policy learning affects performance. We compare HER and the reward-shaping with distance reward (i.e., reward based on L1 norm in our problem) used in (Forestier et al., 2017). Results on the Empty Room are shown in Figure 5.\nFrom Figure 4, we see that IMGEP doesn’t work when randomly selecting goals, but novelty-pursuit gives a greater boost compared to the learning-progress. 
We think the reason is that this heuristic method is brittle to the estimation of the goal success rate and lacks an explicit exploration objective.\nFrom Figure 5, we find that IMGEP with HER or reward shaping outperforms IMGEP with the distance reward. As discussed in Ng et al. (1999), a reward based on distance may change the optimal behavior of the goal-conditioned exploration policy, thus hurting the performance of IMGEP.\n[Figure 4: Comparison of goal-selection (episode return vs. environment steps for novelty-pursuit, learning-progress, and random-selection).]\n[Figure 5: Comparison of training techniques (episode return vs. environment steps for reward shaping, HER, and the distance reward).]" }, { "heading": "5.3 EVALUATION ON COMPLEX ENVIRONMENTS", "text": "In this section, we compare different methods in terms of external reward. We will see that without sufficient and efficient exploration, the policy may get stuck in a local optimum. Two baseline methods using reinforcement learning are considered: 1) vanilla: DDPG (Lillicrap et al., 2016) with Gaussian action noise on FetchReach, and ACER (Wang et al., 2017) with policy entropy regularization on the others; 2) bonus: an off-policy version of Burda et al. (2019b) that combines the external reward and the intrinsic reward on top of the vanilla policy. Note that the reported results of novelty-pursuit are the performance of the exploitation policy πe rather than that of the goal-conditioned exploration policy πg. We keep the number of samples and training iterations the same for all methods.\nFirst, we consider the previously used Empty Room and Four Rooms environments. The results are shown in Figure 6. We see that the vanilla policy hardly finds the exit. Novelty-pursuit is comparable to bonus and outperforms it on the Four Rooms environment, where we observe that bonus is somewhat misled by the intrinsic reward, though we have tried many weights to balance the external and intrinsic rewards.\nSecondly, we consider the FetchReach environment; the results are shown in Figure 6. We see that novelty-pursuit can consistently grip all 4 spheres, while other methods sometimes fail to explore the whole state space efficiently enough to do so.\nFinally, we consider the SuperMarioBros environments, in which it is very hard to discover the flag due to the huge state space and the long horizon. Learning curves are plotted in Figure 7 and the final performance is listed in Table 2. We find that the vanilla method gets stuck in a local optimum on SuperMarioBros-1-1, while the bonus method and ours find a near-optimal policy. All methods perform well on SuperMarioBros-1-2 thanks to its dense rewards. On SuperMarioBros-1-3, the reward is sparse and the task is very challenging. We plot trajectories of SuperMarioBros-1-3 in Figure 8, and
Traditional exploration methods include injecting noise on action space (Mnih et al., 2015; Lillicrap et al., 2016) or parameter space (Plappert et al., 2018b; Fortunato et al., 2018), and adding the policy’s entropy regularization (Schulman et al., 2017; Mnih et al., 2016).\nFor tabular Markov Decision Process, there are lots of work utilizing confidence based reward to balance exploration and exploitation (Kearns & Singh, 2002; Strehl & Littman, 2008; Kolter & Ng, 2009; Lattimore & Hutter, 2014). Several exploration strategies for deep RL based approximation visitation counts have been proposed in high-dimension space (Bellemare et al., 2016; Ostrovski et al., 2017). Another type of exploration is curiosity-driven exploration. These methods track the uncertainty of dynamic (Stadie et al., 2015; Pathak et al., 2017; Burda et al., 2019a;b) to explore intrinsic states. Deep (temporally extended) exploration via tracking the uncertainty of value function is studied in (Osband et al., 2016). Besides, maximum (policy) entropy reinforcement learning\nencourages exploration by maximizing the cumulative sum of external reward and policy entropy (Ziebart et al., 2008; Haarnoja et al., 2017; O’Donoghue et al., 2016; Haarnoja et al., 2018).\nRecently, Hazan et al. (2019) introduce a new exploration objective: maximum state entropy. They provide an efficient algorithm when restricted to a known tabular MDP (a density estimator oracle is required for an unknown tabular MDP) and gives the theoretical analysis. We derive the criterion of goal generation based on the principle of maximum state entropy.\nOur method is based on the framework of intrinsically motivated goal exploration processes (IMGEP) (Baranes & Oudeyer, 2009; Forestier et al., 2017; Péré et al., 2018). Go-Explore (Ecoffet et al., 2019) is reminiscent of IMGEP and achieves dramatic improvement on the hard exploration problem of Montezumas Revenge. But with the assumption that the environments are resettable or deterministic and many hand-engineering designs, Go-Explore is restricted to specific environments. Our method shares a similar exploration strategy like Go-Explore, but our method is implemented practically and can be applied to stochastic environments. Importantly, we aim to answer the core question: why such defined goal-conditioned exploration is efficient?\nGoal-conditioned Policy. By taking environment observation and desired goal as inputs, the goalconditioned policy is expected to accomplish a series of tasks. Schaul et al. (2015) propose the universal value function approximator (UVFA) and train it by bootstrapping from the Bellman equation. However, training goal-condtioned policy is also still a challenging problem due to goal-condition reward is sparse (e.g. 1 for success, 0 for failure). Andrychowicz et al. (2017) propose hindsight experience replay (HER) by replacing each episode with an achieved goal rather than one that the agent was trying to achieve. This operation introduces more reward signals and serves as an implicit curriculum. Florensa et al. (2018) use a generator network to adaptively produce artificial feasible goals. We also use a goal-conditioned policy, but goals are selected from the experience buffer rather than being specified in advance. What’s more, we utilize the technique of reward shaping (Ng et al., 1999) to accelerate training.\nLearning from experience. 
Off-policy reinforcement learning algorithms such as DQN (Mnih et al., 2015), DDPG (Lillicrap et al., 2016), and ACER (Wang et al., 2017) reuse experience to improve data efficiency. Besides, how to additionally utilize (good) experience to overcome the exploration dilemma is studied in Oh et al. (2018); Goyal et al. (2019). These works are orthogonal to ours, since we focus on how to discover these valuable states." }, { "heading": "7 CONCLUSION", "text": "This paper bridges the intrinsically motivated goal exploration process (IMGEP) and maximum state entropy exploration (MSEE). We propose a method called novelty-pursuit based on this connection. We demonstrate that the proposed method is efficient at exploring the whole state space; therefore, it can escape from local optima and head towards the (near-)optimal policy. We notice that the current training technique for the exploitation policy is based on an RL objective, which may not utilize the experience collected by the exploration policy efficiently. Theoretically, the influence of the trajectories towards the exploration boundary should also be considered. We leave these for future work." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 PROOF OF THEOREM 1", "text": "Suppose we have two choices, state i and state j; we want to compare the difference g(i, j) between the entropy H_i[d_{π_{1:t+1}}] obtained by visiting state i and the entropy H_j[d_{π_{1:t+1}}] obtained by visiting state j. Let N_t = Σ_i x^i_t denote the total visitation count over all states. Note that the entropy difference between the two choices can be attributed to the terms involving x^i and x^j:\ng(i, j) = H_i[d_{π_{1:t+1}}] − H_j[d_{π_{1:t+1}}] = ( −((x^i_t + 1)/(N_t + 1)) log((x^i_t + 1)/(N_t + 1)) − (x^j_t/(N_t + 1)) log(x^j_t/(N_t + 1)) ) − ( −((x^j_t + 1)/(N_t + 1)) log((x^j_t + 1)/(N_t + 1)) − (x^i_t/(N_t + 1)) log(x^i_t/(N_t + 1)) ) = ( ((x^j_t + 1)/(N_t + 1)) log((x^j_t + 1)/(N_t + 1)) − (x^j_t/(N_t + 1)) log(x^j_t/(N_t + 1)) ) − ( ((x^i_t + 1)/(N_t + 1)) log((x^i_t + 1)/(N_t + 1)) − (x^i_t/(N_t + 1)) log(x^i_t/(N_t + 1)) ) (4)\nLet f(x) = ((x + 1)/(N_t + 1)) log((x + 1)/(N_t + 1)) − (x/(N_t + 1)) log(x/(N_t + 1)), which yields\ng(i, j) = f(x^j_t) − f(x^i_t) (5)\nBy looking at the derivative of f(x), we know that f(x) is a monotonically increasing function:\nf′(x) = (1/(N_t + 1)) log(1 + 1/x) > 0 (6)\nThus, for any x^i_t < x^j_t, we have g(i, j) > 0. In conclusion, unless state i has the least visitation counts, we can always find another state j with x^j_t < x^i_t to increase the entropy. Hence, visiting the states with the smallest visitation counts is optimal." }, { "heading": "A.2 REWARD SHAPING FOR MULTI-GOAL REINFORCEMENT POLICY", "text": "Reward shaping is invariant to the optimal policy under some conditions (Ng et al., 1999). Here we verify that the reward shaping introduced by our method doesn't change the optimal goal-conditioned policy. Summing the shaping rewards of Equation 3 along a trajectory telescopes:\nΣ_{t=1}^{T} [ d(ag_t, g) − d(ag_{t+1}, g) ] = d(ag_1, g) − d(ag_2, g) + d(ag_2, g) − d(ag_3, g) + · · · + d(ag_T, g) − d(ag_{T+1}, g) = d(ag_1, g) − d(ag_{T+1}, g) (7)\nFor the optimal policy πg, d(ag_{T+1}, g) = 0, while d(ag_1, g) is a constant. Thus, for a fixed g, the optimal policy πg induced by the reward shaping is identical to the one induced by the sparse reward in Equation 2 (a short numeric check of this identity is given below)." }, { "heading": "A.3 TRAINING GOAL-CONDITIONED POLICY WITH PAST TRAJECTORIES", "text": "In fact, training the goal-conditioned policy in our problem differs from the settings of multi-goal reinforcement learning (Andrychowicz et al., 2017; Plappert et al., 2018a): the goal is selected from visited states rather than from non-visited states. Thus, we can utilize past trajectories to accelerate training with supervised learning. 
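A short numeric check of the telescoping identity in Equation 7 (Appendix A.2), using an arbitrary illustrative trajectory of achieved goals:

```python
import numpy as np

g = np.array([5.0, 5.0])                          # desired goal
ags = [np.array(p, dtype=float)                   # achieved goals ag_1..ag_{T+1}
       for p in [(0, 0), (1, 3), (2, 2), (4, 4), (5, 5)]]

d = lambda a: np.linalg.norm(a - g)               # the distance of Equation 3
shaped_return = sum(d(ags[t - 1]) - d(ags[t]) for t in range(1, len(ags)))
assert np.isclose(shaped_return, d(ags[0]) - d(ags[-1]))   # Equation 7
print(round(float(shaped_return), 4))             # = d(ag_1, g) since ag_{T+1} = g
```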
The optimization problem is defined in Equation 8. Note that we cannot rely on this information in stochastic environments like SuperMarioBros.\nmin_θ Σ_{(a_t, o_t)∼τ(g)} −log πg(a_t | o_t, g; θ) (8)\nwhere τ(g) is a trajectory that covers the goal g in the previous exploration process." }, { "heading": "A.4 ADDITIONAL RESULTS", "text": "Empty Room. We depict the exploration boundary given by true visitation counts and the one estimated by our method in Figure 9. The agent starts from the top-left corner and follows a random policy. The exploration boundary, shown in black, is the top 10% of visited states with the least visitation counts or the largest prediction errors.\nSuperMarioBros. In Figure 10, we provide additional trajectory visualizations on SuperMarioBros-1-1 and SuperMarioBros-1-2. Trajectories are plotted with the same number of samples (18M). The vanilla method gets stuck in the local optimum on SuperMarioBros-1-1, even with policy entropy regularization. In addition, only our method can get the flag on SuperMarioBros-1-2." }, { "heading": "A.5 ENVIRONMENT PREPROCESSING", "text": "Maze. Different from Chevalier-Boisvert et al. (2018), we only use the image and coordinate information as inputs. We only consider four actions: turn left, turn right, move forward, and move backward. The maximum episode length is 190 for Empty Room and 500 for Four Rooms. At each step, the agent receives a time penalty of 1/(max episode length) and receives +1 when finding the exit.\nFetchReach. We implement this environment based on FetchReach-v0 in Gym (Brockman et al., 2016). The maximum episode length is 50. The locations of the four spheres are (1.20, 0.90, 0.65), (1.10, 0.72, 0.45), (1.20, 0.50, 0.60), and (1.45, 0.50, 0.55). When sampling goals, we remove spheres outside of the table, i.e., the valid x range is (1.0, 1.5), the valid y range is (0.45, 1.05), and the valid z range is (0.45, 0.65).\nSuperMarioBros. We implement this environment based on Kauten (2018) with Gym wrappers. Preprocessing includes grey-scaling, observation downsampling, external reward clipping (except for the +50 for getting the flag), frame stacking of 4, and sticky actions with a probability of 0.25. The maximum episode length is 800. The environment restarts when the agent dies." }, { "heading": "A.6 NETWORK ARCHITECTURE", "text": "We use a convolutional neural network (CNN) for Empty Room, Four Rooms, and the video games of SuperMarioBros, and a multi-layer perceptron (MLP) for the FetchReach environment. Network architecture designs and parameters are based on baselines (Dhariwal et al., 2017). For each environment, RND uses a similar network architecture; the predictor network has additional MLP layers compared to the target network." }, { "heading": "A.7 HYPERPARAMETERS", "text": "Table 3 gives the hyperparameters for ACER (Wang et al., 2017) on the maze and SuperMarioBros environments (the learning algorithm is RMSProp (Tieleman & Hinton, 2012)). DDPG (Lillicrap et al., 2016), used in the FetchReach environment, is based on the HER implementation in baselines (Dhariwal et al., 2017), except that the actor learning rate is 0.0005. We run 4 parallel environments for DDPG, and the size of the priority queue is also 100. 
As for the predictor network, its learning rate is 0.0005 and the optimization algorithm is Adam (Kingma & Ba, 2015) for all experiments; the batch size of the training data is equal to the product of the rollout length and the number of parallel environments.\nThe goal-conditioned policy is trained with the shaping rewards defined in Equation 3 plus external rewards, which helps reduce the behavior mismatch between it and the exploitation policy. The weight on the external reward is 1 for all environments except 2 for SuperMarioBros. For the bonus method used in Section 5, the weight β that balances the exploration bonus and the external reward (i.e., r′ = r_ext + β·r_int) is 0.1 for Empty Room and Four Rooms, 0.01 for FetchReach, 1.0 for SuperMarioBros-1-1 and SuperMarioBros-1-3, and 0.1 for SuperMarioBros-1-2. We also normalize the intrinsic reward by dividing the intrinsic rewards by a running estimate of the standard deviation of the sum of discounted intrinsic rewards." } ]
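A minimal sketch of the reward combination and intrinsic-reward normalization described in Appendix A.7; the Welford-style running estimator and all constants are assumptions for illustration, not the exact implementation.

```python
import numpy as np

class RunningStd:
    """Welford-style running standard deviation (illustrative)."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
    def update(self, x):
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)
    def std(self):
        s = np.sqrt(self.m2 / max(self.n - 1, 1))
        return s if s > 0 else 1.0

def combine(r_ext, r_int, beta, stats, disc_return, gamma=0.99):
    """r' = r_ext + beta * (normalized r_int), as in Appendix A.7."""
    disc_return = gamma * disc_return + r_int     # discounted intrinsic return
    stats.update(disc_return)
    return r_ext + beta * r_int / stats.std(), disc_return

stats, disc = RunningStd(), 0.0
for r_ext, r_int in [(0.0, 2.3), (0.0, 1.1), (1.0, 0.4)]:
    r, disc = combine(r_ext, r_int, beta=0.1, stats=stats, disc_return=disc)
    print(round(float(r), 3))
```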
2019
null
SP:33a9dffdcc2a5fc2a30a5a2e9b8cb65cd1010bed
[ "Authors proposed an enhanced Pointer-Generator model called SPNet. The key difference between SPNet and PG are the separate handling or using of speaker role, semantic slot and domain labels. Authors also proposed a new metrics called Critical Information Completeness (CIC) to address ROUGE's weakness in assessing if key information is missing in the output.", "The authors propose a new abstractive dialog summarization dataset and task based on the MultiWOZ dataset. Unlike previous work which targets very short descriptions of dialog transcripts (e.g. 'industrial designer presentation'), this paper looks to generate long descriptions of the entire dialog using the prompts in the MultiWOZ task. The authors also extend the pointer generator network of See et al. (2018) to use speaker, semantic slot and domain information. They show that this new model (SPNet) outperforms the baseline on existing automatic metrics, on a new metric tuned to measure recall on slots (dubbed CIC), and a thorough human evaluation." ]
ABSTRACTIVE DIALOG SUMMARIZATION WITH SEMANTIC SCAFFOLDS
[ { "affiliations": [], "name": "SEMANTIC SCAFFOLDS" } ]
[ { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "In ICLR 2015 : International Conference on Learning Representations", "year": 2015 }, { "authors": [ "Dzmitry Bahdanau", "Philemon Brakel", "Kelvin Xu", "Anirudh Goyal", "Ryan Lowe", "Joelle Pineau", "Aaron C. Courville", "Yoshua Bengio" ], "title": "An actor-critic algorithm for sequence prediction", "venue": "In ICLR 2017 : International Conference on Learning Representations", "year": 2017 }, { "authors": [ "David M. Blei", "Andrew Y. Ng", "Michael I. Jordan" ], "title": "Latent dirichlet allocation", "venue": "Journal of Machine Learning Research,", "year": 2003 }, { "authors": [ "Pawe Budzianowski", "Tsung-Hsien Wen", "Bo-Hsiang Tseng", "Iigo Casanueva", "Stefan Ultes", "Osman Ramadan", "Milica Gai" ], "title": "Multiwoz - a large-scale multi-domain wizard-of-oz dataset for taskoriented dialogue modelling", "venue": "arXiv preprint arXiv:1810.00278,", "year": 2018 }, { "authors": [ "Asli Celikyilmaz", "Antoine Bosselut", "Xiaodong He", "Yejin Choi" ], "title": "Deep communicating agents for abstractive summarization", "venue": "In NAACL HLT 2018: 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2018 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Michel Galley" ], "title": "A skip-chain conditional random field for ranking meeting utterances by importance", "venue": "In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing,", "year": 2006 }, { "authors": [ "Sebastian Gehrmann", "Yuntian Deng", "Alexander M. Rush" ], "title": "Bottom-up abstractive summarization", "venue": "EMNLP", "year": 2018 }, { "authors": [ "Chih-Wen Goo", "Yun-Nung Chen" ], "title": "Abstractive dialogue summarization with sentence-gated modeling optimized by dialogue acts", "venue": "arXiv preprint arXiv:1809.05715,", "year": 2018 }, { "authors": [ "Jiatao Gu", "Zhengdong Lu", "Hang Li", "Victor O.K. Li" ], "title": "Incorporating copying mechanism in sequence-to-sequence learning", "venue": "In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2016 }, { "authors": [ "Karl Moritz Hermann", "Tom Koisk", "Edward Grefenstette", "Lasse Espeholt", "Will Kay", "Mustafa Suleyman", "Phil Blunsom" ], "title": "Teaching machines to read and comprehend", "venue": "Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume", "year": 2015 }, { "authors": [ "Andrew Hoang", "Antoine Bosselut", "Asli Celikyilmaz", "Yejin Choi" ], "title": "Efficient adaptation of pretrained transformers for abstractive summarization", "venue": "arXiv preprint arXiv:1906.00138,", "year": 2019 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Guillaume Klein", "Yoon Kim", "Yuntian Deng", "Jean Senellart", "Alexander M. Rush" ], "title": "Opennmt: Open-source toolkit for neural machine translation", "venue": "In Proceedings of ACL 2017, System Demonstrations,", "year": 2017 }, { "authors": [ "John D. 
Lafferty", "Andrew McCallum", "Fernando C.N. Pereira" ], "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "venue": "Proceedings of the Eighteenth International Conference on Machine Learning,", "year": 2001 }, { "authors": [ "Manling Li", "Lingyu Zhang", "Heng Ji", "Richard J. Radke" ], "title": "Keep meeting summaries on topic: Abstractive multi-modal meeting summarization", "venue": "In ACL 2019 : The 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Chin-Yew Lin" ], "title": "Rouge: A package for automatic evaluation of summaries", "venue": "In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop,", "year": 2004 }, { "authors": [ "Sameer Maskey", "Julia Hirschberg" ], "title": "Comparing lexical, acoustic/prosodic, structural and discourse features for speech summarization", "venue": "In INTERSPEECH, pp", "year": 2005 }, { "authors": [ "Gabriel Murray", "Steve Renals", "Jean Carletta" ], "title": "Extractive summarization of meeting recordings", "venue": "In INTERSPEECH,", "year": 2005 }, { "authors": [ "Ramesh Nallapati", "Bowen Zhou", "Ccero Nogueira dos Santos", "aglar Glehre", "Bing Xiang" ], "title": "Abstractive text summarization using sequence-to-sequence rnns and beyond", "venue": "In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning,", "year": 2016 }, { "authors": [ "Haojie Pan", "Junpei Zhou", "Zhou Zhao", "Yan Liu", "Deng Cai", "Min Yang" ], "title": "Dial2desc: End-to-end dialogue description generation", "venue": "arXiv preprint arXiv:1811.00185,", "year": 2018 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher D. Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "Alec Radford", "Karthik Narasimhan", "Tim Salimans", "Ilya Sutskever" ], "title": "Improving language understanding by generative pre-training", "venue": "URL https://s3-us-west-2. amazonaws. com/openaiassets/researchcovers/languageunsupervised/language understanding paper", "year": 2018 }, { "authors": [ "Marc’Aurelio Ranzato", "Sumit Chopra", "Michael Auli", "Wojciech Zaremba" ], "title": "Sequence level training with recurrent neural networks", "venue": "In ICLR 2016 : International Conference on Learning Representations", "year": 2016 }, { "authors": [ "Alexander M. Rush", "Sumit Chopra", "Jason Weston" ], "title": "A neural attention model for abstractive sentence summarization", "venue": "arXiv preprint arXiv:1509.00685,", "year": 2015 }, { "authors": [ "Abigail See", "Peter J. Liu", "Christopher D. Manning" ], "title": "Get to the point: Summarization with pointer-generator networks", "venue": "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2017 }, { "authors": [ "Shikhar Sharma", "Jing He", "Kaheer Suleman", "Hannes Schulz", "Philip Bachman" ], "title": "Natural language generation in dialogue using lexicalized and delexicalized data", "venue": "In International Conference on Learning Representations (ICLR) Workshop,", "year": 2017 }, { "authors": [ "Ilya Sutskever", "Oriol Vinyals", "Quoc V. 
Le" ], "title": "Sequence to sequence learning with neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2014 }, { "authors": [ "Zhaopeng Tu", "Zhengdong Lu", "Yang Liu", "Xiaohua Liu", "Hang Li" ], "title": "Modeling coverage for neural machine translation", "venue": "arXiv preprint arXiv:1601.04811,", "year": 2016 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Lu Wang", "Claire Cardie" ], "title": "Domain-independent abstract generation for focused meeting summarization", "venue": "In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2013 }, { "authors": [ "Lu Wang", "Claire Cardie" ], "title": "Summarizing decisions in spoken meetings", "venue": "arXiv preprint arXiv:1606.07965,", "year": 2016 }, { "authors": [ "Tsung-Hsien Wen", "Milica Gasic", "Dongho Kim", "Nikola Mrksic", "Pei hao Su", "David Vandyke", "Steve J. Young" ], "title": "Stochastic language generation in dialogue using recurrent neural networks with convolutional sentence reranking", "venue": "In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue,", "year": 2015 }, { "authors": [ "Xingxing Zhang", "Furu Wei", "Ming Zhou" ], "title": "Hibert: Document level pre-training of hierarchical bidirectional transformers for document summarization", "venue": "In ACL 2019 : The 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Summarization aims to condense a piece of text to a shorter version, retaining the critical information. On dialogs, summarization has various promising applications in the real world. For instance, the automatic doctor-patient interaction summary can save doctors’ massive amount of time used for filling medical records. There is also a general demand for summarizing meetings in order to track project progress in the industry. Generally, multi-party conversations with interactive communication are more difficult to summarize than single-speaker documents. Hence, dialog summarization will be a potential field in summarization track.\nThere are two types of summarization: extractive and abstractive. Extractive summarization selects sentences or phrases directly from the source text and merges them to a summary, while abstractive summarization attempts to generate novel expressions to condense information. Previous dialog summarization research mostly study extractive summarization (Murray et al., 2005; Maskey & Hirschberg, 2005). Extractive methods merge selected important utterances from a dialog to form summary. Because dialogs are highly dependant on their histories, it is difficult to produce coherent discourses with a set of non-consecutive conversation turns. Therefore, extractive summarization is not the best approach to summarize dialogs. However, most modern abstractive methods focus on single-speaker documents rather than dialogs due to the lack of dialog summarization corpora. Popular abstractive summarization dataset like CNN/Daily Mail (Hermann et al., 2015) is on news documents. AMI meeting corpus (McCowan et al., 2005) is the common benchmark, but it only has extractive summary.\nIn this work, we introduce a dataset for abstractive dialog summarization based on MultiWOZ (Budzianowski et al., 2018). Seq2Seq models such as Pointer-Generator (See et al., 2017) have achieved high-quality summaries of news document. However, directly applying a news summarizer to dialog results in two drawbacks: informative entities such as place name are difficult to capture precisely and contents in different domains are summarized unequally. To address these problems, we propose Scaffold Pointer Network (SPNet). SPNet incorporates three types of semantic scaffolds in dialog: speaker role, semantic slot, and dialog domain. Firstly, SPNet adapts separate encoder\nto attentional Seq2Seq framework, producing distinct semantic representations for different speaker roles. Then, our method inputs delexicalized utterances for producing delexicalized summary, and fills in slot values to generate complete summary. Finally, we incorporate dialog domain scaffold by jointly optimizing dialog domain classification task along with the summarization task. We evaluate SPNet with both automatic and human evaluation metrics on MultiWOZ. SPNet outperforms Pointer-Generator (See et al., 2017) and Transformer (Vaswani et al., 2017) on all the metrics." }, { "heading": "2 RELATED WORK", "text": "Rush et al. (2015) first applied modern neural models to abstractive summarization. Their approach is based on Seq2Seq framework (Sutskever et al., 2014) and attention mechanism (Bahdanau et al., 2015), achieving state-of-the-art results on Gigaword and DUC-2004 dataset. Gu et al. (2016) proposed copy mechanism in summarization, demonstrating its effectiveness by combining the advantages of extractive and abstractive approach. See et al. 
(2017) applied pointing (Vinyals et al., 2015) as the copy mechanism and used a coverage mechanism (Tu et al., 2016) to discourage repetition. Most recently, reinforcement learning (RL) has been employed in abstractive summarization. RL-based approaches directly optimize the objectives of summarization (Ranzato et al., 2016; Celikyilmaz et al., 2018). However, deep reinforcement learning approaches are difficult to train and more prone to exposure bias (Bahdanau et al., 2017).

Recently, pre-training methods have become popular in NLP applications. BERT (Devlin et al., 2018) and GPT (Radford et al., 2018) have achieved state-of-the-art performance on many tasks, including summarization. For instance, Zhang et al. (2019) proposed a method to pre-train a hierarchical document encoder for extractive summarization. Hoang et al. (2019) proposed two strategies for incorporating a pre-trained model (GPT) into an abstractive summarizer and achieved better performance. However, there has not been much research on adapting pre-trained models to dialog summarization.

Dialog summarization, specifically meeting summarization, has been studied extensively. Previous work generally focused on statistical machine learning methods for extractive dialog summarization: Galley (2006) used skip-chain conditional random fields (CRFs) (Lafferty et al., 2001) as a ranking method in extractive meeting summarization. Wang & Cardie (2013) compared support vector machines (SVMs) (Cortes & Vapnik, 1995) with LDA-based topic models (Blei et al., 2003) for producing decision summaries. However, abstractive dialog summarization has been less explored due to the lack of a suitable benchmark. Recent work (Wang & Cardie, 2016; Goo & Chen, 2018; Pan et al., 2018) created abstractive dialog summary benchmarks from existing dialog corpora. Goo & Chen (2018) annotated topic descriptions in the AMI meeting corpus as summaries. However, the topics they defined are coarse, such as “industrial designer presentation”. They also proposed a model with a sentence-gated mechanism incorporating dialog acts to perform abstractive summarization. Moreover, Li et al. (2019) first built a model to summarize audio-visual meeting data with an abstractive method. However, previous work has not investigated the utilization of semantic patterns in dialog, so we explore this in depth in our work." }, { "heading": "3 PROPOSED METHOD", "text": "As discussed above, state-of-the-art document summarizers are not directly applicable in conversation settings. We propose Scaffold Pointer Network (SPNet) based on Pointer-Generator (See et al., 2017). SPNet incorporates three types of semantic scaffolds to improve abstractive dialog summarization: speaker role, semantic slot and dialog domain." }, { "heading": "3.1 BACKGROUND", "text": "We first introduce Pointer-Generator (See et al., 2017). It is a hybrid of the typical Seq2Seq attention model (Nallapati et al., 2016) and the pointer network (Vinyals et al., 2015). The Seq2Seq framework encodes the source sequence and generates the target sequence with the decoder. The input sequence is fed into the encoder token by token, producing the encoder hidden states $h_i$ at each encoding step. The decoder receives the word embedding of the previous word and generates a distribution to decide the target element at this step, retaining the decoder hidden states $s_t$. In Pointer-Generator, the attention distribution $a^t$ is computed as in Bahdanau et al.
(2015):

$e_i^t = v^T \tanh(W_h h_i + W_s s_t + b_{attn}), \qquad a^t = \mathrm{softmax}(e^t) \qquad (1)$

where $W_h$, $W_s$, $v$ and $b_{attn}$ are all learnable parameters.

With the attention distribution $a^t$, the context vector $h_t^*$ is computed as the weighted sum of the encoder's hidden states. The context vector is regarded as the attentional information in the source text:

$h_t^* = \sum_i a_i^t h_i \qquad (2)$

Pointer-Generator differs from the typical Seq2Seq attention model in the generation process. The pointing mechanism combines copying words directly from the source text with generating words from a fixed vocabulary. The generation probability $p_{gen}$ is calculated as "a soft switch" to choose between copying and generation:

$p_{gen} = \sigma(w_{h^*}^T h_t^* + w_s^T s_t + w_x^T x_t + b_{ptr}) \qquad (3)$

where $x_t$ is the decoder input, and $w_{h^*}$, $w_s$, $w_x$ and $b_{ptr}$ are all learnable parameters. $\sigma$ is the sigmoid function, so the generation probability $p_{gen}$ has a range of $[0, 1]$.

The ability to select between copying and generation corresponds to a dynamic vocabulary. The pointer network forms an extended vocabulary for the copied tokens, including all the out-of-vocabulary (OOV) words that appear in the source text. The final probability distribution $P(w)$ over the extended vocabulary is computed as follows:

$P_{vocab} = \mathrm{softmax}(V'(V[s_t, h_t^*] + b) + b'), \qquad P(w) = p_{gen} P_{vocab}(w) + (1 - p_{gen}) \sum_{i: w_i = w} a_i^t \qquad (4)$

where $P_{vocab}$ is the distribution over the original vocabulary, and $V'$, $V$, $b$ and $b'$ are learnable parameters used to calculate this distribution." }, { "heading": "3.2 SCAFFOLD POINTER NETWORK (SPNET)", "text": "Our Scaffold Pointer Network (depicted in Figure 1) is based on Pointer-Generator (See et al., 2017). The contribution of SPNet is three-fold: separate encoding for different roles, incorporating the semantic slot scaffold, and incorporating the dialog domain scaffold." }, { "heading": "3.2.1 SPEAKER ROLE SCAFFOLD", "text": "Our encoder-decoder framework employs separate encoding for the different speakers in the dialog. User utterances $x_t^{usr}$ and system utterances $x_t^{sys}$ are fed into a user encoder and a system encoder separately to obtain encoder hidden states $h_i^{usr}$ and $h_i^{sys}$. The attention distributions and context vectors are calculated as described in Section 3.1. In order to merge these two encoders in our framework, the decoder's hidden state $s_0$ is initialized as:

$s_0 = \mathrm{concat}(h_T^{usr}, h_T^{sys}) \qquad (5)$

The pointing mechanism in our model follows Equation 3, and we obtain the context vector $h_t^*$ as:

$h_t^* = \mathrm{concat}\left(\sum_i a_i^{t,usr} h_i^{usr}, \sum_i a_i^{t,sys} h_i^{sys}\right) \qquad (6)$" },
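To make the generation step concrete, here is a minimal NumPy sketch of how Equations 1-6 fit together; the array names, dimensions, random parameters, and the joint normalization of the copy distribution over the two encoders are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the Pointer-Generator output distribution (Eqs. 1-4)
# with the two-encoder context of SPNet (Eqs. 5-6). Bias terms are omitted
# for brevity; shapes and names are illustrative assumptions.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d, V = 8, 20                          # hidden size, vocabulary size
n_usr, n_sys = 5, 4                   # number of user / system source tokens

h_usr = rng.normal(size=(n_usr, d))   # user-encoder states h_i^usr
h_sys = rng.normal(size=(n_sys, d))   # system-encoder states h_i^sys
s_t = rng.normal(size=d)              # decoder state s_t
x_t = rng.normal(size=d)              # decoder input embedding x_t
W_h, W_s, v = rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d)

def attention(H):                     # Eq. 1 for one encoder
    e = np.tanh(H @ W_h.T + s_t @ W_s.T) @ v
    return softmax(e)

a_usr, a_sys = attention(h_usr), attention(h_sys)
# Eq. 6: context vector concatenates both attended summaries
h_star = np.concatenate([a_usr @ h_usr, a_sys @ h_sys])

# Eq. 3: soft switch between generating and copying
w_hs, w_s, w_x = rng.normal(size=2 * d), rng.normal(size=d), rng.normal(size=d)
p_gen = 1.0 / (1.0 + np.exp(-(w_hs @ h_star + w_s @ s_t + w_x @ x_t)))

# Eq. 4: mix the vocabulary distribution with the copy distribution over
# source-token ids; src_ids maps each source position to a vocabulary id.
P_vocab = softmax(rng.normal(size=V))     # stand-in for softmax(V'(V[s_t,h*]+b)+b')
src_ids = rng.integers(0, V, size=n_usr + n_sys)
att = np.concatenate([a_usr, a_sys]) / 2  # jointly normalized (an assumption)
copy = np.zeros(V)
np.add.at(copy, src_ids, att)
P_w = p_gen * P_vocab + (1.0 - p_gen) * copy
assert np.isclose(P_w.sum(), 1.0)
```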
{ "heading": "3.2.2 SEMANTIC SLOT SCAFFOLD", "text": "We integrate the semantic slot scaffold by performing delexicalization on the original dialogs. Delexicalization is a common pre-processing step in dialog modeling. Specifically, delexicalization replaces slot values with their semantic slot names (e.g. replacing 18:00 with [time]). It is easier for a language model to process delexicalized texts, as they have a reduced vocabulary size. But the generated sentences lack semantic information due to the delexicalization. Some previous dialog system research ignored this issue (Wen et al., 2015) or output a single completed delexicalized utterance (Sharma et al., 2017) as the generated response. We propose to perform delexicalization in dialog summarization, since delexicalized utterances can simplify dialog modeling. We fill the slots in the generated templates with the copy and pointing mechanism.

We first train the model with the delexicalized utterances. The attention distribution $a^t$ over the source tokens then instructs the decoder to fill up the slots with lexicalized values:

$\mathrm{value}(w_{slot}) = \mathrm{value}(w_{i^*}), \qquad i^* = \arg\max_{i:\ slot(w_i) = w_{slot}} a_i^t \qquad (7)$

Note that $w_{slot}$ specifies the tokens that represent a slot name (e.g. [hotel place], [time]). The decoder directly copies the lexicalized value $\mathrm{value}(w_i)$ conditioned on the attention distribution $a_i^t$. If $w$ is not a slot token, then the probability $P(w)$ is calculated as in Equation 4." }, { "heading": "3.2.3 DIALOG DOMAIN SCAFFOLD", "text": "We integrate the dialog domain scaffold through a multi-task framework. The dialog domain indicates the content of the conversation task, for example booking a hotel, restaurant, or taxi in the MultiWOZ dataset. Generally, the content in different domains varies, so multi-domain summarization is more difficult than single-domain summarization. We include domain classification as an auxiliary task to incorporate the prior that different domains have different content. Feedback from the domain classification task provides domain-specific information for the encoder to learn better representations. For domain classification, we feed the concatenated encoder hidden state through a binary classifier with two linear layers, producing the domain probability vector $d$. The $i$th element $d_i$ of $d$ represents the probability of the $i$th domain:

$d = \sigma(U'(\mathrm{ReLU}(U[h_T^{usr}, h_T^{sys}] + b_d)) + b'_d) \qquad (8)$

where $U$, $U'$, $b_d$ and $b'_d$ are all trainable parameters in the classifier. We denote the loss function of summarization as $loss_1$ and that of domain classification as $loss_2$. Assuming the target word at timestep $t$ is $w_t^*$, $loss_1$ is the arithmetic mean of the negative log-likelihood of $w_t^*$ over the generated sequence:

$loss_1 = \frac{1}{T} \sum_{t=0}^{T} -\log P(w_t^*) \qquad (9)$

The domain classification task is a multi-label binary classification problem. We use the binary cross-entropy loss between the $i$th domain label $\hat{d}_i$ and the predicted probability $d_i$ for this task:

$loss_2 = -\frac{1}{|D|} \sum_{i=1}^{|D|} \left[ \hat{d}_i \log d_i + (1 - \hat{d}_i) \log(1 - d_i) \right] \qquad (10)$

where $|D|$ is the number of domains. Finally, we reweight the classification loss with the hyperparameter $\lambda$, and the objective function is:

$loss = loss_1 + \lambda\, loss_2 \qquad (11)$" },
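The joint objective of Equations 9-11 can be sketched in a few lines; the shapes and the stand-in prediction distributions below are assumptions for illustration only, not the authors' training code.

```python
# Minimal NumPy sketch of SPNet's multi-task objective (Eqs. 9-11).
import numpy as np

rng = np.random.default_rng(0)
T, V, D = 6, 20, 7          # decoder steps, vocab size, number of domains
lam = 0.5                   # loss weight lambda (the paper's reported setting)

P_w = rng.dirichlet(np.ones(V), size=T)      # P(w) per decoding step (Eq. 4)
targets = rng.integers(0, V, size=T)         # target word ids w*_t
loss1 = -np.log(P_w[np.arange(T), targets]).mean()                 # Eq. 9

d_hat = rng.integers(0, 2, size=D).astype(float)   # gold domain labels
d = rng.uniform(0.01, 0.99, size=D)                # predicted probabilities (Eq. 8)
loss2 = -(d_hat * np.log(d) + (1 - d_hat) * np.log(1 - d)).mean()  # Eq. 10

loss = loss1 + lam * loss2                                         # Eq. 11
```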
{ "heading": "4 EXPERIMENTAL SETTINGS", "text": "" }, { "heading": "4.1 DATASET", "text": "We validate SPNet on the MultiWOZ-2.0 dataset (Budzianowski et al., 2018). MultiWOZ consists of multi-domain conversations between a tourist and an information center clerk on various booking tasks or domains, such as booking restaurants, hotels, taxis, etc. There are 10,438 dialogs, spanning over seven domains. 3,406 of them are single-domain (8.93 turns on average) and 7,302 are multi-domain (15.39 turns on average). During MultiWOZ data collection, instructions are provided for crowd workers to perform the task. We use these instructions as the dialog summary; an example is shown in Table 2. Dialog domain labels are extracted from the existing MultiWOZ annotations. In the experiments, we split the dataset into 8,438 training, 1,000 validation, and 1,000 testing dialogs." }, { "heading": "4.2 EVALUATION METRICS", "text": "ROUGE (Lin, 2004) is a standard metric for summarization, designed to measure the surface word alignment between a generated summary and a human-written summary. We evaluate our model with ROUGE-1, ROUGE-2 and ROUGE-L. They measure the word overlap, bigram overlap, and longest common subsequence between the reference summary and the generated summary, respectively. We obtain ROUGE scores using the files2rouge package (https://github.com/pltrdy/files2rouge). However, ROUGE is insufficient to measure summarization performance. The following example shows its limitations:

Reference: You are going to [restaurant name] at [time]. Summary: You are going to [restaurant name] at.

In this case, the summary has a high ROUGE score, as it has a considerable proportion of word overlap with the reference summary. However, it still has poor relevance and readability, for leaving out one of the most critical pieces of information: [time]. ROUGE treats each word equally in computing n-gram overlap, while the informativeness actually varies: common words or phrases (e.g. "You are going to") contribute significantly to the ROUGE score and readability, but they are almost irrelevant to the essential content. The semantic slot values (e.g. [restaurant name], [time]) are more essential than other words in the summary. However, ROUGE does not take this into consideration. To address this drawback of ROUGE, we propose a new evaluation metric: Critical Information Completeness (CIC). Formally, CIC is the recall of semantic slot information between a candidate summary and a reference summary. CIC is defined as follows:

$CIC = \frac{\sum_{v \in V} \mathrm{Count}_{match}(v)}{m} \qquad (12)$

where $V$ stands for the set of delexicalized values in the reference summary, $\mathrm{Count}_{match}(v)$ is the number of values co-occurring in the candidate summary and the reference summary, and $m$ is the number of values in the set $V$. In our experiments, CIC is computed as the arithmetic mean over all the dialog domains to reflect the overall performance.

CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with pre-defined essential entities. For example, in news summarization the proper nouns are the critical information to retain." },
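Since CIC (Equation 12) is simply a recall over slot values, a small sketch suffices; the slot extraction is assumed to happen elsewhere, and the function below only illustrates the counting under that assumption.

```python
# Minimal sketch of the proposed CIC metric (Eq. 12): the recall of
# delexicalized slot values from the reference found in the candidate.
def cic(candidate_slots, reference_slots):
    """candidate_slots / reference_slots: lists of slot values, e.g.
    ["[restaurant_name]", "[time]"], extracted from the two summaries."""
    reference = set(reference_slots)
    if not reference:
        return 1.0
    candidate = set(candidate_slots)
    matched = sum(1 for v in reference if v in candidate)
    return matched / len(reference)

# The truncated summary from the example above misses [time]:
print(cic(["[restaurant_name]"], ["[restaurant_name]", "[time]"]))  # 0.5
```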
{ "heading": "4.3 IMPLEMENTATION DETAILS", "text": "We implemented our baselines with the OpenNMT framework (Klein et al., 2017). We delexicalize utterances according to the belief span annotation. To maintain the generalizability of SPNet, we combine slots that refer to the same information in different dialog domains into one slot (e.g. time). Instead of using pre-trained word embeddings like GloVe (Pennington et al., 2014), we train word embeddings from scratch with a 128-dimension embedding layer. We set the hidden states of the bidirectional LSTM encoders to 256 dimensions, and those of the unidirectional LSTM decoder to 512 dimensions. Our model is optimized using Adam (Kingma & Ba, 2014) with a learning rate of 0.001, β1 = 0.9, β2 = 0.999. We halve the learning rate when the validation loss increases, to avoid overfitting. We set the hyperparameter λ to 0.5 in the objective function and the batch size to eight. We use beam search with a beam size of three during decoding. We use the validation set to select the model parameters. Our model takes about 15 epochs to converge with multi-task training and about seven epochs without it." }, { "heading": "5 RESULTS AND DISCUSSIONS", "text": "" }, { "heading": "5.1 AUTOMATIC EVALUATION RESULTS", "text": "To demonstrate SPNet's effectiveness, we compare it with two state-of-the-art methods, Pointer-Generator (See et al., 2017) and Transformer (Vaswani et al., 2017). Pointer-Generator is the state-of-the-art method in abstractive document summarization. At inference time, we use the length penalty and coverage penalty mentioned in Gehrmann et al. (2018). The hyperparameters from the original implementation (See et al., 2017) were used. Transformer uses attention mechanisms to replace recurrence for sequence transduction. Transformer generalizes well to many sequence-to-sequence problems, so we adapt it to our task, following the implementation in the official OpenNMT-py documentation.

We show all the models' results in Table 1. We observe that SPNet reaches the highest scores in both ROUGE and CIC. Both Pointer-Generator and Transformer achieve high ROUGE scores, but relatively low CIC scores. This suggests that the baselines have more room for improvement in preserving critical slot information. All the scaffolds we propose can be applied to different neural network models. In this work we select Pointer-Generator as the base model of SPNet because we observe that Transformer only brings a small improvement over Pointer-Generator while incurring a higher cost in training time and computing resources. We observe that SPNet outperforms the other methods on all the automatic evaluation metrics by a large margin, as it incorporates all three semantic scaffolds. The semantic slot scaffold contributes the most to SPNet's improved performance, bringing the largest gain on all automatic evaluation metrics." }, { "heading": "5.2 HUMAN EVALUATION RESULTS", "text": "We also perform a human evaluation to verify whether our method's improved performance on automatic evaluation metrics entails better human-perceived quality. We randomly select 100 test samples from the MultiWOZ test set for evaluation. We recruit 150 crowd workers from Amazon Mechanical Turk. For each sample, we show the conversation, the reference summary, and the summaries generated by Pointer-Generator and SPNet to three different participants. The participants are asked to score each summary on three indicators (relevance, conciseness and readability) on a 1 to 5 scale, and to rank the summary pair (ties allowed).

We present the human evaluation results in Table 3. In the scoring part, our model outperforms Pointer-Generator on all three evaluation metrics. SPNet scored better than Pointer-Generator on relevance and readability. All generated summaries are relatively concise; therefore, they score very similarly in conciseness. The ground truth is still perceived as more relevant and readable than SPNet's results. However, the ground truth does not get a high absolute score. From the evaluators' feedback, we found that they consider the ground truth to not cover all the necessary information in the conversation, and its descriptions to be not entirely natural. This motivates us to collect a dialog summarization dataset with high-quality human-written summaries in the future. The results of the ranking evaluation show larger differences between the summaries. SPNet outperforms Pointer-Generator by a large margin. Its performance is relatively close to the ground truth summary." }, { "heading": "5.3 CASE STUDY", "text": "Table 2 shows example summaries from all models along with the ground truth summary. We observe that Pointer-Generator ignores some essential fragments, such as the restaurant booking information (6 people, Sunday, 18:45). The missing information always belongs to the last several domains (restaurant in this case) in a multi-domain dialog. We also observe that separately encoding the two speakers reduces repetition and inconsistency. For instance, Pointer-Generator's summary mentions "free wifi" several times and has conflicting requirements on wifi. This is because dialogs have information redundancy, but a single-encoder model ignores this dialog property.

Our method has limitations.
In the example shown in Table 2, our summary does not mention the hotel name (Alexander Bed and Breakfast) and its address (517a Coldham Lane) referred to in the source. This occurs because the ground truth summary does not cover them in the training data. As a supervised method, SPNet can hardly generate a summary containing additional information beyond the ground truth. However, in some cases, SPNet can also correctly summarize content not covered in the reference summary (see Table 6 in the Appendix).

Furthermore, although our SPNet achieves much-improved performance, applying SPNet still requires extra annotations for the semantic scaffolds. For a dialog dataset, the speaker role scaffold is a natural pattern for modeling. Most multi-domain dialog corpora have domain annotations, while for other texts, for example news, topic categorization such as sports or entertainment can be used as the domain annotation. We find that the semantic slot scaffold brings the most significant improvement, but it is seldom explicitly annotated. However, the semantic slot scaffold can be relaxed to any critical entities in the corpus, such as team names in sports news or professional terminology in a technical meeting." }, { "heading": "6 CONCLUSION AND FUTURE WORK", "text": "We adapt MultiWOZ, a dialog generation dataset, into an abstractive dialog summarization dataset. We propose SPNet, an end-to-end model that incorporates the speaker role, semantic slot and dialog domain as semantic scaffolds to improve abstractive summary quality. We also propose an automatic evaluation metric, CIC, that considers semantic slot relevance and serves as a complementary metric to ROUGE. SPNet outperforms the baseline methods on both automatic and human evaluation metrics. This suggests that incorporating semantic scaffolds effectively improves abstractive summarization quality in the dialog setting.

Moreover, we can easily extend SPNet to other summarization tasks. We plan to apply the semantic slot scaffold to news summarization. Specifically, we can annotate critical entities such as person names or location names to ensure that they are captured correctly in the generated summary. We also plan to collect a human-human dialog dataset with more diverse human-written summaries." }, { "heading": "A SUPPLEMENT TO CASE STUDY", "text": "" }, { "heading": "B DIALOG SUMMARIZATION CASES", "text": "" } ]
2019
null
SP:af31abf3d1d705bca5b2d35e7689d502c1520e99
[ "The paper proposes metrics for evaluating concept based explanations in terms of ‘completeness’ -- characterized by (1) whether the set of presented concepts if sufficient to retain the predictive performance of the original model and (2) how is performance affected when all information useful to a complete set of concepts (as per (1)) is removed from features at a specific layer. Assuming concept vectors lie in linear sub-spaces of the activations of the network at a specific layer, the underlying assumption is that if a given set of ‘concept’ vectors is complete, then using a projection of the intermediate features from input onto the sub-space spanned by concepts should not result in reduced / affected predictive performance. Based on these characterizations, the paper proposes an objective to discovering complete and interpretable set of concepts given a candidate cluster of concepts. Furthermore, the paper proposes metrics to quantify the importance of each concept (using Shapley values) and per-class importance of concepts. The authors conduct experiments on toy data, image and text classification datasets and show that their proposed approach can discover concepts that are complete and interpretable.", "The authors build on the work by Ghorbani et al. in concept-based interpretability methods by taking into account the \"completeness\" of the concepts. This basically tests whether the models accuracy holds if the input is projected onto the span of the discovered concepts. They propose \"ConceptSHAP\", based on Shapley values, to assign importance to the learned concepts. These could be shown to satisfy some reasonable properties such as efficiency (sum of importances equals total completeness value), symmetry, additivity, dummy (a concept that does not change the completeness universally should have zero importance) as stated in Prop. 4.1. The method is finally tested on a variety of datasets, including a synthetic one, which shows that optimizing \"completeness\" helps in discovering a richer variety of important concepts than prior work). " ]
Deep neural networks (DNNs) build high-level intelligence on low-level raw features. Understanding of this high-level intelligence can be enabled by deciphering the concepts they base their decisions on, akin to human-level thinking. In this paper, we study concept-based explainability for DNNs in a systematic framework. First, we define the notion of completeness, which quantifies how sufficient a particular set of concepts is in explaining a model's prediction behavior. Based on performance and variability motivations, we propose two definitions to quantify completeness. We show that under degenerate conditions, our method is equivalent to Principal Component Analysis. Next, we propose a concept discovery method that considers two additional constraints to encourage the interpretability of the discovered concepts. We use game-theoretic notions to aggregate over sets to define an importance score for each discovered concept, which we call ConceptSHAP. On specifically-designed synthetic datasets and real-world text and image datasets, we validate the effectiveness of our framework in finding concepts that are complete in explaining the model's decisions, and interpretable.
[]
[ { "authors": [ "Marco Ancona", "Enea Ceolini", "Cengiz Öztireli", "Markus Gross" ], "title": "Towards better understanding of gradient-based attribution methods for deep neural networks", "venue": "arXiv preprint arXiv:1711.06104,", "year": 2017 }, { "authors": [ "Sercan Ömer Arik", "Tomas Pfister" ], "title": "Attention-based prototypical learning towards interpretable, confident and robust deep neural networks", "venue": null, "year": 1902 }, { "authors": [ "Sharon Lee Armstrong", "Lila R" ], "title": "Gleitman, and Henry Gleitman", "venue": "What some concepts might not be. Cognition,", "year": 1983 }, { "authors": [ "Diane Bouchacourt", "Ludovic Denoyer" ], "title": "Educe: Explaining model decisions through unsupervised concepts extraction", "venue": "arXiv preprint arXiv:1905.11852,", "year": 2019 }, { "authors": [ "Tsung-Han Chan", "Kui Jia", "Shenghua Gao", "Jiwen Lu", "Zinan Zeng", "Yi Ma" ], "title": "Pcanet: A simple deep learning baseline for image classification", "venue": "IEEE transactions on image processing,", "year": 2015 }, { "authors": [ "Jianbo Chen", "Le Song", "Martin J. Wainwright", "Michael I. Jordan" ], "title": "L-shapley and c-shapley: Efficient model interpretation for structured data", "venue": null, "year": 2018 }, { "authors": [ "Jan Chorowski", "Ron J Weiss", "Samy Bengio", "Aäron van den Oord" ], "title": "Unsupervised speech representation learning using wavenet autoencoders", "venue": null, "year": 1901 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Robert Geirhos", "Patricia Rubisch", "Claudio Michaelis", "Matthias Bethge", "Felix A Wichmann", "Wieland Brendel" ], "title": "Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness", "venue": "arXiv preprint arXiv:1811.12231,", "year": 2018 }, { "authors": [ "Amirata Ghorbani", "James Wexler", "James Zou", "Been Kim" ], "title": "Towards automatic concept-based explanations", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Leilani H Gilpin", "David Bau", "Ben Z Yuan", "Ayesha Bajwa", "Michael Specter", "Lalana Kagal" ], "title": "Explaining explanations: An overview of interpretability of machine learning", "venue": "IEEE 5th International Conference on data science and advanced analytics (DSAA),", "year": 2018 }, { "authors": [ "Rajiv Khanna", "Been Kim", "Joydeep Ghosh", "Sanmi Koyejo" ], "title": "Interpreting black box predictions using fisher kernels", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Been Kim", "Rajiv Khanna", "Oluwasanmi O Koyejo" ], "title": "Examples are not enough, learn to criticize! 
criticism for interpretability", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Been Kim", "Martin Wattenberg", "Justin Gilmer", "Carrie Cai", "James Wexler", "Fernanda Viegas" ], "title": "Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav)", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Pang Wei Koh", "Percy Liang" ], "title": "Understanding black-box predictions via influence functions", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Christoph H Lampert", "Hannes Nickisch", "Stefan Harmeling" ], "title": "Learning to detect unseen object classes by between-class attribute transfer", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Scott M Lundberg", "Su-In Lee" ], "title": "A unified approach to interpreting model predictions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Alec Radford", "Rafal Józefowicz", "Ilya Sutskever" ], "title": "Learning to generate reviews and discovering sentiment", "venue": null, "year": 2017 }, { "authors": [ "Maithra Raghu", "Justin Gilmer", "Jason Yosinski", "Jascha Sohl-Dickstein" ], "title": "Svcca: Singular vector canonical correlation analysis for deep learning dynamics and interpretability", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Marco Tulio Ribeiro", "Sameer Singh", "Carlos Guestrin" ], "title": "Why should i trust you?: Explaining the predictions of any classifier", "venue": "In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "Wojciech Samek", "Alexander Binder", "Grégoire Montavon", "Sebastian Lapuschkin", "Klaus-Robert Müller" ], "title": "Evaluating the visualization of what a deep neural network has learned", "venue": "IEEE transactions on neural networks and learning systems,", "year": 2016 }, { "authors": [ "Lloyd S. Shapley" ], "title": "A value for n-person games, pp. 
31–40", "venue": null, "year": 1988 }, { "authors": [ "Daniel Smilkov", "Nikhil Thorat", "Been Kim", "Fernanda Viégas", "Martin Wattenberg" ], "title": "Smoothgrad: removing noise by adding noise", "venue": "arXiv preprint arXiv:1706.03825,", "year": 2017 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Joshua Brett Tenenbaum" ], "title": "A Bayesian framework for concept learning", "venue": "PhD thesis, Massachusetts Institute of Technology,", "year": 1999 }, { "authors": [ "Fan Yang", "Mengnan Du", "Xia Hu" ], "title": "Evaluating explanation without ground truth in interpretable machine learning", "venue": "arXiv preprint arXiv:1907.06831,", "year": 2019 }, { "authors": [ "Chih-Kuan Yeh", "Joon Kim", "Ian En-Hsu Yen", "Pradeep K Ravikumar" ], "title": "Representer point selection for explaining deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Chih-Kuan Yeh", "Cheng-Yu Hsieh", "Arun Sai Suggala", "David Inouye", "Pradeep Ravikumar" ], "title": "On the (in)fidelity and sensitivity of explanations", "venue": "arXiv preprint arXiv:1901.09392,", "year": 2019 }, { "authors": [ "Bolei Zhou", "Yiyou Sun", "David Bau", "Antonio Torralba" ], "title": "Interpretable basis decomposition for visual explanation", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V. Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks (DNNs) have shown great success in numerous tasks (Goodfellow et al., 2016), from understanding images (Zoph et al., 2017) to answering questions (Devlin et al., 2018). Yet, in many scenarios their lack of explainability serves as a bottleneck against their real-world impact, especially in high-stake decisions such as in medicine, transportation, and finance, where such explanations help identify systematic failure cases, comply with regulations, and provide feedback to model builders. This has thus led to increasing interest in human-like explanations of DNNs.\nThe most commonly-used methods to explain DNNs explain each prediction by quantifying the importance of each input feature (Ribeiro et al., 2016; Lundberg & Lee, 2017). However, such explanations typically explain the behavior locally for each case, rather than globally explaining how the model makes its decisions. Also, input features (such as the raw pixel values), and weights on them, are not necessarily the most effective explanations for human understanding. Instead, “concept-based explanations” characterize the global behavior of a DNN in a way understandable to humans, by explaining how DNNs use concepts in arriving at particular decisions. Such conceptbased thinking, by extracting similarities from numerous examples and grouping them systematically based on their resemblance, has been shown to play an essential role in human minds for making generalizations (Armstrong et al., 1983; Tenenbaum, 1999). With a similar motivation, “concepts” can explain the decision-making rationale of DNNs and their generalizable knowledge. A few recent studies have thus focused on bringing such concept-based explainability to DNNs. Based on the common implicit assumption that the concepts should lie in certain linear subspaces of some intermediate DNN activations, they aim to find such concepts efficiently and relate them to data. These have ranged from supervised approaches (Kim et al., 2018; Zhou et al., 2018) that obtain concept representations given human-labeled data on salient concepts, to purely unsupervised approaches that provide concept explanations automatically without human labeling, ranging from k-means clustering of DNN activations (Ghorbani et al., 2019), to a self-interpretable Bayesian generative model (Bouchacourt & Denoyer, 2019). A key motivating question we ask in this paper is whether we could build on such unsupervised approaches to extract concepts, but where in addition to ensuring that the concepts are representative of the DNN activations, we would also like to ensure the additional facet that they are sufficiently predictive of the DNN function itself.\nThis leads naturally to a crucial unanswered question in concept-based explanation, which is how to evaluate whether a set of concepts are sufficient for prediction. Previous concept-based explanations\nselect concepts that are salient to a particular class (Kim et al., 2018). However, selecting a set of salient concepts does not guarantee that these concepts are sufficient for prediction. The notion of explanations that are sufficient for prediction is also called the “completeness” of explanations (Gilpin et al., 2018), which is acknowledged to be valuable for evaluating explanations (Yang et al., 2019). In this work, we propose such a completeness metric for a given set of concept explanations. 
The completeness measurement can be applied to a set of concept vectors that lie in the span of some intermediate DNN layer activations, which is a common assumption in previous concept-based explanation works (Kim et al., 2018). The core idea is that, by projecting the activations onto the span of the concept vectors, we keep just the information that can be explained by the concepts, and discard the information that is orthogonal to all concepts. Thus, when projecting activations onto the span of the concept activation vectors results in no loss in prediction accuracy, we can learn concepts that are "complete" (i.e. sufficient for prediction).

Interestingly, we show that under a stringent degeneracy condition on the DNN, principal component analysis (PCA) on the DNN activations can be shown to maximize these concept completeness metrics. Of course, such degeneracy assumptions likely do not hold in general, so maximizing these completeness metrics could be viewed as a generalization of PCA that additionally takes the DNN model into account. However, the resulting "principal components" are not guaranteed to be interpretable to humans. We thus build on the concept-interpretability principles proposed in Ghorbani et al. (2019), and additionally consider carefully designed objectives that favor concepts that are more semantically meaningful to humans. A key facet of our approach is that it can work without any human supervision, which reduces the human labeling cost of providing explanations.

After a set of highly-complete concepts is discovered, we use game-theoretic notions to aggregate over sets to define the contextualized importance of a concept, which we call ConceptSHAP. ConceptSHAP is shown to be the only scoring method that satisfies a set of axioms accounting for the contribution of each concept to the completeness score. We also derive a class-specific version of ConceptSHAP that decomposes the ConceptSHAP score with respect to each class in the multi-class classification setting, which can be used to find the concepts that contribute the most to a specific class. To verify the effectiveness of our completeness-aware concept discovery method, we create a synthetic dataset where we can obtain the ground truth concepts and test whether existing methods can retrieve them. We find that our method is able to retrieve the ground truth concepts better than all compared methods. We also demonstrate examples from real-world language and vision datasets to show that our concept discovery algorithm provides additional insights into the behavior of the model." }, { "heading": "2 COMPLETENESS OF CONCEPTS", "text": "Problem setting: We are given a set of $n$ training examples $x_1, x_2, ..., x_n \in \mathbb{R}^i$, corresponding labels $y_1, y_2, ..., y_n \in \mathbb{R}^o$, and a DNN $f(x)$ that is learned to map the labels (with dimension $o$) from the given inputs (with dimension $i$). We choose an intermediate layer of the DNN, and define the operation generating the intermediate features from the input as $\Phi(x) \in \mathbb{R}^d$ and the feed-forward map from the intermediate layer to the logit layer as $h(\cdot)$, yielding the decomposition $f(x) = h(\Phi(x))$. We define the data matrix as $X \in \mathbb{R}^{i \times n}$, the corresponding feature matrix as $\Phi(X) \in \mathbb{R}^{d \times n}$, and the corresponding prediction matrix as $f(X) \in \mathbb{R}^{o \times n}$. Assume that there is a set of $m$ concepts denoted by vectors $c_1, c_2, ..., c_m$ that represent linear directions in the activation space $\Phi(\cdot) \in \mathbb{R}^d$, given by a concept discovery algorithm. We define the concept matrix as $c = [c_1\ c_2\ ...\ c_m]$.

Next, we propose two mathematical definitions that capture the completeness of a given set of concepts. Both definitions are based on the idea that completeness should quantify how sufficient a particular set of concepts is in explaining the model's behavior. A low completeness score of a set of concepts indicates that the corresponding concepts do not capture the model behavior fully, and that the model bases its decision on factors other than the given concepts. We propose two metrics of completeness based on two different assumptions, as we discuss below.

Assumption 1: If the given set of concepts is complete, then using a projection of the intermediate features onto the feature subspace spanned by the concepts (the concept space) would not deteriorate the model performance. We define the projection of some input embedding $\Phi(x)$ onto the subspace spanned by $v \in \mathbb{R}^{d \times r}$ as

$P(\Phi(x), v) = v (v^T v)^{-1} v^T \Phi(x). \qquad (1)$

We define the completeness metric $\eta^{(1)}$ on a validation set with $T$ data points, $V = \{(x_1, y_1), ..., (x_T, y_T)\}$, based on the assumption that projecting input features onto the span of a complete set of concepts should not reduce the model prediction performance.

Definition 2.1. Given a prediction model $f(x) = h(\Phi(x))$, a set of concept vectors $c_1, ..., c_m$, and some loss metric $L$, we define the completeness score $\eta^{(1)}$ as:

$\eta^{(1)}(c_1, ..., c_m) = \frac{R - \sum_{\{x,y\} \in V} L(h(P(\Phi(x), c)), y)}{R - \sum_{\{x,y\} \in V} L(f(x), y)}, \qquad (2)$

where $R = \sum_{\{x,y\} \in V} L(h(0), y)$ to ensure that $\eta^{(1)}(0) = 0$. We omit the dependency of $\eta(\cdot)$ on $h(\cdot)$, $\Phi(\cdot)$, $f(\cdot)$, and $L(\cdot)$ for notational simplicity. When $\eta^{(1)}(c_1, ..., c_m)$ is high, the network maintains a high accuracy even after projection, which supports that the set of discovered concepts holds sufficient information for prediction.

Assumption 2: The second assumption is that if we remove all concept information useful for a classification task, the model should fail to discriminate between different classes. Thus, when all salient information is removed from the network, prediction scores for examples in class A won't be much different from those for examples in other classes. We define the data matrix of the validation set as $X_v = [x_1\ x_2\ ...\ x_T]$. To quantify how much the prediction scores vary across data samples, we use the sample variance of the predictions: $\widehat{\mathrm{var}}(f(X_v)) = \mathrm{Tr}(\widehat{\mathrm{cov}}(f(X_v))) = \mathrm{Tr}((f(X_v) - \hat{E}[f(X_v)])(f(X_v) - \hat{E}[f(X_v)])^T)$, where $\hat{E}[f(X_v)] = \frac{1}{T}\sum_{i=1}^{T} f(x_i)$, and $\mathrm{Tr}$ stands for the trace. Then, we define the second completeness metric following this assumption.

Definition 2.2. Given a prediction model $f(x) = h(\Phi(x))$ and a set of concept vectors $c_1, c_2, ..., c_m$, we define the completeness score $\eta^{(2)}$ as:

$\eta^{(2)}(c_1, ..., c_m) = 1 - \frac{\widehat{\mathrm{var}}(h(\Phi(X_v) - P(\Phi(X_v), c)))}{\widehat{\mathrm{var}}(h(\Phi(X_v)))}, \qquad (3)$

Based on Assumption 2, the variance of the predictions gets lower after the useful concept information is removed from the data, yielding a high completeness score $\eta^{(2)}$.

We now show that, under degenerate assumptions, the top $k$ PCA vectors of $\Phi(x)$ maximize the completeness score for a set of concept vectors. Top PCA vectors are designed to capture as much information in the data as possible; a set of concepts with a high completeness score similarly preserves the information in the data necessary for the model to reach satisfactory predictions.

Proposition 2.1. When $h$ is an isometry that maps from $(\Phi(\cdot), \|\cdot\|_F) \to (f(\cdot), \sqrt{L})$, where $L$ is the loss metric in Equation 2 and $f(x_i) = y_i\ \forall (x_i, y_i) \in V$ (i.e. the loss is minimized), the first $m$ PCA vectors maximize $\eta^{(1)}$.
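A minimal sketch of $\eta^{(1)}$ (Definition 2.1) on a toy linear model with the 0-1 loss; the head $h$, the features, and the candidate concepts below are synthetic assumptions for illustration only.

```python
# Minimal NumPy sketch of the completeness score eta^(1) (Eq. 2 / Def. 2.1).
import numpy as np

rng = np.random.default_rng(0)
d, o, T, m = 16, 5, 200, 3
Phi = rng.normal(size=(T, d))            # intermediate features Phi(x)
W = rng.normal(size=(d, o))              # toy logit head: h(z) = z @ W
y = (Phi @ W).argmax(axis=1)             # labels the toy model fits perfectly

def loss(logits, y):                     # average 0-1 loss over the set
    return (logits.argmax(axis=1) != y).mean()

def project(Z, c):                       # Eq. 1: P(z, c) = c (c^T c)^-1 c^T z
    return Z @ c @ np.linalg.inv(c.T @ c) @ c.T

c = rng.normal(size=(d, m))              # candidate concept directions
R = loss(np.zeros((T, d)) @ W, y)        # baseline term: h(0)
eta1 = (R - loss(project(Phi, c) @ W, y)) / (R - loss(Phi @ W, y))
print(f"eta^(1) = {eta1:.3f}")           # 1.0 iff the projection loses nothing
```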
Proposition 2.2. When $h$ is an isometry that maps from $(\Phi(\cdot), \|\cdot\|_F) \to (f(\cdot), \|\cdot\|_F)$, and each dimension of $\Phi(x)$ is uncorrelated with unit variance, the first $m$ PCA vectors maximize $\eta^{(2)}$.

We underline the two main differences between the concept vectors that maximize the completeness score and the PCA vectors. First, the propositions depend on degeneracy assumptions, such as isometry of the DNN, which may not hold in practice. Therefore, the concepts that maximize the completeness score take the prediction of the DNN into account, which can be seen as a generalization of the original PCA. Second, since the completeness score only depends on the span of the set of concept vectors, any concept vectors whose span equals the span of the top PCA vectors also maximize the completeness score (i.e. the set of vectors that maximizes the completeness is not unique). Each PCA vector is constrained to minimize the reconstruction error and to be orthogonal to the other PCA directions. On the other hand, the discovered concept vectors that maximize the completeness can be designed so that each concept is interpretable and semantically meaningful to humans, which will be further explained in the next section." }, { "heading": "3 DISCOVERING COMPLETE AND INTERPRETABLE CONCEPTS", "text": "Our goal is to discover a set of maximally-complete concepts, where each concept is also interpretable and semantically meaningful. Ghorbani et al. (2019) have listed meaningfulness, coherency, and saliency as the desired properties for concept-based explanations. Our work on completeness is a crucial addition to this set: not only are the concepts meaningful, coherent, and salient, we also ensure they are sufficient for the model's prediction.

We assume that we are given some candidate clusters of concepts (which can be given by human labeling or self-discovery) and that each cluster shares some feature attributes that are coherent and semantically meaningful to humans (which matches the two desired properties in Ghorbani et al. (2019)). We define the feature matrix of cluster $i$ as $\tau_i = [\Phi(x_{i1})\ \Phi(x_{i2})\ ...]$, where $x_{i1}, x_{i2}, ...$ are the samples that belong to cluster $i$. We denote the feature mean of cluster $i$ as $\mu_i = \mathrm{mean}(\tau_i)$. Clusters can be obtained by human labeling (Kim et al., 2018) or by unsupervised grouping of relevant input features (e.g. segmentation of images based on grouping of pixels) (Ghorbani et al., 2019). In either case, we would not know which clusters contain information useful to the model we try to explain. We aim to find a minimum set of concepts that is maximally complete with respect to the prediction model. Additionally, we constrain each concept to be salient to only one cluster, so that each concept direction is semantically meaningful to humans. To discriminate different concepts (for coherency), we constrain different concepts to not be salient to the same cluster.

We now define our objective function for discovering a set of complete and interpretable concepts $c$. A primary goal is maximizing the completeness $\eta$ (which can be $\eta^{(1)}$ or $\eta^{(2)}$), such that the set of concepts fully explains the model behavior. In addition, we introduce two regularization terms for interpretability (which can be considered a generalization of the orthogonality constraint of PCA). We introduce a cluster-sparsity regularization $L_{sparse,Cl}(c)$ to encourage each concept to be salient to a minimal number of clusters, and we introduce a concept-sparsity regularization $L_{sparse,Con}(c)$ to encourage different concepts to not be salient to the same cluster, i.e.
each cluster is salient to at most one concept. Given clusters $\tau_1, \tau_2, ..., \tau_K$, a set of training examples $x_1, x_2, ..., x_n$, and a pre-trained prediction model $f(x) = h(\Phi(x))$, the overall objective function (to minimize) becomes:

$-\eta(c) + \lambda_1 \cdot L_{sparse,Cl}(c) + \lambda_2 \cdot L_{sparse,Con}(c), \qquad (4)$

where $\lambda_1$ and $\lambda_2$ are loss coefficients. To formulate the cluster-sparsity regularization $L_{sparse,Cl}(c)$ and the concept-sparsity regularization $L_{sparse,Con}(c)$, we first formally introduce the saliency score between concept $c_j$ and cluster $\tau_k$ as:

$\rho(c_j, \tau_k) = \frac{|\langle c_j, \mu_k \rangle|}{\sqrt{\sum_{l=1}^{K} |\langle c_j, \mu_l \rangle|^2}}.$

We note that the saliency score is normalized such that the vector of saliency scores between any concept and all clusters has unit norm. When the saliency score between concept $c_j$ and cluster $\tau_k$ is large, $c_j$ can differentiate samples of cluster $k$ from samples of a random cluster, and thus $c_j$ is salient to $\tau_k$. To encourage each concept to differentiate only a small number of clusters from random clusters, we regularize the L1 norm of the saliency scores over every concept-cluster pair (which can be seen as the sparse filtering objective in Ngiam et al. (2011)), leading to the cluster-sparsity regularization loss:

$L_{sparse,Cl}(c) = \sum_{j=1}^{m} \sum_{k=1}^{K} \rho(c_j, \tau_k),$

which encourages sparse saliency scores. To constrain that different concepts are not salient to the same cluster, we penalize the pairwise product of saliency scores between every pair of concepts for the same cluster, leading to the concept-sparsity regularization loss:

$L_{sparse,Con}(c) = \sum_{i \neq j} \sum_{k=1}^{K} \rho(c_i, \tau_k) \cdot \rho(c_j, \tau_k).$

If two concepts are both salient with respect to the same cluster, the pairwise saliency product will be large and thus the concept-sparsity regularization loss will be large. We note that each concept has to be salient to some cluster, but a cluster need not be salient to any concept. Therefore, we typically assume we have more clusters than concepts (i.e. $K > m$)." },
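The saliency score $\rho$ and the two regularizers amount to simple linear algebra, as in the following minimal sketch; all names and shapes are illustrative assumptions, not the authors' implementation.

```python
# Minimal NumPy sketch of the saliency score rho and the two sparsity
# regularizers that enter the objective of Eq. 4.
import numpy as np

def saliency(c, mu):
    """c: (d, m) concept directions; mu: (d, K) cluster feature means.
    Returns rho: (m, K); each row is normalized to unit L2 norm across clusters."""
    s = np.abs(c.T @ mu)                       # |<c_j, mu_k>| for all pairs
    return s / np.linalg.norm(s, axis=1, keepdims=True)

def sparsity_losses(rho):
    l_cluster = rho.sum()                      # L_sparse,Cl: L1 of all scores
    # L_sparse,Con: pairwise products of saliencies of different concepts
    gram = rho @ rho.T                         # (m, m): sum_k rho_ik * rho_jk
    l_concept = gram.sum() - np.trace(gram)    # keep only the i != j terms
    return l_cluster, l_concept

rng = np.random.default_rng(0)
rho = saliency(rng.normal(size=(16, 3)), rng.normal(size=(16, 10)))
l_cl, l_con = sparsity_losses(rho)
```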
{ "heading": "4 HOW IMPORTANT IS EACH CONCEPT?", "text": "ConceptSHAP to quantify concept importance: Given a set of concepts $C_S = \{c_1, c_2, ..., c_m\}$ with a high completeness score, we would like to evaluate the importance of each individual concept, specifically by quantifying how much each individual concept contributes to the final completeness score. Let $\phi_i$ denote the importance score for concept $c_i$, such that $\phi_i$ quantifies how much of the completeness score $\eta(C_S)$ is contributed by $c_i$. Motivated by their successful applications in quantifying attributions in what-if scenarios for complex systems, we adapt Shapley values (Shapley, 1988; Lundberg & Lee, 2017) to fairly assign the importance of each concept (which we abbreviate as ConceptSHAP):

Definition 4.1. Given a set of concepts $C_S = \{c_1, c_2, ..., c_m\}$ and some completeness metric $\eta$, we define the ConceptSHAP value $\phi_i$ for concept $c_i$ as

$\phi_i(\eta) = \sum_{S \subseteq C_S \setminus \{c_i\}} \frac{(m - |S| - 1)!\,|S|!}{m!} \left[\eta(S \cup \{c_i\}) - \eta(S)\right].$

The main benefit of using Shapley values to assign importance is that they can be shown to uniquely satisfy a set of desired axioms, listed in the following proposition:

Proposition 4.1. Given a set of concepts $C_S = \{c_1, c_2, ..., c_m\}$, a completeness metric $\eta$, and some importance score $\phi_i$ for each concept $c_i$ that depends on the completeness metric $\eta$, the $\phi_i$ defined by ConceptSHAP is the unique importance assignment that satisfies the following four axioms:

• Efficiency: The importance values sum up to the total completeness value: $\sum_{i=1}^{m} \phi_i(\eta) = \eta(C_S)$.

• Symmetry: For two equivalent concepts, which satisfy $\eta(u \cup \{c_i\}) = \eta(u \cup \{c_j\})$ for every subset $u \subseteq C_S \setminus \{c_i, c_j\}$, $\phi_i(\eta) = \phi_j(\eta)$.

• Dummy: If $\eta(u \cup \{c_i\}) = \eta(u)$ for every subset $u \subseteq C_S \setminus \{c_i\}$, then $\phi_i(\eta) = 0$.

• Additivity: If $\eta$ and $\eta'$ have importance values $\phi(\eta)$ and $\phi(\eta')$ respectively, then the importance value of the sum of the two completeness metrics equals the sum of the two importance values, i.e. $\phi_i(\eta + \eta') = \phi_i(\eta) + \phi_i(\eta')$ for all $i$.

The efficiency axiom distributes the completeness score over the individual concepts. The symmetry axiom guarantees that two concepts that behave the same get the same importance score, for fairness. The dummy axiom guarantees that concepts that do not affect the completeness get an importance score of 0. The additivity axiom guarantees that decomposability of the completeness leads to decomposability of the importance scores, and that scaling the completeness does not change the relative importance ratios between concepts.

Per-class saliency of concepts: In multi-class classification, it may be more informative to obtain a set of related concepts that contribute to the prediction of a specific class, instead of the global contribution (i.e. concepts that are important to all classes). To obtain the concept importance score for each class, we first define the completeness score with respect to one class by only considering the data points that belong to that class, which is formalized as:

Definition 4.2. Given a prediction model $f(x) = h(\Phi(x))$ and a set of concept vectors $c_1, c_2, ..., c_m$ that lie in the feature subspace of $\Phi(\cdot)$, we define the completeness score for class $j$ as:

$\eta_j^{(1)}(c_1, ..., c_m) = \frac{R_j - \sum_{\{x,y\} \in V_j} L(h(P(\Phi(x), c)), y)}{R - \sum_{\{x,y\} \in V} L(f(x), y)}, \qquad (5)$

where $V_j$ is the set of validation data whose ground truth label is $j$ and $R_j = \sum_{\{x,y\} \in V_j} L(h(0), y)$. Given the completeness for a specific class, we define the ConceptSHAP value for concept $i$ with respect to class $j$ as:

Definition 4.3. Given a prediction model $f(x)$ and a set of concept vectors $c_1, c_2, ..., c_m$ that lie in the feature subspace of $\Phi(\cdot)$, we define the ConceptSHAP value for concept $i$ with respect to class $j$ as:

$\phi_{i,j}(\eta) = \phi_i(\eta_j). \qquad (6)$

For each class $j$, we may select the concepts with the highest ConceptSHAP scores with respect to class $j$. We note that $\sum_j \eta_j = \eta$, and thus by the additivity axiom, $\sum_j \phi_{i,j}(\eta_j) = \phi_i(\eta)$." },
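ConceptSHAP (Definition 4.1) can be computed exactly by enumerating subsets when $m$ is small, as in this sketch; the toy completeness function below is an assumption standing in for $\eta^{(1)}$ or $\eta^{(2)}$ evaluated on the model.

```python
# Minimal sketch of ConceptSHAP: exact Shapley values of a completeness
# function eta over concept subsets (tractable for small m).
from itertools import combinations
from math import factorial

def concept_shap(m, eta):
    """m: number of concepts; eta: callable on a frozenset of concept indices.
    Returns the list [phi_0, ..., phi_{m-1}]."""
    phis = []
    for i in range(m):
        others = [j for j in range(m) if j != i]
        phi = 0.0
        for size in range(m):
            w = factorial(m - size - 1) * factorial(size) / factorial(m)
            for S in combinations(others, size):
                phi += w * (eta(frozenset(S) | {i}) - eta(frozenset(S)))
        phis.append(phi)
    return phis

# Toy completeness: concept 0 carries most of the score, concept 2 is a dummy.
toy_eta = lambda S: 0.7 * (0 in S) + 0.3 * (1 in S)
phis = concept_shap(3, toy_eta)
assert abs(sum(phis) - toy_eta(frozenset({0, 1, 2}))) < 1e-9  # efficiency axiom
```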
For the input data x, we construct a toy image dataset where each concept ξ_i is mapped to a specific shape, and the image contains that shape if and only if ξ_i = 1. For example, if ξ_{3,t} = 1, a star (with random color and location) occurs in the image x_t, and if ξ_{3,t} = 0, there is no star in x_t. The map from concepts to shapes and two example images are given in Figure 2.

For the input clusters of our concept discovery algorithm, we either provide the ground truth clustering or use superpixel segmentation followed by K-means clustering as in Ghorbani et al. (2019); we call these variants ours-supervised and ours-unsupervised respectively. In total, we use 48k training samples and 12k evaluation samples, where each ground truth concept corresponds to a specific shape in the image. We train a convolutional neural network with 6 layers, which achieves 0.999 accuracy, and take the first fully connected layer as the feature layer (which is Φ(x) in the problem definition).

¹Here ¬ denotes logical negation; the details of generating this dataset are in the appendix.

Evaluation metrics: Let the known concepts be ξ_1, ξ_2, ..., ξ_m̂, and assume we discover concept vectors c_1, ..., c_m. We would like to evaluate how closely the discovered concept vectors align with the ground truth concepts. For a concept vector c_i to align with a ground truth concept ξ_j, we assume that the ground truth concept can be linearly separated along the concept vector direction. More formally, we measure the accuracy of the best linear classifier with c_i as the weight vector on the binary classification problem with target ξ_j:

Score(c_i, ξ_j) = max_{a∈{−1,1}, b∈R} (1/T) ∑_{t=1}^T 1[(a · c_i^⊤ Φ(x_t) > b) == ξ_{j,t}].

We then evaluate how well the set of discovered concepts c_1, ..., c_m matches the set of ground truth concepts ξ_1, ..., ξ_m̂ as

AlignmentScore(∪_{i=1}^m {c_i}, ∪_{j=1}^{m̂} {ξ_j}) = (1/m̂) max_{P∈[1,m]^{m̂}} ∑_{j=1}^{m̂} Score(c_{P[j]}, ξ_j),

which measures the best average accuracy obtained by assigning to each ground truth concept the concept vector that best differentiates it.

Results: We summarize the results in Table 1, where ours-supervised and TCAV take supervised clusters as input, and ours-unsupervised, ACE, and Raw-Clustering take the clustered segments as input. For supervised clusters, we randomly choose examples where ξ_j = 1 to form cluster j. The terms supervised and unsupervised refer to whether the ground truth concept set ξ_j is given or not. For ours-supervised 1 we maximize η^(1) in equation 4; for ours-supervised 2 we maximize η^(2) in equation 4. Both ours-supervised 1 and ours-supervised 2 obtain a higher AlignmentScore than TCAV. Likewise, ours-unsupervised 1 and ours-unsupervised 2 achieve a higher AlignmentScore than all compared baselines, which demonstrates the effectiveness of our concept discovery algorithm. We further observe that completeness 1 and completeness 2 are complementary: maximizing completeness 1 does not necessarily lead to a higher value of completeness 2, and vice versa. Nevertheless, by jointly optimizing completeness 1 or completeness 2 with the additional sparsity regularization over the given clusters, we are able to retrieve the correct ground truth concepts. Lastly, we show the nearest neighbors (among the superpixel segments) of the concepts discovered by ours-supervised and by TCAV, along with the ground truth concepts, in Figure 3, to validate that our concept discovery algorithm does retrieve the correct concepts. While we only show the top-2 nearest neighbors, we note that the top-k nearest neighbor examples all belong to the same concept even when k is large."
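For reference, a brute-force sketch of the two evaluation metrics defined above (shapes and helper names are our assumptions). Note that since P ranges over all maps in [1, m]^{m̂}, with repeats allowed, the maximum over assignments decomposes into an independent maximum per ground truth concept:

```python
import numpy as np

def score(c, feats, xi_j):
    """Best thresholded linear classifier along direction c.
    feats: (T, d) activations Phi(x_t); xi_j: (T,) ground-truth bits in {0, 1}."""
    proj = feats @ c
    best = 0.0
    for a in (-1.0, 1.0):
        vals = a * proj
        # candidate thresholds: one below the minimum (predict all ones) plus each value
        for b in np.concatenate(([vals.min() - 1.0], vals)):
            acc = np.mean((vals > b).astype(int) == xi_j)
            best = max(best, acc)
    return best

def alignment_score(concepts, feats, xi):
    """concepts: (m, d); xi: (T, m_hat) ground truth concept matrix."""
    accs = np.array([[score(c, feats, xi[:, j]) for j in range(xi.shape[1])]
                     for c in concepts])          # (m, m_hat) accuracy table
    return accs.max(axis=0).mean()                # best concept per ground truth, averaged
```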
}, { "heading": "5.2 TEXT CLASSIFICATION", "text": "Setting: We apply our method to the IMDB text classification dataset. The IMDB dataset contains the text of 50k movie reviews, of which 25k are used as training data and 25k for evaluation. Each review is classified as either positive or negative. We use a pre-trained model with a BERT language model (Devlin et al., 2018) from Keras, which achieves 0.94 testing accuracy. To obtain the input clusters, we use a 10-word sliding window to obtain sub-sentences from the IMDB sentences. We then obtain embeddings for all sub-sentences, and perform k-means clustering on the positive sub-sentences and the negative sub-sentences separately. We then run our concept discovery algorithm to obtain 5 concepts with η^(1) ≈ 0.99.

Results: For the 5 discovered concepts, we show the top nearest neighbors, the ConceptSHAP value, and the related class (determined by the TCAV score) of each discovered concept. Additional nearest-neighbor examples are shown in the appendix. We note that for each concept, the nearest sub-sentences mostly contain one specific word, which we highlight in blue. Nearest neighbors of concept 1 mostly contain the word “characters”, nearest neighbors of concepts 2 and 3 mostly contain the word “think”, nearest neighbors of concept 4 mostly contain the word “watch”, and nearest neighbors of concept 5 mostly contain the word “after”. Looking more closely at each concept's nearest neighbors, we find that the nearest sub-sentences of the first concept usually contain negative adjectives alongside “characters”; the nearest sub-sentences of the second concept usually contain the word "think" at the first or last position, followed by disagreement towards the movie; the nearest sub-sentences of the third concept usually contain “think” in the middle of the sub-sentence, followed by the reviewer's more neutral personal opinion; the nearest sub-sentences of the fourth concept often contain the phrase “watch it”, where “it” refers to the movie; and the nearest sub-sentences of the fifth concept simply contain the word “after”. The most salient concept by ConceptSHAP value is concept 4, where all of the top nearest neighbors explicitly mention the word “watch” with a generally positive sentiment. We perform the TCAV test for all concepts with respect to the positive and negative classes: the first 3 concepts are significant for the class “negative” with TCAV score 1, and the last 2 concepts are significant for the class “positive” with TCAV score 1." }, { "heading": "5.3 IMAGE CLASSIFICATION", "text": "Setting: We next perform experiments on Animals with Attributes (AwA) (Lampert et al., 2009) to classify animals across 50 classes, taking 26,905 images as training data and 2,965 images as evaluation data. Each training sample has a ground truth label of one of the 50 animals. We train an Inception-V3 model pre-trained on ImageNet (Szegedy et al., 2016), which reaches 0.94 testing accuracy. To obtain the input clusters, we employ the method of Ghorbani et al. (2019), which performs superpixel segmentation followed by k-means clustering over images to obtain 334 input clusters. We then run our concept discovery algorithm on these clusters to obtain 8 concepts with η^(1) ≈ 0.99.
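A rough sketch of this input-cluster construction is below, assuming the multi-resolution details of Ghorbani et al. (2019); the `embed` feature extractor, the patch masking, and the SLIC parameters are our simplifications rather than the original pipeline:

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import KMeans

def segment_patches(image, n_segments=15):
    """Crop one patch per SLIC superpixel, zeroing out the rest of the image."""
    labels = slic(image, n_segments=n_segments, compactness=20)
    patches = []
    for s in np.unique(labels):
        mask = (labels == s)[..., None]
        patches.append(image * mask)      # background zeroed out
    return patches

def build_input_clusters(images, embed, n_clusters=334):
    """embed: a feature extractor Phi mapping a patch to a vector (assumed given)."""
    feats = np.stack([embed(p) for img in images for p in segment_patches(img)])
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(feats)
    return km.labels_, feats
```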
Results: For each of the 8 discovered concepts, we show the top nearest neighbor patches, the ConceptSHAP value, and the related classes, i.e. classes for which the concept has a ConceptSHAP value at least twice as large as that of any other concept. From the nearest neighbors of each concept, we find that the concepts learned by the network mostly capture textures and colors. Since we only learn 8 concepts for 50 classes, each learned concept is useful to multiple classes. We find that the ripple texture, most common in the ocean, is significant for many marine animals. The leaf/grass concepts are often significant for animals that live in trees or pastures. We note that among the 8 learned concepts, there are two concepts representing stripes and two representing ripples. While the concept “stripe 1” seems to contain thicker stripes than “stripe 2”, we do not observe a significant difference between the top nearest neighbors of “ripple 1” and “ripple 2”. Other than this, each discovered concept seems meaningful and coherent to humans. We note that in some cases the related class of a concept may not actually contain the concept. One possible reason is that such concepts are salient because they are “pertinent negatives” for a certain class: they help make the correct prediction precisely because they do not occur in images of that class. The main takeaway of this example is that the salient concepts for image classification share similarity in texture rather than shape, which coincides with the findings of Geirhos et al. (2018)." }, { "heading": "6 RELATED WORK", "text": "Various approaches have been proposed to explain the decision making of pre-trained models. Most works fall under two categories: (i) feature-based explanation methods, which attribute the decision to important input features (Ribeiro et al., 2016; Lundberg & Lee, 2017; Smilkov et al., 2017; Chen et al., 2018), and (ii) sample-based explanation methods, which attribute the decision to previously observed samples (Koh & Liang, 2017; Yeh et al., 2018; Khanna et al., 2019; Arik & Pfister, 2019). For these forms of interpretability, different evaluations of explanations have been proposed, including more human-centric evaluations (Lundberg & Lee, 2017; Kim et al., 2018) and functionally-grounded evaluations (Samek et al., 2016; Kim et al., 2016; Ancona et al., 2017; Yeh et al., 2019). However, providing the most important input features or samples for a specific prediction does not necessarily give insight into how the model behaves globally, which our work aims to address with concept-based explanations. A few recent works on concept-based explanations are related. TCAV (Kim et al., 2018) uses human-labeled data to estimate the importance of a concept with respect to a specific class. Zhou et al. (2018) decompose the prediction of a data sample into linear combinations of concept components. Ghorbani et al. (2019) automate TCAV by replacing human-labeled data with automatic superpixel segmentation followed by k-means clustering. Bouchacourt & Denoyer (2019) discover concepts by training an inherently explainable model that learns a concept classifier jointly with the prediction model.
While all the aforementioned works define concept directions in the linear span of some activation layer of the model, our framework brings completeness and interpretability to concept discovery.

Our work is also closely related to methods that perform dimension reduction on neural network layers to obtain meaningful latent variables and to understand neural networks. Chan et al. (2015) cascade PCA layers to obtain satisfactory prediction performance. Raghu et al. (2017) apply SVD followed by CCA to compare two representations of a deep model, to help better understand deep representations. Kingma & Welling (2013) perform deep dimension reduction for generative models where the latent space can be semantically meaningful. For example, Chorowski et al. (2019) show that when learning from speech data, the latent dimensions are closely related to phonemes, which can be seen as human-relatable concepts in speech; and Radford et al. (2017) show that when learning from language data, a single unit can be closely related to sentiment." }, { "heading": "7 CONCLUSIONS", "text": "Concept-based explanations can be a key direction for understanding how DNNs make decisions. In this paper, we study concept-based explainability in a systematic framework. First, we define the notion of completeness, which quantifies how sufficient a particular set of concepts is in explaining the model's behavior. Based on performance and variability motivations, we propose two definitions to quantify completeness, and show that they yield the commonly-used PCA method under certain assumptions. Next, we study two additional constraints that ensure the interpretability of the discovered concepts. Through experiments on toy data, text, and images, we demonstrate that our method is effective in finding concepts that are complete (in explaining the model's predictions) and interpretable. Note that although our work focuses on post-hoc explainability of pre-trained DNNs, joint training with our proposed objective function can also be used to train an inherently interpretable model. A future direction is to explore whether jointly learning the concepts and the model can lead to better interpretability." }, { "heading": "APPENDIX A PROOF", "text": "" }, { "heading": "Proof of Proposition 2.1", "text": "Proof. By the basic properties of PCA, the first m PCA vectors (principal components) minimize the ℓ2 reconstruction error. Denoting the concatenation of the m PCA vectors as a matrix p and writing ‖·‖_F for the Frobenius norm, this property is equivalent to: for all c = [c_1 c_2 ... c_m],

∑_{x∈V_X} ‖P(Φ(x), p) − Φ(x)‖_F² ≤ ∑_{x∈V_X} ‖P(Φ(x), c) − Φ(x)‖_F².

By the isometry of h from the Frobenius norm to √L, we have

∑_{x∈V_X} L(h(P(Φ(x), p)), h(Φ(x))) ≤ ∑_{x∈V_X} L(h(P(Φ(x), c)), h(Φ(x))),

and since f(x) = h(Φ(x)) equals the label y on the validation set, we can rewrite this as

∑_{(x,y)∈V} L(h(P(Φ(x), p)), y) ≤ ∑_{(x,y)∈V} L(h(P(Φ(x), c)), y),

and subsequently obtain, for any c,

[R − ∑_{(x,y)∈V} L(h(P(Φ(x), p)), y)] / [R − ∑_{(x,y)∈V} L(f(x), y)] ≥ [R − ∑_{(x,y)∈V} L(h(P(Φ(x), c)), y)] / [R − ∑_{(x,y)∈V} L(f(x), y)]." }, { "heading": "Proof of Proposition 2.2", "text": "Proof. We note that the completeness only depends on the span of c_1, ..., c_m. If the matrix c has rank m′ ≤ m, we may find (by QR decomposition) an orthonormal basis c_1, ..., c_{m′} of its column span with the same completeness score. Therefore, for any given concepts c_1, ..., c_m, we can replace them with a set of orthonormal concepts c_1, ..., c_{m′} without loss of generality.
By the basic properties of PCA, the first m PCA vectors p_1, ..., p_m maximize the total projected data variance over all sets of at most m orthonormal vectors, which can be formalized as

∑_{i=1}^m v̂ar(Φ(X_v)^⊤ p_i) ≥ ∑_{i=1}^{m′} v̂ar(Φ(X_v)^⊤ c_i). (7)

Writing c^(j) for the j-th entry of a vector c, we may rewrite the total projected variance as

∑_{i=1}^{m′} v̂ar(Φ(X_v)^⊤ c_i) = ∑_{i=1}^{m′} v̂ar(Φ(X_v)^⊤ c_i) ∑_{j=1}^d (c_i^(j))²
= ∑_{i=1}^{m′} ∑_{j=1}^d v̂ar(Φ(X_v)^⊤ c_i) (c_i^(j))²
= ∑_{j=1}^d ∑_{i=1}^{m′} v̂ar(Φ(X_v)^⊤ c_i · c_i^(j))
= ∑_{j=1}^d v̂ar(∑_{i=1}^{m′} Φ(X_v)^⊤ c_i · c_i^(j))
= ∑_{j=1}^d v̂ar(P(Φ(X_v), c)^(j))
= v̂ar(P(Φ(X_v), c)). (8)

The fourth equality holds because Φ(X_v)^⊤ c_i and Φ(X_v)^⊤ c_j are uncorrelated for i ≠ j, which can be shown by computing their covariance:

Ê[(Φ(X_v)^⊤ c_i − Ê[Φ(X_v)^⊤ c_i])(Φ(X_v)^⊤ c_j − Ê[Φ(X_v)^⊤ c_j])]
= ∑_{t=1}^d ∑_{s=1}^d (Ê[Φ(X_v)^(t) Φ(X_v)^(s)] − Ê[Φ(X_v)^(t)] Ê[Φ(X_v)^(s)]) c_i^(t) c_j^(s)
= ∑_{t=1}^d ∑_{s=1}^d ĉov(Φ(X_v)^(t), Φ(X_v)^(s)) c_i^(t) c_j^(s)
= ∑_{t=1}^d v̂ar(Φ(X_v)^(t)) c_i^(t) c_j^(t) = 0,

where the last two equalities follow because each dimension of Φ(X_v) is uncorrelated with unit variance and c_i, c_j are orthogonal. Plugging equation 8 into equation 7, we obtain

v̂ar(P(Φ(X_v), p)) ≥ v̂ar(P(Φ(X_v), c)).

Define ĉ as the concatenated matrix of an orthonormal basis of the orthogonal complement of c, and define c_all by concatenating c and ĉ. We have Φ(X_v) − P(Φ(X_v), c) = P(Φ(X_v), ĉ) by fundamental properties of linear projections. Since all vectors in c are orthogonal to all vectors in ĉ, plugging c = c_all into equation 8 gives v̂ar(P(Φ(X_v), c)) + v̂ar(P(Φ(X_v), ĉ)) = v̂ar(P(Φ(X_v), c_all)) = v̂ar(Φ(X_v)). Combining these observations, we get

v̂ar(Φ(X_v) − P(Φ(X_v), c)) = v̂ar(P(Φ(X_v), ĉ)) = v̂ar(Φ(X_v)) − v̂ar(P(Φ(X_v), c))
≥ v̂ar(Φ(X_v)) − v̂ar(P(Φ(X_v), p)) = v̂ar(P(Φ(X_v), p̂)) = v̂ar(Φ(X_v) − P(Φ(X_v), p)),

and following the isometry assumption,

v̂ar(h(Φ(X_v) − P(Φ(X_v), p))) ≤ v̂ar(h(Φ(X_v) − P(Φ(X_v), c))),

so the first m PCA vectors maximize η^(2)." }, { "heading": "APPENDIX B ADDITIONAL EXPERIMENT RESULTS AND SETTINGS", "text": "Detailed experiment settings in the toy example: The complete list of targets y is: y_1 = ¬(ξ_1·ξ_3) + ξ_4, y_2 = ξ_2 + ξ_3 + ξ_4, y_3 = ξ_2·ξ_3 + ξ_4·ξ_5, y_4 = ξ_2 XOR ξ_3, y_5 = ξ_2 + ξ_5, y_6 = ¬(ξ_1 + ξ_4) + ξ_5, y_7 = (ξ_2·ξ_3) XOR ξ_5, y_8 = ξ_1·ξ_5 + ξ_2, y_9 = ξ_3, y_10 = (ξ_1·ξ_2) XOR ξ_4, y_11 = ¬(ξ_3 + ξ_5), y_12 = ξ_1 + ξ_4 + ξ_5, y_13 = ξ_2 XOR ξ_3, y_14 = ¬(ξ_1·ξ_5 + ξ_4), y_15 = ξ_4 XOR ξ_5. We create the dataset in matplotlib, where the color of each shape is sampled independently from {green, red, blue, black, orange, purple, yellow}, and the location is sampled randomly subject to the constraint that different shapes do not coincide. For hyper-parameter selection, we simply set λ_1 = λ_2 = 10.0. We fix this hyper-parameter throughout all experiments to prevent exhaustive tuning and over-fitting; scaling the hyper-parameters within the same order of magnitude produces similar results. We use 1000 images per cluster for all compared methods. For selecting concepts in TCAV and ACE, we compare the number of labels for which a concept has p-value < 0.16 and choose the top 5 concepts (since even a TCAV score of 1.0 does not attain p-value < 0.05). We note that we tried many alternatives for choosing concepts for TCAV and ACE, but failed to achieve better performance for TCAV and ACE.
The main reason may be that the ground truth y contains functions such as XOR, which have a TCAV score of 0 for their inputs.

For a more concrete example, consider the case Y = X_1 XOR X_2, and assume that we have 3 concept candidates X_1, X_2, X_3. All 3 concepts would have a TCAV score of 0 when each concept follows an independent Bernoulli distribution with p = 0.5. Therefore, TCAV will choose X_1, X_2, X_3 with equal probability. Although our method also produces linear concept directions, the completeness measure for {X_1, X_2} would be 1, while the completeness measures for {X_1, X_3} and {X_2, X_3} would be far less than 1. The reason is that we project the activation space onto the concept space, and then pass the projection through the remaining model to obtain our result. By projecting the activation space onto the span of {X_1, X_2}, we can still recover Y = X_1 XOR X_2. On the other hand, if we project the activation space onto {X_1, X_3}, the information of X_2 is lost, so we get a much worse completeness score (a numerical sketch of this argument is given at the end of this appendix). The key difference between our method and TCAV is that we feed the projected activations back into the original model (which is h(·) in our problem setting), which captures the non-linear relationship between the projected space and the output. Such a non-linear relationship is neglected by the TCAV score.

Implementation details: To calculate ConceptSHAP, we use the method of kernelSHAP (Lundberg & Lee, 2017) to compute the Shapley values efficiently. Before calculating nearest neighbors, we ensure that each concept vector has a positive dot product with the mean of its most salient cluster (if the dot product is negative, we take the negative of the concept vector as the new concept vector, which does not affect the loss at all). For the input cluster proposals in AwA, we follow the code of Ghorbani et al. (2019) and their hyper-parameters. For the input clusters in IMDB, we obtain 500 clusters from positive sub-sentences and 500 clusters from negative sub-sentences by k-means clustering. We train a linear classifier differentiating the cluster segments from random segments, and remove clusters with accuracy lower than 0.95. We also remove clusters that have fewer than 100 elements. For the input cluster proposals in the toy dataset, we use k-means clustering with 20 clusters.

Additional nearest neighbors for IMDB: We show additional nearest neighbors for each concept obtained on IMDB in Figure 5. We observe that some top nearest sub-sentences of concept 2 and concept 3 do not contain the word "think". The top nearest neighbors of concept 4 generally have a tone that encourages readers of the review to watch the movie, which is probably why it has the largest ConceptSHAP score.

Additional nearest neighbors for AwA: We show additional nearest neighbors for each concept obtained on AwA in Figure 6. The nearest neighbors all share the same texture. Interestingly, some of the nearest neighbors of ripple 2 are not exactly ripples, but trees/leaves that share a similar texture. Some nearest neighbors of dots contain dots from leaves instead of dots on animals. This again validates that the concepts are based on the texture of the image." }, { "heading": "APPENDIX C ADDITIONAL EXPERIMENTS", "text": "Throughout this section, we refer to η^(1) whenever the term completeness score is mentioned, for simplicity of presentation."
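Returning to the XOR example in Appendix B above, the following is a minimal sanity-check sketch (our own toy setup, not the paper's code). Unlike the paper's completeness, which reuses a fixed head h, here we refit a small head on the projected activations, which suffices to illustrate the information argument:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 5000
X = rng.binomial(1, 0.5, size=(n, 3)).astype(float)  # concepts X1, X2, X3 as activations
y = X[:, 0].astype(int) ^ X[:, 1].astype(int)        # Y = X1 XOR X2

def projected_accuracy(dims):
    """Zero out all activation dimensions outside `dims`, then fit a head on the projection."""
    P = np.zeros_like(X)
    P[:, dims] = X[:, dims]
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    return clf.fit(P[:4000], y[:4000]).score(P[4000:], y[4000:])

print(projected_accuracy([0, 1]))  # should approach 1.0: span of {X1, X2} recovers XOR
print(projected_accuracy([0, 2]))  # ~0.5: X2 is lost, so the completeness collapses
```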
}, { "heading": "C.1 HYPER-PARAMETERS", "text": "We run a controlled experiment on different λ “ λ1 “ λ2 values for our concept discovery algorithm to show the robustness of our algorithm against different hyper-parameters. We also perform an ablation study when we do not optimize for the completeness score, which we call ours_unsup˚ and ours_sup˚. We summarize the result in Figure 7. We observe that our method outperforms the baselines for λ P r0.5, 50s. This shows that our concept discovery algorithm is robust with respect to the hyper-parameter λ. We also observe that for the controlled version where the completeness score is not optimized, the alignment score and completeness score is worse compared to the baselines. This shows that the completeness objective is essential in obtaining complete and correct concepts in our algorithm." }, { "heading": "C.2 NUMBER OF CONCEPTS", "text": "We show the performance metric when different number of concepts are retrieved for both our concept discovery algorithm and the baselines in the toy dataset as well as the AwA dataset respectively in figure 8 and 9. In the toy dataset, we observe that the completeness for all the baselines are low compared to the completeness of our method. On the other hand, the alignment score of our method increases more significantly compared to the baselines when the number of concepts increases from 1 to 9. Only ours-sup and ours-unsup achieves a high completeness score when the concept number is larger or equal to 4, which may explain the superior performance of our method on the alignment score. As we have argued in the introduction, concepts that are not complete fails to \"fully\" interpret the model, and thus since most baseline methods are not complete, they fail to reach a high alignment score. On the other hand, our method which obtains a high completeness score achieves a high alignment score.\nFor the AwA dataset, we only show the completeness score for different algorithms with increasing number of concepts. We observe that the completeness of our method is much higher than ACE and PCA. We note that PCA still achieves satisfactory completeness score when the number of concepts is high. This is not surprising since we proved in Proposition 2.1 that PCA maximizes the completeness under isometry of the model. We point out that a larger number of total concepts makes the interpretation more understandable by human, and thus more interpretable. However, the explanation obtained by PCA is only close to complete (i.e. has a completeness larger than 0.95) with 29 concepts. This experiment provides additional empirical support that PCA obtains worse completeness score compared to our method, which may only be caused that the isometry (and perfect accuracy) assumption not holding in practice. This further validates the effectiveness of our method against naive PCA." }, { "heading": "C.3 ROBUSTNESS AGAINST NOISE", "text": "We would also be interested in how the completeness metric may change when the input is slightly perturbed. We perform an experiment on AwA where a Gaussian random noise is added to all the input in the validation set. We plot the completeness score against different standard deviation for each dimension of the added noise in Figure 10. We observe that the completeness score is still above 0.96 even when a multivariate Gaussian noise with standard deviation 10 in each dimension is added to the original input for all the validation set data. 
We further note that since the completeness score is calculated over all validation data, perturbing only one validation point has negligible impact on the completeness score." }, { "heading": "C.4 OPTIMIZING WITHOUT COMPLETENESS", "text": "We conduct an ablation study on how the results change when we drop the completeness term in the main objective. The results of optimizing without completeness on the toy dataset were shown as ours_unsup* and ours_sup* in Section C.1, and are much worse than ours with the completeness score optimized. We further show the resulting concepts when optimizing without the completeness metric on AwA; here the completeness score drops to 0. We visualize the top nearest neighbors of the concepts optimized without the completeness metric in Figure 11. We observe that while some concepts coincide with ours when completeness is optimized, the set now contains more concepts involving wheels and bars, which do not seem very relevant to classifying animals. The top nearest neighbors are still coherent in general. This is not surprising, since our cluster-sparsity regularizer enforces the discovered concepts to be coherent; however, the concepts may no longer be sufficient for the model's prediction, since we dropped the completeness objective and the completeness score becomes 0." } ]
2019
ON CONCEPT-BASED EXPLANATIONS
SP:46d31f575928c68f60302520901feabe823e0dd4
[ "This paper proposes a new kind of episodic finite MDPs called \"deep hierarchical MDP\" (hMDP). An L-layer hMDP can be *roughly* thought of as L episodic finite MDPs stacked together. A variant of UCRL2 [JOA10] is proposed to solve these hMDPs and some results from its regret analysis are provided. ", "This paper studies the theoretical aspects of HRL. It provides theoretical analysis for the complexity of Deep HRL. The idea is to exploit a given action hierarchy, and known state decomposition, the fact that the high-level state space shares similar low-level structures. The final result is an exponential improvement of HRL to flat RL. " ]
Modern complex sequential decision-making problems often require both low-level policy execution and high-level planning. Deep hierarchical reinforcement learning (Deep HRL) admits multi-layer abstractions which naturally model the policy in a hierarchical manner, and it is believed that deep HRL can reduce the sample complexity compared to standard RL frameworks. We initiate the study of rigorously characterizing the complexity of Deep HRL. We present a model-based optimistic algorithm which demonstrates that the complexity of learning a near-optimal policy for deep HRL scales with the sum of the number of states at each abstraction layer, whereas standard RL scales with their product. Our algorithm achieves this by using the fact that distinct high-level states have similar low-level structures, which allows efficient information exploitation, so that experiences from different high-level state-action pairs generalize to unseen state-actions. Overall, our result shows an exponential improvement of Deep HRL over the standard RL framework.
[]
[ { "authors": [ "Alekh Agarwal", "Sham Kakade", "Lin F Yang" ], "title": "On the optimality of sparse model-based planning for markov decision processes", "venue": null, "year": 1906 }, { "authors": [ "Shipra Agrawal", "Randy Jia" ], "title": "Posterior sampling for reinforcement learning: worst-case regret bounds", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Mohammad Gheshlaghi Azar", "Rémi Munos", "Hilbert J Kappen" ], "title": "Minimax pac bounds on the sample complexity of reinforcement learning with a generative model", "venue": "Machine learning,", "year": 2013 }, { "authors": [ "Mohammad Gheshlaghi Azar", "Ian Osband", "Rémi Munos" ], "title": "Minimax regret bounds for reinforcement learning", "venue": "arXiv preprint arXiv:1703.05449,", "year": 2017 }, { "authors": [ "Pierre-Luc Bacon", "Jean Harb", "Doina Precup" ], "title": "The option-critic architecture", "venue": "In Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Andrew G Barto", "Sridhar Mahadevan" ], "title": "Recent advances in hierarchical reinforcement learning", "venue": "Discrete event dynamic systems,", "year": 2003 }, { "authors": [ "Jianyu Chen", "Zining Wang", "Masayoshi Tomizuka" ], "title": "Deep hierarchical reinforcement learning for autonomous driving with distinct behaviors", "venue": "IEEE Intelligent Vehicles Symposium (IV),", "year": 2018 }, { "authors": [ "Peter Dayan", "Geoffrey E Hinton" ], "title": "Feudal reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 1993 }, { "authors": [ "Thomas G Dietterich" ], "title": "Hierarchical reinforcement learning with the maxq value function decomposition", "venue": "Journal of artificial intelligence research,", "year": 2000 }, { "authors": [ "Simon S Du", "Akshay Krishnamurthy", "Nan Jiang", "Alekh Agarwal", "Miroslav Dudı́k", "John Langford" ], "title": "Provably efficient RL with rich observations via latent state decoding", "venue": null, "year": 1901 }, { "authors": [ "Carlos Florensa", "Yan Duan", "Pieter Abbeel" ], "title": "Stochastic neural networks for hierarchical reinforcement learning", "venue": "arXiv preprint arXiv:1704.03012,", "year": 2017 }, { "authors": [ "Kevin Frans", "Jonathan Ho", "Xi Chen", "Pieter Abbeel", "John Schulman" ], "title": "Meta learning shared hierarchies", "venue": "arXiv preprint arXiv:1710.09767,", "year": 2017 }, { "authors": [ "Ronan Fruit", "Matteo Pirotta", "Alessandro Lazaric", "Emma Brunskill" ], "title": "Regret minimization in mdps with options without prior knowledge", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Thomas Jaksch", "Ronald Ortner", "Peter Auer" ], "title": "Near-optimal regret bounds for reinforcement learning", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Nan Jiang", "Akshay Krishnamurthy", "Alekh Agarwal", "John Langford", "Robert E Schapire" ], "title": "Contextual decision processes with low bellman rank are PAC-learnable", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Chi Jin", "Zeyuan Allen-Zhu", "Sebastien Bubeck", "Michael I Jordan" ], "title": "Is Q-learning provably efficient", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Chi Jin", "Zhuoran Yang", "Zhaoran Wang", "Michael I Jordan" ], "title": "Provably efficient reinforcement learning with linear function approximation", 
"venue": null, "year": 1907 }, { "authors": [ "Sham Kakade", "Mengdi Wang", "Lin Yang" ], "title": "Variance reduction methods for sublinear reinforcement learning", "venue": null, "year": 2018 }, { "authors": [ "Tor Lattimore", "Marcus Hutter" ], "title": "Pac bounds for discounted mdps", "venue": "In International Conference on Algorithmic Learning Theory,", "year": 2012 }, { "authors": [ "Hoang Minh Le", "Nan Jiang", "Alekh Agarwal", "Miroslav Dudı́k", "Yisong Yue", "Hal Daumé" ], "title": "Hierarchical imitation and reinforcement", "venue": "learning. ArXiv,", "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Alex Graves", "Ioannis Antonoglou", "Daan Wierstra", "Martin Riedmiller" ], "title": "Playing atari with deep reinforcement learning", "venue": "arXiv preprint arXiv:1312.5602,", "year": 2013 }, { "authors": [ "Jun Morimoto", "Kenji Doya" ], "title": "Acquisition of stand-up behavior by a real robot using hierarchical reinforcement learning", "venue": "Robotics and Autonomous Systems,", "year": 2001 }, { "authors": [ "Ronald Parr", "Stuart J Russell" ], "title": "Reinforcement learning with hierarchies of machines", "venue": "In Advances in neural information processing systems,", "year": 1998 }, { "authors": [ "Doina Precup" ], "title": "Temporal abstraction in reinforcement learning", "venue": null, "year": 2001 }, { "authors": [ "John Schulman", "Sergey Levine", "Philipp Moritz", "Michael Jordan", "Pieter Abbeel" ], "title": "Trust region policy optimization", "venue": null, "year": 2015 }, { "authors": [ "Aaron Sidford", "Mengdi Wang", "Xian Wu", "Lin Yang", "Yinyu Ye" ], "title": "Near-optimal time and sample complexities for solving markov decision processes with a generative model", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Aaron Sidford", "Mengdi Wang", "Xian Wu", "Yinyu Ye" ], "title": "Variance reduced value iteration and faster algorithms for solving markov decision processes", "venue": "In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms,", "year": 2018 }, { "authors": [ "David Silver", "Aja Huang", "Christopher Maddison", "Arthur Guez", "Laurent Sifre", "George Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot", "Sander Dieleman", "Dominik Grewe", "John Nham", "Nal Kalchbrenner", "Ilya Sutskever", "Timothy Lillicrap", "Madeleine Leach", "Koray Kavukcuoglu", "Thore Graepel", "Demis Hassabis" ], "title": "Mastering the game of go with deep neural networks and tree search", "venue": "Nature, 529:484–489,", "year": 2016 }, { "authors": [ "Martin Stolle", "Doina Precup" ], "title": "Learning options in reinforcement learning", "venue": "In International Symposium on abstraction, reformulation, and approximation,", "year": 2002 }, { "authors": [ "Richard S Sutton", "Doina Precup", "Satinder Singh" ], "title": "Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning", "venue": "Artificial intelligence,", "year": 1999 }, { "authors": [ "Tsachy Weissman", "Erik Ordentlich", "Gadiel Seroussi", "Sergio Verdu", "Marcelo J Weinberger" ], "title": "Inequalities for the l1 deviation of the empirical distribution", "venue": "Hewlett-Packard Labs, Tech. 
Rep,", "year": 2003 }, { "authors": [ "Zheng Wen", "Benjamin Van Roy" ], "title": "Efficient exploration and value function generalization in deterministic systems", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Lin F Yang", "Mengdi Wang" ], "title": "Reinforcement leaning in feature space: Matrix bandit, kernels, and regret bound", "venue": "arXiv preprint arXiv:1905.10389,", "year": 2019 }, { "authors": [ "Dongyang Zhao", "Liang Zhang", "Bo Zhang", "Lizhou Zheng", "Yongjun Bao", "Weipeng Yan" ], "title": "Deep hierarchical reinforcement learning based recommendations via multi-goals abstraction", "venue": null, "year": 1903 }, { "authors": [ "Weissman" ], "title": "2003), for any 1 ≤ i ≤ `−1", "venue": null, "year": 2003 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) is a powerful tool to solve sequential decision making problems in various domains, including computer games (Mnih et al., 2013), Go (Silver et al., 2016), robotics (Schulman et al., 2015). A particular feature in these successful applications of RL is that these tasks are concrete enough to be solved by primitive actions and do not require high-level planning. Indeed, when the problem is complex and requires high-level planning, directly applying RL algorithms cannot solve the problem. An example is the Atari game Montezuma’s Revenge, in which the agent needs to find keys, kills monsters, move to correct rooms, etc. This is notoriously difficulty problem that requires more sophisticated high-level planning.\nHierarchical reinforcement learning (HRL) is powerful framework that explicitly incorporate highlevel planning. Roughly speaking, HRL divides the decision problem into multiple layers and each layer has its own state space. States in higher layers represent more abstraction and thus higher layer states are some time named meta-states. When number of layers of abstraction is large, we call this framework, deep hierarchical reinforcement learning (deep HRL). In deep HRL, the agent makes decisions by looking at states from all layers. The dependency on higher layer states represents high-level planning. HRL has been successfully applied to many domains that require high-level planning, including autonomous driving (Chen et al., 2018), recommendation system (Zhao et al., 2019), robotics (Morimoto & Doya, 2001). Recently, extension to imitation learning has also been studied (Le et al., 2018).\nWhile being a practically powerful framework, theoretical understanding on HRL is still limited. Can we provably show the benefit of using HRL instead of naı̈ve RL? In particular, what can we gain from multi-layer abstraction of deep HRL? Existing theories mostly focus on option RL setting, which transits from the upper layer to lower layer when some stopping criterion is met (Fruit et al., 2017). This is different from our setting requiring horizons in each layer to be the same, which is common in computer games and autonomous driving Moreover, the number of samples needed in Fruit et al. (2017) is proportional to the total number of states and total number of actions. In our setting, both the total number of states and number of actions can be exponentially large and hence their algorithm becomes impractical.\nWe initiate the study of rigorously characterizing the complexity of deep HRL and explaining its benefits compared to classical RL. We study the most basic form, tabular deep HRL in which there\nare total L-layers and each layer has its own state space S` for ` ∈ [L]. One can simply apply classical RL algorithm on the enlarged state space S = S1 × S2 × · × SL. The sample complexity will however depend on the size of the enlarged states space |S| = ΠL`=1 |S`|. In this paper, we show because of the hierarchical structure, we can reduce the sample complexity exponentially, from ∝ poly(ΠL` |S`|) to ∝ ∑L `=1 poly(|S`|). We achieve this via a model-based algorithm which carefully constructs confidence of the model in a hierarchical manner. We fully exploit the structure that lower-level MDPs of different high-level states share the same transition probability, which allows us to combine the information collected at different high-level states and use it to give an accurate estimator of the model for all low-level MDPs. 
Due to this information aggregation, we are able to improve the sample complexity bound. To our knowledge, this is the first theoretical result quantifying the complexity of deep HRL and explaining its benefit over classical RL.

Organization. This paper is organized as follows. In Section 2 we discuss related work. In Section 3, we review basic RL concepts and formalize deep HRL. In Section 4, we present our main algorithm and its theoretical guarantees. In Section 5, we give a proof sketch of our main theorem. We conclude in Section 6 and defer some technical lemmas to the appendix." }, { "heading": "2 RELATED WORK", "text": "We discuss several related lines of work on tabular MDPs and hierarchical reinforcement learning in this section.

For tabular MDPs, many works focus on solving an MDP with a simulator which provides samples of the next state and reward given the current state-action pair, including Lattimore & Hutter (2012); Azar et al. (2013); Sidford et al. (2018b;a); Agarwal et al. (2019). Since one does not need to balance exploration and exploitation, this setting is easier than regret minimization.

There is also a line of work analyzing regret bounds in the online RL setting. Jaksch et al. (2010) and Agrawal & Jia (2017) propose model-based reinforcement learning algorithms which estimate the transition model from past samples and add a bonus to the estimate; their algorithms achieve regret bounds Õ(√(H⁴S²AT)) and Õ(√(H³S²AT)) respectively. Later, the UCBVI algorithm of Azar et al. (2017) adds the bonus term to the Q-function directly and achieves the regret bound Õ(√(H²SAT)), which matches the lower bound when the number of episodes is sufficiently large. Adopting variance reduction techniques, the vUCQ algorithm of Kakade et al. (2018) improves the lower-order terms in the regret bound. Jin et al. (2018) propose a model-free Q-learning algorithm which provably achieves regret bound Õ(√(H³SAT)).

Hierarchical reinforcement learning (Barto & Mahadevan, 2003) has also been broadly studied in Dayan & Hinton (1993); Parr & Russell (1998); Sutton et al. (1999); Dietterich (2000); Stolle & Precup (2002); Bacon et al. (2017); Florensa et al. (2017); Frans et al. (2017). The option framework, studied in Sutton et al. (1999); Precup (2001), is another popular formulation of hierarchical RL. Fruit et al. (2017) provide a regret analysis for option reinforcement learning, but their analysis only applies to the option RL setting. To our knowledge, there is no prior work analyzing the regret bound of multi-level hierarchical RL." }, { "heading": "3 PRELIMINARIES", "text": "" }, { "heading": "3.1 EPISODIC MARKOV DECISION PROCESS", "text": "In this paper, we consider finite-horizon Markov decision processes (MDPs). An MDP is specified by a tuple (S, A, H, P, r), where S is the (possibly uncountable) state space, A is a finite action space, H ∈ Z₊ is the planning horizon, P : S × A → Δ(S) is the transition function, and r : S × A → [0, 1] is the reward function. At each state s ∈ S, an agent interacts with the MDP by playing an action a ∈ A. Once an action a is played at state s, the agent receives an immediate reward r(s, a) ∈ [0, 1]¹, and the state transitions to the next state s′ with probability P(s′|s, a).
Starting from some initial state s₁ ∈ S (drawn from some distribution), the agent plays H steps (an episode), after which the system resets to another initial state s₁ sampled from the initial distribution.

¹Here we only consider rewards in [0, 1], but this result easily generalizes to rewards in a different range using a standard reduction (Sidford et al., 2018a). We can also generalize our result to the setting where the reward is stochastic, since estimating the reward accurately requires far fewer samples than estimating the transition.

For an MDP, our goal is to obtain an optimal policy (made precise shortly) π : S → A, a function that maps each state to an action. If an agent always follows the action given by a policy π, it induces a random trajectory for an episode: s₁, a₁, r₁, s₂, a₂, r₂, ..., s_H, a_H, r_H, where r₁ = r(s₁, a₁), s₂ ∼ P(·|s₁, a₁), a₂ = π(s₂), etc. The value function and Q-function at step h of a given policy are defined as

V_h^π(s) = E_π[∑_{h′=h}^H r_{h′} | s_h = s] and Q_h^π(s, a) = E_π[∑_{h′=h}^H r_{h′} | s_h = s, a_h = a],

where the expectation is over the sampled trajectories. The optimal policy π* is defined to be the policy with the largest V_1^π(s₁) for all s₁ ∈ S. For any optimal policy, its value and Q-function satisfy the following Bellman equations:

∀s ∈ S, a ∈ A, h ∈ [H]: Q_h^*(s, a) = r(s, a) + E_{s′∼P(·|s,a)} V_{h+1}^*(s′), V_h^*(s) = max_{a∈A} Q_h^*(s, a),

with V_{H+1}^*(s) = 0. (1)

We consider the MDP problem in the online learning setting, where the probability transition is unknown. Our goal is still to collect the maximum amount of reward, i.e., to play a policy comparable to the optimal one. The agent therefore needs to learn through trial and error, improving its policy by learning from experience. Suppose we allow the agent to play K ≥ 1 episodes in total. In each episode, the agent follows a policy π_k computed from the experience collected in episodes 1, 2, ..., k − 1. To measure the performance of the agent, we use the standard regret formulation, which compares the reward collected by the agent to the performance of an optimal policy:

R(K) = ∑_{k=1}^K (V_1^*(s₁) − V_1^{π_k}(s₁)). (2)

Note that if the agent learns nothing, we expect R(K) ∝ K; if the agent is able to learn, the average regret R(K)/K, which measures the average error per step, goes to 0 as K becomes large. In the online MDP literature, model-based algorithms (e.g. Jaksch et al. (2010)) achieve regret R(K) ≤ Õ(√(H²|S|²|A| · HK))." }, { "heading": "3.2 DEEP HIERARCHICAL MDP", "text": "In this section we introduce a special type of episodic MDP, the hierarchical MDP (hMDP). Viewed as a normal MDP, its state space size can be exponentially large. Formally, an hMDP consists of L levels of episodic MDPs, with the ℓ-th level having planning horizon H_ℓ. One can view the ℓ-th level MDP as a subtask of the (ℓ+1)-th level MDP: to transition between two states at level ℓ+1, the agent needs to play an episode in the ℓ-th level MDP (the state transition is defined formally shortly). Therefore, the total planning horizon of the hierarchical MDP is H = ∏_{ℓ=1}^L H_ℓ.

Each step h of the hMDP can be represented by a tuple (h₁, ..., h_L), where h_ℓ ∈ [H_ℓ] is the step within the ℓ-th level MDP for 1 ≤ ℓ ≤ L. We use H̃_ℓ = ∏_{i=ℓ}^L H_i to denote the effective horizon of level ℓ, i.e., the total number of actions in A_ℓ ∪ ⋯ ∪ A_L that need to be played in one episode of the full hMDP. Note that for each h = (h₁, ..., h_L) ∈ [1, H], the step h + 1 = (h′₁, ..., h′_L) is the immediate next lexicographical tuple after (h₁, ..., h_L). We now describe how the agent interacts with the full hMDP. At each step h, an action can only be played at one level. This level is given by the function σ : [H] → [L], formally defined as

σ(h) = arg max_{ℓ∈[L]} {h_ℓ = H_ℓ} + 1.

It is the lowest level among the MDPs that has not reached the last step of its horizon.
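A small sketch of the step bookkeeping implied by these definitions, i.e., the level selector σ(h) and the lexicographic successor h + 1; the function names and odometer-style representation are our own:

```python
def sigma(h_tuple, horizons):
    """Level at which an action is played at step h = (h_1, ..., h_L):
    the lowest level that has not yet reached the last step of its horizon."""
    for level, (h, H) in enumerate(zip(h_tuple, horizons), start=1):
        if h < H:
            return level
    return len(horizons)  # every level is at its last step: final step of the episode

def next_step(h_tuple, horizons):
    """Lexicographic successor h + 1: levels below sigma(h) reset, sigma(h) advances."""
    h = list(h_tuple)
    for i, H in enumerate(horizons):
        if h[i] < H:
            h[i] += 1
            return tuple(h)
        h[i] = 1  # this level finished its sub-episode; reset it and carry upward
    return tuple(h)

# example: a 2-level hMDP with H_1 = 3, H_2 = 2
print(sigma((3, 1), (3, 2)))       # 2: level 1 is done, so level 2 acts
print(next_step((3, 1), (3, 2)))   # (1, 2): level 1 resets, level 2 advances
```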
Note that, for each h = (h1, · · · , hL) ∈ [1, H], we have h+ 1 = (h′1, · · · , h′L) is the immediate next lexicographical tuple of (h1, · · · , hL). We now describe how the agent can interact with the full hMDP. In fact, in each step h, only an action in one level can be played. This level is given by the function σ : [H] → [L], formally defined as\nσ(h) = arg max `∈[L] {h` = H`}+ 1.\nIt characterizes the lowest level of MPDs which does not reach the last step in its horizon.\nresult to the setting where the reward is stochastic, since estimating the reward accurately requires much fewer samples than estimating the transition.\nTo make it formal for the state-transition, we use S` to denote the set of states at level `, A` to denote the set of actions at level `. To be convenient, we assume for every ` ∈ [L], for any H`length trajectory in the `-th level MDP, the last state always falls in S ′` ⊂ Sl, which we call as the endstates in level `. At step h of the full hMDP, the full state is described as a length-L tuple: (s1, · · · , sL). For any 1 ≤ ` < σ(h), we immediately have s` ∈ S ′` is an endstate of level `. Note that the total number of states of the full MDP is ∏L `=1 |S`|, which is exponentially larger than the average size of a level’s state space.\nNow we define the transition. At step h of the full hMDP, the agent plays an action aσ(h)h ∈ Aσ(h) for the σ(h)-level MDP. The state of this MDP triggers a transition at level σ(h):\ns σ(h) h+1 ∼P ( · | sσ(h)h , a σ(h) h , s σ(h)−1 h ) Note that the probability transition is determined by the state-action-ending-state tuple (s σ(h) h , a σ(h) h , s σ(h)−1 h ), instead of single state-action pair. Moreover, all the MDPs with level lower than σ(h) will reset their state based on some initial distribution P i0:\nsih+1 ∼P i0 (·) , ∀1 ≤ i ≤ σ(h)− 1, and all the MDPs with level higher than σ(h) will keep their states unmoved.\nFor any given ` ∈ [L], we use E` to denote the state-action-ending-state tuple at level `:\nE` = {(s`, a`, s`−1) | s` ∈ S`, a` ∈ A`, s`−1 ∈ S′`−1}.\nAs for the reward, we use r(s1h, · · · , sLh , a σ(h) h ) ∈ [0, 1] to denote the immediate reward obtained after executing the action aσ(h)h . We illustrate the hierarchical MDP model in Figure 1.\nAn Example: Autonomous Driving. We here give a more concrete example. Suppose we want our vehicle to reach the destination, while not hitting obstacles or crashing into another vehicles or pedestrians. We use the following hierarchical MDP structure to formulate this problem.\nLevel 1 represents the status (e.g. position on the road, whether has an obstacle in the front) of the vehicle, and the ending state represents whether the vehicle avoids all the obstacles, other vehicles, pedestrians and arrives at the end of the road. Level 2 represents the road map, and the ending state represents whether the vehicle reaches the desired position.\nAt each time step, if the vehicle does not reach the end state of level 1, that is, it still on the road and not at a crossing, then the vehicle needs to decide whether speeding up, slowing down or dodging the obstacle in the front. If the vehicle reaches the end state of level 1, that is, it arrives at the end of a road, then it needs to decide whether going straight forward, turning left or turning right. This process ends if and only if the vehicle reaches the desired position." }, { "heading": "3.3 DEEP HIERARCHICAL REINFORCEMENT LEARNING OBJECTIVE", "text": "Suppose the environment is an hMDP. 
The hierarchical structure and the reward are known, but the transition models are not. As in the classic RL setting, our agent interacts with the unknown hMDP and aims to accumulate rewards comparable to those of an optimal policy. Our goal is to design an algorithm that minimizes the regret defined in Equation (2).

Since an hMDP is a very special MDP, we redefine the related quantities here. The policy π is a mapping from S_1 × ⋯ × S_L × [H] to A_1 ∪ ⋯ ∪ A_L, where π(s¹, ..., s^L, h) ∈ A_ℓ if and only if σ(h) = ℓ, s¹ ∈ S′_1, ..., s^{ℓ−1} ∈ S′_{ℓ−1}, and s^ℓ ∈ S_ℓ, ..., s^L ∈ S_L. Given a policy π and a step h, with ℓ = σ(h) and e = (s^ℓ, a, s^{ℓ−1}) the corresponding state-action-endstate tuple, the value function and Q-function are again defined as in Equation (1), and can be rewritten as

Q^π_h(s¹, ..., s^L; a) = r(s¹, ..., s^L; a) + E_{s̃¹∼P^1_0, ..., s̃^{ℓ−1}∼P^{ℓ−1}_0, s̃^ℓ∼P(·|e)} V^π_{h+1}(s̃¹, ..., s̃^ℓ, s^{ℓ+1}, ..., s^L),

V^π_h(s¹, ..., s^L) = Q^π_h(s¹, ..., s^L; π(s¹, ..., s^L, h)),

V^π_{H+1}(s¹, ..., s^L) = 0, ∀s^ℓ ∈ S_ℓ, 1 ≤ ℓ ≤ L.

Our objective is to find a policy π* that maximizes the value function V^π_h for all states and all steps in a horizon. We use V*_h and Q*_h to denote the optimal value function and optimal Q-function, i.e., the value function and Q-function when applying the optimal policy π*." }, { "heading": "4 ALGORITHM", "text": "In this section, we present a model-based hierarchical reinforcement learning algorithm together with its regret analysis." }, { "heading": "4.1 MODEL-BASED HIERARCHICAL REINFORCEMENT LEARNING ALGORITHM", "text": "To formally present our algorithm, we first explain the high-level ideas. Note that the full model size is O(∏_{ℓ=1}^L |S_ℓ||E_ℓ|), where |S_ℓ| is the number of states at level ℓ and |E_ℓ| is the number of state-action-endstate tuples at level ℓ. However, there is rich structure for the algorithm to exploit: low-level MDPs corresponding to different high-level states share the same transition model. Recall that our eventual goal is to learn the hierarchical model with a number of samples much smaller than ∏_{ℓ=1}^L |S_ℓ||E_ℓ|. To achieve this, we group the samples obtained from transition models by state-action-endstate tuple, and the samples obtained from initial distributions by level: even if two samples are collected at different high-level state-action pairs, they are grouped into the same set as long as they come from the same state-action-endstate tuple. We then use the samples from each level to estimate the initial distribution of the MDPs at that level. In effect, to estimate all MDPs at a level accurately, we only need to visit that level a number of times proportional to the size of a single MDP at that level, which is far smaller than the combined model size of all MDPs at that level.

Next, we explain how to deploy the algorithm in the online setting, where we can only visit a state by following the trajectory of some policy. Initially, we have no knowledge of the MDPs at each level, so we initialize each of them to an arbitrary MDP. Suppose we play the whole game for K episodes, and in each episode we play H = ∏_{ℓ=1}^L H_ℓ steps, where H_ℓ is the horizon of an MDP at level ℓ. Suppose at episode k ∈ [K] and step h ∈ [H] the transition happens at level ℓ (i.e., σ(h) = ℓ). We denote the full state we observe by (s¹_{h,k}, ..., s^L_{h,k}) and the action we take by a^ℓ_{h,k}.
Then we collect data samples of the form (e^ℓ_{h,k}, s^ℓ_{h+1,k}), where e^ℓ_{h,k} = (s^ℓ_{h,k}, a^ℓ_{h,k}, s^{ℓ−1}_{h,k}) ∈ E_ℓ is a state-action-endstate tuple at level ℓ, and also data samples of the form (i, s^i_{h+1,k}) for every 1 ≤ i ≤ ℓ − 1. We add them to buffers: s^ℓ_{h+1,k} to N_{e_{h,k}}, the buffer of the state-action-endstate tuple e_{h,k}, and s^i_{h+1,k} to M_i, the buffer of level i. Here M_i and N_{e_{h,k}} are multisets, i.e., their elements can repeat. We use N_{e_{h,k}} to estimate the transition probability P(·|e^ℓ_{h,k}), and M_i to estimate P^i_0(·).

When a new episode k starts, we first estimate the model based on all the samples collected and partitioned so far. Using this estimated model, we compute a value function and a Q-function. However, the estimated model always carries estimation error due to insufficient samples from certain state-action-endstate tuples. To account for this, we estimate the model uncertainty using concentration inequalities. Specifically, we add the uncertainties to our value function estimator and use the modified value function to play the new episode. Note that doing so encourages exploration of unexperienced state-action pairs. In fact, as we show shortly, with appropriate uncertainty estimates the model becomes more accurate whenever the algorithm makes a mistake (e.g., plays a suboptimal action); a pigeonhole argument then shows that our algorithm achieves a low regret bound.

Our model-based algorithm is formally presented in Algorithm 1. We denote our estimator of the initial distribution at level ℓ by

P̃^ℓ_{k,0}(s) = #{s ∈ M_ℓ} / |M_ℓ|, (3)

where #{s ∈ M_ℓ} and |M_ℓ| are the number of occurrences of state s in buffer M_ℓ and the total number of states in buffer M_ℓ, respectively. We denote our estimator of the transition distribution at state-action-endstate tuple e by

P̃_k(s|e) = #{s ∈ N_e} / |N_e|, (4)

where #{s ∈ N_e} and |N_e| are the number of occurrences of state s in buffer N_e and the total number of states in buffer N_e, respectively. With these estimators, we solve for the Q-function and value function by dynamic programming:

Q^k_h(s¹, ..., s^L; a) = r(s¹, ..., s^L; a) + b(k, h, ℓ, e) + E_{s̃¹∼P̃^1_{k,0}, ..., s̃^{ℓ−1}∼P̃^{ℓ−1}_{k,0}, s̃^ℓ∼P̃_k(·|e)} V^k_{h+1}(s̃¹, ..., s̃^ℓ, s^{ℓ+1}, ..., s^L),

V^k_h(s¹, ..., s^L) = min{H, max_{a∈A_ℓ} [Q^k_h(s¹, ..., s^L; a)]}, (5)

for 1 ≤ h ≤ H, ℓ = σ(h), e ∈ E_ℓ, with V^k_{H+1}(s¹, ..., s^L) = 0. Here the bonus function b(k, h, ℓ, e) estimates the uncertainty of the Q and V estimators, and is defined as:

b(k, h, ℓ, e) = H · min{1, √(8(|S_ℓ| + log(4L²|S_ℓ||E_ℓ|k²/δ)) / n(k − 1, e))} + H · ∑_{i=1}^{ℓ−1} min{1, √(8(|S_i| + log(4L²|S_i||E_ℓ|k²/δ)) / ((k − 1)H̃_i))}, (6)

where δ ∈ [0, 1] is a constant to be specified later, and n(k − 1, e) is the number of times the state-action-endstate tuple e was encountered in the first k − 1 episodes. These bonus functions bound the (per-step) difference between the estimated Q-function and the exact value with high probability. For episode k, our exploratory policy is then

π_k(s¹, ..., s^L, h) = arg max_{a∈A_ℓ} [Q^k_h(s¹, ..., s^L; a)]. (7)
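For intuition, a minimal sketch of the bookkeeping behind the estimators (3), (4) and the bonus (6) is given below; the class and argument names are our own assumptions, and the planning step (equation (5)) is omitted:

```python
import math
from collections import Counter, defaultdict

class HierModel:
    """Empirical estimators (3)-(4) and bonus (6) for a learned hMDP (a sketch)."""
    def __init__(self, L, state_sizes, tuple_sizes, horizons_eff, H, delta):
        self.M = defaultdict(Counter)   # level -> Counter of reset states (initial dists)
        self.N = defaultdict(Counter)   # (state, action, endstate) tuple -> Counter
        self.L, self.S, self.E = L, state_sizes, tuple_sizes   # dicts keyed by level
        self.Ht, self.H, self.delta = horizons_eff, H, delta   # H_tilde per level, H, delta

    def p0_hat(self, level, s):                     # estimator (3)
        m = self.M[level]
        return m[s] / max(1, sum(m.values()))

    def p_hat(self, e, s):                          # estimator (4)
        n = self.N[e]
        return n[s] / max(1, sum(n.values()))

    def bonus(self, k, level, e):                   # bonus (6)
        def term(S, n):
            log_t = math.log(4 * self.L**2 * S * self.E[level] * k**2 / self.delta)
            return min(1.0, math.sqrt(8 * (S + log_t) / max(1, n)))
        b = self.H * term(self.S[level], sum(self.N[e].values()))
        for i in range(1, level):   # lower levels, reset (k - 1) * H_tilde_i times
            b += self.H * term(self.S[i], (k - 1) * self.Ht[i])
        return b
```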
Algorithm 1 Model-based Algorithm for Hierarchical RL
1: Input: An MDP with hierarchical structure, δ
2: Initialize: M_ℓ = N_e = ∅ (elements repeatable) for every 1 ≤ ℓ ≤ L, e ∈ E_ℓ
3: Initialize: b(k, h, ℓ, e) as in formula (6)
4: for k = 1 : K do
5:   Calculate V^k_h, Q^k_h, π_k using formulas (5), (7).
6:   for h = 1 : H do
7:     Play action a^{σ(h)}_{h,k} = arg max_{a∈A_{σ(h)}} [Q^k_h(s¹_{h,k}, ..., s^L_{h,k}; a)];
8:     Get next state (s¹_{h+1,k}, ..., s^L_{h+1,k});
9:     for i = 1 : σ(h) − 1 do
10:      Put s^i_{h+1,k} into M_i;
11:    Put s^{σ(h)}_{h+1,k} into N_{(s^{σ(h)}_{h,k}, a^{σ(h)}_{h,k}, s^{σ(h)−1}_{h,k})}.
12: Update

P̃^ℓ_{k,0}(s) = #{s ∈ M_ℓ} / |M_ℓ|, ∀1 ≤ ℓ ≤ L, s ∈ S_ℓ,

P̃_k(s|e) = #{s ∈ N_e} / |N_e|, ∀1 ≤ ℓ ≤ L, e ∈ E_ℓ, s ∈ S_ℓ." }, { "heading": "4.2 REGRET BOUND FOR ALGORITHM 1", "text": "In this subsection we provide a formal guarantee for Algorithm 1. We present a proof sketch in the next section and defer the full proof to the appendix.

Theorem 4.1. Suppose we run Algorithm 1 for K ≥ 1 episodes on an hMDP. For k ∈ [K], let π_k be the policy played by the algorithm in episode k. Then, with probability at least 1 − δ,

R(K) = ∑_{ℓ=1}^L Õ(H H̃_ℓ |E_ℓ| + H √(K H̃_ℓ · |E_ℓ|(|S_ℓ| + log δ⁻¹))),

where δ ∈ (0, 1) and R(K) is defined in Equation (2).

From this theorem, we observe that the regret bound only depends on ∑_{ℓ=1}^L √(|S_ℓ||E_ℓ|), where |E_ℓ| = |S_ℓ||A_ℓ||S′_{ℓ−1}| (here |S′_{ℓ−1}| is the number of endstates at level ℓ − 1). Usually, the number of actions and the number of endstates at a level are much smaller than the number of states and can be viewed as constants, so our regret bound effectively depends only on ∑_{ℓ=1}^L |S_ℓ|. This means that after K = Ω̃(∑_{ℓ=1}^L H√(H̃_ℓ|E_ℓ||S_ℓ|)) episodes, the algorithm achieves constant average regret R(K)/K = O(1) (the point at which the agent has learned a meaningful amount of information). By contrast, consider the full hMDP, whose state space size is ∏_{ℓ=1}^L |S_ℓ|. With a model-based or model-free algorithm such as Jaksch et al. (2010); Jin et al. (2019), the number of episodes needed to achieve constant average regret would be K ≳ ∏_{ℓ=1}^L |S_ℓ|. Since ∏_{ℓ=1}^L |S_ℓ| can be exponentially larger than poly(∑_{ℓ=1}^L |S_ℓ|), our algorithm achieves an exponential saving in the sample complexity of RL." }, { "heading": "5 PROOF SKETCH", "text": "The proof of Theorem 4.1 consists of two parts. In the first part, we prove that with high probability, the difference between the empirical expectation and the true expectation of the value function is bounded by the bonus b. The proof of this property involves estimating the total variation (TV) distance between a distribution on the state space S_ℓ of level ℓ and its empirical estimate from n samples; this TV distance is bounded by Õ(√(|S_ℓ|/n)) with high probability.

The second part of the proof shows that if the difference between the empirical and true expectations of the value function is bounded by the bonus b, then the estimator Q^k_h of the Q-function is, with high probability, an optimistic estimate of the true Q-function: for every s^i ∈ S_i, we have Q^k_h(s¹, ..., s^L; a) ≥ Q*_h(s¹, ..., s^L; a). We can then show that the regret is upper bounded by the sum of all bonuses along the sample path, and obtain the regret bound by summing the bonuses over all steps and episodes. We note that at level ℓ there are only |E_ℓ| distributions to estimate, and each is a distribution on S_ℓ. Applying Hölder's inequality, we obtain the regret bound ∑_{ℓ=1}^L Õ(√(|S_ℓ||E_ℓ|K)), where the dependence on H and δ is absorbed into Õ.
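A quick numerical sanity check of the pigeonhole step in this sketch; the toy visit process is our own assumption (one uniformly chosen tuple per episode), not part of the proof:

```python
import math, random

# Summing min(1, sqrt(S / n(e))) along a path where each of |E| tuples is visited
# repeatedly grows like sqrt(S * |E| * K), matching the sqrt(|S_l||E_l|K) term.
S, E, K = 20, 50, 10_000
counts = {e: 0 for e in range(E)}
total = 0.0
for k in range(K):
    e = random.randrange(E)              # one visited tuple per episode, uniformly
    counts[e] += 1
    total += min(1.0, math.sqrt(S / counts[e]))
print(total, math.sqrt(S * E * K))       # same order of magnitude
```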
}, { "heading": "6 CONCLUSION", "text": "In this paper we prove the benefit of hierarchical reinforcement learning theoretically. We propose a model-based hierarchical RL algorithm which achieves a regret bound that is exponentially better than the naive RL algrorithm. To our knowledge, this is the first theoretical result demonstrating the benefit of using deep hierarchical reinforcement learning. Below we list two future directions.\nDeep Hierarchical Reenforcement Learning with Function Approximation The current work focuses the most basic formulation, tabular RL: When state space is large, function approximation is required for generalization across states. Recently, a line of work gave provably polynomial sample complexity upper bound for RL with function approximation under various assumptions (Wen & Van Roy, 2013; Du et al., 2019; Jiang et al., 2017; Yang & Wang, 2019; Jin et al., 2019). An interesting direction is to combine our analysis with these results and obtain guarantees on deep HRL with function approximation.\nDeep Hierarchical Reenforcement Imitation Learning Imitation learning is another paradigm where expert’s trajectories are available to the agent. Le et al. (2018) presented a framework to combine hierarchical learning and imitation learning. However, there is no formal statistical guarantee. We believe our analysis can be leveraged to understand deep hierarchical imitation learning too." }, { "heading": "A PROOF OF THEOREM 4.1", "text": "A.1 NOTATIONS FOR THE PROOF\nWe will specify some useful notations in the proof first.\nTo be convenient, we use Vh[K], Qh[K, a] to denote Vh(s1k,h1 , · · · , s L k,hL ) and Qh(s 1 k,h, · · · , sLk,h; a). We use Vh[k]⊗` to denote the |S1| × · · · × |S`|-dimensional tensor whose (x1, · · · , x`)-element is Vh(x1, · · · , x`, s`+1k,h+1, · · · , sLk,h+1). Given probability distribution P i(·) (1 ≤ i ≤ `) over the state space Si, we use Pi to denote the operator over tensors:\nPi [ V ⊗`h [k] ] = ∑ si∈S P i(si)Vh1,··· ,hL(·, · · · , si, · · · , ·, x`, s`+1k,h+1, · · · , s L k,h+1), (8)\nwhich can be understood as taking the expectation on the i-th element. We can also define the tensor product operator Pi1 ⊗Pi2 , · · ·⊗Pij for different i1, · · · , ij as the composite operator, where each Pij is a operator of dimension |Sij |.\nA.2 PROOF OF THEOREM 4.1\nWe present the proof of Theorem 4.1 in this subsection. In the next we use ek,h to denote the state-action-endstate pair (slk,h, a l k,h, s l−1 k,h ).\nWe first present a lemma indicating that with high probability, the difference between the empirical expectation of the value function is bounded by the bonus. Lemma A.1. With probability at least 1− δ, we have∣∣∣[P̃1k,0 ⊗ · · · ⊗ P̃`−1k,0 ⊗ P̃k(·|e)− P1k,0 ⊗ · · · P`−1k,0 ⊗ Pk(·|e)]V kh+1[k]⊗`∣∣∣ ≤ b(k, h, `, e) for every k ≥ 1, 1 ≤ h ≤ H, 1 ≤ ` ≤ L and e ∈ E`.\nProof. 
Given any $k\ge 1$, $1\le h\le H$, $1\le\ell\le L$ and $e = (s^\ell,a^\ell,s^{\ell-1})\in E_\ell$, we have the following estimate of the error between the estimated and the true transition model:
$$\Big|\big[\tilde{P}^1_{k,0}\otimes\dots\otimes\tilde{P}^{\ell-1}_{k,0}\otimes\tilde{P}_k(\cdot\mid e) - P^1_{k,0}\otimes\dots\otimes P^{\ell-1}_{k,0}\otimes P_k(\cdot\mid e)\big]V^k_{h+1}[k]^{\otimes\ell}\Big|$$
$$\le \sum_{i=1}^{\ell-1}\Big|\big[(\tilde{P}^i_{k,0}-P^i_{k,0})\otimes\tilde{P}^1_{k,0}\otimes\dots\otimes\tilde{P}^{i-1}_{k,0}\otimes P^{i+1}_{k,0}\otimes\dots\otimes P^{\ell-1}_{k,0}\otimes\tilde{P}_k(\cdot\mid e)\big]V^k_{h+1}[k]^{\otimes\ell}\Big| + \Big|\big[(\tilde{P}_k(\cdot\mid e)-P_k(\cdot\mid e))\otimes P^1_{k,0}\otimes\dots\otimes P^{\ell-1}_{k,0}\big]V^k_{h+1}[k]^{\otimes\ell}\Big|$$
$$\le \sum_{i=1}^{\ell-1}\big\|\tilde{P}^i_{k,0}-P^i_{k,0}\big\|_1\,\Big\|\big[\tilde{P}^1_{k,0}\otimes\dots\otimes\tilde{P}^{i-1}_{k,0}\otimes P^{i+1}_{k,0}\otimes\dots\otimes P^{\ell-1}_{k,0}\otimes\tilde{P}_k(\cdot\mid e)\big]V^k_{h+1}[k]^{\otimes\ell}\Big\|_\infty + \big\|\tilde{P}_k(\cdot\mid e)-P_k(\cdot\mid e)\big\|_1\,\Big\|\big[P^1_{k,0}\otimes\dots\otimes P^{\ell-1}_{k,0}\big]V^k_{h+1}[k]^{\otimes\ell}\Big\|_\infty$$
$$\le \sum_{i=1}^{\ell-1}\big\|\tilde{P}^i_{k,0}-P^i_{k,0}\big\|_1\cdot H + \big\|\tilde{P}_k(\cdot\mid e)-P_k(\cdot\mid e)\big\|_1\cdot H,$$
where the first inequality is obtained by replacing the estimated distributions by the true ones one coordinate at a time and applying the triangle inequality.

According to Theorem 2.1 in Weissman et al. (2003), for any $1\le i\le\ell-1$, with probability at least $1-\delta$ we have
$$\big\|\tilde{P}^i_{k,0}-P^i_{k,0}\big\|_1 \le \sqrt{\frac{8(|S_i|+\log\delta^{-1})}{(k-1)\tilde{H}_i}}, \qquad (9)$$
where we use the fact that after $k-1$ episodes we have collected $(k-1)\tilde{H}_i$ samples for the initial distribution of level $i$. Similarly, with probability at least $1-\delta$ we have
$$\big\|\tilde{P}_k(\cdot\mid e)-P_k(\cdot\mid e)\big\|_1 \le \sqrt{\frac{8(|S_\ell|+\log\delta^{-1})}{n(k-1,e)}},$$
where $n(k-1,e)$ denotes the number of appearances of the state-action-endstate tuple $e$ up to and including episode $k-1$. Replacing $\delta$ with $\delta/(4L^2|S_i||E_\ell|k^2)$ and applying a union bound over all $1\le i\le\ell$, we obtain
$$\sum_{i=1}^{\ell-1}\big\|\tilde{P}^i_{k,0}-P^i_{k,0}\big\|_1\cdot H + \big\|\tilde{P}_k(\cdot\mid e)-P_k(\cdot\mid e)\big\|_1\cdot H \le H\sqrt{\frac{8\big(|S_\ell|+\log(4L^2|S_\ell||E_\ell|k^2/\delta)\big)}{n(k-1,e)}} + H\sum_{i=1}^{\ell-1}\sqrt{\frac{8\big(|S_i|+\log(4L^2|S_i||E_\ell|k^2/\delta)\big)}{(k-1)\tilde{H}_i}}$$
with probability at least $1-\delta/(4L|E_\ell|k^2)$. Therefore, noticing that
$$\Big|\big[\tilde{P}^1_{k,0}\otimes\dots\otimes\tilde{P}^{\ell-1}_{k,0}\otimes\tilde{P}_k(\cdot\mid e) - P^1_{k,0}\otimes\dots\otimes P^{\ell-1}_{k,0}\otimes P_k(\cdot\mid e)\big]V^k_{h+1}[k]^{\otimes\ell}\Big| \le H$$
always holds, if we choose
$$b(k,h,\ell,e) = H\min\Bigg\{1,\sqrt{\frac{8\big(|S_\ell|+\log(4L^2|S_\ell||E_\ell|k^2/\delta)\big)}{n(k-1,e)}}\Bigg\} + H\sum_{i=1}^{\ell-1}\min\Bigg\{1,\sqrt{\frac{8\big(|S_i|+\log(4L^2|S_i||E_\ell|k^2/\delta)\big)}{(k-1)\tilde{H}_i}}\Bigg\}, \qquad (10)$$
then
$$\Big|\big[\tilde{P}^1_{k,0}\otimes\dots\otimes\tilde{P}^{\ell-1}_{k,0}\otimes\tilde{P}_k(\cdot\mid e) - P^1_{k,0}\otimes\dots\otimes P^{\ell-1}_{k,0}\otimes P_k(\cdot\mid e)\big]V^k_{h+1}[k]^{\otimes\ell}\Big| \le b(k,h,\ell,e)$$
holds with probability at least $1-\delta/(2L|E_\ell|k^2)$. Finally, applying a union bound over all $k\ge 1$, $1\le\ell\le L$, and $e\in E_\ell$, the above inequality holds for every $k\ge 1$, $1\le h\le H$, $1\le\ell\le L$, $e\in E_\ell$ with probability at least $1-\delta$.

Next we present a lemma indicating that if the event in Lemma A.1 holds, then $V^k_h$ is always an optimistic estimate of the true value function $V^*_h$.

Lemma A.2. Suppose the event in Lemma A.1 holds. Then $V^k_h[k] \ge V^*_h[k]$ holds for every $1\le h\le H$ and $1\le k\le K$.

Proof. We prove a stronger version of this lemma: for every $(s^1,\dots,s^L)\in S_1\times\dots\times S_L$, we have
$$V^k_h(s^1,\dots,s^L) \ge V^*_h(s^1,\dots,s^L).$$
We prove this by induction on $h$. For the base case $h = H+1$, since $V^k_{H+1} = V^*_{H+1} = 0$, the inequality already holds. Next we assume the inequality holds at step $h+1$ and consider step $h$ with $\ell = \sigma(h)$. For any $(s^1,\dots,s^L)\in S_1\times\dots\times S_L$ and $a^\ell\in A_\ell$, where $s^i$ is an endstate of level $i$ for every $i<\ell$, let $e = (s^\ell,a^\ell,s^{\ell-1})$.
According to the event in Lemma A.1, we have
$$Q^k_h(s^1,\dots,s^L;a^\ell) = r(s^1,\dots,s^L;a^\ell) + \big[\tilde{P}^1_{k,0}\otimes\dots\otimes\tilde{P}^{\ell-1}_{k,0}\otimes\tilde{P}_k(\cdot\mid e)\big]V^k_{h+1}(\cdot,\dots,\cdot,s^{\ell+1},\dots,s^L) + b(k,h,\ell,e)$$
$$\ge r(s^1,\dots,s^L;a^\ell) + \big[P^1_{k,0}\otimes\dots\otimes P^{\ell-1}_{k,0}\otimes P_k(\cdot\mid e)\big]V^k_{h+1}(\cdot,\dots,\cdot,s^{\ell+1},\dots,s^L)$$
$$\ge r(s^1,\dots,s^L;a^\ell) + \big[P^1_{k,0}\otimes\dots\otimes P^{\ell-1}_{k,0}\otimes P_k(\cdot\mid e)\big]V^*_{h+1}(\cdot,\dots,\cdot,s^{\ell+1},\dots,s^L) = Q^*_h[k,a^\ell],$$
where the first inequality uses the event in Lemma A.1, and the last inequality uses the induction hypothesis $V^k_{h+1}\ge V^*_{h+1}$ together with the fact that $P^1_{k,0}\otimes\dots\otimes P^{\ell-1}_{k,0}\otimes P_k(\cdot\mid e)$ is a nonnegative operator. Therefore,
$$V^k_h(s^1,\dots,s^L) = \min\Big\{\max_{a^\ell\in A_\ell}Q^k_h(s^1,\dots,s^L;a^\ell),\,H\Big\} \ge \min\Big\{\max_{a^\ell\in A_\ell}Q^*_h(s^1,\dots,s^L;a^\ell),\,H\Big\} = V^*_h(s^1,\dots,s^L).$$
This shows the claim holds at step $h$, which completes the induction. Hence the lemma holds for every $1\le h\le H$.

Equipped with these two lemmas, we are ready to prove Theorem 4.1.

Proof. Suppose the event in Lemma A.1 holds (which happens with probability at least $1-\delta$). Then we can decompose the value difference as
$$V^k_h[k] - V^{\pi_k}_h[k] = Q^k_h[k,a^\ell_h] - Q^{\pi_k}_h[k,a^\ell_h]$$
$$= \big[\tilde{P}^1_{k,0}\otimes\dots\otimes\tilde{P}^{\ell-1}_{k,0}\otimes\tilde{P}_k(\cdot\mid e_{k,h})\big]V^k_{h+1}[k]^{\otimes\ell} + b(k,h,\ell,e_{k,h}) - \big[P^1_{k,0}\otimes\dots\otimes P^{\ell-1}_{k,0}\otimes P_k(\cdot\mid e_{k,h})\big]V^{\pi_k}_{h+1}[k]^{\otimes\ell}$$
$$= \big[P^1_{k,0}\otimes\dots\otimes P^{\ell-1}_{k,0}\otimes P_k(\cdot\mid e_{k,h})\big]\big(V^k_{h+1}[k]^{\otimes\ell} - V^{\pi_k}_{h+1}[k]^{\otimes\ell}\big) + b(k,h,\ell,e_{k,h}) + \big[\tilde{P}^1_{k,0}\otimes\dots\otimes\tilde{P}^{\ell-1}_{k,0}\otimes\tilde{P}_k(\cdot\mid e_{k,h}) - P^1_{k,0}\otimes\dots\otimes P^{\ell-1}_{k,0}\otimes P_k(\cdot\mid e_{k,h})\big]V^k_{h+1}[k]^{\otimes\ell}$$
$$= \big(V^k_{h+1}[k] - V^{\pi_k}_{h+1}[k]\big) + \xi_{h+1,k} + b(k,h,\ell,e_{k,h}) + \big[\tilde{P}^1_{k,0}\otimes\dots\otimes\tilde{P}^{\ell-1}_{k,0}\otimes\tilde{P}_k(\cdot\mid e_{k,h}) - P^1_{k,0}\otimes\dots\otimes P^{\ell-1}_{k,0}\otimes P_k(\cdot\mid e_{k,h})\big]V^k_{h+1}[k]^{\otimes\ell}$$
$$\le \big(V^k_{h+1}[k] - V^{\pi_k}_{h+1}[k]\big) + \xi_{h+1,k} + 2b(k,h,\ell,e_{k,h}),$$
where the last inequality uses Lemma A.1, and we define $\xi_{h+1,k}$ as
$$\xi_{h+1,k} = \big[P^1_{k,0}\otimes\dots\otimes P^{\ell-1}_{k,0}\otimes P_k(\cdot\mid e_{k,h})\big]\big(V^k_{h+1}[k]^{\otimes\ell} - V^{\pi_k}_{h+1}[k]^{\otimes\ell}\big) - \big(V^k_{h+1}[k] - V^{\pi_k}_{h+1}[k]\big).$$
Summing this inequality over $1\le h\le H$ and noticing that $V^k_{H+1} = V^{\pi_k}_{H+1} = 0$, we get
$$V^k_1[k] - V^{\pi_k}_1[k] \le \sum_{h=1}^{H}\xi_{h+1,k} + 2\sum_{h=1}^{H}b(k,h,\sigma(h),e_{k,h}),$$
which implies
$$\sum_{k=1}^{K}V^*_1[k] - V^{\pi_k}_1[k] \le \sum_{k=1}^{K}V^k_1[k] - V^{\pi_k}_1[k] \le \sum_{k=1}^{K}\sum_{h=1}^{H}\xi_{h+1,k} + 2\sum_{k=1}^{K}\sum_{h=1}^{H}b(k,h,\sigma(h),e_{k,h}). \qquad (11)$$
As for the first term above, it is easy to see that $\xi_{h+1,k}$ is a martingale difference sequence with respect to $(h,k)$. Since every $\xi_{h,k}$ is bounded by $H = H_1\cdots H_L$, by the Azuma-Hoeffding inequality we have
$$\Big|\sum_{k=1}^{K}\sum_{h=1}^{H}\xi_{h+1,k}\Big| \le 4H\sqrt{HK\log\delta^{-1}} \qquad (12)$$
with probability at least $1-\delta$.

Next, we analyze the second term in (11). According to formula (10), we have
$$\sum_{k=1}^{K}\sum_{h=1}^{H}b(k,h,\sigma(h),e_{k,h}) = H\sum_{k=1}^{K}\sum_{h=1}^{H}\min\Bigg\{1,\sqrt{\frac{8\big(|S_{\sigma(h)}|+\log(4L^2|S_{\sigma(h)}||E_{\sigma(h)}|k^2/\delta)\big)}{n(k-1,e_{k,h})}}\Bigg\} + H\sum_{k=1}^{K}\sum_{h=1}^{H}\sum_{i=1}^{\sigma(h)-1}\min\Bigg\{1,\sqrt{\frac{8\big(|S_i|+\log(4L^2|S_i||E_{\sigma(h)}|k^2/\delta)\big)}{(k-1)\tilde{H}_i}}\Bigg\}. \qquad (13)$$
In the first summation, the term involving $\sqrt{1/n(k-1,e)}$ appears $n(k,e)-n(k-1,e)$ times. Moreover, for a given $1\le\ell\le L$, there are $\tilde{H}_{\ell-1}-\tilde{H}_\ell$ choices of $h$ such that $\sigma(h)=\ell$, and there are $|E_\ell|$ choices of the counter $n(k,e)$ with $e\in E_\ell$.
Therefore, we have
$$\sum_{k=1}^{K}\ \sum_{h:\,\sigma(h)=\ell}\min\Bigg\{1,\sqrt{\frac{8\big(|S_\ell|+\log(4L^2|S_\ell||E_\ell|k^2/\delta)\big)}{n(k-1,e_{k,h})}}\Bigg\} = \sum_{k=1}^{K}\sum_{e\in E_\ell}\big(n(k,e)-n(k-1,e)\big)\min\Bigg\{1,\sqrt{\frac{8\big(|S_\ell|+\log(4L^2|S_\ell||E_\ell|k^2/\delta)\big)}{n(k-1,e)}}\Bigg\}$$
$$= \sum_{e\in E_\ell}\ \sum_{k:\,n(k-1,e)\le\tilde{H}_\ell}\big(n(k,e)-n(k-1,e)\big) + \sum_{e\in E_\ell}\ \sum_{k:\,n(k-1,e)>\tilde{H}_\ell}\big(n(k,e)-n(k-1,e)\big)\sqrt{\frac{8\big(|S_\ell|+\log(4L^2|S_\ell||E_\ell|k^2/\delta)\big)}{n(k-1,e)}}$$
$$\le 2\tilde{H}_{\ell-1}|E_\ell| + 2\sum_{e\in E_\ell}\ \sum_{k:\,n(k-1,e)>\tilde{H}_\ell}\big(n(k,e)-n(k-1,e)\big)\sqrt{\frac{8\big(|S_\ell|+\log(4L^2|S_\ell||E_\ell|k^2/\delta)\big)}{n(k,e)}}$$
$$\le 2\tilde{H}_\ell|E_\ell| + \sum_{e\in E_\ell}\sum_{j=1}^{n(K,e)}\sqrt{\frac{8\big(|S_\ell|+\log(4L^2|S_\ell||E_\ell|k^2/\delta)\big)}{j}} \le 2\tilde{H}_\ell|E_\ell| + \sum_{e\in E_\ell}2\sqrt{8\big(|S_\ell|+\log(4L^2|S_\ell||E_\ell|k^2/\delta)\big)}\cdot\sqrt{n(K,e)}$$
$$\le 2\tilde{H}_\ell|E_\ell| + 2\sqrt{8\big(|S_\ell|+\log(4L^2|S_\ell||E_\ell|k^2/\delta)\big)}\cdot\sqrt{|E_\ell|\cdot\textstyle\sum_{e\in E_\ell}n(K,e)} = 2\tilde{H}_\ell|E_\ell| + 2\sqrt{8\big(|S_\ell|+\log(4L^2|S_\ell||E_\ell|k^2/\delta)\big)}\cdot\sqrt{|E_\ell|\cdot K(\tilde{H}_\ell-\tilde{H}_{\ell+1})}$$
$$= \tilde{O}\Big(\tilde{H}_\ell|E_\ell| + \sqrt{K\tilde{H}_\ell\cdot|E_\ell|\,(|S_\ell|+\log\delta^{-1})}\Big),$$
where we repeatedly use the fact that $n(k,e)-n(k-1,e)\le\tilde{H}_\ell$ for any $e\in E_\ell$, and the final equality uses the fact that $\sum_{e\in E_\ell}n(K,e)$ counts all appearances of state-action-endstate tuples of level $\ell$ up to episode $K$, which is $K$ times the number of $h$ such that $\sigma(h)=\ell$.

Therefore, the first term in (13) satisfies
$$H\sum_{k=1}^{K}\sum_{h=1}^{H}\min\Bigg\{1,\sqrt{\frac{8\big(|S_{\sigma(h)}|+\log(4L^2|S_{\sigma(h)}||E_{\sigma(h)}|k^2/\delta)\big)}{n(k-1,e_{k,h})}}\Bigg\} = \sum_{\ell=1}^{L}\tilde{O}\Big(H\tilde{H}_\ell|E_\ell| + H\sqrt{K\tilde{H}_\ell\cdot|E_\ell|\,(|S_\ell|+\log\delta^{-1})}\Big).$$
As for the second term in (13), we have
$$H\sum_{k=1}^{K}\sum_{h=1}^{H}\sum_{i=1}^{\sigma(h)-1}\min\Bigg\{1,\sqrt{\frac{8\big(|S_i|+\log(4L^2|S_i||E_{\sigma(h)}|k^2/\delta)\big)}{(k-1)\tilde{H}_i}}\Bigg\} = H\sum_{k=1}^{K}\sum_{i=1}^{L}\sum_{\ell=i+1}^{L}\ \sum_{h:\,\sigma(h)=\ell}\min\Bigg\{1,\sqrt{\frac{8\big(|S_i|+\log(4L^2|S_i||E_\ell|k^2/\delta)\big)}{(k-1)\tilde{H}_i}}\Bigg\}$$
$$\le H\sum_{k=1}^{K}\sum_{i=1}^{L}\tilde{H}_i\min\Bigg\{1,\sqrt{\frac{8\big(|S_i|+\log(4L^2|S_i|(\sum_{j=1}^{L}|E_j|)k^2/\delta)\big)}{(k-1)\tilde{H}_i}}\Bigg\} \le H\sum_{i=1}^{L}\tilde{H}_i + H\sum_{k=1}^{K}\sum_{i=1}^{L}\tilde{H}_i\sqrt{\frac{8\big(|S_i|+\log(4L^2|S_i|(\sum_{j=1}^{L}|E_j|)k^2/\delta)\big)}{(k-1)\tilde{H}_i}}$$
$$\le \sum_{i=1}^{L}\tilde{O}\Big(H\tilde{H}_i + H\sqrt{K\tilde{H}_i(|S_i|+\log\delta^{-1})}\Big),$$
where in the last inequality we apply the Hölder inequality. Combining the two estimates, we obtain
$$\sum_{k=1}^{K}\sum_{h=1}^{H}b(k,h,\sigma(h),e_{k,h}) \le \sum_{\ell=1}^{L}\tilde{O}\Big(H\tilde{H}_\ell|E_\ell| + H\sqrt{K\tilde{H}_\ell\cdot|E_\ell|\,(|S_\ell|+\log\delta^{-1})}\Big) + \sum_{i=1}^{L}\tilde{O}\Big(H\tilde{H}_i + H\sqrt{K\tilde{H}_i(|S_i|+\log\delta^{-1})}\Big) = \sum_{\ell=1}^{L}\tilde{O}\Big(H\tilde{H}_\ell|E_\ell| + H\sqrt{K\tilde{H}_\ell\cdot|E_\ell|\,(|S_\ell|+\log\delta^{-1})}\Big).$$
Together with (11) and (12), this yields the regret bound
$$R(K) = \sum_{k=1}^{K}V^*_1[k] - V^{\pi_k}_1[k] \le \sum_{\ell=1}^{L}\tilde{O}\Big(H\tilde{H}_\ell|E_\ell| + H\sqrt{K\tilde{H}_\ell\cdot|E_\ell|\,(|S_\ell|+\log\delta^{-1})}\Big) + 4H\sqrt{HK\log\delta^{-1}} = \sum_{\ell=1}^{L}\tilde{O}\Big(H\tilde{H}_\ell|E_\ell| + H\sqrt{K\tilde{H}_\ell\cdot|E_\ell|\,(|S_\ell|+\log\delta^{-1})}\Big)$$
with probability at least $1-2\delta$. This completes the proof of Theorem 4.1.
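The Weissman-type concentration used in the first step of the proof is easy to check empirically; below is a small self-contained simulation, with an arbitrary distribution and sample sizes chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
S, delta = 10, 0.1
p = rng.dirichlet(np.ones(S))          # a fixed distribution on a 10-state space

for n in (100, 1000, 10000):
    counts = rng.multinomial(n, p, size=2000)         # 2000 independent empirical estimates
    l1 = np.abs(counts / n - p).sum(axis=1)           # ||p_hat - p||_1 for each trial
    bound = np.sqrt(8 * (S + np.log(1 / delta)) / n)  # the bound used in Eq. (9)
    print(n, f"violation rate: {(l1 > bound).mean():.4f}", f"bound: {bound:.3f}")
```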
" } ]
2019
null
SP:55f583b190d59af8aaa7bda3c9e44bf5ed7ea96c
[ "This paper uses visual representation learned over monolingual corpora with image annotations, which overcomes the lack of large-scale bilingual sentence-image pairs for multimodal NMT. Their approach enables visual information to be integrated into large-scale text-only NMT. Experiments on four widely used translation datasets show that the proposed approach achieves significant improvements over strong baselines.", "The authors propose to augment NMT with a grounded inventory of images. The intuition is clear and the premise is very tempting. The key architectural choice is to allow the transformer to use language embeddings to attend into a topic-image lookup table. The proportion is learned to balance how much signal comes from each source. Figure 4, attempts to investigate the importance of this sharing and its effects on performance." ]
Though visual information has been introduced for enhancing neural machine translation (NMT), its effectiveness strongly relies on the availability of large amounts of bilingual parallel sentence pairs with manual image annotations. In this paper, we present a universal visual representation learned over monolingual corpora with image annotations, which overcomes the lack of large-scale bilingual sentence-image pairs, thereby extending the applicability of images in NMT. In detail, a group of images with topics similar to the source sentence is retrieved from a light topic-image lookup table learned over the existing sentence-image pairs, and then encoded as image representations by a pre-trained ResNet. An attention layer with a gated weighting is used to fuse the visual information and text information as input to the decoder for predicting target translations. In particular, the proposed method enables the visual information to be integrated into large-scale text-only NMT in addition to multimodal NMT. Experiments on four widely used translation datasets, including the WMT'16 English-to-Romanian, WMT'14 English-to-German, WMT'14 English-to-French, and Multi30K, show that the proposed approach achieves significant improvements over strong baselines.
[ { "affiliations": [], "name": "Zhuosheng Zhang" }, { "affiliations": [], "name": "Kehai Chen" }, { "affiliations": [], "name": "Rui Wang" }, { "affiliations": [], "name": "Masao Utiyama" }, { "affiliations": [], "name": "Eiichiro Sumita" }, { "affiliations": [], "name": "Zuchao Li" }, { "affiliations": [], "name": "Hai Zhao" } ]
[ { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "arXiv preprint arXiv:1409.0473,", "year": 2014 }, { "authors": [ "Loı̈c Barrault", "Fethi Bougares", "Lucia Specia", "Chiraag Lala", "Desmond Elliott", "Stella Frank" ], "title": "Findings of the third shared task on multimodal machine translation", "venue": "In Proceedings of the Third Conference on Machine Translation: Shared Task Papers,", "year": 2018 }, { "authors": [ "Ozan Caglayan", "Loı̈c Barrault", "Fethi Bougares" ], "title": "Multimodal attention for neural machine translation", "venue": "arXiv preprint arXiv:1609.03976,", "year": 2016 }, { "authors": [ "Ozan Caglayan", "Walid Aransa", "Adrien Bardet", "Mercedes Garcı́a-Martı́nez", "Fethi Bougares", "Loı̈c Barrault", "Marc Masana", "Luis Herranz", "Joost van de Weijer" ], "title": "Lium-cvc submissions for wmt17 multimodal translation task", "venue": "In Proceedings of the Second Conference on Machine Translation,", "year": 2017 }, { "authors": [ "Ozan Caglayan", "Pranava Swaroop Madhyastha", "Lucia Specia", "Loı̈c Barrault" ], "title": "Probing the need for visual context in multimodal machine translation", "venue": null, "year": 2019 }, { "authors": [ "Iacer Calixto", "Qun Liu" ], "title": "Incorporating global visual features into attention-based neural machine translation", "venue": "In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing,", "year": 2017 }, { "authors": [ "Iacer Calixto", "Desmond Elliott", "Stella Frank" ], "title": "Dcu-uva multimodal mt system report", "venue": "In Proceedings of the First Conference on Machine Translation:", "year": 2016 }, { "authors": [ "Iacer Calixto", "Qun Liu", "Nick Campbell" ], "title": "Doubly-attentive decoder for multi-modal neural machine translation", "venue": "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2017 }, { "authors": [ "Kehai Chen", "Rui Wang", "Masao Utiyama", "Eiichiro Sumita", "Tiejun Zhao" ], "title": "Neural machine translation with sentence-level topic context", "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing,", "year": 2019 }, { "authors": [ "Michael Collins", "Philipp Koehn", "Ivona Kucerova" ], "title": "Clause restructuring for statistical machine translation", "venue": "In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, Ann Arbor, Michigan,", "year": 2005 }, { "authors": [ "Desmond Elliott" ], "title": "Adversarial evaluation of multimodal machine translation", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Desmond Elliott", "Ákos Kádár" ], "title": "Imagination improves multimodal translation", "venue": "In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers),", "year": 2017 }, { "authors": [ "Desmond Elliott", "Stella Frank", "Eva Hasler" ], "title": "Multilingual image description with neural sequence models", "venue": "arXiv preprint arXiv:1510.04709,", "year": 2015 }, { "authors": [ "Desmond Elliott", "Stella Frank", "Khalil Sima’an", "Lucia Specia" ], "title": "Multi30k: Multilingual englishgerman image descriptions", "venue": "In Proceedings of the 5th Workshop on Vision and Language,", "year": 2016 }, { "authors": [ "Desmond Elliott", "Stella Frank", "Loı̈c 
Barrault", "Fethi Bougares", "Lucia Specia" ], "title": "Findings of the second shared task on multimodal machine translation and multilingual image description", "venue": "In Proceedings of the Second Conference on Machine Translation,", "year": 2017 }, { "authors": [ "Marzieh Fadaee", "Arianna Bisazza", "Christof Monz" ], "title": "Data augmentation for low-resource neural machine translation", "venue": "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics,", "year": 2017 }, { "authors": [ "Jonas Gehring", "Michael Auli", "David Grangier", "Yann Dauphin" ], "title": "A convolutional encoder model for neural machine translation", "venue": "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2017 }, { "authors": [ "Stig-Arne Grönroos", "Benoit Huet", "Mikko Kurimo", "Jorma Laaksonen", "Bernard Merialdo", "Phu Pham", "Mats Sjöberg", "Umut Sulubacak", "Jörg Tiedemann", "Raphael Troncy" ], "title": "The memad submission to the wmt18 multimodal translation task", "venue": "In Proceedings of the Third Conference on Machine Translation: Shared Task Papers,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Jindřich Helcl", "Jindřich Libovickỳ", "Dusan Varis" ], "title": "Cuni system for the wmt18 multimodal translation task", "venue": "In Proceedings of the Third Conference on Machine Translation: Shared Task Papers,", "year": 2018 }, { "authors": [ "Po-Yao Huang", "Frederick Liu", "Sz-Rung Shiang", "Jean Oh", "Chris Dyer" ], "title": "Attention-based multimodal neural machine translation", "venue": "In Proceedings of the First Conference on Machine Translation:", "year": 2016 }, { "authors": [ "Julia Ive", "Pranava Madhyastha", "Lucia Specia" ], "title": "Distilling translations with visual awareness", "venue": "arXiv preprint arXiv:1906.07701,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Guillaume Lample", "Myle Ott", "Alexis Conneau", "Ludovic Denoyer" ], "title": "Phrase-based & neural unsupervised machine translation", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Jason Lee", "Elman Mansimov", "Kyunghyun Cho" ], "title": "Deterministic -autoregressive neural sequence modeling by iterative refinement", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Zuchao Li", "Rui Wang", "Kehai Chen", "Masao Utiyama", "Eiichiro Sumita", "Zhuosheng Zhang", "Hai Zhao" ], "title": "Explicit sentence compression for neural machine translation", "venue": "In Proceedings of the ThirtyFourth AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Zuchao Li", "Rui Wang", "Kehai Chen", "Masao Utiyama", "Eiichiro Sumita", "Zhuosheng Zhang", "Hai Zhao" ], "title": "Data-dependent gaussian prior objective for language generation", "venue": "In Eighth International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Jindřich Libovickỳ", "Jindřich Helcl" ], "title": "Attention strategies for multi-source 
sequence-to-sequence learning", "venue": "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers),", "year": 2017 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft coco: Common objects in context", "venue": "In European conference on computer vision,", "year": 2014 }, { "authors": [ "Xuezhe Ma", "Chunting Zhou", "Xian Li", "Graham Neubig", "Eduard Hovy" ], "title": "Flowseq: Nonautoregressive conditional sequence generation with generative flow", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Junhua Mao", "Wei Xu", "Yi Yang", "Jiang Wang", "Alan L Yuille" ], "title": "Explain images with multimodal recurrent neural networks", "venue": "arXiv preprint arXiv:1410.1090,", "year": 2014 }, { "authors": [ "Myle Ott", "Sergey Edunov", "Alexei Baevski", "Angela Fan", "Sam Gross", "Nathan Ng", "David Grangier", "Michael Auli" ], "title": "fairseq: A fast, extensible toolkit for sequence modeling", "venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations),", "year": 2019 }, { "authors": [ "Lucia Specia", "Stella Frank", "Khalil Sima’an", "Desmond Elliott" ], "title": "A shared task on multimodal machine translation and crosslingual image description", "venue": "In Proceedings of the First Conference on Machine Translation:", "year": 2016 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Ł ukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Subhashini Venugopalan", "Marcus Rohrbach", "Jeffrey Donahue", "Raymond Mooney", "Trevor Darrell", "Kate Saenko" ], "title": "Sequence to sequence-video to text", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Kelvin Xu", "Jimmy Ba", "Ryan Kiros", "Kyunghyun Cho", "Aaron Courville", "Ruslan Salakhudinov", "Rich Zemel", "Yoshua Bengio" ], "title": "Show, attend and tell: Neural image caption generation with visual attention", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Jingyi Zhang", "Masao Utiyama", "Eiichro Sumita", "Graham Neubig", "Satoshi Nakamura" ], "title": "Nict-naist system for wmt17 multimodal translation task", "venue": "In Proceedings of the Second Conference on Machine Translation,", "year": 2017 }, { "authors": [ "Zhuosheng Zhang", "Yuwei Wu", "Hai Zhao", "Zuchao Li", "Shuailiang Zhang", "Xi Zhou", "Xiang Zhou" ], "title": "Semantics-aware BERT for language understanding", "venue": "In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Zhuosheng Zhang", "Yuwei Wu", "Junru Zhou", "Sufeng Duan", "Hai Zhao", "Rui Wang" ], "title": "SG-Net: Syntax-guided machine reading comprehension", "venue": "In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Chunting Zhou", "Xuezhe Ma", "Junjie Hu", "Graham Neubig" ], "title": "Handling syntactic divergence in low-resource machine translation", "venue": 
"arXiv preprint arXiv:1909.00040,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Visual information has been introduced for neural machine translation in some previous studies (NMT) (Specia et al., 2016; Elliott et al., 2017; Barrault et al., 2018; Ive et al., 2019) though the contribution of images is still an open question (Elliott, 2018; Caglayan et al., 2019). Typically, each bilingual (or multilingual) parallel sentence pair is annotated manually by one image describing the content of this sentence pair. The bilingual parallel corpora with manual image annotations are used to train a multimodal NMT model by an end-to-end framework, and results are reported on a specific data set, Multi30K (Calixto & Liu, 2017; Calixto et al., 2017).\nOne strong point of the multimodal NMT model is the ability to use visual information to improve the quality of the target translation. However, the effectiveness heavily relies on the availability of bilingual parallel sentence pairs with manual image annotations, which hinders the image applicability to the NMT. As a result, the visual information is only applied to the translation task over a small and specific multimodal data set Multi30K (Elliott et al., 2016), but not to large-scale text-only NMT (Bahdanau et al., 2014; Gehring et al., 2017; Vaswani et al., 2017) and low-resource\n∗Corresponding author. Zhuosheng Zhang and Zuchao Li were internship research fellows at NICT when this work was done. Hai Zhao was partially supported by Key Projects of National Natural Science Foundation of China (No. U1836222 and No. 61733011). Rui Wang was partially supported by JSPS grant-in-aid for early-career scientists (19K20354): “Unsupervised Neural Machine Translation in Universal Scenarios” and NICT tenure-track researcher startup fund “Toward Intelligent Machine Translation.”\ntext-only NMT (Fadaee et al., 2017; Lample et al., 2018; Ma et al., 2019; Zhou et al., 2019). In addition, because of the high cost of annotation, the content of one bilingual parallel sentence pair is only represented by a single image, which is weak in capturing the diversity of visual information. The current situation of introducing visual information results in a bottleneck in the multimodal NMT and is not feasible for text-only NMT and low-resource NMT.\nIn this paper, we present a universal visual representation (VR) method1 relying only on image-monolingual annotations instead of the existing approach that depends on image-bilingual annotations, thus breaking the bottleneck of using visual information in NMT. In detail, we transform the existing sentence-image pairs into a topic-image lookup table from a small-scale multimodal data set Multi30K. During the training and decoding process, a group of images with a similar topic to the source sentence will be retrieved from the topic-image lookup table learned by the term frequency-inverse document frequency, and thus is encoded as image representations by a pretrained ResNet (He et al., 2016). A simple and effective attention layer is then designed to fuse the image representations and the original source sentence representations as input to the decoder for predicting target translations. In particular, the proposed approach can be easily integrated into the text-only NMT model without annotating large-scale bilingual parallel corpora. 
The proposed method was evaluated on four widely-used translation datasets, including the WMT'16 English-to-Romanian, WMT'14 English-to-German, WMT'14 English-to-French, and Multi30K, which are standard corpora for NMT and multimodal machine translation (MMT) evaluation. Experiments and analyses show the effectiveness of the proposed method. In summary, our contributions are primarily three-fold:

1. We present a universal visual representation method that overcomes the shortcomings of the bilingual (or multilingual) parallel data with manual image annotations for MMT.

2. The proposed method enables text-only NMT to use the multimodality of visual information without annotating the existing large-scale bilingual parallel data.

3. Experiments on translation tasks of different scales verify the effectiveness and generality of the proposed approach." }, { "heading": "2 RELATED WORK", "text": "Building fine-grained representations with extra knowledge is an essential topic in language modeling (Li et al., 2020a;b; Zhang et al., 2020b;a), among which adopting the visual modality could potentially provide the machine with a more comprehensive perception of the real world. Inspired by studies on the image description generation (IDG) task (Mao et al., 2014; Elliott et al., 2015; Venugopalan et al., 2015; Xu et al., 2015), a new shared task for multimodal machine translation was introduced by the machine translation community (Specia et al., 2016). In particular, the released dataset Multi30K (Elliott et al., 2016) includes 29,000 multilingual (English, German, and French) parallel sentence pairs with image annotations (Elliott et al., 2017; Barrault et al., 2018). Subsequently, there has been a rise in the number of studies (Caglayan et al., 2016; 2017; Calixto et al., 2016; Huang et al., 2016; Libovický & Helcl, 2017; Helcl et al., 2018). For example, Calixto et al. (2017) proposed a doubly-attentive multimodal NMT model that incorporates spatial visual features, improving translation performance. Compared with spatial visual features, Calixto & Liu (2017) further incorporated global image features as words in the source sentence or to enhance the encoder or decoder hidden states. In contrast, some recent studies indicated that the visual modality is either unnecessary (Zhang et al., 2017) or only marginally beneficial (Grönroos et al., 2018). More recently, Ive et al. (2019) showed that visual information is only needed in particular cases, such as for ambiguous words where the textual context is not sufficient.

However, these approaches only center around the small and specific Multi30K dataset to build multimodal NMT models, which hinders the applicability of images to NMT. The reason would be the high cost of image annotations, so that the potential of image information has not been adequately explored. We believe that the capacity of MMT has not yet been excavated sufficiently, and there is still a long way to go before the potential of MMT is fully discovered. In this work, we seek to break this constraint and enable visual information to benefit NMT, especially text-only NMT.

1 The code is publicly available at https://github.com/cooelf/UVR-NMT." }, { "heading": "3 UNIVERSAL VISUAL RETRIEVAL", "text": "Algorithm 1 Topic-image Lookup Table Conversion Algorithm
Require: Input sentences $S = \{X_1, X_2, \dots, X_I\}$ and paired images $E = \{e_1, e_2, \dots, e_I\}$
Ensure: Topic-image lookup table $Q$, in which each topic word is associated with a group of images
1: Obtain the TF-IDF dictionary $F$ = TF-IDF($S$)
2: Transform the sentence-image pairs into the topic-image lookup table $Q$ = LookUp($S$, $E$, $F$)
3: procedure TF-IDF($S$)
4:   for each sentence in $S$ do
5:     Filter stopwords in the sentence
6:     Calculate the TF-IDF weight for each word
7:   end for
8:   return TF-IDF dictionary $F$
9: end procedure
10: procedure LOOKUP($S$, $E$, $F$)
11:   for each pair $\{T_i, e_i\} \in$ zip$\{S, E\}$ do
12:     Rank and pick out the top-$w$ "topic" words in the sentence according to the TF-IDF scores in the dictionary $F$; each sentence is reformed as $T = \{t_1, t_2, \dots, t_w\}$
13:     Pair the $w$ words with the corresponding image $e_i$
14:     for each word $t_j$ in $T$ do
15:       if $e_i$ not in $Q[t_j]$ then
16:         Add $e_i$ to the image set $Q[t_j]$ for word $t_j$
17:       end if
18:     end for
19:   end for
20:   return Topic-image lookup table $Q$
21: end procedure

In this section, we introduce the proposed universal visual retrieval method. Generally, the default input setting of MMT is a sentence-image pair. Our basic intuition is to transform the existing sentence-image pairs into a topic-image lookup table2, under the assumption that the topic words in a sentence should be relevant to the paired image. Consequently, a sentence can be associated with a group of images by querying the topic-image lookup table.

Topic-image Lookup Table Conversion. To focus on the major part of the sentence and suppress noise such as stopwords and low-frequency words, we design a filtering method that extracts the "topic" words of a sentence through term frequency-inverse document frequency (TF-IDF)3, inspired by Chen et al. (2019). Specifically, given an original input sentence $X = \{x_1, x_2, \dots, x_I\}$ of length $I$ and its paired image $e$, $X$ is first filtered by a stopword list4, and the filtered sentence is treated as a document $g$. We then compute the TF-IDF weight $TI_{i,j}$ for each word $x_i$ in $g$:
$$TI_{i,j} = \frac{o_{i,j}}{\sum_k o_{k,j}} \times \log\frac{|G|}{1 + |\{j : x_i \in g\}|}, \qquad (1)$$
where $o_{i,j}$ represents the number of occurrences of word $x_i$ in the input sentence $g$, $|G|$ the total number of source-language sentences in the training data, and $|\{j : x_i \in g\}|$ the number of source sentences in the training data that include word $x_i$. We then select the top-$w$ high-TF-IDF words as the new image description $T = \{t_1, t_2, \dots, t_w\}$ for the input sentence $X$. After preprocessing, each filtered sentence $T$ is paired with an image $e$, and each word $t_i \in T$ is regarded as a topic word for image $e$. After processing the whole corpus (i.e., Multi30K), we form a topic-image lookup table $Q$ as described in Algorithm 1, in which each topic word $t_i$ is paired with dozens of images.

Image Retrieval. For an input sentence, we first obtain its topic words according to the preprocessing described above. Then we retrieve the associated images for each topic word from the lookup table $Q$ and group all the retrieved images together to form an image list $G$. We observe that an image might be associated with multiple topic words, so it may occur multiple times in the list $G$. Therefore, we sort the images according to their frequency of occurrence in $G$ so as to maintain the total number of images for each sentence at $m$.

2 We use the training set of the Multi30K dataset to build the topic-image lookup table.
3 We describe our method by regarding the processing unit as a word, though the method can also be applied to a subword-based sentence, in which case the subword is considered to be the processing unit.
4 https://github.com/stopwords-iso/stopwords-en
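To make the construction concrete, here is a minimal Python sketch of the table conversion and retrieval described above. The stopword list, tokenization, and image ids are simplified stand-ins for the resources used in the paper.

```python
import math
from collections import Counter, defaultdict

STOPWORDS = {"a", "an", "the", "is", "are", "in", "on", "of", "and", "with"}  # stand-in list

def topic_words(tokens, df, n_docs, w=8):
    """Top-w topic words of one filtered sentence by the TF-IDF score of Eq. (1)."""
    tf = Counter(tokens)
    score = {t: tf[t] / sum(tf.values()) * math.log(n_docs / (1 + df.get(t, 0))) for t in tf}
    return sorted(score, key=score.get, reverse=True)[:w]

def build_lookup_table(corpus, image_ids, w=8):
    """Algorithm 1: map each topic word to the set of images it co-occurs with."""
    docs = [[t for t in s.lower().split() if t not in STOPWORDS] for s in corpus]
    df = Counter(t for d in docs for t in set(d))   # document frequency per word
    table = defaultdict(set)
    for tokens, img in zip(docs, image_ids):
        for t in topic_words(tokens, df, len(docs), w):
            table[t].add(img)
    return table, df, len(docs)

def retrieve_images(sentence, table, df, n_docs, w=8, m=5):
    """Group the images of all topic words, then keep the m most frequent ones."""
    tokens = [t for t in sentence.lower().split() if t not in STOPWORDS]
    pool = Counter()
    for t in topic_words(tokens, df, n_docs, w):
        pool.update(table.get(t, ()))
    return [img for img, _ in pool.most_common(m)]

# Toy usage with hypothetical image ids:
corpus = ["a dog runs on the grass", "two dogs play with a ball", "a man rides a bike"]
table, df, n = build_lookup_table(corpus, image_ids=["img1", "img2", "img3"])
print(retrieve_images("a dog plays with a ball", table, df, n))
```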
Figure 1 illustrates the retrieval process5. In the left block, we show six examples of sentence-image pairs, in which the topic words are in boldface. We process the corpus using the topic-image transformation method demonstrated above and obtain the topic-image lookup table. For example, the word dog is associated with 1,512 images. For an input source sentence, we obtain its topic words (in boldface) using the same preprocessing. Then we retrieve the corresponding images from the lookup table for each topic word. This yields a list of images, in which some images appear multiple times because they are associated with several topics (like the boxed image in Figure 1). We therefore sort the retrieved image list by the count of occurrences to pick out the top-m images that cover the most topics of the sentence.

At test time, the process of obtaining images uses the image lookup table built from the training set, so we do not need the images from the dev and test sets of the Multi30K dataset6. Intuitively, we do not strictly require a manual alignment of words (or concepts) and images, but rely on the co-occurrence of topic words and images, which is simpler and more general. For this reason, we call our method universal visual retrieval.

5 More examples are provided in Appendix A.1.
6 The lookup table can be easily adapted to a wide range of other NLP tasks even without any paired images, and therefore opens our proposed model to generalization." }, { "heading": "4 NMT WITH UNIVERSAL VISUAL REPRESENTATION", "text": "In this section, we introduce the proposed universal visual representation (VR) method for NMT. An overview of the framework of our proposed method is shown in Figure 2." }, { "heading": "4.1 SOURCE REPRESENTATION FOR NEURAL MACHINE TRANSLATION", "text": "In state-of-the-art Transformer-based NMT (Vaswani et al., 2017), source information is encoded as a source representation by an SAN-based encoder with multiple layers. Specifically, the encoder is composed of a stack of L identical layers, each of which includes two sub-layers. The first sub-layer is a self-attention module, whereas the second is a position-wise, fully connected feed-forward network. A residual connection (He et al., 2016) is applied between the two sub-layers, and then layer normalization (Ba et al., 2016) is performed. Formally, the stack learning the source representation is organized as follows:
$$\hat{H}_l = \mathrm{LN}\big(\mathrm{ATT}_l(Q_{l-1}, K_{l-1}, V_{l-1}) + H_{l-1}\big), \qquad H_l = \mathrm{LN}\big(\mathrm{FFN}_l(\hat{H}_l) + \hat{H}_l\big), \qquad (2)$$
where $\mathrm{ATT}_l(\cdot)$, $\mathrm{LN}(\cdot)$, and $\mathrm{FFN}_l(\cdot)$ are the attention module, layer normalization, and feed-forward network for the $l$-th identical layer, respectively. $\{Q_{l-1}, K_{l-1}, V_{l-1}\}$ are the query, key, and value vectors transformed from the $(l-1)$-th layer output $H_{l-1}$; for example, $\{Q_0, K_0, V_0\}$ are packed from the summation $H_0$ of the positional embeddings and word embeddings. Finally, the output of the stack of $L$ identical layers, $H_L$, is the final source sentence representation." }, { "heading": "4.2 AGGREGATION FOR TEXT AND IMAGE REPRESENTATIONS", "text": "After retrieval as described in Section 3, each original sentence $X = \{x_1, x_2, \dots, x_I\}$ is paired with $m$ images $E = \{e_1, e_2, \dots, e_m\}$ retrieved from the topic-image lookup table $Q$. First, the source sentence $X$ is fed into the encoder (Eq. 2) to learn the source sentence representation $H_L$. Second, the images $E$ are the inputs to a pre-trained ResNet (He et al., 2016) followed by a feed-forward layer, producing the source image representation $M \in \mathbb{R}^{m \times 2048}$. Then, we apply an attention mechanism7 to append the image representation to the text representation:
$$\mathcal{H} = \mathrm{ATT}_M(H_L, K_M, V_M), \qquad (3)$$
where $\{K_M, V_M\}$ are packed from the learned source image representation $M$.

Intuitively, NMT aims to produce a target word sequence with the same meaning as the source sentence, rather than a group of images; the image information may thus play only an auxiliary role during translation prediction. Therefore, we compute $\lambda \in [0, 1]$ to weight the expected importance of the source image representation for each source word:
$$\lambda = \mathrm{sigmoid}(W_\lambda \mathcal{H} + U_\lambda H_L), \qquad (4)$$
where $W_\lambda$ and $U_\lambda$ are model parameters. We then fuse $H_L$ and $\mathcal{H}$ to learn an effective source representation
$$\overline{H} = H_L + \lambda \mathcal{H}, \qquad (5)$$
which is fed to the decoder to learn a time-dependent context vector for predicting the target translation. Note that there is a single aggregation layer fusing image and text information.

7 We use a single head here for simplicity.
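A minimal PyTorch-style sketch of this single aggregation layer (Eqs. 3-5) is given below. The single-head scaled dot-product attention, the elementwise gate, and the tensor shapes are assumptions for illustration rather than the released implementation.

```python
import torch
import torch.nn as nn

class GatedImageAggregation(nn.Module):
    """Single-head attention over image features plus a learned gate (Eqs. 3-5)."""
    def __init__(self, d_model=512, d_img=2048):
        super().__init__()
        self.img_proj = nn.Linear(d_img, d_model)                 # feed-forward layer on ResNet features
        self.w_lambda = nn.Linear(d_model, d_model, bias=False)   # W_lambda
        self.u_lambda = nn.Linear(d_model, d_model, bias=False)   # U_lambda

    def forward(self, H_L, img_feats):
        # H_L: (batch, src_len, d_model) text encoding; img_feats: (batch, m, 2048) pooled ResNet
        M = self.img_proj(img_feats)                              # keys/values from images
        scores = torch.matmul(H_L, M.transpose(1, 2))             # (batch, src_len, m)
        attn = torch.softmax(scores / M.size(-1) ** 0.5, dim=-1)
        H_img = torch.matmul(attn, M)                             # Eq. (3): ATT_M(H_L, K_M, V_M)
        lam = torch.sigmoid(self.w_lambda(H_img) + self.u_lambda(H_L))  # Eq. (4)
        return H_L + lam * H_img                                  # Eq. (5)

# Usage: fuse a batch of 2 sentences (length 10) with m = 5 retrieved images each.
layer = GatedImageAggregation()
fused = layer(torch.randn(2, 10, 512), torch.randn(2, 5, 2048))
print(fused.shape)  # torch.Size([2, 10, 512])
```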
" }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 DATA", "text": "The proposed method was evaluated on four widely-used translation datasets: WMT'16 English-to-Romanian (EN-RO), WMT'14 English-to-German (EN-DE), WMT'14 English-to-French (EN-FR), and Multi30K, which are standard corpora for NMT and MMT evaluation.

1) For the EN-RO task, we experimented with the officially provided parallel corpus: Europarl v7 and SETIMES2 from WMT'16, with 0.6M sentence pairs. We used newsdev2016 as the dev set and newstest2016 as the test set.

2) For the EN-DE translation task, 4.43M bilingual sentence pairs of the WMT'14 dataset were used as training data, including Common Crawl, News Commentary, and Europarl v7. The newstest2013 and newstest2014 datasets were used as the dev set and test set, respectively.

3) For the EN-FR translation task, 36M bilingual sentence pairs from the WMT'14 dataset were used as training data. Newstest12 and newstest13 were combined for validation, and newstest14 was used as the test set, following the setting of Gehring et al. (2017).

4) The Multi30K dataset contains 29K English→{German, French} parallel sentence pairs with visual annotations. The 1,014 English→{German, French} sentence pairs with visual annotations are used as the dev set. The test sets are test2016 and test2017, with 1,000 pairs each." }, { "heading": "5.2 SYSTEM SETTING", "text": "Image Retrieval Implementation. We used 29,000 sentence-image pairs from Multi30K to build the topic-image lookup table. We segmented the sentences using the same BPE vocabulary as that of each source language. We selected the top-8 (w = 8) high-TF-IDF words, and the default number of images m was set to 5 (a detailed study is presented in Section 6.2). After preprocessing, we had about 3K topic words, associated with a total of 10K images for retrieval. Image features were extracted as the average-pooled features of a pre-trained ResNet50 CNN (He et al., 2016), giving feature vectors $V \in \mathbb{R}^{2048}$.

Baseline. Our baseline was the text-only Transformer (Vaswani et al., 2017). We used six layers for the encoder and the decoder. The number of dimensions of all input and output layers was set to 512 and 1024 for the base and big models, respectively. The inner feed-forward neural network layer was set to 2048. The heads of all multi-head modules were set to eight in both encoder and decoder layers.
For the Multi30K dataset, we further evaluated a multimodal baseline (denoted as MMT) in which each source sentence was paired with its original image. The other settings were the same as for our proposed model.

Model Implementation. The byte pair encoding algorithm was adopted, and the size of the vocabulary was set to 40,000. In each training batch, a set of sentence pairs contained approximately 4096×4 source tokens and 4096×4 target tokens. During training, the value of label smoothing was set to 0.1, and the attention dropout and residual dropout were p = 0.1. We used the Adam optimizer (Kingma & Ba, 2014) to tune the parameters of the model. The learning rate was varied under a warm-up strategy with 8,000 steps. For evaluation, we validated the model at an interval of 1,000 batches on the dev set. For the Multi30K dataset, we trained the model up to 10,000 steps, and training was early-stopped if the dev set BLEU score did not improve for ten epochs. For the EN-DE, EN-RO, and EN-FR tasks, after training for 200,000 batches, the model with the highest BLEU score on the dev set was selected to evaluate the test sets. During decoding, the beam size was set to five. All models were trained and evaluated on a single V100 GPU. Multi-bleu.perl8 was used to compute case-sensitive 4-gram BLEU scores for all test sets. The signtest (Collins et al., 2005) is a standard statistical-significance test. In addition, we followed the model configurations of Vaswani et al. (2017) to train Big models for the WMT EN-RO, EN-DE, and EN-FR translation tasks. All experiments were conducted with fairseq9 (Ott et al., 2019). The analysis in Section 6 is conducted on base models.

[Table 1 (fragment): BLEU and parameter counts on EN-RO, EN-DE, and EN-FR.
System | Architecture | EN-RO BLEU / #Param | EN-DE BLEU / #Param | EN-FR BLEU / #Param
Vaswani et al. (2017) | Trans. (base) | N/A / N/A | 27.3 / N/A | 38.1 / N/A
Vaswani et al. (2017) | Trans. (big) | N/A / N/A | 28.4 / N/A | 41.0 / N/A
Lee et al. (2018) | Trans. (base) | 32.40 / N/A | 24.57 / N/A | N/A / N/A
Our NMT systems | ...]" }, { "heading": "5.3 RESULTS", "text": "Table 1 shows the results for the WMT'14 EN-DE, EN-FR, and WMT'16 EN-RO translation tasks. Our implemented Transformer (base/big) models showed BLEU scores similar to those of the original Transformer (Vaswani et al., 2017), ensuring that the proposed method is evaluated over strong baseline NMT systems. As seen, the proposed +VR significantly outperformed the baseline Transformer (base), demonstrating the effectiveness of modeling visual information for text-only NMT. In particular, the improvement held across the translation tasks of all three language pairs, which have different scales of training data, verifying that the proposed approach is a universal method for improving translation performance.

Our method introduced only 1.5M and 4.0M additional parameters for the base and big Transformers, respectively. This is less than 3% of the baseline parameters, as we used fixed image embeddings from the pre-trained ResNet feature extractor. Besides, the training time was basically the same as that of the baseline model (Section 6.4).

In addition, the proposed method was also evaluated for MMT on the multimodal dataset Multi30K. Results in Table 2 show that our model also outperformed the Transformer baseline. Compared with the results in text-only NMT, we find that the image representation made only a marginal contribution, which is consistent with the findings in previous work (Zhang et al., 2017; Grönroos et al., 2018; Caglayan et al., 2019).
The most plausible reason might be that the sentences in Multi30K are so simple, short, and repetitive that the source text alone is sufficient to perform the translation (Caglayan et al., 2019; Ive et al., 2019). This supports our assumption that the current bottleneck of MMT lies in the limitations of Multi30K, and shows the necessity of our new setting of transferring multimodality into the more standard and mature text-only NMT tasks.

8 https://github.com/moses-smt/mosesdecoder/tree/RELEASE-4.0/scripts/generic/multi-bleu.perl
9 https://github.com/pytorch/fairseq" }, { "heading": "6 ANALYSIS", "text": "" }, { "heading": "6.1 WHY DOES THE LOOKUP TABLE WORK", "text": "The contribution of the lookup table could be twofold: 1) the content connection between the sentences and images; 2) the topic-aware co-occurrence of similar images and sentences. There are cases in which the paired images are not accurately related to the given sentence. A simple solution would be to set a threshold heuristically for the TF-IDF retrieval to filter out "improper" images. However, we keep a fixed number of images in this work because of the second potential benefit of the co-occurrence: the images can serve as diverse topic information. The Distributional Hypothesis (Harris, 1954) states that words that occur in similar contexts tend to have similar meanings; we are inspired to extend this concept to the multimodal world: sentences with similar meanings would be likely to pair with similar or even the same images. Therefore, consistent images (with a related topic) can play the role of topic or type hints for modeling similar sentences.

This is also very similar to the idea of word embedding, taking each image as a "word". Because we use the average-pooled output of ResNet, each image is represented as a 2400-d vector. For all the 29,000 images, we have an embedding layer of size (29000, 2400). The "content" of each image is regarded as the embedding initialization. It indeed has some effect, but the performance does not depend on it heavily. In contrast, the mapping from a text word to its index in the word embedding is critical. Similarly, the mapping from sentences to images in the image embedding would be essential, i.e., similar sentences (with the same topic words) tend to map to the same or similar images.

To verify these hypotheses, we replaced our ResNet features with 1) Shuffle: shuffle the image features but keep the lookup table; 2) Random Init: randomly initialize the image embedding but keep the lookup table; 3) Random Mapping: randomly retrieve unrelated images. The BLEU scores on EN-RO are 33.53, 33.28, and 32.14, respectively. The results of 1)-2) are close to the proposed VR (33.78) and outperform the baseline (32.66), which shows that the content of the images is not very important. Ablation 3) gives a lower result, which verifies the necessity of the mapping, especially the topic relationship." }, { "heading": "6.2 INFLUENCE OF THE NUMBER OF IMAGES", "text": "To evaluate the influence of the number of paired images m, we constrained m to {0, 1, 3, 5, 7, 9, 15, 20, 30} in experiments on the EN-RO test set, as shown in Figure 4. When m = 0, the model is the baseline NMT model, whose BLEU score was lower than that of all the models with images. As the number of images increased, the BLEU score also increased at the beginning (from 32.66 to 33.78) and then slightly decreased when m exceeded 5. The reason might be that too many images for a sentence bring a higher chance of noise.
Therefore, we set m = 5 in our models.

The number of sentence-image pairs used to create the lookup table could also have an effect. We randomly sampled proportions [0.1, 0.3, 0.5, 0.7, 0.9] of the Multi30K pairs; the corresponding BLEU scores for EN-RO are [33.07, 33.44, 34.01, 34.06, 33.80]. Furthermore, we also evaluated the performance when adding external sentence-image pairs from the training set of the MS COCO image captioning dataset (Lin et al., 2014). The BLEU scores are 33.55 and 33.71 for COCO only and Multi30K+COCO, respectively. These results indicate that a modest number of pairs is beneficial." }, { "heading": "6.3 THE INFLUENCE OF GATING WEIGHT λ", "text": "In our model, the weight λ of the gated aggregation method is learned automatically to measure the importance of the visual information. For comparison, we manually set λ to scalar values in {0.1, 0.3, 0.5, 0.7, 0.9} in experiments on the EN-RO test set. Figure 5 shows that all models with a manual λ outperformed the baseline Trans. (base), indicating the usefulness of the image information; however, they were all inferior to our model. This means that the degree of dependency on image information varies for each source sentence, indicating the necessity of automatically learning the gating weights of the image representations." }, { "heading": "6.4 EXTRA COMPUTATION TIME", "text": "There are mainly two extra computation costs in our method: 1) obtaining image data for sentences and 2) learning image representations. Both are negligible compared with training an NMT model. Obtaining image data for the MT sentences of the EN-RO dataset takes less than 1 minute using a GPU. The lookup table is stored as a mapping from token indices (topic words only) to image ids. Retrieval is then implemented as tensor indexing from the sentence token indices (topic words only) to image ids, the same procedure as a word embedding lookup; the retrieved image ids are then sorted by frequency. Learning image representations takes about 2 minutes for all 29,000 images in Multi30K, using 6GB of GPU memory for feature extraction and eight CPU threads for transforming images. The extracted features are stored as an "image embedding layer" of size (29000, 2400) for quick access in the neural network." }, { "heading": "7 CONCLUSION", "text": "This work presents a universal visual representation method for neural machine translation that relies on monolingual image annotations, breaking the heavy dependency on bilingual sentence-image pairs in the current multimodal NMT setting. In particular, this method enables visual information to be applied to large-scale text-only NMT through a topic-image lookup. We hope this work sheds some light on future MMT research. In the future, we will try to apply the proposed method to other tasks." }, { "heading": "A APPENDIX", "text": "A.1 EXAMPLES OF RETRIEVED IMAGES

[Figure: Retrieved images for sentences (WMT).]" } ]
2020
null
SP:4c1ba325175a1a289d7467dc269d20eabc67383c
[ "The paper talks about a recently highlighted problem in word embeddings which is their incapability to represent numerals, especially the out-of-vocabulary numerals. For addressing the problem, they propose a method that induces a finite set of prototype numerals using either self-organizing map or Gaussian Mixture model. Then, each numeral is represented as a weighted average of prototype numeral embeddings. The method also involves squashing large quantities using log function. Finally, the training is performed similar to Skip-gram in word2vec but with the embedding of numerals computed using prototype numerals. ", "The paper proposes a novel method for embedding numerals which can be learned by using neural word embedding learning techniques. The paper motivates the work by reviewing the difficulty of embedding components to represent numerals: OOV in most cases. Their main contribution is the introduction of a method composes numeral embedding by a weighted average of prototype embeddings based on the similarities between the numeral and prototypes. There are two proposed prototypes: SOM and GMM and the similarity functions are an absolute difference and the density function respectively. During the training, the numerals have the proposed embeddings while the others have normal word embeddings. The paper slightly modifies the negative sampling to ensure numerals being sampled. A series of 4 empirical studies have been presented. First, the paper confirms that the proposed method does not negatively affect non-numeral embeddings. And then, the quality of the numeral embeddings are evaluated and compared. The experiments show that the proposed method has better performance on numerical property tests, numeral prediction, and a sequence labeling task." ]
Word embedding is an essential building block for deep learning methods for natural language processing. Although word embedding has been extensively studied over the years, the problem of how to effectively embed numerals, a special subset of words, is still underexplored. Existing word embedding methods do not learn numeral embeddings well because there are an infinite number of numerals and their individual appearances in training corpora are highly scarce. In this paper, we propose two novel numeral embedding methods that can handle the out-of-vocabulary (OOV) problem for numerals. We first induce a finite set of prototype numerals using either a self-organizing map or a Gaussian mixture model. We then represent the embedding of a numeral as a weighted average of the prototype numeral embeddings. Numeral embeddings represented in this manner can be plugged into existing word embedding learning approaches such as skip-gram for training. We evaluated our methods and showed their effectiveness on four intrinsic and extrinsic tasks: word similarity, embedding numeracy, numeral prediction, and sequence labeling.
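As a concrete illustration of the weighted-average construction described in the abstract, here is a minimal Python sketch. The RBF-style similarity over log-squashed values stands in for the paper's GMM-based weighting, and the prototype values and embedding initialization are placeholders rather than the trained model.

```python
import numpy as np

def squash(x):
    """Log-squash large magnitudes while keeping the sign (an assumed squashing form)."""
    return np.sign(x) * np.log1p(np.abs(x))

class NumeralEmbedder:
    """Embed any numeral as a similarity-weighted average of prototype embeddings."""
    def __init__(self, prototypes, dim=50, sigma=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.protos = squash(np.asarray(prototypes, dtype=float))  # e.g. induced by SOM/GMM
        self.proto_emb = rng.normal(size=(len(prototypes), dim))   # trained jointly with skip-gram in the paper
        self.sigma = sigma

    def embed(self, numeral):
        d = self.protos - squash(float(numeral))
        w = np.exp(-0.5 * (d / self.sigma) ** 2)                   # RBF similarity (GMM-like)
        w /= w.sum()
        return w @ self.proto_emb                                  # weighted average of prototypes

# Even an unseen (OOV) numeral such as 123456 gets a well-defined embedding:
emb = NumeralEmbedder(prototypes=[0, 1, 2, 10, 100, 1000, 1e6])
print(emb.embed(123456).shape)  # (50,)
```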
[]
[ { "authors": [ "Yoshua Bengio", "Réjean Ducharme", "Pascal Vincent", "Christian Jauvin" ], "title": "A neural probabilistic language model", "venue": "Journal of machine learning research,", "year": 2003 }, { "authors": [ "Johannes Blömer", "Kathrin Bujna" ], "title": "Simple methods for initializing the em algorithm for gaussian mixture models", "venue": null, "year": 2013 }, { "authors": [ "Elia Bruni", "Nam Khanh Tran", "Marco Baroni" ], "title": "Multimodal distributional semantics", "venue": "J. Artif. Int. Res.,", "year": 2014 }, { "authors": [ "John A Bullinaria", "Joseph P Levy" ], "title": "Extracting semantic representations from word co-occurrence statistics: A computational study", "venue": "Behavior research methods,", "year": 2007 }, { "authors": [ "Chung-Chi Chen", "Hen-Hsen Huang", "Hiroya Takamura", "Hsin-Hsi Chen" ], "title": "Numeracy-600k: Learning numeracy for detecting exaggerated information in market comments", "venue": "In Proceedings of the 57th Conference of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Ronan Collobert", "Jason Weston" ], "title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", "venue": "In Proceedings of the 25th international conference on Machine learning,", "year": 2008 }, { "authors": [ "Marie-Catherine De Marneffe", "Anna N Rafferty", "Christopher D Manning" ], "title": "Finding contradictions in text", "venue": "Proceedings of ACL-08: HLT, pp", "year": 2008 }, { "authors": [ "Stanislas Dehaene" ], "title": "The number sense: How the mind creates mathematics", "venue": "OUP USA,", "year": 2011 }, { "authors": [ "Stanislas Dehaene", "Manuela Piazza", "Philippe Pinel", "Laurent Cohen" ], "title": "Three parietal circuits for number processing", "venue": "Cognitive neuropsychology,", "year": 2003 }, { "authors": [ "Lev Finkelstein", "Evgeniy Gabrilovich", "Yossi Matias", "Ehud Rivlin", "Zach Solan", "Gadi Wolfman", "Eytan Ruppin" ], "title": "Placing search in context: The concept revisited", "venue": "In Proceedings of the 10th International Conference on World Wide Web,", "year": 2001 }, { "authors": [ "Alex Graves" ], "title": "Generating sequences with recurrent neural networks", "venue": "Computer Science,", "year": 2013 }, { "authors": [ "Felix Hill", "Roi Reichart", "Anna Korhonen" ], "title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation", "venue": "CoRR, abs/1408.3456,", "year": 2014 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Ander Intxaurrondo", "Eneko Agirre", "Oier Lopez De Lacalle", "Mihai Surdeanu" ], "title": "Diamonds in the rough: Event extraction from imperfect microblog data", "venue": "In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2015 }, { "authors": [ "Stanislaw Jastrzebski", "Damian Lesniak", "Wojciech Marian Czarnecki" ], "title": "How to evaluate word embeddings? 
on importance of data efficiency and simple supervised tasks", "venue": "CoRR, abs/1702.02170,", "year": 2017 }, { "authors": [ "Teuvo Kohonen" ], "title": "The self-organizing map", "venue": "Proceedings of the IEEE,", "year": 1990 }, { "authors": [ "Rémi Lebret", "Ronan Lebret" ], "title": "Word emdeddings through hellinger PCA", "venue": "CoRR, abs/1312.5542,", "year": 2013 }, { "authors": [ "Iddo Lev", "Bill MacCartney", "Christopher Manning", "Roger Levy" ], "title": "Solving logic puzzles: From robust processing to precise semantics", "venue": "In Proceedings of the 2nd Workshop on Text Meaning and Interpretation,", "year": 2004 }, { "authors": [ "Kevin Lund", "Curt Burgess" ], "title": "Producing high-dimensional semantic spaces from lexical cooccurrence", "venue": "Behavior research methods, instruments, & computers,", "year": 1996 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "Aman Madaan", "Ashish Mittal", "Ganesh Ramakrishnan", "Sunita Sarawagi" ], "title": "Numerical relation extraction with minimal supervision", "venue": "In Thirtieth AAAI Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Tomas Mikolov", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ], "title": "Efficient estimation of word representations in vector space", "venue": "Proceedings of the International Conference on Learning Representations (ICLR", "year": 2013 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "Advances in Neural Information Processing Systems", "year": 2013 }, { "authors": [ "Arindam Mitra", "Chitta Baral" ], "title": "Learning to use formulas to solve simple arithmetic problems", "venue": "In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2016 }, { "authors": [ "Aakanksha Naik", "Abhilasha Ravichander", "Carolyn Rose", "Eduard Hovy" ], "title": "Exploring numeracy in word embeddings", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Andreas Nieder", "Earl K Miller" ], "title": "Coding of cognitive magnitude: Compressed scaling of numerical information in the primate prefrontal cortex", "venue": null, "year": 2003 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP),", "year": 2014 }, { "authors": [ "Douglas LT Rohde", "Laura M Gonnerman", "David C Plaut" ], "title": "An improved model of semantic similarity based on lexical co-occurrence", "venue": "Communications of the ACM,", "year": 2006 }, { "authors": [ "Subhro Roy", "Dan Roth" ], "title": "Solving general arithmetic word problems", "venue": "arXiv preprint arXiv:1608.01413,", "year": 2016 }, { "authors": [ "Subhro Roy", "Tim Vieira", "Dan Roth" ], "title": "Reasoning about quantities in natural language", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2015 }, { "authors": [ "Georgios Spithourakis", "Sebastian Riedel" ], "title": "Numeracy for language models: Evaluating and improving their ability to predict numbers. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2018 }, { "authors": [ "Georgios P. Spithourakis", "Isabelle Augenstein", "Sebastian Riedel" ], "title": "Numerically grounded language models for semantic error correction", "venue": "CoRR, abs/1608.04147,", "year": 2016 }, { "authors": [ "Eric Wallace", "Yizhong Wang", "Sujian Li", "Sameer Singh", "Matt Gardner" ], "title": "Do NLP models know numbers? Probing numeracy in embeddings", "venue": "Empirical Methods in Natural Language Processing,", "year": 2019 }, { "authors": [ "Yan Wang", "Xiaojiang Liu", "Shuming Shi" ], "title": "Deep neural solvers for math word problems", "venue": "Empirical Methods in Natural Language Processing,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Word embeddings, the distributed vector representations of words, have become the essential building block for deep learning approaches to natural language processing (NLP). The quality of pretrained word embeddings has been shown to significantly impact the performance of neural approaches to a variety of NLP tasks. Over the past two decades, significant progress has been made in the development of word embedding techniques (Lund & Burgess, 1996; Bengio et al., 2003; Bullinaria & Levy, 2007; Mikolov et al., 2013b; Pennington et al., 2014). However, existing word embedding methods do not handle numerals adequately and cannot directly encode the numeracy and magnitude of a numeral (Naik et al., 2019). Most methods have a limited vocabulary size and therefore can only represent a small subset of the infinite number of numerals. Furthermore, most numerals have very scarce appearances in training corpora and therefore are more likely to be outof-vocabulary (OOV) compared to non-numerical words. For example, numerals account for 6.15% of all unique tokens in English Wikipedia, but in GloVe Pennington et al. (2014) which is partially trained on Wikipedia, only 3.79% of its vocabulary is numerals. Previous work (Spithourakis et al., 2016) also shows that the numeral OOV problem is even more severe when learning word embeddings from corpora with abundant numerals such as clinical reports. Even if a numeral is included in the vocabulary, its scarcity in the training corpus would negatively impact the learning accuracy of its embedding.\nThe inadequate handling of numerals in existing word embedding methods can be problematic in scenarios where numerals convey critical information. Take the following sentences for example,\n“Jeff is 190, so he should wear size XXL” (190 is a reasonable height for size XXL) “Jeff is 160, so he should wear size XXL” (160 is an unreasonable height for size XXL) “Jeff is 10, so he should wear size XS” (10 is an age instead of a height)\nIf the numerals in the example are OOV or their embeddings are not accurately learned, then it becomes impossible to judge the categories of the numerals or the reasonableness of the sentences.\nIn this paper, we propose two novel methods that can produce reasonable embeddings for any numerals. The key idea is to represent the embedding of a numeral as a weighted average of a small set of prototype number embeddings. The prototype numerals are induced from the training corpus\nusing either a self-organizing map (Kohonen, 1990) or a Gaussian mixture model. The weights are computed based on the differences between the target numeral and the prototype numerals, reflecting the inductive bias that numerals with similar quantities are likely to convey similar semantic information and thus should have similar embeddings. Numeral embeddings represented in this manner can then be plugged into a traditional word embedding method for training. We empirically evaluate our methods on four tasks: word similarity, embedding numeracy, numeral prediction, and sequence labeling. The results show that our methods can produce high-quality embeddings for both numerals and non-numerical words and improve the performance of downstream tasks." }, { "heading": "2 RELATED WORK", "text": "Word Embedding Word embeddings are vector representations of words which carry semantic meanings implicitly and are trained without supervision. Most existing word embedding training methods can be divided into two classes. 
The first class of methods (Lund & Burgess, 1996; Rohde et al., 2006; Bullinaria & Levy, 2007; Lebret & Lebret, 2013) extract word co-occurrence statistics from the training corpus, compute a word-word matrix based on measures such as PPMI, and then apply dimension reduction techniques such as principal component analysis to produce a low-dimensional vector representation for each word. The second class of methods (Bengio et al., 2003; Collobert & Weston, 2008; Mikolov et al., 2013a;b) use a simple neural network to model the relation between a word and its context within a sliding window in the training corpus. GloVe (Pennington et al., 2014) has been proposed as a method that combines the advantages of both classes. All the above methods have a finite vocabulary size and use a ‘UNK’ symbol to represent OOV words. Recent work (Naik et al., 2019) shows that these popular methods do not handle numerals adequately. Wallace et al. (2019) show that existing word embedding methods can encode numeracy implicitly for high-frequency numerals, but the numeracy of the embeddings of OOV numerals is not investigated. Our goal is to design better numeral embedding methods that can be integrated into traditional word embedding methods and handle the OOV problem for numerals.
Numeracy in natural language Numeral understanding has been found important in textual entailment (Lev et al., 2004; De Marneffe et al., 2008; Roy et al., 2015) and information extraction (Intxaurrondo et al., 2015; Madaan et al., 2016), but existing systems often use manually defined task-specific features and logic rules to identify numerals, which are hard to generalize to other tasks. A lot of research has been done on solving math problems, using either manually designed features and rules (Roy et al., 2015; Mitra & Baral, 2016; Roy & Roth, 2016; Upadhyay et al., 2016) or sequence-to-sequence neural networks (Wang et al., 2017), but the quantity of numerals is not important in this task and hence existing methods often replace numerals by dummy symbols such as n1 and n2. Spithourakis & Riedel (2018) studied different strategies to better model numerals in language models. Chen et al. (2019) created the Numeracy-600K dataset and studied the ability of neural network models to learn numeracy. Our work differs from previous work in that we aim to produce general-purpose numeral embeddings that can be employed in any neural NLP approach." }, { "heading": "3 METHODS", "text": "Given a training corpus C, we first extract all the numerals using regular expressions and form a dataset X containing all the numbers represented by these numerals. A number (e.g., 2000) may appear multiple times in X if its corresponding numerals (e.g., ‘2000’, ‘2,000’, etc.) appear multiple times in C. We then induce a finite set P of typical numerals (i.e., prototypes) from X using a self-organizing map (Kohonen, 1990) or a Gaussian mixture model. We also define a function sim(n_1, n_2) outputting the similarity between two arbitrary numbers n_1 and n_2. Now we represent the embedding of any target numeral n as a weighted average of the prototype number embeddings, with the weights computed by the similarity function:
e(n) = ∑_{p∈P} α · sim(n, p) · e(p), where α is chosen such that ∑_{p∈P} α · sim(n, p) = 1 (1)
We use e(·) to denote the embedding of a number; α is the normalization factor. 
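As a concrete illustration of Eq. 1 (a sketch added for clarity, not the authors' code; the array shapes and the sim callable are assumptions), the weighted-average numeral embedding can be computed as:

```python
import numpy as np

def numeral_embedding(n, prototypes, proto_embs, sim):
    """Compute e(n) of Eq. 1: a normalized, similarity-weighted average
    of the prototype embeddings.

    n          : float, the target number
    prototypes : (m,) array of prototype numbers
    proto_embs : (m, D) array with one embedding row per prototype
    sim        : callable sim(n, p) returning a non-negative similarity
    """
    weights = np.array([sim(n, p) for p in prototypes])
    weights = weights / weights.sum()   # alpha makes the weights sum to 1
    return weights @ proto_embs         # (D,) weighted average of the rows
```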
This formulation satisfies the intuition that numerals with similar quantities are likely to convey similar semantic information and thus should have similar embeddings.
Our numeral embeddings can be integrated into traditional word embedding methods such as skip-gram for training. During training, we back-propagate the error gradient to update the prototype number embeddings. In this way, the prototype number embeddings (and hence all the numeral embeddings) are learned jointly with non-numerical word embeddings." }, { "heading": "3.1 SQUASHING NUMBERS TO LOG-SPACE", "text": "Inspired by psychological evidence that our brain compresses large quantities nonlinearly using a logarithmic scale on the mental number line (Nieder & Miller, 2003; Dehaene, 2011), we design the following squashing function to transform all the numbers in X into the log-space before prototype induction. Alternatively, we can apply the function only in the similarity function. Besides the psychological motivation, squashing is also necessary for our methods to avoid overflow during training when there are very large numbers such as 10^15 in the training corpus.
f(x) =
  log(x) + 1, if x > 1
  x, if x ∈ [−1, 1]
  −log(−x) − 1, if x < −1 (2)" }, { "heading": "3.2 PROTOTYPE INDUCTION", "text": "We develop two methods for inducing a small prototype set P from the number dataset X. Denote the number of prototypes by m.
Self-Organizing Map A self-organizing map (SOM) (Kohonen, 1990) is an artificial neural network that can be viewed as a clustering method. After training a SOM on the dataset X, we regard each cluster centroid as a prototype. One advantage of using a SOM in comparison with traditional clustering methods is that it distributes prototypes more evenly on the number line and may assign prototypes to number ranges with few training samples, which we expect would lead to better generalizability.
Gaussian Mixture Model Inspired by psychological studies of the mental number line (Dehaene et al., 2003) and previous work on language modeling (Spithourakis & Riedel, 2018), we train a Gaussian mixture model (GMM) to induce number prototypes. A GMM is defined as follows.
p(U = n) = ∑_{k=1}^{m} P(Z = k) P(U = n | Z = k) = ∑_{k=1}^{m} π_k N(n; µ_k, σ_k^2) (3)
where Z is a latent variable representing the mixture component for the random variable U, N is the probability density function of a normal distribution, and π_k, µ_k, σ_k ∈ R represent the mixing coefficient, mean and standard deviation of the k-th Gaussian component. We train a GMM on the number dataset X using the expectation-maximization (EM) or hard-EM algorithm and regard the means of the learned Gaussian components as our prototypes P = {µ_1, · · · , µ_m}. We use three GMM initialization methods described in Appendix A." }, { "heading": "3.3 SIMILARITY FUNCTION", "text": "For SOM-induced prototypes, we define the following similarity function:
sim(p, n) = |g(p) − g(n)|^{−β}, β > 0, p ∈ P (4)
where the function g is equal to the squashing function f defined in Eq. 2 if we do not apply the log transformation before prototype induction, and is the identity function I otherwise. 
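A minimal sketch of the squashing function of Eq. 2 and the SOM-based similarity of Eq. 4 (illustrative only; the small eps guarding against division by zero when n coincides with a prototype is our assumption, not the paper's):

```python
import numpy as np

def squash(x):
    """Log-space squashing of Eq. 2."""
    if x > 1:
        return np.log(x) + 1
    if x < -1:
        return -np.log(-x) - 1
    return x

def som_similarity(p, n, beta=1.0, use_squash=True, eps=1e-8):
    """Distance-based similarity of Eq. 4; g is the squashing function f
    when the numbers were not squashed before prototype induction."""
    g = squash if use_squash else (lambda v: v)
    return (abs(g(p) - g(n)) + eps) ** (-beta)  # eps (our addition) avoids 1/0
```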
β is a hyper-parameter set to 1.0 by default.
For GMM-induced prototypes, we can naturally use the posterior probability of the component assignment to define the similarity function.
sim(p_k, n) ∝ P(Z = k | U = n) = π_k N(n; µ_k, σ_k^2) / ∑_{k′=1}^{m} π_{k′} N(n; µ_{k′}, σ_{k′}^2), p_k ∈ P (5)" }, { "heading": "3.4 EMBEDDING TRAINING", "text": "We now describe how to integrate our numeral embeddings into traditional word embedding methods for training. We choose skip-gram with negative sampling (Mikolov et al., 2013a;b) as the word embedding method here, but many other word embedding methods such as CBOW (Mikolov et al., 2013a), HAL (Lund & Burgess, 1996) and GloVe (Pennington et al., 2014) can be used as well.
Skip-gram is a word embedding method based on the idea of context word prediction. The training corpus C is regarded as a sequence of words (x_1, . . . , x_T). For a token x_t, we define the preceding and following c tokens as the context of x_t. Skip-gram aims to maximize p(x_{t+j} | x_t) (−c ≤ j ≤ c), the probability of a context word given the center word x_t. To formulate p(x_{t+j} | x_t), skip-gram associates each word x with two vector representations: the input embedding v^i_x for being a center word and the output embedding v^o_x for being a context word. The input and output embeddings of all the words in the vocabulary V constitute matrices E_I ∈ R^{D×|V|} and E_O ∈ R^{D×|V|} respectively, where D is the dimension of the word embeddings. The conditional probability p(x_{t+j} | x_t) is then defined based on the dot product s(x_{t+j} | x_t) = v^i_{x_t}ᵀ v^o_{x_{t+j}}. Negative sampling is used to approximate the normalization factor of the conditional probability.
log p(x_{t+j} | x_t) ≈ log σ(v^o_{x_{t+j}}ᵀ v^i_{x_t}) + ∑_{i=1}^{k} E_{x_i ∼ P_n(x)} [log σ(−v^o_{x_i}ᵀ v^i_{x_t})] (6)
where σ denotes the sigmoid function, and P_n(x) is the negative word sampling distribution used to draw k negative samples.
We modify skip-gram by computing numeral embeddings differently from non-numerical word embeddings. We associate each prototype number with an input embedding and an output embedding. The input and output embeddings of all the prototypes constitute matrices M_I ∈ R^{D×|P|} and M_O ∈ R^{D×|P|} respectively. For any numeral, we can compute its input and output embeddings by taking a weighted average of the prototype input and output embeddings respectively, based on Eq. 1, and use them in exactly the same way as the embeddings of non-numerical words to compute the learning objective (Eq. 6). When drawing negative samples, we first set the ratio of numerals and non-numerical words to their actual ratio in the training corpus, to guarantee a sufficient number of numeral negative samples. Then we sample numerals and non-numerical words separately from their respective distributions in the training corpus raised to the power of 3/4. During training, we optimize the objective function in Eq. 6 by back-propagating the error gradient to update both the non-numerical word embedding matrices E_I, E_O and the prototype number embedding matrices M_I, M_O. In this way, the embeddings of non-numerical words and numerals are learned jointly in the same space. We show an example in Figure 1.
Methods              WS353  MEN    SIM999
SOM                  64.40  71.79  36.09
GMM                  64.90  71.89  36.29
NumAsTok             65.30  71.83  35.85
D-LSTM               63.60  71.82  34.58
Fixed                64.35  72.17  36.27
SG GoogleNews-100B   70.00  74.10  44.20
GloVe Wiki-6B        52.20  73.70  37.10
Table 2: Results on word similarity tasks trained on Wiki-1B. For reference, we also show the results of the official skip-gram and GloVe trained on larger corpora."
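The GMM posterior weights of Eq. 5 and the per-pair negative-sampling term of Eq. 6 could be sketched as follows (an illustration under the stated notation, not the authors' implementation; the use of SciPy is our choice):

```python
import numpy as np
from scipy.stats import norm

def gmm_similarities(n, pis, mus, sigmas):
    """Posterior P(Z = k | U = n) of Eq. 5, used as prototype weights.
    pis, mus, sigmas are (m,) arrays of mixture parameters."""
    dens = pis * norm.pdf(n, loc=mus, scale=sigmas)  # pi_k * N(n; mu_k, sigma_k^2)
    return dens / dens.sum()

def sgns_loss(v_center, v_context, v_negatives):
    """Negative of the skip-gram negative-sampling objective of Eq. 6 for a
    single (center, context) pair; v_negatives holds k sampled output rows."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    pos = np.log(sigmoid(v_context @ v_center))
    neg = np.sum(np.log(sigmoid(-(v_negatives @ v_center))))
    return -(pos + neg)
```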
}, { "heading": "4 EXPERIMENTS AND RESULTS", "text": "We evaluate our methods on four intrinsic and extrinsic tasks: word similarity, embedding numeracy, numeral prediction, and sequence labeling. We report results of our methods based on SOM and GMM separately. We choose the hyper-parameters (e.g., the number of prototypes, GMM initialization and training methods) using validation sets and report the best hyper-parameters for each experiment in Appendix B." }, { "heading": "4.1 BASELINES", "text": "NumAsTok This baseline treats numerals and non-numerical words in the same way, which is very similar to the original skip-gram. The vocabulary includes both high-frequency words and highfrequency numerals. OOV non-numerical words are replaced with symbol UNKword and OOV numerals are replaced with symbol UNKnum. D-LSTM Character-level RNNs are often used to encode OOV words (Graves, 2013). Here we apply an LSTM (Hochreiter & Schmidhuber, 1997) to the digit sequence of a numeral and use the last hidden state of the LSTM as the embedding of the numeral. We use the embedding to compute the skip-gram objective function and propagate the gradients back to update the LSTM. The vocabulary of digits is: {0-9, ‘.’, ‘+’, ‘−’, ‘e’}. Fixed This baseline fixed embeddings for numerals with no training. We define the embedding a numeral with value n as [f(n);1]/Z where f is the squashing function defined in Eq.2, 1 ∈ RD−1 is an all-ones vector, and Z is a constant used to keep the vector norm close to those of non-numerical words and is set to 2×D by default. We compare the vocabularies of different methods in Table 1. Our methods, D-LSTM, and Fixed have finite non-numerical vocabularies but infinite numeral vocabularies. In contrast, the NumAsTok baseline has a finite numeral vocabulary and treats all the OOV numerals as UNKnum." }, { "heading": "4.2 WORD SIMILARITY FOR NON-NUMERICAL WORDS", "text": "To ensure that our methods can still generate high quality embeddings for non-numerical words, we evaluate our trained embeddings on classical intrinsic word similarity tasks, including WordSim353, (Finkelstein et al., 2001), MEN (Bruni et al., 2014) and Simplex-999 (Hill et al., 2014). We train 300-dimensional word embeddings on the 1B Wikipedia dump and set the context window size to 5, the number of negative samples to 5, and the vocabulary size to 3× 105. We use the evaluation tools1 provided by Jastrzebski (Jastrzebski et al., 2017). Note that while the training data contains numerals, the evaluation tasks do not involve numerals and are only designed to evaluate quality of non-numerical word embeddings. The results are shown in Table 2.\nIt can be seen that our methods can achieve scores comparable to those of the baselines. The performance of SG trained on 100B GoogleNews is much better than all the other methods probably because of its much larger training corpus. The results show that adding our numeral embedding methods into skip-gram does not harm the quality of non-numerical word embeddings. Additional results of our methods can be found in Appendix C.\n1https://github.com/kudkudak/word-embeddings-benchmarks" }, { "heading": "4.3 MAGNITUDE AND NUMERATION OF EMBEDDINGS", "text": "Naik et al. (2019) propose a framework for evaluating the ability of numeral embeddings to capture magnitude and numeration. 
Given a target numeral, its embedding is evaluated against a set of numerals using the OVA (One-vs-All), SC (Strict Contrastive) and BC (Broad Contrastive) tests:
• OVA: The embedding vector distance between the target and its nearest neighbor on the number line should be smaller than that between the target and any other numeral in the set.
• SC: The embedding vector distance between the target and its nearest neighbor on the number line should be smaller than that between the target and its second nearest neighbor on the number line.
• BC: The embedding vector distance between the target and its nearest neighbor on the number line should be smaller than that between the target and its furthest neighbor on the number line.
We follow the settings described by Naik et al. (2019): for the magnitude evaluation, we run the tests using a set of 2342 numerals that are most frequent in Wikipedia-1B, whose embeddings are well learned by all the methods; and for the numeration evaluation, we run the tests using 113 English words that represent numbers (e.g., ‘three’, ‘billion’) sampled from the same corpus, and we measure the distance between the target numeral embedding and the word embeddings of these words. We report the accuracy of the various embedding models on these three tests, along with the average rank (denoted as AVGR) of the target numeral’s nearest neighbor among all the candidates based on their vector distances to the target. We use the embeddings trained on Wikipedia-1B.
Table 3 shows the results. The Fixed baseline has the best performance in the magnitude evaluation, which is unsurprising because its numeral embedding vectors explicitly contain the (squashed) magnitude. NumAsTok performs very well in the numeration evaluation, because the number-representing words used in the evaluation are high-frequency words and their embeddings are adequately trained. Except for these two special cases, our methods can be seen to outperform the baselines by a large margin.
Wallace et al. (2019) recently showed that classic embeddings of numerals may contain magnitude information that can be extracted by neural networks. Following their methodology, we conduct two probing tests on our 2342 numerals using multi-layer perceptrons and bilinear functions and then use the resulting models to predict distances between numerals in the OVA, SC, and BC tasks. The results again show the advantage of our methods over the baselines. See Appendix D for details." }, { "heading": "4.4 NUMERAL PREDICTION", "text": "To evaluate the quality of numeral embeddings, we design a new numeral prediction task: choosing the right numeral from a set of candidates given the context of the numeral in a sentence.
We randomly sample 2000 sentences containing numerals from a subset of Wikipedia that is not used in training, with 600 for validation and 1400 for testing. For each sentence, we use the five words preceding and following the target numeral as its context. An example is shown below, where the ten bold words are the context and 2.31 is the target numeral.
In Hollywood, the average household size was [2.31] and the average family size was 3.00.
We use all the 1400 numerals in the test set as the candidates from which one has to select the right numeral for each test sentence. Given the learned word and numeral embeddings, we define two score functions to rank candidate numerals given the context. 
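Before turning to those score functions, the three checks of Section 4.3 above can be sketched for one target numeral as follows (illustrative only; Euclidean embedding distance is an assumption):

```python
import numpy as np

def magnitude_tests(target, others, emb):
    """Run the OVA, SC and BC checks of Section 4.3 for one target numeral.

    target : float; others : list of candidate numbers in the evaluation set
    emb    : callable mapping a number to its embedding vector
    """
    by_line = sorted(others, key=lambda o: abs(o - target))  # number-line order
    d = lambda a, b: np.linalg.norm(emb(a) - emb(b))         # embedding distance
    nearest, second, furthest = by_line[0], by_line[1], by_line[-1]
    ova = all(d(target, nearest) < d(target, o) for o in by_line[1:])
    sc = d(target, nearest) < d(target, second)
    bc = d(target, nearest) < d(target, furthest)
    return ova, sc, bc
```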
Following the skip-gram model, we first define the score of the center numeral n predicting a context word c_j as s(c_j | n) = v^o_{c_j}ᵀ v^i_n, and the score of a context word c_j predicting the center numeral n as s(n | c_j) = v^o_nᵀ v^i_{c_j}. Our first candidate-ranking score function S_A is the sum of log probabilities of the center numeral n predicting each context word c_j. We use a softmax here to calculate the probability.
S_A(n) = ∑_j log p(c_j | n) ≈ ∑_j log [e^{s(c_j | n)} / ∑_{c_k ∈ V_t} e^{s(c_k | n)}] = ∑_j s(c_j | n) − ∑_j log Z(n) (7)
where V_t is the vocabulary of non-numerical words and Z(n) is the normalization factor. The other candidate-ranking score function S_B is the sum of log probabilities of each context word c_j predicting the center numeral n.
S_B(n) = ∑_j log p(n | c_j) ≈ ∑_j log [e^{s(n | c_j)} / ∑_{n_k ∈ V_n} e^{s(n_k | c_j)}] = ∑_j s(n | c_j) − Constant (8)
where V_n is the set of numerals in the dataset. There are a few other possible score functions, but we find that they lead to results similar to S_A and S_B.
We use three metrics to evaluate numeral prediction (Spithourakis & Riedel, 2018). MdAE is the median of the absolute errors between the predicted and true numerals, MdAPE is the median of the absolute percentage errors between the predicted and true numerals, and AVGR is the average rank of the true numeral among the candidates. Detailed formulas of the three metrics are shown in Appendix E.
We train embeddings on Wikipedia-1B and report the evaluation results in the left part of Table 4. Our methods significantly outperform the NumAsTok and Fixed baselines on all three metrics. D-LSTM also performs well but needs more parameters and computing time than our methods.
We also conduct a slightly different numeral prediction task on the recently released Numeracy-600K dataset (the Article Title part) (Chen et al., 2019). This dataset contains 600k sentences with numerals, and in each sentence, one numeral is selected and tagged with its order of magnitude. There are eight possible orders of magnitude, and the goal is to predict the correct one for the target numeral from its context. To solve this multi-class classification problem, we sample 100 numerals for each order of magnitude and use the mean of their numeral embeddings to create a ‘meta’ embedding; we then use these ‘meta’ embeddings to replace the numeral embeddings in the score functions S_A and S_B, and the highest-scoring order of magnitude is returned.
We split the dataset into 450k sentences for training, 50k for validation and 100k for testing. We use micro-F1 and macro-F1 in addition to AVGR as the evaluation metrics. The results, shown in the right part of Table 4, show that our methods achieve much better performance than the baselines." }, { "heading": "4.5 SEQUENCE LABELING ON CUSTOMER SERVICE DATA", "text": "To verify the effectiveness of our methods in practice, we evaluate our methods with a sequence labeling task on a dataset of customer service chat logs from an online apparel shopping website.
This dataset contains a large number of numerals related to height, weight, foot length, etc., and therefore is a good testbed for evaluating numeral embeddings.
The task is to assign a label to each word or numeral in the dataset indicating its information type. We show two examples below:
W  O  H   O  O    O    O     O O          W  H   O    O    O
82 kg 177 cm what size shall I choose     82 177 what size ?
W, H, O are labels representing weight, height and ordinary word respectively. We show the statistics of the dataset in Appendix G. 
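As an aside on Eq. 8 above: its normalizer is identical for every candidate, so candidates can be ranked by summed dot products alone, as in this sketch (array shapes are our assumptions):

```python
import numpy as np

def rank_candidates_sb(context_in_embs, cand_out_embs):
    """Rank candidate numerals with S_B of Eq. 8.

    context_in_embs : (C, D) input embeddings v^i of the context words
    cand_out_embs   : (N, D) output embeddings v^o of the candidate numerals
    """
    scores = cand_out_embs @ context_in_embs.T  # (N, C) pairwise s(n | c_j)
    totals = scores.sum(axis=1)                 # sum over the context words
    return np.argsort(-totals)                  # candidate indices, best first
```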
In order to better evaluate generalizability, we create two additional test sets. The first one is created by ‘augmenting’ the original test set with new sentences containing slightly perturbed numerals. For example, we can create new sentences by replacing ‘177’ in the above example with ‘176’ and ‘178’. The second one contains ‘hard’ sentences from the original test set that do not have explicit cues for label prediction. For example, the first sentence above contains ‘kg’ and ‘cm’, which greatly facilitate the prediction of W and H, but the second sentence above does not contain such cues and hence is a ‘hard’ sentence. More details about the two test sets can be found in Appendix F. Finally, we also test low-resource settings in which only 30% or 10% of the training set is used.
We learn embeddings from the training set using our methods and the baselines and use a validation set to do model selection. We plug the learned embeddings into the Neural-CRF model (Yang & Zhang, 2018)2 to do sequence labeling, without using part-of-speech and character-level features or embedding fine-tuning.
The results are shown in Table 5. Our methods consistently outperform all the baselines on the Accuracy, Recall, and F1 metrics in different configurations. NumAsTok trained with 100% of the training samples has the highest precision on the original and hard test sets, probably because it learns high-quality embeddings for the high-frequency numerals included in its vocabulary; but its recall is lower than that of our methods, most likely because of its numeral OOV problem. Comparing the results on the original and augmented test sets, we see that NumAsTok shows a more significant drop in performance than the other methods, which suggests that NumAsTok does not generalize well because of the numeral OOV problem. In the low-resource settings, the advantage of our methods over the baselines becomes even larger, indicating better generalizability and less annotation required for our methods to achieve a promising performance.
2https://github.com/jiesutd/NCRFpp" }, { "heading": "5 CONCLUSION", "text": "In this paper, we propose two novel numeral embedding methods that represent the embedding of a numeral as a weighted average of a set of prototype numeral embeddings. The methods can be integrated into traditional word embedding approaches such as skip-gram for training. We evaluate our methods on four intrinsic and extrinsic tasks, including word similarity, embedding numeracy, numeral prediction, and sequence labeling, and show that our methods can improve the performance of numeral-related tasks and have better generalizability. Our code and sample data can be found at path/to/code/.
An important future direction is to handle numeral polysemy. For example, the numeral “2019” may denote either a year or an ordinary number. One potential method is to assign a different embedding to each sense of a numeral. In this way, “2019” would have one embedding for representing a year and another for representing an ordinary quantity. The similarity function would treat different senses of a numeral differently. For example, the year sense of “2019” would be similar to the year sense of “19” but dissimilar to the sole sense of “2019.5”, while the quantity sense of “2019” would be similar to that of “2019.5”." }, { "heading": "A GMM INITIALIZATION", "text": "Both EM and hard-EM are sensitive to initialization, and we use the initialization methods described in (Blömer & Bujna, 2013). 
We first initialize the mean µ_k of the k-th Gaussian component using one of the following three strategies:
Random initialization: choose µ_k from X randomly. This is suitable when X contains a wide range of numbers, e.g., numbers collected from Wikipedia.
SOM-based initialization: initialize µ_k to p_k ∈ P produced by the SOM method.
K-means initialization: run randomly initialized k-means on X and then use the k-means centroids to initialize µ_k.
We then assign the data samples to their closest means. The standard deviation of the data samples assigned to the k-th mean becomes σ_k." }, { "heading": "B HYPER-PARAMETERS", "text": "We list all of the important hyper-parameters we tune for each model.
General hyper-parameters: embedding dimension, context window size, SGD learning rate, batch size, vocabulary size, etc.
SOM hyper-parameters: number of prototypes, stage of applying the log-squashing function (stage 1: before prototype induction; stage 2: only in the similarity function).
GMM hyper-parameters: number of prototypes, whether we apply the log-squashing function to the numerals, EM initialization (from SOM, random initialization, or k-means initialization), type of EM (hard-EM or soft-EM).
We show the values of the SOM and GMM hyper-parameters in Table 6 and the values of the general hyper-parameters of all the methods in Table 7. We find that the general hyper-parameters influence the performance of our methods and the baselines in the same way, so in most cases, these hyper-parameters are set to be identical for all the methods. For the large training corpora (Wiki-1B, Numeracy-600K), we use 2048 as the batch size for D-LSTM, because D-LSTM consumes much more GPU memory. We set the batch size of the other methods to 4096. For the sequence labeling tasks, because the data is relatively small and confined to a very specific domain (chat logs from online apparel shops), we set a small vocabulary size of 500 for all the methods except NumAsTok, and set the vocabulary size of NumAsTok to 550 to ensure that different methods have similar numbers of parameters for word embedding training. Consequently, our methods have (500 + |P|) × D parameters for word embedding training and NumAsTok has 550 × D parameters, where P is the prototype set, whose size is typically smaller than 50, and D is the embedding dimension.
Table 6 also shows that the optimal number of prototypes is around 200–500 for the Wiki-1B corpus and 10–25 for the much smaller sequence labeling dataset. As a rule of thumb, we suggest setting the number of prototypes to (log N)^2, where N is the number of distinct numerals in the training corpus." }, { "heading": "C MORE RESULTS ON WIKIPEDIA-1B", "text": "We show the histograms of numerals in the Wikipedia-1B dataset and the prototypes learned by SOM and GMM in Fig. 2. It can be seen that the prototypes induced by our methods have a similar distribution to the original numerals.
We also show some examples of prototypes and their nearest non-numerical words in Table 8. We use the embedding trained by the SOM model with 200 prototypes on Wikipedia-1B, and use the log transformation in the similarity function.
In addition, we select several typical numerals and non-numerical words and project their embeddings to 2D using t-SNE (Maaten & Hinton, 2008) (Figure 3). We use embeddings learned on the Wikipedia-1B corpus using the SOM and GMM methods. 
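The k-means initialization above and the (log N)^2 rule of thumb could be combined as in the following sketch (our illustration; the paper does not prescribe an implementation, and the use of scikit-learn's KMeans is an assumption):

```python
import numpy as np
from sklearn.cluster import KMeans

def init_gmm_params(X, m=None):
    """K-means initialization from Appendix A: means from the k-means
    centroids, sigmas from the samples assigned to each mean."""
    X = np.asarray(X, dtype=float).reshape(-1, 1)
    if m is None:  # (log N)^2 rule of thumb over distinct numerals
        m = max(2, int(np.log(len(np.unique(X))) ** 2))
    km = KMeans(n_clusters=m, n_init=10).fit(X)
    mus = km.cluster_centers_.ravel()
    sigmas = np.array([X[km.labels_ == k].std() + 1e-6 for k in range(m)])
    pis = np.bincount(km.labels_, minlength=m) / len(X)
    return pis, mus, sigmas
```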
The examples and the figures show that our model does capture some semantic relations between numeral quantities and normal words.
We show the training speed of each embedding method on the Wikipedia-1B dataset in Table 9. The batch size is set to 2048 for all the methods. Our methods are slower than NumAsTok but are faster than D-LSTM." }, { "heading": "D PROBING TESTS", "text": "We apply two probing tests using neural networks to our methods and the baselines in order to compare their ability to encode magnitude information in a non-linear way. The first test is Decoding (predicting the numeral value from its embedding using an MLP). The second is Subtraction (predicting the difference between two numerals from their embeddings using MLP or BiLinear functions). We illustrate the tasks and the models we use in Figure 4.
We first create the datasets for the two probing tests based on the dataset from the magnitude evaluation of Section 4.3 (containing 2342 numerals). For Decoding, the dataset can be directly used. For Subtraction, we randomly sample 10^5 pairs of numerals (n_1, n_2) from the dataset and assign n_1 − n_2 as the prediction target. Following Wallace et al. (2019), we randomly split 80% of each dataset for training and 20% for testing. We use SGD to optimize the mean square error (MSE) loss. We report the root-mean-square error (RMSE) results for the two tasks in Table 10.
The results show that our two methods are significantly better than the baselines on Decoding. On Subtraction, they are better than the baselines when using BiLinear, and are comparable to NumAsTok but much better than the other baselines when using MLP. We found that the performance is very sensitive to the neural network architecture and that an MLP with two hidden layers performs best.
We then use the MLP2 models trained on Subtraction to determine the distance between two numerals when conducting the magnitude evaluation of Section 4.3. The results, shown in Table 11, show that our methods have better performance than the baselines overall. One interesting observation is that, although our SOM-based method has a worse RMSE than NumAsTok as shown in Table 10, it outperforms NumAsTok in the magnitude evaluation." }, { "heading": "E NUMERAL PREDICTION EVALUATION METRICS", "text": "We denote the target numeral by n_i, the numeral with the highest ranking score by n̂_i, and the rank of the target numeral by r_i. The error e_i and percentage error pe_i can be calculated as:
e_i = n_i − n̂_i, pe_i = (n_i − n̂_i) / n_i (9)
Then we use the median of the absolute errors, the median of the absolute percentage errors, and the average rank as the evaluation metrics.
MdAE = median{|e_i|}, MdAPE = median{|pe_i|}, AVGR = mean{r_i} (10)" }, { "heading": "F AUGMENTED AND HARD TEST SETS IN SEQUENCE LABELING", "text": "The augmented test set is created by reasonably perturbing the numerals in a sentence. For example, for a numeral ‘173’ that describes height, we generate new samples by changing ‘173’ to ‘174’ or ‘175’ while keeping the other non-numerical words in the sentence unchanged. For a decimal such as ‘1.7 meters’, we change it to ‘1.6’ or ‘1.8’. The perturbation does not change the decimal places of numerals and only changes the quantity slightly, which makes the generated sentences reasonable.
The hard test set is created by manually collecting ‘hard’ samples from the original test set. Hard samples do not have explicit patterns, meaning that a numeral’s tag cannot be easily inferred from its adjacent words. 
For example, the tags of numerals followed by units like ‘cm’, ‘m’, ‘kg’, ‘years’ and ‘feet’ can be figured out easily, so we exclude them from the hard test set. Customers are very likely to use ambiguous expressions like ‘I’m 16.5, can I buy 24?’, where 16.5 refers to a foot length and 24 is the shoe size. These ambiguous sentences are included in the hard test set." }, { "heading": "G STATISTICS OF SEQUENCE LABELING DATASET", "text": "We show the statistics of the customer-service dataset in Table 12. The vocabulary is small because the dataset is confined to a specific domain: online customer service chat logs about apparel purchases. In this dataset, most of the sentences are about the sizes of various kinds of clothes and are very short and ambiguous." }, { "heading": "H SEQUENCE LABELING RESULT WITH STANDARD DEVIATION", "text": "" } ]
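For completeness, the three evaluation metrics of Eqs. 9 and 10 in Appendix E above can be computed as in this short sketch (illustrative only):

```python
import numpy as np

def prediction_metrics(true, pred, ranks):
    """MdAE, MdAPE and AVGR of Eqs. 9-10 over a test set of numerals."""
    true, pred, ranks = map(np.asarray, (true, pred, ranks))
    mdae = np.median(np.abs(true - pred))
    mdape = np.median(np.abs((true - pred) / true))
    avgr = ranks.mean()
    return mdae, mdape, avgr
```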
2019
null
SP:1db20b6170874b3c477e7429d5d6e853680b6e5b
[ "This work uses imitations learning (from synthetic data) to train a deep model which takes a natural language instruction, and a visual representation of a robot's environment, and outputs a trajectory for the robot to follow which executes this instruction. The work focuses on a robotic pick-and-place task, where the instruction indicates which of the available bins an item should be placed in. In addition to the trajectory model, a second model is trained which allows the agent to predict whether a given command is actually feasible (i.e. whether the target bin exists). Empirical results show a reasonably high success rate in placing objects in the bin specified by the instruction, though there is still room for improvement in cases where the shape o a combination of features is important to the selection of the correct bin. ", "The paper addresses the problem of using multiple modalities for learning from demonstration. Approaches that take in task or joint space data to learn a policy for replicating that task are numerous. Doing the same with multiple modalities involved, in particular vision, language and motion, has only been recently considered, so this is a timely paper. " ]
In this work we propose a novel end-to-end imitation learning approach which combines natural language, vision, and motion information to produce an abstract representation of a task, which is in turn used to synthesize specific motion controllers at run-time. This multimodal approach enables generalization to a wide variety of environmental conditions and allows an end-user to direct a robot policy through verbal communication. We empirically validate our approach with an extensive set of simulations and show that it achieves a high task success rate over a variety of conditions while remaining amenable to probabilistic interpretability.
[]
[ { "authors": [ "Heni Ben Amor", "Gerhard Neumann", "Sanket Kamthe", "Oliver Kroemer", "Jan Peters" ], "title": "Interaction primitives for human-robot cooperation tasks", "venue": "In 2014 IEEE international conference on robotics and automation (ICRA),", "year": 2014 }, { "authors": [ "Brenna D Argall", "Sonia Chernova", "Manuela Veloso", "Brett Browning" ], "title": "A survey of robot learning from demonstration", "venue": "Robotics and autonomous systems,", "year": 2009 }, { "authors": [ "Michael Burke", "Svetlin Penkov", "Subramanian Ramamoorthy" ], "title": "From explanation to synthesis: Compositional program induction for learning from demonstration. feb 2019", "venue": null, "year": 2019 }, { "authors": [ "Sylvain Calinon" ], "title": "Robot programming by demonstration", "venue": "EPFL Press,", "year": 2009 }, { "authors": [ "Thomas Cederborg", "Pierre-Yves Oudeyer" ], "title": "From language to motor gavagai: Unified imitation learning of multiple linguistic and nonlinguistic sensorimotor skills", "venue": "IEEE Trans. on Auton. Ment. Dev.,", "year": 2013 }, { "authors": [ "Rawichote Chalodhorn", "David B Grimes", "Keith Grochow", "Rajesh PN Rao" ], "title": "Learning to walk through imitation", "venue": "In IJCAI,", "year": 2007 }, { "authors": [ "Chelsea Finn", "Tianhe Yu", "Tianhao Zhang", "Pieter Abbeel", "Sergey Levine" ], "title": "One-shot visual imitation learning via meta-learning", "venue": "Proceedings of the 1st Annual Conference on Robot Learning,", "year": 2017 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning", "venue": null, "year": 2015 }, { "authors": [ "Guglielmo Gemignani", "Emanuele Bastianelli", "Daniele Nardi" ], "title": "Teaching robots parametrized executable plans through spoken interaction", "venue": "In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "CoRR, abs/1512.03385,", "year": 2015 }, { "authors": [ "Yordan Hristov", "Daniel Angelov", "Michael Burke", "Alex Lascarides", "Subramanian Ramamoorthy" ], "title": "Disentangled Relational Representations for Explaining and Learning from Demonstration", "venue": null, "year": 2019 }, { "authors": [ "Auke Jan Ijspeert", "Jun Nakanishi", "Heiko Hoffmann", "Peter Pastor", "Stefan Schaal" ], "title": "Dynamical movement primitives: learning attractor models for motor behaviors", "venue": "Neural computation,", "year": 2013 }, { "authors": [ "S Mohammad Khansari-Zadeh", "Aude Billard" ], "title": "Learning stable nonlinear dynamical systems with gaussian mixture models", "venue": "IEEE Transactions on Robotics,", "year": 2011 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization. 2015 iclr", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2015 }, { "authors": [ "Guilherme Maeda", "Marco Ewerton", "Rudolf Lioutikov", "Heni Ben Amor", "Jan Peters", "Gerhard Neumann" ], "title": "Learning interaction for collaborative tasks with probabilistic movement primitives", "venue": "In Humanoid Robots (Humanoids),", "year": 2014 }, { "authors": [ "Cetin Mericli", "Steven D. 
Klee", "Jack Paparian", "Manuela Veloso" ], "title": "An interactive approach for situated task specification through verbal instructions", "venue": "In Proceedings of the 2014 International Conference on Autonomous Agents and Multi-agent Systems,", "year": 2014 }, { "authors": [ "Dipendra Misra", "John Langford", "Yoav Artzi" ], "title": "Mapping Instructions and Visual Observations to Actions with Reinforcement Learning, jan 2018", "venue": "URL https://arxiv.org/abs/1704", "year": 2018 }, { "authors": [ "Katharina Mülling", "Jens Kober", "Oliver Kroemer", "Jan Peters" ], "title": "Learning to select and generalize striking movements in robot table tennis", "venue": "The International Journal of Robotics Research,", "year": 2013 }, { "authors": [ "Monica N. Nicolescu", "Maja J. Mataric" ], "title": "Natural methods for robot task learning: Instructive demonstrations, generalization and practice", "venue": "In Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems,", "year": 2003 }, { "authors": [ "Alexandros Paraschos", "Christian Daniel", "Jan R Peters", "Gerhard Neumann" ], "title": "Probabilistic movement primitives", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher D. Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543,", "year": 2014 }, { "authors": [ "Dean A Pomerleau" ], "title": "Alvinn: An autonomous land vehicle in a neural network", "venue": "In Advances in neural information processing systems,", "year": 1989 }, { "authors": [ "Stefan Schaal" ], "title": "Is imitation learning the route to humanoid robots", "venue": "Trends in cognitive sciences,", "year": 1999 }, { "authors": [ "Yuuya Sugita", "Jun Tani" ], "title": "Learning Semantic Combinatoriality from the Interaction between Linguistic and Behavioral Processes", "venue": "Technical report,", "year": 2005 }, { "authors": [ "Jaeyong Sung", "Ian Lenz", "Ashutosh Saxena" ], "title": "Deep Multimodal Embedding: Manipulating Novel Objects with Point-clouds, Language and Trajectories", "venue": null, "year": 2015 }, { "authors": [ "Stefanie Tellex", "Pratiksha Thaker", "Joshua Joseph", "Nicholas Roy" ], "title": "Learning perceptually grounded word meanings from unaligned parallel data", "venue": "Machine Learning,", "year": 2014 }, { "authors": [ "Zichao Yang", "Xiaodong He", "Jianfeng Gao", "Li Deng", "Alexander J. Smola" ], "title": "Stacked attention networks for image question answering", "venue": "CoRR, abs/1511.02274,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "A significant challenge when designing robots to operate in the real world lies in the generation of control policies that can adapt to changing environments. Programming such policies is a labor and time-consuming process which requires substantial technical expertise. Imitation learning (Schaal, 1999), is an appealing methodology that aims at overcoming this challenge – instead of complex programming, the user only provides a set of demonstrations of the intended behavior. These demonstrations are consequently distilled into a robot control policy by learning appropriate parameter settings of the controller. Popular approaches to imitation, such as Dynamic Motor Primitives (DMPs) (Ijspeert et al., 2013) or Gaussian Mixture Regression (GMR) (Calinon, 2009) largely focus on motion as the sole input and output modality, i.e., joint angles, forces or positions. Critical semantic and visual information regarding the task, such as the appearance of the target object or the type of task performed, is not taken into account during training and reproduction. The result is often a limited generalization capability which largely revolves around adaptation to changes in the object position. While imitation learning has been successfully applied to a wide range of tasks including table-tennis Mülling et al. (2013), locomotion Chalodhorn et al. (2007), and human-robot interaction Amor et al. (2014) an important question is how to incorporate language and vision into a differentiable end-to-end system for complex robot control.\nIn this paper, we present an imitation learning approach that combines language, vision, and motion in order to synthesize natural language-conditioned control policies that have strong generalization capabilities while also capturing the semantics of the task. The main rationale of our approach is that a teacher typically provides substantially more information than just the kind of motion to perform. Imagine an athletic trainer that is demonstrating a tennis swing while also verbally explaining the involved steps, the target position, or the speed. As a result of this rich collection of information, the student can develop complex associations between (a) the observed visual features, (b) the demonstrated arm movement, and (c) the provided verbal descriptions. We argue that such a multi-modal teaching approach enables robots to acquire complex policies that generalize to a wide variety of environmental conditions. To this end, we propose a neural network architecture, including several sub-networks, that can be trained in an end-to-end fashion to capture the complex relationships between language, vision, and motion observed in the demonstrations. After training, the network can be provided with a camera image of the current environment and a natural language description of the intended task. The description typically corresponds to verbal commands given by the current user. In turn, the network produces control parameters for a lower-level control policy that can be run on a robot to synthesize the corresponding motion. The hierarchical nature of our approach, i.e., a high-level policy generating the parameters of a lower-level policy, allows for generalization of the trained task to a variety of spatial, visual and contextual changes. Further, the ability to provide\ncommands and instructions to the policy enables easy human-robot interaction through language. 
At execution time, the user can influence the behavior of the robot by simply talking to it. Our main contributions can be summarized as follows:\n• We propose a Multimodal Policy Network (MPN), an approach that fundamentally combines language, vision, and motion control in to a single differentiable neural network that can learn the cross-modal relationships found in the data.\n• We empirically show that our model is capable of generating task-specific robot controllers given demonstrations of a task containing natural language and visual descriptors\nIn order to outline our problem statement, we contrast our approach to Imitation learning (Schaal, 1999) which considers the problem of learning a policy π from a given set of demonstrations D = {d0, ..,dm}. Each demonstration spans a time horizon T and contains information about the robot states and actions, e.g., demonstrated sensor values and control inputs at each time step. Robot states at each time step within a demonstration are denoted by xt. In contrast to other imitation learning approaches, we assume that we have access to the raw camera images of the robot It at teach time step, as well as access to a verbal description of the task in natural language. This description may provide critical information about the context, goals or objects involved in the task and is denoted as s. Given this information, our overall objective is to learn a policy π which imitates the demonstrated behavior, while also capturing semantics and important visual features. After training, we can provide the policy π(s, I) with a different, new state of the robot and a new verbal description (instruction) as parameters. The policy will then generate the control signals needed to perform which take the new visual input and semantic context into account." }, { "heading": "2 BACKGROUND", "text": "A fundamental challenge in imitation learning is the extraction of policies that do not only cover the trained scenarios, but also generalize to a wide range of other situations. A large body of literature has addressed the problem of learning robot motor skills by imitation (Argall et al., 2009). The majority of these approaches focus on learning functional (Ijspeert et al., 2013) or probabilistic representations (Maeda et al., 2014) of motion trajectories. Once such a model is learned, an input state vector is used to adapt the original motion to changes in position, orientation, or force. However, the state vector has to be carefully designed in order to ensure that all necessary information for adaptation is available. Neural approaches to imitation learning Pomerleau (1989) circumvent this problem by learning feature representations that are best suited for the task. Extracting feature information from rich data sources such as natural language and visual data for motion control has an extensive history. The work presented in (Arumugam et al., 2019; Burke et al., 2019; Hristov et al., 2019; Misra et al., 2018) focuses on sequencing manipulation tasks or choosing when to switch skill based on language and/or vision input from the environment. However, these approaches assume that underlying motion primitives are available that actuate the robot in the form of a motion planner or goal-directed controller. Fine grained robot control has been learned from high-level task descriptions in recent work presented by Chang et al. which utilizes robot trajectories from demonstrations by learning a parameterized neural policy from visual perception of the environment. 
While not using natural language to specify the target, this work outlines the importance of combining robot motions with other modalities. The work presented in Sung et al. (2015) combines natural language, point-cloud perceptions of the environment and trajectories into a joint embedding that locates tasks and trajectory representations in close proximity to each other in the latent space. At inference time, a control trajectory is generated by locating the task in the latent space and selecting an appropriate control policy.\nModern variants of this line of research leverage recent progress in training convolutional neural networks in order to train increasingly complex policies from raw (visual) sensor inputs. Building upon the same basic framework, the work in Finn et al. (2017) investigates how meta-learning can be used to learn rapidly adaptable policies. More specifically, meta-learning aims at learning policy parameters that can quickly be fine-tuned to new tasks. While very successful in dealing with visual and spatial information, these approaches do not incorporate any semantic or linguistic component into the learning process. Creating policies that can be conditioned on natural language is one potential pathway to overcome this limitation. Several works have investigated the idea of combining natural language and imitation learning: Nicolescu & Mataric (2003); Gemignani et al. (2015); Cederborg\n& Oudeyer (2013); Mericli et al. (2014); Sugita & Tani (2005). However, many of these approaches assume that either a sufficiently large set of motion primitives is already available or that a taxonomy of the task is available, i.e., language and motion are not trained in conjunction.\nOur work is most closely related to the framework introduced in Tellex et al. (2014), which also focuses on the symbol grounding problem. More specifically, the work in Tellex et al. (2014) aims at mapping perceptual features in the external world to constituents in an expert-provided natural language instruction. Our work approaches the problem of generating dynamic robot policies by fundamentally combining language, vision, and motion control in to a single differentiable neural network that can learn the cross-modal relationships found in the data with minimal human feature engineering. Unlike previous work, our proposed model is capable of directly generating complex low-level control policies from language and vision that reassemble robot motions demonstrated during training." }, { "heading": "3 MULTIMODAL POLICY GENERATION VIA IMITATION", "text": "We motivate our approach with a simple example: consider a binning task in which a robot has to drop an object into one of several differently shaped and colored bowls on a table. A human expert can teach the task to the robot providing a kinesthetic demonstration, i.e., physically maneuvering the robot through the necessary motion trajectory. However, in this example, it is critical to place the object in the correct bowl rather than only reproducing the control trajectories from the demonstrations. To this end, the human demonstrator may provide a verbal command, e.g., “Move towards the blue bowl” during teaching. The trajectory generation would then have to be conditioned on the blue bowl’s position which, however, has to be extracted from visual sensing. Our approach automatically detects and extracts these relationships between vision, language, and motion modalities during learning. 
The result is a neural network representation that integrates all available information in order to make the best use of contextual information for better generalization and disambiguation.\nFigure 1 (left) provides an overview of our method. Our goal is to train a deep neural network that can take as input a task description s and an image I and consequently generate robot controls. In the remainder of this paper, we will refer to our network as the MPN. Rather than immediately producing control signals, the MPN will generate the parameters for a lower-level controller. This distinction allows us to build upon well-established control schemes in robotics and optimal control. In our specific case, we use the widely used Dynamic Motor Primitives (DMPs) (Ijspeert et al., 2013) as a lower-level controller for control signal generation.\nGiven an image and a task description as input, first a so-called semantic network is utilized to combine the information from natural language with the visual perception of the robot in order to produce a joint task embedding. The joint embedding is created by converting words into a sentence embedding, which is in turn concatenated as a fourth channel to the input image. Images are provided to the network as difference images between an empty environment and the current raw camera image, resulting in an image that highlights the objects located in the environment.\nThis step is performed as a simple background subtraction process to improve learning speed. The joint task embedding serves as a robot-independent description of the desired task. The embedding is forwarded to a sub-network, called the Policy Translation network, which synthesizes the parameters needed to fully define a low-level control policy. The resulting parameter vector can be used to execute the DMP and actuate the robot. While the MPN is activated only once per task to yield the DMP parameters, the synthesized low-level controller is continuously utilized at every time step during task execution. The following sections will introduce each part of the MPN in more detail. An in-depth overview of our architecture can be found in Figure 1." }, { "heading": "3.1 SEMANTIC NETWORK", "text": "In order to extract salient information from a natural language sentence, we tokenize the sentence into a vector of words s. The vector s is modified to have length ls; sentences with fewer than ls words are zero-padded and sentences with more than ls words are truncated. Each word is transformed into a lw-dimensional word representation via the pre-trained GloVe model (Pennington et al., 2014) such that we produce a word representation matrix W = fW (s) ∈ Rls×lw. We then extract the relevant n-grams relating to the task at hand through the use of a CNN as in Yang et al. (2015). In this method, the filters of the CNN are used to extract individual n-grams, such that a filter with dimension n × lw produces an n-gram of size n. In order to determine which of these n-grams is relevant, we concatenate all of the convolved feature maps resulting from all filters, mc = [mc,1, mc,2, . . . , mc,ls−c+1], then apply max pooling such that m′c = max1≤i≤ls−c+1 mc,i. The final n-gram representation is built by concatenating the pooled feature maps, s′ = [m′c]c∈C; a minimal sketch of this extractor is shown below.
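For illustration, here is a minimal PyTorch-style sketch of the n-gram extractor just described. The embedding width, filter count, and class name are illustrative assumptions rather than values from the paper; the gram sizes 1, 2, 3 and 5 follow the ablation in Section 4.5.

```python
import torch
import torch.nn as nn

class NGramExtractor(nn.Module):
    """Sketch of the n-gram feature extractor: one Conv2d per gram size,
    ReLU, max pooling over all n-gram positions, then concatenation."""
    def __init__(self, l_w=50, gram_sizes=(1, 2, 3, 5), n_filters=64):
        super().__init__()
        # Each filter spans n words times the full embedding width.
        self.convs = nn.ModuleList(
            nn.Conv2d(1, n_filters, kernel_size=(n, l_w)) for n in gram_sizes
        )

    def forward(self, W):              # W: (batch, l_s, l_w) GloVe word matrix
        x = W.unsqueeze(1)             # add channel dim: (batch, 1, l_s, l_w)
        pooled = []
        for conv in self.convs:
            m = torch.relu(conv(x)).squeeze(3)   # (batch, n_filters, l_s - n + 1)
            pooled.append(m.max(dim=2).values)   # max over n-gram positions
        return torch.cat(pooled, dim=1)          # s': concatenated pooled maps
```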
However, the relationship between the n-grams is still unknown; in contrast to prior work, we leverage this information by further passing the n-gram map s′ through a two-layer fully-connected network: es = ReLU(K1 ReLU(K2s′ + b2) + b1), where Ki and bi represent the kernel and bias for each of the two layers. The process of converting W into es is denoted fL(W ) in Figure 1. We expand the input image I with a fourth channel, composed of the sentence embedding es. To this end, we stack the sentence embedding to match the size of one input channel of the image, e′s = [es, . . . , es]. The resulting image Iin is used as an input for fI(Iin) to generate the task embedding e, which is produced with three blocks of convolutional layers, each composed of two convolutions followed by a residual convolution. The use of residual convolutions as proposed in He et al. (2015) allows the network to utilize possible accuracy gains from increased depth without increasing the complexity of the network significantly, while maintaining the property of being easily optimized. The goal of the image network fI() is to generate a joint task representation from language and environmental perception that can be further utilized to generate low-level policies." }, { "heading": "3.2 POLICY TRANSLATION NETWORK", "text": "The objective of the Policy Translation network is to produce the control parameters for a low-level controller. Hence, it can be seen as a function that maps task embeddings to control parameters. Since in our case the controller is a DMP, we will first formally introduce the basics of this control framework. A DMP is fundamentally a damped spring dynamical system which produces a trajectory of joint configurations, y ∈ Rdr, for dr actuated robot DoFs,\nτ ÿ = αy(βy(g − y) − ẏ) + f(x;Θ), τ ẋ = −αx x, (1)\nattracted to the point g ∈ Rdr according to the phase x, with constant coefficients αy, βy, and αx and the temporal scaling factor τ . The forcing function f determines the shape of the trajectory produced by the dynamical system, which we define as a linear combination of nonlinear Gaussian basis functions Ψ:\nf(x;Θ) = (∑b i=1 Ψi(x)θi) / (∑b i=1 Ψi(x)) · x(g − y0), (2)\nin which Θ ∈ Rdr×b is a set of b weight coefficients for dr DoFs and y0 is the initial state. Most applications of DMPs for imitation learning (Schaal, 1999) directly learn a static set of weights for the forcing function from the demonstration data. At runtime these weights Θ and a goal position can be used to synthesize robot control signals. However, this assumes that a goal position has been generated by some other means, e.g., vision, kinematics, etc. In our approach, both the weight coefficients Θ as well as the goal position g are generated by the Policy Translation network. Given the task embedding e, the policy translation network generates the hyper-parameters Θ ∈ R7×15 and g ∈ R7 for the low-level DMP. The generation of the hyper-parameters is defined as\ng, Θ = fT (e) = fG(ReLU(WGe + bG)), fH(ReLU(WGe + bG)), (3)\nwhere fG() and fH() are multilayer perceptrons that generate g and Θ, respectively, after having processed e in a single perceptron with weight WG and bias bG. One interesting advantage of using DMPs is the fact that we can leverage a large body of research regarding their behavior and stability, while also allowing other extensions of DMPs (Amor et al., 2014; Paraschos et al., 2013; Khansari-Zadeh & Billard, 2011) to be incorporated into our framework."
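To make the dynamics of Eqs. (1) and (2) concrete, the sketch below integrates a DMP rollout with Euler steps. The constants, time step, basis-function placement, and width heuristic are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def dmp_rollout(theta, g, y0, tau=1.0, alpha_y=25.0, beta_y=6.25,
                alpha_x=3.0, dt=0.05, steps=200):
    """Sketch of Eqs. (1)-(2): damped spring system with a learned forcing term.
    theta: (dofs, b) weights; g, y0: (dofs,) goal and start configurations."""
    dofs, b = theta.shape
    # Gaussian basis centers spread along the decaying phase (illustrative).
    c = np.exp(-alpha_x * np.linspace(0.0, 1.0, b))
    h = b / c                                    # illustrative width heuristic
    y, dy, x = y0.astype(float).copy(), np.zeros(dofs), 1.0
    traj = [y.copy()]
    for _ in range(steps):
        psi = np.exp(-h * (x - c) ** 2)          # basis activations Psi_i(x)
        f = (psi @ theta.T) / psi.sum() * x * (g - y0)        # forcing, Eq. (2)
        ddy = (alpha_y * (beta_y * (g - y) - dy) + f) / tau   # Eq. (1)
        dy += ddy * dt
        y += dy * dt
        x += (-alpha_x * x / tau) * dt           # canonical system tau*x_dot = -alpha_x*x
        traj.append(y.copy())
    return np.array(traj)                        # (steps + 1, dofs) joint trajectory
```

Given Θ and g predicted by the Policy Translation network, such a rollout yields the joint-space trajectory that the robot then executes.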
}, { "heading": "3.3 TRAINING", "text": "The MPN, including all of its sub-networks, is trained in an end-to-end fashion and uses a single Adam (Kingma & Ba, 2015) optimizer OL for the entire network. Due to the used Rectified Linear Unit activation throughout the entire network, it does not suffer from the vanishing gradient problem and allows the use of a single, combined loss function, defined as follows:\nLC = λc ∗MSE(T ,J) +MSE(T−1,:,J−1,:) (4)\nwhere MSE() denotes the Mean Squared Error loss function. The goal of the low-level controller is to re-create the shape of each trajectory d during training as well as predict the target joint configuration g. Both of these objectives are summed in the loss function and weighted by λc to maintain an equal contribution of both objectives to the overall loss. To allow the model for better generalization capabilities, we add Dropout at each stage of the network as well as a small amount of random Gaussian noise on the input image and demonstrated trajectory." }, { "heading": "3.4 DETECTION OF INVALID TASKS", "text": "In its current state, the proposed model is forced to act in every possible scenario and language combination presented to the MPN. Situations that are not possible, e.g. moving towards an object that is not present in the current environment, may lead to dangerous behaviour of the robot. In order to address this problem, we extended the current model with an additional 3-layered MLP v = fV (e) that predicts whether or not the requested task is possible. This function is based on the embedding e and performs a binary-classification regarding the validity of the task. This extension requires three additions to the previously described network structure. First, we add an adversary sentence sa to each demonstration d that requests a task that is not possible given the current environment. We also introduce an additional loss LV (v) that calculates the classification capabilities of the entire semantic network fE() and fV (), since the ability to distinguish tasks needs to be propagated through the entire network up until this point. The optimizer utilizes sparse softmax cross entropy for exclusive single class classification. The third addition is an additional optimizer OE that optimizes the embedding network utilizing the following loss:\nLE = LC + λvLV (5)\nWe combine the controller and embedding network losses with an additional weight hyper-parameter λv .The addition of LC to LE is necessary to maintain the ability of the semantic network to generate useful embeddings for the translation network. This is also reflected in the choice of λv which leaves a strong weight onLC . As a last step, the former optimizer is reduced to only optimize the translation network instead of the entire network, as described in the previous section." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we describe our experimental setup and conduct and extensive set of experiments to verify the capabilities of our proposed model. We evaluate our model in a simulated environment on a binning task in which the goal is drop a cube in one of randomly placed multiple bins. An image sequence of the task can be seen in Figure 2.\nData Collection: For data collection, we automatically generate random binning scenarios in which we present the robot with three to five different bowl of different color, shape and size. In total, we\nutilize five colors (yellow, red, green, blue and pink), two sizes (small and large) and two shapes (round and squared). 
This procedure provides us with 20 different objects and the ability to test different levels of ambiguity regarding the number of features needed to uniquely identify an object. As an example, when all four red objects are in the scene, a unique description of an object is only possible when all three features (color, size and shape) are used at the same time to describe the task. In order to generate a larger variety of possible sentences, we conducted an IRB-approved human subject study in which we presented multiple colored objects to participants and asked them to verbally explain how they would interact with these objects in a pick-and-place task. This allowed us to extract multiple sentence templates for going towards objects, as well as multiple synonyms for actions, colors, containers and reference points. In our study, we did not restrict participants in any way, such that each individual was able to choose a description that is most natural to them. In addition, we expanded the list of synonyms by gathering additional words from publicly available synonym databases. The combination of the data from the IRB study and the synonym databases allows us to generate a large variety of natural task explanations for arbitrarily generated scenarios. A detailed description of the template generation can be found in Appendix A. The visual perception of the robot is provided as a top-down image from above the robot, see Figure 2. To use the images in our model, we scale them to a resolution of 96 × 96 × 3, where the three channels refer to the RGB values of each pixel. After generating a task and scenario, kinesthetic demonstrations are generated with a physics-based simulation of a UR5 robot arm, taking into account inertia, weights and other properties of the robot and environment, allowing us to collect realistic movements in the simulated environment. The simulator runs at 20 Hz to collect a reasonable number of samples for each trajectory. In total, we collected over 20,000 generated demonstrations as described above." }, { "heading": "4.1 GENERALIZATION", "text": "In our binning scenario, the robot needs to stop its movement above the specified bowl, within a radius of the bowl’s center, such that the object dropped from the gripper lands inside the bowl. We utilized two different bowl sizes in these experiments, large and small, with a 17.5 cm and 12.5 cm diameter respectively. The object that is to be delivered, a cube, has an edge length of 5 cm. All experiments were conducted by generating new random scenarios with new environments, images, and sentences corresponding to the generated task.\nIn our first experiment, we evaluated the success rate of the object delivery when only a single feature is necessary to identify the target. The results of this set of experiments can be seen in Table 1. Each test was conducted on an equal number of small and large bowls, with 250 attempts each, resulting in 500 attempts for each shown feature. Except for the first row, all features were tested in scenarios with 3 bowls, one being the target and two serving as distractors. Based on the reported success rates, the robot is capable of successfully achieving a task with at least 96% probability, except when the only distinguishing feature is the shape (round or square), in which case the success rate drops to 79%.
This drop in successful task completion is due to the chosen image resolution of the environment, at which distinguishing a small round from a small square bowl is particularly challenging; this case was chosen on purpose to test the limits of our approach.\nThe generated parameters of the low-level DMP controller – the weights and goal position – must be sufficiently accurate in order to successfully deliver the object to the specified bin. A set of weights for the first four dimensions of a DMP controller can be seen in Figure 4b. The figure shows the generated weights for the movement to two different objects, one of which is closer to the robot than the other, as well as being on different sides of the robot. We quantify the accuracy of the parameter generation by computing the Euclidean distance between the ground truth target location and the end effector position of the robot, based on the predicted joint configuration. For this, we generated 6000 positions on a grid that were equally distributed inside the physically reachable workspace of the UR5 robot. The comparison between the end effector position, calculated with forward kinematics, and the target position can be seen in Figure 3b. Within the area used to generate training data, the robot predicts the correct position with well under 5 cm error, which is precise enough for the tested binning task. Additionally, the model is able to accurately generalize to target positions located outside of the training area, with the error increasing as the distance from the training area increases. The proposed addition of classifying whether a requested task is possible was evaluated on random environments with low and high ambiguity regarding the number of separating features. In an environment with low ambiguity, a single feature is enough to tell targets apart, whereas in environments with high ambiguity, multiple features are needed for each object to tell them apart. The results of this test are shown in Table 2. In addition to generalizing to different bowl locations, the model is also capable of generalizing to changes in the verbal task description. This is important when interacting with different users that may describe the same task with different words. In Figure 3a we show the spatial and verbal generalization capability of our model. Since color is a key component of our verbal task descriptions, we expect that the robot is able to generalize to different color shades, something that can be caused by variations in illumination. In this experiment, we changed the colors of our green objects to different shades of green. Additionally, we also shifted the colors towards those of other bowls by increasing the red and/or blue components. We empirically evaluated the MPN’s ability to incorporate these changes in the selection of a target object. An example of the changes we made can be seen in Figure 5. In that scenario the robot chooses the dark green object over an object with an added blue component. However, when tasked with going to the large green object, it moves towards the larger object with the increased blue component. This experiment shows that our network can combine information from multiple modalities to disambiguate a situation, i.e., the greener object is chosen when no size is defined, while the slightly bluer, larger object is chosen when a size is part of the verbal description."
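As an illustration of the grid evaluation described in this section, the sketch below measures goal-position accuracy. The model interface and the forward-kinematics function fk are hypothetical stand-ins introduced only for illustration.

```python
import numpy as np

def goal_position_errors(model, fk, scenarios):
    """Sketch of the evaluation: for each (sentence, image, target) scenario,
    predict the DMP parameters, map the predicted goal configuration to
    Cartesian space with forward kinematics, and measure the Euclidean error."""
    errors = []
    for sentence, image, target_xyz in scenarios:
        theta, g = model(sentence, image)   # predicted DMP weights and goal joints
        errors.append(np.linalg.norm(fk(g) - target_xyz))
    return np.array(errors)
```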
}, { "heading": "4.2 UNCERTAINTY", "text": "We leverage recent theoretical insights in order to generate uncertainties via probabilistic outputs from our trained MPN. In particular, it was shown in Gal & Ghahramani (2015) that neural network learning using the Dropout method is equivalent to a Bayesian approximation of a Gaussian Process modeling the training data. In each of the forward passes, we randomly drop neurons from the network as done in the Dropout algorithm. However, in this case, the neurons are dropped at inference time and not at training time. The generated samples form a possibly complex distribution represented as a set of outputs of the neural network. By analyzing this set we can glean important information about the uncertainty in our networks outputs. Figure 4a shows the application of stochastic forward passes on the predicted goal position in Cartesian space generated by using a forward kinematics on the predicted goal configuration of the robot. As can be seen in the picture, the network is certain about the position of five red objects on the table in five different tasks. The variance of all forward passes is below 5cm, which allows for a successful binning task. However, when only providing a red object in the environment and asking the robot to move to a green object, the uncertainty drastically increases. This can be seen in the green scatter plot, which shows positions of the green bowl over 100 stochastic passes. Based on the distribution of the predicted goals, it becomes apparent that the green object is not available in the current environment. These types of analyses allow for granular decisions regarding task execution success to be made." }, { "heading": "4.3 DYNAMIC ENVIRONMENTS", "text": "It is desirable for robots to be able to cope with dynamically changing environments, particularly when a human is in the loop. In this experiment, we evaluate the robot’s ability to adapt its generated\npolicy to a dynamically changing environment by asking the robot to drop an object in a constantly moving bowl. During data collection and training, the robot was only provided with examples from static environments, such that it was enough to generate a DMP once at the beginning of each interaction. However, to adapt to a changing environment, a new DMP needs to be generated for each time step. Figure 6a shows such a scenario, in which the red bowl is moving on an arc from left to right around the robot by moving 1.5 cm in each step. For this experiment, we utilize the same model as for previous experiments without having trained it for dynamically changing environments. At each time step, the same sentence s is combined with the new environment image I, generating a new policy by providing the parameters for an updated DMP. As can be seen from the image sequence in Figure 6a (a), the robot is successfully able to adapt to the changed bowl position." }, { "heading": "4.4 TRAJECTORY RECONSTRUCTION", "text": "In our work, we chose a DMP as a low-level controller for the MPN model to give the robot the ability to not just learn to approach a predicted goal position, but to also reassemble the shape of demonstrated trajectories. This ability is essential in scenarios in which the trajectory shape encodes additional information, e.g. object avoidance or a certain way in which an object needs to be approached. Figure 6b (b) shows the MPN’s ability to generate trajectories that are similar to what was shown during training. 
The dashed blue line shows the position of the tool center point (in x, y, z coordinates) of a demonstrated trajectory from the test set, the green line shows the corresponding trajectory generated and executed by the DMP controller, and the red line shows the executed trajectory when using a proportional controller. The movement along the Z-axis of the trajectory clearly shows a different behavior of the robot when using a proportional controller. On average, the difference between the tool center point position and the demonstrated trajectory is 1.6 cm and 19.1 cm when using the DMP and proportional controller, respectively." }, { "heading": "4.5 ABLATION STUDY", "text": "Dataset Size Even though we are able to generate a large amount of artificial demonstrations in simulation, the ability of the MPN to train on less data is desirable. For this purpose, we looked at the performance of the MPN when trained with fewer than 20,000 demonstrations, see Table 3. As when testing the spatial generalization capability of the network in Section 4.1, we conducted experiments with various combinations of features, separated by their success rate with regard to the object size. The trained MPN seems to work better with larger objects, which, again, might be related to the chosen image size. However, the experiments showed that when identifying the location of a single object, or in a scenario in which the color is sufficient to distinguish targets, the model does not significantly benefit from more than 10,000 training examples. In addition to the amount of training data, we also trained a model without augmenting the training data with synonyms, which is shown in the last two columns. This model was tested on data using synonyms (second-to-last column) and data not using synonyms (last column). As expected, the problem becomes easier when no synonyms are used. However, this model shows the ability of the GloVe word embeddings to embed words with a similar meaning closer to each other, resulting in a partially usable model.\nNetwork Structure In addition to the size and variety of the dataset, the structure of the network is an important component of our approach. In this section, we compare different choices with regard to the network structure. Table 4 analyzes the performance of different n-gram sizes as compared to the original model. As expected, using a single n-gram size performs significantly worse across all tested sizes as compared to our original model using n-gram sizes of 1, 2, 3 and 5 concurrently. However, the results suggest that smaller n-gram sizes are better at capturing cases in which a single feature is enough to uniquely describe an object, whereas large n-grams seem to lose the ability to focus on the important part of the sentence.\nIn addition to processing language, our approach is able to ground sentences in the current environment perception. ResNet is a common model for tasks related to computer vision (He et al., 2015) and achieved its performance by introducing residual layers in the CNN structure. In the right-most column of Table 4 we analyze the influence of the residual sections of our network by replacing them with max-pooling layers to maintain a similar output structure. As can be seen from the results, the residual units have a significant influence on the overall performance of the network." }, { "heading": "5 CONCLUSION", "text": "In this work, we presented an imitation learning approach combining language, vision, and motion.
A neural network architecture called the Multimodal Policy Network was introduced that is able to learn the cross-modal relationships in the training data and, as a result, achieve high generalization and disambiguation performance. Our experiments showed that the model is able to generalize to different locations and sentences while maintaining a high success rate of delivering an object to a desired bowl. In addition, we discussed two extensions of the method that allow us to obtain uncertainty information from the model, by either learning a separate classifier or utilizing stochastic network outputs to obtain a distribution over the belief.\nFinally, we hope to further expand the verbal fidelity of our model by adding the ability to utilize relational object descriptions to parameterize the task. Using the full range of natural language descriptions will give us the ability to ground additional constraints into robot control." } ]
2019
null
SP:f676894db5781369ec25d27ccf44e51c12d081ea
[ "This paper presents a bilingual generative model for sentence embedding based variational probabilistic framework. By separating a common latent variable from language-specific latent variables, the model is able to capture what's in common between parallel bilingual sentences and language-specific semantics. Experimental results show that the proposed model is able to produce sentence embeddings that reach higher correlation scores with human judgments on Semantic Textual Similarity tasks than previous models such as BERT. ", "This paper addresses the problem of constructing a sentence embedding using a generative transformer model which encodes semantic aspects and language-specific aspect separately. They use transformers to encode and decode sentence embedding, and the objective reconstructs input with a latent variables (language variables for each language and semantic language). These latent variables are sampled from multivariate Gaussian prior, and the learning uses evidence lower bound (ELBO) for variational approximation of the joint distribution of latent variables and input. " ]
Semantic sentence embedding models encode natural language sentences into vectors, such that closeness in embedding space indicates closeness in the semantics between the sentences. Bilingual data offers a useful signal for learning such embeddings: properties shared by both sentences in a translation pair are likely semantic, while divergent properties are likely stylistic or language-specific. We propose a deep latent variable model that attempts to perform source separation on parallel sentences, isolating what they have in common in a latent semantic vector, and explaining what is left over with language-specific latent vectors. Our proposed approach differs from past work on semantic sentence encoding in two ways. First, by using a variational probabilistic framework, we introduce priors that encourage source separation, and can use our model’s posterior to predict sentence embeddings for monolingual data at test time. Second, we use high-capacity transformers as both data generating distributions and inference networks – contrasting with most past work on sentence embeddings. In experiments, our approach substantially outperforms the state-of-the-art on a standard suite of unsupervised semantic similarity evaluations. Further, we demonstrate that our approach yields the largest gains on more difficult subsets of these evaluations where simple word overlap is not a good indicator of similarity.
[]
[ { "authors": [ "Eneko Agirre", "Mona Diab", "Daniel Cer", "Aitor Gonzalez-Agirre" ], "title": "SemEval-2012 task 6: A pilot on semantic textual similarity", "venue": "In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation. Association for Computational Linguistics,", "year": 2012 }, { "authors": [ "Eneko Agirre", "Carmen Banea", "Claire Cardie", "Daniel Cer", "Mona Diab", "Aitor Gonzalez-Agirre", "Weiwei Guo", "Rada Mihalcea", "German Rigau", "Janyce Wiebe" ], "title": "SemEval-2014 task 10: Multilingual semantic textual similarity", "venue": "In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval", "year": 2014 }, { "authors": [ "Eneko Agirre", "Carmen Banea", "Claire Cardie", "Daniel Cer", "Mona Diab", "Aitor Gonzalez-Agirre", "Weiwei Guo", "Inigo Lopez-Gazpio", "Montse Maritxalar", "Rada Mihalcea", "German Rigau", "Larraitz Uria", "Janyce Wiebe" ], "title": "SemEval-2015 task 2: Semantic textual similarity, English, Spanish and pilot on interpretability", "venue": "In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015),", "year": 2015 }, { "authors": [ "Eneko Agirre", "Carmen Banea", "Daniel Cer", "Mona Diab", "Aitor Gonzalez-Agirre", "Rada Mihalcea", "German Rigau", "Janyce Wiebe" ], "title": "SemEval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation", "venue": "Proceedings of SemEval,", "year": 2016 }, { "authors": [ "Sanjeev Arora", "Yingyu Liang", "Tengyu Ma" ], "title": "A simple but tough-to-beat baseline for sentence embeddings", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Mikel Artetxe", "Holger Schwenk" ], "title": "Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond", "venue": "arXiv preprint arXiv:1812.10464,", "year": 2018 }, { "authors": [ "Samuel R. Bowman", "Gabor Angeli", "Christopher Potts", "Christopher D. 
Manning" ], "title": "A large annotated corpus for learning natural language inference", "venue": "In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing,", "year": 2015 }, { "authors": [ "Daniel Cer", "Mona Diab", "Eneko Agirre", "Inigo Lopez-Gazpio", "Lucia Specia" ], "title": "SemEval-2017 Task 1: Semantic textual similarity multilingual and crosslingual focused evaluation", "venue": "In Proceedings of the 11th International Workshop on Semantic Evaluation", "year": 2017 }, { "authors": [ "Daniel Cer", "Yinfei Yang", "Sheng-yi Kong", "Nan Hua", "Nicole Limtiaco", "Rhomni St John", "Noah Constant", "Mario Guajardo-Cespedes", "Steve Yuan", "Chris Tar" ], "title": "Universal sentence encoder", "venue": "arXiv preprint arXiv:1803.11175,", "year": 2018 }, { "authors": [ "Mingda Chen", "Qingming Tang", "Sam Wiseman", "Kevin Gimpel" ], "title": "A multi-task approach for disentangling syntax and semantics in sentence representations", "venue": null, "year": 2019 }, { "authors": [ "Alexis Conneau", "Douwe Kiela" ], "title": "SentEval: An evaluation toolkit for universal sentence representations", "venue": "arXiv preprint arXiv:1803.05449,", "year": 2018 }, { "authors": [ "Alexis Conneau", "Douwe Kiela", "Holger Schwenk", "Loïc Barrault", "Antoine Bordes" ], "title": "Supervised learning of universal sentence representations from natural language inference data", "venue": "In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing,", "year": 2017 }, { "authors": [ "Alexis Conneau", "German Kruszewski", "Guillaume Lample", "Loïc Barrault", "Marco Baroni" ], "title": "What you can cram into a single vector: Probing sentence embeddings for linguistic properties", "venue": "arXiv preprint arXiv:1805.01070,", "year": 2018 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Bill Dolan", "Chris Quirk", "Chris Brockett" ], "title": "Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources", "venue": "In Proceedings of COLING,", "year": 2004 }, { "authors": [ "Cristina Espana-Bonet", "Adám Csaba Varga", "Alberto Barrón-Cedeño", "Josef van Genabith" ], "title": "An empirical analysis of NMT-derived interlingual embeddings and their use in parallel sentence identification", "venue": "IEEE Journal of Selected Topics in Signal Processing,", "year": 2017 }, { "authors": [ "Juri Ganitkevitch", "Benjamin Van Durme", "Chris Callison-Burch" ], "title": "PPDB: The Paraphrase Database", "venue": "In Proceedings of HLT-NAACL,", "year": 2013 }, { "authors": [ "Junxian He", "Daniel Spokoyny", "Graham Neubig", "Taylor Berg-Kirkpatrick" ], "title": "Lagging inference networks and posterior collapse in variational autoencoders", "venue": null, "year": 2019 }, { "authors": [ "Felix Hill", "Kyunghyun Cho", "Anna Korhonen" ], "title": "Learning distributed representations of sentences from unlabelled data", "venue": "In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2016 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Diederik Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Ryan Kiros", "Yukun Zhu", "Ruslan R Salakhutdinov", "Richard Zemel", "Raquel Urtasun", "Antonio Torralba", "Sanja Fidler" ], "title": "Skip-thought vectors", "venue": "In Advances in Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Taku Kudo", "John Richardson" ], "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "venue": "arXiv preprint arXiv:1808.06226,", "year": 2018 }, { "authors": [ "Xuezhe Ma", "Chunting Zhou", "Xian Li", "Graham Neubig", "Eduard Hovy" ], "title": "FlowSeq: Nonautoregressive conditional sequence generation with generative flow", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S. Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher D.
Manning" ], "title": "GloVe: Global vectors for word representation", "venue": "Proceedings of Empirical Methods in Natural Language Processing", "year": 2014 }, { "authors": [ "Gabriel Pereyra", "George Tucker", "Jan Chorowski", "Łukasz Kaiser", "Geoffrey Hinton" ], "title": "Regularizing neural networks by penalizing confident output distributions", "venue": "arXiv preprint arXiv:1701.06548,", "year": 2017 }, { "authors": [ "Matthew E Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": "In Proceedings of NAACL-HLT,", "year": 2018 }, { "authors": [ "Martin Popel", "Ondřej Bojar" ], "title": "Training tips for the transformer model", "venue": "The Prague Bulletin of Mathematical Linguistics,", "year": 2018 }, { "authors": [ "Nils Reimers", "Iryna Gurevych" ], "title": "Sentence-BERT: Sentence embeddings using Siamese BERT-networks", "venue": "arXiv preprint arXiv:1908.10084,", "year": 2019 }, { "authors": [ "Holger Schwenk" ], "title": "Filtering and mining parallel data in a joint multilingual space", "venue": "arXiv preprint arXiv:1805.09822,", "year": 2018 }, { "authors": [ "Holger Schwenk", "Matthijs Douze" ], "title": "Learning joint multilingual sentence representations with neural machine translation", "venue": "arXiv preprint arXiv:1704.04154,", "year": 2017 }, { "authors": [ "Sandeep Subramanian", "Adam Trischler", "Yoshua Bengio", "Christopher J Pal" ], "title": "Learning general purpose distributed sentence representations via large scale multi-task learning", "venue": "arXiv preprint arXiv:1804.00079,", "year": 2018 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R Bowman" ], "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "venue": "arXiv preprint arXiv:1804.07461,", "year": 2018 }, { "authors": [ "John Wieting", "Kevin Gimpel" ], "title": "Revisiting recurrent networks for paraphrastic sentence embeddings", "venue": "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2017 }, { "authors": [ "John Wieting", "Kevin Gimpel" ], "title": "ParaNMT-50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 451–462, Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "John Wieting", "Douwe Kiela" ], "title": "No training required: Exploring random encoders for sentence classification", "venue": "arXiv preprint arXiv:1901.10444,", "year": 2019 }, { "authors": [ "John Wieting", "Mohit Bansal", "Kevin Gimpel", "Karen Livescu" ], "title": "Towards universal paraphrastic sentence embeddings", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "John Wieting", "Mohit Bansal", "Kevin Gimpel", "Karen Livescu" ], "title": "Charagram: Embedding words and sentences via character n-grams", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "John Wieting", "Taylor Berg-Kirkpatrick", "Kevin Gimpel", "Graham Neubig" ], "title": "Beyond BLEU: Training neural machine translation with semantic similarity", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "John Wieting", "Kevin Gimpel", "Graham Neubig", "Taylor Berg-Kirkpatrick" ], "title": "Simple and effective paraphrastic similarity from parallel translations", "venue": "Proceedings of the ACL,", "year": 2019 }, { "authors": [ "Zichao Yang", "Zhiting Hu", "Ruslan Salakhutdinov", "Taylor Berg-Kirkpatrick" ], "title": "Improved variational autoencoders for text modeling using dilated convolutions", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Zachary Ziegler", "Alexander Rush" ], "title": "Latent normalizing flows for discrete sequences", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Pierre Zweigenbaum", "Serge Sharoff", "Reinhard Rapp" ], "title": "Overview of the third BUCC shared task: Spotting parallel sentences in comparable corpora", "venue": "In Proceedings of 11th Workshop on Building and Using Comparable Corpora,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Learning useful representations of language has been a source of recent success in natural language processing (NLP). Much work has been done on learning representations for words (Mikolov et al., 2013; Pennington et al., 2014) and sentences (Kiros et al., 2015; Conneau et al., 2017). More recently, deep neural architectures have been used to learn contextualized word embeddings (Peters et al., 2018; Devlin et al., 2018) which have enabled state-of-the-art results on many tasks. We focus on learning semantic sentence embeddings in this paper, which play an important role in many downstream applications. Since they do not require any labelled data for fine-tuning, sentence embeddings are useful for a variety of problems right out of the box. These include Semantic Textual Similarity (STS; Agirre et al. (2012)), mining bitext (Zweigenbaum et al., 2018), and paraphrase identification (Dolan et al., 2004). Semantic similarity measures also have downstream uses such as fine-tuning machine translation systems (Wieting et al., 2019a).\nThere are three main ingredients when designing a sentence embedding model: the architecture, the training data, and the objective function. Many architectures including LSTMs (Hill et al., 2016; Conneau et al., 2017; Schwenk & Douze, 2017; Subramanian et al., 2018), Transformers (Cer et al., 2018; Reimers & Gurevych, 2019), and averaging models (Wieting et al., 2016a; Arora et al., 2017) have found success for learning sentence embeddings. The choice of training data and objective are intimately intertwined, and there are a wide variety of options including next-sentence prediction (Kiros et al., 2015), machine translation (Espana-Bonet et al., 2017; Schwenk & Douze, 2017; Schwenk, 2018; Artetxe & Schwenk, 2018), natural language inference (NLI) (Conneau et al., 2017), and multi-task objectives which include some of the previously mentioned objectives (Cer et al., 2018) as well as additional tasks like constituency parsing (Subramanian et al., 2018).\nSurprisingly, despite ample testing of more powerful architectures, the best performing models for many sentence embedding tasks related to semantic similarity often use simple architectures that are mostly agnostic to the interactions between words. For instance, some of the top performing\ntechniques use word embedding averaging (Wieting et al., 2016a), character n-grams (Wieting et al., 2016b), and subword embedding averaging (Wieting et al., 2019b) to create representations. These simple approaches are competitive with much more complicated architectures on in-domain data and generalize well to unseen domains, but are fundamentally limited by their inability to capture word order. Training these approaches generally relies on discriminative objectives defined on paraphrase data (Ganitkevitch et al., 2013; Wieting & Gimpel, 2018) or bilingual data (Wieting et al., 2019b). The inclusion of latent variables in these models has also been explored (Chen et al., 2019).\nIntuitively, bilingual data in particular is promising because it potentially offers a useful signal for learning the underlying semantics of sentences. Within a translation pair, properties shared by both sentences are more likely semantic, while those that are divergent are more likely stylistic or language-specific. 
While previous work learning from bilingual data perhaps takes advantage of this fact implicitly, the focus of this paper is modeling this intuition explicitly, and to the best of our knowledge, this has not been explored in prior work. Specifically, we propose a deep generative model that is encouraged to perform source separation on parallel sentences, isolating what they have in common in a latent semantic embedding and explaining what is left over with language-specific latent vectors. At test time, we use inference networks (Kingma & Welling, 2013) for approximating the model’s posterior on the semantic and source-separated latent variables to encode monolingual sentences. Finally, since our model and training objective are generative, our approach does not require knowledge of the distance metrics to be used during evaluation,1 and it has the additional property of being able to generate text.\nIn experiments, we evaluate our probabilistic source-separation approach on a standard suite of STS evaluations. We demonstrate that the proposed approach is effective, most notably allowing the learning of high-capacity deep transformer architectures (Vaswani et al., 2017) while still generalizing to new domains, significantly outperforming a variety of state-of-the-art baselines. Further, we conduct a thorough analysis by identifying subsets of the STS evaluation where simple word overlap is not able to accurately assess semantic similarity. On these most difficult instances, we find that our approach yields the largest gains, indicating that our system is modeling interactions between words to good effect. We also find that our model better handles cross-lingual semantic similarity than multilingual translation baseline approaches, indicating that stripping away language-specific information allows for better comparisons between sentences from different languages.\nFinally, we analyze our model to uncover what information was captured by the source separation into the semantic and language-specific variables and the relationship between this encoded information and language distance to English. We find that the language-specific variables tend to explain more superficial or language-specific properties such as overall sentence length, amount and location of punctuation, and the gender of articles (if gender is present in the language), but semantic and syntactic information is more concentrated in the shared semantic variables, matching our intuition. Language distance has an effect as well, where languages that share common structures with English put more information into the semantic variables, while more distant languages put more information into the language-specific variables. Lastly, we show outputs generated from our model that exhibit its ability to do a type of style transfer." }, { "heading": "2 MODEL", "text": "Our proposed training objective leverages a generative model of parallel text in two languages (e.g. English (en) and French (fr)) that form a pair consisting of an English sentence xen and a French sentence xfr. Importantly, this generative process utilizes three underlying latent vectors: language-specific variation variables (language variables) zfr and zen, respectively, for each side of the translation, as well as a shared semantic variation variable (semantic variable) zsem. In this section we will first describe the generative model for the text and latent variables.
In the following section we will describe the inference procedure for zsem given an input sentence, which corresponds to our core task of obtaining sentence embeddings useful for downstream tasks such as semantic similarity.\nFurther, by encouraging the model to perform this source separation, the learned semantic encoders will more crisply represent the underlying semantics, increasing performance on downstream semantic tasks.\n1In other words, we don’t assume cosine similarity as a metric, though it does work well in our experiments.\nThe generative process of our model, the Bilingual Generative Transformer (BGT), is depicted in Figure 1 and its computation graph is shown in Figure 2. First, we sample latent variables 〈zfr, zen, zsem〉, where zi ∈ Rk, from a multivariate Gaussian prior N(0, Ik). These variables are then fed into a decoder that samples sentences; xen is sampled conditioned on zsem and zen, while xfr is sampled conditioned on zsem and zfr. Because sentences in both languages will use zsem in generation, we expect that in a well-trained model this variable will encode semantic, syntactic, or stylistic information shared across both sentences, while zfr and zen will handle any language-specific peculiarities or specific stylistic decisions that are less central to the sentence meaning and thus do not translate across sentences. In the following section, we further discuss how this is explicitly encouraged by the learning process.\nDecoder Architecture. Many latent variable models for text use LSTMs (Hochreiter & Schmidhuber, 1997) as their decoders (Yang et al., 2017; Ziegler & Rush, 2019; Ma et al., 2019). However, state-of-the-art models in neural machine translation have seen increased performance and speed using deep Transformer architectures. We also found in our experiments (see Appendix C for details) that Transformers led to increased performance in our setting, so they are used in our main model.\nWe use two decoders in our model, one for modeling p(xfr|zsem, zfr; θ) and one for modeling p(xen|zsem, zen; θ). These decoders are depicted on the right side of Figure 2. Each decoder takes in two latent variables, a language variable and a semantic variable. These variables are concatenated together prior to being used by the decoder for reconstruction. We explore four ways of using this latent vector: (1) concatenate it to the word embeddings (Word), (2) use it as the initial hidden state (Hidden, LSTM only), (3) use it as you would the attention context vector in the traditional sequence-to-sequence framework (Attention), and (4) concatenate it to the hidden state immediately prior to computing the logits (Logit). Unlike Attention, there is no additional feedforward layer in this setting. We experimented with these four approaches, as well as combinations thereof, and report this analysis in Appendix A. From these experiments, we see that the closer the sentence embedding is to the softmax, the better the performance on downstream tasks evaluating its semantic content. We hypothesize that this is due to better gradient propagation, because the sentence embedding is now closer to the error signal. Since Attention and Logit performed best, we use these in our Transformer experiments." }, { "heading": "3 LEARNING AND INFERENCE", "text": "Our model is trained on a training set X of parallel text consisting of N examples, X = {〈x1en, x1fr〉, . . . , 〈xNen, xNfr〉}, and Z is our collection of latent variables Z = (〈z1en, z1fr, z1sem〉, . . . , 〈zNen, zNfr, zNsem〉).
We wish to maximize the likelihood of the parameters of the two decoders θ with respect to the observed X, marginalizing over the latent variables Z:\np(X; θ) = ∫Z p(X,Z; θ) dZ\nUnfortunately, this integral is intractable due to the complex relationship between X and Z. However, related latent variable models like variational autoencoders (VAEs; Kingma & Welling, 2013) learn by optimizing a variational lower bound on the log marginal likelihood. This surrogate objective is called the evidence lower bound (ELBO) and introduces a variational approximation q to the true posterior of the model p. The q distribution is parameterized by a neural network with parameters φ. The ELBO can be written for our model as follows:\nELBO = Eq(Z|X;φ)[log p(X|Z; θ)] − KL(q(Z|X;φ) || p(Z; θ))\nThis lower bound on the marginal can be optimized by gradient ascent using the reparameterization trick (Kingma & Welling, 2013). This trick allows the expectation under q to be approximated through sampling in a way that preserves backpropagation.\nWe make several independence assumptions for q(zsem, zen, zfr|xen, xfr;φ). Specifically, to match our goal of source separation, we factor q as q(zsem, zen, zfr|xen, xfr;φ) = q(zsem|xen, xfr;φ) q(zen|xen;φ) q(zfr|xfr;φ), with φ being the parameters of the encoders that make up the inference networks, defined in the next paragraph.\nLastly, we note that the KL term in our ELBO equation encourages explaining variation that is shared by translations with the shared semantic variable and explaining language-specific variation with the corresponding language-specific variables. Information shared by the two sentences will result in a lower KL loss if it is encoded in the shared variable; otherwise that information will be replicated and the overall cost of encoding will increase.\nEncoder Architecture. We use three inference networks as shown on the left side of Figure 2: an English inference network to produce the English language variable, a French inference network to produce the French language variable, and a semantic inference network to produce the semantic variable. Just as in the decoder architecture, we use a Transformer for the encoders.\nThe semantic inference network is a bilingual encoder that encodes each language. For each translation pair, we alternate which of the two parallel sentences is fed into the semantic encoder within a batch. Since the semantic encoder is meant to capture language-agnostic semantic information, its outputs for a translation pair should be similar regardless of the language of the input sentence. We note that other operations are possible for combining the views each parallel sentence offers. For instance, we could feed both sentences into the semantic encoder and pool their representations. However, in practice we find that alternating works well and leave further study of this to future work." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 BASELINE MODELS", "text": "We experiment with fourteen baseline models, covering both the most effective approaches for learning sentence embeddings from the literature and ablations of our own BGT model.
These baselines can be split into three groups, as detailed below.\nModels from the Literature (Trained on Different Data) We compare to the well-known sentence embedding models Infersent (Conneau et al., 2017), GenSen (Subramanian et al., 2018), the Universal Sentence Encoder (USE) (Cer et al., 2018), as well as BERT (Devlin et al., 2018).2 We used the pretrained BERT model in two ways to create a sentence embedding. The first way is to concatenate the hidden states for the CLS token in the last four layers. The second way is to concatenate the hidden states of all word tokens in the last four layers and mean-pool these representations. Both methods result in a 4096-dimensional embedding. Finally, we compare to the newly released model, Sentence-BERT (Reimers & Gurevych, 2019). This model is similar to Infersent (Conneau et al., 2017) in that it is trained on natural language inference data, SNLI (Bowman et al., 2015). However, instead of using pretrained word embeddings, they fine-tune BERT in a way that induces sentence embeddings.3\nModels from the Literature (Trained on Our Data) These models are amenable to being trained in the exact same setting as our own models, as they only require parallel text. These include the sentence piece averaging model, SP, from Wieting et al. (2019b), which is among the best of the averaging models (i.e. compared to averaging only words or character n-grams), as well as the LSTM model, BILSTM, from Wieting & Gimpel (2017). These models use a contrastive loss with a margin. Following their settings, we fix the margin to 0.4 and tune the number of batches to pool for selecting negative examples from {40, 60, 80, 100}. For both models, we set the dimension of the embeddings to 1024. For BILSTM, we train a single-layer bidirectional LSTM with hidden states of 512 dimensions. To create the sentence embedding, the forward and backward hidden states are concatenated and mean-pooled. Following Wieting & Gimpel (2017), we shuffle the inputs with probability p, tuning p from {0.3, 0.5}. We also implicitly compare to previous machine translation approaches like (Espana-Bonet et al., 2017; Schwenk & Douze, 2017; Artetxe & Schwenk, 2018) in Appendix A, where we explore different variations of training LSTM sequence-to-sequence models. We find that our translation baselines reported in the tables below (both LSTM and Transformer) outperform the architectures from these works due to using the Attention and Logit methods mentioned in Section 2, demonstrating that our baselines represent, or even over-represent, the state-of-the-art for machine translation approaches.\nBGT Ablations Lastly, we compare to ablations of our model to better understand the benefits of parallel data, language-specific variables, the KL loss term, and how much we gain over the more conventional translation baselines.\n• ENGLISHAE: English autoencoder on the English side of our en-fr data. • ENGLISHVAE: English variational autoencoder on the English side of our en-fr data. • ENGLISHTRANS: Translation from en to fr. • BILINGUALTRANS: Translation from both en to fr and fr to en, where the encoding parameters are shared but each language has its own decoder. • BGT W/O LANGVARS: A model similar to BILINGUALTRANS, but it includes a prior over the embedding space and therefore a KL loss term. This model differs from BGT since it does not have any language-specific variables. • BGT W/O PRIOR: Follows the same architecture as BGT, but without the priors and KL loss term.
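For concreteness, the following is a minimal sketch of the reparameterized diagonal-Gaussian posterior and KL term shared by the variational models above (BGT, BGT W/O LANGVARS, and ENGLISHVAE); the pooling of encoder states into a single vector and the class name are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GaussianInferenceHead(nn.Module):
    """Sketch of one inference-network head: a pooled encoder state is mapped
    to a diagonal Gaussian posterior, sampled with the reparameterization
    trick, with a closed-form KL against the N(0, I) prior."""
    def __init__(self, d_model=1024, k=1024):
        super().__init__()
        self.mu = nn.Linear(d_model, k)
        self.logvar = nn.Linear(d_model, k)

    def forward(self, h):                 # h: (batch, d_model) pooled encoding
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # z = mu + sigma * eps
        # KL(q(z|x) || N(0, I)), summed over latent dimensions.
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
        return z, kl
```

The full objective then sums the decoders' reconstruction log-likelihoods with the KL terms of all three latent variables, as in the ELBO of Section 3.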
" }, { "heading": "4.2 EXPERIMENTAL SETTINGS", "text": "The training data for our models is a mixture of OpenSubtitles 20184 en-fr data and en-fr Gigaword5 data. To create our dataset, we combined the complete corpora of each dataset and then randomly selected 1,000,000 sentence pairs to be used for training, with 10,000 used for validation. We use sentencepiece (Kudo & Richardson, 2018) with a vocabulary size of 20,000 to segment the sentences, and we chose sentence pairs whose sentences are between 5 and 100 tokens each.
4http://opus.nlpl.eu/OpenSubtitles.php 5https://www.statmt.org/wmt10/training-giga-fren.tar
In designing the model architectures for the encoders and decoders, we experimented with Transformers and LSTMs. Due to better performance, we use a 5-layer Transformer for each of the encoders and a single-layer decoder for each of the decoders. This design decision was empirically motivated, as we found using a larger decoder was slower and worsened performance, but conversely, adding more encoder layers improved performance. More discussion of these trade-offs, along with ablations and comparisons to LSTMs, is included in Appendix C.
For all of our models, we set the dimension of the embeddings and hidden states for the encoders and decoders to 1024. Since we experiment with two different architectures,6 we follow two different optimization strategies. For training models with Transformers, we use Adam (Kingma & Ba, 2014) with β1 = 0.9, β2 = 0.98, and ε = 10−8. We use the same learning rate schedule as Vaswani et al. (2017): the learning rate increases linearly for 4,000 steps to 5 × 10−4, after which it is decayed proportionally to the inverse square root of the number of steps (a sketch of this schedule appears at the end of this subsection). For training the LSTM models, we use Adam with a fixed learning rate of 0.001. We train our models for 20 epochs.
6We use LSTMs in our ablations.
For models incorporating a translation loss, we used label smoothed cross entropy (Szegedy et al., 2016; Pereyra et al., 2017) with ε = 0.1. For ENGLISHVAE, BGT, and BILINGUALTRANS, we anneal the KL term so that it increases linearly over the first 2^16 updates, which robustly gave good results in preliminary experiments. We also found that in training BGT, combining its loss with the BILINGUALTRANS objective during training of both models increased performance, and so this loss was summed with the BGT loss in all of our experiments. We note that this does not affect our claim of BGT being a generative model, as this loss is only used in a multi-task objective at training time, and we calculate the generation probabilities according to standard BGT at test time.
Lastly, in Appendix B, we illustrate that it is crucial to train the Transformers with large batch sizes. Without this, the model can learn the goal task (such as translation) with reasonable accuracy, but the learned semantic embeddings are of poor quality until batch sizes approximately reach 25,000 tokens. Therefore, we use a maximum batch size of 50,000 tokens in our ENGLISHTRANS, BILINGUALTRANS, and BGT W/O PRIOR experiments and 25,000 tokens in our BGT W/O LANGVARS and BGT experiments.
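The following is a minimal sketch of the learning-rate schedule and KL annealing described above; the function names and their integration with the training loop are illustrative.

def transformer_lr(step, peak_lr=5e-4, warmup=4000):
    # Linear warmup to peak_lr over the first 4,000 steps, then decay
    # proportional to the inverse square root of the step count,
    # as in Vaswani et al. (2017). Continuous at step == warmup.
    step = max(step, 1)
    if step < warmup:
        return peak_lr * step / warmup
    return peak_lr * (warmup ** 0.5) * (step ** -0.5)

def kl_weight(step, horizon=2 ** 16):
    # Linear KL annealing: the weight grows from 0 to 1 over the horizon.
    return min(step / horizon, 1.0)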
" }, { "heading": "4.3 EVALUATION", "text": "Our primary evaluations are the 2012-2016 SemEval Semantic Textual Similarity (STS) shared tasks (Agirre et al., 2012; 2013; 2014; 2015; 2016), where the goal is to accurately predict the degree to which two sentences have the same meaning as measured by human judges. The evaluation metric is Pearson's r with the gold labels.
Secondly, we evaluate on Hard STS, where we combine and filter the STS datasets in order to make a more difficult evaluation. We hypothesize that these datasets contain many examples whose gold scores are easy to predict by either having similar structure and word choice and a high score or dissimilar structure and word choice and a low score. Therefore, we split the data using symmetric word error rate (SWER),7 finding sentence pairs with low SWER and low gold scores as well as sentence pairs with high SWER and high gold scores. This results in two datasets: Hard+, whose pairs have SWERs in the bottom 20% of all STS pairs and gold labels between 0 and 1,8 and Hard-, whose pairs have SWERs in the top 20% and gold labels between 4 and 5. We also evaluate on a split where negation was likely present in the example.9 Examples are shown in Table 1. A sketch of the SWER computation and the Hard STS splits appears at the end of this subsection.
7We define symmetric word error rate for sentences s1 and s2 as 1/2 WER(s1, s2) + 1/2 WER(s2, s1), since word error rate (WER) is an asymmetric measure. 8STS scores are between 0 and 5. 9We selected examples for the negation split where one sentence contained not or 't and the other did not.
Lastly, we evaluate on STS in es and ar as well as cross-lingual evaluations for en-es, en-ar, and en-tr. We use the datasets from SemEval 2017 (Cer et al., 2017). For this setting, we train BILINGUALTRANS and BGT on 1 million examples from en-es, en-ar, and en-tr OpenSubtitles 2018 data.
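As referenced above, a minimal sketch of SWER and the Hard STS construction follows; helper names are illustrative, and the token-level edit distance uses a standard rolling-array Levenshtein computation.

def wer(hyp, ref):
    # Word error rate: token-level edit distance / reference length.
    h, r = hyp.split(), ref.split()
    d = list(range(len(r) + 1))
    for i, ht in enumerate(h, 1):
        prev, d[0] = d[0], i  # prev holds the diagonal (i-1, j-1) entry
        for j, rt in enumerate(r, 1):
            prev, d[j] = d[j], min(d[j] + 1,            # deletion
                                   d[j - 1] + 1,        # insertion
                                   prev + (ht != rt))   # substitution/match
    return d[len(r)] / max(len(r), 1)

def swer(s1, s2):
    # WER is asymmetric, so average both directions.
    return 0.5 * wer(s1, s2) + 0.5 * wer(s2, s1)

def hard_splits(pairs):
    # pairs: list of (s1, s2, gold) triples with gold in [0, 5].
    scored = sorted(pairs, key=lambda p: swer(p[0], p[1]))
    k = len(scored) // 5  # bottom / top 20% by SWER
    hard_plus = [p for p in scored[:k] if 0.0 <= p[2] <= 1.0]    # Hard+
    hard_minus = [p for p in scored[-k:] if 4.0 <= p[2] <= 5.0]  # Hard-
    return hard_plus, hard_minus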
" }, { "heading": "4.4 RESULTS", "text": "The results on STS and Hard STS are shown in Table 2.10 From the results, we see that BGT has the highest overall performance. It does especially well compared to prior work on the two Hard STS datasets.
10We obtained values for STS 2012-2016 from prior works using SentEval (Conneau & Kiela, 2018). Note that we include all datasets for the 2013 competition, including SMT, which is not included in SentEval.
We show further difficult splits in Table 3, including a negation split, beyond those used in Hard STS, and compare the top two performing models on the STS task from Table 2. We also show easier splits in the bottom of the table.
From these results, we see that both positive examples that have little shared vocabulary and structure and negative examples with significant shared vocabulary and structure benefit significantly from using a deeper architecture. Similarly, examples where negation occurs also benefit from our deeper model. These examples are difficult because more than just the identity of the words is needed to determine the relationship of the two sentences, and this is something that SP is not equipped for since it is unable to model word order. The bottom two rows show easier examples, where positive examples have high overlap and low SWER and vice versa for negative examples. Both models perform similarly on this data, with the BGT model having a small edge, consistent with the overall gap between these two models.
Lastly, in Table 4, we show the results of STS evaluations in es and ar and cross-lingual evaluations for en-es, en-ar, and en-tr. From these results, we see that BGT has the best performance across all datasets; its advantage over the BILINGUALTRANS and BGT W/O PRIOR baselines is especially pronounced in the cross-lingual setting. Since BGT W/O LANGVARS also has significantly better performance on these tasks, most of this gain seems to be due to the prior having a regularizing effect. However, BGT outperforms BGT W/O LANGVARS overall, and we hypothesize that the gap in performance between these two models is due to BGT being able to strip away the language-specific information in the representations with its language-specific variables, allowing the semantics of the sentences to be more directly compared." }, { "heading": "5 ANALYSIS", "text": "We next analyze our BGT model by examining what elements of syntax and semantics the language and semantic variables capture relative both to each other and to the sentence embeddings from the BILINGUALTRANS models. We also analyze how the choice of language and its lexical and syntactic distance from English affects the semantic and syntactic information captured by the semantic and language-specific encoders. Finally, we also show that our model is capable of sentence generation in a type of style transfer, demonstrating its capabilities as a generative model." }, { "heading": "5.1 STS", "text": "We first show that the language variables capture little semantic information by evaluating the learned English language-specific variable from our BGT model on our suite of semantic tasks. The results in Table 5 show that these encoders perform closer to a random encoder than the semantic encoder from BGT. This is consistent with what we would expect to see if they are capturing extraneous language-specific information." }, { "heading": "5.2 PROBING", "text": "We probe our BGT semantic and language-specific encoders, along with our BILINGUALTRANS encoders as a baseline, to compare and contrast what aspects of syntax and semantics they are learning relative to each other across five languages with various degrees of similarity to English. All models are trained on the OpenSubtitles 2018 corpus. We use the datasets from Conneau et al. (2018) for semantic tasks like number of subjects and number of objects, and syntactic tasks like tree depth and top constituent. Additionally, we include predicting the word content and sentence length. We also add our own tasks to validate our intuitions about punctuation and language-specific information. In the first of these, punctuation number, we train a classifier to predict the number of punctuation marks11 in a sentence. To make the task more challenging, we limit each label to have at most 20,000 examples split among training, validation, and testing data.12 In the second task, punctuation first, we train a classifier to predict the identity of the first punctuation mark in the sentence. In our last task, gender, we detect examples where the gender of the articles in the sentence is incorrect in French or Spanish. To create an incorrect example, we switch articles from {le, la, un, une} for French and {el, la, los, las} for Spanish with their counterpart of the opposite gender (indefinite or definite for French, singular or plural for Spanish).
This dataset was balanced so that random chance gives 50% accuracy on the testing data. All tasks use 100,000 examples for training and 10,000 examples for validation and testing. The results of these experiments are shown in Table 6.
11Punctuation marks were taken from the set { ' ! " # $ % & ( ) ∗ + , − . / : ; < = > ? @ [ ] ˆ ‘ { — } ˜ }. 12The labels are from 1 punctuation mark up to 10 marks, with an additional label consolidating 11 or more marks.
These results show that the source separation is effective: stylistic and language-specific information like length, punctuation, and language-specific gender information is more concentrated in the language variables, while word content, semantic, and syntactic information is more concentrated in the semantic encoder. The choice of language is also seen to be influential on what these encoders are capturing. When the languages are closely related to English, like French and Spanish, the performance difference between the semantic and English language encoders is larger for word content, subject number, and object number than for more distantly related languages like Arabic and Turkish. In fact, word content performance is directly tied to how well the alphabets of the two languages overlap. This relationship matches our intuition, because lexical information will be cheaper to encode in the semantic variable when it is shared between the languages. Similarly, for the tasks of length, punctuation first, and punctuation number, the gap in performance between the two encoders also grows as the languages become more distant from English. Lastly, the gap in STS performance between the two encoders shrinks as the languages become more distant, which again is what we would expect, as the language-specific encoders are forced to capture more information.
Japanese is an interesting case in these experiments, where the English language-specific encoder outperforms the semantic encoder on the semantic and syntactic probing tasks. Japanese is a very distant language to English both in its writing system and in its sentence structure (it is an SOV language, where English is an SVO language). However, despite these differences, the semantic encoder still strongly outperforms the English language-specific encoder on STS, suggesting that the underlying meaning of the sentence is much better captured by the semantic encoder." }, { "heading": "5.3 GENERATION AND STYLE TRANSFER", "text": "In this section, we qualitatively demonstrate the ability of our model to generate sentences. We focus on a style-transfer task where we have original seed sentences from which we calculate our semantic vector zsem and language-specific vector zen. Specifically, we feed a Source sentence into the semantic encoder to obtain zsem, and another Style sentence into the English language-specific encoder to obtain zen. We then generate a new sentence using these two latent variables. This can be seen as a type of style transfer where we expect the model to generate a sentence that has the semantics of the Source sentence and the style of the Style sentence. We use our en-fr BGT model from Table 6 and show some examples in Table 7. All input sentences are from held-out en-fr OpenSubtitles data. From these examples, we see further evidence of the role of the semantic and language-specific encoders, where most of the semantics (e.g., topical words such as seen and tech in the Source sentence) are reflected in the output, but length and structure are more strongly influenced by the language-specific encoder.
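The generation procedure just described can be summarized in a few lines; the encoder and decoder attribute names below are illustrative rather than the exact interface of our implementation.

def style_transfer(model, source, style):
    # Semantics come from the Source sentence, style from the Style sentence.
    z_sem = model.semantic_encoder(source).sample()   # hypothetical interface
    z_en = model.english_encoder(style).sample()
    # Decode a new English sentence conditioned on both latent variables.
    return model.english_decoder.generate(z_sem, z_en)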
" }, { "heading": "6 CONCLUSION", "text": "We propose Bilingual Generative Transformers, a model that uses parallel data to learn to perform source separation of common semantic information between two languages from language-specific information. We show that the model is able to accomplish this source separation through probing tasks and text generation in a style-transfer setting. We find that our model bests all baselines on semantic similarity tasks, with the largest gains coming from Hard STS, a new challenge we propose, designed to foil methods that approximate semantic similarity as word overlap. We also find our model to be especially effective on cross-lingual semantic similarity, due to its stripping away of language-specific information, allowing the underlying semantics to be more directly compared. In future work, we will explore generalizing this approach to the multilingual setting." }, { "heading": "A LOCATION OF SENTENCE EMBEDDING IN DECODER FOR LEARNING REPRESENTATIONS", "text": "As mentioned in Section 2, we experimented with 4 ways to incorporate the sentence embedding into the decoder: Word, Hidden, Attention, and Logit. We also experimented with combinations of these 4 approaches. We evaluate these embeddings on the STS tasks and show the results, along with the time to train the models for 1 epoch, in Table 8.
For these experiments, we train a single-layer bidirectional LSTM (BiLSTM) ENGLISHTRANS model with embedding size set to 1024 and hidden states set to 512 dimensions (in order to be roughly equivalent to our Transformer models). To form the sentence embedding in this variant, we mean-pool the hidden states for each time step. The cell states of the decoder are initialized to the zero vector.
From this analysis, we see that the best performance is achieved with Logit, when the sentence embedding is placed just prior to the softmax. The performance is much better than Hidden or Hidden+Word used in prior work. For instance, recently Artetxe & Schwenk (2018) used the Hidden+Word strategy in learning multilingual sentence embeddings.
A.1 VAE TRAINING
We also found that incorporating the latent code of a VAE into the decoder using the Logit strategy increases the mutual information while having little effect on the log likelihood. We trained two LSTM VAE models following the settings and aggressive training strategy in He et al. (2019), where one LSTM model used the Hidden strategy and the other used the Hidden + Logit strategy. We trained the models on the en side of our en-fr data. We found that the mutual information increased from 0.89 to 2.46, while the approximate negative log likelihood, estimated by importance weighting, increased slightly from 53.3 to 54.0 when using Logit." }, { "heading": "B RELATIONSHIP BETWEEN BATCH SIZE AND PERFORMANCE FOR TRANSFORMER AND LSTM", "text": "It has been observed previously that the performance of Transformer models is sensitive to batch size (Popel & Bojar, 2018). We found this to be especially true when training sequence-to-sequence models to learn sentence embeddings. Figure 3 shows plots of the average 2012-2016 STS performance of the learned sentence embedding as batch size increases for both the BiLSTM and Transformer. Initially, at a batch size of 2500 tokens, the learned sentence embeddings are worse than random, even though validation perplexity does decrease during this time. Performance rises as batch size increases up to around 100,000 tokens.
In contrast, the BiLSTM is more robust to batch size, peaking much earlier around 25,000 tokens, and even degrading at higher batch sizes." }, { "heading": "C MODEL ABLATIONS", "text": "In this section, we vary the number of layers in the encoder and decoder in BGT W/O PRIOR. We see that performance increases as the number of encoder layers increases, and also that a large decoder hurts performance, allowing us to save training time by using a single layer. These results can be compared to those in Table 9, showing that Transformers outperform BiLSTMs in these experiments." }, { "heading": "D CLASSIFICATION EXPERIMENTS", "text": "To explore our embeddings in more detail, we evaluated them on the Quora Question Pairs dataset13 (QQP). This is a paraphrase classification task, which is also part of GLUE (Wang et al., 2018). Since the test set is private, we deviated slightly from the standard evaluation protocol and split the development set into two halves of 20,215 examples each – one half for model selection and the other for evaluation. We evaluated in two ways: cosine, where we score all pairs with cosine similarity and then find the threshold that gives the best accuracy, and logistic regression, where a logistic regression classifier is trained on top of the embeddings. It is worth noting that the pretrained baseline models on this task were directly trained to produce the feature set used by the downstream classifier, while our embeddings are trained without this supervision. They also tend to have larger dimensions, which also gives them an advantage, discussed in more detail in Wieting & Kiela (2019). The results are shown in Table 10 and show that our BGT model outperforms the baseline models, SP, ENGLISHTRANS, and BILINGUALTRANS, for both evaluations, and compares favorably to the pretrained models when evaluated using cosine similarity scores. The only models which perform better are USE, which was trained on Quora data in an unsupervised way, and Sentence-BERT, which uses BERT. Our models are not as strong when using classification for final predictions. This indicates that the embeddings learned by our approach may be most useful when no downstream training is possible – though semi-supervised objectives that consider the downstream task might aid our approach, like the baselines, if downstream training is the goal.
13data.quora.com/First-Quora-Dataset-Release-Question-Pairs" } ]
2019
A BILINGUAL GENERATIVE TRANSFORMER
SP:345e244321aa18121d73d55e9e572eb904f48e9e
[ "This paper asks whether it works to remove task-specific heads and treat classification and regression problems as span extraction, by formatting problems in such a way that a single span extraction model can be used. This is a reasonable question to ask, and the authors performed a very large number of experiments attempting to answer this question. The authors claim that using span-extractive models instead of task-specific heads yields improved performance over separate heads.", "This paper introduces a method for converting sentence pair classification tasks and sentence regression tasks into span extraction tasks, by listing all the possible classes (entailment, contradiction, neutral) or the discretized scores (0.0, 0.25 ...) and concatenating them with the source text. With this formulation, one can train a BERT-based span-extraction model (SpEx-BERT) on classification, regression, and QA tasks without introducing any task-specific parameters. The proposed SpEx-BERT model achieves moderate improvement (0.3 points) over the BERT-large baseline on the GLUE test set when fine-tuned on intermediate STILTs tasks (Phang et al., 2018)." ]
Even as pre-trained language encoders such as BERT are shared across many tasks, the output layers of question answering, text classification, and regression models are significantly different. Span decoders are frequently used for question answering, fixed-class classification layers for text classification, and similarity-scoring layers for regression tasks. We show that this distinction is not necessary and that all three can be unified as span extraction. A unified, span-extraction approach leads to superior or comparable performance in supplementary supervised pre-training, low-data, and multi-task learning experiments on several question answering, text classification, and regression benchmarks.
[]
[ { "authors": [ "Roy Bar-Haim", "Ido Dagan", "Bill Dolan", "Lisa Ferro", "Danilo Giampiccolo", "Bernardo Magnini", "Idan Szpektor" ], "title": "The second pascal recognising textual entailment challenge", "venue": "In Proceedings of the second PASCAL challenges workshop on recognising textual entailment,", "year": 2006 }, { "authors": [ "Luisa Bentivogli", "Peter Clark", "Ido Dagan", "Danilo Giampiccolo" ], "title": "The fifth pascal recognizing textual entailment challenge", "venue": "In TAC,", "year": 2009 }, { "authors": [ "Daniel Cer", "Mona Diab", "Eneko Agirre", "Inigo Lopez-Gazpio", "Lucia Specia" ], "title": "Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation", "venue": "arXiv preprint arXiv:1708.00055,", "year": 2017 }, { "authors": [ "Kevin Clark", "Minh-Thang Luong", "Urvashi Khandelwal", "Christopher D Manning", "Quoc V Le" ], "title": "Bam! born-again multi-task networks for natural language understanding", "venue": null, "year": 1907 }, { "authors": [ "Ronan Collobert", "Jason Weston" ], "title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", "venue": "In Proceedings of the 25th international conference on Machine learning,", "year": 2008 }, { "authors": [ "Ronan Collobert", "Jason Weston", "Léon Bottou", "Michael Karlen", "Koray Kavukcuoglu", "Pavel Kuksa" ], "title": "Natural language processing (almost) from scratch", "venue": "Journal of machine learning research,", "year": 2011 }, { "authors": [ "Alexis Conneau", "Guillaume Lample", "Ruty Rinott", "Adina Williams", "Samuel R Bowman", "Holger Schwenk", "Veselin Stoyanov" ], "title": "Xnli: Evaluating cross-lingual sentence representations", "venue": "arXiv preprint arXiv:1809.05053,", "year": 2018 }, { "authors": [ "Ido Dagan", "Bill Dolan", "Bernardo Magnini", "Dan Roth" ], "title": "Recognizing textual entailment: Rational, evaluation and approaches–erratum", "venue": "Natural Language Engineering,", "year": 2010 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "William B Dolan", "Chris Brockett" ], "title": "Automatically constructing a corpus of sentential paraphrases", "venue": "In Proceedings of the Third International Workshop on Paraphrasing (IWP2005),", "year": 2005 }, { "authors": [ "Danilo Giampiccolo", "Bernardo Magnini", "Ido Dagan", "Bill Dolan" ], "title": "The third pascal recognizing textual entailment challenge", "venue": "In Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing,", "year": 2007 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Luheng He", "Mike Lewis", "Luke S. 
Zettlemoyer" ], "title": "Question-answer driven semantic role labeling: Using natural language to annotate natural language", "venue": "In EMNLP,", "year": 2015 }, { "authors": [ "Jeremy Howard", "Sebastian Ruder" ], "title": "Universal language model fine-tuning for text classification", "venue": "arXiv preprint arXiv:1801.06146,", "year": 2018 }, { "authors": [ "Mandar Joshi", "Eunsol Choi", "Daniel S Weld", "Luke Zettlemoyer" ], "title": "Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension", "venue": "arXiv preprint arXiv:1705.03551,", "year": 2017 }, { "authors": [ "Hector Levesque", "Ernest Davis", "Leora Morgenstern" ], "title": "The winograd schema challenge", "venue": "In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning,", "year": 2012 }, { "authors": [ "Omer Levy", "Minjoon Seo", "Eunsol Choi", "Luke Zettlemoyer" ], "title": "Zero-shot relation extraction via reading comprehension", "venue": "arXiv preprint arXiv:1706.04115,", "year": 2017 }, { "authors": [ "Xiaodong Liu", "Pengcheng He", "Weizhu Chen", "Jianfeng Gao" ], "title": "Multi-task deep neural networks for natural language understanding", "venue": "arXiv preprint arXiv:1901.11504,", "year": 2019 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach", "venue": "arXiv preprint arXiv:1907.11692,", "year": 2019 }, { "authors": [ "Bryan McCann", "James Bradbury", "Caiming Xiong", "Richard Socher" ], "title": "Learned in translation: Contextualized word vectors", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Bryan McCann", "Nitish Shirish Keskar", "Caiming Xiong", "Richard Socher" ], "title": "The natural language decathlon: Multitask learning as question answering", "venue": "arXiv preprint arXiv:1806.08730,", "year": 2018 }, { "authors": [ "Vinod Nair", "Geoffrey E Hinton" ], "title": "Rectified linear units improve restricted boltzmann machines", "venue": "In Proceedings of the 27th International Conference on Machine Learning", "year": 2010 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": "NIPS-W,", "year": 2017 }, { "authors": [ "Matthew E Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": "arXiv preprint arXiv:1802.05365,", "year": 2018 }, { "authors": [ "Jason Phang", "Thibault Févry", "Samuel R Bowman" ], "title": "Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks", "venue": "arXiv preprint arXiv:1811.01088,", "year": 2018 }, { "authors": [ "Alec Radford", "Karthik Narasimhan", "Tim Salimans", "Ilya Sutskever" ], "title": "Improving language understanding by generative pre-training. 
URL https://s3-us-west-2.amazonaws.com/ openai-assets/research-covers/langu ageunsupervised/language understand ing paper.pdf, 2018", "venue": null, "year": 2018 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang" ], "title": "Squad: 100,000+ questions for machine comprehension of text", "venue": "arXiv preprint arXiv:1606.05250,", "year": 2016 }, { "authors": [ "Richard Socher", "Alex Perelygin", "Jean Wu", "Jason Chuang", "Christopher D Manning", "Andrew Ng", "Christopher Potts" ], "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "venue": "In Proceedings of the 2013 conference on empirical methods in natural language processing,", "year": 2013 }, { "authors": [ "Alon Talmor", "Jonathan Herzig", "Nicholas Lourie", "Jonathan Berant" ], "title": "Commonsenseqa: A question answering challenge targeting commonsense knowledge", "venue": "arXiv preprint arXiv:1811.00937,", "year": 2018 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Ł ukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Alex Wang", "Amapreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R Bowman" ], "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "venue": "arXiv preprint arXiv:1804.07461,", "year": 2018 }, { "authors": [ "Adina Williams", "Nikita Nangia", "Samuel R Bowman" ], "title": "A broad-coverage challenge corpus for sentence understanding through inference", "venue": "arXiv preprint arXiv:1704.05426,", "year": 2017 }, { "authors": [ "Caiming Xiong", "Victor Zhong", "Richard Socher" ], "title": "Dynamic coattention networks for question answering", "venue": "arXiv preprint arXiv:1611.01604,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Pre-trained natural language processing (NLP) systems (Radford et al., 2019; Devlin et al., 2018; Radford et al., 2018; Howard & Ruder, 2018; Peters et al., 2018; McCann et al., 2017; Liu et al., 2019b) have been shown to transfer remarkably well to downstream tasks including text classification, question answering, machine translation, and summarization (Wang et al., 2018; Rajpurkar et al., 2016; Conneau et al., 2018). Such approaches involve a pre-training phase followed by the addition of task-specific layers and a subsequent re-training or fine-tuning of the conjoined model. Each task-specific layer relies on an inductive bias related to the kind of target task. For question answering, a task-specific span-decoder is often used to extract a span of text verbatim from a portion of the input text (Xiong et al., 2016). For text classification, a task-specific classification layer with fixed classes is typically used instead. For regression, similarity-measuring layers such as least-squares and cosine similarity are employed. These task-specific inductive biases are unnecessary. On several tasks predominantly treated as text classification or regression, we find that reformulating them as span-extraction problems and relying on a span-decoder yields performance superior to using task-specific layers.
For text classification and regression problems, pre-trained NLP systems can benefit from supplementary training on intermediate-labeled tasks (STILTs) (Phang et al., 2018), i.e., supplementary supervised training. We find this is similarly true for question answering, classification, and regression when reformulated as span-extraction. Because we rely only on the span-extractive inductive bias, we are able to further explore previously unconsidered combinations of datasets. By doing this, we find that question answering tasks can benefit from text classification tasks and classification tasks can benefit from question answering ones.
The success of pre-training for natural language processing systems affords the opportunity to re-examine the benefits of our inductive biases. Our results on common question answering, text classification, and regression benchmark tasks suggest that it is advantageous to discard the inductive bias that motivates task-specific, fixed-class classification and similarity-scoring layers in favor of the inductive bias that views all three as span-extraction problems." }, { "heading": "1.1 CONTRIBUTIONS", "text": "Summarily, we demonstrate the following:
1. Span-extraction is an effective approach for unifying question answering, text classification, and regression.
2. Span-extraction benefits as much from intermediate-task training as more traditional text classification and regression methods.
3. Span-extraction allows for combinations of question answering and text classification datasets in intermediate-task training that outperform using only one or the other.
4. Span-extractive multi-task learning yields stronger multi-task models, but weaker single-task models, compared to intermediate-task training.
5. Span-extraction with intermediate-task training proves more robust in the presence of limited training data than the corresponding task-specific versions." }, { "heading": "2 RELATED WORK", "text": "Transfer Learning. The use of pre-trained encoders for transfer learning in NLP dates back to Collobert & Weston (2008); Collobert et al. (2011) but has had a resurgence in the recent past.
BERT (Devlin et al., 2018) employs the recently proposed Transformer layers (Vaswani et al., 2017) in conjunction with a masked language modeling objective as a pre-trained sentence encoder. Prior to BERT, contextualized word vectors (McCann et al., 2017) were pre-trained using machine translation data and transferred to text classification and question answering tasks. ELMO (Peters et al., 2018) improved contextualized word vectors by using a language modeling objective instead of machine translation. ULMFit (Howard & Ruder, 2018) and GPT (Radford et al., 2018) showed how traditional, causal language models could be fine-tuned directly for a specific task, and GPT2 (Radford et al., 2019) showed that such language models can indirectly learn tasks like machine translation, question answering, and summarization.\nIntermediate-task and Multi-task Learning. The goal of unifying NLP is not new (Collobert & Weston, 2008; Collobert et al., 2011). In Phang et al. (2018), the authors explore the efficacy of supplementary training on intermediate tasks, a framework that the authors abbreviate as STILTs. Given a target task T and a pre-trained sentence encoder, they first fine-tune the encoder on an intermediate (preferably related) task I and then finally fine-tune on the task T . The authors showed that such an approach has several benefits including improved performance and better robustness to hyperparameters. The authors primarily focus on the GLUE benchmark (Wang et al., 2018). Liu et al. (2019a) explore the same task and model class (viz., BERT) in the context of multi-tasking. Instead of using supplementary training, the authors choose to multi-task on the objectives and, similar to BERT on STILTs, fine-tune on the specific datasets in the second phase. Further improvements can be obtained through heuristics such as knowledge distillation as demonstrated in Clark et al. (2019). All of these approaches require a different classifier head for each task, e.g., a two-way classifier for SST and a three-way classifier for MNLI. Two recent approaches: decaNLP (McCann et al., 2018) and GPT-2 Radford et al. (2019) propose the unification of NLP as question answering and language modeling, respectively. As investigated in this work, the task description is provided in natural language instead of fixing the classifier a-priori." }, { "heading": "3 METHODS", "text": "We propose treating question answering, text classification, and regression as span-extractive tasks. Each input is split into two segments: a source text which contains the span to be extracted and an auxiliary text that is used to guide extraction. Question answering often fits naturally into this framework by providing both a question and a context document that contains the answer to that question. When treated as span-extraction, the question is the auxiliary text and the context document is the source text from which the span is extracted. Text classification input text most often does not contain a natural language description of the correct class. When it is more natural to consider the input text as one whole, we treat it as the auxiliary text and use a list of natural language descriptions of all possible classification labels as source text. When the input text contains two clearly delimited segments, one is treated as auxiliary text and the other as source text with appended natural language descriptions of possible classification labels. 
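As an illustration of the input format just described, the sketch below builds the token sequence for a two-sentence classification task such as MNLI. The exact class phrasings are illustrative (Section 6.1 discusses phrasing choices), and the tokenizer interface is assumed to be BERT-like.

def build_spex_input(tokenizer, source, auxiliary, classes=None):
    # For classification, natural language class descriptions are appended
    # to the source text so the correct label is extractable as a span.
    if classes is not None:
        source = source + " " + " ".join(classes)
    src = tokenizer.tokenize(source)
    aux = tokenizer.tokenize(auxiliary)
    tokens = ["[CLS]"] + src + ["[SEP]"] + aux   # p = m + n + 2 tokens
    # Segment ids mark source text (0) vs. auxiliary text (1);
    # here CLS and SEP are grouped with the source segment.
    segments = [0] * (len(src) + 2) + [1] * len(aux)
    return tokens, segments

# e.g., an NLI pair as span extraction:
# build_spex_input(tok, premise, hypothesis,
#                  classes=["entailment", "contradiction", "neutral"])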
For regression, we employ a process similar to classification; instead of predicting a floating-point number, we bucket the possible range and classify the text instead.
Our proposal is agnostic to the details of most common preprocessing and tokenization schemes for the tasks under consideration, so for ease of exposition we assume three phases: preprocessing, encoding, and decoding. Preprocessing includes any manipulation of raw input text; this includes tokenization. An encoder is used to extract features from the input text, and an output layer is used to decode the output from the extracted features. Encoders often include a conversion of tokens to distributed representations followed by application of several layers of LSTM, Transformer, convolutional neural network, attention, or pooling operations. In order to properly use these extracted features, the output layers often contain more inductive bias related to the specific task. For many question answering tasks, a span-decoder uses the extracted features to select a start and end token in the source document. For text classification, a linear layer and softmax allow for classification of the extracted features. Similarly, for regression, a linear layer and a similarity-scoring objective such as cosine distance or least-squares are employed. We propose to use span-decoders as the output layers for text classification and regression in place of the more standard combination of linear layer with task-specific objectives." }, { "heading": "3.1 SPAN-EXTRACTIVE BERT (SPEX-BERT)", "text": "In our experiments, we start with a pre-trained BERT as the encoder with preprocessing as described in Devlin et al. (2018). This preprocessing takes in the source text and auxiliary text and outputs a sequence of p = m + n + 2 tokens: a special CLS token, the m tokens of the source text, a separator token SEP, and the n auxiliary tokens. The encoder begins by converting this sequence of tokens into a sequence of p vectors in R^d. Each of these vectors is the sum of a token embedding, a positional embedding that represents the position of the token in the sequence, and a segment embedding that represents whether the token is in the source text or the auxiliary text, as described in Devlin et al. (2018). This sequence is stacked into a matrix X_0 ∈ R^{p×d} so that it can be processed by several Transformer layers (Vaswani et al., 2017). The i-th layer first computes α_k(X_i) by applying self-attention with k heads over the previous layer's outputs:
α_k(X_i) = [h_1; · · · ; h_k] W_o, where h_j = α(X_i W_j^1, X_i W_j^2, X_i W_j^3)   (1)
α(X, Y, Z) = softmax(X Y^⊤ / √d) Z   (2)
A residual connection (He et al., 2016) and layer normalization (Ba et al., 2016) merge information from the input and the multi-head attention:
H_i = LayerNorm(α_k(X_i) + X_i)   (3)
This is followed by a feedforward network with ReLU activation (Nair & Hinton, 2010; Vaswani et al., 2017), another residual connection, and a final layer normalization. With parameters U ∈ R^{d×f} and V ∈ R^{f×d}:
X_{i+1} = LayerNorm(max(0, H_i U) V + H_i)   (4)
Let X_s^f ∈ R^{m×d} represent the final output of these Transformer layers, i.e., the rows corresponding to the m source-text tokens.
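Equations (1)-(4) can be transcribed almost directly into code; the sketch below keeps the paper's notation (per-head projections W_j^1, W_j^2, W_j^3, output projection W_o, feedforward weights U and V) and takes the layer-normalization modules as given. Parameters are passed explicitly for exposition rather than held in a module.

import torch
import torch.nn.functional as F

def attention(X, Y, Z):
    # alpha(X, Y, Z) = softmax(X Y^T / sqrt(d)) Z               (eq. 2)
    d = X.size(-1)
    return F.softmax(X @ Y.transpose(-2, -1) / d ** 0.5, dim=-1) @ Z

def transformer_layer(X, heads, Wo, U, V, ln1, ln2):
    # heads: list of (W1, W2, W3) projection triples, one per head.
    H = torch.cat([attention(X @ W1, X @ W2, X @ W3)
                   for W1, W2, W3 in heads], dim=-1) @ Wo       # eq. 1
    H = ln1(H + X)                                              # eq. 3
    return ln2(F.relu(H @ U) @ V + H)                           # eq. 4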
At this point, a task-specific head usually uses some part of X_s^f to classify, regress, or extract spans. Our proposal is to use a span-decoder limited to X_s^f whenever a classification or similarity-scoring layer is typically used. In this case, we add only two trainable parameter vectors d_start and d_end following Devlin et al. (2018), and we compute start and end distributions over possible spans by multiplying these vectors with X_s^f and applying a softmax function:
p_start = softmax(X_s^f d_start)   p_end = softmax(X_s^f d_end)   (5)
During training, we are given the ground truth answer span (a∗, b∗) as a pair of indices into the source text. The summation of cross-entropy losses over the start and end distributions then gives an overall loss for a training example:
L_start = −Σ_i I{a∗ = i} log p_start(i)   L_end = −Σ_i I{b∗ = i} log p_end(i)   (6)
with the overall loss L = L_start + L_end. At inference, we extract a span (a, b) as
a = argmax_i p_start(i)   b = argmax_i p_end(i)   (7)" }, { "heading": "4 EXPERIMENTAL SETUP", "text": "" }, { "heading": "4.1 TASKS, DATASETS AND METRICS", "text": "We divide our experiments into three categories: classification, regression, and question answering. For classification and regression, we evaluate on all the GLUE tasks (Wang et al., 2018). This includes the Stanford Sentiment Treebank (SST) (Socher et al., 2013), MSR Paraphrase Corpus (MRPC) (Dolan & Brockett, 2005), Quora Question Pairs (QQP), Multi-genre Natural Language Inference (MNLI) (Williams et al., 2017), Recognizing Textual Entailment (RTE) (Dagan et al., 2010; Bar-Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009), Question-answering as NLI (QNLI) (Rajpurkar et al., 2016), and Semantic Textual Similarity (STS-B) (Cer et al., 2017). The Winograd schemas challenge as NLI (WNLI) (Levesque et al., 2012) was excluded during training because of known issues with the dataset. As with most other models on the GLUE leaderboard, we report the majority class label for all instances. With the exception of STS-B, which is a regression dataset, all other datasets are classification datasets. For question answering, we employ 6 popular
We rely on the BERT training library1 available in PyTorch Paszke et al. (2017)." }, { "heading": "5 RESULTS", "text": "Next, we present numerical experiments to buttress the claims presented in Section 1.1.\n1https://github.com/huggingface/pytorch-pretrained-BERT/\nSpan-extraction is similar or superior to task-specific heads (classification or regression). Table 2 shows our results comparing BERT (with and without STILTs) with the corresponding variant of SpEx-BERT on the GLUE tasks Wang et al. (2018). For almost all datasets, the performance for SpEx-BERT is better than that of BERT, which is perhaps especially surprising for the regression task (STS-B). One can reasonably expect model performance to improve by converting such problems into a span-extraction problem over natural language class descriptions.\nSpEx-BERT improves on STILTs. As in the case of Phang et al. (2018), we find that using supplementary tasks for pre-training improves the performance on the target tasks. We follow the setup of Phang et al. (2018) and carry out a two-stage training process. First, we fine-tune the BERT model with a span-extraction head on an intermediate task. Next, we fine-tune this model on the target task with a fresh instance of the optimizer. Note that Phang et al. (2018) require a new classifier head when switching between tasks that have different numbers of classes or task, but no such modifications are necessary when SpEx-BERT is applied. SpEx-BERT also allows for seamless switching between question answering, text classification, and regression tasks.\nIn Table 5, we present the results for SpEx-BERT on STILTs. In a majority of cases, the performance of SpEx-BERT matches or outperforms that of BERT. This is especially pronounced for datasets with limited training data, such as MRPC and RTE with SpEx-BERTLARGE and BERTLARGE: 85.2 vs 83.4 for RTE, and 90.4 vs 89.5 for MRPC). We hypothesize that this increase is due to the fact that the class choices are provided to the model in natural language, which better utilizes the pretrained representations of a large language model like BERT. Finally, we note, perhaps surprisingly, that question answering datasets (SQuAD and TriviaQA) improve performance of some of the classification tasks. Notable examples include SST (pre-trained from the Wiki version of TriviaQA) and RTE (pre-trained from any of the three datasets).\nSTILTs improves question answering as well. Table 3a shows similar experiments on popular question answering datasets. The transferability of question answering datasets is well-known. Datasets such as TriviaQA, SQuAD and ZRE have been known to improve each other’s scores and have improved robustness to certain kinds of queries (Devlin et al., 2018; McCann et al., 2018). We further discover that through the formulation of SpEx-BERT, classification datasets also help question answering datasets. In particular, MNLI improves the scores of almost all datasets over their baselines. For SQuAD, the benefit of STILTs with the classification dataset MNLI is almost as much as the question answering dataset TriviaQA.\nSTILTs can be chained. Pre-training models using intermediate tasks with labeled data has been shown to be useful in improving performance. Phang et al. (2018) explored the possibility of using one intermediate task to demonstrate this improvement. We explore the possibility of chaining multiple intermediate tasks in Table 3a. 
Conceptually, if improved performance on SQuAD during the first stage of fine-tuning leads to improved performance for the target task of CQA, improving performance of SQuAD through in turn pre-training it on MNLI would improve the eventual goal of CQA. Indeed, our experiments suggest the efficacy of chaining intermediate tasks in this way. CQA\nobtains a score of 63.8 when fine-tuned from a SQuAD model (of score 84.0) and obtains a score of 65.7 when fine-tuned on a SQuAD model that was itself fine-tuned using MNLI (of score 84.5) as an intermediate task.\nMulti-task STILTs yields stronger multi-task models, but weaker single-task models. We also experiment with multi-task learning during intermediate-task training. We present the results for such intermediate-multi-task training on RTE in Table 4a. In intermediate-multi-task training, we cycle through one batch for each of the tasks until the maximum number of iterations is reached. No special consideration is made for the optimizer or weighing of objectives. The results show that intermediate-multi-task training improves performance over the baseline for RTE, but this improvement is less than when only MNLI is used for intermediate-task training. Though not desirable if RTE is the only target task, such intermediate-multi-task training yields a better multi-task model that performs well on both datasets: the joint (single) model achieved 75.0 on RTE and 86.2 on MNLI, both of which are better than their single-task baselines. In some cases, the increased performance for one task (MNLI) might be preferable to that on another (RTE). We note that this observation is similar to the one of Phang et al. (2018).\nSpEx-BERT on STILTs is more robust than BERT on STILTs when training data is limited. In Table 3b, we present results for the same models (BERT and SpEx-BERT) being fine-tuned with sub-sampled versions of the dataset. For this experiment, we follow Phang et al. (2018) and subsample 1000 data points at random without replacement and choose the best development set accuracy across several random restarts. The rest of the experimental setup remains unchanged. When used in conjunction with STILTs, the performance improves as expected and, in a majority of cases, significantly exceeds that of the corresponding baseline that does not use span-extraction." }, { "heading": "6 DISCUSSION", "text": "" }, { "heading": "6.1 PHRASING THE QUESTION", "text": "As described in Section 3, when converting any of the classification or regression problems into a span-extraction one, the possible classes or bucketed values need to be presented in natural language as part of the input text. This leaves room for experimentation. We found that separation of naturally delimited parts of the input text into source and auxiliary text was crucial for best performance. Recall that for question answering, the natural delimitation is to assign the given context document as the source text and the question as the auxiliary text. This allows the span-decoder to extract a span from the context document, as expected. For single-sentence problems, there is no need for delimitation and the correct span is typically not found in the given sentence, so it is treated as auxiliary text.\nNatural language descriptions of the classes or allowable regression values are provided as source text for span extraction. For two-sentence problems, the natural delimitation suggests treating one sentence as source text and the other as auxiliary. 
The classification or regression choices must be in the source text, but it was also the case that one of the sentences must also be in the source text. Simply concatenating both sentences and assigning them as the source text was detrimental for tasks like MNLI.\nFor the case of classification, when experimenting with various levels of brevity, we found that simpler is better. Being terse eases training since the softmax operation over possible start and end locations is over a relatively smaller window. While more detailed explanations might elaborate on what the classes mean or otherwise provide additional context for the classes, these potential benefits were outstripped by increasing the length of the source text. We present these results on the development set of the MNLI dataset with BERTBASE in Table 4b. For regression, there exists a trade-off between brevity and granularity of the regression. We found that dividing the range into 10 – 20 buckets did not appreciably change the resulting correlation score for STS-B." }, { "heading": "6.2 A FULLY JOINT MODEL WITHOUT TASK-SPECIFIC PARAMETERS", "text": "Unlike similar approaches using task-specific heads Liu et al. (2019a), SpEx-BERT allows for a single model across a broader set of tasks. This makes possible a single, joint model with all parameters shared. We present the results of this experiment in Table 5 in the Appendix; we multi-task over all datasets considered so far. Multi-task performance exceeds single-task performance for many of the question answering datasets (ZRE, SRL, CQA) as well as the classification dataset RTE. In some cases, these improvements are drastic (over 9% accuracy). Unfortunately, the opposite is true for the two tasks that are the greatest source of transfer, MNLI and SQuAD, and the remaining GLUE tasks. Understanding why such vampiric relationships amongst datasets manifest, why any particular dataset appears beneficial, neutral, or detrimental to the performance of others, and why question answering tasks appear more amenable to the fully-joint setting remain open questions. Nonetheless, a purely span-extractive approach has allowed us to observe such relationships more directly than in settings that use multiple task-specific heads or fine-tune separately on each task. Because some tasks benefit and others suffer, these results present a trade-off. Depending on which tasks and datasets are more pertinent, multi-task learning might be the right choice, especially given the ease of deploying a single architecture that does not require any task-specific modifications.\nJoint models for NLP have already been studied Collobert et al. (2011); McCann et al. (2018); Radford et al. (2019) with a broad set of tasks that may require text generation and more general architectures. These approaches have yet to perform as well as task-specific models on common benchmarks, but they have demonstrated that large amounts of unsupervised data, curriculum learning, and task sampling strategies can help mitigate the negative influence multitasking tends to have on datasets that are especially good for transfer learning. This work represents a connection between those works and work that focuses on task-specific fine-tuning of pre-trained architectures." }, { "heading": "7 CONCLUSION", "text": "With the successful training of supervised and unsupervised systems that rely on increasingly large amounts of data, more of the natural variation in language is captured during pre-training. 
This suggests that less inductive bias in the design of task-specific architectures might be required when approaching NLP tasks. We have proposed that the inductive bias that motivates the use task-specific layers is no longer necessary. Instead, a span-extractive approach, common to question answering, should be extended to text classification and regression problems as well. Experiments comparing the traditional approach with BERT to SpEx-BERT have shown that the span-extractive approach often yields stronger performance as measured by scores on the GLUE benchmark. This reduces the need for architectural modifications across datasets or tasks, and opens ways for applying methods like STILTs to question answering or a combination of text classification, regression, and question answering datasets to further improve performance. Experiments have further shown that spanextraction proves more robust in the presence of limited training data. We hope that these findings will promote further exploration into the design of unified architectures for a broader set of tasks." }, { "heading": "A MULTITASKING RESULTS", "text": "Below is the table that supports the commentary in Section 6.2." } ]
2019
null
SP:891db9f5c3c7f981f4b9e37e36436a471f65117a
[ "The paper \"Denoising Improves Latent Space Geometry in Text Autoencoders\" tackles the problem of text autoencoding in a space which respects text similarities. It is an interesting problem for which various attempts have been proposed, while still facing difficulties for encoding in smooth spaces. The paper proposes a simple (rather straightforward) approach based on adversarial learning, with some theoretical guarantees, which obtains good performances for reconstruction and neighborhood preservation. ", "This paper presented a denoising adversarial autoencoder for sentence embeddings. The idea is that by introducing perturbations (word omissions, etc) the embeddings are more meaningful and less \"memorized\". Evaluations include measuring sentence perplexity in generation/reconstruction, tense changing via vector arithmetic, sentiment changes via negative/positive vector additions, and sentence interpolations. " ]
Neural language models have recently shown impressive gains in unconditional text generation, but controllable generation and manipulation of text remain challenging. In particular, controlling text via latent space operations in autoencoders has been difficult, in part due to chaotic latent space geometry. We propose to employ adversarial autoencoders together with denoising (referred to as DAAE) to drive the latent space to organize itself. Theoretically, we prove that input sentence perturbations in the denoising approach encourage similar sentences to map to similar latent representations. Empirically, we illustrate the trade-off between text-generation and autoencoder-reconstruction capabilities, and our model significantly improves over other autoencoder variants. Even from completely unsupervised training without style information, DAAE can perform various style transfers, including tense and sentiment, through simple latent vector arithmetic.1
[]
[ { "authors": [ "Martin Arjovsky", "Soumith Chintala", "Léon Bottou" ], "title": "Wasserstein generative adversarial networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Yoshua Bengio", "Li Yao", "Guillaume Alain", "Pascal Vincent" ], "title": "Generalized denoising auto-encoders as generative models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Samuel R Bowman", "Luke Vilnis", "Oriol Vinyals", "Andrew M Dai", "Rafal Jozefowicz", "Samy Bengio" ], "title": "Generating sentences from a continuous space", "venue": "In Conference on Computational Natural Language Learning,", "year": 2016 }, { "authors": [ "Xi Chen", "Diederik P Kingma", "Tim Salimans", "Yan Duan", "Prafulla Dhariwal", "John Schulman", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Variational lossy autoencoder", "venue": "arXiv preprint arXiv:1611.02731,", "year": 2016 }, { "authors": [ "Ondřej Cífka", "Aliaksei Severyn", "Enrique Alfonseca", "Katja Filippova" ], "title": "Eval all, trust a few, do wrong to none: Comparing sentence generation models", "venue": "arXiv preprint arXiv:1804.07972,", "year": 2018 }, { "authors": [ "Antonia Creswell", "Anil Anthony Bharath" ], "title": "Denoising adversarial autoencoders", "venue": "IEEE transactions on neural networks and learning systems,", "year": 2018 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Junxian He", "Daniel Spokoyny", "Graham Neubig", "Taylor Berg-Kirkpatrick" ], "title": "Lagging inference networks and posterior collapse in variational autoencoders", "venue": null, "year": 1901 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-vae: Learning basic visual concepts with a constrained variational framework", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Zhiting Hu", "Zichao Yang", "Xiaodan Liang", "Ruslan Salakhutdinov", "Eric P Xing" ], "title": "Toward controlled generation of text", "venue": "In Proceedings of the 34th International Conference on Machine LearningVolume", "year": 2017 }, { "authors": [ "Daniel Im Jiwoong Im", "Sungjin Ahn", "Roland Memisevic", "Yoshua Bengio" ], "title": "Denoising criterion for variational auto-encoding framework", "venue": "In Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Yoon Kim", "Sam Wiseman", "Andrew C Miller", "David Sontag", "Alexander M Rush" ], "title": "Semi-amortized variational autoencoders", "venue": "arXiv preprint arXiv:1802.02550,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ 
"Guillaume Lample", "Myle Ott", "Alexis Conneau", "Ludovic Denoyer", "Marc’Aurelio Ranzato" ], "title": "Phrasebased & neural unsupervised machine translation", "venue": "arXiv preprint arXiv:1804.07755,", "year": 2018 }, { "authors": [ "Lajanugen Logeswaran", "Honglak Lee", "Samy Bengio" ], "title": "Content preserving text generation with attribute controls", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Tomas Mikolov", "Wen-tau Yih", "Geoffrey Zweig" ], "title": "Linguistic regularities in continuous space word representations", "venue": "In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2013 }, { "authors": [ "Jonas Mueller", "David Gifford", "Tommi Jaakkola" ], "title": "Sequence to better sequence: continuous revision of combinatorial structures", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Kishore Papineni", "Salim Roukos", "Todd Ward", "Wei-Jing Zhu" ], "title": "Bleu: a method for automatic evaluation of machine translation", "venue": "In Proceedings of the 40th annual meeting on association for computational linguistics,", "year": 2002 }, { "authors": [ "Ben Poole", "Jascha Sohl-Dickstein", "Surya Ganguli" ], "title": "Analyzing noise in autoencoders and deep networks", "venue": "arXiv preprint arXiv:1406.1831,", "year": 2014 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "OpenAI Blog,", "year": 2019 }, { "authors": [ "Paul K Rubenstein", "Bernhard Schoelkopf", "Ilya Tolstikhin" ], "title": "On the latent space of wasserstein auto-encoders", "venue": "arXiv preprint arXiv:1802.03761,", "year": 2018 }, { "authors": [ "Anton Maximilian Schäfer", "Hans Georg Zimmermann" ], "title": "Recurrent neural networks are universal approximators", "venue": "In International Conference on Artificial Neural Networks,", "year": 2006 }, { "authors": [ "Tianxiao Shen", "Tao Lei", "Regina Barzilay", "Tommi Jaakkola" ], "title": "Style transfer from non-parallel text by cross-alignment", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Sandeep Subramanian", "Guillaume Lample", "Eric Michael Smith", "Ludovic Denoyer", "Marc’Aurelio Ranzato", "Y-Lan Boureau" ], "title": "Multiple-attribute text style transfer", "venue": "arXiv preprint arXiv:1811.00552,", "year": 2018 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Pascal Vincent", "Hugo Larochelle", "Yoshua Bengio", "Pierre-Antoine Manzagol" ], "title": "Extracting and composing robust features with denoising autoencoders", "venue": "In Proceedings of the 25th international conference on Machine learning,", "year": 2008 }, { "authors": [ "Zichao Yang", "Zhiting Hu", "Ruslan Salakhutdinov", "Taylor Berg-Kirkpatrick" ], "title": "Improved variational autoencoders for text modeling using dilated convolutions", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", 
"Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Junbo Zhao", "Yoon Kim", "Kelly Zhang", "Alexander M Rush", "Yann LeCun" ], "title": "Adversarially regularized autoencoders", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Shen" ], "title": "The Yelp dataset", "venue": "Our second dataset of Yahoo answers is from Yang et al", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Autoencoder based generative models have recently become popular tools for advancing controllable text generation such as style or sentiment transfer (Bowman et al., 2016; Hu et al., 2017; Shen et al., 2017; Zhao et al., 2018). By mapping sentences to vectors in the latent space, these models offer in principle an attractive, continuous approach to manipulating text by means of simple latent vector arithmetic. However, the success of such manipulations rests heavily on the latent space geometry and how well it agrees with underlying sentence semantics. Indeed, we demonstrate that without additional guidance, fortuitous geometric agreements are unlikely to arise, shedding light on challenges faced by existing methods.\nWe use adversarial autoencoders (Makhzani et al., 2015, AAEs) to study the latent space geometry. In contrast to variational autoencoders (Kingma & Welling, 2014, VAEs), AAEs can maintain strong coupling between the encoder and decoder that the decoder does not omit the encoded input sentence (Bowman et al., 2016). The training criterion for AAEs consists of two parts, the ability to reconstruct sentences and the additional constraint that the encoded sentences are overall indistinguishable from prior samples, typically Gaussian. We show that these objectives alone do not suffice to force proper latent space geometry for text control. Specifically, for discrete objects such as sentences where continuity assumptions no longer hold, powerful AAEs can easily learn to map training sentences into latent prior samples arbitrarily (Figure 1, Left), while retaining perfect reconstruction. Latent space manipulations in such cases will yield random, unpredictable results.\nTo remedy this, we augment AAEs with a simple denoising objective (Vincent et al., 2008; Creswell & Bharath, 2018) that requires perturbed sentence with some words missing to be mapped back to the original version. We refer to our model as DAAE. We prove that the denoising criterion can eliminate disorganized solutions and drive the latent space to organize itself. As a result, similar sentences begin to be mapped to similar latent vectors (Figure 1, Right).\nImprovements in latent space geometry carry many positive consequences. Through systematic evaluations of the generation and reconstruction capabilities of various text autoencoders (Cífka et al., 2018), we find that our proposed DAAE provides the best trade-off between producing highquality text vs. informative sentence representations. We empirically verify that DAAE has the best neighborhood preservation property, consistent with our theory. We further investigate to what extent\n1Our code will be made publicly available after the review process.\ntext can be manipulated by applying simple transformations in the learned latent space. Our model is able to perform sentence-level vector arithmetic (Mikolov et al., 2013) fairly well to change the tense or sentiment of a sentence without any training supervision. It also produces higher quality sentence interpolations than other text autoencoders, suggesting better linguistic continuity in its latent space (Bowman et al., 2016)." }, { "heading": "2 RELATED WORK", "text": "Denoising is first introduced into standard autoencoders by Vincent et al. (2008, DAE) to learn robust representations. Without a latent prior, DAE requires sophisticated MCMC sampling to be employed generatively (Bengio et al., 2013). 
Creswell & Bharath (2018) applied denoising with AAEs to generative image modeling. Here, we demonstrate that input perturbations are particularly useful for discrete text modeling because they encourage preservation of data structure in the latent space.\nApart from the AAE framework that our paper focuses on, another popular latent variable generative model is the variational autoencoder (Kingma & Welling, 2014, VAE). Unfortunately, when the decoder is a powerful autoregressive model (such as a language model), VAE suffers from the posterior collapse problem where the latent representations get ignored (Bowman et al., 2016; Chen et al., 2016). If denoising is used in conjunction with VAEs (Im et al., 2017) in text applications, then the noisy inputs will only exacerbate VAE’s neglect of the latent variable. Bowman et al. (2016) proposed to weaken VAE’s decoder by masking words on the decoder side to alleviate its collapse issue. However, even with a weakened decoder and combined with other techniques including KL-weight annealing and adjusting training dynamics, it is still difficult to inject significant content into the latent code (Yang et al., 2017; Kim et al., 2018; He et al., 2019). Alternatives like the β-VAE (Higgins et al., 2017) appear necessary.\nPrevious work on controllable text generation has employed autoencoders trained with attribute label information (Hu et al., 2017; Shen et al., 2017; Zhao et al., 2018; Logeswaran et al., 2018; Subramanian et al., 2018). We show that the proposed DAAE model can perform text manipulations despite being trained in a completely unsupervised manner without attribute labels. This suggests that on the one hand, our model can be adapted to semi-supervised learning when a few labels are available. On the other hand, it can be easily scaled up to train one large model on unlabeled text corpora and then applied for transferring various styles." }, { "heading": "3 METHOD", "text": "Define X = Vm to be a space of sequences of discrete symbols from vocabulary V (with maximum length m); also define Z = Rd to be a continuous latent space. Our goal is to learn a mapping between the data distribution pdata(x) over X and a given prior distribution p(z) over latent space Z (following common practice, a Gaussian prior is used in our experiments, although not required by our methodology). Such a mapping allows us to easily manipulate discrete data through continuous latent representations z, and provides a generative model where data samples can be obtained by first drawing z from the prior and then sampling a corresponding sequence via p(x|z).\nWe adopt the adversarial autoencoder (AAE) framework, which involves a (deterministic) encoder E : X → Z , a probabilistic decoder G : Z → X , and a discriminator D : Z → [0, 1] . Both E and G are recurrent neural networks (RNNs)2. E takes input sequence x and outputs the last hidden state as its encoding z. G generates a sequence x autoregressively, with each step conditioned on z and previous symbols. The discriminator D is a feed-forward net that outputs the probability of z coming from the prior rather than the encoder. E, G and D are trained jointly with a min-max objective:\nmin E,G max D Lrec(θE , θG)− λLadv(θE , θD) (1)\nwith: Lrec(θE , θG) = Epdata(x)[− log pG(x|E(x))] (2) Ladv(θE , θD) = Ep(z)[− logD(z)] + Epdata(x)[− log(1−D(E(x)))] (3)\nwhere reconstruction loss Lrec and adversarial loss3 Ladv are weighted via hyperparameter λ > 0. 
We further introduce perturbations in X space to learn smoother representations that reflect local structure in the data, ending up with the denoising adversarial autoencoder (DAAE) model. Given a perturbation process C that stochastically maps x to nearby x̃ ∈ X , let p(x, x̃) = pdata(x)pC(x̃|x) and p(x̃) = ∑x p(x, x̃). We change the loss functions to be:\nLrec(θE, θG) = Ep(x,x̃)[− log pG(x|E(x̃))] (4)\nLadv(θE, θD) = Ep(z)[− log D(z)] + Ep(x̃)[− log(1 − D(E(x̃)))] (5)\nHere, Lrec is the loss of reconstructing x from x̃, and Ladv is the adversarial loss evaluated on perturbed x. The objective function combines the denoising technique with the AAE (Vincent et al., 2008; Creswell & Bharath, 2018). When pC(x̃|x) = 1[x̃ = x] (i.e. there is no perturbation), the above simply becomes the usual AAE objective.\nLet pE(z|x) denote the encoder distribution. With our perturbation process C, the posterior distributions of the DAAE are of the following form:\nq(z|x) = ∑x̃ pC(x̃|x) pE(z|x̃) (6)\nThis enables the DAAE to utilize stochastic encodings even by merely employing a deterministic encoder network trained without any reparameterization-style tricks. Note that since q(z|x) of the form (6) is a subset of all possible conditional distributions, our model is still minimizing an upper bound of the Wasserstein distance between data and model distributions, as previously shown by Tolstikhin et al. (2017) for AAE (see Appendix A for a full proof).\n2Transformer models (Vaswani et al., 2017) did not outperform LSTMs on our moderately-sized datasets.\n3We actually train E to maximize log D(E(x)) instead of − log(1 − D(E(x))), which is more stable in practice (Goodfellow et al., 2014). We also tried WGAN (Arjovsky et al., 2017) but did not notice any gains." }, { "heading": "4 LATENT SPACE GEOMETRY", "text": "The latent space geometry of text autoencoders is an important yet understudied problem. Only when the latent space is smooth and regular can meaningful text manipulations be enacted via simple modifications of the corresponding latent representations. Here, we discuss in detail the posterior characteristics of the DAAE, and provide a theoretical analysis of how input perturbations help better structure the latent space geometry (all proofs are relegated to the appendix).\nAssume our perturbations preserve x with some probability (i.e. pC(x|x) > 0). When the supports of C(x1) and C(x2) do not overlap for different training examples x1 ≠ x2, the encoder can learn to assign pE(z|x̃) = pE(z|x) for x̃ ∈ C(x), and we are back to the unconstrained posterior scenario q(z|x) = pE(z|x) (Eq. 6). If C(x1) and C(x2) do intersect, then the latent posteriors of x1 and x2 will have overlapping components pE(z|x̃) for x̃ ∈ C(x1) ∩ C(x2). For example, if pC(x̃|x) assigns a high probability to x̃ that lies close to x (based on some metric over X ), then for similar x1 and x2, the high-probability overlap between their perturbations will inherently force their posteriors closer together in the latent space. This is desirable for learning good representations z, while not guaranteed by merely minimizing the statistical divergence between pdata(x) and pG(x) = Ep(z)[pG(x|z)]. Now we formally analyze which type of x-z mappings will be learned by AAE and DAAE, respectively, to achieve global optimality of their training objectives. Unlike previous analyses of noise in single-layer networks (Poole et al., 2014), here we study high-capacity encoder/decoder networks (Schäfer & Zimmermann, 2006) with a large number of parameters that are used in modern sequence models (Devlin et al., 2018; Radford et al., 2019). Throughout, we assume that: Assumption 1. E is a universal approximator capable of producing any mapping from x’s to z’s. Assumption 2. G can approximate arbitrary p(x|z) so long as it remains sufficiently Lipschitz continuous in z. Namely, there exists L > 0 such that all decoder models G obtainable via training satisfy that for all x ∈ X , z1, z2 ∈ Z: | log pG(x|z1) − log pG(x|z2)| ≤ L‖z1 − z2‖. Following prior analysis of language decoders (Mueller et al., 2017), we assume that G is L-Lipschitz in its continuous input z (denote this set of possible decoders by GL). When G is implemented as an RNN or Transformer language model, log pG(x|z) will remain Lipschitz in z if the recurrent or attention weight matrices have bounded norm. This property is naturally encouraged by popular training methods that utilize SGD with early stopping and L2 regularization (Zhang et al., 2017). Note we have not assumed E or G is Lipschitz in x, which would be unreasonable since x stands for discrete text, and when a few symbols change, the decoder likelihood for the entire sequence can vary drastically (e.g., G may assign a much higher probability to a grammatical sentence than an ungrammatical one that only differs by one word). Our discussion is directed to the nature of such families of log-likelihood functions with a continuous variable z and a discrete variable x.\nWe further assume an effectively trained discriminator that succeeds in its adversarial task: Assumption 3. D ensures that the latent encodings z1, · · · , zn of training examples x1, · · · , xn are indistinguishable from prior samples z ∼ p(z). For simplicity, we directly assume that z1, · · · , zn are actual samples from p(z) which are given a priori. Here, the task of the encoder E is to map given unique training examples to the given latent points, and the goal of the decoder pG(·|·) is to maximize −Lrec under the encoder mapping. The question now is which one-to-one mapping an optimal encoder/decoder will learn under the AAE and DAAE objectives (Eq. 2 and Eq. 4). We start with the following observation: Theorem 1. For any one-to-one encoder mapping E from {x1, · · · , xn} to {z1, · · · , zn}, the optimal value of objective maxG∈GL (1/n) ∑i log pG(xi|E(xi)) is the same.\nIntuitively, this result stems from the fact that the model receives no information about the structure of x, and x1, · · · , xn are simply provided as different symbols. Hence AAE offers no preference over x-z couplings, and a random matching in which the z do not reflect any data structure is equally good as any other matching (Figure 1, Left). Latent point assignments start to differentiate, however, once we introduce local input perturbations.\nTo elucidate how perturbations affect latent space geometry, it helps to first consider a simple setting with only four examples x1, x2, x3, x4 ∈ X . Again, we consider given latent points z1, z2, z3, z4 sampled from p(z), and the encoder/decoder are tasked with learning which x to match with which z. As depicted in Figure 1, suppose there are two pairs of x closer together and also two pairs of z closer together. Let σ denote the sigmoid function; we have the following conclusion: Theorem 2.
Let d be a distance metric over X . Suppose x1, x2, x3, x4 satisfy that with some ε > 0: d(x1, x2) < ε, d(x3, x4) < ε, and d(xi, xj) > ε for all other (xi, xj) pairs. In addition, z1, z2, z3, z4 satisfy that with some 0 < δ < ζ: ‖z1 − z2‖ < δ, ‖z3 − z4‖ < δ, and ‖zi − zj‖ > ζ for all other (zi, zj) pairs. Suppose our perturbation process C reflects local X geometry with: pC(xi|xj) = 1/2 if d(xi, xj) < ε and = 0 otherwise. For δ < (1/L)(2 log σ(Lζ) + log 2) and ζ > (1/L) log(1/(√2 − 1)), the denoising objective maxG∈GL (1/n) ∑i ∑j pC(xj|xi) log pG(xi|E(xj)) (where n = 4) achieves the largest value when encoder E maps close pairs of x to close pairs of z.\nThis entails that DAAE will always prefer to map similar x to similar z. Note that Theorem 1 still applies here, and AAE will not prefer any particular x-z pairing over the other possibilities. We next generalize beyond the basic four-point scenario to consider n examples of x that are clustered. Here, we can ask whether this cluster organization will be reflected in the latent space of DAAE. Theorem 3. Suppose x1, · · · , xn are divided into n/K clusters of equal size K, with Si denoting the cluster index of xi. Let the perturbation process C be uniform within clusters, i.e. pC(xi|xj) = 1/K if Si = Sj and = 0 otherwise. For a one-to-one encoder mapping E from {x1, · · · , xn} to {z1, · · · , zn}, the denoising objective maxG∈GL (1/n) ∑i ∑j pC(xj|xi) log pG(xi|E(xj)) is upper bounded by: (1/n²) ∑i,j:Si≠Sj log σ(L‖E(xi) − E(xj)‖) − log K.\nTheorem 3 provides an upper bound of the DAAE objective that can be achieved by a particular x-z mapping. This achievable limit is substantially better when examples in the same cluster are mapped to the latent space in a manner that is well-separated from encodings of other clusters. In other words, by preserving input space cluster structure in the latent space, DAAE can achieve better objective values and thus is incentivized to learn such encoder/decoder mappings. An analogous corollary can be shown for the case when examples x are perturbed to yield additional inputs x̃ not present in the training data. In this case, the model would aim to map each example and its perturbations as a group to a compact group of z points well-separated from other groups in the latent space.\nIn conclusion, our analysis shows that a well-trained DAAE is guaranteed to learn neighborhood-preserving latent representations, whereas even a perfectly-trained AAE model may learn latent representations whose geometry fails to reflect similarity in the x space. Empirical experiments in Section 5.2 confirm that our theory holds in practice." }, { "heading": "5 EXPERIMENTS", "text": "We evaluate our proposed model and other text autoencoders on two benchmark datasets: Yelp reviews and Yahoo answers (Shen et al., 2017; Yang et al., 2017). Detailed descriptions of datasets, training settings, human evaluations, and additional results/examples can be found in the appendix.\nPerturbation Process We randomly delete each word with probability p, so that perturbations of sentences with more words in common will have a larger overlap. We also tried replacing each word with a <mask> token or a random word and found that they all brought improvements, but deleting words worked best.
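The word-drop perturbation can be stated in a few lines; the sketch below is one plausible implementation (the function name and the empty-output guard are our assumptions, not the authors' code). Deletion makes the perturbation supports of similar sentences overlap, which is exactly the property the theory above relies on.

```python
import random

def word_drop(sentence, p=0.3):
    # Perturbation process C: delete each word independently with
    # probability p (p = 0.3 is the setting adopted later for DAAE).
    words = sentence.split()
    if not words:
        return sentence
    kept = [w for w in words if random.random() >= p]
    # Guard against deleting everything in very short sentences.
    return " ".join(kept) if kept else random.choice(words)
```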
We leave it to future work to explore more sophisticated text perturbations.\nBaselines We compare our proposed DAAE with four alternative text autoencoders: adversarially regularized autoencoder (Zhao et al., 2018, ARAE), β-VAE (Higgins et al., 2017), AAE (Makhzani et al., 2015), and latent-noising AAE (Rubenstein et al., 2018, LAAE). Similar to our model, the LAAE uses Gaussian perturbations in the latent space to improve AAE’s latent geometry (rather than perturbations in the sentence space). However, LAAE requires enforcing an L1 penalty (λ1 · ‖ log σ2(x)‖1) on the latent perturbations’ log-variance to prevent them from vanishing. In contrast, input perturbations in DAAE enable stochastic latent representations without parametric restrictions like Gaussianity." }, { "heading": "5.1 GENERATION-RECONSTRUCTION TRADE-OFF", "text": "We evaluate various latent variable generative models in terms of both generation quality and reconstruction accuracy. A strong model should not only generate high quality sentences, but also learn useful latent variables that capture significant data content. Recent work on text autoencoders has found an inherent tension between these aims (Bowman et al., 2016; Cífka et al., 2018), yet only when both goals are met can we successfully manipulate sentences by modifying their latent representation (in order to produce valid output sentences that retain the semantics of the input).\nWe compute the BLEU score (Papineni et al., 2002) between input and reconstructed sentences to measure reconstruction accuracy, and compute Forward/Reverse PPL to measure sentence generation quality (Zhao et al., 2018; Cífka et al., 2018).4 Forward PPL is the perplexity of a language model trained on real data and evaluated on generated data. It measures the fluency of the generated text, but cannot detect the collapsed case where the model repeatedly generates a few common sentences. Reverse PPL is the perplexity of a language model trained on generated data and evaluated on real data. It takes into account both the fluency and diversity of the generated text. If a model generates only a few common sentences, a language model trained on it will exhibit poor PPL on real data.\nWe thoroughly investigate the performance of different models and their trade-off between generation and reconstruction. Figure 2 plots reconstruction BLEU (higher is better) vs. Forward/Reverse PPL (lower is better). The lower right corner indicates an ideal situation where good reconstruction accuracy and generation quality are both achieved. For models with tunable hyperparameters, we sweep the full spectrum of their generation-reconstruction trade-off by varying the KL coefficient β of β-VAE, the log-variance L1 penalty λ1 of LAAE, and the word drop probability p of DAAE.\n4 While some use importance sampling estimates of data likelihood to evaluate VAEs (He et al., 2019), adopting the encoder as a proposal density is not suited for AAE variants, as they are optimized based on Wasserstein distances rather than likelihoods and lack closed-form posteriors.\nIn the left panel, we observe that a standard VAE (β = 1) completely collapses and ignores the latent variable z, resulting in reconstruction BLEU close to 0. At the other extreme, AAE can achieve near-perfect reconstruction, but its latent space is highly non-smooth and generated sentences are of poor quality, indicated by its large Forward PPL. 
Decreasing β in VAE or introducing latent noise in AAE provides the model with a similar trade-off curve between reconstruction and generation. We note that ARAE falls on or above their curves, revealing that it does not fare better than these methods (Cífka et al. (2018) also reported similar findings). Our proposed DAAE provides a trade-off curve that is strictly superior to other models. With discrete x and a complex encoder, the Gaussian perturbations added to the latent space by β-VAE and LAAE are not directly related to how the inputs are encoded. In contrast, input perturbations added by DAAE can constrain the encoder to maintain coherence between neighboring inputs in an end-to-end fashion and help learn a smoother latent space.\nThe right panel in Figure 2 illustrates that Reverse PPL first drops and then rises as we increase the degree of regularization/perturbation. This is because when z encodes little information, generations from prior-sampled z lack enough diversity to cover the real data. Again, DAAE outperforms the other models, which tend to have higher Reverse PPL and lower reconstruction BLEU. In subsequent experiments, we set β = 0.15 for β-VAE, λ1 = 0.05 for LAAE, and p = 0.3 for DAAE, to ensure they have strong reconstruction abilities and encode enough information to enable text manipulations." }, { "heading": "5.2 NEIGHBORHOOD PRESERVATION", "text": "In this section, we empirically investigate whether our previous theory holds in practice. That is, in actual autoencoder models trained on real text datasets, do sentence perturbations induce latent space organization that better preserves neighborhood structure in the data space?\nUnder our word-drop perturbation process, sentences with more words in common are more likely to be perturbed into one another. This choice of C approximately encodes sentence similarity via the normalized edit distance5. Within the test set, we find both the 10 nearest neighbors of each sentence based on the normalized edit distance (denote this set by NNx), as well as the k nearest neighbors based on Euclidean distance between latent representations (denote this set by NNz). We compute the recall rate |NNx ∩ NNz| / |NNx|, which indicates how well local neighborhoods are preserved in the latent space of different models.\n5Normalized edit distance ∈ [0, 1] is the Levenshtein distance divided by the max length of the two sentences.\nFigure 3 shows that DAAE consistently gives the highest recall, about 1.5∼2 times that of AAE, implying that input perturbations have a substantial effect on shaping the latent space geometry. Table 1 presents the five nearest neighbors found by AAE and DAAE in their latent space for example test set sentences. The AAE sometimes encodes entirely unrelated sentences close together, while the latent space geometry of the DAAE is structured based on key words such as “attentive” and “personable”, and tends to group sentences with similar semantics close together." }, { "heading": "5.3 APPLICATIONS TO CONTROLLABLE TEXT GENERATION", "text": "" }, { "heading": "5.3.1 STYLE TRANSFER VIA VECTOR ARITHMETIC", "text": "Mikolov et al. (2013) previously discovered that word embeddings from unsupervised learning can capture linguistic relationships via simple arithmetic. A canonical example is the embedding arithmetic “King” - “Man” + “Woman” ≈ “Queen”.
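The sentence-level analogue of this arithmetic, explored in the following subsections, reduces to a few latent-space operations. A sketch under the assumption of a trained encoder E and a decoder G exposing a greedy `decode` method (a hypothetical interface, not the authors' API):

```python
import torch

def attribute_vector(E, positive_sents, negative_sents):
    # v = mean latent code of attribute-positive sentences minus that of
    # attribute-negative ones (100 of each in the paper's setup).
    z_pos = torch.stack([E(s) for s in positive_sents]).mean(dim=0)
    z_neg = torch.stack([E(s) for s in negative_sents]).mean(dim=0)
    return z_pos - z_neg

def transfer(E, G, x, v, scale=1.0):
    # Decode G(z + scale * v); use a negative scale for the reverse direction.
    return G.decode(E(x) + scale * v)

def interpolate(E, G, x1, x2, steps=5):
    # Decode from t * z1 + (1 - t) * z2 for t in [0, 1] (Section 5.3.2).
    z1, z2 = E(x1), E(x2)
    return [G.decode(t * z1 + (1 - t) * z2)
            for t in [i / (steps - 1) for i in range(steps)]]
```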
Here, we use the Yelp dataset with tense and sentiment as two example attributes (Hu et al., 2017; Shen et al., 2017) to investigate whether analogous structure emerges in the latent space of our sentence-level models.\nTense We use the Stanford Parser to extract the main verb of a sentence and determine the sentence tense based on its part-of-speech tag. We compute a single “tense vector” by averaging the latent code z separately for 100 past tense sentences and 100 present tense sentences in the dev set, and then calculating the difference between the two. Given a sentence from the test set, we attempt to change its tense from past to present or from present to past through simple addition/subtraction of the tense vector. More precisely, a source sentence x is first encoded to z = E(x), and then the tense-modified sentence is produced via G(z ± v), where v ∈ R^d denotes the fixed tense vector.\nTo quantitatively compare different models, we compute their tense transfer accuracy as measured by the parser, the output BLEU with the input sentence, and output (forward) PPL evaluated by a language model. DAAE achieves the highest accuracy, lowest PPL, and relatively high BLEU (Table 2, Above), indicating that the output sentences produced by our model are more likely to be of high quality and of the proper tense, while remaining similar to the source sentence. A human evaluation on 200 test sentences (100 past and 100 present, details in Appendix G) suggests that DAAE outperforms β-VAE twice as often as it is outperformed, and our model successfully inverts tense for (48 + 26)/(200 − 34) = 44.6% of sentences, 13.8% more than β-VAE (Table 2, Below). Tables 4 and J.2 show the results of adding or subtracting this fixed latent vector offset under different models. DAAE can successfully change “enjoy” to “enjoyed”, or change the subjunctive mood to declarative mood and adjust the word order. Other baselines either fail to alter the tense, or undesirably change the semantic meaning of the source sentence (e.g. “enjoy” to “made”).\nSentiment Following the same procedure used to alter tense, we compute a “sentiment vector” v from 100 negative and 100 positive sentences and use it to change the sentiment of test sentences. Table 3 reports the automatic evaluations, and Tables 5 and J.3 show examples generated by AAE and DAAE. Scaling ±v to ±1.5v and ±2v, we find that the resulting sentences get more and more positive/negative. However, the PPL for AAE increases rapidly with this scaling factor, indicating that the sentences become unnatural when their encodings have a large offset. DAAE enjoys a much smoother latent space than AAE. Despite the fact that no sentiment labels are provided during training (a more challenging task than previous style transfer models (Shen et al., 2017)), DAAE with ±1.5v is able to transfer sentiment fairly well." }, { "heading": "5.3.2 SENTENCE INTERPOLATION VIA LATENT SPACE TRAVERSAL", "text": "We also study sentence interpolation by traversing the latent space of text autoencoders. Given two input sentences, we encode them to z1, z2 and decode from tz1 + (1 − t)z2 (0 ≤ t ≤ 1) to obtain interpolated sentences. Ideally this should produce fluent sentences with gradual semantic change (Bowman et al., 2016). Table 6 shows two examples from the Yelp dataset, where it is clear that DAAE produces more coherent and natural interpolations than AAE. Table J.4 in the appendix shows two difficult examples from the Yahoo dataset, where we interpolate between dissimilar sentences.
While it is challenging to generate semantically correct sentences in these cases, the latent space of our model exhibits continuity on topic and syntactic structure.\nInput 1 it ’s so much better than the other chinese food places in this area . fried dumplings are a must . Input 2 better than other places . the fried dumplings are a must if you ever visit this place ." }, { "heading": "6 CONCLUSION", "text": "This paper proposed DAAE for generative text modeling. As revealed in previous work (Devlin et al., 2018; Lample et al., 2018), we find that denoising techniques can greatly improve the learned text representations. We provide a theoretical explanation for this phenomenon by analyzing the latent space geometry arising from input perturbations. Our proposed model substantially outperforms other text autoencoders, and demonstrates potential for various text manipulations via vector operations. Future work may investigate superior perturbation strategies and additional properties of latent space geometry to provide finer control over the text generated using autoencoder models." }, { "heading": "A WASSERSTEIN DISTANCE", "text": "The AAE objective can be connected to a relaxed form of the Wasserstein distance between model and data distributions (Tolstikhin et al., 2017). Specifically, for cost function c(·, ·) : X × X → R and deterministic decoder mapping G : Z → X , it holds that:\ninfΓ∈P(x∼pdata, y∼pG) E(x,y)∼Γ[c(x, y)] = infq(z|x):q(z)=p(z) Epdata(x) Eq(z|x)[c(x, G(z))] (7)\nwhere the minimization over couplings Γ with marginals pdata and pG can be replaced with minimization over conditional distributions q(z|x) whose marginal q(z) = Epdata(x)[q(z|x)] matches the latent prior distribution p(z). Relaxing this marginal constraint via a divergence penalty D(q(z)‖p(z)) estimated by adversarial training, one recovers the AAE objective (Eq. 1). In particular, AAE on discrete x with the cross-entropy loss is minimizing an upper bound of the total variation distance between pdata and pG, with c chosen as the indicator cost function (Zhao et al., 2018).\nOur model is optimizing over conditional distributions q(z|x) of the form (6), a subset of all possible conditional distributions. Thus, after introducing input perturbations, our method is still minimizing an upper bound of the Wasserstein distance between pdata and pG described in (7)." }, { "heading": "B PROOF OF THEOREM 1", "text": "Theorem 1. For any one-to-one encoder mapping E from {x1, · · · , xn} to {z1, · · · , zn}, the optimal value of objective maxG∈GL (1/n) ∑i log pG(xi|E(xi)) is the same.\nProof. Consider two encoder matchings xi to zα(i) and xi to zβ(i), where both α and β are permutations of the indices {1, . . . , n}. Suppose Gα is the optimal decoder model for the first matching (with permutation α). This implies\npGα = arg maxG∈GL ∑i log pG(xi|zα(i))\nNow let pGβ(xi|zj) = pGα(xβ(α−1(i))|zj), ∀i, j. Then Gβ can achieve exactly the same log-likelihood objective value for matching β as Gα for matching α, while still respecting the Lipschitz constraint." }, { "heading": "C PROOF OF THEOREM 2", "text": "Theorem 2. Let d be a distance metric over X . Suppose x1, x2, x3, x4 satisfy that with some ε > 0: d(x1, x2) < ε, d(x3, x4) < ε, and d(xi, xj) > ε for all other (xi, xj) pairs. In addition, z1, z2, z3, z4 satisfy that with some 0 < δ < ζ: ‖z1 − z2‖ < δ, ‖z3 − z4‖ < δ, and ‖zi − zj‖ > ζ for all other (zi, zj) pairs. Suppose our perturbation process C reflects local X geometry with: pC(xi|xj) = 1/2 if d(xi, xj) < ε and = 0 otherwise.
For δ < 1L (2 log (σ(Lζ)) + log 2) and ζ > 1 L log ( 1/( √ 2− 1) ) , the denoising objective maxG∈GL 1 n ∑n i=1 ∑n j=1 pC(xj |xi) log pG(xi|E(xj)) (where n = 4) achieves the largest value when encoder E maps close pairs of x to close pairs of z.\nProof. Let [n] denote {1, . . . , n}, and assume without loss of generality that the encoder E maps each xi to zi. We also define A = {1, 2}, B = {3, 4} as the two x-pairs that lie close together. For our choice of C(x), the training objective to be maximized is:∑\ni,j∈A log pG(xi|E(xj)) + ∑ k,`∈B log pG(xk|E(x`))\n= ∑ i,j∈A log pG(xi|zj) + ∑ k,`∈B log pG(xk|z`) (8)\nThe remainder of our proof is split into two cases:\nCase 1. ||zj − z`|| > ζ for j ∈ A, ` ∈ B\nCase 2. ||zj − z`|| < δ for j ∈ A, ` ∈ B Under Case 1, x points that lie far apart also have z encodings that remain far apart. Under Case 2, x points that lie far apart have z encodings that lie close together. We complete the proof by showing that the achievable objective value in Case 2 is strictly worse than in Case 1, and thus an optimal encoder/decoder pair would avoid the x, z matching that leads to Case 2.\nIn Case 1 where ||zj − z`|| > ζ for all j ∈ A, ` ∈ B, we can lower bound the training objective (8) by choosing:\npG(xi|zj) = {\n(1− γ)/2 if i, j ∈ A or i, j ∈ B γ/2 otherwise\n(9)\nwith γ = σ(−Lζ) ∈ (0, 12 ), where σ(·) denotes the sigmoid function. Note that this ensures∑ i∈[4] pG(xi|zj) = 1 for each j ∈ [4], and does not violate the Lipschitz condition from Assumption 2 since:\n| log pG(xi|zj)− log pG(xi|z`)| {\n= 0 if j, ` ∈ A or j, ` ∈ B ≤ log ((1− γ)/γ) otherwise\nand thus remains≤ L||zj−z`|| when γ = σ(−Lζ) ≥ σ(−L||zj−z`||) = 1/[1+exp(L||zj−z`||)]. Plugging the pG(x|z) assignment from (9) into (8), we see that an optimal decoder can obtain training objective value ≥ 8 log [σ(Lζ)/2] in Case 1 where ||zj − z`|| > ζ, ∀j ∈ A, ` ∈ B.\nNext, we consider the alternative case where ||zj − z`|| < δ for j ∈ A, ` ∈ B. For i, j ∈ A and for all ` ∈ B, we have:\nlog pG(xi|zj) ≤ log pG(xi|z`) + L||zj − z`|| by Assumption 2 ≤ log pG(xi|z`) + Lδ\n≤ Lδ + log [\n1− ∑ k∈B\npG(xk|z`) ]\nsince ∑ k pG(xk|z`) ≤ 1\nContinuing from (8), the overall training objective in this case is thus:∑ i,j∈A log pG(xi|zj) + ∑ k,`∈B log pG(xk|z`)\n≤ 4Lδ + ∑ i,j∈A min `∈B log\n[ 1−\n∑ k∈B\npG(xk|z`) ]\n+ ∑ k,`∈B log pG(xk|z`)\n≤ 4Lδ + ∑ `∈B\n[ 2 log ( 1−\n∑ k∈B\npG(xk|z`) )\n+ ∑ k∈B\nlog pG(xk|z`) ]\n≤ 4Lδ − 12 log 2 using the fact that the optimal decoder for the bound in this case is: pG(xk|z`) = 1/4 for all k, ` ∈ B. Finally, plugging our range for δ stated in the Theorem 2, it shows that the best achievable objective value in Case 2 is strictly worse than the objective value achievable in Case 1. Thus, the optimal encoder/decoder pair under the AAE with perturbed x will always prefer the matching between {x1, . . . , x4} and {z1, . . . , z4} that ensures nearby xi are encoded to nearby zi (corresponding to Case 1)." }, { "heading": "D PROOF OF THEOREM 3", "text": "Theorem 3. Suppose x1, · · · , xn are divided into n/K clusters of equal size K, with Si denoting the cluster index of xi. Let the perturbation process C be uniform within clusters, i.e. pC(xi|xj) = 1/K if Si = Sj and = 0 otherwise. For a one-to-one encoder mapping E from {x1, · · · , xn} to {z1, · · · , zn}, the denoising objective maxG∈GL 1n ∑n i=1 ∑n j=1 pC(xj |xi) log pG(xi|E(xj)) is\nupper bounded by: 1n2 ∑ i,j:Si 6=Sj log σ(L‖E(xi)− E(xj)‖)− logK.\nProof. Without loss of generality, let E(xi) = zi for notational convenience. 
We consider what the optimal decoder probability assignment pG(xi|zj) is under the Lipschitz constraint (Assumption 2). The objective of the AAE with perturbed x is to maximize:\n(1/n) ∑i ∑j pC(xj|xi) log pG(xi|E(xj)) = (1/(nK)) ∑j ∑i:Si=Sj log pG(xi|zj)\nWe first show that the optimal pG(·|·) will assign the same probability within a cluster, i.e. pG(xi|zj) = pG(xk|zj) for all i, k s.t. Si = Sk. If not, let Ps,j = ∑i:Si=s pG(xi|zj), and we reassign pG′(xi|zj) = PSi,j/K. Then G′ still conforms to the Lipschitz constraint if G meets it, and G′ will have a larger target value than G.\nNow let us define Pj = ∑i:Si=Sj pG(xi|zj) = K · pG(xj|zj) (0 ≤ Pj ≤ 1). The objective becomes:\nmaxpG (1/(nK)) ∑j ∑i:Si=Sj log pG(xi|zj) = maxpG (1/n) ∑j log pG(xj|zj) = maxpG (1/n) ∑j log Pj − log K = maxpG (1/(2n²)) ∑i ∑j (log Pi + log Pj) − log K ≤ (1/(2n²)) ∑i ∑j maxpG (log Pi + log Pj) − log K\nConsider each term maxpG(log Pi + log Pj): when Si = Sj, this term can achieve the maximum value 0 by assigning Pi = Pj = 1; when Si ≠ Sj, the Lipschitz constraint ensures that:\nlog(1 − Pi) ≥ log Pj − L‖zi − zj‖ and log(1 − Pj) ≥ log Pi − L‖zi − zj‖\nTherefore:\nlog Pi + log Pj ≤ 2 log σ(L‖zi − zj‖)\nOverall, we thus have:\nmaxpG (1/(nK)) ∑j ∑i:Si=Sj log pG(xi|zj) ≤ (1/n²) ∑i,j:Si≠Sj log σ(L‖zi − zj‖) − log K" }, { "heading": "E DATASETS", "text": "The Yelp dataset is from Shen et al. (2017), which has 444K/63K/127K sentences of less than 16 words in length as train/dev/test sets, with a vocabulary of 10K. It was originally divided into positive and negative sentences for style transfer between them. Here we discard the sentiment label and let the model learn from all sentences indiscriminately. Our second dataset of Yahoo answers is from Yang et al. (2017). It was originally document-level. We perform sentence segmentation and keep sentences with length from 2 to 50 words. The resulting dataset has 495K/49K/50K sentences for train/dev/test sets, with vocabulary size 20K." }, { "heading": "F EXPERIMENTAL DETAILS", "text": "We use the same architecture to implement all models with different objectives. The encoder E, generator G, and the language model used to compute Forward/Reverse PPL are one-layer LSTMs with hidden dimension 1024 and word embedding dimension 512. The last hidden state of the encoder is projected into 128/256 dimensions to produce the latent code z for the Yelp/Yahoo datasets respectively, which is then projected and added to the input word embeddings fed to the generator. The discriminator D is an MLP with one hidden layer of size 512. λ of AAE-based models is set to 10 to ensure the latent codes are indistinguishable from the prior. All models are trained via the Adam optimizer (Kingma & Ba, 2014) with learning rate 0.0005, β1 = 0.5, β2 = 0.999. At test time, encoder-side perturbations are disabled, and we use greedy decoding to generate x from z." }, { "heading": "G HUMAN EVALUATION", "text": "For the tense transfer experiment, the human annotator is presented with a source sentence and two outputs (one from each approach, presented in random order) and asked to judge which one successfully changes the tense while being faithful to the source, or whether both are good/bad, or if the input is not suitable to have its tense inverted.
We collect labels from two human annotators and if they disagree, we further solicit a label from the third annotator.\nH GENERATION-RECONSTRUCTION RESULTS ON THE YAHOO DATASET\n[Figure H.1 here: two scatter panels of reconstruction BLEU vs. Forward/Reverse PPL on Yahoo, with data points annotated by the swept hyperparameter values (β for β-VAE, λ1 for LAAE, p for DAAE).]\nFigure H.1: Generation-reconstruction trade-off of various text autoencoders on Yahoo. The “real data” line marks the PPL of a language model trained and evaluated on real data. We strive to approach the lower right corner with both high BLEU and low PPL. The grey box identifies hyperparameters we use for respective models in subsequent experiments. Points of severe collapse (Reverse PPL > 300) are removed from the right panel." }, { "heading": "I NEIGHBORHOOD PRESERVATION", "text": "" }, { "heading": "J ADDITIONAL EXAMPLES", "text": "Source how many gospels are there that were n’t included in the bible ?\n5-NN by AAE there are no other gospels that were n’t included in the bible . how many permutations are there for the letters in the word _UNK ’ ? anyone else picked up any of the _UNK in the film ? what ’s the significance of the number 40 in the bible ? how many pieces of ribbon were used in the _UNK act ?\n5-NN by DAAE there are no other gospels that were n’t included in the bible . how many litres of water is there in the sea ? how many _UNK gods are there in the classroom ? how many pieces of ribbon were used in the _UNK act ? how many times have you been grounded in the last year ?\nSource how do i change colors in new yahoo mail beta ?\n5-NN by AAE how should you present yourself at a _UNK speaking exam ? how can i learn to be a hip hop producer ? how can i create a _UNK web on the internet ? how can i change my _UNK for female not male ? what should you look for in buying your first cello ?\n5-NN by DAAE how do i change that back to english ? is it possible to _UNK a yahoo account ? how do i change my yahoo toolbar options ? what should you look for in buying your first cello ? who do you think should go number one in the baseball fantasy draft , pujols or _UNK ?\nTable J.1: Examples of nearest neighbors in the latent Euclidean space of AAE and DAAE on Yahoo dataset.\nInput the staff is rude and the dr. does not spend time with you .
slow service , the food tasted like last night ’s leftovers . ARAE the staff is rude and the dr. does not worth two with you . slow service , the food tasted like last night ’s leftovers . β-VAE the staff was rude and the dr. did not spend time with your attitude . slow service , the food tastes like last place serves . AAE the staff was rude and the dr. does not spend time with you . slow service , the food tasted like last night ’s leftovers . LAAE the staff was rude and the dr. is even for another of her entertained . slow service , the food , on this burger spot ! DAAE the staff was rude and the dr. did not make time with you . slow service , the food tastes like last night ... .\nInput they are the worst credit union in arizona . i reported this twice and nothing was done . ARAE they are the worst bank credit in arizona . i swear this twice and nothing was done . β-VAE they were the worst credit union in my book . i ’ve gone here and nothing too . AAE they are the worst credit union in arizona . i reported this twice and nothing was done . LAAE they were the worst credit union in my heart . i dislike this twice so pleasant guy . DAAE they were the worst credit union in arizona ever . i hate this pizza and nothing done .\nTable J.2: Additional examples of vector arithmetic for tense inversion.\nAAE DAAE\nInput this woman was extremely rude to me . this woman was extremely rude to me . +v this woman was extremely rude to me . this woman was extremely nice . +1.5v this woman was extremely rude to baby . this staff was amazing . +2v this woman was extremely rude to muffins . this staff is amazing .\nInput my boyfriend said his pizza was basic and bland also . my boyfriend said his pizza was basic and bland also . +v my boyfriend said his pizza was basic and tasty also . my boyfriend said his pizza is also excellent . +1.5v my shared said friday pizza was basic and tasty also . my boyfriend and pizza is excellent also . +2v my shared got pizza pasta was basic and tasty also . my smoked pizza is excellent and also exceptional .\nInput the stew is quite inexpensive and very tasty . the stew is quite inexpensive and very tasty . −v the stew is quite inexpensive and very tasty . the stew is quite an inexpensive and very large . −1.5v the stew is quite inexpensive and very very tasteless . the stew is quite a bit overpriced and very fairly brown . −2v the – was being slow - very very tasteless . the hostess was quite impossible in an expensive and very few customers .\nInput the patrons all looked happy and relaxed . the patrons all looked happy and relaxed . −v the patrons all looked happy and relaxed . the patrons all helped us were happy and relaxed . −1.5v the patrons all just happy and smelled . the patrons that all seemed around and left very stressed . −2v the patrons all just happy and smelled . the patrons actually kept us all looked long and was annoyed .\nTable J.3: Additional examples of vector arithmetic for sentiment transfer.\nInput 1 what language should i learn to be more competitive in today ’s global culture ? Input 2 what languages do you speak ?\nAAE what language should i learn to be more competitive in today ’s global culture ? what language should i learn to be more competitive in today ’s global culture ? what language should you speak ? what languages do you speak ? what languages do you speak ?\nDAAE what language should i learn to be more competitive in today ’s global culture ? what language should i learn to be competitive today in arabic ’s culture ? 
what languages do you learn to be english culture ? what languages do you learn ? what languages do you speak ?\nInput 1 i believe angels exist . Input 2 if you were a character from a movie , who would it be and why ?\nAAE i believe angels exist . i believe angels - there was the exist exist . i believe in tsunami romeo or <unk> i think would it exist as the world population . if you were a character from me in this , would we it be ( why ! if you were a character from a movie , who would it be and why ?\nDAAE i believe angels exist . i believe angels exist in the evolution . what did <unk> worship by in <unk> universe ? if you were your character from a bible , it will be why ? if you were a character from a movie , who would it be and why ?\nTable J.4: Interpolations between two input sentences generated by AAE and our model on the Yahoo dataset." } ]
2019
null
SP:fe137babff80e9e5f48e44f36a86a71d095d6264
[ "This paper proposes a new option discovery method for multi-task RL to reuse the option learned in previous tasks for better generalization. The authors utilize demonstrations collected beforehand and train an option learning framework offline by minimizing the expected number of terminations while encouraging diverse options by adding a regularization term. During the offline training, they add one option at a time and move onto the next option when the current loss fails to improve over the previous loss, which enables automatically learning the number of options without manually specifying it. Experiments are conducted on the four rooms environment and Atari 2600 games and demonstrate that the proposed method leads to faster learning on new tasks.", "The authors propose to learn reusable options to make use of prior information and claim to do so with minimal information from the user (such as # of options needed to solve the task, which options etc). The claim is that the agent is first able to learn a near-optimal policy for a small # of problems and then is able to solve a large # of tasks by such a learned policy. The authors build on the idea that minimizing the number of decisions made by the agent results in discovering reusable options. The options are learned offline by learning to solve a small number of tasks. Their algorithm introduces one option at a time until introducing a new option doesn’t improve the objective further. The ideas are interesting, However, the paper as it stands is lacking in thorough evaluation." ]
Reinforcement learning (RL) has become an increasingly active area of research in recent years. Although there are many algorithms that allow an agent to solve tasks efficiently, they often ignore the possibility that prior experience related to the task at hand might be available. For many practical applications, it might be unfeasible for an agent to learn how to solve a task from scratch, given that it is generally a computationally expensive process; however, prior experience could be leveraged to make these problems tractable in practice. In this paper, we propose a framework for exploiting existing experience by learning reusable options. We show that after an agent learns policies for solving a small number of problems, we are able to use the trajectories generated from those policies to learn reusable options that allow an agent to quickly learn how to solve novel and related problems.
[]
[ { "authors": [ "Haitham Bou Ammar", "Eric Eaton", "Paul Ruvolo", "Matthew E. Taylor" ], "title": "Unsupervised cross-domain transfer in policy gradient reinforcement learning via manifold alignment", "venue": "In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Pierre-Luc Bacon", "Jean Harb", "Doina Precup" ], "title": "The option-critic architecture", "venue": "In AAAI,", "year": 2017 }, { "authors": [ "Alon Farchy", "Samuel Barrett", "Patrick MacAlpine", "Peter Stone" ], "title": "Humanoid robots learning to walk faster: From the real world to simulation and back", "venue": "In Proc. of 12th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS),", "year": 2013 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Jean Harb", "Pierre-Luc Bacon", "Martin Klissarov", "Doina Precup" ], "title": "When waiting is not an option: Learning options with a deliberation cost", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Anna Harutyunyan", "Will Dabney", "Diana Borsa", "Nicolas Heess", "Remi Munos", "Doina Precup" ], "title": "The termination critic", "venue": "In AISTAT,", "year": 2019 }, { "authors": [ "Marlos C. Machado", "Marc G. Bellemare", "Michael Bowling" ], "title": "A Laplacian Framework for Option Discovery in Reinforcement Learning", "venue": null, "year": 2017 }, { "authors": [ "Sridhar Mahadevan" ], "title": "Proto-value functions: Developmental reinforcement learning", "venue": "In Proceedings of the 22nd International Conference on Machine Learning", "year": 2005 }, { "authors": [ "A. McGovern", "R. Sutton" ], "title": "Macro actions in reinforcement learning: An empirical analysis", "venue": "Technical report,", "year": 1998 }, { "authors": [ "Amy McGovern", "Andrew G. Barto" ], "title": "Automatic discovery of subgoals in reinforcement learning using diverse density", "venue": "In Proceedings of the Eighteenth International Conference on Machine Learning,", "year": 2001 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A. Rusu", "Joel Veness", "Marc G. Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K. Fidjeland", "Georg Ostrovski", "Stig Petersen", "Charles Beattie", "Amir Sadik", "Ioannis Antonoglou", "Helen King", "Dharshan Kumaran", "Daan Wierstra", "Shane Legg", "Demis Hassabis" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "Proceedings of The 33rd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Jürgen Schmidhuber", "Jieyu Zhao", "Nicol N. Schraudolph" ], "title": "Learning to learn. chapter Reinforcement Learning with Self-modifying Policies, pp. 293–309", "venue": "URL http://dl.acm.org/citation.cfm?id=296635", "year": 1998 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "On learning how to learn learning strategies", "venue": "Technical report,", "year": 1995 }, { "authors": [ "David Silver", "Aja Huang", "Chris J. 
Maddison", "Arthur Guez", "Laurent Sifre", "George van den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot", "Sander Dieleman", "Dominik Grewe", "John Nham", "Nal Kalchbrenner", "Ilya Sutskever", "Timothy Lillicrap", "Madeleine Leach", "Koray Kavukcuoglu", "Thore Graepel", "Demis Hassabis" ], "title": "Mastering the game of Go with deep neural networks and tree", "venue": "search. Nature,", "year": 2016 }, { "authors": [ "Richard S. Sutton", "Doina Precup" ], "title": "Intra-option learning about temporally abstract actions", "venue": "Proceedings of the 15th International Conference on Machine Learning (ICML-1998),", "year": 1998 }, { "authors": [ "Richard S. Sutton", "Doina Precup", "Satinder P. Singh" ], "title": "Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning", "venue": "Artificial Intelligence,", "year": 1999 }, { "authors": [ "Matthew E. Taylor", "Peter Stone" ], "title": "Transfer learning for reinforcement learning domains: A survey", "venue": "J. Mach. Learn. Res.,", "year": 2009 }, { "authors": [ "Matthew E. Taylor", "Peter Stone", "Yaxin Liu" ], "title": "Transfer learning via inter-task mappings for temporal difference learning", "venue": "J. Mach. Learn. Res.,", "year": 2007 }, { "authors": [ "Gerald Tesauro" ], "title": "Temporal difference learning and td-gammon", "venue": "Commun. ACM,", "year": 1995 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) techniques have experienced much of their success in simulated environments, such as video games (Mnih et al., 2015) or board games (Silver et al., 2016; Tesauro, 1995). One of the main reasons why RL has worked so well in these applications is that we are able simulate millions of interactions with the environment in a relatively short period of time. In many real world applications, however, where the agent interacts with the physical world, it might not be easy to generate such a large number of interactions. The time and cost associated with training such systems could render RL an unfeasible approach for training in large scale.\nAs a concrete example, consider training a large number of humanoid robots (agents) to move quickly, as in the Robocup competition (Farchy et al., 2013). Although the agents have similar dynamics, subtle variations mean that a single policy shared across all agents would not be an effective solution. Furthermore, learning a policy from scratch for each agent is too data-inefficient to be practical. As shown by Farchy et al. (2013), this type of problem can be addressed by leveraging the experience obtained from solving a related task (e.g., walking) to quickly learn a policy for each individual agent that is tailored to a new task (e.g., running). These situations also occurs in industry, such as robots tasked with sorting items in fulfillment centers. A simple approach, like using PD controllers, would fail to adapt to the forces generated from picking up objects with different weight distributions, causing the arm to drop the objects. RL is able to mitigate this problem by learning a policy for each arm that is able to make corrections quickly, which is tailored to the robot’s dynamics. However, training a new policy for each agent would be far too costly to be a practical solution. In these scenarios, it is possible to use a small number of policies learned a subset of the agents, and then leverage the experience obtained from learning those policies to allow the remaining agents to quickly learn their corresponding policies. This approach can turn problems that are prohibitively expensive to solve into relatively simple problems.\nTo make use of prior experience and improve learning on new related problems in RL, several lines of work, which are complementary to each other, have been proposed and are actively being studied. Transfer learning (Taylor & Stone, 2009) refers to the problem of adapting information acquired while solving one task to another. One might consider learning a mapping function that allows for a policy learned in one task to be used in a different task (Ammar et al., 2015) or simply learn a mapping of the value function learned in one task to another (Taylor et al., 2007). These techniques can be quite effective, but are also limited in that they consider mapping information from one source task to another target task. Another approach to reusing prior knowledge is through meta learning\nor learning to learn (Schmidhuber, 1995; Schmidhuber et al., 1998). In the context of RL, the goal under this framework for an agent to be exposed to a number of tasks where it can learn some general behavior that generalizes to new tasks (Finn et al., 2017).\nOne last technique to leverage prior experience, and the one this paper focuses on, is through temporally extended actions or temporal abstractions (McGovern & Sutton, 1998; Sutton et al., 1999). 
While in the standard RL framework the agent has access to a set of primitive actions (i.e., actions that last for one time-step), temporally extended actions allow an agent to execute actions that last for several time-steps. They introduce a bias in the behavior of the agent which, if appropriate for the problem at hand, results in dramatic improvements in how quickly the agent learns to solve a new task. A popular representation for temporally extended actions is the options framework (Sutton & Precup, 1998; Sutton et al., 1999) (formally introduced in the next section), which is the focus of this work. It has been shown that options learned in a specific task or set of tasks can be reused to improve learning on new tasks (Machado et al., 2017; Bacon et al., 2017); however, this often requires knowledge from the user about which options or how many options are appropriate for the type of problems the agent will face.\nIn this paper, we propose learning reusable options for a set of related tasks with minimal information provided by the user. Throughout this paper, we use the term (near)-optimal policies for policies that were learned to solve a particular task but are not, strictly speaking, optimal. We consider the scenario where the agent must solve a large number of tasks and show that after learning a (near)-optimal policy for a small number of problems, we can learn an appropriate number of options that facilitate learning on the remaining set of tasks. To do so, we propose learning a set of options that minimize the expected number of decisions needed to represent trajectories generated from the (near)-optimal policies learned by the agent, while also maximizing the probability of generating those trajectories. Unlike techniques that learn options to reach bottleneck states (McGovern & Barto, 2001) or states deemed of high value (Machado et al., 2017), our method seeks to learn options that are able to generate trajectories known to perform well. This does not necessarily lead to learning options that reach states one might consider “interesting”." }, { "heading": "2 BACKGROUND AND NOTATION", "text": "A Markov decision process (MDP) is a tuple, M = (S, A, P, R, γ, d_0), where S is the set of possible states of the environment, A is the set of possible actions that the agent can take, P(s, a, s′) is the probability that the environment will transition to state s′ ∈ S if the agent executes action a ∈ A in state s ∈ S, R(s, a, s′) is the expected reward received after taking action a in state s and transitioning to state s′, d_0 is the initial state distribution, and γ ∈ [0, 1] is a discount factor for rewards received in the future. We use t to index the time-step and write S_t, A_t, and R_t to denote the state, action, and reward at time t. A policy, π : S × A → [0, 1], provides a conditional distribution over actions given each possible state: π(s, a) = Pr(A_t = a | S_t = s). We denote a trajectory of length t as h_t = (s_0, a_0, r_0, . . . , s_{t−1}, a_{t−1}, r_{t−1}, s_t); that is, h_t is defined as a sequence of states, actions, and rewards observed after following some policy for t time-steps. This work focuses on learning options that can be used for a set of related tasks. 
We consider the setting where an agent must solve a set of related tasks, where each task is an MDP, M = (S, A, P_M, R_M, γ, d_0^M); that is, each task is an MDP with its own transition function, reward function, and initial state distribution, with shared state and action sets.\nAn option, o = (I_o, µ_o, β_o), is a tuple in which I_o ⊆ S is the set of states in which option o can be executed (the initiation set), µ_o is a policy that governs the behavior of the agent while executing o, and β_o : S → [0, 1] is a termination function that determines the probability that o terminates in a given state. We assume that I_o = S for all options o; that is, the options are available at every state. The options framework does not dictate how an agent should choose between available options or how options should be discovered. A common approach to selecting between options is to learn a policy over options, which is defined by the probability of choosing an option in a particular state. Two recent popular approaches to option discovery are eigenoptions (Machado et al., 2017) and the option-critic architecture (Bacon et al., 2017).\nThe eigenoptions (Machado et al., 2017) of an MDP are the optimal policies for a set of implicitly defined reward functions called eigenpurposes. Eigenpurposes are defined in terms of proto-value functions (Mahadevan, 2005), which are in turn derived from the eigenvectors of a modified adjacency matrix over states for the MDP. The intuition is that no matter the true reward function, the eigenoptions allow an agent to quickly traverse the transition graph, resulting in better exploration of the state space and faster learning. However, there are two major downsides: 1) the adjacency matrix is often not known a priori, and may be difficult to construct for large MDPs, and 2) for each eigenpurpose, constructing the corresponding eigenoption requires solving a new MDP. The option-critic architecture (Bacon et al., 2017) is a more direct approach that learns options and a policy over options simultaneously using policy gradient methods. One issue that often arises within this framework is that the termination functions of the learned options tend to collapse to “always terminate”. In a later publication, the authors built on this work to consider the case where there is a cost associated with switching options (Harb et al., 2018). This method resulted in the agent learning to use a single option while it was appropriate and to terminate when an option switch was needed, allowing it to discover improved policies for a particular task. The authors argue that minimizing the use of the policy over options may be desirable, as the cost of choosing an option may be greater than the cost of choosing a primitive action when using an option. Recent work by Harutyunyan et al. (2019) approaches the aforementioned termination problem by explicitly optimizing the termination function of options to focus on small regions of the state space. However, in contrast to the work presented in this paper, these methods do not explicitly take into consideration that the agent might face many related tasks in the future.\nWe build on the idea that minimizing the number of decisions made by an agent leads to the discovery of general reusable options, and propose an offline method in which the options are learned by solving a small number of tasks. The options are then leveraged to quickly solve new problems the agent will face in the future. 
We use the trajectories generated while learning (near)-optimal policies, and learn an appropriate set of options by directly minimizing the expected number of decisions the agent makes while simultaneously maximizing the probability of generating the observed trajectories." }, { "heading": "3 LEARNING REUSABLE OPTIONS FROM EXPERIENCE", "text": "In this section, we introduce the objective for learning a set of reusable options for a set of related tasks. Our algorithm introduces one option at a time until introducing a new option does not improve the objective further. This procedure results in a natural way of learning an adequate number of options without having to predefine it; a new option is included if it is able to improve the probability of generating optimal behavior while minimizing the number of decisions made by the agent. Our method assumes that the agent has learned a policy for a small number of tasks, and sample trajectories are obtained from these (near)-optimal policies. Notice that the proposed algorithm is only concerned with being able to recreate the demonstrated trajectories, so if these were sampled from a poorly performing policy, the options learned are unlikely to provide any benefits." }, { "heading": "3.1 PROBLEM FORMULATION", "text": "In the options framework, at each time-step, t, the agent chooses an action, A_t, based on the current option, O_t. Let T_t be a Bernoulli random variable, where T_t = 1 if the previous option, O_{t−1}, terminated at time t, and T_t = 0 otherwise. If T_t = 1, O_t is chosen using the policy over options, π. If T_t = 0, then the previous option continues; that is, O_t = O_{t−1}. To ensure we can represent any trajectory, we consider primitive actions to be options which always select one specific action and then terminate; that is, for an option, o, corresponding to a primitive, a, for all s ∈ S, the termination function would be given by β_o(s) = 1, and the policy by µ_o(s, a′) = 1 if a′ = a and 0 otherwise.\nLet O = O_A ∪ O_O denote a set of options, {o_1, . . . , o_n}, where O_A refers to the set of options corresponding to primitive actions and O_O to the set corresponding to temporal abstractions. Furthermore, let H be a random variable denoting a trajectory of length |H| generated by a near-optimal policy, and let H_t be a random variable denoting the sub-trajectory of H up to the state encountered at time-step t. We seek to find a set, O* = {o*_1, . . . , o*_n}, that maximizes the following objective:\n\n$J(\pi, \mathcal{O}) = \mathbb{E}\left[ \sum_{t=1}^{|H|} \Pr(T_t = 0, H_t \mid \pi, \mathcal{O}) + \lambda_1 g(H, \mathcal{O}_O) \right]$, (1)\n\nwhere g(h, O_O) is a regularizer that encourages a diverse set of options, and λ_1 is a scalar hyperparameter. If we are also free to learn the parameters of π, then $\mathcal{O}^* \in \arg\max_{\mathcal{O}} \max_{\pi} J(\pi, \mathcal{O})$.\nOne choice for g is the average KL divergence on a given trajectory over the set of m options being learned: $g(h, \mathcal{O}_O) = \frac{2}{m(m-1)} \sum_{o, o' \in \mathcal{O}_O} \sum_{t=0}^{|h|-1} D_{KL}\big(\mu_o(s_t) \,\|\, \mu_{o'}(s_t)\big)$.¹ Intuitively, we seek to find options that are capable of generating near-optimal trajectories with a small number of terminations. Notice that minimizing the number of terminations is the same as minimizing the number of decisions made by the policy over options, as each termination requires the policy to choose a new option. Given a set of options, a policy over options, and a near-optimal sample trajectory, we can calculate the joint probability for a trajectory exactly, and estimate equation 1 by averaging over a set of near-optimal trajectories."
}, { "heading": "3.2 OPTIMIZATION OBJECTIVE FOR LEARNING OPTIONS", "text": "Given that the agent must solve a set of tasks, we can use the experienced gathered on a subset of tasks to obtain trajectories demonstrating optimal behavior. Given a set,H, of trajectories generated from an initial subset of tasks, we can now estimate the expectation in equation 1 to learn options that can be leveraged in the remaining problems. Because the probability of generating any trajectory approaches 0 as the length of the trajectory increases, we make modify the original objective for better numerical stability, and arrive to the objective Ĵ that we optimize in practice.\nĴ(π,O,H) = 1H ∑ h∈H ( λ2 Pr(H = h|π,O)︸ ︷︷ ︸\nprobability of generating h\n− ∑|h|\nt=1 E [Tt = 1|Ht = ht, π,O] |h|︸ ︷︷ ︸\nexpected number of terminations\n+ λ1g(h,OO) )︸ ︷︷ ︸\nencourage diverse options\n.\n(2)\nA more detailed discussion on how we arrived to this objective is provided in Appendix A. We can express equation 2 entirely in terms of the policy over options π, options O = {o1, . . . , on} and the transition function, P (which we estimate from samples). The following theorems show how to calculate the first two terms in equation 2, allowing us to maximize the proposed objective. Theorem 1. Given a set of options, O, and a policy, π, over options, the expected number of terminations for a trajectory h is given by:\n|h|∑ t=1 E [ Tt = 1 ∣∣∣∣Ht = ht, π,O] = |h|∑ t=1 ∑ o∈O βo(st) µo(st−1, at−1) Pr(Ot−1 = o|Ht−1 = ht−1, π,O)∑ o′∈O µo(st−1, at−1) Pr(Ot−1 = o ′|Ht−1 = ht−1, π,O) ,\n(3)\nPr(Ot = o|Ht = ht, π,O) = [( π(st, o)βo(st) ) + ( P (st−1, at−1, st)αt−1(o)(1− βo(st−1)) )] ,\nand Pr(O0 = o|H0 = h0, π,O) = π(s0, o).\nProof. See Appendix B.\nTheorem 2. Given a set of options O and a policy π over options, the probability of generating a trajectory h of length |h| is given by:\nPr(H|h| = h|h||π,O) =d0(s0) [∑ o∈O π(s0, o)µo(s0, a0)f(h|h|, o, 1) ] |h|−1∏ k=0 P (sk, ak, sk+1),\nwhere f is a recursive function defined as:\nf(ht, o, i) = 1, if i = t[( βo(si) ∑ o′∈O π(si+1, o ′)µo′(si+1, ai+1)f(ht, o ′, i+ 1) ) + ( (1− βo(si))µo(si+1, ai+1)f(ht, o, i+ 1) )] otherwise\nProof. See Appendix C.\n1This term is only defined when we consider more than one option. Otherwise, we set this term to 0.\nAlgorithm 1 Option Learning Framework - Pseudocode\n1: Collect set of trajectoriesH 2: Initialize option set O with primitive options 3: done = false 4: Ĵprev = −∞ 5: while done == false do 6: Initialize new option o′ = (µ′φ, β ′ ψ), ini-\ntializing parameters for φ and ψ. 7: O′ = O ∪ o′ 8: Initialize parameters θ of policy πθ 9: for k=1,. . . ,N do\n10: Ĵk = Ĵ(πθ,O′,H) 11: θ = θ + α∂Ĵk∂θ 12: φ = φ+ α∂Ĵk∂φ 13: ψ = ψ + α∂Ĵk∂ψ 14: if ĴN − Ĵprev < ∆ then 15: done = true 16: else 17: O = O′ 18: Ĵprev = ĴN 19: Return new option set O\nGiven a parametric representation of the option policies and termination functions for each o ∈ O and for the policy π over options, we use Theorems 1 and 2 to differentiate the objective in equation 2 with respect to their parameters and optimize with any numerical optimization technique." }, { "heading": "3.3 LEARNING OPTIONS INCREMENTALLY", "text": "One common issue in option discovery is identifying how many options are needed for a given problem. Oftentimes this number is predefined by the user based on intuition. In such a scenario, one could learn options by simply randomly initializing the parameters of a number of options and optimizing the proposed objective in equation 2. 
Instead, we propose not only learning options, but also the number of options needed, by the procedure shown in Algorithm 1. This algorithm introduces one option at a time and optimizes the objective Ĵ with respect to the policy over options π_θ, with parameters θ, and the newly introduced option, o′ = (µ′_φ, β′_ψ), with parameters φ and ψ, for N epochs. Optimizing both o′ and π_θ allows us to estimate how much we can improve Ĵ given that we keep any previously introduced option fixed. After the new option is trained, we measure how much Ĵ has improved; if it fails to improve above some threshold, ∆, the procedure terminates. This results in a natural way of obtaining an appropriate number of options, as options stop being added once a new option no longer improves the ability to represent the demonstrated behavior." }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "This section describes the experiments used to evaluate the proposed approach. We show results in the “four rooms” domain to allow us to visualize and understand the options produced by our method, and to show empirically that these options produce a clear improvement in learning. We use this domain to show that options generated by our method are able to generalize to tasks where the option-critic architecture (Bacon et al., 2017) and eigenoptions (Machado et al., 2017) fail to do so. We then extend our experiments to evaluate our technique in a few selected problems from the Atari 2600 emulator provided by OpenAI Gym (Brockman et al., 2016). These experiments demonstrate that by using the trajectories obtained from solving a small subset of tasks, our approach is able to discover options that significantly improve the learning ability of the agent on the tasks it has yet to solve. For the four rooms experiment, we assumed the transition function was known in advance. In all Atari experiments, we estimated the transition functions by fitting the parameters of a linear Gaussian model to all the transitions experienced during training." }, { "heading": "4.1 EXPERIMENTS ON FOUR ROOMS ENVIRONMENT", "text": "We tested our approach in the four rooms domain: a gridworld of size 40 × 40, in which the agent is placed in a start state and needs to reach a goal state. At each time-step, the agent executes one of four available actions: moving left, right, up, or down, and receives a reward of −1. Upon reaching the goal state, the agent receives a reward of +10. We generated 30 different task variations by changing the goal and start locations, and collected six sample trajectories from optimal policies learned in six tasks. We evaluated our method on the remaining 24 tasks.\nFigure 1a shows the change in the average expected number of terminations and the average probability of generating the observed trajectories while learning options, as new options are introduced and adapted to the sampled trajectories. Options were learned over the six sampled optimal trajectories, and every 50 epochs a new option was introduced. For every new option, the change in probability of generating the observed trajectories, as well as the change in the expected number of decisions, reaches a plateau after 30 or 40 training epochs. When a new option is introduced, there is a large jump in the loss because a new policy, π, is initialized arbitrarily to account for the new option set being evaluated. 
However, after training the new candidate option, the overall loss improves beyond what was possible before introducing the new option.\nIn Figure 1b, we compare the performance of Q-learning on 24 novel test tasks using options discovered by our method (with and without regularization using KL divergence), eigenoptions, and option-critic. We allowed each competing method to learn options from the same six training tasks and, to ensure a fair comparison, we used the original code provided by the authors. As baselines, we also compare against primitive actions and randomly initialized options. It might seem surprising that both eigenoptions and the option-critic failed to reach an optimal policy when they were shown to work well in this type of problem; for that we offer the following explanation. Our implementation of four rooms is defined in a much larger state space than the ones where these methods were originally tested, making each individual room much larger. Since the options identified by these methods tend to lead the agent from room to room, it is possible that, once in the correct room, the agent executes an option leading to a different room before it has had the opportunity to find the goal. When testing our approach in the smaller version of the four rooms problem, we found no clear difference in the performance of the competing methods. In this experiment, we set the threshold ∆ for introducing a new option to 10% of Ĵ at the previous iteration and the hyperparameter λ_2 = 100.0. When adding KL regularization, we set λ_1 = 0.001.\nFigure 2 shows a visualization of the policy learned by the agent for a specific task. The policy leads the agent to navigate from a specific location in the bottom-left room to a location in the top-right room in a small “four-room” domain of size 10×15.² The new task to solve is shown in the top-left figure, while the solution found is shown in the top-right figure. The remaining rows of images depict the learned option policies, termination functions, and how they were used in the new task. The first row shows the learned option policies after training, the center row depicts the termination functions, and the bottom row shows a heat-map depicting where each option is likely to be called. The figure shows that while the options are defined over the entire state space, they are only useful in specific regions—that is, they are specialized. These options, when used in combination in specific regions, allow the agent to learn how to solve new problems more efficiently.\n²We show a smaller domain than used in the experiments for ease of visualization." }, { "heading": "4.2 EXPERIMENTS USING ATARI 2600 GAMES", "text": "We evaluated the quality of the options learned by our framework in two different Atari 2600 games: Breakout and Amidar. We trained the policy over options using A3C (Mnih et al., 2016) with grayscale pixel input. Options were represented by a two-layer convolutional neural network and were given the previous two frames as input. In both experiments the task variations consisted of changing the number of frames skipped after taking an action (randomly selected between 2 and 10), the reward function (by scaling the reward with a real number between 0.1 and 10.0), and the initial state distribution (by letting the agent execute between 0 and 20 actions before it starts learning); an illustrative sketch of such a task-variation wrapper is given below. The full implementation details for these experiments are given in Appendix E. 
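The following is an illustrative sketch of how such task variations could be generated with a Gym-style wrapper. The class name, the older 4-tuple step API, and the use of uniformly random actions for the initial steps are assumptions for the sketch, not details taken from the paper.

```python
import random
import gym

class TaskVariation(gym.Wrapper):
    """Illustrative wrapper producing the three task variations described
    above: random frame skip, reward scaling, and random initial actions."""

    def __init__(self, env, seed=None):
        super().__init__(env)
        rng = random.Random(seed)
        self.frame_skip = rng.randint(2, 10)        # frames skipped per action
        self.reward_scale = rng.uniform(0.1, 10.0)  # reward multiplier
        self.initial_steps = rng.randint(0, 20)     # actions before learning

    def reset(self, **kwargs):
        obs = self.env.reset(**kwargs)
        for _ in range(self.initial_steps):
            obs, _, done, _ = self.env.step(self.env.action_space.sample())
            if done:
                obs = self.env.reset(**kwargs)
        return obs

    def step(self, action):
        total_reward, done, info = 0.0, False, {}
        for _ in range(self.frame_skip):
            obs, reward, done, info = self.env.step(action)
            total_reward += reward
            if done:
                break
        return obs, self.reward_scale * total_reward, done, info
```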
Figures 3a and 3b show the performance of the agent as a function of training time in Breakout and Amidar, respectively. The plots show that given good choices of hyperparameters, the learned options led to a clear improvement in performance during training. For both domains, we found that λ_2 = 5,000 led to a reasonable trade-off between the first two terms in Ĵ, and report results with three different regularization values: λ_1 = 0.0, λ_1 = 0.01, and λ_1 = 0.1.\nNote that our results do not necessarily show that the options result in a better final policy, but they improve exploration early in training and enable the agent to learn more effectively. Figure 4a depicts the behavior of one of the learned options on Breakout. The option efficiently catches the ball after it bounces off the left wall, and then terminates with high probability before the ball has to be caught again. Bear in mind that the option remains active for many time-steps, significantly reducing the number of decisions made by the policy over options. However, it does not maintain control for so long that the agent is unable to respond to changing circumstances. Note that the option is only useful in specific cases; for example, it was not helpful in returning a ball bounced off the right wall. That is to say, the option specialized in a specific sub-task within the larger problem: a highly desirable property for generally useful options. Figure 4b shows the selection of two of the options learned for Amidar when starting a new game. At the beginning of the game, option 1 is selected, which takes the agent to a specific intersection before terminating. The agent then selects option 2, which chooses a direction at the intersection, follows the resulting path, and terminates at the next intersection. Note that the agent does not need to repeatedly select primitive actions in order to simply follow a previously chosen path. Having access to these types of options enables an agent to easily replicate known good behaviors, allowing for faster and more meaningful exploration of the state space.\n(a) Visualization of a learned option executed until termination on Breakout. The option learned to catch the ball bouncing off the left wall and terminates with high probability before the ball bounces off a wall again (ball size increased for visualization).\n(b) Visualization of two learned options on Amidar. The agent is shown in yellow and enemies in pink. Option 1 learned to move up, at the beginning of the game, and turn left until reaching an intersection. Option 2 learned to turn at that intersection and move up until reaching the next one." }, { "heading": "5 CONCLUSION AND FUTURE WORK", "text": "In this work we presented an optimization objective for learning options offline from demonstrations of near-optimal behavior on a set of tasks. Optimizing the objective results in a set of options that allows an agent to reproduce the demonstrated behavior while minimizing the number of decisions made by the policy over options, and these options improve the learning ability of the agent on new tasks. We provided results showing how options adapt to the trajectories provided and showed, through several experiments, that the identified options are capable of significantly improving the learning ability of an agent. The resulting options encode meaningful abstractions that help the agent interact with and learn from its environment more efficiently." 
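As a bridge to the derivations in the appendix, here is a minimal illustrative sketch of the recursive trajectory-probability computation from Theorem 2. The tabular array layout and the memoized recursion are assumptions, and the indexing is written so that f(o, i) accounts for action a_i onward; it is a sketch of the stated recursion, not the authors' code.

```python
from functools import lru_cache
import numpy as np

def trajectory_probability(states, actions, mu, beta, pi, P, d0):
    """Pr(H = h | pi, O) following the recursion of Theorem 2.

    states: [s_0, ..., s_T], actions: [a_0, ..., a_{T-1}] as integer indices.
    mu[o][s, a], beta[o][s], pi[s, o], P[s, a, s'], d0[s] are assumed arrays.
    """
    T = len(actions)
    n_options = len(mu)

    @lru_cache(maxsize=None)
    def f(o, i):
        # Base case: the whole trajectory has been accounted for
        if i == T:
            return 1.0
        s, a = states[i], actions[i]
        # Option o terminates at s_i; a new option o2 is drawn from pi
        terminate = beta[o][s] * sum(
            pi[s, o2] * mu[o2][s, a] * f(o2, i + 1) for o2 in range(n_options)
        )
        # Option o continues and must itself generate a_i
        keep_going = (1.0 - beta[o][s]) * mu[o][s, a] * f(o, i + 1)
        return terminate + keep_going

    first = sum(pi[states[0], o] * mu[o][states[0], actions[0]] * f(o, 1)
                for o in range(n_options))
    transitions = np.prod([P[states[k], actions[k], states[k + 1]]
                           for k in range(T)])
    return d0[states[0]] * first * transitions
```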
}, { "heading": "A APPENDIX", "text": "The following list defines the notation used in all derivations:\n1. At: random variable denoting action taken at step t. 2. St: random variable denoting state at step t. 3. Ht: random variable denoting history up to step t. Ht = (S0, A0, S1, A1, . . . , St). 4. Tt: random variable denoting the event that the option used at step t− 1 terminates at state St.\n5. π: policy over options. 6. P : transition function. P (s, a, s′) denotes the probability of transitioning to state s′ by\ntaking action a in state s 7. Ot: random variable denoting the option selected for execution at state St. 8. o: option defined as o = (µo, βo), where µo is the option policy for option and βo is the\ntermination function. 9. Assume primitives are options that perform only 1 action and last for 1 time-step.\n10. O: set of available options.\nWe can compute the probability of an option terminating at state st and generating a trajectory ht as:\nPr(Tt = 1, Ht = ht|π,O) = Pr(Tt = 1|Ht = ht, π,O) Pr(Ht = ht|π,O) (4)\nTo compute the proposed objective J we need to find an expression for Pr(Tt = 1|Ht = ht, π,O) and Pr(Ht = ht|π,O) in terms of known quantities.\nA.1 APPENDIX A - DERIVATION OF Ĵ Recall J(π,O, H) = E [∑|h| t=1 Pr(Tt = 0, Ht|π,O) ] , ignoring the regularization term. Assuming access to a set H of sample trajectories, we start by estimating J from sample averages and derive the objective Ĵ as follows:\nJ(π,O,H) ≈ 1 |H| ∑ h∈H |h|∑ t=1 Pr(Tt = 0, Ht = ht|π,O)\n= 1 |H| ∑ h∈H |h|∑ t=1 ( 1− Pr(Tt = 1|Ht = ht, π,O) ) Pr(Ht = ht|π,O)\n= 1 |H| ∑ h∈H |h|∑ t=1 ( 1−E [ Tt|Ht = ht, π,O ]) Pr(Ht = ht|π,O)\nIt can easily be seen that to maximize the above expression E [ Tt|Ht = ht, π,O ] should be minimized while Pr(H = h|π,O) should be maximized. Given that for long trajectories the expected number of terminations increases while the probability of generating the trajectories goes to 0, we normalize the number of terminations by the lenght of the trajectory, |h|, and adjust a hyperparameter, λ2, to prevent one term from dominating the other during optimization. Based on this observation we propose optimizing the following objective:\nĴ(π,O,H) = 1 H ∑ h|h|∈H λ2 Pr(H = h|π,O)− ∑|h| t=1 E [Tt|Ht = ht, π,O] |h| .\nThis objective allow us to control a trade-off, through λ2, of how much we care about the options reproducing the demonstrated trajectories vs. how much we want the agent to minimize the number of decisions.\nA.2 APPENDIX B - PROOF OF THEOREM 1\nTheorem 1 Given a set of options O and a policy π over options, the expected number of terminations for a trajectory h of length |h| is given by:\n|h|∑ t=1 E [Tt = 1|Ht = ht, π,O] = |h|∑ t=1 (∑ o∈O βo(st) µo(st−1, at−1) Pr(Ot−1 = o|Ht−1 = ht−1, π,O)∑ o′∈O µo′(st−1, at−1) Pr(Ot−1 = o ′|Ht−1 = ht−1, π,O) ) ,\nwhere, Pr(Ot−1 = o|Ht−1 = ht−1, π,O) = [( π(st−1, o)βo(st−1) )( P (st−2, at−2, st−1)µo(st−2, at−2)\n×Pr(Ot−2 = o|Ht−2 = ht−2, π,O)(1− βo(st−1)) )] ,\nand Pr(O0 = o|H0 = h0, π,O) = π(s0, o).\nProof. Notice that ∑|h| t=1 E [Tt = 1|Ht = ht, π,O] = ∑|h| t=1 Pr(Tt = 1|Ht = ht, π,O) 1, so if we find an expression for Pr(Tt = 1|Ht = ht, π,O), we can calculate the expectation exactly. 
We define Pr(T0 = 1|H1 = h1, π,O) = 1 for ease of derivation even though there is no option to terminate at T0.\nPr(Tt = 1|Ht = ht, π,O) = ∑ o∈O Pr(Tt = 1|Ot−1 = o,Ht = ht, π,O) Pr(Ot−1 = o|Ht = ht, π,O)\n= ∑ o∈O βo(st) Pr(Ot−1 = o|Ht = ht, π,O)\n= ∑ o∈O βo(st) Pr(Ot−1 = o|Ht−1 = ht−1, At−1 = at−1, St = st, π,O)\n= ∑ o∈O βo(st) Pr(St = st|Ht−1 = ht−1, At−1 = at−1, Ot−1 = o, π,O) Pr(St = st|Ht−1 = ht−1, At−1 = at−1, π,O)\n× Pr(Ot−1 = o|Ht−1 = ht−1, At−1 = at−1, π,O) = ∑ o∈O βo(st) Pr(St = st|Ht−1 = ht−1, At−1 = at−1, π,O) Pr(St = st|Ht−1 = ht−1, At−1 = at−1, π,O)\n× Pr(Ot−1 = o|Ht−1 = ht−1, At−1 = at−1, π,O) = ∑ o∈O βo(st) Pr(Ot−1 = o|Ht−1 = ht−1, At−1 = at−1, π,O)\n= ∑ o∈O βo(st) Pr(At−1 = at−1|Ht−1 = ht−1, Ot−1 = o, π,O) Pr(Ot−1 = o|Ht−1 = ht−1, π,O) Pr(At−1 = at−1|Ht−1 = ht−1, π,O)\n= ∑ o∈O βo(st) µo(st−1, at−1) Pr(Ot−1 = o|Ht−1 = ht−1, π,O) Pr(At−1 = at−1|Ht−1 = ht−1, π,O)\n= ∑ o∈O βo(st) µo(st−1, at−1) Pr(Ot−1 = o|Ht−1 = ht−1, π,O)∑ o′∈O Pr(At−1 = at−1, Ot−1 = o ′|Ht−1 = ht−1, π,O)\n= ∑ o∈O βo(st)µo(st−1, at−1) Pr(Ot−1 = o|Ht−1 = ht−1, π,O)\n× ( ∑ o′∈O Pr(At−1 = at−1|Ot−1 = o′, Ht−1 = ht−1, π,O)\n× Pr(Ot−1 = o′|Ht−1 = ht−1, π,O) )−1\n= ∑ o∈O βo(st) µo(st−1, at−1) Pr(Ot−1 = o|Ht−1 = ht−1, π,O)∑ o′∈O µo′(st−1, at−1) Pr(Ot−1 = o ′|Ht−1 = ht−1, π,O)\nWe are left with finding an expression in terms of known probabilities for Pr(Ot−1 = o|Ht−1 = ht−1, π,O).\nPr(Ot−1 = o|Ht−1 = ht−1, π,O) = [\nPr(Ot−1 = o, Tt−1 = 1|Ht−1 = ht−1, π,O) + Pr(Ot−1 = o, Tt−1 = 0|Ht−1 = ht−1, π,O) ] = [( Pr(Ot−1 = o|Ht−1 = ht−1, Tt−1 = 1, π,O)\n× Pr(Tt−1 = 1|Ht−1 = ht−1, π,O) )\n+ ( Pr(Ot−1 = o|Ht−1 = ht−1, Tt−1 = 0, π,O)\n× (1− Pr(Tt−1 = 1|Ht−1 = ht−1, π,O)) )]\n= [( π(st−1, o) Pr(Tt−1 = 1|Ht−1 = ht−1, π,O) ) + ( Pr(Ot−1 = o|Ht−1 = ht−1, Tt−1 = 0, π,O)\n× (1− Pr(Tt−1 = 1|Ht−1 = ht−1, π,O)) )]\n= [( π(st−1, o)βo(st−1) ) +\n× ( Pr(Ot−1 = o|Ht−1 = ht−1, Tt−1 = 0, π,O)(1− βo(st−1)) )]\nGiven that by convention, Pr(T0 = 1|H0 = h0, π,O) = 1.0, we are now left with figuring out how to calculate Pr(Ot−1 = o|Ht−1 = ht−1, Tt−1 = 0, π,O)\nPr(Ot−1 = o|Ht−1 = ht−1, Tt−1 = 0, π,O) = Pr(Ot−2 = o,At−2 = at−2, St−1 = st−1|Ht−1 = ht−1, π,O) = Pr(At−2 = at−2, St−1 = st−1|Ot−2 = o,Ht−1 = ht−1, π,O) × Pr(Ot−2 = o|Ht−1 = ht−1, π,O)\n= Pr(St−1 = st−1|At−2 = at−2, Ot−2 = o,Ht−1 = ht−1, π,O) × Pr(At−2 = at−2|Ot−2 = o,Ht−1 = ht−1, π,O) × Pr(Ot−2 = o|Ht−1 = ht−1, π,O)\n=P (st−2, at−2, st−1)µo(st−2, at−2) Pr(Ot−2 = o|Ht−1 = ht−1, π,O) =P (st−2, at−2, st−1)µo(st−2, at−2) Pr(Ot−2 = o|Ht−2 = ht−2, π,O)\nwhere Pr(O0 = o|H0 = h0, π,O) = π(s0, o)\nUsing the recursive function Pr(Ot−1 = o′|Ht−1 = ht−1, π,O), the expected number of terminations for a given trajectory is given by:\n|h|∑ t=1 E [Tt = 1|Ht = ht, π,O] = |h|∑ t=1 (∑ o∈O βo(st) µo(st−1, at−1) Pr(Ot−1 = o|Ht−1 = ht−1, π,O)∑ o′∈O µo′(st−1, at−1) Pr(Ot−1 = o ′|Ht−1 = ht−1, π,O) ) ,\nA.3 APPENDIX C - PROOF OF THEOREM 2\nTheorem 2\nGiven a set of options O and a policy π over options, the probability of generating a trajectory h of length |h| is given by:\nPr(H|h| = h|h||π,O) = d0(s0) [∑ o∈O π(s0, o)µo(s0, a0)f(h|h|, o, 1) ]∏|h|−1 k=0 P (sk, ak, sk+1),\nwhere f is a recursive function defined as:\nf(ht, o, i) = 1, if i = t[ βo(si) ∑ o′∈O π(si+1, o ′)µo′(si+1, ai+1)f(ht, o ′, i+ 1) +(1− βo(si))µo(si+1, ai+1)f(ht, o, i+ 1) ] , otherwise\nProof. We define Hi,t to be the history from time i to time t, that is, Hi,t = (Si, Ai, Si+1, Ai+1, . . . , St), where i < t. 
If i = t, the history would contain a single state.\nPr(Ht = ht|π,O) = Pr(S0 = s0|π,O) Pr(H1,t = h1,t, A0 = a0|S0 = s0, π,O) =d0(s0) Pr(H1,t = h1,t, A0 = a0|S0 = s0, π,O)\n=d0(s0) ∑ o∈O Pr(H1,t = h1,t, A0 = a0, Oo = o|S0 = s0, π,O)\n=d0(s0) ∑ o∈O Pr(O0 = o|S0 = s0, π,O) Pr(H1,t = h1,t, A0 = a0|S0 = s0, O0 = o, π,O)\n=d0(s0) ∑ o∈O π(s0, o) Pr(H1,t = h1,t, A0 = a0|S0 = s0, O0 = o, π,O)\n=d0(s0) ∑ o∈O π(s0, o) Pr(A0 = ao|S0 = s0, O0 = o, π,O)\n× Pr(H1,t = h1,t|S0 = s0, O0 = o,A0 = a0, π,O) =d0(s0) ∑ o∈O π(s0, o)µo(s0, ao) Pr(H1,t = h1,t|S0 = s0, O0 = o,A0 = a0, π,O).\nWe now need to find an expression to calculate Pr(H1,t = h1,t|S0 = s0, O0 = o,A0 = a0, π,O). Consider the probability of seeing history hi,t given the previous state, s, the previous option, o, and the previous action, a:\nPr(Hi,t = hi,t|Si−1 = s,Oi−1 = o,Ai−1 = a) = Pr(Si = si|Si−1 = s,Oi−1 = o,Ai−1 = a) Pr(Hi+1,t = hi+1,t, Ai = ai|Si−1 = s,Oi−1 = o,Ai−1 = a, Si = si) =P (s, a, si) Pr(Hi+1,t = hi+1,t, Ai = ai|Si−1 = s,Oi−1 = o,Ai−1 = a, Si = si) =P (s, a, si) Pr(Hi+1,t = hi+1,t, Ai = ai|Oi−1 = o,Ai−1 = a, Si = si) =P (s, a, si) [ Pr(Ti = 1|Oi−1 = o,Ai−1 = a, Si = si)\n× Pr(Hi+1,t = hi+1,t, Ai = ai|Oi−1 = o,Ai−1 = a, Si = si, Ti = 1) + Pr(Ti = 0|Oi−1 = o,Ai−1 = a, Si = si) × Pr(Hi+1,t = hi+1,t, Ai = ai|Oi−1 = o,Ai−1 = a, Si = si, Ti = 0)\n] =P (s, a, si) [ βo(si)\n× Pr(Hi+1,t = hi+1,t, Ai = ai|Oi−1 = o,Ai−1 = a, Si = si, Ti = 1) + (1− βo(si)) × Pr(Hi+1,t = hi+1,t, Ai = ai|Oi−1 = o,Ai−1 = a, Si = si, Ti = 0) ] .\nEven though the equation above might seem complicated, there are only two cases we need to consider: either the current option terminates and a new one must be selected (the first term), or the current option does not terminate (the second term). Let’s consider each of them separately.\nCase 1 - option terminates: If we terminate, we sum over new options:\nPr(Hi+1,t = hi+1,t, Ai = ai|Oi−1 = o,Ai−1 = a, Si = si, Ti = 1) = ∑ o′∈O Pr(Oi = o ′|Oi−1 = o,Ai−1 = a, Si = si, Ti = 1)\n× Pr(Hi+1,t = hi+1,t, Ai = ai|Oi−1 = o,Ai−1 = a, Si = si, Ti = 1, Oi = o′) = ∑ o′∈O π(si, o ′) Pr(Hi+1,t = hi+1,t, Ai = ai|Oi−1 = o,Ai−1 = a, Si = si, Ti = 1, Oi = o′)\n= ∑ o′∈O π(si, o ′) Pr(Hi+1,t = hi+1,t, Ai = ai|Si = si, Oi = o′)\n= ∑ o′∈O π(si, o ′) Pr(Ai = ai|Si = si, Oi = o′) Pr(Hi+1,t = hi+1,t|Si = si, Oi = o′, Ai = ai)\n= ∑ o′∈O π(si, o ′)µo′(si, ai) Pr(Hi+1,t = hi+1,t|Si = si, Oi = o′, Ai = ai).\nNote that the expanded probability has the same form as Pr(Hi,t = hi,t|Si−1 = s,Oi−1 = o,Ai−1 = a).\nCase 2 - option does not terminate: This tells us that Oi = o, so we may drop the dependency on the i− 1 terms:\nPr(Hi+1,t = hi+1,t, Ai = ai|Si−1 = s,Oi−1 = o,Ai−1 = a, Si = si, Ti = 0) = Pr(Hi+1,t = hi+1,t, Ai = ai|Si = si, Oi = o) = Pr(Ai = ai|Si = si, Oi = 0) Pr(Hi+1,t = hi+1,t|Si = si, Oi = o,Ai = ai) =µo(si, ai) Pr(Hi+1,t = hi+1,t|Si = si, Oi = o,Ai = ai).\nPlugging these two cases back into our earlier equation yields:\nPr(Hi,t = hi,t|Si−1 = s,Oi−1 = o,Ai−1 = a) =P (s, a, si) [ βo(si) ∑ o′∈O π(si, o ′)µo′(si, ai) Pr(Hi+1,t = hi+1,t|Si = si, Oi = o′, Ai = ai)\n+ (1− βo(si))µo(si, ai) Pr(Hi+1,t = hi+1,t|Si = si, Oi = o,Ai = ai) ] .\nNote that each term contains an expression of the same form, Pr(Hi,t = hi,t|Si−1 = s,Oi−1 = o,Ai−1 = a). We can therefore compute the probability recursively. Our recursion will terminate when we consider i = t, asHt,t contains a single state, and we adopt the convention of its probability to be 1. Notice that for every recursive step, both inner terms will produce a P (s, a, si) term. 
Consider the result when we factor every recursive P(s, a, s_i) term to the front of the equation. We define the following recursive function: $f(h_t, o, i) = 1$ if $i = t$, and otherwise\n\n$f(h_t, o, i) = \beta_o(s_i) \sum_{o' \in \mathcal{O}} \pi(s_{i+1}, o')\,\mu_{o'}(s_{i+1}, a_{i+1})\, f(h_t, o', i+1) + (1 - \beta_o(s_i))\,\mu_o(s_{i+1}, a_{i+1})\, f(h_t, o, i+1)$.\n\nNotice that this is the recursive probability described above, but with the P(s, a, s′) terms factored out. We now see that:\n\n$\Pr(H_{i,t} = h_{i,t} \mid S_{i-1} = s_{i-1}, O_{i-1} = o, A_{i-1} = a_{i-1}) = f(h_t, o, i) \prod_{k=i-1}^{t-1} P(s_k, a_k, s_{k+1})$.\n\nPlugging this all back into our original equation for Pr(H_t = h_t | π, O) gives us the desired result:\n\n$\Pr(H_{|h|} = h_{|h|} \mid \pi, \mathcal{O}) = d_0(s_0) \Big[ \sum_{o \in \mathcal{O}} \pi(s_0, o)\,\mu_o(s_0, a_0)\, f(h_{|h|}, o, 1) \Big] \prod_{k=0}^{|h|-1} P(s_k, a_k, s_{k+1})$.\n\nA.4 APPENDIX D - EMPIRICAL VALIDATION OF DERIVED EQUATIONS\n\nTo double-check the derivation of the proposed objective and make sure the implementation was correct, we conducted a simple empirical test to compare the calculated expected number of decisions in a trajectory and the probability of generating each trajectory for a set of 10 trajectories on 10 MDPs. The MDPs are simple chains of 7 states with different transition functions. We randomly initialized four options and a policy over options, and estimated the probability of generating each trajectory and the expected number of terminations, for each sampled trajectory, by Monte Carlo sampling over 10,000 trials. Table 1 presents results for the 10 trajectories, verifying empirically that the equations were correctly derived and implemented. The table compares the empirical and true probability of generating a given trajectory, P̂r(H|·) and Pr(H|·), respectively, and the empirical and true sum of the expected number of decisions an agent has to make to generate those trajectories, $\sum_{t=1}^{|H|} \hat{\mathbb{E}}[T_t \mid \cdot]$ and $\sum_{t=1}^{|H|} \mathbb{E}[T_t \mid \cdot]$, respectively.\n\nNote that the cases with the largest discrepancy between the estimated and calculated number of terminations occur when the probability of generating a trajectory is low. This happens because, since the trajectory is unlikely to be generated, the Monte Carlo sampling is not able to produce enough samples of the trajectory.\n\nA.5 APPENDIX E - IMPLEMENTATION DETAILS FOR ATARI EXPERIMENTS\n\nFor these experiments we first learned a well-performing policy with A3C for each game and sampled 12 trajectories for training. Each trajectory lasted until a life was lost, not for the entire duration of the episode. Each option was represented as a two-layer neural network, with 32 neurons in each layer, and two output layers: a softmax output layer over the four possible actions representing µ, and a separate sigmoid layer representing β. We implemented our objective using PyTorch, which simplifies gradient calculations. The input to the options was represented by grayscale images of the last two frames. We ran 32 training agents in parallel on CPUs, the learning rate was set to 0.0001, and the discount factor γ was set to 0.99.\nBecause the options can only learn from the states observed in the trajectories, it is possible that when using them, they will be executed in previously unseen states. When this happens, the termination function may decide to never terminate, as it has not seen that region of the state space before. To address this issue, we add a value of 0.05 to the predicted probability of termination per time-step that the option has been running since it was executed. Therefore, in our experiments an option cannot run for more than 20 time-steps in total." } ]
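To make the Appendix E description concrete, the following is a minimal PyTorch sketch of an option network with the two heads described above (a softmax policy µ over four actions and a sigmoid termination β). The convolutional trunk shape and the use of LazyLinear are illustrative assumptions; the paper specifies only two layers with 32 units, grayscale input of the last two frames, and the two output heads.

```python
import torch
import torch.nn as nn

class OptionNetwork(nn.Module):
    """One option: shared trunk with a policy head (mu) and a termination
    head (beta), as described in Appendix E (layer shapes partly assumed)."""

    def __init__(self, n_actions=4, in_frames=2):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_frames, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(32), nn.ReLU(),
        )
        self.mu_head = nn.Linear(32, n_actions)  # softmax option policy
        self.beta_head = nn.Linear(32, 1)        # sigmoid termination prob

    def forward(self, frames):
        h = self.trunk(frames)
        mu = torch.softmax(self.mu_head(h), dim=-1)
        beta = torch.sigmoid(self.beta_head(h)).squeeze(-1)
        return mu, beta
```

At execution time, the paper additionally adds 0.05 to the predicted termination probability for each time-step the option has been running, which is what caps option duration at 20 steps.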
2019
null
SP:ad9f47b416f144d43c2c4f66599529e1bb49bca6
[ "This paper presents a novel defense method to make the classification more robust. The motivation is based on the observation: the distribution of the soft-max for the cleaned image and its transformed images for one class is similar to the distribution of the soft-max for the adversarial image and its transformed images for the same class, and the distributions of the soft-max for the cleaned image and its transformed images for different classes are different. Then, a distribution based method is proposed to classify the distribution of the soft-max for the cleaned (or adversarial) image and its transformed images. ", "The authors analyze the use of image transformation as a defense against adversarial examples, where a challenge is to prevent the deterioration of performance on clean images. To do this, they show that the softmax distributions for clean and adversarial images share similar \"features\", and therefore one can apply a trained distribution classifier which takes the softmax distribution to return the class label. This is as opposed to original approach of making a prediction for each sample (a random transformation of the input image) followed by majority voting." ]
Adversarial attacks on convolutional neural networks (CNN) have gained significant attention and there have been active research efforts on defense mechanisms. Stochastic input transformation methods have been proposed, where the idea is to recover the image from adversarial attack by random transformation, and to take the majority vote as consensus among the random samples. However, the transformation improves the accuracy on adversarial images at the expense of the accuracy on clean images. While it is intuitive that the accuracy on clean images would deteriorate, the exact mechanism by which this occurs is unclear. In this paper, we study the distribution of softmax induced by stochastic transformations. We observe that with random transformations on the clean images, although the mass of the softmax distribution could shift to the wrong class, the resulting distribution of softmax could be used to correct the prediction. Furthermore, on the adversarial counterparts, with the image transformation, the resulting shapes of the distribution of softmax are similar to the distributions from the clean images. With these observations, we propose a method to improve existing transformation-based defenses. We train a separate lightweight distribution classifier to recognize distinct features in the distributions of softmax outputs of transformed images. Our empirical studies show that our distribution classifier, by training on distributions obtained from clean images only, outperforms majority voting for both clean and adversarial images. Our method is generic and can be integrated with existing transformation-based defenses.
[ { "affiliations": [], "name": "TION CLASSIFIER" }, { "affiliations": [], "name": "Connie Kou" }, { "affiliations": [], "name": "Hwee Kuan Lee" }, { "affiliations": [], "name": "Ee-Chien Chang" }, { "affiliations": [], "name": "Teck Khim Ng" } ]
[ { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "arXiv preprint arXiv:1802.00420,", "year": 2018 }, { "authors": [ "Wieland Brendel", "Jonas Rauber", "Matthias Bethge" ], "title": "Decision-based adversarial attacks: Reliable attacks against black-box machine learning models", "venue": "arXiv preprint arXiv:1712.04248,", "year": 2017 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Minhao Cheng", "Thong Le", "Pin-Yu Chen", "Jinfeng Yi", "Huan Zhang", "Cho-Jui Hsieh" ], "title": "Queryefficient hard-label black-box attack: An optimization-based approach", "venue": "arXiv preprint arXiv:1807.04457,", "year": 2018 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Chuan Guo", "Mayank Rana", "Moustapha Cisse", "Laurens van der Maaten" ], "title": "Countering adversarial images using input transformations", "venue": "arXiv preprint arXiv:1711.00117,", "year": 2017 }, { "authors": [ "Andrew Ilyas", "Logan Engstrom", "Anish Athalye", "Jessy Lin" ], "title": "Black-box adversarial attacks with limited queries and information", "venue": "arXiv preprint arXiv:1804.08598,", "year": 2018 }, { "authors": [ "Seokwoo Jung", "Unghui Lee", "Jiwon Jung", "David Hyunchul Shim" ], "title": "Real-time traffic sign recognition system with deep convolutional neural network", "venue": "In 2016 13th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI),", "year": 2016 }, { "authors": [ "Alex Kantchelian", "J Doug Tygar", "Anthony Joseph" ], "title": "Evasion and hardening of tree ensemble classifiers", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Connie Khor Li Kou", "Hwee Kuan Lee", "Teck Khim Ng" ], "title": "A compact network learning model for distribution regression", "venue": "Neural Networks,", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial examples in the physical world", "venue": "arXiv preprint arXiv:1607.02533,", "year": 2016 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial machine learning at scale", "venue": "arXiv preprint arXiv:1611.01236,", "year": 2016 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Yifeng Li", "Lingxi Xie", "Ya Zhang", "Rui Zhang", "Yanfeng Wang", "Qi Tian" ], "title": "Defending adversarial attacks by correcting logits", "venue": null, "year": 1906 }, { "authors": [ "Fangzhou Liao", "Ming Liang", "Yinpeng Dong", "Tianyu Pang", "Xiaolin Hu", "Jun Zhu" ], "title": "Defense against adversarial attacks using high-level representation guided 
denoiser", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Pascal Frossard" ], "title": "Deepfool: a simple and accurate method to fool deep neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Omkar M Parkhi", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Deep face recognition", "venue": "In bmvc,", "year": 2015 }, { "authors": [ "Aaditya Prakash", "Nick Moran", "Solomon Garber", "Antonella DiLillo", "James Storer" ], "title": "Deflecting adversarial attacks with pixel deflection", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Pouya Samangouei", "Maya Kabkab", "Rama Chellappa" ], "title": "Defense-gan: Protecting classifiers against adversarial attacks using generative models", "venue": "arXiv preprint arXiv:1805.06605,", "year": 2018 }, { "authors": [ "Yang Song", "Taesup Kim", "Sebastian Nowozin", "Stefano Ermon", "Nate Kushman" ], "title": "Pixeldefend: Leveraging generative models to understand and defend against adversarial examples", "venue": "arXiv preprint arXiv:1710.10766,", "year": 2017 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Florian Tramèr", "Alexey Kurakin", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick McDaniel" ], "title": "Ensemble adversarial training: Attacks and defenses", "venue": "arXiv preprint arXiv:1705.07204,", "year": 2017 }, { "authors": [ "Cihang Xie", "Jianyu Wang", "Zhishuai Zhang", "Zhou Ren", "Alan Yuille" ], "title": "Mitigating adversarial effects through randomization", "venue": "arXiv preprint arXiv:1711.01991,", "year": 2017 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Bolei Zhou", "Aditya Khosla", "Agata Lapedriza", "Aude Oliva", "Antonio Torralba" ], "title": "Learning deep features for discriminative localization", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "D EXPERIMENTAL SETUP D" ], "title": "ADVERSARIAL ATTACK HYPERPARAMETERS Tables 1 to 3 show the hyperparameter settings used for the adversarial attacks. The attacks are implemented using the CleverHans library (Papernot et al., 2018). For DeepFool and C&W, the other hyperparameters used are the default values set in CleverHans", "venue": "For L2 norm,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "There has been widespread use of convolutional neural networks (CNN) in many critical real-life applications such as facial recognition (Parkhi et al., 2015) and self-driving cars (Jung et al., 2016). However, it has been found that CNNs could misclassify the input image when the image has been corrupted by an imperceptible change (Szegedy et al., 2013). In other words, CNNs are not robust to small, carefully-crafted image perturbations. Such images are called adversarial examples and there have been active research efforts in designing attacks that show the susceptibility of CNNs. Correspondingly, many defense methods that aim to increase robustness to attacks have been proposed.\nStochastic transformation-based defenses have shown considerable success in recovering from adversarial attacks. Under these defenses, the input image is transformed in a certain way before feeding into the CNN, such that the transformed adversarial image would no longer be adversarial. As the transformation is random, by feeding in samples of the transformed image through the CNN, we accumulate a set of CNN softmax outputs and predictions. As such, existing transformationbased defenses take a majority vote of the CNN predictions from the randomly transformed image (Prakash et al., 2018; Guo et al., 2017). Transformation-based defenses are desirable as there is no need to retrain the CNN model. However, they suffer from deterioration of performance on clean images. With increasing number of pixel deflections (Prakash et al., 2018), there is improvement on\nthe performance on adversarial images, but this comes with a rapid deterioration of performance on clean images.\nThe exact mechanism of the deterioration in performance on clean images is unclear. We believe that the softmax distribution induced by the random transformation contains rich information which is not captured by majority vote that simply counts the final class predictions from the transformed samples. Now, an interesting question is whether the features in the distribution of softmax could be better utilized. In this paper, to elucidate how the deterioration in accuracy on clean images occurs, we study the effects of the random image transformations on the distribution of the softmax outputs and make some key observations. After the image transform, some clean images show distributions of softmax with modes at an incorrect class, reflecting the deterioration in voting accuracy as observed before. While the shifting of the distribution mode to the incorrect class is detrimental to the voting prediction, the resulting distribution of softmax contains features that is useful for correcting the prediction. In addition, we observe that the adversarial counterparts show similar shifts in the distributions of softmax as the clean images. We also look into the distribution shapes for the transformed clean and adversarial images and find that they are similar.\nWith these observations, we propose a simple method to improve existing transformation-based defenses, as illustrated in Figure 1. We train a separate lightweight distribution classifier to recognize distinct features in the distributions of softmax outputs of transformed clean images and predict the class label. Without retraining the original CNN, our distribution classifier improves the performance of transformation-based defenses on both clean and adversarial images. 
On the MNIST dataset, the improvements in accuracy over majority voting are 1.7% and 5.9% on the clean and adversarial images respectively. On CIFAR10, the improvements are 6.4% and 3.6% respectively. Note that the distributions obtained from the adversarial images are not included in the training of the distribution classifier. In real-world settings, the type of attack is not known beforehand. Training the distribution classifier on a specific attack may cause the classifier to overfit to that attack. Hence, it is an advantage that our defense method is attack-agnostic. Our experimental findings show that the features of the distribution in the softmax are useful and can be used to improve existing transformation-based defenses. Our contributions are as follows:
1. We analyze the effects of image transformation in existing defenses on the softmax outputs for clean and adversarial images, with a key finding that the distributions of softmax obtained from clean and adversarial images share similar features.
2. We propose a method that trains a distribution classifier on the distributions of the softmax outputs of transformed clean images only, but shows improvements on both clean and adversarial images. This method is agnostic to the attack method, does not require retraining of the CNN and can be integrated with existing transformation-based methods." }, { "heading": "2 RELATED WORK: ATTACKS AND DEFENSES", "text": "Given an image dataset $\{(x_1, y_1), \cdots, (x_M, y_M)\}$ and a classifier $F_\theta$ that has been trained with this dataset with parameters $\theta$, the aim of the attack is to produce an adversarial image $x_i^{adv}$ such that $F_\theta(x_i^{adv}) \neq y_i$ and $\|x_i^{adv} - x_i\|$ is small. We focus on four gradient-based untargeted attacks. The Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2014) is a single-step attack that uses the sign of the gradient of the classification loss to perturb the image. The Iterative Gradient Sign Method (IGSM) (Kurakin et al., 2016a) is an iterative version of FGSM. In DeepFool (Moosavi-Dezfooli et al., 2016), at each iteration, the attack approximates the classifier with a linear decision boundary and generates the minimal perturbation to cross the boundary. Finally, the Carlini & Wagner (C&W) (Carlini & Wagner, 2017) $L_2$ attack jointly minimizes the perturbation $L_2$ norm and a differentiable loss function based on the classifier's logit outputs. Besides gradient-based attacks, there are also black-box attacks where the CNN model is not known and only the softmax output or final prediction is given (Brendel et al., 2017; Ilyas et al., 2018; Cheng et al., 2018).
Defense methods have been proposed to make the classifiers more robust. In adversarial training, the CNN model is trained on adversarial examples generated from itself (Madry et al., 2017; Kurakin et al., 2016b) or from an ensemble of models (Tramèr et al., 2017). Other methods involve training auxiliary neural networks on a mixture of clean and adversarial images, for instance, by denoising the inputs with a neural network before feeding into the CNN (Liao et al., 2018; Song et al., 2017; Samangouei et al., 2018) or by training a neural network on the CNN logits (Li et al., 2019). In the next section, we introduce another class of defense: transformation-based defenses." }, { "heading": "2.1 TRANSFORMATION-BASED DEFENSES", "text": "Transformation-based defenses aim to recover from adversarial perturbations; that is, for an input transformation $T$, we want $F_\theta(T(x_i^{adv})) = y_i$. At the same time, the accuracy on the clean images has to be maintained, i.e., $F_\theta(T(x_i)) = y_i$.
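For concreteness, here is a minimal sketch of the single-step FGSM and iterative IGSM attacks described above, assuming a hypothetical helper `loss_gradient(x, y)` that returns the gradient of the classification loss with respect to the input; this is an illustrative sketch, not the paper's or CleverHans's implementation:

```python
import numpy as np

def fgsm_attack(x, y, loss_gradient, epsilon=0.1):
    # Single-step FGSM: move every pixel by epsilon in the direction of
    # the sign of the loss gradient, then clip to the valid range [0, 1].
    x_adv = x + epsilon * np.sign(loss_gradient(x, y))
    return np.clip(x_adv, 0.0, 1.0)

def igsm_attack(x, y, loss_gradient, epsilon=0.1, alpha=0.01, steps=10):
    # Iterative variant: repeated small signed-gradient steps, projected
    # back onto the L-infinity ball of radius epsilon around the original x.
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(loss_gradient(x_adv, y))
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```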
Note that transformation-based defenses are implemented at test time and this is different from training-time data augmentation. Here we introduce two transformation-based defenses that we experiment on.
Pixel deflection (PD) (Prakash et al., 2018): Pixel deflection corrupts an image by locally redistributing pixels. At each step, it selects a random pixel and replaces it with another randomly selected pixel in a local neighborhood. The probability of a pixel being selected is inversely proportional to the class activation map (Zhou et al., 2016). Lastly, there is a denoising step based on the wavelet transform. In our experiments, we did not use robust activation maps for our datasets, as we found that this omission did not cause a significant difference in performance (see Appendix D.3).
Random resize and padding (RRP) (Xie et al., 2017): Each image is first resized to a random size and then padded with zeroes to a fixed size in a random manner.
In many transformation-based methods, the transformation is stochastic. Hence there can be different samples of the transformation of an image: $t_x \sim T(x)$, where $t_x$ represents a transformed sample. Existing transformation defenses benefit from improved performance by taking the majority vote across samples of random transformations. The advantage of transformation-based methods is that there is no retraining of the CNN classifier. However, a weakness, as identified by Prakash et al. (2018), is that the transformation increases the accuracy on adversarial images at the expense of the accuracy on clean images. The exact mechanism of the deterioration in performance on clean images is unclear. In this paper, we elucidate how the deterioration in accuracy on clean images occurs by studying the effects of the random image transformations on the distribution of the softmax outputs." }, { "heading": "3 ANALYSIS ON DISTRIBUTIONS OF SOFTMAX WITH RANDOM IMAGE TRANSFORMATIONS", "text": "Due to the randomness of the transforms, samples of the transformed image will have different softmax outputs. With each image, we obtain a distribution over the softmax outputs accumulated from multiple samples of the transformation. These are the steps to obtain the distribution of softmax:
1. For each input image $x$, obtain $N$ transformed samples: $t_x^{(i)} \sim T(x)$, $i = 1, \cdots, N$.
2. The transformed samples of the image $(t_x^{(1)}, t_x^{(2)}, \cdots, t_x^{(N)})$ are fed into the CNN individually to obtain their softmax probabilities. Let $\sigma_x^{(i)}$ be the softmax vector derived from $t_x^{(i)}$, and $\sigma_{x,j}^{(i)}$, for $j = 1, \cdots, C$, be the $j$-th component of the softmax vector. $C$ denotes the number of classes for the classification task. With each input image and a transformation method, there exists an underlying joint distribution of the CNN softmax probabilities, which we estimate with $N$ samples.
3. The underlying joint distribution of the softmax has a dimension equal to the number of classes (e.g., 10-D for MNIST). Performing accurate density estimation in high dimensions is challenging due to the curse of dimensionality. Here we make an approximation by computing the marginal distributions over each class. When we use the term “distribution of softmax”, we are referring to the marginalized distributions. We use kernel density estimation with a Gaussian kernel.
Let $h_{x,j}$ be the distribution accumulated from $\sigma_{x,j}^{(1)}, \cdots, \sigma_{x,j}^{(N)}$:
$$h_{x,j}(s) = \frac{1}{N\sqrt{2\pi}\,\delta} \sum_{i=1}^{N} \exp\left(-\frac{(s - \sigma_{x,j}^{(i)})^2}{2\delta^2}\right), \quad (1)$$
where $\delta$ is the kernel width and $s \in [0, 1]$ is the support of the softmax output. The distribution is then discretized into bins.
In this section, we study the effect of image transformation on the distribution of the softmax and make several interesting observations. In the following analyses, we study a LeNet5 CNN (LeCun et al., 1998) trained with MNIST. The adversarial images are generated using FGSM and, for the transformation defense, we use pixel deflection, with N=100 transformation samples per image. The image transformation magnitude is controlled by the number of pixel deflections, d. In the analysis here and in the experimental results in Section 5, when reporting the accuracies: on clean images, we consider images that have been correctly predicted by the CNN, hence without any transformation defense, the test accuracy is 100%. This follows the setup of Prakash et al. (2018), where images misclassified by the CNN are excluded, as it is not meaningful to evaluate any attack (and subsequent defense) methods on these images. For adversarial images, we consider the images that have been successfully attacked, so the test accuracy reflects the recovery rate and without any defense the accuracy is 0%.
In Figure 2a, we show how the image transformation affects the voting predictions on two MNIST classes. For each MNIST class, we take all the clean and adversarial test images, perform the transformation and then feed them through the CNN to obtain the final voting prediction. We observe that for class label 8, there is some recovery from the attack as some adversarial images are voted to the correct class after the transformation. However, some clean images get misclassified to other classes (e.g., 2 and 3). Although this means there is a deterioration of the accuracy on clean images, it is interesting that the misclassifications have the same voting classes as the transformed adversarial images. A similar pattern is observed for class label 6, where the clean images are misclassified to classes 4 and 5, which overlap with the vote predictions of some adversarial images at d = 300.
With the above analysis, we characterize the relationship between the clean and adversarial images in terms of the JS divergence of the distributions of the softmax at an increasing number of pixel deflections. For each MNIST digit class, we quantify the (1) distance of the distributions among the clean images (clean-clean, same class), (2) distance of the distributions among the adversarial images (adversarial-adversarial, same class), (3) distance of the distributions between clean and adversarial images (clean-adversarial, same class) and (4) distance of the distributions between clean images of this class and all other classes (clean-clean, different class). Here we give details on the calculation of the 4 distance measures. First, the distance between the distributions of softmax output for two input images, $x_1$ and $x_2$, is given by $d(h_{x_1}, h_{x_2}) = \frac{1}{C}\sum_{j=1}^{C} D_{JS}(h_{x_1,j}, h_{x_2,j})$, where $D_{JS}$ is the Jensen-Shannon divergence. Distance measures (1) and (2) are computed by taking the average distance of each image distribution to the centroid distribution, which is computed as $\mu(\{h_{x_1,j}, \cdots, h_{x_M,j}\}) = \frac{1}{M}\sum_{i=1}^{M} h_{x_i,j}$. (3) is computed by the distance between the centroids of the clean and adversarial distributions.
Finally, (4) is computed by the distance of the centroid distribution of the clean images of the particular class to the centroid distribution of another class, averaged over the other 9 classes.
In Figure 2b, we show results for two MNIST classes, but similar trends are observed across all classes (see Figure 8 in Appendix A). The clean-clean (same-class) distance starts off low, as all clean samples give high scores at the correct class. With an increasing number of deflections, there is increased variability in the softmax outputs and the resulting distributions. Next, the adversarial images of the same class are initially predicted as different incorrect classes without any transformation, and hence the adversarial-adversarial (same-class) distance starts off high and decreases with more transformation. The clean-adversarial (same-class) distance decreases with increasing image transformation, which shows that the distributions of softmax from the clean and adversarial images are becoming more similar. Finally, the clean-clean (different class) distance decreases as well, which is expected because we already know that with more transformation, the clean image voting accuracy deteriorates. However, we observe that the clean-clean (different class) distance decreases less rapidly and remains higher than the clean-clean (same-class) distance at d=300. This means the transformation still retains information about the differences between the classes. At d=800, all 4 distance measures converge, which suggests the number of deflections is too large and the differences between the classes are no longer retained.
Next, we visualize the morphing of the distributions with an increasing number of pixel deflections for an example image in Figure 3. For the purpose of visualization, instead of the per-class marginal distributions of the softmax, we perform kernel density estimation (KDE) on the softmax values for the marginals on classes 5 and 6. The softmax values of the other 8 classes are not shown. We have not excluded the areas where performing KDE results in a total probability exceeding one; the visualization still conveys our ideas and the distribution shapes well. Without any image transformation, as expected, the softmax outputs of the clean and adversarial images are very different. As the number of pixel deflections increases, each point evolves into a distribution due to the randomness of the transformation. The voting mechanism is straightforward: an image is classified to the class where the distribution mass is largest. In this example, the distribution shapes for the clean and adversarial image become more similar, and result in the same incorrect voting prediction at d=300. This shows the similarity of distributions obtained from clean and adversarial images after image transformation, which was illustrated in Figure 2b.
In Figure 4, we show more examples of the distributions obtained from clean images (A-H) and their adversarial counterparts (Ã-H̃) at d=300. For clean images A-D, voting predicts correctly, but on the adversarial counterparts Ã-D̃, voting predicts wrongly. For clean images E-H and the adversarial counterparts Ẽ-H̃, voting predicts wrongly. For completeness, we also show in Figure 9 in Appendix B examples of adversarial images where the transformation defense, coupled with voting, has successfully recovered the correct class.
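Putting the three steps of Section 3 together, the sketch below shows how the per-class distributions of softmax (Equation 1) and the majority-vote baseline are obtained from N transformed samples; `transform` and `cnn_softmax` are hypothetical stand-ins for the stochastic defense (e.g., pixel deflection) and the trained CNN, not the paper's actual code:

```python
import numpy as np

def softmax_samples(x, transform, cnn_softmax, n_samples=100):
    # Steps 1-2: draw N stochastic transforms of x and collect the CNN
    # softmax vectors; returns an (N, C) array.
    return np.stack([cnn_softmax(transform(x)) for _ in range(n_samples)])

def majority_vote(sigma):
    # Baseline defense: each transformed sample votes for its argmax class.
    votes = np.argmax(sigma, axis=1)
    return int(np.bincount(votes, minlength=sigma.shape[1]).argmax())

def softmax_distributions(sigma, n_bins=100, delta=0.05):
    # Step 3 / Eq. (1): per-class marginal distributions via Gaussian KDE,
    # discretized into n_bins over the softmax support [0, 1].
    n, c = sigma.shape
    s = np.linspace(0.0, 1.0, n_bins)            # support of the softmax output
    h = np.zeros((c, n_bins))
    for j in range(c):
        diffs = s[None, :] - sigma[:, j, None]   # shape (N, n_bins)
        h[j] = np.exp(-diffs**2 / (2 * delta**2)).sum(axis=0)
        h[j] /= n * np.sqrt(2 * np.pi) * delta
    return h  # input to the distribution classifier of Section 4
```

A design note: computing only the C marginal distributions keeps the classifier input at C × n_bins values, sidestepping the curse of dimensionality mentioned in step 3.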
With the random image transformation, there are similarities in the distribution shapes between the clean and adversarial images, as shown by the groupings and arrows (e.g., between E and Ã, Ẽ, F̃). This further supports our earlier observations. After the image transformation, the voting accuracy on the clean images deteriorates, but the resulting distributions have similar features as the distributions from the adversarial counterparts. This gives us an idea to enhance existing transformation-based defenses: to train a distribution classifier on the distributions obtained from clean images only, while improving the performance on both clean and adversarial images." }, { "heading": "4 ENHANCING TRANSFORMATION-BASED DEFENSES WITH DISTRIBUTION CLASSIFIER", "text": "Instead of voting, to reduce the drop in performance on clean images, we train a separate compact distribution classifier to recognize patterns in the distributions of softmax probabilities of clean images, as illustrated in Figure 1. For each clean image, the marginal distributions obtained are inputs to the distribution classifier, which learns to associate this distribution with the correct class label. If the individual transformed images were initially misclassified by the CNN, our distribution classifier should learn to recover the correct class. During the test phase, for any input image, clean or adversarial, we build the distribution of softmax from N transformed samples and feed it into our trained distribution classifier to obtain the final prediction. Note that our defense method does not require retraining of the original CNN, is agnostic to the attack method and can be integrated with most existing stochastic transformation-based methods.
Distribution classifiers: We investigate three distribution classification methods. First, we adapt a state-of-the-art distribution-to-distribution regression method, called the distribution regression network (DRN) (Kou et al., 2018) (details are included in Appendix C). We also experimented with random forest (RF), which averages the outputs from multiple decision trees. Finally, we experimented with multilayer perceptrons (MLP), which are fully connected neural networks with a softmax output layer. For this distribution classification task, we concatenate the distribution bins from the softmax classes into a single input vector for RF and MLP. For DRN and MLP, we use the cross-entropy loss, and the network architectures are chosen by cross-validation. For random forest, the Gini impurity is used as the splitting criterion, and the number of trees and maximum depth are tuned by cross-validation. The hyperparameter values are included in Appendix D.4." }, { "heading": "5 EXPERIMENTS AND DISCUSSION", "text": "In this section, we describe our experimental setup to evaluate the performance on clean and adversarial images with our distribution classifier method.
Datasets and CNN networks: We use the MNIST (LeCun et al., 1998), CIFAR10 and CIFAR100 (Krizhevsky & Hinton, 2009) datasets. For the CNN model for MNIST, we use LeNet5 (LeCun et al., 1998), which has 98.7% test accuracy. For CIFAR10 and CIFAR100, we use wide ResNet (Zagoruyko & Komodakis, 2016) with test accuracies of 95.7% and 78.9% respectively.
Attack methods: As introduced in Section 2, we use four adversarial attacks in the untargeted setting. In Appendix D.1, we have included the distortion metrics, the success rates and the hyperparameters.
The attacks are implemented using the CleverHans library (Papernot et al., 2018).
Transformation-based defenses: As a baseline, we use random pixel noise (RPN) as a defense method, where the noise for each pixel is sampled from a uniform distribution bounded in the $L_\infty$ measure. In addition, we use two existing transformation-based methods: pixel deflection (PD) (Prakash et al., 2018) and image random resize and pad (RRP) (Xie et al., 2017). Although these two methods have not been tested for MNIST, CIFAR10 and CIFAR100, we find that they work considerably well on these datasets and present the results here. The hyperparameter tuning for each defense is conducted on the validation sets. We select hyperparameters that give the best recovery from adversarial attack, regardless of the deterioration in accuracy on clean images. The hyperparameters are included in Appendix D.2.
To test the effectiveness of the transformation-based defenses before integrating with our defense method, we perform majority voting on the transformed image samples. This sets the baseline for our distribution classifier defense method. When reporting the test accuracies on clean images, we consider images that have been correctly predicted by the CNN; hence, without any defense method, the test accuracy is 100%. For adversarial images, we consider the images that have been successfully attacked, so the test accuracy reflects the recovery rate and without any defense the accuracy is 0%." }, { "heading": "5.1 MNIST RESULTS", "text": "For the MNIST dataset, N = 100 transformation samples were used for voting and for constructing the distribution of softmax. We found that the distribution classifiers required only 1000 training samples, a small fraction of the original 50,000. Figure 5 (left) shows the test accuracies of the three transformation-based defenses with majority voting and with the three distribution classifiers. Table 11 in Appendix D.5 shows the numerical figures of the results. First, we observe that the recovery on adversarial images with majority voting for the iterative methods IGSM, DeepFool and C&W is much better compared to the single-step FGSM. This is in line with the observations by Xie et al. (2017), who found their defense to be more effective for iterative attacks.
The distribution classifiers have improved accuracy over voting on the clean images, except when the voting accuracy was already high (e.g., 100% voting accuracy for PD on DeepFool). The mean improvement of the accuracy on the clean images is 1.7% for DRN. Hence, our distribution classifier method is stronger than voting. Voting simply takes the mode of the softmax probabilities of the transformed image, disregarding properties such as variance across the classes. In contrast, the distribution classifier learns from the distinctive features of the distribution of softmax.
Without training on the distributions obtained from adversarial images, our method has managed to improve the recovery rate, with a mean improvement of 5.9% for DRN. The three distribution classifier methods are comparable, except for some cases where DRN outperforms the other classifiers (e.g., PD adv., IGSM) and where MLP and RF have lower accuracy than voting (e.g., RPN adv., DeepFool and C&W). In Figure 4, we show that after image transformation, the distributions of softmax between the clean and adversarial images show some similarities and distinctive features.
In fact, all of the clean (A-H) and adversarial (Ã-H̃) images (class 6) are classified correctly by the distribution classifier. Even though the distribution classifier was only trained on distributions from the clean images (A-H), the distribution classifier can recover the correct class for the adversarial images where voting has failed (Ã-H̃). The distribution classifier does so by learning the distinctive shapes of the distributions associated with the digit class from the clean images, and is able to apply this to the adversarial images with similar distribution shapes. Furthermore, our distribution classifier is able to pick up subtle differences in the distribution features. Figure 6a shows examples of clean images with class label 5 that are correctly classified by our distribution classifier. It is interesting that although the distribution shapes for adversarial images C̃ and G shown in Figure 4 look similar, our distribution classifier is able to distinguish between the shapes for classes 5 and 6." }, { "heading": "5.1.1 NUMBER OF TRANSFORMED SAMPLES REQUIRED", "text": "We used N=100 transformed samples in our experiments. Hence, the evaluation time will be 100 times longer than taking a single sample. Here we study the effect of the number of samples. Figures 6b and 6c show the classification accuracies for voting and DRN as the number of transformed samples increases. On the clean images, both voting and DRN accuracies improve with a larger number of samples, with the performance of voting saturating while DRN's performance continues to increase with a widening gap. This shows that a sufficient number of samples is required to capture the features of the distribution of softmax. On the adversarial images, the accuracies stay more or less the same. Although having more transformed samples is beneficial for the performance on clean images, our distribution classifier improves the voting performance regardless of the number of samples." }, { "heading": "5.2 CIFAR10 AND CIFAR100 RESULTS", "text": "For the CIFAR10 and CIFAR100 datasets, N = 50 image transformation samples and 10,000 training data were used. Figure 5 (middle) shows the results for CIFAR10. All three distribution classifiers give comparable improvements over voting, except for MLP, which performs worse than voting for adversarial images with RPN on DeepFool. For CIFAR100 (Figure 5, right), the distribution classifiers mostly show improved performance over voting. There are exceptions where DRN (e.g., PD adv., FGSM) and MLP (e.g., RPN adv., DeepFool) have lower accuracy than voting. This suggests that for datasets with more classes, random forest may perform better than the other classifiers.
As explained in Section 3, in the results in Figure 5, we have excluded clean images which are misclassified by the CNN and the images where the attack has failed. To check that our method works on these images, we evaluated these images for CIFAR100 with the FGSM attack, random resize and padding, and the random forest classifier. Our results in Table 14 in the Appendix show that our distribution classifier method still outperforms majority voting." }, { "heading": "6 END-TO-END ATTACK ON DISTRIBUTION CLASSIFIER METHOD", "text": "Here we evaluate end-to-end attacks on our distribution classifier method (with DRN) on MNIST and CIFAR10. We use Boundary Attack (Brendel et al., 2017), which is a black-box decision-based attack.
We performed the attack on the base CNN classifier (CNN), the CNN with pixel deflection and voting (Vote), and the CNN with pixel deflection and a distribution classifier trained on clean images (DRN). In addition, we trained the distribution classifier on a mixture of distributions obtained from both clean and adversarial images obtained with IGSM on the base CNN, which can be seen as a lightweight adversarial training (DRN LAT), except that the CNN is kept fixed. Finally, we tested the attack on an adversarially-trained CNN (Adv trained CNN) by Madry et al. (2017) with allowed perturbations of $L_\infty \leq 0.3$. Since Boundary Attack uses the $L_2$ measure, the adversarially-trained CNN, which uses the $L_\infty$ metric, is not expected to perform well. For details of our implementation of Boundary Attack, please refer to Appendix E. Figure 7 shows the mean $L_2$ norm of the perturbations over 100 test images, with a maximum of 5000 iterations for the attack. The CNN and the adversarially-trained CNN have very low perturbations. The stochastic models, Vote, DRN and DRN LAT, have much higher perturbations with lower-quality adversarial images, and the difficulty of the attack increases in that order. This shows that the distribution classifier and the lightweight adversarial training extension are more difficult to attack by the Boundary Attack method compared to voting.
Athalye et al. (2018) have shown that under the white-box setting, where the attacker has full knowledge of the CNN model and the defense, random transformation defenses are susceptible to further attack by estimating the gradients using multiple transformation samples, in a method called Expectation over Transformation (EOT). To employ a white-box attack on our distribution classifier method, there are a few potential challenges. First, we use 50 to 100 transformed samples per image to accumulate the distribution of softmax. Attacking our method with EOT would be very time-consuming as it requires taking multiple batches of transformations, each with 50-100 samples. Next, we have shown our method works with different distribution classifier models, including the non-differentiable random forest. While there have been attacks proposed for random forests (Kantchelian et al., 2016), it is unclear how feasible it is to combine these attacks with EOT. We leave the evaluation of white-box attacks on our distribution classifier method for future work." }, { "heading": "7 CONCLUSION", "text": "Adversarial attacks on convolutional neural networks have gained significant research attention and stochastic input transformation defenses have been proposed. However, with transformation-based defenses, the performance on clean images deteriorates, and the exact mechanism by which this happens is unclear. In this paper, we conduct an in-depth analysis of the effects of stochastic transformation-based defenses on the softmax outputs of clean and adversarial images. We observe that after image transformation, the distributions of softmax obtained from clean and adversarial images share similar distinct features. Exploiting this property, we propose a method that trains a distribution classifier on the distributions of the softmax outputs of transformed clean images only, but shows improvements on both clean and adversarial images over majority voting. In our current work, we have considered untargeted attacks on the CNN, and it would be interesting to test our distribution classifier method with targeted attacks."
}, { "heading": "ACKNOWLEDGMENTS", "text": "We thank Harold Soh, Wang Wei, Terence Sim, Mahsa Paknezhad and Kaicheng Liang for their constructive discussions. This work is supported, in part, by the Biomedical Research Council of the Agency for Science, Technology and Research and the National University of Singapore." }, { "heading": "A DISTANCE BETWEEN DISTRIBUTIONS OF SOFTMAX FOR ALL MNIST CLASSES", "text": "In Section 3, we studied the 4 distance metrics for the distribution of softmax. Figure 8 shows the distance metrics for all ten MNIST classes with increasing number of pixel deflections." }, { "heading": "B EXAMPLES OF VOTING RECOVERING FROM ADVERSARIAL ATTACK", "text": "In Figure 9, we show examples where pixel deflection with voting recovers from the adversarial attack." }, { "heading": "C ADAPTATION OF DRN FOR DISTRIBUTION CLASSIFICATION", "text": "For one of the distribution classifier methods, we adapt a state-of-the-art distribution-to-distribution regression method, called distribution regression network (DRN) (Kou et al., 2018). DRN encodes an entire distribution in each network node and this compact representation allows it to achieve higher prediction accuracies for the distribution regression task compared to conventional neural networks. Since DRN shows superior regression performance, we adapt DRN for distribution classification in this work.\nOur adaption of the distribution classifier is shown on the right of Figure 10. The network consists of fully-connected layers, where each node encodes a distribution. The number of hidden layers and nodes per hidden layer are chosen by cross validation. The number of discretization bins for each distribution for the input layer and hidden layers is also tuned as hyperparameters. To adapt DRN for our distribution classification task, for the final layer, we have C nodes representing each class and we use 2 bins for each distribution to represent the logit output for the corresponding class. The cost function for the distribution classifier is the cross entropy loss on the logits. The distribution classifier is optimized by backpropagation using the Adam optimizer (Kingma & Ba, 2014). The weight initialization method follows Kou et al. (2018), where the weights are sampled from a uniform random distribution." }, { "heading": "D EXPERIMENTAL SETUP", "text": "D.1 ADVERSARIAL ATTACK HYPERPARAMETERS\nTables 1 to 3 show the hyperparameter settings used for the adversarial attacks. The attacks are implemented using the CleverHans library (Papernot et al., 2018). For DeepFool and C&W, the other hyperparameters used are the default values set in CleverHans. For L2 norm, we use the root-mean-square distortion normalized by total number of pixels, following previous works.\nD.2 IMAGE TRANSFORMATION DEFENSE PARAMETERS\nTables 4 to 6 show the image transformation parameters used for MNIST and CIFAR10 respectively. The hyperparameter tuning for each defense method is conducted on the validation set for each\ndataset. We select hyperparameters that give the best recovery from adversarial attack, regardless of the deterioration in accuracy on clean images.\nD.3 CLASS ACTIVATION MAPS FOR PIXEL DEFLECTION\nThe pixel deflection (Prakash et al., 2018) defense uses class activation maps (CAMs) (Zhou et al., 2016) to randomly select pixels to undergo the deflection step. In our experiments, we did not use class activation maps and instead randomly select pixels with equal probabilities. 
First, for the MNIST dataset, CAMs are unsuitable because the LeNet (LeCun et al., 1998) architecture does not have the global average pooling layers which are required for CAMs. For the CIFAR10 dataset, the wide ResNet (Zagoruyko & Komodakis, 2016) architecture uses a final layer of global average pooling, and so we tested CAMs on it. Table 7 compares the performance on clean and adversarial images using the FGSM and IGSM attacks, with and without CAMs, which shows that using CAMs does not cause a significant difference in performance. This may be because CAMs are more effective on larger images such as those in ImageNet, where there are many more background pixels.
D.4 DISTRIBUTION CLASSIFIER HYPERPARAMETERS
Our defense method uses a distribution classifier to train on distributions of softmax probabilities obtained from transformed samples of the clean images. For each image, we build the marginal distributions of the softmax for each class using kernel density estimation with a Gaussian kernel. The kernel width is optimized to be 0.05. For DRN and MLP, the network architecture of the distribution classifier and the optimization hyperparameters are chosen by cross-validation. For random forest, the number of trees and the maximum depth of the trees are tuned by cross-validation. The hyperparameters used are shown in Tables 8 to 10.
D.5 ACCURACY RESULTS FOR DISTRIBUTION CLASSIFIERS
Here we include the detailed numerical figures for the accuracies of majority voting and the distribution classifier methods. Tables 11 to 13 show the clean and adversarial test accuracies for the 4 attack methods and the 3 defense methods." }, { "heading": "E DETAILS OF IMPLEMENTATION OF BOUNDARY ATTACK", "text": "For Vote, DRN and DRN LAT, the model outputs are random because of the random image transformation. At each step of Boundary Attack, we allow the attack to query the model once, and this involves taking 50-100 transformed samples of the image to perform voting or to feed to the distribution classifier to obtain a prediction. To avoid overfitting to a fixed transformation pattern, the transformation is random at each step. Our criterion for an image being adversarial is that, out of 5 queries, the image is misclassified at least once. Because of the randomness of the model, the image returned by Boundary Attack may be classified to the correct class, and we increase the perturbation by increasing amounts until the image is misclassified. Note that to overcome the randomness, we could have performed multiple queries at each attack step, but because our models already use 50-100 transformed samples per query, this would be computationally infeasible." } ]
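As a final illustration, the adversarial-example criterion described in this appendix can be expressed as a short predicate; `stochastic_predict` is a hypothetical wrapper that internally draws the 50-100 transformed samples and returns a voting or distribution-classifier prediction:

```python
def is_adversarial(x_adv, true_label, stochastic_predict, n_queries=5):
    # Criterion from Appendix E: out of n_queries stochastic queries, the
    # image counts as adversarial if it is misclassified at least once.
    # Each call to stochastic_predict uses a fresh random transformation.
    return any(stochastic_predict(x_adv) != true_label
               for _ in range(n_queries))
```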
2020
ENHANCING TRANSFORMATION-BASED DEFENSES AGAINST ADVERSARIAL ATTACKS WITH A DISTRIBUTION CLASSIFIER
SP:beaa3dfef4bdf3d8fea64d4cf86911f45edd2873
[ "This paper proposes a method called Self-Taught Associative Memory (STAM) for Unsupervised Progressive Learning (UPL) , i.e., learning salient representation from streams of mostly unlabeled data with occasional class labels, where the number of class increases over time. The motivation of this paper is quite interesting in that the authors try to mimic how animals learn. The surrounding environments of animals are considered to be unlabeled, and animals gradually learn to distinguish between objects without explicit information. The model shed light on the problem of catastrophic forgetting by introducing dual-memory organization. To be specific, Short-Term Memory contains a set of centroids associated with the unlabeled data, whereas Long-Term Memory stores the prototypical centroids, which are frequently seen patterns. In addition, the model utilizes novelty detection technique to introduce new centroids to each layer of the model, and it prepares the newly created centroids to be associated with new classes. ", "This paper sets up a new problem based on a continuous stream of potentially partially labelled data, which the authors call the Unsupervised Progressive Learning problem. The paper also introduces a new model designed to approach this problem, called the STAM architecture, with many concepts applied in a novel way. The STAM architecture is tested on example problems from the UPL problem." ]
We first pose the Unsupervised Progressive Learning (UPL) problem: learning salient representations from a non-stationary stream of unlabeled data in which the number of object classes increases with time. If some limited labeled data is also available, those representations can be associated with specific classes, thus enabling classification tasks. To solve the UPL problem, we propose an architecture that involves an online clustering module, called Self-Taught Associative Memory (STAM). Layered hierarchies of STAM modules learn based on a combination of online clustering, novelty detection, forgetting outliers, and storing only prototypical representations rather than specific examples. The goal of this paper is to introduce the UPL problem, describe the STAM architecture, and evaluate the latter in the UPL context.
[]
[ { "authors": [ "F. Gregory Ashby", "W. Todd Maddox" ], "title": "Human category learning", "venue": "Annu. Rev. Psychol.,", "year": 2005 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE Trans. Pattern Anal. Mach. Intell.,", "year": 2013 }, { "authors": [ "Kevin S. Beyer", "Jonathan Goldstein", "Raghu Ramakrishnan", "Uri Shaft" ], "title": "When is ”nearest neighbor", "venue": "Proceedings of the 7th International Conference on Database Theory, ICDT", "year": 1999 }, { "authors": [ "Mathilde Caron", "Piotr Bojanowski", "Armand Joulin", "Matthijs Douze" ], "title": "Deep clustering for unsupervised learning of visual features", "venue": "In The European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Adam Coates", "Andrew Y Ng" ], "title": "Learning feature representations with k-means, pages 561–580", "venue": null, "year": 2012 }, { "authors": [ "Adam Coates", "Andrew Ng", "Honglak Lee" ], "title": "An analysis of single-layer networks in unsupervised feature learning", "venue": "In Proceedings of the fourteenth international conference on artificial intelligence and statistics,", "year": 2011 }, { "authors": [ "Gregory Cohen", "Saeed Afshar", "Jonathan Tapson", "André van Schaik" ], "title": "Emnist: an extension of mnist to handwritten letters", "venue": "ArXiv,", "year": 2017 }, { "authors": [ "Yuwei Cui", "Subutai Ahmad", "Jeff Hawkins" ], "title": "Continuous online sequence learning with an unsupervised neural network model", "venue": "Neural Comput.,", "year": 2016 }, { "authors": [ "S.M. Ali Eslami", "Nicolas Heess", "Theophane Weber", "Yuval Tassa", "David Szepesvari", "Koray Kavukcuoglu", "Geoffrey E. Hinton" ], "title": "Attend, infer, repeat: Fast scene understanding with generative models", "venue": "In Proceedings of the 30th International Conference on Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Li Fei-Fei", "R. Fergus", "P. Perona" ], "title": "One-shot learning of object categories", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2006 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning Volume 70,", "year": 2017 }, { "authors": [ "R. Geirhos", "P. Rubisch", "C. Michaelis", "M. Bethge", "F.A. Wichmann", "W. Brendel" ], "title": "Imagenettrained cnns are biased towards texture; increasing shape bias improves accuracy and robustness", "venue": "International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Alexander Gepperth", "Cem Karaoguz" ], "title": "Incremental learning with self-organizing maps", "venue": "Visualization (WSOM),", "year": 2017 }, { "authors": [ "Robert L Goldstone" ], "title": "Perceptual learning", "venue": "Annual review of psychology,", "year": 1998 }, { "authors": [ "Alexander Hinneburg", "Charu C. Aggarwal", "Daniel A. 
Keim" ], "title": "What is the nearest neighbor in high dimensional spaces", "venue": "In Proceedings of the 26th International Conference on Very Large Data Bases,", "year": 2000 }, { "authors": [ "Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "In ICLR 2019", "year": 2019 }, { "authors": [ "Yen-Chang Hsu", "Yen-Cheng Liu", "Anita Ramasamy", "Zsolt Kira" ], "title": "Re-evaluating continual learning scenarios: A categorization and case for strong baselines", "venue": "In NeurIPS Continual learning Workshop,", "year": 2018 }, { "authors": [ "Gabriel Huang", "Hugo Larochelle", "Simon Lacoste-Julien" ], "title": "Centroid networks for few-shot clustering and unsupervised few-shot classification", "venue": null, "year": 1902 }, { "authors": [ "X. Ji", "J. Henriques", "A. Vedaldi" ], "title": "Invariant infromation clustering for unsupervised image classification and segmentation", "venue": "In Proceedings of the International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Zhuxi Jiang", "Yin Zheng", "Huachun Tan", "Bangsheng Tang", "Hanning Zhou" ], "title": "Variational deep embedding: An unsupervised and generative approach to clustering", "venue": "In Proceedings of the 26th International Joint Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Ronald Kemker", "Marc McClure", "Angelina Abitino", "Tyler Hayes", "Christopher Kanan" ], "title": "Measuring catastrophic forgetting in neural networks", "venue": "AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization, 2014. cite arxiv:1412.6980Comment: Published as a conference paper at the 3rd International Conference for Learning Representations", "venue": "San Diego,", "year": 2015 }, { "authors": [ "Diederik P. Kingma", "Danilo J. Rezende", "Shakir Mohamed", "Max Welling" ], "title": "Semi-supervised learning with deep generative models", "venue": "In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2,", "year": 2014 }, { "authors": [ "Adam Kosiorek", "Hyunjik Kim", "Yee Whye Teh", "Ingmar Posner" ], "title": "Sequential attend, infer, repeat: Generative modelling of moving objects", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Dharshan Kumaran", "Demis Hassabis", "James L McClelland" ], "title": "What learning systems do intelligent agents need? complementary learning systems theory updated", "venue": "Trends in cognitive sciences,", "year": 2016 }, { "authors": [ "Y. Lecun", "L. Bottou", "Y. Bengio", "P. 
Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "David Lopez-Paz", "Marc’Aurelio Ranzato" ], "title": "Gradient episodic memory for continual learning", "venue": "In Proceedings of the 31st International Conference on Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Takeru Miyato", "Shin-ichi Maeda", "Shin Ishii", "Masanori Koyama" ], "title": "Virtual adversarial training: a regularization method for supervised and semi-supervised learning", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y. Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "In NIPS Workshop on Deep Learning and Unsupervised Feature Learning", "year": 2011 }, { "authors": [ "Avital Oliver", "Augustus Odena", "Colin Raffel", "Ekin D. Cubuk", "Ian J. Goodfellow" ], "title": "Realistic evaluation of deep semi-supervised learning algorithms", "venue": null, "year": 2018 }, { "authors": [ "German I. Parisi", "Ronald Kemker", "Jose L. Part", "Christopher Kanan", "Stefan Wermter" ], "title": "Continual lifelong learning with neural networks: A review", "venue": "Neural Networks,", "year": 2019 }, { "authors": [ "Cengiz Pehlevan", "Alexander Genkin", "Dmitri B Chklovskii" ], "title": "A clustering neural network model of insect olfaction", "venue": "51st Asilomar Conference on Signals, Systems, and Computers,", "year": 2017 }, { "authors": [ "Sylvestre-Alvise Rebuffi", "Alexander Kolesnikov", "Georg Sperl", "Christoph H. Lampert" ], "title": "icarl: Incremental classifier and representation learning", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Mengye Ren", "Eleni Triantafillou", "Sachin Ravi", "Jake Snell", "Kevin Swersky", "Joshua B. Tenenbaum", "Hugo Larochelle", "Richard S. Zemel" ], "title": "Meta-learning for semi-supervised few-shot classification", "venue": "In Proceedings of 6th International Conference on Learning Representations ICLR,", "year": 2018 }, { "authors": [ "Andrei A. Rusu", "Neil C. Rabinowitz", "Guillaume Desjardins", "Hubert Soyer", "James Kirkpatrick", "Koray Kavukcuoglu", "Razvan Pascanu", "Raia Hadsell" ], "title": "Progressive neural networks, 2016", "venue": null, "year": 2016 }, { "authors": [ "Hanul Shin", "Jung Kwon Lee", "Jaehong Kim", "Jiwon Kim" ], "title": "Continual learning with deep generative replay", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard S. Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": null, "year": 2017 }, { "authors": [ "Jost Tobias Springenberg" ], "title": "Unsupervised and semi-supervised learning with categorical generative adversarial networks", "venue": null, "year": 2015 }, { "authors": [ "Gido M. van de Ven", "Andreas S. 
Tolias" ], "title": "Three scenarios for continual learning", "venue": null, "year": 1904 }, { "authors": [ "Rajasekar Venkatesan", "Meng Joo Er" ], "title": "A novel progressive learning technique for multi-class classification", "venue": "Neurocomput.,", "year": 2016 }, { "authors": [ "Takeo Watanabe", "José E Náñez", "Yuka Sasaki" ], "title": "Perceptual learning without perception", "venue": "Nature, 413(6858):844,", "year": 2001 }, { "authors": [ "Jason Yosinski", "Jeff Clune", "Yoshua Bengio", "Hod Lipson" ], "title": "How transferable are features in deep neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2014 }, { "authors": [ "Chen Zeno", "Itay Golan", "Elad Hoffer", "Daniel Soudry" ], "title": "Task agnostic continual learning using online variational Bayes, 2018", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "We start by posing a challenging problem, referred to as Unsupervised Progressive Learning (UPL) (see Figure 1). In the UPL problem, the agent observes a sequence (or stream) of unlabeled data vectors {xt}t∈N with xt ∈ Rn. Each vector xt is associated with a class k(xt) and the vectors of class k follow a distribution Fk. The class information, however, is hidden from the agent. Occasionally, the agent may be given a small number of labeled examples of one or more classes. These examples are meant to associate “names” (i.e., class labels) with the learned representations enabling classification tasks in which the set of output classes stays constant (“persistent tasks”) or increases (“expanding tasks”).\nWe denote asLt the set of class labels the agent has seen up to time t. This set is gradually increasing, meaning that the agent progressively learns about more classes. In the UPL context, the goal is to learn in an online manner salient representations of the unlabeled input stream so that the agent can, at any point in time t, classify a given set of test data based on the set of classes Lt it knows about so far. We require an online learner for pragmatic reasons: it would not be possible or desirable in practice to store and/or process all previously seen data and learn a new model offline every time there is a change in Lt. The online nature of the problem constraints the solution space: methods that require multiple passes over the training data and/or randomly sampled minibatches are not applicable in the UPL context.\nWe assume that the distribution Fk associated with class k may also change with time – but this is a slow and gradual process so that an online learner can track changes in Fk. Abrupt changes would require that the agent forgets what was previously learned about class k – we do not consider that possibility in this paper.\nWe do not add any further constraints on the structure of the data sequence. For instance, it is possible that the learner first observes a labeled example of class k at time t (and so k ∈ Lt) even though it has not seen any unlabeled examples of that class prior to t – this would require a transfer-learning capability so that the learner can classify k based on representations it has previously learned from other classes. Another interesting scenario is when the unlabeled data arrive in separated class phases, which are unknown to the agent, so that each phase includes data from only few new classes – this is a challenging task from the perspective of catastrophic forgetting because the learner should not forget previously learned classes for which it does not see any new examples. We consider such UPL scenarios in Section 3.\nIt is plausible that UPL represents how animals learn, at least in the case of perceptual learning (Goldstone, 1998): they observe their environment, which is predominantly “unlabeled”, and so they learn to gradually distinguish between a growing number of different object categories even when they do not have a way yet to name them. Later, some of those classes may be associated with words (in the case of humans) (Ashby and Maddox, 2005), or more generally, with a specific taste, odor, reward, fear, etc. (Watanabe et al., 2001)." }, { "heading": "1.1 UPL VERSUS SIMILAR LEARNING PARADIGMS", "text": "1. 
, { "heading": "1.1 UPL VERSUS SIMILAR LEARNING PARADIGMS", "text": "1. Unsupervised and self-supervised learning: There have been great strides recently in learning data representations (Bengio et al., 2013) via clustering (Caron et al., 2018), generative models (Jiang et al., 2017; Eslami et al., 2016; Kosiorek et al., 2018; 2019), and information theory (Hjelm et al., 2019; Ji et al., 2019). While these methods can learn representations without data labels, they still require prior information about the number of classes present in a given dataset (to set the number of cluster centroids or class outputs) and, in the continual learning case, they suffer from catastrophic forgetting unless some form of replay is used.
2. Few-shot learning (FSL) and Meta-learning: Such methods attempt to recognize object classes not seen in a training set with only a single (or a handful of) labeled examples (Fei-Fei et al., 2006; Snell et al., 2017; Finn et al., 2017; Ren et al., 2018). FSL requires labeled data to learn good representations, whereas UPL only requires labeled data to associate already learned representations with new classes.
3. Semi-supervised learning (SSL): SSL addresses the scarcity of available labeled data for model training by leveraging large amounts of unlabeled training data to boost performance (Springenberg, 2015; Oliver et al., 2018; Miyato et al., 2018; Kingma et al., 2014). SSL requires both labeled and unlabeled data during the training process and in most cases it needs to store and process the labeled data repeatedly during the training process.
4. Continual learning (CL): Most CL methods rely on labeled data and knowledge of which tasks are learned or performed at any point in time (Hsu et al., 2018; Parisi et al., 2019; Kemker et al., 2018; van de Ven and Tolias, 2019; Lopez-Paz and Ranzato, 2017). Further, the most effective CL mechanisms address catastrophic forgetting using stored examples or generative replay (Gepperth and Karaoguz, 2017; Shin et al., 2017).
5. Transfer learning (TL): Such methods require pre-training on a large labeled dataset and so they are not applicable to UPL (Yosinski et al., 2014).
6. Progressive learning/networks: In (labeled) progressive learning, a supervised classification model must be able to learn in an online manner without prior knowledge of the number of classes (Venkatesan and Er, 2016; Rusu et al., 2016). However, the existing approaches in this area require supervision when a new class appears, and they suffer from catastrophic forgetting if they do not see data from a previous class for long periods of time." }, { "heading": "2 STAM ARCHITECTURE AND LEARNING ALGORITHM", "text": "The learning approach that we pursue in this work is based on online clustering, novelty detection, two separate short-term and long-term memories, and storing only prototypical representations rather than specific examples. In the following, we describe the STAM architecture as a sequence of its seven major components.
The notation is summarized in Appendix A. The image preprocessing pipeline is minimal and is described in Appendix B. The reasons we decided to not pursue a deep learning approach are discussed in Appendix C.
1. Hierarchy of increasing receptive fields: An input vector $x_t \in \mathbb{R}^n$ (an image in all subsequent examples) is analyzed through a hierarchy of $\Lambda$ layers. Instead of neurons or hidden-layer units, each layer consists of STAM units – in its simplest form, a STAM unit functions as an online clustering module. Each STAM processes one $\rho_l \times \rho_l$ patch (subvector) of the input at that layer.
The patches are overlapping, with a small stride (set to one pixel in our experiments), to accomplish translation invariance (similar to CNNs). The patch dimension $\rho_l$ increases in higher layers – the idea is that the first layer learns the smallest and most elementary patterns, while the top layer learns the largest and most complex patterns.
2. Online clustering: Every patch of each layer is clustered, in an online manner, to a set of centroids. These time-varying centroids form the prototypical patterns that the STAM architecture gradually learns at that layer. All STAM units of layer $l$ share the same set of centroids $C_l(t)$ – again for translation invariance.[1] Given the $m$'th input patch $x_{l,m}$ at layer $l$, the nearest centroid of $C_l$ selected for $x_{l,m}$ is
$$c_{l,j} = \arg\min_{c \in C_l} d(x_{l,m}, c) \quad (1)$$
where $d(x_{l,m}, c)$ is the Euclidean distance between the patch $x_{l,m}$ and centroid $c$.[2] The selected centroid is updated based on a learning rate parameter $\alpha$, as follows:
$$c_{l,j} = \alpha\, x_{l,m} + (1 - \alpha)\, c_{l,j}, \quad 0 < \alpha < 1 \quad (2)$$
A higher $\alpha$ value makes the learning process faster but less predictable. We do not use a decreasing value of $\alpha$ because the goal is to keep learning in a non-stationary environment rather than convergence to a stable centroid. If the centroid $c_{l,j}$ is selected by more than one patch of the same input, the centroid is updated based on the closest patch to that centroid.
An online clustering algorithm that is similar to our approach (and asymptotically equivalent to k-means) can be implemented with a simple recurrent neural network of excitatory and inhibitory spiking neurons using strictly Hebbian learning, as shown recently (Pehlevan et al., 2017).
3. Novelty detection: When an input patch $x_{l,m}$ at layer $l$ is significantly different from all centroids at that layer (i.e., its distance to the nearest centroid is a statistical outlier), a new centroid is created in $C_l$ based on $x_{l,m}$. We refer to this event as Novelty Detection (ND). This function is necessary so that the architecture can learn centroids associated with new classes after they appear in the unlabeled data stream.
To do so, we estimate in an online manner the distribution of the Euclidean distance between input patches and their nearest centroid (separately for each layer). We sample a randomly chosen patch from each input vector, only considering the last 1000 inputs. The novelty detection threshold at layer $l$ is denoted by $\hat{D}_l$ and it is defined as the 95-th percentile ($\beta = 0.95$) of the distance distribution.
[1] We drop the time index $t$ from this point on, but it is still implied that the centroids are dynamically learned over time.
[2] We have also experimented with the $L_1$ distance metric, with only minimal differences.
4. Dual-memory organization: New centroids are stored temporarily in a Short-Term Memory (STM) of limited capacity $\Delta$ (separate for each layer). Every time a centroid is selected as the nearest neighbor of an input patch, it is updated based on (2). If an STM centroid $c_{l,j}$ is selected $s_{l,j} > \theta$ times, it is copied to the Long-Term Memory (LTM) for that layer. We refer to this event as memory consolidation. The LTM has (practically) unlimited capacity and the learning rate is much smaller (in our experiments, set to zero).
This memory organization is inspired by the Complementary Learning Systems framework (Kumaran et al., 2016), where the STM role is played by the hippocampus and the LTM role by the cortex.
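A minimal sketch of the per-layer mechanics of components 2-4 (online centroid updates, percentile-based novelty detection, LRU eviction, and STM-to-LTM consolidation) follows; the class below is an illustrative simplification, not the exact implementation: for instance, it updates the threshold statistics on every patch rather than on one sampled patch per input, and the default parameter values are assumptions.

```python
import numpy as np

class STAMLayer:
    # One layer of the hierarchy: a shared STM/LTM centroid pool updated
    # online, one patch at a time. Parameter defaults are illustrative.

    def __init__(self, stm_capacity=400, alpha=0.1, theta=30, beta=95.0):
        self.stm = []            # entries: [centroid, select_count, last_used]
        self.ltm = []            # consolidated centroids (frozen, rate ~ 0)
        self.stm_capacity = stm_capacity
        self.alpha, self.theta, self.beta = alpha, theta, beta
        self.recent_dists = []   # nearest-centroid distances, last 1000 seen
        self.clock = 0

    def process_patch(self, x):
        self.clock += 1
        pool = [e[0] for e in self.stm] + self.ltm
        if not pool:                                   # bootstrap: seed STM
            self.stm.append([x.copy(), 0, self.clock])
            return
        dists = [np.linalg.norm(x - c) for c in pool]
        j, d = int(np.argmin(dists)), min(dists)
        self.recent_dists = (self.recent_dists + [d])[-1000:]
        if d > np.percentile(self.recent_dists, self.beta):
            # novelty detection: create a new STM centroid, evicting the
            # least-recently used one if the STM is at capacity
            if len(self.stm) >= self.stm_capacity:
                lru = min(range(len(self.stm)), key=lambda i: self.stm[i][2])
                del self.stm[lru]
            self.stm.append([x.copy(), 0, self.clock])
        elif j < len(self.stm):
            entry = self.stm[j]                        # STM hit: Eq. (2) update
            entry[0] = self.alpha * x + (1 - self.alpha) * entry[0]
            entry[1] += 1
            entry[2] = self.clock
            if entry[1] > self.theta:                  # consolidate into LTM
                self.ltm.append(entry[0].copy())
                del self.stm[j]
        # LTM hits (j >= len(self.stm)) leave the centroid unchanged
```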
This dual-memory scheme is necessary to distinguish between infrequently seen patterns that can be forgotten, and new patterns that are frequently seen after they first appear.\nWe initialize the pool of STM centroids at each layer using randomly sampled patches from the unlabeled stream (a single patch from each image, to maximize diversity).\nWhen the STM pool of centroids at a layer is full, the introduction of a new centroid (created through novelty detection) causes the removal of an earlier centroid. We use the Least-Recently Used (LRU) policy to remove atypical centroids that have not been recently selected by any input. Figure 2 illustrates this dual-memory organization.\n5. Associating centroids with classes: Suppose that we have seen some labeled examples XL(t) from a set of classes L(t) up to time t. In the UPL context, we only use such labeled examples to associate existing LTM centroids at time t (learned strictly from unlabeled data) with the set of classes in L(t).\nGiven a labeled example of class k, suppose that there is a patch x in that example for which the nearest centroid is c. That patch contributes the following association between centroid c and class k:\nfx,c(k) = e^{−d(x,c)/D̄l} (3)\nwhere D̄l is a normalization constant (calculated as the average distance between input patches and centroids).\nThe class-association vector gc between centroid c and any class k is computed by aggregating all such associations across all labeled examples in XL:\ngc(k) = [∑_{x∈XL(k)} fx,c(k)] / [∑_{k′∈L(t)} ∑_{x∈XL(k′)} fx,c(k′)], k = 1, . . . , |L(t)| (4)\nNote that ∑_k gc(k) = 1.\n6. Class informative centroids: If a centroid is associated with only one class k (gc(k) = 1), only labeled examples of that class select that centroid. At the other extreme, if a centroid is equally likely to be selected by examples of any labeled class (gc(k) ≈ 1/|L(t)|), the selection of that centroid does not provide any significant information about the class of the corresponding input.\nWe identify the centroids that are Class INformative (CIN) as those that are associated with at least one class more than expected by chance. Specifically, a centroid c is CIN if\nmax_{k∈L(t)} gc(k) > 1/|L(t)| + γ (5)\nwhere 1/|L(t)| is the chance term and γ is an additional significance term.\n7. Classification using a hierarchy of centroids: At test time, we are given an input x of class k(x) and infer its class as k̂(x). The classification task is a “biased voting” process in which every patch of x, at any layer, votes for a single class as long as that patch selects a CIN centroid.\nSpecifically, if a patch xl,m of layer l selects a CIN centroid c, then that patch votes vl,m(k) = max_{k′∈L(t)} gc(k′) for the class k that has the highest association with c, and zero for all other classes. If c is not a CIN centroid, the vote of that patch is vl,m(k) = 0 for all classes.\nThe vote of layer l for class k is the average vote across all patches in layer l (as illustrated in Figure 3):\nvl(k) = [∑_{m∈Ml} vl,m(k)] / |Ml| (6)\nwhere Ml is the set of patches in layer l. The final inference for input x is the class with the highest cumulative vote across all layers (a code sketch of this voting rule is given below):\nk̂(x) = arg max_k ∑_{l=1}^{Λ} vl(k) (7)" }, { "heading": "3 EVALUATION", "text": "To evaluate the STAM architecture in the UPL context, we consider two different scenarios: Incremental UPL and Uniform UPL. In the Incremental UPL case, small groups of classes appear in successive phases.
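Before turning to the results, the voting rule in (5)-(7) referenced above can be sketched as follows. The function and argument names are our own illustrative choices; the centroid arrays and association matrices would come from the clustering and association steps of Section 2.

```python
import numpy as np

def classify(patches_per_layer, centroids_per_layer, g_per_layer,
             num_classes, gamma=0.15):
    """Sketch of the hierarchical voting rule in eqs. (5)-(7).
    patches_per_layer[l]  : list of patch vectors at layer l
    centroids_per_layer[l]: array of shape (n_centroids_l, patch_dim_l)
    g_per_layer[l]        : array (n_centroids_l, num_classes) of g_c(k), eq. (4)"""
    total = np.zeros(num_classes)
    for patches, C, g in zip(patches_per_layer, centroids_per_layer, g_per_layer):
        layer_votes = np.zeros(num_classes)
        for x in patches:
            j = int(np.argmin(np.linalg.norm(C - x, axis=1)))  # nearest centroid
            k = int(np.argmax(g[j]))
            if g[j, k] > 1.0 / num_classes + gamma:            # CIN test, eq. (5)
                layer_votes[k] += g[j, k]                      # biased vote v_{l,m}
        total += layer_votes / max(len(patches), 1)            # layer average, eq. (6)
    return int(np.argmax(total))                               # inference, eq. (7)
```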
In the following results, new classes are introduced two at a time in each phase, and they are only seen in that phase. STAMs must be able to both recognize new classes when they first appear in the stream, and also remember all previously learned classes without catastrophic forgetting. In the Uniform UPL case, all classes appear with equal probability in the stream. The Uniform UPL scenario is more relevant when the distribution Fk of each class k may gradually change over time. The results for the Uniform scenario are presented in Appendix D.\nThe classification task that we focus on in the Incremental UPL case is expanding, meaning that in each phase we need to classify all classes seen so far. Given a few labeled examples for the classes that have been present in the stream up to time t, the algorithm is asked to perform object classification on a 1000-image test dataset. The datasets we evaluate on are MNIST (Lecun et al., 1998), EMNIST (balanced split with 47 classes) (Cohen et al., 2017), and SVHN (Netzer et al., 2011).\nFor each classification task, we average results over five trials (different unlabeled data streams). In each trial, we have a randomly sampled hold-out set of 1500 images. Then, we sample from the remaining data to form the unlabeled stream. We perform each classification task five times, using randomly sampled test inputs from the hold-out set. So, each result is the average of 25 classification evaluations.\nWe use a 3-layer STAM hierarchy – all hyperparameter values are reported in Appendix A. The robustness of the results as we vary these hyperparameter values is shown in Appendix E." }, { "heading": "3.1 INCREMENTAL UPL", "text": "As we introduce new classes to the incremental UPL stream (see Figure 4), the architecture recognizes previously learned classes without any major degradation in classification accuracy (left column). The average accuracy per phase is decreasing, which is due to the increasingly difficult expanding classification task. For EMNIST, we only show the average accuracy because there are 47 total classes. In all datasets, we observe that layer-3 (corresponding to the largest receptive field) contains the highest fraction of CIN centroids (center column). The ability to recognize new classes is perhaps best visualized in the LTM centroid count (right column). During each phase, the LTM count stabilizes until a sharp spike occurs at the start of the next phase, when new classes are introduced. This reinforces the claim that the LTM pool of centroids (i) is stable when there are no new classes, and (ii) is able to recognize new classes via novelty detection when they appear. In the EMNIST experiment, as the number of classes increases towards 47, we gradually see fewer “spikes” in the LTM centroids for the lower receptive fields, which is expected given the repetition of patterns at that small patch size. However, the highly CIN layer-3 continues to recognize new classes and create centroids, even when the last few classes are introduced." }, { "heading": "3.2 ABLATIONS", "text": "Several ablations are presented in Figure 5. On the left, we remove the LTM capabilities and only use STM centroids for classification. During the first two phases, there is little (if any) difference in classification accuracy. However, we see a clear drop-off during phases 3-5.
This suggests that, without the LTM mechanisms, patterns from classes that are no longer seen in the stream are forgotten over time, and STAMs can only successfully classify classes that have been seen recently. We also investigate the importance of having static LTM centroids rather than dynamic centroids (center). Specifically, we replace the static LTM with a dynamic LTM in which the centroids are adjusted with the same learning rate parameter α as in STM. The accuracy suffers drastically because the introduction of new classes “takes over” LTM centroids of previously learned classes, after the latter are removed from the stream. Similar to the removal of LTM, we do not see the effects of “forgetting” until phases 3-5. Note that the degradation due to a dynamic LTM is less severe than that from removing LTM completely. Finally, we look at the effects of removing layers from the STAM hierarchy (right). We see a small drop in accuracy after removing layer 3, and a large drop in accuracy after also removing layer 2. The importance of having a deeper hierarchy would be more pronounced in datasets with higher-resolution images or videos, potentially showing multiple objects in the same frame. In such cases, CIN centroids can appear at any layer, from the lowest to the highest." }, { "heading": "3.3 EFFECT OF UNLABELED AND LABELED DATA", "text": "We next examine the effects of unlabeled and labeled data on the STAM architecture (Figure 6). As we vary the length of the unlabeled data stream (left), we see that STAMs can actually perform well even with much less unlabeled data. This suggests that the STAM architecture may be applicable even where the data stream is much shorter than in the experiments of this paper. A longer stream would be needed, however, if there are many classes and some of them are infrequent. The accuracy “saturation” observed when increasing the unlabeled data from 20000 to 60000 can be explained by the memory mechanism, which does not update centroids after they move to LTM. As shown in the ablation studies, this is necessary to avoid forgetting classes that no longer appear in the stream. The effect of varying the number of labeled examples per class (right) is much more pronounced.\nWe see that the STAM architecture can perform well above chance even in the extreme case of only a single labeled example (or a small handful of them) per class." }, { "heading": "4 RELATED WORK AND DISCUSSION", "text": "Even though clustering has been used successfully in the past as a representation learning scheme (Coates et al., 2011; Coates and Ng, 2012), its effectiveness gradually drops as the input dimensionality increases (Hinneburg et al., 2000; Beyer et al., 1999). In the STAM architecture, we avoid this issue by clustering smaller subvectors (patches) of the input data. If those subvectors are still of high dimensionality, another approach is to reduce the intrinsic dimensionality of the input data at each layer by reconstructing that input using representations (selected centroids) from the previous layer – we have experimented with this approach but have not included it here because it is not required for the datasets and tasks of this paper.\nThis paper does not compare the performance of STAMs with other methods because, to the extent of our knowledge, none of the existing approaches in the literature are directly applicable to the UPL problem. In a follow-up study, we plan to adapt the most relevant approaches so that they can be applied in the UPL context.
One of these approaches is Incremental Classifier and Representation Learning (iCaRL) (Rebuffi et al., 2017), which learns representations and classifiers from a progressive stream of labeled data and stores training examples for each class. In (Huang et al., 2019), the goal is to do unsupervised few-shot classification, and the authors evaluate various methods in a manner similar to UPL but without learning from a non-stationary stream. FearNet (Kemker et al., 2018) replaces the storage of training examples with a generative model but still processes a labeled stream for both the classifier and the generator. Bayesian Gradient Descent (BGD) (Zeno et al., 2018) is a method for preventing catastrophic forgetting when the learner is unaware of the task schedule and so cannot take any special action when the input data distribution changes. A comparison of methods for continuous online sequence learning, including the Hierarchical Temporal Memory model, was conducted in (Cui et al., 2016) – those methods do not address the UPL problem, however.\nWe firmly believe that in order to mimic human intelligence, learning methods should be able to learn in a streaming manner and in the absence of supervision. Animals do not “save off” labeled examples to train in parallel with unlabeled data, they do not know how many classes exist in their environment, and they do not have to replay/dream periodically all their past experiences to avoid forgetting them. The proposed STAM architecture addresses the desiderata that are often associated with Lifelong Learning:\n1. Online learning: STAMs constantly update their centroids with every example. There is no separate training stage, and there is no specific task for which the network optimizes the features it learns. Any task that requires classification will of course require one or a few labeled examples so that the corresponding clusters that were formed previously are now associated with the name of a class. However, STAMs do not need these labeled examples to learn efficient data representations.\n2. Transfer learning: The hierarchical nature of the proposed architecture means that features learned (in an unsupervised manner) at lower-level STAMs can be reused in different tasks that higher-level STAMs perform.\n3. Resistance to catastrophic forgetting: The introduction of a new class or prototype will lead to the creation of new clusters at some STAMs in the hierarchy (e.g., layer-1 STAMs will learn new elementary visual features if we start feeding them natural images instead of MNIST examples – while a STAM at a higher level would create a new cluster when it first starts seeing examples of scooters, but without affecting the cluster associated with bicycles).\n4. Expanding learning capacity: The learning capacity of a STAM architecture depends on two factors: the number of STAMs and the maximum number of centroids that each STAM can store in STM and LTM. The limited capacity constraint in the STM pool requires forgetting recently created centroids that have not been recently updated with new examples. The unlimited capacity of the LTM pool of centroids, on the other hand, allows the system to gradually learn an unlimited number of classes, even if it does not see examples of all previously learned classes.\n5. No direct access to previous experience: A STAM only needs to store the centroids of the clusters it has learned so far. Those centroids correspond to prototypes, allowing the STAM to generalize. All previously seen exemplars are discarded."
}, { "heading": "A STAM NOTATION AND HYPERPARAMETERS", "text": "Table 3: MNIST/EMNIST Architecture\nLayer ρl ∆ (incremental) ∆ (uniform)\n1 8 400 2000 2 13 400 2000 3 20 400 2000\nTable 4: SVHN Architecture\nLayer ρl ∆ (incremental) ∆ (uniform)\n1 10 2000 10000 2 14 2000 10000 3 18 2000 10000\nB IMAGE PREPROCESSING\nGiven that each STAM operates on individual image patches, we perform patch normalization rather than image normalization. We chose a normalization operation that helps to identify similar patterns despite variations in the brightness and contrast: every patch is transformed to zero-mean, unit variance before clustering.At least for the datasets we consider in this paper, grayscale images result in higher classification accuracy than color.\nWe have also experimented with ZCA whitening and Sobel filtering. ZCA whitening did not work well because it requires estimating a transformation from an entire image dataset (and so it is not compatible with the online nature of the UPL problem). Sobel filtering did not work well because STAM clustering works better with filled shapes rather than the fine edges produced by Sobel filters." }, { "heading": "C WHY NOT USE DEEP LEARNING?", "text": "Given that deep neural networks have become the new orthodoxy in machine learning, we need to address the question: why not use a deep learning approach to solve the UPL problem? Why to rely on online clustering instead?\n1) With few exceptions, deep learning approaches have a fixed architecture and capacity (measured in terms of their hidden unit parameters) and so they are not able to automatically grow as the network is presented with new classes or tasks. Methods such as Progressive Networks (Snell et al., 2017) allow the network to grow but only when instructed that a new task/class is given – and by a pre-determined growth factor. The STAM architecture, on the other hand, grows by creating new centroids only when needed and by a growth factor that is determined in a self-supervised manner by the “degree of novelty” in the data.\n2) Deep learning approaches require multiple passes over the training data, and thus they would need to either store and replay previously seen data or to learn a generative model that synthesizes realistic data when needed. Especially in semi-supervised methods such as VAT (Miyato et al., 2018), it is critical that the limited given labeled data are processed repeatedly. In the STAM architecture, data is never stored both because that may be impractical and because the data distribution in the UPL context may gradually shift – storing old data can be misleading.\n3) A neural network (deep or shallow) learns a nonlinear embedding of the input data in a lowdimensional continuous space in which it is more efficient, presumably, to perform classification, clustering, generative modeling, or other tasks. This compression of the input dimensionality however comes at a high cost: the latent features learned by a neural net are not interpretable, they may be derived from properties that are unrelated to the causal/defining properties of the corresponding classes (Geirhos et al., 2019), and they can make the network susceptible to adversarial attacks. STAMs, on the other hand, use clustering to learn common patterns at a hierarchy of increasing receptive fields. These patterns can be easily interpreted by humans because they represent prototypes (common patterns) at each layer.\n4) The embedding that a neural network learns is typically assumed to be time-invariant. 
This is problematic in the UPL context because the number of classes or tasks increases with time, without external supervision whenever that happens. It is not clear how to gradually adjust an embedding in an online manner, without any external supervision, when the classes/tasks change.\n5) Learning in deep neural networks requires iterative nonconvex optimization processes, such as SGD, that can get stuck in local minima. Online clustering is a much simpler computation in which we only need to compute a certain distance metric between the input vector and all existing centroids – and then update the nearest centroid based on the corresponding input." }, { "heading": "D UNIFORM UPL", "text": "In order to examine whether the STAM architecture can learn all classes simultaneously, but without knowing how many classes exist, we also evaluate the STAM architecture in a uniform UPL scenario (Figure 5). Note that LTM centroids converge to a constant value, at least at the top layer. Each class is recognized at a different level of accuracy, depending on the similarity between that class and others.\nE ADDITIONAL HYPERPARAMETER SWEEPS AND ABLATIONS\nWe examine the effects of STAM hyperparameters in Figure 8. (a) As we decrease the learning rate α, we see a degradation in performance. This is likely due to the static nature of the LTM centroids - with low α values, the LTM centroids primarily represent the patch they were initialized as. (b) As we vary γ, there is little difference in our final classification rates. This suggests that the maximum gl,j(k) values are quite high, which may not be the case in other datasets besides SVHN. (c) We observe that STAM is robust to changes in Θ. (d,e) The STM size ∆ has a major effect on the number of learned LTM centroids and on classification accuracy. (f) The accuracy in phase-5 for different numbers of layer-3 LTM centroids (and corresponding ∆ values). The accuracy shows diminishing returns after we have about 1000 LTM centroids at layer-3. (g,h) As β increases, the number of LTM centroids increases (due to a lower rate of novelty detection); if β ≥ 0.9, the classification accuracy is about the same." }, { "heading": "F COMPARISON WITH AUTO-ENCODER APPROACH", "text": "Even though the goal of this paper is not to present an extensive comparison between STAM and deep-learning methods, in this section we perform a limited such comparison with a simple autoencoder. Specifically, we train a convolutional autoencoder (CAE) with the images from the unlabeled data stream. The number of training epochs and the batch size are both set to 1, so that each unlabeled example is only used once. The encoder consists of three convolution layers with ReLU activations, embedding inputs into a 64-dimensional latent space. The decoder consists of three transposed convolution layers with ReLU activations. The final layer uses linear activations, and the network is trained to optimize Euclidean reconstruction error. The representations in the 64-dimensional latent space are used to perform K-nearest-neighbor (KNN) classification. The CAE architecture, given in Figure 10, is trained using Adam optimization (Kingma and Ba, 2014) with a learning rate of 10−4 and no decay.\nThe STAM architecture outperforms the CAE approach by a large margin (Figure 9, Incremental UPL, SVHN dataset). In follow-up work we plan to develop more sophisticated deep learning approaches for solving the UPL problem and compare them with STAMs under different conditions."
}, { "heading": "G MEMORY FOOTPRINT ANALYSIS", "text": "The memory requirement of the STAM model can be calculated as:\nM = Λ∑ l=1 ρ2l · (|Cl|+ ∆) (8)\nFor the 3-layer SVHN architecture with |Cl| ≈ 3000 LTM centroids in every layer and ∆ = 2000, the memory footprint is 5, 064, 000 pixels, equivalent to roughly 5000 grayscale SVHN digits. This memory requirement can be significantly reduced however. Figure 8(f) shows that the accuracy remains almost the same when ∆ = 500 and |Cl| ≈ 1000. With these values the memory footprint reduces to about 950,000 pixels, equivalent to roughly 930 grayscale SVHN digits.\nBy comparison, the CAE architecture has 4, 683, 425 trainable parameters, which should be stored at floating-point precision. With four bytes per weight, then STAM model would require 9500004683425×4 ≈ 5% of the CAE’s memory footprint. Future work can decrease the STAM memory requirement further by merging similar LTM centroids." } ]
2019
null
SP:3504773d062b05d1f7c358dfdc0da2ad78f5bc5e
[ "This paper proposes a sample elicitation framework to tackle the problem of eliciting credible samples from agents for complex distributions. The authors suggest that deep neural frameworks can be applied in this framework for sample elicitation through the derivations. The authors also show the connection between the problem of sample elicitation and f-GAN. However, some problems in the proof on sample elicitation should be clarified or carefully explained.", "This paper studies the sample elicitation problem where agents are asked to report samples. The goal is then to evaluate the quality of these reported samples by means of a scoring function S. Following previous related works, the authors use the equivalence between maximizing the expected proper score and minimizing some f-divergence. Their approach relies on the dual expression of the f-divergence which writes as a maximum over a set of functions t. Theoretical guarantees are given for f-scorings obtained (with or without ground truth samples) by first computing the empirical optimal function t, then plugged to estimate the f-divergence. Finally, a deep learning approach is proposed by considering functions f parameterized as sparse deep neural networks." ]
It is important to collect credible training samples (x, y) for building data-intensive learning systems (e.g., a deep learning system). In the literature, there is a line of studies on eliciting distributional information from self-interested agents who hold relevant information. Asking people to report a complex distribution p(x), though theoretically viable, is challenging in practice. This is primarily due to the heavy cognitive load required for human agents to reason about and report this high-dimensional information. Consider the example where we are interested in building an image classifier via first collecting a certain category of high-dimensional image data. While classical elicitation results apply to eliciting a complex, generative (and continuous) distribution p(x) for this image data, we are interested in eliciting samples xi ∼ p(x) from agents. This paper introduces a deep learning aided method to incentivize credible sample contributions from selfish and rational agents. The challenge in doing so is to design an incentive-compatible score function that scores each reported sample so as to induce truthful reports, rather than arbitrary or even adversarial ones. We show that with accurate estimation of a certain f-divergence function we are able to achieve approximate incentive compatibility in eliciting truthful samples. We then present an efficient estimator with theoretical guarantees via studying the variational form of the f-divergence function. Our work complements the literature of information elicitation via introducing the problem of sample elicitation. We also show a connection between this sample elicitation problem and f-GAN, and how this connection can help reconstruct an estimator of the distribution based on collected samples.
[]
[ { "authors": [ "Jacob D Abernethy", "Rafael M Frongillo" ], "title": "A characterization of scoring rules for linear properties", "venue": "In Conference on Learning Theory, pp", "year": 2012 }, { "authors": [ "Sanjeev Arora", "Rong Ge", "Yingyu Liang", "Tengyu Ma", "Yi Zhang" ], "title": "Generalization and equilibrium in generative adversarial nets (GANs)", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Marc G Bellemare", "Ivo Danihelka", "Will Dabney", "Shakir Mohamed", "Balaji Lakshminarayanan", "Stephan Hoyer", "Rémi Munos" ], "title": "The Cramer distance as a solution to biased Wasserstein gradients", "venue": "arXiv preprint arXiv:1705.10743,", "year": 2017 }, { "authors": [ "Glenn W. Brier" ], "title": "Verification of forecasts expressed in terms of probability", "venue": "Monthly Weather Review,", "year": 1950 }, { "authors": [ "Michel Broniatowski", "Amor Keziou" ], "title": "Parametric estimation and tests through divergences", "venue": "Technical report, Citeseer,", "year": 2004 }, { "authors": [ "Michel Broniatowski", "Amor Keziou" ], "title": "Parametric estimation and tests through divergences and the duality technique", "venue": "Journal of Multivariate Analysis,", "year": 2009 }, { "authors": [ "Yuheng Bu", "Shaofeng Zou", "Yingbin Liang", "Venugopal V Veeravalli" ], "title": "Estimation of KL divergence: Optimal minimax rate", "venue": "IEEE Transactions on Information Theory,", "year": 2018 }, { "authors": [ "Luca De Alfaro", "Michael Shavlovsky", "Vassilis Polychronopoulos" ], "title": "Incentives for truthful peer grading", "venue": "arXiv preprint arXiv:1604.03178,", "year": 2016 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "ImageNet: A large-scale hierarchical image database", "venue": "In Conference on Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Monroe D Donsker", "SR Srinivasa Varadhan" ], "title": "Asymptotic evaluation of certain Markov process expectations for large time", "venue": "I. Communications on Pure and Applied Mathematics,", "year": 1975 }, { "authors": [ "Rafael Frongillo", "Ian Kash" ], "title": "On elicitation complexity", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Rafael Frongillo", "Ian A Kash" ], "title": "Vector-valued property elicitation", "venue": "In Conference on Learning Theory, pp", "year": 2015 }, { "authors": [ "Alice Gao", "James R Wright", "Kevin Leyton-Brown" ], "title": "Incentivizing evaluation via limited access to ground truth: Peer-prediction makes things worse", "venue": "arXiv preprint arXiv:1606.07042,", "year": 2016 }, { "authors": [ "Chao Gao", "Yuan Yao", "Weizhi Zhu" ], "title": "Generative adversarial nets for robust scatter estimation: A proper scoring rule perspective", "venue": null, "year": 1903 }, { "authors": [ "Weihao Gao", "Sewoong Oh", "Pramod Viswanath" ], "title": "Density functional estimators with k-nearest neighbor bandwidths", "venue": "In International Symposium on Information Theory,", "year": 2017 }, { "authors": [ "Tilmann Gneiting", "Adrian E. 
Raftery" ], "title": "Strictly proper scoring rules, prediction, and estimation", "venue": "Journal of the American Statistical Association,", "year": 2007 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of Wasserstein GANs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Yanjun Han", "Jiantao Jiao", "Tsachy Weissman" ], "title": "Minimax rate-optimal estimation of divergences between discrete distributions", "venue": "arXiv preprint arXiv:1605.09124,", "year": 2016 }, { "authors": [ "Victor Richmond Jose", "Robert F. Nau", "Robert L. Winkler" ], "title": "Scoring rules, generalized entropy and utility maximization", "venue": null, "year": 2006 }, { "authors": [ "Takafumi Kanamori", "Taiji Suzuki", "Masashi Sugiyama" ], "title": "divergence estimation and two-sample homogeneity test under semiparametric density-ratio models", "venue": "IEEE Transactions on Information Theory,", "year": 2011 }, { "authors": [ "Yuqing Kong", "Grant Schoenebeck" ], "title": "Water from two rocks: Maximizing the mutual information", "venue": "In Conference on Economics and Computation,", "year": 2018 }, { "authors": [ "Yuqing Kong", "Grant Schoenebeck" ], "title": "An information theoretic framework for designing information elicitation mechanisms that reward truth-telling", "venue": "Transactions on Economics and Computation,", "year": 2019 }, { "authors": [ "Yuqing Kong", "Katrina Ligett", "Grant Schoenebeck" ], "title": "Putting peer prediction under the micro (economic) scope and making truth-telling focal", "venue": "In International Conference on Web and Internet Economics,", "year": 2016 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "N.S. Lambert", "D.M. Pennock", "Y. Shoham" ], "title": "Eliciting properties of probability distributions", "venue": "In Conference on Electronic Commerce,", "year": 2008 }, { "authors": [ "Young Kyung Lee", "Byeong U Park" ], "title": "Estimation of Kullback–Leibler divergence by local likelihood", "venue": "Annals of the Institute of Statistical Mathematics,", "year": 2006 }, { "authors": [ "Xingguo Li", "Junwei Lu", "Zhaoran Wang", "Jarvis Haupt", "Tuo Zhao" ], "title": "On tighter generalization bound for deep neural networks: CNNs, ResNets, and beyond", "venue": "arXiv preprint arXiv:1806.05159,", "year": 2018 }, { "authors": [ "Tengyuan Liang" ], "title": "On how well generative adversarial networks learn densities: Nonparametric and parametric results", "venue": "arXiv preprint arXiv:1811.03179,", "year": 2018 }, { "authors": [ "Shuang Liu", "Olivier Bousquet", "Kamalika Chaudhuri" ], "title": "Approximation and convergence properties of generative adversarial learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "James E. Matheson", "Robert L. 
Winkler" ], "title": "Scoring rules for continuous probability distributions", "venue": "Management Science,", "year": 1976 }, { "authors": [ "John McCarthy" ], "title": "Measures of the value of information", "venue": "Proceedings of the National Academy of Sciences of the United States of America,", "year": 1956 }, { "authors": [ "Mehryar Mohri", "Afshin Rostamizadeh", "Ameet Talwalkar" ], "title": "Foundations of Machine Learning", "venue": "MIT press,", "year": 2018 }, { "authors": [ "XuanLong Nguyen", "Martin J Wainwright", "Michael I Jordan" ], "title": "Estimating divergence functionals and the likelihood ratio by convex risk minimization", "venue": "IEEE Transactions on Information Theory,", "year": 2010 }, { "authors": [ "Sebastian Nowozin", "Botond Cseke", "Ryota Tomioka" ], "title": "f-gan: Training generative neural samplers using variational divergence minimization", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Avraham Ruderman", "Mark Reid", "Darío García-García", "James Petterson" ], "title": "Tighter variational representations of f -divergences via restriction to probability measures", "venue": "arXiv preprint arXiv:1206.4664,", "year": 2012 }, { "authors": [ "Leonard J. Savage" ], "title": "Elicitation of personal probabilities and expectations", "venue": "Journal of the American Statistical Association,", "year": 1971 }, { "authors": [ "Johannes Schmidt-Hieber" ], "title": "Nonparametric regression using deep neural networks with relu activation function", "venue": "arXiv preprint arXiv:1708.06633,", "year": 2017 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting", "venue": "The Journal of Machine Learning Research,", "year": 1929 }, { "authors": [ "Ingo Steinwart", "Chloé Pasin", "Robert Williamson", "Siyu Zhang" ], "title": "Elicitation and identification of properties", "venue": "In Conference on Learning Theory, pp", "year": 2014 }, { "authors": [ "Charles J Stone" ], "title": "Optimal global rates of convergence for nonparametric regression", "venue": "The annals of statistics,", "year": 1982 }, { "authors": [ "Masashi Sugiyama", "Taiji Suzuki", "Takafumi Kanamori" ], "title": "Density Ratio Estimation in Machine Learning", "venue": null, "year": 2012 }, { "authors": [ "Taiji Suzuki", "Masashi Sugiyama", "Jun Sese", "Takafumi Kanamori" ], "title": "Approximating mutual information by maximum likelihood density ratio estimation. In New challenges for feature selection in data mining and knowledge discovery", "venue": null, "year": 2008 }, { "authors": [ "Qing Wang", "Sanjeev R Kulkarni", "Sergio Verdú" ], "title": "Divergence estimation of continuous distributions based on data-dependent partitions", "venue": "IEEE Transactions on Information Theory,", "year": 2005 }, { "authors": [ "Qing Wang", "Sanjeev R Kulkarni", "Sergio Verdú" ], "title": "Divergence estimation for multidimensional densities via k-nearest-neighbor distances", "venue": "IEEE Transactions on Information Theory,", "year": 2009 }, { "authors": [ "Robert L. 
Winkler" ], "title": "Scoring rules and the evaluation of probability assessors", "venue": "Journal of the American Statistical Association,", "year": 1969 }, { "authors": [ "Zhiyi Zhang", "Michael Grabchak" ], "title": "Nonparametric estimation of Kullback–Leibler divergence", "venue": "Neural computation,", "year": 2014 }, { "authors": [ "Xingyu Zhou" ], "title": "On the Fenchel duality between strong convexity and Lipschitz continuous gradient", "venue": "arXiv preprint arXiv:1803.06573,", "year": 2018 }, { "authors": [ "Lemma D" ], "title": "The entropy of the neural network", "venue": null, "year": 2020 }, { "authors": [ "Proof. See Schmidt-Hieber" ], "title": "If the function f is strongly convex with parameter μ0 > 0 and has Lipschitz continuous gradient with parameter L0 > 0, then the Fenchel duality f† of f is 1/L0-strongly convex and has 1/μ0-Lipschitz continuous gradient (therefore, f† itself is Lipschitz continuous)", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "The availability of a large quantity of credible samples is crucial for building high-fidelity machine learning models. This is particularly true for deep learning systems that are data-hungry. Arguably, the most scalable way to collect a large amount of training samples is to crowdsource from a decentralized population of agents who hold relevant sample information. The most popular example is the build of ImageNet (Deng et al., 2009).\nThe main challenge in eliciting private information is to properly score reported information such that the self-interested agent who holds a private information will be incentivized to report truthfully. At a first look, this problem of eliciting quality data is readily solvable with the seminal solution for eliciting distributional information, called the strictly proper scoring rule (Brier, 1950; Winkler, 1969; Savage, 1971; Matheson & Winkler, 1976; Jose et al., 2006; Gneiting & Raftery, 2007): suppose we are interested in eliciting information about a random vector X = (X1, ..., Xd−1, Y ) ∈ Ω ⊆ Rd, whose probability density function is denoted by p with distribution P. As the mechanism designer, if we have a sample x drawn from the true distribution P, we can apply strictly proper scoring rules to elicit p: the agent who holds p will be scored using S(p, x). S is called strictly proper if it holds for any p and q that Ex∼P[S(p, x)] > Ex∼P[S(q, x)]. The above elicitation approach has two main caveats that limited its application:\n• When the outcome space |Ω| is large and is even possibly infinite, it is practically impossible for any human agents to report such a distribution with reasonable efforts. This partially inspired a line of follow-up works on eliciting property of the distributions, which we will discuss later.\n• The mechanism designer may not possess any ground truth samples.\nIn this work we aim to collect credible samples from self-interested agents via studying the problem of sample elicitation. Instead of asking each agent to report the entire distribution p, we hope to elicit samples drawn from the distribution P truthfully. We consider the samples xp ∼ P and xq ∼ Q. In analogy to strictly proper scoring rules1, we aim to design a score function S s.t. Ex∼P[S(xp, x′)] > Ex∼P[S(xq, x′)] for any q ̸= p, where x′ is a reference answer that can be defined using elicited reports. Often, this scoring procedure requires reports from multiple peer agents, and x′ is chosen as a function of the reported samples from all other agents (e.g., the average across all the reported xs, or a randomly selected x). This setting will relax the requirements of high reporting complexity, and has wide applications in collecting training samples for machine learning tasks. Indeed our goal resembles similarity to property elicitation (Lambert et al., 2008; Steinwart et al., 2014; Frongillo & Kash, 2015b), but we emphasize that our aims are different - property elicitation aims to elicit statistical properties of a distribution, while ours focus on eliciting samples drawn from the distributions. In certain scenarios, when agents do not have the complete knowledge or power to compute these properties, our setting enables elicitation of individual sample points.\nOur challenge lies in accurately evaluating reported samples. We first observe that the f -divergence function between two properly defined distributions of the samples can serve the purpose of incentivizing truthful report of samples. 
We proceed with using deep learning techniques to solve the score function design problem via a data-driven approach. We then propose a variational approach that enables us to estimate the divergence function efficiently using reported samples, via a variational form of the f-divergence function, through a deep neural network. These estimation results help us establish an approximate incentive compatibility in eliciting truthful samples. It is worth noting that our framework also generalizes to the setting where there is no access to ground truth samples, where we can only rely on reported samples. There we show that our estimation results admit an approximate Bayesian Nash Equilibrium for agents to report truthfully. Furthermore, in our estimation framework, we use a generative adversarial approach to reconstruct the distribution from the elicited samples.\nWe want to emphasize that the deep learning based estimators considered above are able to handle complex data. And with our deep learning solution, we are further able to provide estimates for the divergence functions used in our scoring mechanisms with provable finite sample complexity. In this paper, we focus on developing theoretical guarantees - other parametric families either cannot handle complex data, e.g., it is hard to handle images using kernel methods, or do not have provable guarantees on the sample complexity.\nOur contributions are three-fold. (1) We tackle the problem of eliciting complex distributions by proposing a sample elicitation framework. Our deep learning aided solution concept makes it practical to solicit complex sample information from human agents. (2) Our framework covers the case when the mechanism designer has no access to ground truth information, which contributes to the peer prediction literature. (3) On the technical side, we develop estimators via deep learning techniques with strong theoretical guarantees. This not only helps us establish approximate incentive-compatibility, but also enables the designer to recover the targeted distribution from elicited samples. Our contribution can therefore be summarized as\n“eliciting credible training samples by deep learning, for deep learning”." }, { "heading": "1.1 RELATED WORKS", "text": "The most relevant literature to our paper is that on strictly proper scoring rules and property elicitation. Scoring rules were developed for eliciting truthful predictions (probabilities) (Brier, 1950; Winkler, 1969; Savage, 1971; Matheson & Winkler, 1976; Jose et al., 2006; Gneiting & Raftery, 2007). Characterization results for strictly proper scoring rules are given in McCarthy (1956); Savage (1971); Gneiting & Raftery (2007). Property elicitation notices the challenge of eliciting complex distributions (Lambert et al., 2008; Steinwart et al., 2014; Frongillo & Kash, 2015b). For instance, Abernethy & Frongillo (2012) characterize the score functions for eliciting linear properties, and Frongillo & Kash (2015a) study the complexity of eliciting properties. Another line of relevant research is peer prediction, where solutions can help elicit private information when ground truth verification might be missing (De Alfaro et al., 2016; Gao et al., 2016; Kong et al., 2016; Kong & Schoenebeck, 2018; 2019).\n1Our specific formulation and goal differ in their details.
Our work complements the information elicitation literature by proposing and studying the question of sample elicitation, using a variational approach to estimate f-divergence functions.\nOur work also extends the line of work on divergence estimation. The simplest way to estimate divergence starts with the estimation of the density functions (Wang et al., 2005; Lee & Park, 2006; Wang et al., 2009; Zhang & Grabchak, 2014; Han et al., 2016). Another method, based on the variational form (Donsker & Varadhan, 1975) of the divergence function, then comes into play (Broniatowski & Keziou, 2004; 2009; Nguyen et al., 2010; Kanamori et al., 2011; Ruderman et al., 2012; Sugiyama et al., 2012), where the estimation of the divergence is modeled as the estimation of the density ratio between two distributions. The variational form of the divergence function also motivates the well-known Generative Adversarial Network (GAN) (Goodfellow et al., 2014), which learns the distribution by minimizing the Jensen-Shannon divergence. Follow-up works include Nowozin et al. (2016); Arjovsky et al. (2017); Gulrajani et al. (2017); Bellemare et al. (2017), with theoretical analysis in Liu et al. (2017); Arora et al. (2017); Liang (2018); Gao et al. (2019). See also Gao et al. (2017); Bu et al. (2018) for this line of work." }, { "heading": "1.2 NOTATIONS", "text": "For the distribution P, we denote by Pn the empirical distribution given a set of samples {xi}_{i=1}^n following P, i.e., Pn = (1/n) · ∑_{i=1}^n δ_{xi}, where δ_{xi} is the Dirac measure at xi. We denote by ∥v∥s = (∑_{i=1}^d |v(i)|^s)^{1/s} the ℓs norm of the vector v ∈ Rd, where 1 ≤ s < ∞ and v(i) is the i-th entry of v. We also denote by ∥v∥∞ = max_{1≤i≤d} |v(i)| the ℓ∞ norm of v. For any real-valued continuous function f : X → R, we denote by ∥f∥_{Ls(P)} := [∫_X |f(x)|^s dP]^{1/s} the Ls(P) norm of f and by ∥f∥s := [∫_X |f(x)|^s dµ]^{1/s} the Ls(µ) norm of f(·), where µ is the Lebesgue measure. Also, we denote by ∥f∥∞ = sup_{x∈X} |f(x)| the L∞ norm of f(·). For any real-valued functions g(·) and h(·) defined on some unbounded subset of the real positive numbers, such that h(α) is strictly positive for all large enough values of α, we write g(α) ≲ h(α) and g(α) = O(h(α)) if |g(α)| ≤ c · h(α) for some positive absolute constant c and any α > α0, where α0 is a real number. We denote by [n] the set {1, 2, . . . , n}." }, { "heading": "2 PRELIMINARY", "text": "We formulate the question of sample elicitation." }, { "heading": "2.1 SAMPLE ELICITATION", "text": "We consider two scenarios. We start with an easier case where we, as the mechanism designer, have access to a certain number of ground truth samples. This is a setting that resembles the proper scoring rule setting. Then we move to the harder case where the inputs to our mechanism can only be elicited samples from agents.\nMulti-sample elicitation with ground truth samples. Suppose that the agent holds n samples, with each of them independently drawn from P, i.e., xi ∼ P2 for i ∈ [n]. The agent can report each sample arbitrarily, which is denoted as ri(xi) : Ω → Ω. There are n data points {x∗i}_{i∈[n]} independently drawn from the ground truth distribution Q3.
We are interested in designing a score function S(·) that takes as inputs each ri(·) and {rj(xj), x∗j}_{j∈[n]}: S(ri(xi), {rj(xj), x∗j}_{j∈[n]}), such that if the agent believes that x∗ is drawn from the same distribution, x∗ ∼ P, then for any {rj(·)}_{j∈[n]}, it holds with probability at least 1 − δ that\n∑_{i=1}^n E_{x,x∗∼P}[S(xi, {xj, x∗j}_{j∈[n]})] ≥ ∑_{i=1}^n E_{x,x∗∼P}[S(ri(xi), {rj(xj), x∗j}_{j∈[n]})] − n · ϵ.\n2Though we use x to denote the samples we are interested in, x potentially includes both the features and labels (x, y), as in the context of supervised learning.\n3The number of ground truth samples can be different from n, but we keep them the same for simplicity of presentation. It will mainly affect the terms δ and ϵ in our estimations.\nWe name the above (δ, ϵ)-properness (per sample) for sample elicitation. When δ = ϵ = 0, it reduces to a notion similar to the properness definition in the scoring rule literature (Gneiting & Raftery, 2007). We also write ri = ri(xi) as shorthand when there is no confusion. The agent believes that her samples are generated from the same distribution as the ground truth samples, i.e., P and Q are the same distribution.\nSample elicitation with peer samples. Suppose there are n agents, each holding a sample xi ∼ Pi, where the distributions {Pi}_{i∈[n]} are not necessarily the same - this models the fact that agents can have subjective biases or local observation biases. This is a more standard peer prediction setting. We denote their joint distribution by P = P1 × P2 × .... × Pn. Similar to the previous setting, each agent can report her sample arbitrarily, which is denoted as ri(xi) : Ω → Ω for any i ∈ [n]. We are interested in designing and characterizing a score function S(·) that takes as inputs each ri(·) and {rj(xj)}_{j̸=i}: S(ri(xi), {rj(xj)}_{j̸=i}), such that for any {rj(·)}_{j∈[n]}, it holds with probability at least 1 − δ that\nE_{x∼P}[S(xi, {rj(xj) = xj}_{j̸=i})] ≥ E_{x∼P}[S(ri(xi), {rj(xj) = xj}_{j̸=i})] − ϵ.\nWe name the above (δ, ϵ)-Bayesian Nash Equilibrium (BNE) in truthful elicitation. We only require that agents are all aware of the above information structure as common knowledge, but they do not need to form beliefs about the details of other agents' sample distributions. Each agent's sample is private to herself.\n2.2 f-DIVERGENCE\nIt is well known that maximizing the expected proper score is equivalent to minimizing a corresponding Bregman divergence (Gneiting & Raftery, 2007). More generically, we take the perspective that divergence functions have great potential to serve as score functions for eliciting samples. We define the f-divergence between two distributions P and Q, with probability density functions p and q respectively, as\nDf(q∥p) = ∫ p(x) f(q(x)/p(x)) dµ. (2.1)\nHere f(·) is a function satisfying certain regularity conditions, which will be specified later. Solving our elicitation problem involves evaluating Df(q∥p) successively based on the distributions P and Q, without knowing the probability density functions p and q. Therefore, we have to resort to a form of Df(q∥p) which does not involve the analytic forms of p and q, but instead their sample forms. Following from Fenchel's convex duality, it holds that\nDf(q∥p) = max_{t(·)} E_{x∼Q}[t(x)] − E_{x∼P}[f†(t(x))], (2.2)\nwhere f†(·) is the Fenchel dual of the function f(·), defined as f†(u) = sup_{v∈R}{uv − f(v)}, and the max is taken over all functions t(·) : Ω ⊂ Rd → R."
}, { "heading": "3 SAMPLE ELICITATION: A GENERATIVE ADVERSARIAL APPROACH", "text": "Recall from (2.2) that Df (q∥p) admits the following variational form:\nDf (q∥p) = max t(·) Ex∼Q[t(x)]− Ex∼P[f†(t(x))]. (3.1)\nWe highlight that via functional derivative, (3.1) is solved by t∗(x; p, q) = f ′(θ∗(x; p, q)), where θ∗(x; p, q) = q(x)/p(x) is the density ratio between p and q. Our elicitation builds upon such a variational form (3.1) and the following estimators,\nt̂(·; p, q) = argmin t(·) Ex∼Pn [f†(t(x))]− Ex∼Qn [t(x)],\nD̂f (q∥p) = Ex∼Qn [t̂(x)]− Ex∼Pn [f†(t̂(x))]." }, { "heading": "3.1 ERROR BOUND AND ASSUMPTIONS", "text": "Suppose we have the following error bound for estimating Df (q∥p): for any probability density functions p and q, it holds with probability at least 1− δ(n) that\n|D̂f (q∥p)−Df (q∥p)| ≤ ϵ(n), (3.2)\nwhere δ(n) and ϵ(n) will be specified later in Section 4. To obtain such an error bound, we need the following assumptions.\nAssumption 3.1 (Bounded Density Ratio). The density ratio θ∗(x; p, q) = q(x)/p(x) is bounded such that 0 < θ0 ≤ θ∗ ≤ θ1 holds for positive absolute constants θ0 and θ1.\nThe above assumption is standard in related literature (Nguyen et al., 2010; Suzuki et al., 2008), which requires that the probability density functions p and q lie on a same support. For simplicity of presentation, we assume that this support is Ω ⊂ Rd. We define the β-Hölder function class on Ω as follows.\nDefinition 3.2 (β-Hölder Function Class). The β-Hölder function class with radius M is defined as Cβd (Ω,M) = { t(·) : Ω ⊂ Rd → R : ∑ ∥α∥1<β ∥∂αt∥∞ + ∑ ∥α∥1=⌊β⌋ sup x,y∈Ω,x ̸=y |∂αt(x)− ∂αt(y)| ∥x− y∥β−⌊β⌋∞ ≤M } ,\nwhere ∂α = ∂α1 · · · ∂αd with α = (α1, . . . , αd) ∈ Nd.\nWe assume that the function t∗(·; p, q) is β-Hölder, which guarantees the smoothness of t∗(·; p, q).\nAssumption 3.3 (β-Hölder Condition). The function t∗(·; p, q) ∈ Cβd (Ω,M) for some positive absolute constants M and β, where Cβd (Ω,M) is the β-Hölder function class in Definition 3.2.\nIn addition, we assume that the following regularity conditions hold for the function f(·) in the definition of f -divergence in (2.1).\nAssumption 3.4 (Regularity of Divergence Function). The function f(·) is smooth on [θ0, θ1] and f(1) = 0. Also, it holds that\n(i) f is µ0-strongly convex on [θ0, θ1], where µ0 is a positive absolute constant; (ii) f has L0-Lipschitz continuous gradient on [θ0, θ1], where L0 is a positve absolute constant.\nWe highlight that we only require that the conditions in Assumption 3.4 hold on the interval [θ0, θ1], where the absolute constants θ0 and θ1 are specified in Assumption 3.1. Thus, Assumption 3.4 is mild and it holds for many commonly used functions in the definition of f -divergence. For example, in Kullback-Leibler (KL) divergence, we take f(u) = − log u, which satisfies Assumption 3.4; in Jenson-Shannon divergence, we take f(u) = u log u − (u + 1) log(u + 1), which also satisfies Assumption 3.4.\nWe will show that under Assumptions 3.1, 3.3, and 3.4, the bound (3.2) holds. See Theorem 4.2 in Section 4 for details." }, { "heading": "3.2 MULTI-SAMPLE ELICITATION WITH GROUND TRUTH SAMPLES", "text": "In this section, we focus on multi-sample elicitation with ground truth samples. Under this setting, as a reminder, the agent will report multiple samples. After the agent reported her samples, the mechanism designer obtains a set of ground truth samples {x∗i }i∈[n] ∼ Q to serve the purpose of evaluation. 
This setting falls into the standard strictly proper scoring rule framework.\nOur mechanism is presented in Algorithm 1.\nAlgorithm 1 consists of two steps: step 1 is to compute the function t̂(·; p, q), which enables us, in step 2, to pay the agent using a linearly transformed estimate of the divergence between the reported samples and the true samples. We have the following result.\nTheorem 3.5. The f-scoring mechanism in Algorithm 1 achieves (2δ(n), 2bϵ(n))-properness.\nAlgorithm 1 f-scoring mechanism for multiple-sample elicitation with ground truth\n1. Compute\nt̂(·; p, q) = argmin_{t(·)} E_{x∼Pn}[f†(t(x))] − E_{x∗∼Qn}[t(x∗)].\n2. For i ∈ [n], pay reported sample ri using\nS(ri, {rj, x∗j}_{j=1}^n) := a − b (E_{x∼Qn}[t̂(x; p, q)] − f†(t̂(ri; p, q)))\nfor some constants a, b > 0.\nThe proof is mainly based on the error bound in estimating the f-divergence and its non-negativity. Not surprisingly, if the agent believes her samples are generated from the same distribution as the ground truth samples, and that our estimator can well characterize the difference between the two sets of samples, she will be incentivized to report truthfully to minimize the difference. We defer the proof to Section B.1." }, { "heading": "3.3 SINGLE-TASK ELICITATION WITHOUT GROUND TRUTH SAMPLES", "text": "The above mechanism in Algorithm 1, while intuitive, has the following two caveats:\n• The agent needs to report multiple samples (multi-task/sample elicitation); • Multiple samples from the ground truth distribution are needed.\nTo deal with these caveats, we consider single point elicitation in an elicitation-without-verification setting. Suppose there are 2n agents, each holding a sample xi ∼ Pi4. We randomly partition the agents into two groups, and denote the joint distributions of each group's samples as P and Q, with probability density functions p and q for each of the two groups, respectively. Correspondingly, there is a set of n agents for each group who are required to report their single data point according to the two distributions P and Q, i.e., they hold {x^p_i}_{i∈[n]} ∼ P and {x^q_i}_{i∈[n]} ∼ Q, respectively. As an interesting note, this is also similar to the setup of a Generative Adversarial Network (GAN), where one distribution corresponds to a generative distribution x | y = 1, and another to x | y = 0. This is a connection that we will further explore in Section 5 to recover distributions from elicited samples.\nWe denote the joint distribution of p and q by p ⊕ q (with distribution P ⊕ Q), and the product of the marginal distributions by p × q (with distribution P × Q). We consider the divergence between the two distributions:\nDf(p ⊕ q∥p × q) = max_{t(·)} E_{x∼P⊕Q}[t(x)] − E_{x∼P×Q}[f†(t(x))].\nMotivated by the connection between mutual information and KL divergence, we define generalized f-mutual information as follows, which characterizes the generic connection between a generalized f-mutual information and f-divergence.\nDefinition 3.6 (Kong & Schoenebeck (2019)). The generalized f-mutual information between p and q is defined as\nIf(p; q) = Df(p ⊕ q∥p × q).\nFurther, it is shown in Kong & Schoenebeck (2018; 2019) that the data processing inequality for mutual information holds for If(p; q) when f is strictly convex. We define the following estimators,\nt̂(·; p ⊕ q, p × q) = argmin_{t(·)} E_{x∼Pn×Qn}[f†(t(x))] − E_{x∼Pn⊕Qn}[t(x)],\nD̂f(p ⊕ q∥p × q) = E_{x∼Pn⊕Qn}[t̂(x; p ⊕ q, p × q)] − E_{x∼Pn×Qn}[f†(t̂(x; p ⊕ q, p × q))], (3.3)\nwhere Pn and Qn are the empirical distributions of the reported samples. We denote by x ∼ Pn ⊕ Qn | ri the conditional distribution when the first variable is fixed with realization ri. Our mechanism is presented in Algorithm 2.\n4This choice of 2n is for simplicity of presentation.
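Before the algorithm box, a note on constructing its inputs: the empirical distributions Pn ⊕ Qn and Pn × Qn appearing in (3.3) can be formed directly from the two groups' reports. The pairing below is one simple choice of ours, not prescribed by the paper: aligned pairs approximate the joint, while shuffling one side breaks the coupling and approximates the product of the marginals.

```python
import numpy as np

def joint_and_product_samples(x_group1, x_group2, seed=0):
    """Sketch: form samples from P_n ⊕ Q_n and P_n × Q_n out of the
    single reports of the two randomly partitioned agent groups.
    x_group1, x_group2: arrays of shape (n, d)."""
    rng = np.random.default_rng(seed)
    joint = np.hstack([x_group1, x_group2])          # pairs (x_i, y_i) ~ P_n ⊕ Q_n
    perm = rng.permutation(len(x_group2))
    product = np.hstack([x_group1, x_group2[perm]])  # pairs (x_i, y_σ(i)) ~ P_n × Q_n
    return joint, product
```

These two sample sets can then be fed to a variational estimator such as the one sketched earlier to obtain D̂f(p ⊕ q∥p × q).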
We denote x ∼ Pn ⊕Qn | ri as the conditional distribution when the first variable is fixed with realization ri. Our mechanism is presented in Algorithm 2.\n4This choice of 2n is for the simplicity of presentation.\nAlgorithm 2 f -scoring mechanism for sample elicitation 1. Compute t̂(·; p⊕ q, p× q) = argmint(·) Ex∼Pn×Qn [f†(t(x))]− Ex∼Pn⊕Qn [t(x)]. 2. Pay each reported sample ri using:\nS(ri, {rj}j ̸=i) := a+ b ( Ex∼Pn⊕Qn|ri [t̂(x; p⊕ q, p× q)]−Ex∼Pn×Qn|ri [f †(t̂(x; p⊕ q, p× q))] )\nfor some constants a, b > 0.\nSimilar to Algorithm 1, the main step in Algorithm 2 is to estimate the f -divergence between Pn × Qn and Pn ⊕Qn using reported samples. Then we pay agents using a linear-transformed form of it. We have the following result. Theorem 3.7. The f -scoring mechanism in Algorithm 2 achieves (2δ(n), 2bϵ(n))-BNE.\nThe theorem is proved by error bound in estimating f -divergence, a max argument, and the data processing inequality for f -mutual information. We defer the proof in Section B.2.\nThe job left for us is to establish the error bound in estimating the f -divergence to obtain ϵ(n) and δ(n). Roughly speaking, if we solve the optimization problem (3.3) via deep neural networks with proper structure, it holds that\nδ(n) = exp{−n(d−2β)/(2β+d) log14 n}, ϵ(n) = c · n−2β/(2β+d) log7 n, where c is a positive absolute constant. We state and prove this result formally in Section 4. Remark 3.8. (1) When the number of samples grows, it holds that δ(n) and ϵ(n) decrease to 0 at least polynomially fast, and our guaranteed approximate incentive-compatibility approaches a strict one. (2) Our method or framework handles arbitrary complex information, where the data can be sampled from high dimensional continuous space. (3) The score function requires no prior knowledge. Instead, we design estimation methods purely based on reported sample data. (4) Our framework also covers the case where the mechanism designer has no access to the ground truth, which adds contribution to the peer prediction literature. So far peer prediction results focused on eliciting simple categorical information. Besides handling complex information structure, our approach can also be viewed as a data-driven mechanism for peer prediction problems.\n4 ESTIMATION OF f -DIVERGENCE\nIn this section, we introduce an estimator of f -divergence and establish the statistical rate of convergence, which characterizes ϵ(n) and δ(n). For the simplicity of presentation, in the sequel, we estimate the f -divergenceDf (q∥p) between distributions P and Q with probability density functions p and q, respectively. The rate of convergence of estimating f -divergence can be easily extended to that of mutual information.\nBy Section 3, estimating f -divergence between P and Q is equivalent to solving the following optimization problem,\nt∗(·; p, q) = argmin t(·) Ex∼P[f†(t(x))]− Ex∼Q[t(x)],\nDf (q∥p) = Ex∼Q[t∗(x; p, q)]− Ex∼P[f†(t∗(x; p, q))]. (4.1) In what follows, we propose an estimator of Df (q∥p). By Assumption 3.3, it suffices to solve (4.1) on the function class Cβd (Ω,M). To this end, we approximate solution to (4.1) by the family of deep neural networks.\nWe now define the family of deep neural networks as follows. Definition 4.1. Given a vector k = (k0, . . . , kL+1) ∈ NL+2, where k0 = d and kL+1 = 1, the family of deep neural networks is defined as\nΦ(L, k) = {φ(x;W, v) =WL+1σvL · · ·W2σv1W1x : Wj ∈ Rkj×kj−1 , vj ∈ Rkj}. 
Here we write σv(x) for σ(x − v) for notational convenience, where σ(·) is the ReLU activation function.\nTo avoid overfitting, sparsity of the deep neural network is a typical assumption in the deep learning literature. In practice, such a sparsity property is achieved through certain techniques, e.g., dropout (Srivastava et al., 2014), or certain network architectures, e.g., convolutional neural networks (Krizhevsky et al., 2012). We now define the family of sparse networks as follows:\nΦM(L, k, s) = {φ(x; W, v) ∈ Φ(L, k) : ∥φ∥∞ ≤ M, ∥Wj∥∞ ≤ 1 for j ∈ [L + 1], ∥vj∥∞ ≤ 1 for j ∈ [L], Σ_{j=1}^{L+1} ∥Wj∥0 + Σ_{j=1}^{L} ∥vj∥0 ≤ s},   (4.2)\nwhere s is the sparsity. In contrast, another approach to avoid overfitting is to control the norm of the parameters. See Section A.2 for details.\nWe now propose the following estimators:\nt̂(x; p, q) = argmin_{t∈ΦM(L,k,s)} E_{x∼Pn}[f†(t(x))] − E_{x∼Qn}[t(x)],\nD̂f(q∥p) = E_{x∼Qn}[t̂(x; p, q)] − E_{x∼Pn}[f†(t̂(x; p, q))].   (4.3)\nThe following theorem characterizes the statistical rate of convergence of the estimators defined in (4.3). Theorem 4.2. Let L = O(log n), s = O(N log n), and k = (d, d, O(dN), O(dN), . . . , O(dN), 1) in (4.2), where N = n^{d/(2β+d)}. Under Assumptions 3.1, 3.3, and 3.4, it holds with probability at least 1 − exp{−n^{(d−2β)/(2β+d)} log^{14} n} that\n|Df(q∥p) − D̂f(q∥p)| ≲ n^{−2β/(2β+d)} log^7 n.\nWe defer the proof of the theorem to Section B.3. By Theorem 4.2, the estimators in (4.3) achieve the optimal nonparametric rate of convergence (Stone, 1982) up to a logarithmic term. By (3.2) and Theorem 4.2, we have\nδ(n) = exp{−n^{(d−2β)/(2β+d)} · log^{14} n},  ϵ(n) = c · n^{−2β/(2β+d)} · log^7 n,\nwhere c is a positive absolute constant." }, { "heading": "5 CONNECTION TO GAN AND RECONSTRUCTION OF DISTRIBUTION", "text": "After sample elicitation, a natural question to ask is how to learn a representative probability density function from the samples. Denote the probability density function of the elicited samples by p. Then, learning the probability density function p amounts to solving\nq∗ = argmin_{q∈Q} Df(q∥p),   (5.1)\nwhere Q is the space of probability density functions. To see the connection between (5.1) and the formulation of f-GAN (Nowozin et al., 2016), by combining (2.2) and (5.1), we have\nq∗ = argmin_{q∈Q} max_t E_{x∼Q}[t(x)] − E_{x∼P}[f†(t(x))],\nwhich is the formulation of f-GAN. Here the probability density function q(·) plays the role of the generator, while the function t(·) plays the role of the discriminator. By the non-negativity of the f-divergence, q∗ = p solves (5.1). We now propose the following estimator:\nq̂ = argmin_{q∈Q} D̂f(q∥p),   (5.2)\nwhere D̂f(q∥p) is given in (4.3). We define the covering number as follows.\nDefinition 5.1 (Covering Number). Let (V, ∥·∥L2) be a normed space, and Q ⊂ V. We say that {v1, . . . , vN} is a δ-covering of Q of size N if Q ⊂ ∪_{i=1}^N B(vi, δ), where B(vi, δ) is the δ-ball centered at vi. The covering number is defined as N2(δ, Q) = min{N : ∃ a δ-covering of Q of size N}.\nWe impose the following assumption on the covering number of the probability density function space Q. Assumption 5.2. It holds that N2(δ, Q) = O(exp{1/δ^{d/(2β)−1}}).\nRecall that q∗ = p is the unique minimizer of the problem (5.1). Therefore, the f-divergence Df(q̂∥p) characterizes the deviation of q̂ from q∗. The following theorem characterizes the error bound for estimating q∗ by q̂. Theorem 5.3. Under the same assumptions as in Theorem 4.2 and Assumption 5.2, for a sufficiently large sample size n, it holds with probability at least 1 − 1/n that\nDf(q̂∥p) ≲ n^{−2β/(2β+d)} · log^7 n + min_{q̃∈Q} Df(q̃∥p).   (5.3)
We defer the proof of the theorem to Section B.4.\nIn Theorem 5.3, the first term on the RHS of (5.3) characterizes the generalization error of the estimator in (5.2), while the second term characterizes the approximation error. If the approximation error in (5.3) vanishes, then the estimator q̂ converges to the true density function q∗ = p at the optimal nonparametric rate of convergence (Stone, 1982) up to a logarithmic term." }, { "heading": "6 CONCLUDING REMARKS", "text": "In this work, we introduce the problem of sample elicitation as an alternative to eliciting complicated distributions. Our elicitation mechanism leverages the variational form of f-divergence functions to achieve accurate estimation of the divergences using samples. We provide theoretical guarantees for both our estimators and the achieved incentive compatibility.\nIt remains an interesting problem to find more “organic” mechanisms for sample elicitation that (i) require fewer elicited samples, and (ii) induce strict truthfulness instead of approximate truthfulness." }, { "heading": "A AUXILIARY ANALYSIS", "text": "" }, { "heading": "A.1 AUXILIARY RESULTS ON SPARSITY CONTROL", "text": "In this section, we provide some auxiliary results on (4.3). We first state an oracle inequality giving the rate of convergence of t̂(x; p, q). Theorem A.1. Given 0 < ε < 1, for any sample size n satisfying n ≳ [γ + γ^{−1} log(1/ε)]^2, under Assumptions 3.1, 3.3, and 3.4, it holds that\n∥t̂ − t∗∥L2(P) ≲ min_{t̃∈ΦM(L,k,s)} ∥t̃ − t∗∥L2(P) + γ n^{−1/2} log n + n^{−1/2}[√(log(1/ε)) + γ^{−1} log(1/ε)]\nwith probability at least 1 − ε · exp(−γ^2). Here γ = s^{1/2} log(V^2 L) and V = Π_{j=0}^{L+1}(kj + 1).\nWe defer the proof to Section B.5.\nAs a by-product, noting that t∗(x; p, q) = f′(θ∗(x; p, q)) = f′(q(x)/p(x)) and using the error bound established in Theorem A.1, we obtain the following result. Corollary A.2. Given 0 < ε < 1, for sample size n ≳ [γ + γ^{−1} log(1/ε)]^2, under Assumptions 3.1, 3.3, and 3.4, it holds with probability at least 1 − ε · exp(−γ^2) that\n∥θ̂ − θ∗∥L2(P) ≲ min_{t̃∈ΦM(L,k,s)} ∥t̃ − t∗∥L2(P) + γ n^{−1/2} log n + n^{−1/2}[√(log(1/ε)) + γ^{−1} log(1/ε)].\nHere γ = s^{1/2} log(V^2 L) and V = Π_{j=0}^{L+1}(kj + 1).\nProof. Since (f′)^{−1} = (f†)′ and, by Assumption 3.4 and Lemma D.6, f† has a Lipschitz continuous gradient with parameter 1/µ0, we obtain the result from Theorem A.1." }, { "heading": "A.2 ERROR BOUND USING NORM CONTROL", "text": "In this section, we consider using the norm of the parameters (specifically, the norms of Wj and vj in (4.1)) to control the error bound, as an alternative to the network model in (4.2). We consider the family of L-layer neural networks with bounded spectral norms for the weight matrices W = {Wj ∈ R^{kj×kj−1}}_{j=1}^{L+1}, where k0 = d and kL+1 = 1, and vectors v = {vj ∈ R^{kj}}_{j=1}^{L}, denoted as\nΦnorm = Φnorm(L, k, A, B) = {φ(x; W, v) ∈ Φ(L, k) : ∥vj∥2 ≤ Aj for all j ∈ [L], ∥Wj∥2 ≤ Bj for all j ∈ [L + 1]},   (A.1)\nwhere σvj(x) is short for σ(x − vj) for any j ∈ [L]. We consider the following optimization problem:\nt̂(x; p, q) = argmin_{t∈Φnorm} E_{x∼Pn}[f†(t(x))] − E_{x∼Qn}[t(x)],\nD̂f(q∥p) = E_{x∼Qn}[t̂(x; p, q)] − E_{x∼Pn}[f†(t̂(x; p, q))].   (A.2)\nBased on this formulation, we derive the error bound on the estimated f-divergence in the following theorem. We only consider the generalization error bound in this setting; therefore, we assume that the ground truth t∗(x; p, q) = f′(q(x)/p(x)) lies within Φnorm.
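To illustrate how the constraint set Φnorm(L, k, A, B) can be maintained during training, here is a small sketch (our own construction, not prescribed by the paper): after every gradient step, each weight matrix is projected onto the spectral-norm ball of radius B and each shift vector onto the ℓ2-ball of radius A. For simplicity we use a single bound per constraint, model the shift σ(x − v) with the additive bias of `nn.Linear`, and also keep a bias on the output layer.

```python
import torch
import torch.nn as nn

class NormControlledNet(nn.Module):
    """A sketch of Φnorm: ReLU network with ||W_j||_2 <= B and ||v_j||_2 <= A."""
    def __init__(self, widths=(2, 16, 16, 1), B=1.5, A=1.0):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(widths[i], widths[i + 1]) for i in range(len(widths) - 1))
        self.B, self.A = B, A

    def forward(self, x):
        for j, layer in enumerate(self.layers):
            x = layer(x)
            if j < len(self.layers) - 1:   # activation on hidden layers only
                x = torch.relu(x)
        return x

    @torch.no_grad()
    def project(self):
        """Project parameters back into the constraint set; call after opt.step()."""
        for layer in self.layers:
            spec = torch.linalg.matrix_norm(layer.weight, ord=2)
            layer.weight.mul_(torch.clamp(self.B / (spec + 1e-12), max=1.0))
            bnorm = layer.bias.norm()
            layer.bias.mul_(torch.clamp(self.A / (bnorm + 1e-12), max=1.0))
```

Training would then alternate a gradient step on the empirical objective (A.2) with a call to `project()`, so that the iterates stay inside the class over which Theorem A.3 is stated.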
Before we state the theorem, we first define two parameters for the family of neural networks Φnorm(L, k, A, B) as follows:\nγ1 = B · Π_{j=1}^{L+1} Bj · √(Σ_{j=0}^{L+1} kj^2),  γ2 = [L · (√(Σ_{j=1}^{L+1} kj^2 Bj^2) + Σ_{j=1}^{L} Aj) / (Σ_{j=0}^{L+1} kj^2 · min_j Bj^2)] · Σ_{j=1}^{L} Aj.   (A.3)\nWe proceed to state the theorem. Theorem A.3. Assume that t∗(x; p, q) ∈ Φnorm. Then for any 0 < ε < 1, with probability at least 1 − ε, it holds that\n|D̂f(q∥p) − Df(q∥p)| ≲ γ1 · n^{−1/2} log(γ2 n) + Π_{j=1}^{L+1} Bj · n^{−1/2} √(log(1/ε)).\nHere γ1 and γ2 are defined in (A.3).\nWe defer the proof to Section B.6.\nThe next theorem uses the result in Theorem A.3. Recall that in Section A.2 we assume that the minimizer t∗ of the population-version problem (4.1) lies within the norm-controlled family of neural networks Φnorm(L, k, A, B).\nTheorem A.4. Recall the parameters γ1 and γ2 of the family of neural networks Φnorm(L, k, A, B) defined in (A.3), the estimated distribution q̂ in (5.2), and the ground truth q∗ = p. We denote the covering number of the probability density function class Q by N2(δ, Q). Then for any 0 < ε < 1, with probability at least 1 − ε, we have\nDf(q̂∥p) ≲ b2(n, γ1, γ2) + Π_{j=1}^{L+1} Bj · n^{−1/2} · √(log(N2[b2(n, γ1, γ2), Q]/ε)) + min_{q̃∈Q} Df(q̃∥p),\nwhere b2(n, γ1, γ2) = γ1 n^{−1/2} log(γ2 n).\nWe defer the proof to Section B.7." }, { "heading": "B PROOFS OF THEOREMS", "text": "" }, { "heading": "B.1 PROOF OF THEOREM 3.5", "text": "If the agent truthfully reports, she will receive the following expected payment for sample i: with probability at least 1 − δ(n),\nE[S(ri, ·)] := a − b (E_{x∼Qn}[t̂(x)] − E_{xi∼Pn}[f†(t̂(xi))]) = a − b · D̂f(q∥p) ≥ a − b · (Df(q∥p) + ϵ(n))  (sample complexity guarantee) ≥ a − b · (Df(p∥p) + ϵ(n))  (agent believes p = q) = a − b ϵ(n).\nSimilarly, any misreporting according to a density p̃ with distribution P̃ leads to the following derivation with probability at least 1 − δ(n):\nE[S(ri, ·)] := a − b (E_{x∼Qn}[t̂(x)] − E_{xi∼P̃n}[f†(t̂(xi))]) = a − b · D̂f(q∥p̃) ≤ a − b · (Df(p∥p̃) − ϵ(n)) ≤ a + b ϵ(n)  (non-negativity of Df).\nCombining the above and using a union bound leads to (2δ(n), 2bϵ(n))-properness." }, { "heading": "B.2 PROOF OF THEOREM 3.7", "text": "Consider an arbitrary agent i, and suppose every other agent truthfully reports.\nE[S(ri, {rj}j≠i)] = a + b (E_{x∼Pn⊕Qn|ri}[t̂(x)] − E_{x∼Pn×Qn|ri}[f†(t̂(x))]) = a + b E[E_{x∼Pn⊕Qn|ri}[t̂(x)] − E_{x∼Pn×Qn|ri}[f†(t̂(x))]].\nConsider the divergence term E[E_{x∼Pn⊕Qn|ri}[t̂(x)] − E_{x∼Pn×Qn|ri}[f†(t̂(x))]]. Reporting ri ∼ P̃ ≠ P (denoting its density by p̃) leads to the following score:\nE_{ri∼P̃n}[E_{x∼P̃n⊕Qn|ri}[t̂(x)] − E_{x∼P̃n×Qn|ri}[f†(t̂(x))]] = E_{x∼P̃n⊕Qn}[t̂(x)] − E_{x∼P̃n×Qn}[f†(t̂(x))]  (tower property) ≤ max_t E_{x∼P̃n⊕Qn}[t(x)] − E_{x∼P̃n×Qn}[f†(t(x))]  (max) = D̂f(p̃ ⊕ q ∥ p̃ × q) ≤ Df(p̃ ⊕ q ∥ p̃ × q) + ϵ(n) = If(p̃; q) + ϵ(n)  (definition) ≤ If(p; q) + ϵ(n)  (data processing inequality (Kong & Schoenebeck, 2019))\nwith probability at least 1 − δ(n) (the remaining δ(n) probability carries at most the maximum score S̄). We now show that truthful reporting yields at least\nIf(p; q) − ϵ(n)\nfor the divergence term:\nE_{xi∼Pn}[E_{x∼Pn⊕Qn|xi}[t̂(x)] − E_{x∼Pn×Qn|xi}[f†(t̂(x))]] = E_{x∼Pn⊕Qn}[t̂(x)] − E_{x∼Pn×Qn}[f†(t̂(x))]  (tower property) = D̂f(p ⊕ q ∥ p × q) ≥ Df(p ⊕ q ∥ p × q) − ϵ(n) = If(p; q) − ϵ(n)  (definition)\nwith probability at least 1 − δ(n) (the remaining δ(n) probability carries a score of at least 0). Therefore, the expected divergence terms differ by at most 2ϵ(n) with probability at least 1 − 2δ(n) (via a union bound). Combining the above establishes a (2δ(n), 2bϵ(n))-BNE." }, { "heading": "B.3 PROOF OF THEOREM 4.2", "text": "Step 1.
We proceed to bound ∥t∗−t̂∥L2(P). We first proceed to find some t̃ ∈ ΦM (L, k, s). Note that the ground truth t∗ lies on a finite support Ω ⊂ [a, b]d. To invoke Theorem D.5, we denote t′(y) = t∗((b − a)y + a1d), where 1d = (1, 1, . . . , 1)⊤ ∈ Rd. Then the support of t′ lies in the unit cube [0, 1]d. We choose L′ = O(log n), s′ = O(N log n), k′ = (d,O(dN),O(dN), . . . ,O(dN), 1), and m′ = log n, we then utilize Theorem D.5 to construct some t̃′ ∈ ΦM (L′, k′, s′) such that\n∥t̃′ − t′∥L∞([0,1]d) ≲ N−β/d.\nWe further define t̃(·) = t̃′ ◦ ℓ(·), where ℓ(·) is a linear mapping taking the following form\nℓ(x) = x b− a − a b− a · 1d.\nTo this end, we know that t̃ ∈ ΦM (L, k, s), with parameters L, k, and s given in the statement of Theorem 4.2. We fix this t̃ and invoke Theorem A.1, then with probability at least 1− ε · exp(−γ2), we have\n∥t̂− t∗∥L2(P) ≲ ∥t̃− t∗∥L2(P) + γn−1/2 log n+ n−1/2[ √ log(1/ε) + γ−1 log(1/ε)]\n≲ N−β/d + γn−1/2 log n+ n−1/2[ √ log(1/ε) + γ−1 log(1/ε)]. (B.1)\nNote that γ takes the form γ = s1/2 log(V 2L), where V = O(dL · NL) and L, s given in the statement of Theorem 4.2, it holds that γ = O(N1/2 log5/2 n). Moreover, by the choice N = nd/(2β+d), combining (B.1) and taking ε = 1/n, we know that\n∥t̂− t∗∥L2(P) ≲ n−β/(2β+d) log7/2 n (B.2)\nwith probability at least 1− exp{−nd/(2β+d) log5 n}.\nStep 2. We denote by L(t) = Ex∼Q[t(x)] − Ex∼P[f†(t(x))] and L̂(t) = Ex∼Qn [t(x)] − Ex∼Pn [f†(t(x))]. Then from Assumption 3.4 and Lemma D.6, we know that L̂(·) is strongly convex with a constant coefficient. Note that by triangular inequality, we have\n|D̂f (q∥p)−Df (q∥p)| = |L̂(t̂)− L(t∗)| ≤ |L̂(t∗)− L̂(t̂)|+ |L̂(t∗)− L(t∗)| =: A1 +A2.\nWe proceed to bound A1 and A2.\nBound on A1: Recall that L̂(·) is strongly convex. Consequently, we have\nA1 ≲ ∥t∗ − t̂∥2L2(P) ≲ n − β2β+d log7/2 n,\nwith probability at least 1− exp{−nd/(2β+d) log5 n}, where the last inequality comes from (B.2). Bound on A2: Note that both the functions t∗(·) and f†(t∗(·)) are bounded, then by Hoeffding’s inequality, we obtain that\nP(A2 ≤ n− β 2β+d log7/2 n) ≥ 1− exp{−n(d−2β)/(2β+d) log14 n}.\nTherefore, by combining the above two bounds, we obtain that\n|D̂f (q∥p)−Df (q∥p)| ≲ n− β 2β+d log7/2 n\nwith probability at least 1−exp{−n(d−2β)/(2β+d) log14 n}. This concludes the proof of the theorem." }, { "heading": "B.4 PROOF OF THEOREM 5.3", "text": "We first need to bound the max deviation of the estimated f -divergence D̂f (q∥p) among all q ∈ Q. The following lemma provides such a bound. Lemma B.1. Under the assumptions stated in Theorem 5.3, for any fixed density p, if the sample size n is sufficiently large, it holds that\nsup q∈Q\n|Df (q∥p)− D̂f (q∥p)| ≲ n− 2β 2β+d · log7 n\nwith probability at least 1− 1/n.\nWe defer the proof to Section C.1.\nNow we turn to the proof of the theorem. We denote by q̃′ = argminq̃∈QDf (q̃∥p), then with probability at least 1− 1/n, we have\nDf (q̂∥p) ≤ |Df (q̂∥p)− D̂f (q̂∥p)|+ D̂f (q̂∥p)\n≤ sup q∈Q\n|Df (q∥p)− D̂f (q∥p)|+ D̂f (q̃′∥p) ≲ n− 2β 2β+d · log7 n+Df (q̃′∥p). (B.3)\nHere in the second line we use the optimality of q̂ among all q̃ ∈ Q to the problem (5.2), while the last inequality uses Lemma B.1 and Theorem 4.2. Moreover, note that Df (q̃′∥p) = minq̃∈QDf (q̃∥p), combining (B.3), it holds that with probability at least 1− 1/n,\nDf (q̂∥p) ≲ n− 2β 2β+d · log7 n+min q̃∈Q Df (q̃∥p).\nThis concludes the proof of the theorem." 
}, { "heading": "B.5 PROOF OF THEOREM A.1", "text": "For any real-valued function ϱ, we write EP(ϱ) = Ex∼P[ϱ(x)], EQ(ϱ) = Ex∼Q[ϱ(x)], EPn(ϱ) = Ex∼Pn [ϱ(x)], and EQn(ϱ) = Ex∼Qn [ϱ(x)] for notational convenience.\nFor any t̃ ∈ ΦM (L, k, s), we establish the following lemma.\nLemma B.2. Under the assumptions stated in Theorem A.1, it holds that 1/(4L0) · ∥t̂− t̃∥2L2(P) ≤ 1/µ0 · ∥t̂− t̃∥L2(P) · ∥t̃− t ∗∥L2(P) + {EQn [(t̂− t̃)/2]− EQ[(t̂− t̃)/2]} − {EPn [f†((t̂+ t̃)/2)− f†(t̃)]− EP[f†((t̂+ t̃)/2)− f†(t̃)]} Here µ0 and L0 are specified in Assumption 3.4.\nWe defer the proof to Section C.2.\nNote that by Lemma B.2 and the fact that f† is Lipschitz continuous, we have ∥t̂− t̃∥2L2(P) ≲ ∥t̂− t̃∥L2(P) · ∥t̃− t ∗∥L2(P) + {EQn [(t̂− t̃)/2]− EQ[(t̂− t̃)/2]} − {EPn [f†((t̂+ t̃)/2)− f†(t̃)]− EP[f†((t̂+ t̃)/2)− f†(t̃)]}. (B.4) Furthermore, to bound the RHS of the above inequality, we establish the following lemma. Lemma B.3. We assume that the function ψ : R → R is Lipschitz continuous and bounded such that |ψ(x)| ≤ M0 for any |x| ≤ M . Then under the assumptions stated in Theorem A.1, for any fixed t̃(x) ∈ ΦM , n ≳ [γ + γ−1 log(1/ε)]2 and 0 < ε < 1, we have the follows\nP {\nsup t(·)∈ΦM (L,k,s) |EPn [ψ(t)− ψ(t̃)]− EP[ψ(t)− ψ(t̃)]| η(n, γ, ε) · ∥ψ(t)− ψ(t̃)∥L2(P) ∨ λ(n, γ, ε)\n≤ 16M0 } ≥ 1− ε · exp(−γ2),\nwhere η(n, γ, ε) = n−1/2[γ log n+γ−1 log(1/ε)], λ(n, γ, ε) = n−1[γ2+log(1/ε)], and for any real numbers c1 and c2, we denote by c1∨ c2 = max{c1, c2}. Here γ takes the form γ = s1/2 log(V 2L), where V = ∏L+1 j=0 (kj + 1).\nWe defer the proof to Section C.3.\nNote that the results in Lemma B.3 also apply to the distribution Q, and by using the fact that the true density ratio θ∗(x; p, q) = q(x)/p(x) is bounded below and above, we know that L2(Q) is indeed equivalent to L2(P). We thus focus on L2(P) here. By (B.4), Lemma B.3, and the Lipschitz property of f† according to Lemma D.6, with probability at least 1 − ε · exp(−γ2), we have the following bound\n∥t̂− t̃∥2L2(P) ≲ ∥t̂− t̃∥L2(P) · ∥t̃− t ∗∥L2(P)\n+O(n−1/2[γ log n+ γ−1 log(1/ε)] · ∥t̂− t̃∥L2(P) ∨ n −1[γ2 + log(1/ε)]), (B.5)\nwhere we recall that the notation γ = s1/2 log(V 2L) is a parameter related with the family of neural networks ΦM . We proceed to analyze the dominant part on the RHS of (B.5).\nCase 1. If the term ∥t̂−t̃∥L2(P)·∥t̃−t∗∥L2(P) dominates, then with probability at least 1−ε·exp(−γ2) ∥t̂− t̃∥L2(P) ≲ ∥t̃− t∗∥L2(P).\nCase 2. If the term O(n−1/2[γ log n+γ−1 log(1/ε)] ·∥t̂− t̃∥L2(P)) dominates, then with probability at least 1− ε · exp(−γ2)\n∥t̂− t̃∥L2(P) ≲ n−1/2[γ log n+ γ−1 log(1/ε)].\nCase 3. If the term O(n−1[γ2+log(1/ε)]) dominates, then with probability at least 1−ε ·exp(−γ2) ∥t̂− t̃∥L2(P) ≲ n−1/2[γ + √ log(1/ε)].\nTherefore, by combining the above three cases, we have ∥t̂− t̃∥L2(P) ≲ ∥t̃− t∗∥L2(P) + γn−1/2 log n+ n−1/2[ √ log(1/ε) + γ−1 log(1/ε)]. Further the triangular inequality gives us\n∥t̂− t∗∥L2(P) ≲ ∥t̃− t∗∥L2(P) + γn−1/2 log n+ n−1/2[ √ log(1/ε) + γ−1 log(1/ε)]\nwith probability at least 1 − ε · exp(−γ2). Note that the above error bound holds for any t̃ ∈ ΦM (L, k, s), especially for the choice t̃ such that it minimizes ∥t̃− t∗∥L2(P). Therefore, we have ∥t̂− t∗∥L2(P) ≲ min\nt̃∈ΦM (L,k,s) ∥t̃− t∗∥L2(P) + γn\n−1/2 log n+ n−1/2[ √ log(1/ε) + γ−1 log(1/ε)]\nwith probability at least 1− ε · exp(−γ2). This concludes the proof of the theorem." }, { "heading": "B.6 PROOF OF THEOREM A.3", "text": "We follow the proof in Li et al. (2018). 
We denote by the loss function in (A.2) as L[t(x)] = f†(t(xI)) − t(xII), where xI follows the distribution P and xII follows Q. To prove the theorem, we first link the generalization error in our theorem to the empirical Rademacher complexity (ERC). Given the data {xi}ni=1, the ERC related with the class L(Φnorm) is defined as\nRn[L(Φnorm)] = Eε [\nsup φ∈Φnorm | 1 n n∑ i=1 εi · L[φ(xi;W, v)]|{xi}ni=1 ] , (B.6)\nwhere εi’s are i.i.d. Rademacher random variables, i.e., P(εi = 1) = P(εi = −1) = 1/2. Here the expectation Eε(·) is taken over the Rademacher random variables {εi}i∈[n]. We introduce the following Lemma B.4 (Mohri et al., 2018), which links the ERC to the generalization error bound. Lemma B.4. Assume that supφ∈Φnorm |L(φ)| ≤ M1, then for any ε > 0, with probability at least 1− ε, we have\nsup φ∈Φnorm\n{ Ex{L[φ(x;W, v)]} − 1\nn n∑ i=1 L[φ(xi;W, v)] } ≲ Rn[L(Φnorm)] +M1 · n−1/2 √ log(1/ε),\nwhere the expectation Ex{·} is taken over xI ∼ P and xII ∼ Q. Equipped with the above lemma, we only need to bound the ERC defined in (B.6). Lemma B.5. Let L be a Lipschitz continuous loss function and Φnorm be the family of networks defined in (A.1). We assume that the input x ∈ Rd is bounded such that ∥x∥2 ≤ B. Then it holds that\nRn[L(Φnorm)] ≲ γ1 · n−1/2 log(γ2n), where γ1 and γ2 are given in (A.3).\nWe defer the proof to Section C.4.\nNow we proceed to prove the theorem. Recall that we assume that t∗ ∈ Φnorm. For notational convenience, we denote by\nĤ(t) = Ex∼Pn [f†(t(x))]− Ex∼Qn [t(x)], H(t) = Ex∼P[f†(t(x))]− Ex∼Q[t(x)].\nThen E[Ĥ(t)] = H(t). We proceed to bound |D̂f (q∥p)−Df (q∥p)| = |Ĥ(t̂)−H(t∗)|. Note that if Ĥ(t̂) ≥ H(t∗), then we have\n0 ≤ Ĥ(t̂)−H(t∗) ≤ Ĥ(t∗)−H(t∗), (B.7)\nwhere the second inequality follows from the fact that t̂ is the minimizer of Ĥ(·). On the other hand, if Ĥ(t̂) ≤ H(t∗), we have\n0 ≥ Ĥ(t̂)−H(t∗) ≥ Ĥ(t̂)−H(t̂), (B.8) where the second inequality follows that fact that t∗ is the minimizer of H(·). Therefore, by (B.7), (B.8), and the fact that L(φ) ≲ ∏L+1 j=1 Bj for any φ ∈ Φnorm, we deduce that\n|Ĥ(t̂)−H(t∗)| ≤ sup t∈Φnorm |Ĥ(t)−H(t)| ≲ Rn[L(Φnorm)] + L+1∏ j=1 Bj · n−1/2 √ log(1/ε) (B.9)\nwith probability at least 1 − ε. Here the second inequality follows from Lemma B.4. By plugging the result from Lemma B.5 into (B.9), we deduce that with probability at least 1− ε, it holds that\n|D̂f (q∥p)−Df (q∥p)| = |Ĥ(t̂)−H(t∗)| ≲ γ1 · n−1/2 log(γ2n) + L+1∏ j=1 Bj · n−1/2 √ log(1/ε).\nThis concludes the proof of the theorem." }, { "heading": "B.7 PROOF OF THEOREM A.4", "text": "We first need to bound the max deviation of the estimated f -divergence D̂f (q∥p) among all q ∈ Q. We utilize the following lemma to provide such a bound.\nLemma B.6. Assume that the distribution q is in the set Q, and we denote its L2 covering number as N2(δ,Q). Then for any target distribution p, we have\nmax q∈Q |Df (q∥p)− D̂f (q∥p)| ≲ b2(n, γ1, γ2) + L+1∏ j=1 Bj · n−1/2 · √ log(N2[b2(n, γ1, γ2),Q]/ε)\nwith probability at least 1 − ε. Here b2(n, γ1, γ2) = γ1n−1/2 log(γ2n) and c is a positive absolute constant.\nWe defer the proof to Section C.5.\nNow we turn to the proof of the theorem. We denote by q̃′ = argminq̃∈QDf (q̃∥p). Then with probability at least 1− ε, we have\nDf (q̂∥p) ≤ |Df (q̂∥p)− D̂f (q̂∥p)|+ D̂f (q̂∥p) ≤ max\nq∈Q |Df (q∥p)− D̂f (q∥p)|+ D̂f (q̃′∥p)\n≲ b2(n, γ1, γ2) + L+1∏ j=1 Bj · n−1/2 · √ log(N2[b2(n, γ1, γ2),Q]/ε) +Df (q̃′∥p),\nwhere we use the optimality of q̂ among all q̃ ∈ Q to the problem (5.2) in the second inequality, and we uses Lemma B.6 and Theorem 4.2 in the last line. 
Moreover, note that Df (q̃′∥p) = minq̃∈QDf (q̃∥p), we obtain that\nDf (q̂∥p) ≲ b2(n, γ1, γ2) + L+1∏ j=1 Bj · n−1/2 √ log(N2[b2(n, γ1, γ2),Q]/ε) + min q̃∈Q Df (q̃∥p).\nThis concludes the proof of the theorem." }, { "heading": "C LEMMAS AND PROOFS", "text": "" }, { "heading": "C.1 PROOF OF LEMMA B.1", "text": "Recall that the covering number of Q is N2(δ,Q), we thus assume that there exists q1, . . . , qN2(δ,Q) ∈ Q such that for any q ∈ Q, there exists some qk, where 1 ≤ k ≤ N2(δ,Q), so that ∥q − qk∥2 ≤ δ. Moreover, by taking δ = δn = n−2β/(2β+d) and union bound, we have\nP[sup q∈Q\n|Df (q∥p)− D̂f (q∥p)| ≥ c1 · n− 2β 2β+d · log7 n]\n≤ N2(δn,Q)∑ k=1 P[|Df (qk∥p)− D̂f (qk∥p)| ≥ c1 · n− 2β 2β+d · log7 n] ≤ N2(δn,Q) · exp(−n d−2β 2β+d · log14 n),\nwhere the last line comes from Theorem 4.2. Combining Assumption 5.2, when n is sufficiently large, it holds that\nP[sup q∈Q\n|Df (q∥p)− D̂f (q∥p)| ≥ c1 · n− 2β 2β+d · log7 n] ≤ 1/n,\nwhich concludes the proof of the lemma." }, { "heading": "C.2 PROOF OF LEMMA B.2", "text": "For any real-valued function ϱ, we write EP(ϱ) = Ex∼P[ϱ(x)], EQ(ϱ) = Ex∼Q[ϱ(x)], EPn(ϱ) = Ex∼Pn [ϱ(x)], and EQn(ϱ) = Ex∼Qn [ϱ(x)] for notational convenience.\nBy the definition of t̂ in (4.3), we have EPn [f†(t̂)]− EQn(t̂) ≤ EPn [f†(t̃)]− EQn(t̃). Note that the functionalG(t) = EPn [f†(t)]−EQn(t) is convex in t since f† is convex, we then have\nG( t̂+ t̃ 2 )−G(t̃) ≤ G(t̂)−G(t̃) 2 ≤ 0.\nBy re-arranging terms, we have {EPn [f†((t̂+ t̃)/2)− f†(t̃)]− EP[f†((t̂+ t̃)/2)− f†(t̃)]} − {EQn [(t̂− t̃)/2]− EQ[(t̂− t̃)/2]} ≤ EQ[(t̂− t̃)/2]− EP[f†((t̂+ t̃)/2)− f†(t̃)]. (C.1) We denote by Bf (t̃, t) = EP[f†(t)− f†(t̃)]− EQ(t− t̃). (C.2) then the RHS of (C.1) is exactly −Bf (t̃, (t̂ + t̃)/2). We proceed to establish the lower bound of Bf (t̃, t) using L2(P) norm. From t∗(x; p, q) = f ′(q(x)/p(x)) and (f†)′ ◦ (f ′)(x) = x, we know that q/p = ∂f†(t∗)/∂t. Then by substituting the second term on the RHS of (C.2) using the above relationship, we have\nBf (t̃, t) = EP [ f†(t)− f†(t̃)− ∂f †\n∂t (t∗) · (t− t̃) ] = EP [ f†(t)− f†(t̃)− ∂f †\n∂t (t̃) · (t− t̃)\n] + EP {[ ∂f†\n∂t (t̃)− ∂f\n†\n∂t (t∗)\n] · (t− t̃) } = A1 +A2. (C.3)\nWe lower bound A1 and A2 in the sequel.\nBound on A1. Note that by Assumption 3.4 and Lemma D.6, we know that the Fenchel duality f† is strongly convex with parameter 1/L0. This gives that\nf†(t(x))− f†(t̃(x))− ∂f †\n∂t (t̃(x)) · [t(x)− t̃(x)] ≥ 1/L0 · (t(x)− t̃(x))2\nfor any x. Consequently, it holds that A1 ≥ 1/L0 · ∥t− t̃∥2L2(P). (C.4)\nBound on A2. By Cauchy-Schwarz inequality, it holds that\nA2 ≥ − √ EP {[ ∂f†\n∂t (t̃)− ∂f\n†\n∂t (t∗)\n]2} · √ EP[(t− t̃)2].\nAgain, by Assumption 3.4 and Lemma D.6, we know that the Fenchel duality f† has 1/µ0-Lipschitz gradient, which gives that∣∣∣∣∂f†∂t (t̃(x))− ∂f†∂t (t∗(x)) ∣∣∣∣ ≤ 1/µ0 · |t̃(x)− t∗(x)| for any x. By this, the term A2 is lower bounded:\nA2 ≥ −1/µ0 · ∥t̃− t∗∥L2(P) · ∥t− t̃∥L2(P). (C.5)\nPlugging (C.4) and (C.5) into (C.3), we have Bf (t̃, t) ≥ 1/L0 · ∥t− t̃∥2L2(P) − 1/µ0 · ∥t̃− t\n∗∥L2(P) · ∥t− t̃∥L2(P). By this, together with (C.1), we conclude that 1/(4L0) · ∥t̂− t̃∥2L2(P) ≤ 1/µ0 · ∥t̂− t̃∥L2(P) · ∥t̃− t ∗∥L2(P) + {EQn [(t̂− t̃)/2]− EQ[(t̂− t̃)/2]} − {EPn [f†((t̂+ t̃)/2)− f†(t̃)]− EP[f†((t̂+ t̃)/2)− f†(t̃)]}. This concludes the proof of the lemma." }, { "heading": "C.3 PROOF OF LEMMA B.3", "text": "For any real-valued function ϱ, we write EP(ϱ) = Ex∼P[ϱ(x)], EQ(ϱ) = Ex∼Q[ϱ(x)], EPn(ϱ) = Ex∼Pn [ϱ(x)], and EQn(ϱ) = Ex∼Qn [ϱ(x)] for notational convenience.\nWe first introduce the following concepts. 
For any K > 0, the Bernstein difference ρ2K,P(t) of t(·) with respect to the distribution P is defined to be\nρ2K,P(t) = 2K 2 · EP[exp(|t|/K)− 1− |t|/K].\nCorrespondingly, we denote by HK,B the generalized entropy with bracketing induced by the Bernstein difference ρK,P. We denote by Hs,B the entropy with bracketing induced by Ls norm, Hs the entropy induced by Ls norm, HLs(P),B the entropy with bracketing induced by Ls(P) norm, and HLs(P) the regular entropy induced by Ls(P) norm.\nSince we focus on fixed L, k, and s, we denote by ΦM = ΦM (L, k, s) for notational convenience. We consider the space\nΨM = ψ(ΦM ) = {ψ(t) : t(x) ∈ ΦM}.\nFor any δ > 0, we denote the following space\nΨM (δ) = {ψ(t) ∈ ΨM : ∥ψ(t)− ψ(t̃)∥L2(P) ≤ δ}, Ψ′M (δ) = {∆ψ(t) = ψ(t)− ψ(t̃) : ψ(t) ∈ ΨM (δ)}.\nNote that sup∆ψ(t)∈Ψ′M (δ) ∥∆ψ(t)∥∞ ≤ 2M0 and sup∆ψ(t)∈Ψ′M (δ) ∥∆ψ(t)∥∞ ≤ δ, by Lemma D.4 we have\nsup ∆ψ(t)∈Ψ′M (δ)\nρ8M0,P[∆ψ(t)] ≤ √ 2δ.\nTo invoke Theorem D.3 for G = Ψ′M (δ), we pick K = 8M0 and R = √ 2δ. Note that from the fact that sup∆ψ(t)∈Ψ′M (δ) ∥∆ψ(t)∥∞ ≤ 2M0, by Lemma D.1, Lemma D.2, and the fact that ψ is Lipschitz continuous, we have\nH8M0,B(u,Ψ′M (δ),P) ≤ H∞(u/(2 √ 2),Ψ′M (δ)) ≤ 2(s+ 1) log(4 √ 2u−1(L+ 1)V 2)\nfor any u > 0. Then, by algebra, we have the follows∫ R 0 H1/28M0,B(u,Ψ ′ M (δ),P) du ≤ 3s1/2δ · log(8V 2L/δ).\nFor any 0 < ε < 1, we take C = 1, and a,C1 and C0 in Theorem D.3 to be\na = 8M0 log(exp(γ 2)/ε)γ−1 · δ,\nC0 = 6M0γ −1 √ log(exp(γ2)/ε),\nC1 = 33M 2 0 γ −2 log(exp(γ2)/ε).\nHere γ = s1/2 log(V 2L). Then it is straightforward to check that our choice above satisfies the conditions in Theorem D.3 for any δ such that δ ≥ γn−1/2, when n is sufficiently large such that n ≳ [γ + γ−1 log(1/ε)]2. Consequently, by Theorem D.3, for δ ≥ γn−1/2, we have\nP{ sup t(x)∈ΦM (δ) |EPn [ψ(t)− ψ(t̃)]− EP[ψ(t)− ψ(t̃)]| ≥ 8M0 log(exp(γ2)/ε)γ−1 · δ · n−1/2}\n= P{ sup ∆ψ(t)∈Ψ′M (δ) |EPn [∆ψ(t)]− EP[∆ψ(t)]| ≥ 8M0 log(exp(γ2)/ε)γ−1 · δ · n−1/2} ≤ ε · exp(−γ2).\nBy taking δ = δn = γn−1/2, we have P {\nsup t(x)∈ΦM (δ)\n|EPn [ψ(t)− ψ(t̃)]− EP[ψ(t)− ψ(t̃)]| n−1[γ2 + log(1/ε)]\n≤ 8M0 } ≥ 1− ε · exp(−γ2). (C.6)\nOn the other hand, we denote that S = min{s > 1 : 2−s(2M0) < δn} = O(log(γ−1n1/2)). For notational convenience, we denote the set\nAs = {ψ(t) ∈ ΨM : ψ(t) ∈ ΨM (2−s+2M0), ψ(t) /∈ ΨM (2−s+1M0)}. (C.7)\nThen by the peeling device, we have the following P {\nsup ψ(t)∈ΨM ,ψ(t)/∈ΨM (δn) |EPn [ψ(t)− ψ(t̃)]− EP[ψ(t)− ψ(t̃)]| ∥ψ(t)− ψ(t̃)∥L2(P) · T (n, γ, ε)\n≥ 16M0 }\n≤ S∑ s=1 P { sup ψ(t)∈As |EPn [ψ(t)− ψ(t̃)]− EP[ψ(t)− ψ(t̃)]| 2−s+1M0 ≥ 16M0 · T (n, γ, ε) }\n≤ S∑ s=1 P{ sup ψ(t)∈As |EPn [ψ(t)− ψ(t̃)]− EP[ψ(t)− ψ(t̃)]| ≥ 8M0 · (2−s+2M0) · T (n, γ, ε)}\n≤ S∑ s=1 P{ sup ψ(t)∈ΨM (2−s+2M0) |EPn [ψ(t)− ψ(t̃)]− EP[ψ(t)− ψ(t̃)]| ≥ 8M0 · (2−s+2M0) · T (n, γ, ε)} ≤S · ε · exp(−γ2)/ log(γ−1n1/2) = c · ε · exp(−γ2),\nwhere c is a positive absolute constant, and for notational convenience we denote by T (n, γ, ε) = γ−1 · n−1/2 log(log(γ−1n1/2) exp(γ2)/ε). Here in the second line, we use the fact that for any ψ(t) ∈ As, we have ∥ψ(t) − ψ(t̃)∥L2(Q) ≥ 2−s+1M0 by the definition of As in (C.7); in the forth line, we use the argument that since As ⊆ ΨM (2−s+2M0), the probability of supremum taken over ΨM (2\n−s+2M0) is larger than the one overAs; in the last line we invoke Theorem D.3. Consequently, this gives us\nP {\nsup ψ(t)∈ΨM\nψ(t)/∈ΨM (δn)\n|EPn [ψ(t)− ψ(t̃)]− EP[ψ(t)− ψ(t̃)]| ∥ψ(t)− ψ(t̃)∥L2(P) · n−1/2[γ log n+ γ−1 log(1/ε)]\n≤ 16M0 } ≥ 1− ε · exp(−γ2).\n(C.8)\nCombining (C.6) and (C.8), we finish the proof of the lemma." 
}, { "heading": "C.4 PROOF OF LEMMA B.5", "text": "The proof of the theorem utilizes following two lemmas. The first lemma characterizes the Lipschitz property of φ(x;W, v) in the input x. Lemma C.1. Given W and v, then for any φ(·;W, v) ∈ Φnorm and x1, x2 ∈ Rd, we have\n∥φ(x1;W, v)− φ(x2;W, v)∥2 ≤ ∥x1 − x2∥2 · L+1∏ j=1 Bj .\nWe defer the proof to Section C.6.\nThe following lemma characterizes the Lipschitz property of φ(x;W, v) in the network parameter pair (W, v). Lemma C.2. Given any bounded x ∈ Rd such that ∥x∥2 ≤ B, then for any weights W 1 = {W 1j } L+1 j=1 ,W 2 = {W 2j } L+1 j=1 , v\n1 = {v1j }Lj=1, v2 = {v2j }Lj=1, and functions φ(·,W 1, v1), φ(·,W 2, v2) ∈ Φnorm, we have\n∥φ(x,W 1, v1)− φ(x,W 2, v2)∥\n≤ B √ 2L+ 1 ·\n∏L+1 j=1 Bj\nminj Bj · L∑ j=1 Aj · √√√√L+1∑ j=1 ∥W 1j −W 2j ∥2F + L∑ j=1 ∥v1j − v2j ∥22.\nWe defer the proof to Section C.7.\nWe now turn to the proof of Lemma B.5. Note that by Lemma C.2, we know that φ(x;W, v) is Lw-Lipschitz in the parameter (W, v) ∈ Rb, where the dimension b takes the form\nb = L+1∑ j=1 kjkj−1 + L∑ j=1 kj ≤ L+1∑ j=0 (kj + 1) 2, (C.9)\nand the Lipschitz constant Lw satisfies\nLw = B √ 2L+ 1 ·\n∏L+1 j=1 Bj\nminj Bj · L∑ j=1 Aj . (C.10)\nIn addition, we know that the covering number of W = {(W, v) ∈ Rb : ∑L+1 j=1 ∥Wj∥F +∑L\nj=1 ∥vj∥2 ≤ K}, where\nK = √√√√L+1∑ j=1 k2jB 2 j + L∑ j=1 Aj , (C.11)\nsatisfies\nN(W, δ) ≤ (3Kδ−1)b. By the above facts, we deduce that the covering number of L(Φnorm) satisfies\nN [L(Φnorm), δ] ≤ (c1KLwδ−1)b, for some positive absolute constant c1. Then by Dudley entropy integral bound on the ERC, we know that\nRn[L(Φnorm)] ≤ inf τ>0 τ + 1√ n ∫ ϑ τ √ logN [L(Φnorm), δ] dδ, (C.12)\nwhere ϑ = supg(·;W,v)∈L(Φnorm),x∈Rd |g(x;W, v)|. Moreover, from Lemma C.1 and the fact that the loss function is Lipschitz continuous, we have\nϑ ≤ c2 ·B · L+1∏ j=1 Bj (C.13)\nfor some positive absolute constant c2. Therefore, by calculations, we derive from (C.12) that Rn[L(Φnorm)] = O ( ϑ√ n · √ b · log KLw √ n ϑ √ b ) ,\nthen we conclude the proof of the lemma by plugging in (C.9), (C.10), (C.11), and (C.13), and using the definition of γ1 and γ2 in (A.3)." }, { "heading": "C.5 PROOF OF LEMMA B.6", "text": "Remember that the covering number of Q is N2(δ,Q), we assume that there exists q1, . . . , qN2(δ,Q) ∈ Q such that for any q ∈ Q, there exists some qk, where 1 ≤ k ≤ N2(δ,Q), so that ∥q − qk∥2 ≤ δ. Moreover, by taking δ = γ1n−1/2 log(γ2n) = b2(n, γ1, γ2) and N2 = N2[b2(n, γ1, γ2),Q], we have\nP{max q∈Q |Df (q∥p)− D̂f (q∥p)| ≥ c · [b2(n, γ1, γ2) + L+1∏ j=1 Bj · n−1/2 · √ log(N2/ε)]}\n≤ N2∑ k=1 P{|Df (q∥p)− D̂f (q∥p)| ≥ c · [b2(n, γ1, γ2) + L+1∏ j=1 Bj · n−1/2 · √ log(N2/ε)]}\n≤ N2 · ε/N2 = ε, where the second line comes from union bound, and the last line comes from Theorem A.3. By this, we conclude the proof of the lemma." }, { "heading": "C.6 PROOF OF LEMMA C.1", "text": "The proof follows by applying the Lipschitz property and bounded spectral norm of Wj recursively:\n∥φ(x1;W, v)− φ(x2;W, v)∥2 = ∥WL+1(σvL · · ·W2σv1W1x1 − σvL · · ·W2σv1W1x2)∥2 ≤ ∥WL+1∥2 · ∥σvL(WL · · ·W2σv1W1x1 −WL · · ·W2σv1W1x2)∥2 ≤ BL+1 · ∥WL · · ·W2σv1W1x1 −WL · · ·W2σv1W1x2∥2\n≤ · · · ≤ L+1∏ j=1 Bj · ∥x1 − x2∥2.\nHere in the third line we uses the fact that ∥Wj∥2 ≤ Bj and the 1-Lipschitz property of σvj (·), and in the last line we recursively apply the same argument as in the above lines. This concludes the proof of the lemma." 
}, { "heading": "C.7 PROOF OF LEMMA C.2", "text": "Recall that φ(x;W, v) takes the form\nφ(x;W, v) =WL+1σvLWL · · ·σv1W1x.\nFor notational convenience, we denote by φij(x) = σvij (W i jx) for i = 1, 2. By this, φ(x;W, v) has the form φ(x;W i, vi) = W iL+1φ i L ◦ · · · ◦ φi1(x). First, note that for any W 1,W 2, v1 and v2, by triangular inequality, we have\n∥φ(x,W 1, v1)− φ(x,W 2, v2)∥2 = ∥W 1L+1φ1L ◦ · · · ◦ φ11(x)−W 2L+1φ2L ◦ · · · ◦ φ21(x)∥2 ≤ ∥W 1L+1φ1L ◦ · · · ◦ φ11(x)−W 2L+1φ1L ◦ · · · ◦ φ11(x)∥2\n+ ∥W 2L+1φ1L ◦ · · · ◦ φ11(x)−W 2L+1φ2L ◦ · · · ◦ φ21(x)∥2 ≤ ∥W 1L+1 −W 2L+1∥F · ∥φ1L ◦ · · · ◦ φ11(x)∥2\n+BL+1 · ∥φ1L ◦ · · · ◦ φ11(x)− φ2L ◦ · · · ◦ φ21(x)∥2. (C.14)\nMoreover, note that for any ℓ ∈ [L], we have the following bound on ∥φ1L ◦ · · · ◦ φ11(x)∥2:\n∥φiℓ ◦ · · · ◦ φi1(x)∥2 ≤ ∥W iℓφiℓ−1 ◦ · · · ◦ φi1(x)∥2 + ∥viℓ∥2 ≤ Bℓ · ∥φiℓ−1 ◦ · · · ◦ φi1(x)∥2 +Aℓ\n≤ ∥x∥2 · ℓ∏ j=1 Bj + ℓ∑ j=1 Aj ℓ∏ i=j+1 Bi, (C.15)\nwhere the first inequality comes from the triangle inequality, and the second inequality comes from the bounded spectral norm of W ij , while the last inequality simply applies the previous arguments recursively. Therefore, combining (C.14), we have\n∥φ(x,W 1, v1)− φ(x,W 2, v2)∥2 ≤ ( B · L∏ j=1 Bj + L∑ j=1 Aj L∏ i=j+1 Bi ) · ∥W 1L+1 −W 2L+1∥F\n+BL+1 · ∥φ1L ◦ · · · ◦ φ11(x)− φ2L ◦ · · · ◦ φ21(x)∥2. (C.16)\nSimilarly, by triangular inequality, we have\n∥φ1L ◦ · · · ◦ φ11(x)− φ2L ◦ · · · ◦ φ21(x)∥2 ≤ ∥φ1L ◦ φ1L−1 ◦ · · · ◦ φ11(x)− φ2L ◦ φ1L−1 ◦ · · · ◦ φ11(x)∥2\n+ ∥φ2L ◦ φ1L−1 ◦ · · · ◦ φ11(x)− φ2L ◦ φ2L−1 ◦ · · · ◦ φ21(x)∥2 ≤ ∥φ1L ◦ φ1L−1 ◦ · · · ◦ φ11(x)− φ2L ◦ φ1L−1 ◦ · · · ◦ φ11(x)∥2 (C.17)\n+BL · ∥φ1L−1 ◦ · · · ◦ φ11(x)− φ2L−1 ◦ · · · ◦ φ21(x)∥2,\nwhere the second inequality uses the bounded spectral norm of WL and 1-Lipschitz property of σvL(·). For notational convenience, we further denote y = φ1L−1 ◦ · · · ◦ φ11(x), then\n∥φ1L(y)− φ2L(y)∥2 = ∥σ(W 1Ly − v1L)− σ(W 2Ly − v2L)}∥2 ≤ ∥v1L − v2L∥2 + ∥W 1L −W 2L∥F · ∥y∥2,\nwhere the inequality comes from the 1-Lipschitz property of σ(·). Moreover, combining (C.15), it holds that\n∥φ1L(y)− φ2L(y)∥2 ≤ ∥v1L − v2L∥2 + ∥W 1L −W 2L∥F · ( B · L−1∏ j=1 Bj + L−1∑ j=1 Aj L−1∏ i=j+1 Bi ) . (C.18)\nBy (C.17) and (C.18), we have\n∥φ1L ◦ · · · ◦ φ11(x)− φ2L ◦ · · · ◦ φ21(x)∥2 ≤ ∥v1L − v2L∥2 + ∥W 1L −W 2L∥F · ( B · L−1∏ j=1 Bj + L−1∑ j=1 Aj L−1∏ i=j+1 Bi ) +BL · ∥φ1L−1 ◦ · · · ◦ φ11(x)− φ2L−1 ◦ · · · ◦ φ21(x)∥2\n≤ L∑ j=1 L∏ i=j+1 Bi · ∥v1j − v2j ∥2 + B ·\n∏L+1 j=1 Bj\nminj Bj · L∑ j=1 Aj · L∑ j=1 ∥W 1j −W 2j ∥F\n≤ B ·\n∏L+1 j=1 Bj\nminj Bj · L∑ j=1 Aj · L∑ j=1 (∥v1j − v2j ∥2 + ∥W 1j −W 2j ∥F).\nHere in the second inequality we recursively apply the previous arguments. Further combining (C.16), we obtain that\n∥φ(x,W 1, v1)− φ(x,W 2, v2)∥2\n≤ B ·\n∏L+1 j=1 Bj\nminj Bj · L∑ j=1 Aj · (L+1∑ j=1 ∥W 1j −W 2j ∥F + L∑ j=1 ∥v1j − v2j ∥2 )\n≤ B √ 2L+ 1 ·\n∏L+1 j=1 Bj\nminj Bj · L∑ j=1 Aj · √√√√L+1∑ j=1 ∥W 1j −W 2j ∥2F + L∑ j=1 ∥v1j − v2j ∥22,\nwhere we use Cauchy-Schwarz inequality in the last line. This concludes the proof of the lemma." }, { "heading": "D AUXILIARY RESULTS", "text": "Lemma D.1. The following statements for entropy hold.\n1. Suppose that supg∈G ∥g∥∞ ≤M , then\nH4M,B( √ 2δ,G,Q) ≤ H2,B(δ,G,Q)\nfor any δ > 0.\n2. For 1 ≤ q <∞, and Q a distribution, we have\nHp,B(δ,G,Q) ≤ H∞(δ/2,G),\nfor any δ > 0. Here H∞ is the entropy induced by infinity norm.\n3. Based on the above two statements, suppose that supg∈G ∥g∥∞ ≤M , we have\nH4M,B( √ 2 · δ,G,Q) ≤ H∞(δ/2,G),\nby taking p = 2.\nProof. 
See van de Geer & van de Geer (2000) for a detailed proof.\nLemma D.2. The entropy of the neural network set defined in (4.1) satisfies\nH∞[δ,ΦM (L, p, s)] ≤ (s+ 1) log(2δ−1(L+ 1)V 2),\nwhere V = ∏L+1 l=0 (pl + 1).\nProof. See Schmidt-Hieber (2017) for a detailed proof.\nTheorem D.3. Assume that supg∈G ρK(g) ≤ R. Take a, C, C0, and C1 satisfying that a ≤ C1 √ nR2/K, a ≤ 8 √ nR, a ≥ C0 · [ ∫ R 0 H 1/2 K,B(u,G,P)du ∨ R], and C20 ≥ C2(C1 + 1). It holds that\nP[sup g∈G\n|EPn(g)− EP(g)| ≥ a · n−1/2] ≤ C exp ( − a 2\nC2(C1 + 1)R2\n) .\nProof. See van de Geer & van de Geer (2000) for a detailed proof.\nLemma D.4. Suppose that ∥g∥∞ ≤ K, and ∥g∥ ≤ R, then ρ22K,P(g) ≤ 2R2. Moreover, for any K ′ ≥ K, we have ρ22K′,P(g) ≤ 2R2.\nProof. See van de Geer & van de Geer (2000) for a detailed proof.\nTheorem D.5. For any function f in the Hölder ball Cβd ([0, 1]d,K) and any integers m ≥ 1 and N ≥ (β+1)d ∨ (K +1), there exists a network f̃ ∈ Φ(L, (d, 12dN, . . . , 12dN, 1), s) with number of layers L = 8 + (m + 5)(1 + ⌈log2 d⌉) and number of parameters s ≤ 94d2(β + 1)2dN(m + 6)(1 + ⌈log2 d⌉), such that\n∥f̃ − f∥L∞([0,1]d) ≤ (2K + 1)3d+1N2−m +K2βN−β/d.\nProof. See Schmidt-Hieber (2017) for a detailed proof.\nLemma D.6. If the function f is strongly convex with parameter µ0 > 0 and has Lipschitz continuous gradient with parameter L0 > 0, then the Fenchel duality f† of f is 1/L0-strongly convex and has 1/µ0-Lipschitz continuous gradient (therefore, f† itself is Lipschitz continuous).\nProof. See Zhou (2018) for a detailed proof." } ]
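As a quick numerical sanity check of Lemma D.6 (our own illustration, with hypothetical values): take f(u) = u log u restricted to u ∈ [1/M, M], which is µ0-strongly convex and L0-smooth with µ0 = 1/M and L0 = M since f′′(u) = 1/u; its conjugate f†(t) = exp(t − 1) should then have curvature between 1/L0 and 1/µ0 on the corresponding dual interval.

```python
import numpy as np

M = 4.0
mu0, L0 = 1.0 / M, M
# Dual interval: t = f'(u) = log(u) + 1 for u in [1/M, M].
t = np.linspace(1 - np.log(M), 1 + np.log(M), 1000)
conj_curvature = np.exp(t - 1.0)                   # (f†)''(t) = exp(t - 1)
print(conj_curvature.min() >= 1 / L0 - 1e-9)       # 1/L0-strong convexity: True
print(conj_curvature.max() <= 1 / mu0 + 1e-9)      # 1/mu0-Lipschitz gradient: True
```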
2019
null
SP:c8a8e6b90a56186572e90e21755e73effbeaea14
[ "The paper raises an alarm that state-of-the art change-point detection methods in the ML literature do not handle important practical aspects arising in time-series modeling, namely seasonality. Indeed, methods designed to detect changing distribution under an i.i.d. setting can fail dramatically when the assumption is violated, when the change happens in the seasonal component. The paper proposes to use an auto-encoder to find the \"main pattern\" within each seasonal window, and to use total variation penalty (l1-norm on the change) of the hidden state in the auto-encoder to encourage a smooth state-sequence which allow breaks. They use k-means clustering to partition data-points, and detect a change-point if two consequent hidden states don't end up in the same cluster. ", "This paper proposed a new model for change point detection, using autoencoders with temporal regularization, in order to impose temporal smoothness in the latent codes. To motivate this new model, the authors also provided a toy example to show how the abnormality in a time series is removed in the reconstructed signal using this additional regularization term. Experimental results were provided to support the proposed new model." ]
The change-point detection problem consists of discovering abrupt property changes in the generation process of a time-series. Most state-of-the-art models optimize the power of a kernel two-sample test, with only a few assumptions on the distribution of the data. Unfortunately, because they presume the samples are distributed i.i.d., they are not able to use information about the seasonality of a time-series. In this paper, we present a novel approach, ATR-CSPD, allowing the detection of changes in the seasonal pattern of a time-series. Our method uses an autoencoder together with a temporal regularization to learn the pattern of each seasonal cycle. Using low-dimensional representations of the seasonal patterns, it is possible to accurately and efficiently estimate the existence of a change point using a clustering algorithm. Through experiments on artificial and real-world data sets, we demonstrate the usefulness of the proposed method for several applications.
[]
[ { "authors": [ "Michèle Basseville", "Igor V Nikiforov" ], "title": "Detection of abrupt changes: theory and application, volume 104", "venue": null, "year": 1993 }, { "authors": [ "Claudie Beaulieu", "Jie Chen", "Jorge L Sarmiento" ], "title": "Change-point analysis as a tool to detect abrupt climate variations", "venue": "Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences,", "year": 1962 }, { "authors": [ "Wei-Cheng Chang", "Chun-Liang Li", "Yiming Yang", "Barnabás Póczos" ], "title": "Kernel change-point detection with auxiliary deep generative models", "venue": "arXiv preprint arXiv:1901.06077,", "year": 2019 }, { "authors": [ "Eric Ghysels", "Denise R Osborn", "Thomas J Sargent" ], "title": "The econometric analysis of seasonal time series", "venue": null, "year": 2001 }, { "authors": [ "Arthur Gretton", "Karsten M Borgwardt", "Malte J Rasch", "Bernhard Schölkopf", "Alexander Smola" ], "title": "A kernel two-sample test", "venue": "Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "Zaid Harchaoui", "Olivier Cappé" ], "title": "Retrospective mutiple change-point estimation with kernels", "venue": "IEEE/SP 14th Workshop on Statistical Signal Processing,", "year": 2007 }, { "authors": [ "Zaı̈d Harchaoui", "Eric Moulines", "Francis R Bach" ], "title": "Kernel change-point analysis", "venue": "In Advances in neural information processing systems,", "year": 2009 }, { "authors": [ "John A Hartigan", "Manchek A Wong" ], "title": "Algorithm as 136: A k-means clustering algorithm", "venue": "Journal of the Royal Statistical Society. Series C (Applied Statistics),", "year": 1979 }, { "authors": [ "Yoshinobu Kawahara", "Takehisa Yairi", "Kazuo Machida" ], "title": "Change-point detection in timeseries data based on subspace identification", "venue": "In Seventh IEEE International Conference on Data Mining (ICDM", "year": 2007 }, { "authors": [ "Song Liu", "Makoto Yamada", "Nigel Collier", "Masashi Sugiyama" ], "title": "Change-point detection in time-series data by relative density-ratio estimation", "venue": "Neural Networks,", "year": 2013 }, { "authors": [ "Robert Lund", "Xiaolan L Wang", "Qi Qi Lu", "Jaxk Reeves", "Colin Gallagher", "Yang Feng" ], "title": "Changepoint detection in periodic and autocorrelated time series", "venue": "Journal of Climate,", "year": 2007 }, { "authors": [ "Vasundhara Puttagunta", "Konstantinos Kalpakis" ], "title": "Adaptive methods for activity monitoring of streaming data", "venue": "In ICMLA,", "year": 2002 }, { "authors": [ "Masashi Sugiyama", "Shinichi Nakajima", "Hisashi Kashima", "Paul V Buenau", "Motoaki Kawanabe" ], "title": "Direct importance estimation with model selection and its application to covariate shift adaptation", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Vladimir Vapnik" ], "title": "The nature of statistical learning theory", "venue": "Springer science & business media,", "year": 2013 } ]
[ { "heading": "1 INTRODUCTION", "text": "Time series data are sequences of measurements over time describing the behavior of systems. Time series analysis has become increasingly important in monitoring systems health and performance. As the system behavior changes over time due to external events and/or internal modifications, the problem of identifying the locations of these changes, referred to as Change Point Detection (CPD) has quickly drawn researchers attention. The CPD problem has been widely researched during the last three decades (4; 18; 13; 6; 16; 9) and it has been applied to several fields such as financial market analysis, medicine, climate science as well as system monitoring. The first methods found in the literature for CPD compared probability distributions between two consecutive intervals in a time-series, and alarmed if the difference became significant. Among them we find the cumulative sum algorithm (4) or the change finder for auto-regressive processes (18). Another line of research focuses on subspace identification (12), where the time series is modeled using a linear state-space and a change is identified using the model parameters.\nSince all these methods make strong assumptions on the distributions, a need arises for more generic solutions and non parametric algorithms, such as direct density estimation methods. Unfortunately, these methods suffer from the curse of dimensionality (17) and are not applicable to real life problems. To overcome this challenge, one possible solution is to estimate the ratio of densities between two successive window without computing the densities themselves. This is achieved by going through the estimation of a probability divergence metric such as Kullback-Leibler in (16) or the Person divergence (13). Such methods proved to be quite successful. Another line of research focuses on Kernel two sample test (8), where the kernel trick is used to evaluate mean discrepancy of two samples in a Reproducing Kernel Hilbert Space. For example, Harchaoui et al. (10) introduced a test statistic using the maximum kernel fisher discriminant ratio. More recently, Chang et al. (6) proposed a way to learn an optimal kernel representation for CPD by using an auxiliary generative model.\nNonetheless, although more general, these models still assume that the process is time independent. However, very often, a time-series follows a seasonal behavior. Seasonality is defined as the tendency of a time-series to exhibit behavior that repeats itself every fixed period of time. The term season is used to represent the period of time before behavior begins to repeat itself. Detecting change in the seasonal pattern of a time-series is critical for many applications such as service mon-\nitoring or climate change detection (5; 15). In some cases, it requires a totally different approach than regular CPD solutions. For example, Figure 1 shows the CPU utilization metric values recorded over a period of six days for some server machine. An event occurs every day at 10AM, representing some background process that is important for the system. A forecasting system should adapt its predictions to take into consideration this event. If for some reason, this process was moved to 4PM, the forecasting system is expected to detect this change and adjust the forecast values accordingly. 
Detecting this kind of change requires taking into account information about the location of the peak within each period.\nThe regular algorithms presented in this section would fail to detect such change points. Indeed: (a) Existing parametric models for CPD compare the distributions of two windows by looking at some statistics; here, we will not observe a persistent statistical change, as the spike still occurs. (b) Kernel-based methods assume that the samples are generated i.i.d. (10; 13; 6); hence, they will consider every seasonal spike as an anomaly instead of including it in the model.\nIn this paper, we introduce a CPD variation for seasonal time series. The issue of seasonal CPD has also been addressed by Lund et al. (14), who developed a test for periodic and autocorrelated time series. However, this test is based on predefined statistics and focuses on median (level) changes of the metric.\nWe present ATR-CSPD, a method which uses an Autoencoder with Temporal Regularization for detecting Changes in Seasonal Patterns. Our contributions are the following:\n• In Section 2, we introduce a variation of the Change Point Detection (CPD) problem, called Change in Seasonal Pattern Detection (CSPD), for seasonal time series, and explain why current CPD solutions are ineffective at detecting such changes.\n• In Section 3, we describe ATR-CSPD, an unsupervised approach designed to capture the essential information of each period in a seasonal time series. This is achieved by applying a time-dependent regularization to the model's loss. We then explain how to use this model for CPD.\n• Sections 4 and 5 present an extensive evaluation on both generated and real-life data. This benchmark exhibits how ATR-CSPD manages to detect new types of change points that go undetected by regular algorithms." }, { "heading": "2 PROBLEM FORMULATION", "text": "In this section, we start by presenting the simplified formulation of the Change Point Detection problem. We then introduce a more elaborate formulation for the detection of change points in seasonal metrics, and illustrate the difference with an example. The simplified formulation assumes a single generative function behind the time-series values, and independence between the observations. Many common methods for detecting change points are applied based on this simplified version. For example, kernel density methods are based on the analysis of the single underlying function that generates the observations before and after the change (13; 6).\nDefinition 1. Change Point Detection (CPD): Given a sequence of 1-dimensional observations {x1, . . . , xt, . . . , xN} with xi ∈ R for all i, a change point t0 is a point such that {x1, . . . , xt0−1} are sampled i.i.d. from a distribution F1, and {xt0, . . . , xN} are sampled i.i.d. from a distribution F2, where F1 ≠ F2.\nHowever, for a seasonal time-series, the assumption that up to the change-point all samples are i.i.d. does not hold, since seasonality creates dependencies between the samples and their index in the period (i.e., their phase). In the extreme, it might be that each observation within a period is drawn from an entirely different distribution. However, as we are targeting metrics that are generated by one origin system, a more plausible assumption is that each data point xi is generated by a combination of a generative function of the time series and another generative function of its phase.
For a seasonal time series with period size p, we define each point xi as xjk, where j is its period number, k is its phase number, and i = j · p + k. We denote a single seasonal window of observations of size p by wi, where wi = (xi1, . . . , xip). We can represent the original time series by grouping all its seasonal windows: {w1, . . . , wt, . . .}, wi ∈ Rp. Denote by F the generative function of the time series and by {Sk}_{k=1}^p the phase-wise generative functions; then xjk ∼ Gk, where Gk = Sk ⊗ F and ⊗ can be an additive or a multiplicative factor (7). In additive seasonal models, the metric is explained by a weighted sum of the seasonal components {Sk}_{k=1}^p and the generating function F. In multiplicative models, the sum is replaced by a multiplication. In our analysis and results, we do not distinguish between the two. Definition 2. Change in Seasonal Pattern Detection (CSPD): A change point in a seasonal pattern is a period number j0 such that ∀(k, j < j0), xjk ∼ Gk, and ∃k, xj0k ∼ G′k, where Gk ≠ G′k.\nThe difference between the two formulations can be demonstrated by the example displayed in Figure 1. It is clear that the time-series displayed in the chart represents a system that has altered its behavior. However, under the classical CPD problem formulation, the chart does not fall under the definition of a time series containing a change point, as the overall distribution did not change. Under the CSPD formulation, the changes in the seasonal components S10AM and S4PM are identified as a change point in the time series. This observation suggests that CSPD is a generalization of the CPD problem. While CPD is centered around the parameters of the cumulative value distribution, such as the median and variance of the time-series values, CSPD also focuses on the shape of, and the proportions between, the values observed in each period cycle.\nRemark: Both problems can be extended to multi-dimensional time series; we present them in 1 dimension only for simplicity." }, { "heading": "3 AUTOENCODER WITH TEMPORAL REGULARIZATION", "text": "We introduce an algorithm that is able to detect changes in the seasonal pattern of a time-series. We start by using an autoencoder to capture the main pattern of each period in the time-series. The aim is to obtain close encodings (in terms of Euclidean distance) for two periods that behave similarly, and different ones if there is an abrupt change between them. With such a representation, we can detect whether there has been a change point by examining the Euclidean distance between two adjacent encoded periods in the time series." }, { "heading": "3.1 THE AUTOENCODER MODEL", "text": "Autoencoders are neural networks that attempt to copy their input to their output. They consist of an encoder function that maps the input to an encoded version, and a decoder that performs the reconstruction. Here, we train an autoencoder to reconstruct fixed-size windows of a time-series. Let xi ∈ R^{d×p} be the ith window of size p in a d-dimensional time series, fθ1 : R^{d×p} → R^{d×q} our encoder function, gθ2 : R^{d×q} → R^{d×p} the decoder function, and n the total number of windows. In the general autoencoder model, we want to minimize:\nmin_{θ1,θ2} Σ_{i=1}^n ∥xi − gθ2(fθ1(xi))∥² + λ(∥θ1∥² + ∥θ2∥²)   (1)\nHere, θ1 and θ2 are sets of parameters that can be learned by gradient descent using back-propagation, and λ ∈ R allows us to control the L2-norm of θ1 and θ2. In ATR-CSPD, a regularization term is added to this loss function, which will contribute to the detection of change points, as sketched below.
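For concreteness, a minimal PyTorch sketch of objective (1) (our shorthand, with a single-layer tanh encoder/decoder standing in for the deeper architecture of Section 4.1; p = 288, q = 18 and λ = 0.00005 are the illustrative values used later in the paper):

```python
import torch
import torch.nn as nn

p, q, lam = 288, 18, 5e-5
enc = nn.Sequential(nn.Linear(p, q), nn.Tanh())   # f_theta1
dec = nn.Sequential(nn.Linear(q, p), nn.Tanh())   # g_theta2

def loss_eq1(x):
    """x: tensor of shape (n_windows, p), min-max scaled to [0, 1]."""
    h = enc(x)                                    # h_i = f_theta1(x_i)
    recon = ((x - dec(h)) ** 2).sum()             # sum_i ||x_i - g(f(x_i))||^2
    l2 = sum((w ** 2).sum()
             for w in list(enc.parameters()) + list(dec.parameters()))
    return recon + lam * l2
```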
}, { "heading": "3.2 TEMPORAL REGULARIZATION", "text": "In order to encourage the network to generate similar low-dimensional representations, we introduce a new term in equation (1). The idea is to penalize the network for a difference between the encoding of two consecutive periods. The resulting loss function is given in (2).\nmin θ1,θ2 n∑ i=1 ||xi − gθ2(fθ1(xi))||2 + λ(||θ1||2 + ||θ2||2) + γ n−1∑ i=1 ||hi+1 − hi||1 (2)\nWe refer to the last term as temporal regularization as it applies to neighbours period in the timeseries. Here γ ∈ R allows us to control the strength of this regularization and hi = fθ1(xi). To understand its effect we take the following examples. Figure 2a shows a one-dimensional, weekly seasonal, time-series. On it we run two autoencoder models, the first one optimizing equation (1) and the second one equation (2). We set p = 288 meaning that we consider the period as daily (the metric is recorded every 5 minutes), and expect to observe changes in the weekends. The additional parameters are set to λ = 0.00005, γ = 0.001 and q = 18. We observe that, although the regular autoencoder doesn’t get rid of the major anomalies in the data (Figure 2b), the temporal regularized model creates a generic pattern for the weekdays and for the weekends (Figure 2c). According to this result, we can infer that the low dimensional representations are similar within the weekdays and within the days of the weekends, and we can use clustering to identify breakpoints. In Appendix C, we show the results of a Principal Components Analysis (PCA) applied to the encodings generated by both algorithms, which confirms this hypothesis." }, { "heading": "3.3 LOCATING THE CHANGE POINTS", "text": "Once every window is mapped to an embedding in a reduced space, finding change points becomes much easier. We aim to find groups of similar windows, based on their location in the space. A change points will be detected if two consecutive points do not end up in the same partition.\nTo do that we choose to use the well known k-means clustering algorithm (11). In order to find the best number of clusters, and thus the number of change points, we use the silhouette score. It compares the mean pairwise distance of points in the same cluster (a) with the mean distance of each points to the nearest cluster (b): s = b−amax(a,b)\nWe run k-means iteratively on candidates number of change points, and select the one with the best silhouette score. If the received score is larger than a specified threshold (which is a hyperparameter of the model), we select the resulting partition, otherwise we consider that there were no changes." }, { "heading": "4 EXPERIMENTS", "text": "In order to illustrate the difference between regular CPD algorithms and ATR-CSPD, we start by generating data containing different types of seasonal or non seasonal change points. Then we run some experiments on several real-life seasonal data sets. We present the results in the Smart meters in London (3) data set, the NYC taxi data set (2) and on time-series taken from Azure monitoring. In the experiments, we assume that for seasonal time-series the length of the period is known (the inference of the season is outside the scope of this article) ." }, { "heading": "4.1 NETWORK ARCHITECTURE", "text": "In all the experiments we use a similar network architecture. The encoder function fθ1 is a 3-layer feed-forward neural network. Each layer consists of a linear function and a hyperbolic tangent function. 
, { "heading": "4 EXPERIMENTS", "text": "In order to illustrate the difference between regular CPD algorithms and ATR-CSPD, we start by generating data containing different types of seasonal and non-seasonal change points. Then we run experiments on several real-life seasonal data sets. We present results on the Smart Meters in London (3) data set, on the NYC taxi data set (2) and on time-series taken from Azure monitoring. In the experiments, we assume that for seasonal time-series the length of the period is known (the inference of the season is outside the scope of this article)." }, { "heading": "4.1 NETWORK ARCHITECTURE", "text": "In all the experiments, we use a similar network architecture. The encoder function fθ1 is a 3-layer feed-forward neural network. Each layer consists of a linear function and a hyperbolic tangent activation. Equation (3) shows how the layer output z is computed from its input x; the decoder is a 2-layer neural network with the same activation function:\ny = Wx + b,  z = (e^y − e^{−y}) / (e^y + e^{−y})   (3)\nThe shape of W in each layer determines whether it increases, decreases or leaves unchanged the dimension of the output. In ATR-CSPD, we reduce the dimension during the encoding phase and increase it back when decoding. This way, only the main information needed for the reconstruction is stored in the encoding.\nThe initial window is always the size of one period. For example, if we have a daily seasonality and the data is recorded every 30 minutes, we have a window size of 48. The dimension is then divided successively by 4, 2 and again 2 in the encoding phase, and multiplied by 2 and 8 in the decoding layers. The architecture is displayed in Appendix B.\nDepending on the experiment, we use a learning rate that ranges from 0.005 to 0.05. The value of the parameter λ in the loss function (2) is set to 0.00005 or 0.000001, and γ varies from 0.001 to 0.00001.\nNote that the time-series are first scaled to the range between 0 and 1 using min-max scaling." }, { "heading": "4.2 EVALUATION ON GENERATED SET", "text": "" }, { "heading": "4.2.1 SIMULATED SET WITH CHANGE POINT", "text": "For each category of change point, we generate 20 random time-series and keep track of the location of the change point inserted into the data. For all the seasonal time-series, we fix the period length to 288. The first type consists of regular change points in non-seasonal time-series: a change in the mean, the variance, or both, on white Gaussian noise without random anomalies (category A) and with random anomalies (category B). Then we generate time-series with seasonal spikes and white Gaussian noise, and search for two different types of change points: a change in the height of the spike (category C) and a change in the position of the spike (category D). Finally, we consider seasonal time-series that alternate between a quiet state and an active state, the latter with higher values and higher noise. On them, we insert the following kinds of changes: a change in the active period height (category E), a change in the quiet period height (category F), and a change in the active period length (category G). We also generate 120 samples from those categories without changes in order to evaluate false detections.\nSamples from each category are shown in Appendix A, along with more details about the implementation of the generating process." }, { "heading": "4.2.2 RESULTS", "text": "We compare the results of ATR-CSPD with two non-parametric change point algorithms: KCpE, introduced by Harchaoui and Cappé (9), and RDR, presented by Liu et al. (13). We choose to use
" }, { "heading": "4.2 EVALUATION ON GENERATED SET", "text": "" }, { "heading": "4.2.1 SIMULATED SET WITH CHANGE POINT", "text": "For each category of change point, we generate 20 random time-series and keep track of the location of the change point inserted into the data. For all the seasonal time-series, we fix the period length to 288. The first type covers regular change points in non-seasonal time-series: a change in the mean, the variance or both of white Gaussian noise, without (category A) and with random anomalies (category B). Then we generate time-series with seasonal spikes and white Gaussian noise, and search for two different types of change points: a change in the height of the spike (category C) and a change in the position of the spike (category D). Finally, we consider seasonal time-series that alternate between a quiet state and an active state with higher values and higher noise. On them we insert the following kinds of changes: a change in the active period height (category E), a change in the quiet period height (category F), and a change in the active period length (category G). We also generate 120 samples from these categories without changes in order to evaluate false detections.

Samples from each category are drawn in Appendix A, along with more details about the implementation of the generating process." }, { "heading": "4.2.2 RESULTS", "text": "We compare the results of ATR-CSPD with two non-parametric change point algorithms: KCpE, introduced by Harchaoui and Cappé (9), and RDR, presented by Liu et al. (13). We use a regular Gaussian kernel: $K(x, x') = \exp(-\gamma \|x - x'\|^2)$. The bandwidth parameters required for KCpE were set according to the “median trick” (8). For the RDR model, we follow Liu et al. by adjusting the bandwidth and the regularization parameter at each time step. Finally, we also run KCpE on deseasonalized data: for the seasonal samples, we compute the mean of each period and remove it from the original values, a common procedure for obtaining stationarity in seasonal time-series. We refer to this model as DS-KCpE.

Each model $F_i$ runs on a time-series $x_j$ and returns a set of candidate change points $\hat{t}^{ij}$. A change point $t^{j}_0$ in $x_j$ is detected by $F_i$ if

$$|\hat{t}^{ij}| = 1 \quad \text{and} \quad |t^{j}_0 - \hat{t}^{ij}_0| < 288. \qquad (4)$$

Note that even though every model is able to detect multiple change points in a time-series, our generated samples all have at most one. The detection rate for each category is the number of change points detected divided by the category size. The false positive rate of $F_i$ is the ratio of time-series without change on which $F_i$ found at least one CP.

The results are summarized in Table 1. They clearly show that regular CPD approaches fail to detect most changes in the seasonal pattern when it comes to a seasonal spike. They also demonstrate that ATR-CSPD performs at least as well as classical algorithms at detecting change points in seasonal time-series. Indeed, in categories C and D, the impact of the changes on the kernel mean estimate of the windows is minimal or nonexistent, which explains why KCpE and RDR hardly detect them. We can notice that the decomposition of the seasonal component helped DS-KCpE in category D; still, its detection rate is far from that of ATR-CSPD. A similar explanation can be derived for the next categories: except for category F, the changes affect only a minor section of the periods, so their statistical impact is smaller. It is worth noticing that ATR-CSPD performs badly on categories A and B. For very noisy Gaussian data, since a minor decrease in the MSE requires a large increase of the temporal regularization term, the autoencoder will not create a change in the encodings and the detection will be poor. In other cases, where there are too many anomalies in a single period, the model learns the anomalies to decrease the MSE, which can produce a wrong detection of a change point." }, { "heading": "4.3 LONDON ELECTRICITY DATA SET", "text": "The data records the energy consumption of 5567 households in London between November 2011 and February 2014. For each time-series, the consumption in kWh/hh is saved every 30 minutes. We assume that, in general, we will observe a weekly seasonal pattern, with similar weekdays and different weekends. Of course this assumption is not always true (retired people, for example, will probably exhibit a more monotonic week), but the results support it.

First we preprocess each entry in the dataset by aggregating every 30-minute timestamp of the week across the time-series. That is, for a specific time-series, we retrieve all the values recorded on, say, Mondays at 8:30 AM and take their mean. Doing this for every recorded timestamp gives us a new time-series representing the average week of a household.

Then we run KCpE and ATR-CSPD on this new set. As RDR is quite slow, we could not run it on this set. We say a model correctly detected a change if it detected changes only after Friday 00:00; any other change is considered a false positive. In Table 2 we present the precision and recall of the two models using these definitions. At a similar precision level, our algorithm outperforms the baseline model. Figure 3 displays a few examples of detections unique to ATR-CSPD.

We can observe that the weekends, in those cases, are characterized by a different shape, whereas the level (mean) changes seem to occur at different places in the time-series. In Figure 3a we can easily distinguish the working hours of the weekdays, whereas on the weekend no drop is observed. Looking at 3b, we notice that a peak of activity occurs every day; on Sundays, however, it happens at 12:00 PM instead of 4:00 PM as on the other days, and ATR-CSPD spotted the change. Finally, in 3c, we can observe that our model detected a change in the pattern on Sundays that did not drastically impact the mean.
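The average-week preprocessing described above amounts to a group-by over the position within the week; the following minimal pandas sketch illustrates it (the helper name average_week is ours, not from the original implementation).

```python
# A minimal sketch of the average-week aggregation described above.
# Assumption: `ts` is a pandas Series of half-hourly readings indexed
# by a DatetimeIndex.
import pandas as pd

def average_week(ts: pd.Series) -> pd.Series:
    # Group every reading by its position within the week
    # (day of week, hour, minute) and average across all weeks.
    key = [ts.index.dayofweek, ts.index.hour, ts.index.minute]
    # For half-hourly data the result has 7 * 48 = 336 points:
    # one average week per household.
    return ts.groupby(key).mean()
```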
" }, { "heading": "4.4 NYC TAXI DATASET", "text": "The city of New York provides every month a dataset containing extensive information about taxi trips (2). Taken from the Kaggle Numenta Anomaly Benchmark repository (1), the current dataset consists of the total number of taxi passengers aggregated into 30-minute buckets between July 2014 and February 2015.

The results of ATR-CSPD, compared with the two baseline models introduced in Section 4, exhibit well the differences between regular change points and changes in seasonal patterns. Looking at Figure 4, we observe that while KCpE and RDR detect many level changes, ATR-CSPD identifies only seasons in which the regular weekly pattern is broken. Looking at the dates in Figure 4b, one can notice that the weeks match, respectively, the 4th of July, Thanksgiving and the Christmas holidays. Being able to identify non-regular seasons and to treat them separately could drastically benefit a forecasting algorithm." }, { "heading": "4.5 AZURE MONITOR DATA", "text": "Azure Dynamic Threshold (DT) Monitor is a service provided in the Azure cloud for monitoring a vast range of services and usage metrics. For this purpose, DT analyzes the past observable history of metric values and creates a baseline that describes the normal behavior expected for this metric. It later uses this baseline as a forecasting method. We used the training stage (analysis of past data) and ran, as before, KCpE, ATR-CSPD and RDR to try to detect change points that happened in those time-series. We randomly selected 958 daily seasonal time-series for this purpose. The values are recorded every 5, 15, 30 or 60 minutes, and we use a total record of 10 days.

As there are no labels for this data, we focus on the differences in the change points detected. We tune the three models to find change points in 25% of the time-series. Among the changes found by ATR-CSPD, 40% are not reported by the others. We present in Figure 5 a few examples of changes only detected by our model.

In 5a, blue spikes occur between 2:40 and 2:45; in the green section, for some reason, the spike occurs at 1:35, an hour earlier than usual; finally, in the orange section, the spikes all occur between 0:55 and 1:05. When monitoring a production system, detecting such an update in a process could prevent false alarms while still allowing quick detection of anomalies. In 5b, a seasonal pattern appears at the end of the time-series and ATR-CSPD managed to find the change points. Finally, in 5c, we can notice that even a slight change in a complex seasonal pattern can be detected as long as it is repeated enough times." }, { "heading": "5 CONCLUSION", "text": "We propose ATR-CSPD, a new deep learning-based algorithm that detects changes in the seasonal pattern of a time-series. The model combines an autoencoder network with a clustering algorithm, and is able to identify changes in different real-world applications. Evaluations on multiple benchmark datasets illustrate the difference between our new approach and existing CPD methods. They also demonstrate that ATR-CSPD outperforms other models in detecting specific types of change points and could be used to improve the efficiency of time-series forecasting in many applications."
}, { "heading": "A GENERATED DATA SAMPLES", "text": "A.1 PLOTS\nA.2 IMPLEMENTATION DETAILS\nIn this section, we explain, category by category, how samples in section 4.2 are generated. Category A: Samples are following a gaussian distribution with a mean between 0 and 10 and a standard deviation between 1 and 10. Then we randomly choose whether a change will occur in the mean, in the standard deviation or in both. Finally, again at random, we define the parameters of after the change. Category B: We use the same process as category A but we add some random spikes (anomalies). The spikes are generated with at a random frequency, there will be between 1 to 60 spikes in the timeseries. Their height is also generated randomly, from 2 to 10 times the max value of the gaussian samples. Same for their length, between 5 and 15 points. At the end, we add some gaussian centered noise with a standard deviation that depends on its height. Category C: We generate some regular gaussian samples with a random mean and variance. Then we add the spikes as previously but make them repeat every 288 points. For the last 4 spikes, we increase the height by a random factor (between 1.5 and 2). Category D: The process is similar to the generation of samples of category C, but we create a random change in the periodic spike time and not in the height. Category E: First we split randomly the period into three parts: a quiet part, an active part and a quiet part again. Then we generate some gaussian samples for the quiet part, with a random mean and a standard deviation that is determined by this mean. We do the same for the active part but make sure the mean is bigger. Finally, at a specific point in time, we randomly increase the mean of the active part. Category F: We do the same as category E, but we randomly increase the mean of the quiet part and make sure it still stays lower than the active one. Category G: Again, the same process as category E, but we do not modify the mean, we randomly change the split to make the active part longer." }, { "heading": "C PCA ON THE ENCODINGS", "text": "" }, { "heading": "B NETWORK ARCHITECTURE", "text": "" } ]
2019
null
SP:14dca47a505818106502978bddde5a7f294bfeeb
[ "This paper developed a novel layerwise adaptation strategy, LAMB, that allows training BERT model with large mini-batches (32k vs baseline 512). This significantly speeds up the status quo in training BERT model, and effectively reduces the training time from original 3 days to only 76 minutes. In addition to demonstrating superior results across various tasks in practice, the paper also provides theoretical convergence analysis on LAMB optimizer. ", "This paper proposes a learning rate adaptation mechanism, called LAMB, for large-batch distributed training. The goal is to stabilize the training as the batch size increases. The idea is simple and straightforward -- there should be a layerwise learning rate adjusted by normalizing the layer weights and gradients at each layer so that layers with larger weights take larger learning steps, and vice versa. The authors perform empirical studies on BERT-large and ResNet to conclude that LAMB can scale up training batch size while still being able to converge in time with comparable accuracy." ]
Training large deep neural networks on massive datasets is computationally very challenging. There has been a recent surge of interest in using large batch stochastic optimization methods to tackle this issue. The most prominent algorithm in this line of research is LARS, which by employing layerwise adaptive learning rates trains RESNET on ImageNet in a few minutes. However, LARS performs poorly for attention models like BERT, indicating that its performance gains are not consistent across tasks. In this paper, we first study a principled layerwise adaptation strategy to accelerate training of deep neural networks using large mini-batches. Using this strategy, we develop a new layerwise adaptive large batch optimization technique called LAMB; we then provide convergence analysis of LAMB as well as LARS, showing convergence to a stationary point in general nonconvex settings. Our empirical results demonstrate the superior performance of LAMB across various tasks such as BERT and RESNET-50 training with very little hyperparameter tuning. In particular, for BERT training, our optimizer enables use of very large batch sizes of 32868 without any degradation of performance. By increasing the batch size to the memory limit of a TPUv3 Pod, BERT training time can be reduced from 3 days to just 76 minutes (Table 1). The LAMB implementation is available online1.
[ { "affiliations": [], "name": "Yang You" }, { "affiliations": [], "name": "Jing Li" }, { "affiliations": [], "name": "Sashank Reddi" }, { "affiliations": [], "name": "Jonathan Hseu" }, { "affiliations": [], "name": "Sanjiv Kumar" }, { "affiliations": [], "name": "Srinadh Bhojanapalli" }, { "affiliations": [], "name": "Xiaodan Song" }, { "affiliations": [], "name": "James Demmel" }, { "affiliations": [], "name": "Kurt Keutzer" }, { "affiliations": [], "name": "Cho-Jui Hsieh" } ]
[ { "authors": [ "Takuya Akiba", "Shuji Suzuki", "Keisuke Fukuda" ], "title": "Extremely large minibatch sgd: Training resnet-50 on imagenet in 15 minutes", "venue": "arXiv preprint arXiv:1711.04325,", "year": 2017 }, { "authors": [ "Yoshua Bengio" ], "title": "Practical recommendations for gradient-based training of deep architectures", "venue": "In Neural networks: Tricks of the trade,", "year": 2012 }, { "authors": [ "Jeremy Bernstein", "Yu-Xiang Wang", "Kamyar Azizzadenesheli", "Anima Anandkumar" ], "title": "signsgd: compressed optimisation for non-convex problems", "venue": null, "year": 2018 }, { "authors": [ "Valeriu Codreanu", "Damian Podareanu", "Vikram Saletore" ], "title": "Scale out for large minibatch sgd: Residual network training on imagenet-1k with improved accuracy and reduced time to train", "venue": "arXiv preprint arXiv:1711.04291,", "year": 2017 }, { "authors": [ "Cody Coleman", "Deepak Narayanan", "Daniel Kang", "Tian Zhao", "Jian Zhang", "Luigi Nardi", "Peter Bailis", "Kunle Olukotun", "Chris Ré", "Matei Zaharia" ], "title": "Dawnbench: An end-to-end deep learning benchmark and competition", "venue": null, "year": 2017 }, { "authors": [ "Jeffrey Dean", "Greg Corrado", "Rajat Monga", "Kai Chen", "Matthieu Devin", "Mark Mao", "Andrew Senior", "Paul Tucker", "Ke Yang", "Quoc V Le" ], "title": "Large scale distributed deep networks", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "Aditya Devarakonda", "Maxim Naumov", "Michael Garland" ], "title": "Adabatch: Adaptive batch sizes for training deep neural networks", "venue": "arXiv preprint arXiv:1712.02029,", "year": 2017 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Timothy Dozat" ], "title": "Incorporating nesterov momentum into adam", "venue": null, "year": 2016 }, { "authors": [ "Saeed Ghadimi", "Guanghui Lan" ], "title": "Stochastic first- and zeroth-order methods for nonconvex stochastic programming", "venue": "SIAM Journal on Optimization,", "year": 2013 }, { "authors": [ "Saeed Ghadimi", "Guanghui Lan" ], "title": "Stochastic first-and zeroth-order methods for nonconvex stochastic programming", "venue": "SIAM Journal on Optimization,", "year": 2013 }, { "authors": [ "Saeed Ghadimi", "Guanghui Lan", "Hongchao Zhang" ], "title": "Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization", "venue": "Mathematical Programming,", "year": 2014 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch sgd: Training imagenet in 1 hour", "venue": "arXiv preprint arXiv:1706.02677,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Elad Hoffer", "Itay Hubara", "Daniel Soudry" ], "title": "Train longer, generalize better: closing the generalization gap in large batch training of neural networks", "venue": "arXiv preprint arXiv:1705.08741,", "year": 2017 }, { "authors": [ "Forrest N Iandola", "Matthew W Moskewicz", "Khalid Ashraf", "Kurt Keutzer" ], "title": 
"Firecaffe: near-linear acceleration of deep neural network training on compute clusters", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Xianyan Jia", "Shutao Song", "Wei He", "Yangzihao Wang", "Haidong Rong", "Feihu Zhou", "Liqiang Xie", "Zhenyu Guo", "Yuanzhou Yang", "Liwei Yu" ], "title": "Highly scalable deep learning training system with mixed-precision: Training imagenet in four minutes", "venue": "arXiv preprint arXiv:1807.11205,", "year": 2018 }, { "authors": [ "Nitish Shirish Keskar", "Dheevatsa Mudigere", "Jorge Nocedal", "Mikhail Smelyanskiy", "Ping Tak Peter Tang" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima", "venue": "arXiv preprint arXiv:1609.04836,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Alex Krizhevsky" ], "title": "One weird trick for parallelizing convolutional neural networks", "venue": "arXiv preprint arXiv:1404.5997,", "year": 2014 }, { "authors": [ "Mu Li" ], "title": "Scaling Distributed Machine Learning with System and Algorithm Co-design", "venue": "PhD thesis,", "year": 2017 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Fixing weight decay regularization in adam", "venue": "arXiv preprint arXiv:1711.05101,", "year": 2017 }, { "authors": [ "James Martens", "Roger Grosse" ], "title": "Optimizing neural networks with kronecker-factored approximate curvature", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Hiroaki Mikami", "Hisahiro Suganuma", "Yoshiki Tanaka", "Yuichi Kageyama" ], "title": "Imagenet/resnet-50 training in 224 seconds", "venue": "arXiv preprint arXiv:1811.05233,", "year": 2018 }, { "authors": [ "Yurii E Nesterov" ], "title": "A method for solving the convex programming problem with convergence rate o (1/kˆ 2)", "venue": "In Dokl. akad. nauk Sssr,", "year": 1983 }, { "authors": [ "Kazuki Osawa", "Yohei Tsuji", "Yuichiro Ueno", "Akira Naruse", "Rio Yokota", "Satoshi Matsuoka" ], "title": "Second-order optimization method for large mini-batch: Training resnet-50 on imagenet in 35 epochs", "venue": null, "year": 1811 }, { "authors": [ "Benjamin Recht", "Christopher Re", "Stephen Wright", "Feng Niu" ], "title": "Hogwild: A lock-free approach to parallelizing stochastic gradient descent", "venue": "In Advances in neural information processing systems,", "year": 2011 }, { "authors": [ "Sashank J. Reddi", "Ahmed Hefny", "Suvrit Sra", "Barnabás Póczos", "Alexander J. Smola" ], "title": "Stochastic variance reduction for nonconvex optimization", "venue": "In Proceedings of the 33nd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Sashank J. 
Reddi", "Satyen Kale", "Sanjiv Kumar" ], "title": "On the Convergence of Adam & Beyond", "venue": "In Proceedings of the 6th International Conference on Learning Representations.,", "year": 2018 }, { "authors": [ "Christopher J Shallue", "Jaehoon Lee", "Joe Antognini", "Jascha Sohl-Dickstein", "Roy Frostig", "George E Dahl" ], "title": "Measuring the effects of data parallelism on neural network training", "venue": "arXiv preprint arXiv:1811.03600,", "year": 2018 }, { "authors": [ "Samuel L Smith", "Pieter-Jan Kindermans", "Quoc V Le" ], "title": "Don’t decay the learning rate, increase the batch size", "venue": "arXiv preprint arXiv:1711.00489,", "year": 2017 }, { "authors": [ "Ilya Sutskever", "James Martens", "George Dahl", "Geoffrey Hinton" ], "title": "On the importance of initialization and momentum in deep learning", "venue": "In International conference on machine learning,", "year": 2013 }, { "authors": [ "Masafumi Yamazaki", "Akihiko Kasagi", "Akihiro Tabuchi", "Takumi Honda", "Masahiro Miwa", "Naoto Fukumoto", "Tsuguchika Tabaru", "Atsushi Ike", "Kohta Nakashima" ], "title": "Yet another accelerated sgd: Resnet-50 training on imagenet", "venue": "seconds. arXiv preprint arXiv:1903.12650,", "year": 2019 }, { "authors": [ "Chris Ying", "Sameer Kumar", "Dehao Chen", "Tao Wang", "Youlong Cheng" ], "title": "Image classification at supercomputer scale", "venue": "arXiv preprint arXiv:1811.06992,", "year": 2018 }, { "authors": [ "Yang You", "Igor Gitman", "Boris Ginsburg" ], "title": "Scaling sgd batch size to 32k for imagenet training", "venue": "arXiv preprint arXiv:1708.03888,", "year": 2017 }, { "authors": [ "Yang You", "Zhao Zhang", "Cho-Jui Hsieh", "James Demmel", "Kurt Keutzer" ], "title": "Imagenet training in minutes", "venue": "In Proceedings of the 47th International Conference on Parallel Processing,", "year": 2018 }, { "authors": [ "Yang You", "Jonathan Hseu", "Chris Ying", "James Demmel", "Kurt Keutzer", "Cho-Jui Hsieh" ], "title": "Large-batch training for lstm and beyond", "venue": null, "year": 1901 }, { "authors": [ "Manzil Zaheer", "Sashank J. Reddi", "Devendra Singh Sachan", "Satyen Kale", "Sanjiv Kumar" ], "title": "Adaptive methods for nonconvex optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jingzhao Zhang", "Sai Praneeth Karimireddy", "Andreas Veit", "Seungyeon Kim", "Sashank J. Reddi", "Sanjiv Kumar", "Suvrit Sra" ], "title": "Why ADAM beats SGD for attention models", "venue": null, "year": 1912 }, { "authors": [ "Goyal" ], "title": "2017) suggested a proper learning rate warmup and decay scheme may help improve the ImageNet classification accuracy. We included these techniques in Adam/AdamW/AdaGrad tuning. Specifically, we use the learning rate recipe of Goyal et al", "venue": "(Goyal et al.,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "With the advent of large scale datasets, training large deep neural networks, even using computationally efficient optimization methods like Stochastic gradient descent (SGD), has become particularly challenging. For instance, training state-of-the-art deep learning models like BERT and ResNet-50 takes 3 days on 16 TPUv3 chips and 29 hours on 8 Tesla P100 gpus respectively (Devlin et al., 2018; He et al., 2016). Thus, there is a growing interest to develop optimization solutions to tackle this critical issue. The goal of this paper is to investigate and develop optimization techniques to accelerate training large deep neural networks, mostly focusing on approaches based on variants of SGD.\nMethods based on SGD iteratively update the parameters of the model by moving them in a scaled (negative) direction of the gradient calculated on a minibatch. However, SGD’s scalability is limited by its inherent sequential nature. Owing to this limitation, traditional approaches to improve SGD training time in the context of deep learning largely resort to distributed asynchronous setup (Dean et al., 2012; Recht et al., 2011). However, the implicit staleness introduced due to the asynchrony limits the parallelization of the approach, often leading to degraded performance. The feasibility of computing gradient on large minibatches in parallel due to recent hardware advances has seen the resurgence of simply using synchronous SGD with large minibatches as an alternative to asynchronous SGD. However, naïvely increasing the batch size typically results in degradation of generalization performance and reduces computational benefits (Goyal et al., 2017).\nSynchronous SGD on large minibatches benefits from reduced variance of the stochastic gradients used in SGD. This allows one to use much larger learning rates in SGD, typically of the order square root of the minibatch size. Surprisingly, recent works have demonstrated that up to certain minibatch sizes, linear scaling of the learning rate with minibatch size can be used to further speed up the\n1https://github.com/tensorflow/addons/blob/master/tensorflow_addons/ optimizers/lamb.py\ntraining Goyal et al. (2017). These works also elucidate two interesting aspects to enable the use of linear scaling in large batch synchronous SGD: (i) linear scaling of learning rate is harmful during the initial phase; thus, a hand-tuned warmup strategy of slowly increasing the learning rate needs to be used initially, and (ii) linear scaling of learning rate can be detrimental beyond a certain batch size. Using these tricks, Goyal et al. (2017) was able to drastically reduce the training time of ResNet-50 model from 29 hours to 1 hour using a batch size of 8192. While these works demonstrate the feasibility of this strategy for reducing the wall time for training large deep neural networks, they also highlight the need for an adaptive learning rate mechanism for large batch learning.\nVariants of SGD using layerwise adaptive learning rates have been recently proposed to address this problem. The most successful in this line of research is the LARS algorithm (You et al., 2017), which was initially proposed for training RESNET. Using LARS, ResNet-50 can be trained on ImageNet in just a few minutes! However, it has been observed that its performance gains are not consistent across tasks. For instance, LARS performs poorly for attention models like BERT. 
Furthermore, theoretical understanding of the adaptation employed in LARS is largely missing. To this end, we study and develop new approaches specially catered to the large batch setting of our interest.

Contributions. More specifically, we make the following main contributions in this paper.

• Inspired by LARS, we investigate a general adaptation strategy specially catered to large batch learning and provide intuition for the strategy.

• Based on the adaptation strategy, we develop a new optimization algorithm (LAMB) for achieving adaptivity of the learning rate in SGD. Furthermore, we provide convergence analysis for both LARS and LAMB to achieve a stationary point in nonconvex settings. We highlight the benefits of using these methods in large batch settings.

• We demonstrate the strong empirical performance of LAMB across several challenging tasks. Using LAMB we scale the batch size in training BERT to more than 32k without degrading performance, thereby cutting the training time from 3 days to 76 minutes. Ours is the first work to reduce BERT training wall time to less than a couple of hours.

• We also demonstrate the efficiency of LAMB for training state-of-the-art image classification models like RESNET. To the best of our knowledge, ours is the first adaptive solver that can achieve state-of-the-art accuracy for RESNET-50, as adaptive solvers like Adam fail to obtain the accuracy of SGD with momentum on these tasks." }, { "heading": "1.1 RELATED WORK", "text": "The literature on optimization for machine learning is vast and hence we restrict our attention to the most relevant works here. Earlier works on large batch optimization for machine learning mostly focused on convex models, obtaining a speedup by a factor of the square root of the batch size using an appropriately large learning rate. Similar results can be shown for nonconvex settings, wherein using larger minibatches improves the convergence to stationary points, albeit at the cost of extra computation. However, several important concerns were raised with respect to generalization and computational performance in large batch nonconvex settings. It was observed that training with extremely large batches was difficult (Keskar et al., 2016; Hoffer et al., 2017). Thus, several prior works carefully hand-tune training hyper-parameters, like learning rate and momentum, to avoid degradation of generalization performance (Goyal et al., 2017; Li, 2017; You et al., 2018; Shallue et al., 2018).

Krizhevsky (2014) empirically found that simply scaling the learning rate linearly with respect to batch size works well up to certain batch sizes. To avoid optimization instability due to linear scaling of the learning rate, Goyal et al. (2017) proposed a highly hand-tuned learning rate schedule which involves a warm-up strategy that gradually increases the LR to a larger value before switching to the regular LR policy (e.g. exponential or polynomial decay). Using LR warm-up and linear scaling, Goyal et al. (2017) managed to train RESNET-50 with batch size 8192 without loss in generalization performance. However, an empirical study (Shallue et al., 2018) shows that learning rate scaling heuristics with the batch size do not hold across all problems or across all batch sizes.

More recently, to reduce hand-tuning of hyperparameters, adaptive learning rates for large batch training have garnered significant interest (Reddi et al., 2018; Zaheer et al., 2018; Zhang et al., 2019). 
Several recent works successfully scaled the batch size to large values using adaptive learning rates without degrading the performance, thereby, finishing RESNET-50 training on ImageNet in a few minutes (You et al., 2018; Iandola et al., 2016; Codreanu et al., 2017; Akiba et al., 2017; Jia et al.,\n2018; Smith et al., 2017; Martens & Grosse, 2015; Devarakonda et al., 2017; Mikami et al., 2018; Osawa et al., 2018; You et al., 2019; Yamazaki et al., 2019). To the best of our knowledge, the fastest training result for RESNET-50 on ImageNet is due to Ying et al. (2018), who achieve 76+% top-1 accuracy. By using the LARS optimizer and scaling the batch size to 32K on a TPUv3 Pod, Ying et al. (2018) was able to train RESNET-50 on ImageNet in 2.2 minutes. However, it was empirically observed that none of these performance gains hold in other tasks such as BERT training (see Section 4)." }, { "heading": "2 PRELIMINARIES", "text": "Notation. For any vector xt ∈ Rd, either xt,j or [xt]j are used to denote its jth coordinate where j ∈ [d]. Let I be the d×d identity matrix, and let I = [I1, I2, ..., Ih] be its decomposition into column submatrices Ii = d× dh. For x ∈ Rd, let x(i) be the block of variables corresponding to the columns of Ii i.e., x(i) = I>i x ∈ Rdi for i = {1, 2, · · · , h}. For any function f : Rd → R, we use ∇if(x) to denote the gradient with respect to x(i). For any vectors u, v ∈ Rd, we use u2 and u/v to denote elementwise square and division operators respectively. We use ‖.‖ and ‖.‖1 to denote l2-norm and l1-norm of a vector respectively.\nWe start our discussion by formally stating the problem setup. In this paper, we study nonconvex stochastic optimization problems of the form\nmin x∈Rd\nf(x) := Es∼P[`(x, s)] + λ\n2 ‖x‖2, (1)\nwhere ` is a smooth (possibly nonconvex) function and P is a probability distribution on the domain S ⊂ Rk. Here, x corresponds to model parameters, ` is the loss function and P is an unknown data distribution.\nWe assume function `(x) is Li-smooth with respect to ith block, i.e., there exists a constant Li such that ‖∇i`(x, s)−∇i`(x+ Iiδ, s)‖ ≤ Li‖δ‖, ∀ x ∈ Rd, δ ∈ Rdi and s ∈ S, (2) for all i ∈ [h]. We use L = (L1, · · · , Lh)> to denote the h-dimensional vector of Lipschitz constants. We use L∞ and Lavg to denote maxi Li and ∑ i Li h respectively. We assume the following bound on the variance in stochastic gradients: E‖∇i`(x, s)−∇if(x)‖2 ≤ σ2i for all x ∈ Rd and i ∈ [h]. Furthermore, we also assume E‖[∇`(x, s)]i − [∇f(x)]i‖2 ≤ σ̃2i for all x ∈ Rd and i ∈ [d]. We use σ = (σ1, · · · , σh)> and σ̃ = (σ̃1, · · · , σ̃d)> to denote the vectors of standard deviations of stochastic gradient per layer and per dimension respectively. Finally, we assume that the gradients are bounded i.e., |[∇l(x, s)]j | ≤ G for all j ∈ [d], x ∈ Rd and s ∈ S. Note that such assumptions are typical in the analysis of stochastic first-order methods (cf. (Ghadimi & Lan, 2013a; Ghadimi et al., 2014; Reddi et al., 2016; 2018)).\nStochastic gradient descent (SGD) is one of the simplest first-order algorithms for solving problem in Equation 1. The update at the tth iteration of SGD is of the following form:\nxt+1 = xt − ηt 1 |St| ∑ st∈St ∇`(xt, st) + λxt, (SGD)\nwhere St is set of b random samples drawn from the distribution P. For very large batch settings, the following is a well-known result for SGD. Theorem 1 ((Ghadimi & Lan, 2013b)). 
With large batch b = T and using appropriate learning rate, we have the following for the iterates of SGD:\nE [ ‖∇f(xa)‖2 ] ≤ O\n( (f(x1)− f(x∗))L∞\nT + ‖σ‖2 T\n) .\nwhere x∗ is an optimal solution to the problem in equation 1 and xa is an iterate uniformly randomly chosen from {x1, · · · , xT }.\nHowever, tuning the learning rate ηt in SGD, especially in large batch settings, is difficult in practice. Furthermore, the dependence on L∞ (the maximum of smoothness across dimension) can lead to significantly slow convergence. In the next section, we discuss algorithms to circumvent this issue." }, { "heading": "3 ALGORITHMS", "text": "In this section, we first discuss a general strategy to adapt the learning rate in large batch settings. Using this strategy, we discuss two specific algorithms in the later part of the section. Since our primary focus is on deep learning, our discussion is centered around training a h-layer neural network.\nGeneral Strategy. Suppose we use an iterative base algorithm A (e.g. SGD or ADAM) in the small batch setting with the following layerwise update rule:\nxt+1 = xt + ηtut,\nwhere ut is the update made by A at time step t. We propose the following two changes to the update for large batch settings:\n1. The update is normalized to unit l2-norm. This is ensured by modifying the update to the form ut/‖ut‖. Throughout this paper, such a normalization is done layerwise i.e., the update for each layer is ensured to be unit l2-norm.\n2. The learning rate is scaled by φ(‖xt‖) for some function φ : R+ → R+. Similar to the normalization, such a scaling is done layerwise.\nSuppose the base algorithm A is SGD, then the modification results in the following update rule:\nx (i) t+1 = x (i) t − ηt φ(‖x(i)t ‖) ‖g(i)t ‖ g (i) t , (3)\nfor all layers i ∈ [h] and where x(i)t and g (i) t are the parameters and the gradients of the i th layer at time step t. The normalization modification is similar to one typically used in normalized gradient descent except that it is done layerwise. Note that the modification leads to a biased gradient update; however, in large-batch settings, it can be shown that this bias is small. It is intuitive that such a normalization provides robustness to exploding gradients (where the gradient can be arbitrarily large) and plateaus (where the gradient can be arbitrarily small). Normalization of this form essentially ignores the size of the gradient and is particularly useful in large batch settings where the direction of the gradient is largely preserved.\nThe scaling term involving φ ensures that the norm of the update is of the same order as that of the parameter. We found that this typically ensures faster convergence in deep neural networks. In practice, we observed that a simple function of φ(z) = min{max{z, γl}, γu} works well. It is instructive to consider the case where φ(z) = z. In this scenario, the overall change in the learning rate is ‖x (i) t ‖\n‖g(i)t ‖ , which can also be interpreted as an estimate on the inverse of Lipschitz constant of the\ngradient (see equation 2). We now discuss different instantiations of the strategy discussed above. In particular, we focus on two algorithms: LARS (3.1) and the proposed method, LAMB (3.2)." }, { "heading": "3.1 LARS ALGORITHM", "text": "The first instantiation of the general strategy is LARS algorithm (You et al., 2017), which is obtained by using momentum optimizer as the base algorithm A in the framework. LARS was earlier proposed for large batch learning for RESNET on ImageNet. 
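As a concrete illustration of the update in equation 3, the following is a minimal NumPy sketch of one layerwise-normalized SGD step; the clip-style φ and its bounds are the illustrative choice suggested earlier, and the function names are ours.

```python
# A minimal sketch of the layerwise update in equation 3: the per-layer
# gradient is normalized to unit l2-norm and the step is scaled by
# phi(||x||), with phi(z) = min(max(z, gamma_l), gamma_u) as in the text.
# The bounds gamma_l and gamma_u below are illustrative values.
import numpy as np

def phi(z, gamma_l=1e-3, gamma_u=10.0):
    return np.clip(z, gamma_l, gamma_u)

def layerwise_sgd_step(layers, grads, eta):
    # `layers` and `grads` are lists of per-layer parameter/gradient arrays.
    for x, g in zip(layers, grads):
        g_norm = np.linalg.norm(g) + 1e-12  # guard against zero gradients
        x -= eta * phi(np.linalg.norm(x)) / g_norm * g
```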
In general, it is observed that by using (heavy-ball) momentum, one can reduce the variance in the stochastic gradients at the cost of a little bias. The pseudocode for LARS is provided in Algorithm 1.

We now provide convergence analysis for LARS in the general nonconvex setting stated in this paper. For the sake of simplicity, we analyze the case where β1 = 0 and λ = 0 in Algorithm 1; however, our analysis should extend to the general case as well. We defer all discussion of the convergence rate to the end of the section.

Theorem 2. Let $\eta_t = \eta = \sqrt{\tfrac{2(f(x_1) - f(x^*))}{\alpha_u^2 \|L\|_1 T}}$ for all $t \in [T]$, $b = T$, and $\alpha_l \le \phi(v) \le \alpha_u$ for all $v > 0$ where $\alpha_l, \alpha_u > 0$. Then for $x_t$ generated using LARS (Algorithm 1), we have the following bound:

$$\left( \mathbb{E}\left[ \frac{1}{\sqrt{h}} \sum_{i=1}^{h} \|\nabla_i f(x_a)\| \right] \right)^2 \le O\left( \frac{(f(x_1) - f(x^*)) L_{avg}}{T} + \frac{\|\sigma\|_1^2}{Th} \right),$$

where $x^*$ is an optimal solution to the problem in equation 1 and $x_a$ is an iterate uniformly randomly chosen from $\{x_1, \cdots, x_T\}$.

Algorithm 1 LARS
Input: $x_1 \in \mathbb{R}^d$, learning rate $\{\eta_t\}_{t=1}^T$, parameter $0 < \beta_1 < 1$, scaling function $\phi$, $\epsilon > 0$
Set $m_0 = 0$
for $t = 1$ to $T$ do
  Draw b samples $S_t$ from $\mathbb{P}$
  Compute $g_t = \frac{1}{|S_t|} \sum_{s_t \in S_t} \nabla \ell(x_t, s_t)$
  $m_t = \beta_1 m_{t-1} + (1 - \beta_1)(g_t + \lambda x_t)$
  $x_{t+1}^{(i)} = x_t^{(i)} - \eta_t \frac{\phi(\|x_t^{(i)}\|)}{\|m_t^{(i)}\|} m_t^{(i)}$ for all $i \in [h]$
end for

Algorithm 2 LAMB
Input: $x_1 \in \mathbb{R}^d$, learning rate $\{\eta_t\}_{t=1}^T$, parameters $0 < \beta_1, \beta_2 < 1$, scaling function $\phi$, $\epsilon > 0$
Set $m_0 = 0$, $v_0 = 0$
for $t = 1$ to $T$ do
  Draw b samples $S_t$ from $\mathbb{P}$
  Compute $g_t = \frac{1}{|S_t|} \sum_{s_t \in S_t} \nabla \ell(x_t, s_t)$
  $m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t$
  $v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2$
  $m_t = m_t / (1 - \beta_1^t)$
  $v_t = v_t / (1 - \beta_2^t)$
  Compute ratio $r_t = \frac{m_t}{\sqrt{v_t} + \epsilon}$
  $x_{t+1}^{(i)} = x_t^{(i)} - \eta_t \frac{\phi(\|x_t^{(i)}\|)}{\|r_t^{(i)} + \lambda x_t^{(i)}\|} (r_t^{(i)} + \lambda x_t^{(i)})$
end for" }, { "heading": "3.2 LAMB ALGORITHM", "text": "The second instantiation of the general strategy is obtained by using ADAM as the base algorithm A. The ADAM optimizer is popular in the deep learning community and has been shown to perform well for training state-of-the-art language models like BERT. Unlike LARS, the adaptivity of LAMB is two-fold: (i) per-dimension normalization with respect to the square root of the second moment used in ADAM, and (ii) layerwise normalization obtained due to layerwise adaptivity. The pseudocode for LAMB is provided in Algorithm 2. When β1 = 0 and β2 = 0, the algorithm reduces to SIGNSGD, where the learning rate is scaled by the square root of the layer dimension (Bernstein et al., 2018).

The following result provides the convergence rate for LAMB in general nonconvex settings. Similar to the previous case, we focus on the setting where β1 = 0 and λ = 0. As before, our analysis extends to the general case; however, the calculations become messy.

Theorem 3. Let $\eta_t = \eta = \sqrt{\tfrac{2(f(x_1) - f(x^*))}{\alpha_u^2 \|L\|_1 T}}$ for all $t \in [T]$, $b = T$, $d_i = d/h$ for all $i \in [h]$, and $\alpha_l \le \phi(v) \le \alpha_u$ for all $v > 0$ where $\alpha_l, \alpha_u > 0$. Then for $x_t$ generated using LAMB (Algorithm 2), we have the following bounds:

1. When $\beta_2 = 0$, we have
$$\left( \mathbb{E}\left[ \frac{1}{\sqrt{d}} \|\nabla f(x_a)\|_1 \right] \right)^2 \le O\left( \frac{(f(x_1) - f(x^*)) L_{avg}}{T} + \frac{\|\tilde{\sigma}\|_1^2}{Th} \right),$$

2. When $\beta_2 > 0$, we have
$$\mathbb{E}[\|\nabla f(x_a)\|^2] \le O\left( \sqrt{\frac{G^2 d}{h(1 - \beta_2)}} \times \left[ \sqrt{\frac{2(f(x_1) - f(x^*)) \|L\|_1}{T}} + \frac{\|\tilde{\sigma}\|_1}{\sqrt{T}} \right] \right),$$

where $x^*$ is an optimal solution to the problem in equation 1 and $x_a$ is an iterate uniformly randomly chosen from $\{x_1, \cdots, x_T\}$.

Discussion on convergence rates. We first start our discussion with the comparison of the convergence rate of LARS with that of SGD (Theorem 1). 
The convergence rates of LARS and SGD differ in two ways: (1) the convergence criterion is (E[ ∑h i=1 ‖∇if‖])2 as opposed to E[‖∇f‖2] in SGD and (2) the dependence on L and σ in the convergence rate. Briefly, the convergence rate of LARS is better than SGD when the gradient is denser than curvature and stochasticity. This convergence rate comparison is similar in spirit to the one obtained in (Bernstein et al., 2018). Assuming that the convergence criterion in Theorem 1 and Theorem 2 is of similar order (which happens when gradients are fairly dense), convergence rate of LARS and LAMB depend on Lavg instead of L∞ and are thus, significantly better than that of SGD. A more quantitative comparison is provided in Section C of the Appendix. The comparison of LAMB (with β2 = 0) with SGD is along similar lines. We obtain slightly worse rates for the case where β2 > 0; although, we believe that its behavior should be better than the case β2 = 0. We leave this investigation to future work." }, { "heading": "4 EXPERIMENTS", "text": "We now present empirical results comparing LAMB with existing optimizers on two important large batch training tasks: BERT and RESNET-50 training. We also compare LAMB with existing optimizers for small batch size (< 1K) and small dataset (e.g. CIFAR, MNIST) (see Appendix).\nExperimental Setup. To demonstrate its robustness, we use very minimal hyperparameter tuning for the LAMB optimizer. Thus, it is possible to achieve better results by further tuning the hyperparameters. The parameters β1 and β2 in Algorithm 2 are set to 0.9 and 0.999 respectively in all our experiments; we only tune the learning rate. We use a polynomially decaying learning rate of ηt = η0×(1−t/T ) in Algorithm 2), which is the same as in BERT baseline. This setting also works for all other applications in this paper. Furthermore, for BERT and RESNET-50 training, we did not tune the hyperparameters of LAMB while increasing the batch size. We use the square root of LR scaling rule to automatically adjust learning rate and linear-epoch warmup scheduling. We use TPUv3 in all the experiments. A TPUv3 Pod has 1024 chips and can provide more than 100 petaflops performance for mixed precision computing. To make sure we are comparing with solid baselines, we use grid search to tune the hyper-parameters for ADAM, ADAGRAD, ADAMW (ADAM with weight decay), and LARS. We also tune weight decay for ADAMW. All the hyperparameter tuning settings are reported in the Appendix. Due to space constraints, several experimental details are relegated to the Appendix." }, { "heading": "4.1 BERT TRAINING", "text": "We first discuss empirical results for speeding up BERT training. For this experiment, we use the same dataset as Devlin et al. (2018), which is a concatenation of Wikipedia and BooksCorpus with 2.5B and 800M words respectively. We specifically focus on the SQuAD task2 in this paper. The F1 score on SQuAD-v1 is used as the accuracy metric in our experiments. All our comparisons are with respect to the baseline BERT model by Devlin et al. (2018). To train BERT, Devlin et al. (2018) first train the model for 900k iterations using a sequence length of 128 and then switch to a sequence length of 512 for the last 100k iterations. This results in a training time of around 3 days on 16 TPUv3 chips. The baseline BERT model3 achieves a F1 score of 90.395. To ensure a fair comparison, we follow the same SQuAD fine-tune procedure of Devlin et al. 
(2018) without modifying any configuration (including number of epochs and hyperparameters). As noted earlier, we could get even better results by changing the fine-tune configuration. For instance, by just slightly changing the learning rate in the fine-tune stage, we can obtain a higher F1 score of 91.688 for the batch size of 16K using LAMB. We report a F1 score of 91.345 in Table 1, which is the score obtained for the untuned version. Below we describe two different training choices for training BERT and discuss the corresponding speedups.\nFor the first choice, we maintain the same training procedure as the baseline except for changing the training optimizer to LAMB. We run with the same number of epochs as the baseline but with batch size scaled from 512 to 32K. The choice of 32K batch size (with sequence length 512) is mainly due to memory limits of TPU Pod. Our results are shown in Table 1. By using the LAMB optimizer, we are able to achieve a F1 score of 91.460 in 15625 iterations for a batch size of 32768 (14063 iterations for sequence length 128 and 1562 iterations for sequence length 512). With 32K batch size, we reduce BERT training time from 3 days to around 100 minutes. We achieved 49.1 times speedup by 64 times computational resources (76.7% efficiency). We consider the speedup is great because we use the synchronous data-parallelism. There is a communication overhead coming from transferring of the gradients over the interconnect. For RESNET-50, researchers are able to achieve 90% scaling efficiency because RESNET-50 has much fewer parameters (# parameters is equal to #gradients) than BERT (25 million versus 300 million).\nTo obtain further improvements, we use the Mixed-Batch Training procedure with LAMB. Recall that BERT training involves two stages: the first 9/10 of the total epochs use a sequence length of 128, while the last 1/10 of the total epochs use a sequence length of 512. For the second stage training, which involves a longer sequence length, due to memory limits, a maximum batch size of only 32768 can be used on a TPUv3 Pod. However, we can potentially use a larger batch size for the first stage because of a shorter sequence length. In particular, the batch size can be increased to 131072 for the first stage. However, we did not observe any speedup by increasing the batch size from 65536 to 131072 for the first stage, thus, we restrict the batch size to 65536 for this stage. By using this strategy, we are able to make full utilization of the hardware resources throughout the training\n2https://rajpurkar.github.io/SQuAD-explorer/ 3Pre-trained BERT model can be downloaded from https://github.com/google-research/bert\nprocedure. Increasing the batch size is able to warm-up and stabilize the optimization process (Smith et al., 2017), but decreasing the batch size brings chaos to the optimization process and can cause divergence. In our experiments, we found a technique that is useful to stabilize the second stage optimization. Because we switched to a different optimization problem, it is necessary to re-warm-up the optimization. Instead of decaying the learning rate at the second stage, we ramp up the learning rate from zero again in the second stage (re-warm-up). As with the first stage, we decay the learning rate after the re-warm-up phase. With this method, we only need 8599 iterations and finish BERT training in 76 minutes (100.2% efficiency).\nComparison with ADAMW and LARS. 
To ensure that our approach is compared to a solid baseline for the BERT training, we tried three different strategies for tuning ADAMW (Loshchilov & Hutter, 2017): (1) ADAMW with default hyperparameters (Devlin et al., 2018) (2) ADAMW with the same hyperparameters as LAMB, and (3) ADAMW with tuned hyperparameters. ADAMW stops scaling at the batch size of 16K because it is not able to achieve the target F1 score (88.1 vs 90.4). The tuning information of ADAMW is shown in the Appendix. For 64K/32K mixed-batch training, even after extensive tuning of the hyperparameters, we fail to get any reasonable result with ADAMW optimizer. We conclude that ADAMW does not work well in large-batch BERT training or is at least hard to tune. We also observe that LAMB performs better than LARS for all batch sizes (Table 2)." }, { "heading": "4.2 IMAGENET TRAINING WITH RESNET-50.", "text": "ImageNet training with ResNet-50 is an industry standard metric that is being used in MLPerf4. The baseline can get 76.3% top-1 accuracy in 90 epochs (Goyal et al., 2017). All the successful implementations are based on momentum SGD (He et al., 2016; Goyal et al., 2017) or LARS optimizer (Ying et al., 2018; Jia et al., 2018; Mikami et al., 2018; You et al., 2018; Yamazaki et al., 2019). Before our study, we did not find any paper reporting a state-of-the-art accuracy achieved\n4https://mlperf.org/\nby ADAM (Kingma & Ba, 2014), ADAGRAD, or ADAMW optimizer. In our experiments, even with comprehensive hyper-parameter tuning, ADAGRAD/ADAM/ADAMW (with batch size 16K) only achieves 55.38%/66.04%/67.27% top-1 accuracy. After adding learning rate scheme of Goyal et al. (2017), the top-1 accuracy of ADAGRAD/ADAM/ADAMW was improved to 72.0%/73.48%/73.07%. However, they are still much lower than 76.3%. The details of the tuning information are in the Appendix. Table 3 shows that LAMB can achieve the target accuracy. Beyond a batch size of 8K, LAMB’s accuracy is higher than the momentum. LAMB’s accuracy is also slightly better than LARS. At a batch size of 32K, LAMB achieves 76.4% top-1 accuracy while LARS achieves 76.3%. At a batch size of 2K, LAMB is able to achieve 77.11% top-1 accuracy while LARS achieves 76.6%." }, { "heading": "4.3 HYPERPARAMETERS FOR SCALING THE BATCH SIZE", "text": "For BERT and ImageNet training, we did not tune the hyperparameters of LAMB optimizer when increasing the batch size. We use the square root LR scaling rule and linear-epoch warmup scheduling to automatically adjust learning rate. The details can be found in Tables 4 and 5" }, { "heading": "5 CONCLUSION", "text": "Large batch techniques are critical to speeding up deep neural network training. In this paper, we propose the LAMB optimizer, which supports adaptive elementwise updating and layerwise learning\nrates. Furthermore, LAMB is a general purpose optimizer that works for both small and large batches. We also provided theoretical analysis for the LAMB optimizer, highlighting the cases where it performs better than standard SGD. LAMB achieves a better performance than existing optimizers for a wide range of applications. By using LAMB, we are able to scale the batch size of BERT pre-training to 64K without losing accuracy, thereby, reducing the BERT training time from 3 days to around 76 minutes. LAMB is also the first large batch adaptive solver that can achieve state-of-the-art accuracy on ImageNet training with RESNET-50." }, { "heading": "6 ACKNOWLEDGEMENT", "text": "We want to thank the comments from George Dahl and Jeff Dean. 
We want to thank Michael Banfield, Dehao Chen, Youlong Cheng, Sameer Kumar, and Zak Stone for TPU Pod support." }, { "heading": "A PROOF OF THEOREM 2", "text": "Proof. We analyze the convergence of LARS for general minibatch size here. Recall that the update of LARS is the following\nx (i) t+1 = x (i) t − ηtφ(‖x (i) t ‖)\ng (i) t\n‖g(i)t ‖ ,\nfor all i ∈ [h]. Since the function f is L-smooth, we have the following:\nf(xt+1) ≤ f(xt) + 〈∇if(xt), x(i)t+1 − x (i) t 〉+ h∑ i=1 Li 2 ‖x(i)t+1 − x (i) t ‖2\n= f(xt)− ηt h∑ i=1 di∑ j=1 φ(‖x(i)t ‖)×\n( [∇if(xt)]j × g (i) t,j\n‖g(i)t ‖\n) +\nh∑ i=1 Liη 2 t φ 2(‖x(i)t ‖) 2\n≤ f(xt)− ηt h∑ i=1 di∑ j=1 φ(‖x(i)t ‖)×\n( [∇if(xt)]j × ( g (i) t,j\n‖g(i)t ‖ − [∇if(xt)]j ‖∇if(xt)‖ + [∇if(xt)]j ‖∇if(xt)‖\n)) + η2tα 2 u\n2 ‖L‖1\n= f(xt)− ηt h∑ i=1 φ(‖x(i)t ‖)× ‖∇if(xt)‖\n− ηt h∑ i=1 di∑ j=1 φ(‖x(i)t ‖)×\n( [∇if(xt)]j × ( g (i) t,j\n‖g(i)t ‖ − [∇if(xt)]j ‖∇if(xt)‖\n)) + η2tα 2 u\n2 ‖L‖1\n(4)\nThe first inequality follows from the lipschitz continuous nature of the gradient. Let ∆(i)t = g (i) t − ∇if(xt). Then the above inequality can be rewritten in the following manner: f(xt+1) ≤ f(xt)− ηt h∑ i=1 φ(‖x(i)t ‖)‖∇if(xt)‖\n− ηt h∑ i=1 di∑ j=1 φ(‖x(i)t ‖)×\n( [∇if(xt)]j × ( (∆ (i) t,j + [∇if(xt)]j)\n‖∆(i)t +∇if(xt)‖ − [∇if(xt)]j ‖∇if(xt)‖\n)) + η2tα 2 u\n2 ‖L‖1\n= f(xt)− ηt h∑ i=1 φ(‖x(i)t ‖)‖∇if(xt)‖\n− ηt h∑ i=1 φ(‖x(i)t ‖)× ( 〈∆(i)t +∇if(xt),∇if(xt)〉 ‖∆(i)t +∇if(xt)‖ − ‖∇if(xt)‖ ) + η2tα 2 u 2 ‖L‖1\n= f(xt)− ηt h∑ i=1 φ(‖x(i)t ‖)‖∇if(xt)‖\n+ ηt h∑ i=1 φ(‖x(i)t ‖)×\n( ‖∇if(xt)‖‖∆(i)t +∇if(xt)‖ − 〈∆ (i) t +∇if(xt),∇if(xt)〉\n‖∆(i)t +∇if(xt)‖\n) + η2tα 2 u\n2 ‖L‖1\n= f(xt)− ηt h∑ i=1 φ(‖x(i)t ‖)‖∇if(xt)‖+ η2tα 2 u 2 ‖L‖1\n+ ηt h∑ i=1 φ(‖x(i)t ‖)×\n( ‖∇if(xt)‖‖∆(i)t +∇if(xt)‖ − ‖∆ (i) t +∇if(xt)‖2 + 〈∆ (i) t ,∆ (i) t +∇if(xt)〉\n‖∆(i)t +∇if(xt)‖\n) .\n(5)\nUsing Cauchy-Schwarz inequality in the above inequality, we have: f(xt+1) ≤ f(xt)− ηt h∑ i=1 φ(‖x(i)t ‖)‖∇if(xt)‖\n+ ηt h∑ i=1 φ(‖x(i)t ‖)× ( ‖∇if(xt)‖ − ‖∆(i)t +∇if(xt)‖+ ‖∆ (i) t ‖ ) + η2tα 2 u 2 ‖L‖1\n≤ f(xt)− ηt h∑ i=1 φ(‖x(i)t ‖)‖∇if(xt)‖+ 2ηt h∑ i=1 φ(‖x(i)t ‖)× ‖∆ (i) t ‖+ η2tα 2 u 2 ‖L‖1\nTaking expectation, we obtain the following: E[f(xt+1)] ≤ f(xt)− ηt h∑ i=1 φ(‖x(i)t ‖)‖∇if(xt)‖+ 2ηt h∑ i=1 φ(‖x(i)t ‖)× E[‖∆ (i) t ‖] + η2tα 2 u 2 ‖L‖1\n≤ f(xt)− ηtαl h∑ i=1 ‖∇if(xt)‖+ 2ηtαu ‖σ‖1√ b + η2tα 2 u 2 ‖L‖1. (6)\nSumming the above inequality for t = 1 to T and using telescoping sum, we have the following inequality:\nE[f(xT+1)] ≤ f(x1)− ηαl T∑ t=1 h∑ i=1 E[‖∇if(xt)‖] + 2ηT αu‖σ‖1√ b + η2α2uT 2 ‖L‖1.\nRearranging the terms of the above inequality, and dividing by ηTαl, we have:\n1\nT T∑ t=1 h∑ i=1 E[‖∇if(xt)‖] ≤ f(x1)− E[f(xT+1)] Tηαl + 2αu‖σ‖1√ bαl + ηα2u 2αl ‖L‖1\n≤ f(x1)− f(x ∗)\nTηαl + 2αu‖σ‖1 αl √ b + ηα2u 2αl ‖L‖1." }, { "heading": "B PROOF OF THEOREM 3", "text": "Proof. We analyze the convergence of LAMB for general minibatch size here. Recall that the update of LAMB is the following\nx (i) t+1 = x (i) t − ηtφ(‖x (i) t ‖)\nr (i) t\n‖r(i)t ‖ ,\nfor all i ∈ [h]. For simplicity of notation, we reason the Since the function f is L-smooth, we have the following:\nf(xt+1) ≤ f(xt) + 〈∇if(xt), x(i)t+1 − x (i) t 〉+ h∑ i=1 Li 2 ‖x(i)t+1 − x (i) t ‖2\n= f(xt)−ηt h∑ i=1 di∑ j=1 φ(‖x(i)t ‖)×\n( [∇if(xt)]j × r (i) t,j\n‖r(i)t ‖ ) ︸ ︷︷ ︸\nT1\n+ h∑ i=1 Liα 2 uη 2 t 2 (7)\nThe above inequality simply follows from the lipschitz continuous nature of the gradient. 
We bound term T1 in the following manner:\nT1 ≤ −ηt h∑ i=1 di∑ j=1 φ(‖x(i)t ‖)×\n( [∇if(xt)]j × r (i) t,j\n‖r(i)t ‖\n)\n≤ −ηt h∑ i=1 di∑ j=1 √ 1− β2 G2di ( φ(‖x(i)t ‖)× [∇if(xt)]j × g (i) t,j )\n− ηt h∑ i=1 di∑ j=1 ( φ(‖x(i)t ‖)× [∇if(xt)]j × r (i) t,j ‖r(i)t ‖ ) 1(sign(∇if(xt)]j) 6= sign(r(i)t,j))\n(8) This follows from the fact that ‖r(i)t ‖ ≤ √ di 1−β2 and √ vt ≤ G. If β2 = 0, then T1 can be bounded as follows:\nT1 ≤ −ηt h∑ i=1 di∑ j=1\n√ 1\ndi\n( φ(‖x(i)t ‖)× |[∇if(xt)]j | )\n− ηt h∑ i=1 di∑ j=1 ( φ(‖x(i)t ‖)× [∇if(xt)]j × r (i) t,j ‖r(i)t ‖ ) 1(sign(∇if(xt)]j) 6= sign(r(i)t,j))\nThe rest of the proof for β2 = 0 is similar to argument for the case β2 > 0, which is shown below. Taking expectation, we have the following: E[T1] ≤ −ηt h∑ i=1 di∑ j=1 √ 1− β2 G2di E [ φ(‖x(i)t ‖)× ( [∇if(xt)]j × g(i)t,j )]\n− ηt h∑ i=1 di∑ j=1 E [ φ(‖x(i)t ‖)× ( [∇if(xt)]j × r (i) t,j ‖r(i)t ‖ ) 1(sign(∇if(xt)]j) 6= sign(g(i)t,j )) ]\n≤ −ηt h∑ i=1 di∑ j=1 √ 1− β2 G2di E [( φ(‖x(i)t ‖)× [∇if(xt)]j × g (i) t,j )]\n+ ηt h∑ i=1 di∑ j=1 E [ αu|[∇if(xt)]j |1(sign(∇if(xt)]j) 6= sign(g(i)t,j )) ]\n≤ −ηt h∑ i=1 di∑ j=1 √ 1− β2 G2di E [ φ(‖x(i)t ‖)× ( [∇if(xt)]j × g(i)t,j )]\n+ ηt h∑ i=1 di∑ j=1 αu|[∇if(xt)]j |P(sign(∇if(xt)]j) 6= sign(g(i)t,j ))\nUsing the bound on the probability that the signs differ, we get:\nE[T1] ≤ −ηtαl √ h(1− β2) G2d ‖∇f(xt)‖2 + ηtαu h∑ i=1 di∑ j=1 σi,j√ b .\nSubstituting the above bound on T1 in equation 7, we have the following bound:\nE[f(xt+1)] ≤ f(xt)− ηtαl √ h(1− β2) G2d ‖∇f(xt)‖2 + ηtαu ‖σ̃‖1√ b + η2tα 2 u‖L‖1 2\n(9)\nSumming the above inequality for t = 1 to T and using telescoping sum, we have the following inequality:\nE[f(xT+1)] ≤ f(x1)− ηtαl √ h(1− β2) G2d T∑ t=1 E[‖∇f(xt)‖2] + ηTαu ‖σ̃‖1√ b + η2α2uT 2 ‖L‖1.\nAlgorithm 3 N-LAMB Input: x1 ∈ Rd, learning rate {ηt}Tt=1, parameters 0 < β1, β2 < 1, scaling function φ, > 0, parameters 0 < {βt1}Tt=1 < 1 Set m0 = 0, v0 = 0 for t = 1 to T do\nDraw b samples St from P. Compute gt = 1|St| ∑ st∈St ∇`(xt, st). mt = β1mt−1 + (1− β1)gt m̂ = βt+11 mt\n1−Πt+1i=1β i 1\n+ (1−βt1)gt\n1−Πti=1β i 1\nvt = β2vt−1 + (1− β2)g2t v̂ = β2vt\n1−βt2 Compute ratio rt = m̂√\nv̂+\nx (i) t+1 = x (i) t − ηt\nφ(‖x(i)t ‖)\n‖r(i)t +λx (i) t ‖\n(r (i) t + λxt)\nend for\nAlgorithm 4 NN-LAMB Input: x1 ∈ Rd, learning rate {ηt}Tt=1, parameters 0 < β1, β2 < 1, scaling function φ, > 0, parameters 0 < {βt1}Tt=1 < 1 Set m0 = 0, v0 = 0 for t = 1 to T do\nDraw b samples St from P. Compute gt = 1|St| ∑ st∈St ∇`(xt, st). mt = β1mt−1 + (1− β1)gt m̂ = βt+11 mt\n1−Πt+1i=1β i 1\n+ (1−βt1)gt\n1−Πti=1β i 1\nvt = β2vt−1 + (1− β2)g2t v̂ = βt+12 vt\n1−Πt+1i=1β i 2\n+ (1−βt2)g 2 t\n1−Πti=1β i 2\nCompute ratio rt = m̂√ v̂+\nx (i) t+1 = x (i) t − ηt\nφ(‖x(i)t ‖)\n‖r(i)t +λx (i) t ‖\n(r (i) t + λxt)\nend for\nRearranging the terms of the above inequality, and dividing by ηTαl, we have:√ h(1− β2) G2d 1 T T∑ t=1 E[‖∇f(xt)‖2] ≤ f(x1)− E[f(xT+1)] Tηαl + αu‖σ̃‖1 αl √ b + η 2 ‖L‖1\n≤ f(x1)− f(x ∗) Tηαl + αu‖σ̃‖1 αl √ b + ηα2u 2αl ‖L‖1." }, { "heading": "C COMPARISON OF CONVERGENCE RATES OF LARS AND SGD", "text": "Inspired by the comparison used by (Bernstein et al., 2018) for comparing SIGN SGD with SGD, we define the following quantities:(\nh∑ i=1 ‖∇if(xt)‖\n)2 = ψ(∇f(xt))d‖∇f(xt)‖2\nh ≥ ψgd‖∇f(xt)‖ 2 h\n‖L‖21 ≤ ψLd 2‖L‖2∞ h2 ‖σ‖21 = ψσd‖σ‖2\nh .\nThen LARS convergence rate can be written in the following manner: (E[‖∇f(xa)‖)2 ≤ O (\n(f(x1)− f(x∗))L∞ T ψL ψ2g + ‖σ‖2 T ψ2σ ψ2g\n) .\nIf ψL ψ2g and ψσ ψ2g then LARS (i.e., gradient is more denser than curvature or stochasticity), we gain over SGD. 
Otherwise, SGD’s upper bound on convergence rate is better." }, { "heading": "D N-LAMB: NESTEROV MOMENTUM FOR LAMB", "text": "Sutskever et al. (2013) report that Nesterov’s accelerated gradient (NAG) proposed by Nesterov (1983) is conceptually and empirically better than the regular momentum method for convex, non-stochastic objectives. Dozat (2016) incorporated Nesterov’s momentum into Adam optimizer and proposed the Nadam optimizer. Specifically, only the first moment of Adam was modified and the second moment of Adam was unchanged. The results on several applications (Word2Vec, Image Recognition,\nand LSTM Language Model) showed that Nadam optimizer improves the speed of convergence and the quality of the learned models. We also tried using Nesterov’s momentum to replace the regular momentum of LAMB optimizer’s first moment. In this way, we got a new algorithm named as N-LAMB (Nesterov LAMB). The complete algorithm is in Algorithm 3. We can also Nesterov’s momentum to replace the regular momentum of LAMB optimizer’s second moment. We refer to this algorithm as NN-LAMB (Nesterov’s momentum for both the first moment and the second moment). The details of NN-LAMB were shown in Algorithm 4.\nDozat (2016) suggested the best performance of Nadam was achieved by β1 = 0.975, β2 = 0.999, and = 1e-8. We used the same settings for N-LAMB and NN-LAMB. We scaled the batch size to 32K for ImageNet training with ResNet-50. Our experimental results show that N-LAMB and NN-LAMB can achieve a comparable accuracy compared to LAMB optimizer. Their performances are much better than momentum solver (Figure 1)." }, { "heading": "E LAMB WITH LEARNING RATE CORRECTION", "text": "There are two operations at each iteration in original Adam optimizer (let us call it adam-correction):\nmt = mt/(1− βt1)\nvt = vt/(1− βt2) It has an impact on the learning rate by ηt := ηt∗ √\n(1− βt2)/(1− βt1). According to our experimental results, adam-correction essentially has the same effect as learning rate warmup (see Figure 2). The warmup function often was implemented in the modern deep learning system. Thus, we can remove adam-correction from the LAMB optimizer. We did not observe any drop in the test or validation accuracy for BERT and ImageNet training." }, { "heading": "F LAMB WITH DIFFERENT NORMS", "text": "We need to compute the matrix/tensor norm for each layer when we do the parameter updating in the LAMB optimizer. We tried different norms in LAMB optimizer. However, we did not observe a significant difference in the validation accuracy of ImageNet training with ResNet-50. In our experiments, the difference in validation accuracy is less than 0.1 percent (Figure 3). We use L2 norm as the default." }, { "heading": "G REGULAR BATCH SIZES FOR SMALL DATASETS: MNIST AND CIFAR-10.", "text": "According to DAWNBench, DavidNet (a custom 9-layer Residual ConvNet) is the fastest model for CIFAR-10 dataset (as of April 1st, 2019)5. The baseline uses the momentum SGD optimizer. Table 6 and Figure 4 show the test accuracy of CIFAR-10 training with DavidNet. The PyTorch implementation (momentum SGD optimizer) on GPUs was reported on Standford DAWNBench’s website, which achieves 94.06% in 24 epochs. The Tensorflow implementation (momentum SGD optimizer) on TPU achieves a 93.72% accuracy in 24 epochs6. We use the implementation of TensorFlow on TPUs. LAMB optimizer is able to achieve 94.08% test accuracy in 24 epochs, which is better than other adaptive optimizers and momentum SGD. 
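For reference when reading these results, the LAMB step in Algorithm 2 can be sketched as follows; this is a minimal NumPy sketch under the default settings above (β1 = 0.9, β2 = 0.999, ε = 1e-6, λ = 0.01, clip-style φ), not the authors' TensorFlow implementation.

```python
# A minimal sketch of one LAMB step (Algorithm 2) for a single layer.
# Assumptions: bias-corrected Adam moments, weight decay folded into the
# update, and phi(z) = min(max(z, gamma_l), gamma_u); the bounds gamma_l
# and gamma_u are illustrative values, and all names are ours.
import numpy as np

def lamb_step(x, g, m, v, t, eta, beta1=0.9, beta2=0.999,
              eps=1e-6, lam=0.01, gamma_l=1e-3, gamma_u=10.0):
    m = beta1 * m + (1 - beta1) * g          # first moment
    v = beta2 * v + (1 - beta2) * g ** 2     # second moment
    m_hat = m / (1 - beta1 ** t)             # bias correction
    v_hat = v / (1 - beta2 ** t)
    r = m_hat / (np.sqrt(v_hat) + eps)       # Adam-style update direction
    update = r + lam * x                     # add the weight decay term
    trust = np.clip(np.linalg.norm(x), gamma_l, gamma_u)  # phi(||x||)
    x = x - eta * trust / (np.linalg.norm(update) + 1e-12) * update
    return x, m, v
```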
Even on smaller tasks like MNIST training with LeNet, LAMB achieves a better accuracy than the existing solvers (Table 7).\nH IMPLEMENTATION DETAILS AND ADDITIONAL RESULTS\nThere are several hyper-parameters in the LAMB optimizer. Although users do not need to tune them, we explain them here to help users gain a better understanding. $\beta_1$ is used for decaying the running average of the gradient. $\beta_2$ is used for decaying the running average of the square of the gradient. The default setting for the other parameters: weight decay rate $\lambda$=0.01, $\beta_1$=0.9, $\beta_2$=0.999, $\epsilon$=1e-6. We did not tune $\beta_1$ and $\beta_2$. However, our experiments show that tuning them may yield a higher accuracy.\n5https://dawn.cs.stanford.edu/benchmark/CIFAR10/train.html 6https://github.com/fenwickslab/dl_tutorials/blob/master/tutorial3_cifar10_davidnet_fix.ipynb\nBased on our experience, the learning rate is the most important hyper-parameter affecting learning efficiency and final accuracy. Bengio (2012) suggests that it is often the single most important hyper-parameter and that it should always be tuned. Thus, to make sure we have a solid baseline, we carefully tuned the learning rates of ADAM, ADAMW, ADAGRAD, and momentum SGD.\nIn our experiments, we found that the validation loss is not reliable for large-batch training. A lower validation loss does not necessarily lead to a higher validation accuracy (Figure 5). Thus, we use the test/val accuracy or the F1 score on the dev set to evaluate the optimizers.\nH.0.1 BERT\nTable 8 shows the tuning information from BERT training with the ADAMW optimizer, where we report the best F1 score observed in our experiments. ADAMW stops scaling at a batch size of 16K. The target F1 score is 90.5; LAMB achieves an F1 score of 91.345.\nThe loss curves of BERT training by LAMB for different batch sizes are shown in Figure 6. We observe that the loss curves are almost identical to each other, which means our optimizer scales well with the batch size.\nThe training loss curve of BERT mixed-batch pre-training with LAMB is shown in Figure 7. This figure shows that LAMB can make the training converge smoothly at a batch size of 64K.\nFigure 8 shows that we can achieve 76.8% scaling efficiency by scaling the batch size (49.1 times speedup with 64 times the computational resources) and 101.8% scaling efficiency with mixed-batch training (65.2 times speedup with 64 times the computational resources).\nH.0.2 IMAGENET\nFigures 9 - 14 show the LAMB trust ratio at different iterations for ImageNet training with ResNet-50. From these figures we can see that these ratios are very different from each other for different layers. LAMB uses the trust ratio to help the slow learners train faster.\nTable 7: Test accuracy for MNIST training with LeNet (30 epochs, batch size = 1024). The tuning space of the learning rate for all the optimizers is {0.0001, 0.001, 0.01, 0.1}. We use the same learning rate warmup and decay schedule for all of them.\nOptimizer Momentum AdaGrad ADAM ADAMW LAMB\nAverage accuracy over 5 runs 0.9933 0.9928 0.9936 0.9941 0.9945\nFigure 5: Our experiments show that the validation loss is not reliable in large-scale training. A lower validation loss may lead to a worse accuracy. Thus, we use the test/val accuracy or the F1 score on the dev set to evaluate the optimizers.\nH.1 BASELINE TUNING DETAILS FOR IMAGENET TRAINING WITH RESNET-50\nIf you are not interested in the baseline tuning details, please skip this section.\nGoyal et al. 
(2017) suggested that a proper learning rate warmup and decay scheme may help improve the ImageNet classification accuracy. We included these techniques in the Adam/AdamW/AdaGrad tuning. Specifically, we use the learning rate recipe of Goyal et al. (2017): (1) a 5-epoch warmup to stabilize the initial stage; and (2) multiplying the learning rate by 0.1 at the 30th, 60th, and 80th epochs (a short code sketch of this schedule is given at the very end of this appendix). The target accuracy is around 76.3% (Goyal et al., 2017). These techniques help to improve the accuracy of Adam/AdamW/AdaGrad to around 73%. However, even with these techniques, Adam/AdamW/AdaGrad still cannot achieve the target validation accuracy.\nTo make sure our baseline is solid, we carefully tuned the hyper-parameters. Table 9 shows the tuning information of standard AdaGrad. Table 10 shows the tuning information of adding the learning rate scheme of Goyal et al. (2017) to standard AdaGrad. Table 11 shows the tuning information of standard Adam. Table 12 shows the tuning information of adding the learning rate scheme of Goyal et al. (2017) to standard Adam. It is tricky to tune the AdamW optimizer since both L2 regularization and weight decay affect the performance. Thus we have four tuning sets.\nThe first tuning set is based on AdamW with default L2 regularization. We tune the learning rate and weight decay. The tuning information is in Figures 13, 14, 15, and 16.\nThe second tuning set is based on AdamW with disabled L2 regularization. We tune the learning rate and weight decay. The tuning information is in Figures 17, 18, 19, and 20.\nThen we add the learning rate scheme of Goyal et al. (2017) to AdamW and refer to it as AdamW+.\nThe third tuning set is based on AdamW+ with default L2 regularization. We tune the learning rate and weight decay. The tuning information is in Figures 21 and 22.\nThe fourth tuning set is based on AdamW+ with disabled L2 regularization. We tune the learning rate and weight decay. The tuning information is in Figures 23, 24, and 25." } ]
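The learning-rate recipe quoted above is easy to state in code. The sketch below is our reading of the Goyal et al. (2017) schedule as used in this tuning (a 5-epoch linear warmup followed by x0.1 decays at epochs 30, 60, and 80); the function name and the linear shape of the warmup are illustrative assumptions.

```python
def goyal_lr(epoch, base_lr, warmup_epochs=5, milestones=(30, 60, 80)):
    """Learning rate at a (possibly fractional) epoch: linear warmup to
    base_lr over the first warmup_epochs, then a 10x decay at each
    milestone epoch."""
    if epoch < warmup_epochs:
        return base_lr * epoch / warmup_epochs   # (1) warmup phase
    decay_steps = sum(1 for m in milestones if epoch >= m)
    return base_lr * (0.1 ** decay_steps)        # (2) step decay phase
```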
2,020
null
SP:4f8f34e95732b3f87b23878289062d359cda110f
[ "The authors propose PARCUS (\"Pattern Representations on Continuous Spaces\"), a model which computes a soft-matching probability for all words in an input sequence with so-called prototypes in order to predict a label for the input. Furthermore, for training, PARCUS makes use of rationales. Those are indicators of input importance, and help to boost the loss for relevant tokens.", "This paper considers the problem of text classification, especially the settings in which the number of labeled sentences is very small. However, authors assume, annotations of rationales behind the label, i.e. highlighting tokens in a sentence which are important in deciding its label. As per my understanding, this is a big limitation. Second, the proposed model makes inference of class labels just based upon occurrence of words in a sentence, rather than making more sophisticated inferences relying upon sub-sequence patterns at least. " ]
We propose a model to tackle classification tasks in the presence of very little training data. To this aim, we introduce a novel matching mechanism that focuses on elements of the input by using vectors that represent semantically meaningful concepts for the task at hand. By leveraging highlighted portions of the training data, a simple yet effective error boosting technique guides the learning process. In practice, it increases the error associated with relevant parts of the input by a given factor. Results on text classification tasks confirm the benefits of the proposed approach in both balanced and unbalanced cases, thus being of practical use when labeling new examples is expensive. In addition, the model is interpretable, as it allows for human inspection of the learned weights.
[]
[ { "authors": [ "Yujia Bao", "Shiyu Chang", "Mo Yu", "Regina Barzilay" ], "title": "Deriving machine attention from human rationales", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Maria Barrett", "Joachim Bingel", "Nora Hollenstein", "Marek Rei", "Anders Søgaard" ], "title": "Sequence classification with human attention", "venue": "In Proceedings of the 22nd Conference on Computational Natural Language Learning,", "year": 2018 }, { "authors": [ "Joost Bastings", "Wilker Aziz", "Ivan Titov" ], "title": "Interpretable neural predictions with differentiable binary variables", "venue": "In Proceedings of the 57th Conference of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Piotr Bojanowski", "Edouard Grave", "Armand Joulin", "Tomas Mikolov" ], "title": "Enriching word vectors with subword information", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2017 }, { "authors": [ "Carlos Castillo", "Debora Donato", "Aristides Gionis", "Vanessa Murdock", "Fabrizio Silvestri" ], "title": "Know your neighbors: Web spam detection using the web topology", "venue": "In Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval,", "year": 2007 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Gregory Druck", "Gideon Mann", "Andrew McCallum" ], "title": "Reducing annotation effort using generalized expectation criteria", "venue": "Technical report,", "year": 2007 }, { "authors": [ "Yoav Freund", "Robert Schapire", "Naoki Abe" ], "title": "A short introduction to boosting", "venue": "Journal-Japanese Society For Artificial Intelligence,", "year": 1999 }, { "authors": [ "Riccardo Guidotti", "Anna Monreale", "Salvatore Ruggieri", "Franco Turini", "Fosca Giannotti", "Dino Pedreschi" ], "title": "A survey of methods for explaining black box models", "venue": "ACM computing surveys (CSUR),", "year": 2019 }, { "authors": [ "Braden Hancock", "Paroma Varma", "Stephanie Wang", "Martin Bringmann", "Percy Liang", "Christopher Ré" ], "title": "Training classifiers with natural language explanations", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Jeremy Howard", "Sebastian Ruder" ], "title": "Universal language model fine-tuning for text classification", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2018 }, { "authors": [ "Zhiting Hu", "Xuezhe Ma", "Zhengzhong Liu", "Eduard Hovy", "Eric Xing" ], "title": "Harnessing deep neural networks with logic rules", "venue": "In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2016 }, { "authors": [ "Zhiting Hu", "Zichao Yang", "Ruslan Salakhutdinov", "Eric Xing" ], "title": "Deep neural networks with massive learned knowledge", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Nal 
Kalchbrenner", "Edward Grefenstette", "Phil Blunsom" ], "title": "A convolutional neural network for modelling sentences", "venue": "In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2014 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Tao Lei", "Regina Barzilay", "Tommi Jaakkola" ], "title": "Rationalizing neural predictions", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Tao Li", "Vivek Srikumar" ], "title": "Augmenting neural networks with first-order logic", "venue": "arXiv preprint arXiv:1906.06298,", "year": 2019 }, { "authors": [ "Bingfeng Luo", "Yansong Feng", "Zheng Wang", "Songfang Huang", "Rui Yan", "Dongyan Zhao" ], "title": "Marrying up regular expressions with neural networks: A case study for spoken language understanding", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "year": 2018 }, { "authors": [ "Andrew McCallum", "Gideon Mann", "Gregory Druck" ], "title": "Generalized expectation criteria", "venue": "Computer science technical note,", "year": 2007 }, { "authors": [ "Pushkar Mishra", "Helen Yannakoudakis", "Ekaterina Shutova" ], "title": "Neural character-based composition models for abuse detection", "venue": "In Proceedings of the 2nd Workshop on Abusive Language Online (ALW2),", "year": 2018 }, { "authors": [ "Chikashi Nobata", "Joel Tetreault", "Achint Thomas", "Yashar Mehdad", "Yi Chang" ], "title": "Abusive language detection in online user content", "venue": "In Proceedings of the 25th international conference on world wide web,", "year": 2016 }, { "authors": [ "Ji Ho Park", "Pascale Fung" ], "title": "One-step and two-step classification for abusive language detection on twitter", "venue": "In Proceedings of the First Workshop on Abusive Language Online,", "year": 2017 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": null, "year": 2017 }, { "authors": [ "Alexander J Ratner", "Christopher M De Sa", "Sen Wu", "Daniel Selsam", "Christopher Ré" ], "title": "Data programming: Creating large training sets, quickly", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Mike Schuster", "Kuldip K Paliwal" ], "title": "Bidirectional recurrent neural networks", "venue": "IEEE Transactions on Signal Processing,", "year": 1997 }, { "authors": [ "Roy Schwartz", "Sam Thomson", "Noah A. Smith" ], "title": "Bridging cnns, rnns, and weighted finite-state machines", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Imran Sheikh", "Irina Illina", "Dominique Fohr", "Georges Linares" ], "title": "Learning word importance with the neural bag-of-words model. 
In ACL, Representation Learning for NLP (Repl4NLP) workshop, 2016", "venue": null, "year": 2016 }, { "authors": [ "Vlamimir Vapnik" ], "title": "Statistical learning theory wiley", "venue": "New York, pp", "year": 1998 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Zeerak Waseem", "Dirk Hovy" ], "title": "Hateful symbols or hateful people? predictive features for hate speech detection on twitter", "venue": "In Proceedings of the NAACL student research workshop,", "year": 2016 }, { "authors": [ "Omar Zaidan", "Jason Eisner", "Christine Piatko" ], "title": "Using annotator rationales to improve machine learning for text categorization. In Human language technologies 2007: The conference of the North American chapter of the association for computational linguistics", "venue": "proceedings of the main conference,", "year": 2007 }, { "authors": [ "Omar F Zaidan", "Jason Eisner" ], "title": "Modeling annotators: A generative approach to learning from annotator rationales", "venue": "In Proceedings of the Conference on Empirical Methods in Natural Language Processing,", "year": 2008 }, { "authors": [ "Hui Zou", "Trevor Hastie" ], "title": "Regularization and variable selection via the elastic net", "venue": "Journal of the royal statistical society: series B (statistical methodology),", "year": 2005 } ]
[ { "heading": "1 INTRODUCTION", "text": "Gathering and labeling data is a task that can be expensive in terms of time, human effort and resources. When we cannot rely on already available datasets, training a model with acceptable performance on few data points annotated by few annotators, becomes critical in many practical applications. This is, indeed, especially important when the data is naturally imbalanced and the demands of gathering samples of the minority class are high. One important domain in which these issues arise is text classification, for example hate-speech (Waseem & Hovy, 2016), web spam (Castillo et al., 2007) and abuse detection (Mishra et al., 2018).\nOne effective approach to overcome the lack of training data is that of Zaidan et al. (2007), which consists of augmenting the few data available with rationales, i.e., highlighted portions of the input. Rationales are usually coupled with feature-engineering to be effective in low resource scenarios. An alternative way to deal with data sparsity, especially in text classification tasks, is to use pretrained language models (LMs) that are fine-tuned on a target domain. While this approach has been tested on hundreds of training points (Devlin et al., 2018; Howard & Ruder, 2018), it is not clear how it behaves in an even scarcer setting, as the vast parameter space of an LM might pose a problem. Moreover, fine-tuning a model may require a considerable amount of computing power, therefore restricting its applicability. On the other hand, some embedding-based models represent the input as a weighted average of words (Kalchbrenner et al., 2014; Sheikh et al., 2016), where the weight is given by a parameter called “reference vector”. However, these models cannot easily incorporate multiple reference vectors, and they are not interpretable since classification works on unreadable embedding features.\nIn this paper, we propose a novel and efficient representation learning model to address the above issues. The underlying idea is to focus on relevant words in the input while being able to generalise to semantically similar concepts; this is something akin to what a human would do in the presence of data scarcity. We therefore introduce two techniques that should coexist to reflect our intuition. First of all, the model to focuses on specific words by computing soft matching probabilities between each word and multiple vectors which represent semantic concepts. Secondly, we guide the learning process to learn important concepts thanks to an error boosting technique that exploits rationales. Basically, it encourages the model to reduce the overall error by improving the prediction associated with highlighted words. Additionally, by direct inspection of model weights it is possible to understand what words it focuses on; in short, the model is interpretable1. Results across a consistent number of baselines and three\n1See (Guidotti et al., 2019) for the notion of interpretability we use throughout the paper.\ndatasets also indicate a significant improvement in performance. Interestingly, we always outperform fine-tuned models when little training data is available. Our model can also assist users to train a classifier for a very specific task. As an example, consider training an abstract filtering system with rationales provided by the user itself. 
The model will then learn to filter out papers that do not match the user's preferences.\nThe rest of the paper is structured as follows: Section 2 provides an overview of the existing literature, highlighting similar and different ideas; Section 3 formally introduces the problem as well as our model, providing intuition behind our architectural choices; Section 4 details our experiments and shows our findings, with a thorough ablation study that disentangles the contribution of each part of the model and a use case on interpretability; finally, Section 5 summarizes our work." }, { "heading": "2 RELATED WORKS", "text": "There are different ways in which rationales can be used. Some works generate rationales, while others exploit them to inform the learning process. The method proposed by Lei et al. (2016) tackles text classification by learning a distribution of rationales given the text and a distribution of the target class given the rationales. Interestingly, an additional regularization term is added to the loss to produce rationales that are short and coherent. The model makes use of high-capacity recurrent neural networks (Schuster & Paliwal, 1997), thus it is tested on large amounts of training data to prevent overfitting. This work was later refined by Bastings et al. (2019), who proposed a probabilistic version of a similar architecture, where a latent model is responsible for the generation of discrete rationales. The main advantage of predicting discrete rationales is that it is possible to constrain their maximum number per sample, thus effectively controlling sparsity. However, it usually requires a large number of data points to be effective. The first to exploit rationales in a low resource scenario were Zaidan et al. (2007) and Zaidan & Eisner (2008), by means of a rationale-constrained SVM (Cortes & Vapnik, 1995) and a probabilistic model. Moreover, the latter is realized as a log-linear classifier with heavy use of feature-engineering. On the other hand, when rationales are defined on features rather than on samples, one can use the Generalized Expectation (GE) criteria (Druck et al., 2007; McCallum et al., 2007) to significantly improve the performance of classifiers. Rationales can also be incorporated in the loss function (Barrett et al., 2018), where the attention module (Vaswani et al., 2017) on top of an LSTM (Hochreiter & Schmidhuber, 1997) is forced to attend to relevant tokens in a document. This method was not tested on small datasets, possibly because of the aforementioned issues of high-capacity models. A similar approach has been successfully applied by Bao et al. (2018) to the weak supervision problem. However, the model assumes one source domain, with supervised labels, to learn an attention generation module that is then applied to the target domain. In contrast, our method can be built on a given embedding space with minimal supervision. Apart from incorporating prior knowledge in the form of rationales, one can augment neural networks with: first-order logic (Li & Srikumar, 2019; Hu et al., 2016a); a corpus of regular expressions (Luo et al., 2018); or massive linguistic constraints (Hu et al., 2016b). While generally powerful and effective, all these methods require domain-specific expertise to define the additional features and constraints that are then explicitly incorporated into the network. In a different manner, the SoPA architecture of Schwartz et al. 
(2018) learns to match surface patterns on text through a differentiable version of finite state machines. A weighted combination of these patterns is used to classify a document. Instead, BabbleLabble (BL) (Hancock et al., 2018) is a method for generating weak classifiers from natural language explanations when supervision is scarce. These are then fed to Data Programming (DP) (Ratner et al., 2016), a probabilistic framework, which outputs a final score. On one hand, BL works well because it exploits a domain-specific grammar to parse explanations; on the other hand, this grammar must be carefully designed by domain experts. Finally, the Neural Bag Of Words (NBOW) model (Kalchbrenner et al., 2014) takes an average of token embeddings and applies a logistic regression to classify a document. Its extension, NBOW2 (Sheikh et al., 2016), computes an importance score for each word by comparing it with a single reference vector that is learned. Despite the underlying idea being similar, we propose a different mechanism to focus on relevant words.\nIn the following, we describe the architecture, whose only requirements are i) an embedding space of the input, and ii) additional rationales, though the latter is not strictly required. As we shall discuss, the model has a strong inductive bias, which is effective when trained with very little data. Hereinafter, we refer to our new architecture with the name PARCUS, which stands for Pattern Representations on Continuous Spaces." }, { "heading": "3 THE PARCUS MODEL", "text": "Let us consider a classification task in which a very small labelled dataset $D_L = \{(x_1, r_1, y_1), \ldots, (x_L, r_L, y_L)\}$ is given, where $x_i$ is an input sample, $r_i$ represents the rationale information and $y_i$ is the discrete target label. For the purpose of this paper, an input is a set of tokens $x_i = \{x_i^1, \ldots, x_i^{T_i}\}$ of arbitrary size $T_i$. In addition, $x_i^j \in \mathbb{R}^d$, where $d$ is the size of an embedding space obtained using a pretrained model. Finally, we assume that each token in the input has been marked as relevant or not by annotators, i.e., $r_i = \{r_i^1, \ldots, r_i^{T_i}\} \in \{0, 1\}^{T_i}$.\nIntuition. When humans are asked to solve a text classification problem after seeing few examples, they tend to look for very simple patterns across the dataset, such as specific words. Nevertheless, humans are also able to generalise to semantically similar concepts; our goal is to design a model that reflects this ability. For example, assume that the word "excellent" is important for classifying a movie review as positive. If we were to work in the character space, a straightforward solution would be to match specific (sub-)strings in the input, an instance of the so-called pattern matching technique. Clearly, pattern matching cannot generalize to words that have similar meaning, e.g., "outstanding". In this work, we transfer the concept of pattern matching into the embedding space, where semantically similar words are assumed to have "close" representations. We achieve this via a mechanism that outputs a probability of soft matching between a token and a "reference vector", which is learned together with the classification task to capture discriminative "concepts". Differently from the bag-of-words methods of Section 2, this model easily accommodates multiple reference vectors, hence it can focus on many different concepts that are critical for classification. 
In order to guide the learning process using the given rationales, it seems sensible to magnify the error for those words that have been marked as relevant by annotators. Notwithstanding the simplicity of the idea, the underlying challenge is to effectively embed human knowledge into the reference vectors, which are responsible for the soft matching technique. In other words, the probabilities of soft matching should be highly correlated with the target class. Finally, we require that classification be done in such a way that a user can explicitly understand which reference vectors are more important for positive or negative prediction. This last step is obtained by learning a linear combination of the soft matching probabilities.\nThe next sections describe the proposed model in depth. First, we show how to compute and combine soft token matching probabilities, and then we introduce the error boosting technique that incorporates rationales into the training process. It is worth mentioning that both techniques have been designed to coexist, even though the latter is not strictly necessary to train the model." }, { "heading": "3.1 SOFT TOKEN MATCHING", "text": "We now present the core mechanism that implements soft token matching. Let us define a set of parameters $P = \{p_1, \ldots, p_N\}$, $p_k \in \mathbb{R}^d$, called prototypes, where $N$ is a hyper-parameter of the model. A prototype plays a similar role to the reference vector in Sheikh et al. (2016). Going back to our movie review example, one $p_i \in P$ should ideally adapt to be close to the embedding of the word "excellent".\nTo learn the $N$ prototypes, we employ the cosine similarity metric. Cosine similarity can be seen as a way to measure semantic similarity; its co-domain ranges from $-1$, i.e., opposite in meaning, to $1$, i.e., same meaning, with $0$ indicating uncorrelation. Ideally, we would like our prototypes to have near-1 similarity with the relevant tokens in the input. To this aim, we further define a gate activation function $g : [-1, 1] \to [0, 1]$ that takes the similarity between a token $x_i^j$ and a prototype $p_k$ and outputs a probability of soft matching:\n$$P(x_i^j \text{ soft-matches } p_k) = g(d(x_i^j, p_k)) = a^{d(x_i^j, p_k) - 1} \in [0, 1] \quad (1)$$\nwhere $a$ is a hyper-parameter. In practice, the closer to 1 the similarity is, the greater the output of this gated activation, and $g(v) = 1 \Leftrightarrow v = 1$. By choosing a high value of $a$ we strongly penalize tokens that are associated with low similarity scores. For completeness, Section A.1 depicts $g(v)$ for different values of $a$. Such a technique is important because it allows PARCUS to focus on $N$ concepts that are semantically different while fostering interpretability. Notice that this method differs from NBOW2 (Sheikh et al., 2016), as we use prototypes to compute per-token features rather than importance scores." }, { "heading": "3.2 COMBINING PROTOTYPES", "text": "Equation 1 computes the probability of soft matching between a token and a prototype. Likewise, because we have $N$ prototypes, we treat all $N$ probabilities as features associated with that token. We represent these features as $\phi_k(x_i^j) = g(d(x_i^j, p_k))$ $\forall k \in \{1, \ldots, N\}$. Now that we have a notion of multiple soft matching probabilities, we can combine them via AND/OR logical functions. 
An approximation of such functions can be straightforwardly implemented through the pseudo-differentiable versions of min and max (Paszke et al., 2017):\n$$\phi_{AND}(x_i^j) = \min(\{\phi_k(x_i^j)\ \forall k\}) \quad (2)$$\n$$\phi_{OR}(x_i^j) = \max(\{\phi_k(x_i^j)\ \forall k\}) \quad (3)$$\nIn Section A.2 we propose a fully differentiable version of the above equations, though min and max significantly speed up convergence (due to the absence of non-linearities)." }, { "heading": "3.3 INFERENCE", "text": "Finally, we need to linearly combine all $F$ features to output a token prediction $y_i^j$. Let us define an auxiliary function (omitting the argument $x_i^j$ to make the notation less cluttered):\n$$\Delta(x_i^j) = [\phi_1, \ldots, \phi_N, \phi_{AND}, \phi_{OR}] \in [0, 1]^{1 \times F} \quad (4)$$\nwhere square brackets denote concatenation. Then, the token prediction is computed as\n$$y_i^j = \Delta(x_i^j)W + b \quad (5)$$\nwhere $W \in \mathbb{R}^{F \times C}$ is a matrix of parameters (multi-class prediction with $C$ classes) and $b$ is the (optional) bias. It is worth noticing that these features are yet another strong inductive bias, and that the linear model is especially necessary to interpret the model, as detailed in Section 4.3. Figure 1 combines all steps of the proposed architecture for token classification.\nFinally, the input prediction is just a sum of the individual $y_i^j$:\n$$y_i = \sigma\Big(\sum_{j}^{T_i} y_i^j\Big), \quad (6)$$\nwhere $\sigma$, from now on, represents the softmax activation." }, { "heading": "3.4 RATIONALE-DRIVEN ERROR BOOSTING", "text": "So far, we have not made use of rationales, which are of fundamental importance to guide the learning process. Intuitively, we would like the prototypes to softly match those tokens that are relevant for prediction. It follows that a boosting approach (Freund et al., 1999) is not feasible in this scenario, because we want to weight the importance of tokens rather than of whole samples. Instead, the method we propose is simple and efficient, and it effectively exploits prior information. The idea is to boost the error associated with specific tokens' predictions to encourage the model to focus on them. To be more precise, at training time only, we modify Equation 6 to take the prior information into account as follows:\n$$y_i = \sigma\Big(\sum_{j}^{T_i} y_i^j \cdot f(r_i^j)\Big), \quad (7)$$\nwhere $f : [0, 1] \to \mathbb{R}$ is an arbitrary exponential function of our choice that boosts the error, e.g., $f(x) = e^x$; we leave the extension to an adaptive version of $f$ for future work. In terms of learning, $f(r_i^j)$ boosts the gradients of highlighted tokens while leaving the rest unchanged (i.e., if $r_i^j$ is 0, our $f(r_i^j)$ outputs a multiplicative factor of 1).\nDiscussion. Regularization of the matrix $W$ plays an important role for effective learning. We use L1 and L2 regularization terms on $W$, as done in (Zou & Hastie, 2005), for two main purposes. First, the L1 term enforces sparsity, which allows us to more easily interpret the importance of different features. Secondly, L2 limits the magnitude of the weights, hence avoiding over-compensation of low cosine similarity scores. Consequently, in order to increase one of the soft-matching probability features, the model is encouraged to make changes to the prototypes rather than to the linear weights; in turn, this translates into a particular prototype being "close" to relevant embeddings. From a mathematical standpoint, we cannot achieve the same result as Equation 7 by means of an additional loss term as done in Lei et al. (2016), because gradients would be summed and not multiplied (a short code sketch of Equations 1-7 is given right below). 
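Equations 1-7 translate almost directly into a few lines of PyTorch. The sketch below is our reading of the model, not the authors' released code: the class name, the value a = 100, and the choice f(r) = e^r are illustrative assumptions (the paper only requires f to be exponential), and the optional opposite-matching features mentioned in the Discussion are omitted.

```python
import torch
import torch.nn.functional as F

class Parcus(torch.nn.Module):
    """Minimal sketch of PARCUS: N prototypes, AND/OR features, linear head."""
    def __init__(self, d, n_proto, n_classes, a=100.0):
        super().__init__()
        self.prototypes = torch.nn.Parameter(torch.randn(n_proto, d))
        self.linear = torch.nn.Linear(n_proto + 2, n_classes)  # F = N + AND + OR
        self.a = a

    def forward(self, x, r=None):
        # x: (T, d) token embeddings; r: (T,) binary rationale mask or None
        sim = F.cosine_similarity(x.unsqueeze(1),
                                  self.prototypes.unsqueeze(0), dim=-1)  # (T, N)
        phi = self.a ** (sim - 1.0)                       # Eq. 1: in [0, 1]
        phi_and = phi.min(dim=1, keepdim=True).values     # Eq. 2
        phi_or = phi.max(dim=1, keepdim=True).values      # Eq. 3
        feats = torch.cat([phi, phi_and, phi_or], dim=1)  # Eq. 4
        y_tok = self.linear(feats)                        # Eq. 5: per-token logits
        if r is not None:                                 # Eq. 7: error boosting
            y_tok = y_tok * torch.exp(r.float()).unsqueeze(1)
        # Eq. 6 applies a softmax; with nn.CrossEntropyLoss, raw logits suffice.
        return y_tok.sum(dim=0)
```

Training would then amount to feeding the summed logits to a cross-entropy loss with L1/L2 penalties on `linear.weight`, matching the regularization described in the Discussion.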
Moreover, in our experiments we choose to augment $\Delta(x_i^j)$ with additional information, such as the probability of "opposite" matching: $\phi_{\neg k}(x_i^j) = g(-d(x_i^j, p_k))$ $\forall k \in \{1, \ldots, N\}$. Specifically, when $\phi_{\neg k}(x_i^j) \approx 1$ the token $x_i^j$ and $p_k$ have cosine similarity equal to $-1$, hence they are opposite in meaning.\nImplementation details. PARCUS can be trained by full-gradient descent in an end-to-end fashion, from the prototypes to the linear weights. We rely on PyTorch (Paszke et al., 2017) to implement our model in a few lines of code; this reflects the simplicity and strong inductive bias of our approach, which is necessary in the context we consider. The error boosting technique is applied by the automatic differentiation package once we implement Equation 7.\nWe conclude with remarks on the model complexity. The total number of parameters is $\Theta(Nd + FC)$, which is larger than that needed by simpler models such as logistic regression. Usually, a restricted number of parameters serves to counteract overfitting, by limiting the hypothesis space of the model (Vapnik, 1998). However, this work tackles the problem from a different and novel perspective, as we prevent the prototype weights from changing freely. Specifically, the prototype weights vary in a way that depends on the given embedding space, because they tend to be close to some token representation. If this had not been the case, we could have simply used an MLP in place of $P$, which does not perform as well as PARCUS in our experiments." }, { "heading": "4 EXPERIMENTS", "text": "This section reports the experimental setting as well as our experimental findings. We perform an in-depth analysis of our model through ablation studies, in order to clearly separate the contribution of prototypes from the error boosting technique. Then, we explain what a model can learn by direct inspection of its parameters. All code to reproduce the experiments is publicly available2." }, { "heading": "4.1 EXPERIMENTAL SETTING", "text": "Datasets. We empirically validate our method on three different datasets. First, MovieReview (Zaidan et al., 2007) contains balanced positive and negative movie reviews with rationales. Secondly, we use the highly imbalanced Spouse dataset from Hancock et al. (2018), where the task is to tell whether two entities in a given piece of news are married or not. This is a much harder task than standard classification, as the same document can appear multiple times with different given entities, and the background context varies greatly. Finally, we use the Hatespeech Twitter dataset of Waseem & Hovy (2016), which contains short and noisy tweets that can belong to the hate-speech or neutral class. The datasets' statistics are reported in the appendix for completeness. We manually provide rationales for 60 randomly chosen positive samples of both Spouse and Hatespeech (this process required approximately 1 hour).\nSetup. The experimental evaluation was carried out by measuring performance on the given test set while varying the number of data points used for training. We used balanced train and validation splits for all models, and the validation set is taken to be as big as the training one to simulate a real scenario. As for Spouse, we used the given validation set to compare fairly with the results of Hancock et al. (2018). We chose the pre-trained base version of BERT (Devlin et al., 2018) to provide the embedding space to our method and to other neural baselines as well. 
We repeated each experiment 10 times with different random train/validation splits; however, different models have been trained and validated on the same data splits, and we report the hyper-parameters table in the appendix. Moreover, to avoid bad initializations of the final re-training (for the selected configuration), we average test performance over 3 runs. The optimized measure is Accuracy for MovieReview, as it is perfectly balanced, and the F1-score for Spouse and Hatespeech. We optimize the Cross Entropy loss using the Adam optimizer (Kingma & Ba, 2015) for all the baselines we implement.\nMethods. To have a fair evaluation with respect to the same embedding space, we train a linear model (Linear) and a single-layer MLP that work on token embeddings (MLP), as well as NBOW and NBOW2. Importantly, we also fine-tune BERT on Spouse and MovieReview. For Spouse, we propose a regular expression that associates specific sub-strings ("wife", "husb", "marr" and "knot") with the positive class; ideally, our model should be able to focus on such words while also generalizing. Traditional Supervision (TS) is a logistic regression trained on n-gram features, whereas BL-DM stands for the BabbleLabble pipeline tested on 30 random explanations; results for TS and BL-DM are taken from Hancock et al. (2018). BL-DM can explicitly exploit the relational information of the Spouse dataset, hence it is a strong baseline. Moreover, we report results of an SVM (Zaidan et al., 2007) and a log-linear model on language features (Zaidan & Eisner, 2008), both of which are specifically designed to exploit additional rationales. On Hatespeech, we compare against a Logistic Regression model based on character n-grams (LR-ngrams), as it was shown to reach state-of-the-art performance (Park & Fung, 2017). Finally, we perform a number of ablation studies to isolate the contribution of the different techniques: i) an MLP with the error boosting technique; ii) our method without highlights; iii) our method with no logical features; iv) our method with $\phi_k$ features only." }, { "heading": "4.2 RESULTS & DISCUSSION", "text": "Table 1 presents our empirical results for all three datasets. The results confirm that the choice of a strong inductive bias indeed benefits performance in a very low data regime. On Spouse, our model strongly outperforms the other neural baselines and matches the manually tuned regular expression with just 60 data points (only 30 of them positive). Moreover, TS needs ≈50x more data to achieve similar performance, whereas 10 data points are sufficient to do better than almost all baselines trained on 300 data points, a >30x improvement which does not depend on the chosen embedding space. We also found that TS performs much worse than our linear baseline (hence the need for a fair comparison on the embedding space). With 300 data points, our model without highlights has an average F1 score very close to that of BL-DM, which relies on a domain-specific grammar and parser.\n2Link to the code to reproduce the experiments is omitted at review time" }, { "heading": "MOVIEREVIEW", "text": "In addition, we note that the reported result (46.5) is not averaged over multiple runs; as a matter of fact, one of our random splits achieves a test score of 46.3, indicating the need for robust evaluation when it comes to experiments on few examples. 
Overall, we found that the proposed approach can be really helpful when data is greatly imbalanced, and it outperforms models like BERT that are deemed to perform well when fine-tuned on relatively small datasets (Devlin et al., 2018; Howard & Ruder, 2018). Similar arguments apply to MovieReview, where our model strongly improves over the baselines. Interestingly, our simple representation learning approach is able to beat the state of the art by a large margin when few data points are available. Here, the NBOW and NBOW2 models proved to be strong baselines, as the mean representation of a document seems to work well. Generally speaking, the gap between performances is more evident when the training size is very scarce, even when compared to other baselines that use rationales. In light of these results, we performed ablation studies on both datasets to understand if the improvements are only due to prototypes, rationales or both. Overall, we observe that the strong inductive bias represented by prototypes provides a consistent improvement with respect to the other models, which is especially evident on the Spouse dataset. Interestingly, the MLP does not benefit from error boosting, which might be explained by the fact that its larger hypothesis space, i.e., unconstrained weights, makes it difficult to diminish the contributions of non-relevant tokens. Because rationales guide the learning process, they are more important in the extremely low resource scenario, but their effect slowly fades as the training size increases; contrary to our expectations, PARCUS performed even better on larger amounts of training points without rationales. On Hatespeech, PARCUS still performs fairly well on average, but with 200 training examples it cannot keep up with the logistic regressor of Park & Fung (2017). We found two reasons for such behavior: (i) BERT's tokenizer is unable to accurately split tweets, due to their noisy nature, and (ii) character n-grams are strong discriminative features for this task (Nobata et al., 2016), and BERT does not use them. To simultaneously solve both issues, we switched to Fasttext embeddings (Bojanowski et al., 2017), which are also trained on character n-grams and do not need an additional tokenizer. Surprisingly, we observed a significant improvement on 200 data points, which reduces the gap in performance with the n-grams based model. Therefore, we conclude that choosing the "right" embedding space can make the difference, which does not necessarily mean using one of the latest and most powerful language models available." }, { "heading": "4.3 PROVIDING EXPLANATIONS", "text": "In this section we show that PARCUS is interpretable. To this aim, we train a model using N=3 prototypes on 60 examples taken from the Spouse dataset. Then we run the model on unseen data and inspect the outputs associated with each token. We then rank them to see which are the most important ones, and we observe that the tokens with the highest rank correspond to semantic concepts that are relevant for the task. Indeed, the model learned to focus on words related to marriage, as well as syntactic variations associated with similar semantics. Moreover, some of the words were not given as part of the rationales in the training set. The next step is to show that rationales have effectively been incorporated into the prototypes, and how the features of Eq. 4 have been weighted. 
We started by inspecting the magnitude of the linear weights $W \in \mathbb{R}^{8 \times 2}$; specifically, if the $i$-th feature is discriminative for a class $c$, then the $i$-th row of $W$ will have the $c$-th element clearly larger than the others. In our example, we found that $\phi_1$ was important for positive predictions, whereas the other features did not contribute to a particular class. Therefore, we performed top-10 cosine similarity ranking between tokens and the prototype $p_1$. From the most similar to the least, we obtained: husband; marriage; marrying; wife; married; marry; fiance; wedding; fiancee; and girlfriend. Interestingly, PARCUS has automatically learned to match concepts similar to those provided in natural language form by BabbleLabble (Hancock et al., 2018)." }, { "heading": "5 CONCLUSIONS", "text": "We presented a new methodology to perform classification in the low data regime. We coupled soft token matching with error boosting to focus on concepts that are important for the task at hand. The model is able to outperform the other competitors, including fine-tuned complex models. Moreover, we showed with a practical example how humans can interpret the predictions in terms of concept matching. In conclusion, our model proved to be useful in tasks where gathering data is challenging." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 GATING ACTIVATION FUNCTION", "text": "The gating activation function of Eq. 1 controls how similar an embedding and a prototype have to be in order to significantly contribute as a feature. Figure 3 shows different curves for different values of $a$. A bigger $a$ squashes low cosine similarities more, as is the case for $a = 100$, therefore acting as a stricter filter. For completeness, Figure 5 shows the effect of $g(\cdot)$ in $\mathbb{R}^2$, using $a = 100$." }, { "heading": "A.2 DIFFERENTIABLE IMPLEMENTATION OF AND AND OR", "text": "Given the set $\{\phi_1, \ldots, \phi_N\} \in [0, 1]^N$, interpreted as $N$ independent "soft matching probabilities", we would like to compute their joint probability as well as the probability of a single match out of all possible ones. In our experiments, we found a pseudo-differentiable implementation of min and max (Eq. 2 and Eq. 3) to speed up convergence. Alternatively, we tried a fully differentiable version of those two functions, as well as the probability of mutually exclusive events, i.e., XOR. We present the equations for $N = 2$, although this can be easily generalized to arbitrary $N$:\n$$\phi_{AND}^{Diff}(\phi_1, \phi_2) = \phi_1 \cdot \phi_2$$\n$$\phi_{OR}^{Diff}(\phi_1, \phi_2, s) = (\phi_1 - 2)^{-2s} - 2^{-2s}(1 - \phi_1) + (\phi_2 - 2)^{-2s} - 2^{-2s}(1 - \phi_2) - \phi_{AND}^{Diff}(\phi_1, \phi_2)$$\n$$\phi_{XOR}^{Diff}(\phi_1, \phi_2) = \phi_{OR}^{Diff}(\phi_1, \phi_2) - \phi_{AND}^{Diff}(\phi_1, \phi_2)$$\nwhere $s$ controls how squashed the curve is. Figure 5 depicts all three curves for $s = 2$." }, { "heading": "A.3 DATASETS' STATISTICS", "text": "In Table 2 we report the statistics of the datasets we used. When preprocessing the data with BERT, we used a maximum sentence length of 128 for token-based methods, and 512 when fine-tuning BERT.\nAs mentioned in Section 4, each sample of the Spouse dataset contains a pair of entities as well as the sentence. Therefore, the MLP baseline, the ablation studies and our method make use of an input mask (at test time only), which reflects the fact that we are not interested in those sentences in which the two entities of interest do not both appear. It is worth noticing that such relational information should be naturally exploited by methods like BabbleLabble, which rely on domain-specific grammars. 
While this method should help improve precision, in practice it did not significantly affect performance." }, { "heading": "A.4 HYPER-PARAMETERS", "text": "Hyper-parameters are used to perform model selection, which returns the best configuration for a given validation split. We use such a configuration to train a model on the whole training set and then evaluate its generalization performance on the unseen test set. When Data Programming is used, we simply combine the model selection of our model with that of Data Programming. Since each experiment is repeated 10 times to avoid lucky/unlucky data splits, model selection is performed 10 times as well. Model assessment, i.e., the measure of performance of our family of models, is evaluated by averaging the 10 different performances on the test set. When fine-tuning BERT, we mainly follow the guidelines reported in Devlin et al. (2018)." }, { "heading": "A.5 GRADIENT BOOSTING EFFECT ON BACKPROPAGATION", "text": "For a loss $\ell$ defined on top of Equation 6 and a true label $\hat{y}$, backpropagation is computed as (abstracting from the sample index $i$ to simplify notation):\n$$\frac{\partial \ell(y, \hat{y})}{\partial \theta} = \frac{\partial \ell(y, \hat{y})}{\partial y} \cdot \frac{\partial y}{\partial \theta} = \frac{\partial \ell(y, \hat{y})}{\partial y} \cdot \frac{\partial\, \sigma\big(\sum_{j}^{T} y^j f(r^j)\big)}{\partial\big(\sum_{j}^{T} y^j f(r^j)\big)} \cdot \frac{\partial \sum_{j}^{T} y^j f(r^j)}{\partial \theta}, \qquad \frac{\partial \sum_{j}^{T} y^j f(r^j)}{\partial \theta} = \sum_{j}^{T}\Big(\frac{\partial\big(\Delta(x^j)W + b\big)}{\partial \theta} \cdot f(r^j)\Big), \quad (8)$$\nwhere $\theta$ are the parameters of the model (a tiny autograd check of this effect is sketched after Appendix A.7 below)." }, { "heading": "A.6 ROBUSTNESS TO NOISE", "text": "We investigated how injecting random rationales (i.e., ones) into the prior information affects the final performance. Clearly, having 100% noise corresponds to not having rationales at all; therefore we ran a simple experiment (with fixed hyper-parameters) 10 times on Spouse, with the goal of studying robustness to noise. Figure 6 shows the result, where different curves stand for different training sizes; as one would expect, for few data points (blue and red curves) noise has a bigger influence, whereas for 150 (orange curve) and 300 (green curve) data points the effect is negligible. This confirms that there is a trade-off beyond which the noise in rationales has little effect on the final performance.\nA.7 VISUALIZATION OF THE MOST IMPORTANT TOKENS\nThe importance of tokens in a sentence can be inspected by looking at the prototypes $p_i$, the weight matrix $W$ and the token predictions $y_i^j$. In Figure 7 we show some of the tokens which are responsible for positive predictions on unseen Spouse data, which is consistent with the results of Section 4.3." } ]
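As referenced in Appendix A.5, the boosting factor can be verified with a short autograd check. This toy example is our own (with f(x) = e^x, the choice quoted in the paper); it shows that the highlighted token's gradient contribution is scaled by e while the other token's contribution is unchanged.

```python
import torch

# y_j = w * feat_j for a shared scalar parameter w; token 0 is highlighted.
w = torch.tensor(1.0, requires_grad=True)
feats = torch.tensor([0.5, 2.0])   # per-token features Delta(x^j)
r = torch.tensor([1.0, 0.0])       # rationale mask
(w * feats * torch.exp(r)).sum().backward()
print(w.grad)  # 0.5 * e + 2.0 * 1 ≈ 3.359: the boosted token dominates
```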
2,019
null
SP:bf0b8ec1ea69eb1b54e6502182b66ab3a8321c42
[ "This paper attempts to compress the networks so as to accelerate the running procedure as well as save the storage. The authors propose a dual-module that is composed of a little module and big module. The big module use the full original data and parameters whereas the little module use small data and parameters by random projecting on the original ones. Through a statistical investigation, the authors provide a method to choose the little or big module dynamically. By applying this method on LSTM and GRU, the authors make them more efficient. Experimental results validate this point.", "This manuscript proposes an approach to reduce memory access and computation in Recurrent Neural Networks. Specifically, they train a second \"little\" neural network to approximate a pre-trained \"big\" network and use simple rules to switch between the little and the big network. The approach can provide some speedups while reducing the total number of memory accesses and the computational cost in exchange for a mild decrease in predictive performance." ]
Using Recurrent Neural Networks (RNNs) in sequence modeling tasks is promising for delivering high-quality results, but it is challenging to meet stringent latency requirements because of the memory-bound execution pattern of RNNs. We propose a big-little dual-module inference scheme that dynamically skips unnecessary memory accesses and computation to speed up RNN inference. Leveraging the error-resilient feature of the nonlinear activation functions used in RNNs, we propose to use a lightweight little module that approximates the original RNN layer, which is referred to as the big module, to compute activations of the insensitive region that are more error-resilient. The expensive memory access and computation of the big module can be reduced, as its results are only used in the sensitive region. Our method can reduce the overall memory access by 40% on average and achieves a 1.54x to 1.75x speedup on a CPU-based server platform with negligible impact on model quality.
[]
[ { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "arXiv preprint arXiv:1409.0473,", "year": 2014 }, { "authors": [ "Ella Bingham", "Heikki Mannila" ], "title": "Random projection in dimensionality reduction: applications to image and text data", "venue": "In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2001 }, { "authors": [ "Tolga Bolukbasi", "Joseph Wang", "Ofer Dekel", "Venkatesh Saligrama" ], "title": "Adaptive neural networks for efficient inference", "venue": "In Proceedings of the 34th International Conference on Machine Learning - Volume 70,", "year": 2017 }, { "authors": [ "Vctor Campos", "Brendan Jou", "Xavier Gir i Nieto", "Jordi Torres", "Shih-Fu Chang" ], "title": "Skip RNN: Learning to skip state updates in recurrent neural networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Kyunghyun Cho", "Bart Van Merriënboer", "Caglar Gulcehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", "venue": "arXiv preprint arXiv:1406.1078,", "year": 2014 }, { "authors": [ "Xiaoliang Dai", "Hongxu Yin", "Niraj K Jha" ], "title": "Grow and prune compact, fast, and accurate lstms", "venue": "arXiv preprint arXiv:1805.11797,", "year": 2018 }, { "authors": [ "Alex Graves", "Abdel-rahman Mohamed", "Geoffrey Hinton" ], "title": "Speech recognition with deep recurrent neural networks", "venue": "IEEE international conference on acoustics, speech and signal processing,", "year": 2013 }, { "authors": [ "Song Han", "Huizi Mao", "William J Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "arXiv preprint arXiv:1510.00149,", "year": 2015 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William Dally" ], "title": "Learning both weights and connections for efficient neural network", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Yanzhang He", "Tara N Sainath", "Rohit Prabhavalkar", "Ian McGraw", "Raziel Alvarez", "Ding Zhao", "David Rybach", "Anjuli Kannan", "Yonghui Wu", "Ruoming Pang" ], "title": "Streaming end-to-end speech recognition for mobile devices", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Ping Li", "Trevor J Hastie", "Kenneth W Church" ], "title": "Very sparse random projections", "venue": "In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2006 }, { "authors": [ "Liu Liu", "Lei Deng", "Xing Hu", "Maohua Zhu", "Guoqi Li", "Yufei Ding", "Yuan Xie" ], "title": "Dynamic sparse graph for efficient deep learning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Huizi Mao", "Song Han", "Jeff Pool", "Wenshuo Li", "Xingyu Liu", "Yu Wang", "William J Dally" ], "title": "Exploring the regularity of sparse structure in convolutional neural networks", "venue": "arXiv preprint arXiv:1705.08922,", "year": 2017 }, { "authors": [ "Sharan Narang", "Erich Elsen", 
"Gregory Diamos", "Shubho Sengupta" ], "title": "Exploring sparsity in recurrent neural networks", "venue": "arXiv preprint arXiv:1704.05119,", "year": 2017 }, { "authors": [ "Daniel Neil", "Jun Haeng Lee", "Tobi Delbruck", "Shih-Chii Liu" ], "title": "Delta networks for optimized recurrent network computation", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Jongsoo Park", "Maxim Naumov", "Protonu Basu", "Summer Deng", "Aravind Kalaiah", "Daya Khudia", "James Law", "Parth Malani", "Andrey Malevich", "Satish Nadathur" ], "title": "Deep learning inference in facebook data centers: Characterization, performance optimizations and hardware implications", "venue": "arXiv preprint arXiv:1811.09886,", "year": 2018 }, { "authors": [ "Antonio Polino", "Razvan Pascanu", "Dan Alistarh" ], "title": "Model compression via distillation and quantization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Yikang Shen", "Shawn Tan", "Alessandro Sordoni", "Aaron Courville" ], "title": "Ordered neurons: Integrating tree structures into recurrent neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Peiqi Wang", "Xinfeng Xie", "Lei Deng", "Guoqi Li", "Dongsheng Wang", "Yuan Xie" ], "title": "Hitnet: hybrid ternary recurrent neural network", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yuxuan Wang", "RJ Skerry-Ryan", "Daisy Stanton", "Yonghui Wu", "Ron J Weiss", "Navdeep Jaitly", "Zongheng Yang", "Ying Xiao", "Zhifeng Chen", "Samy Bengio" ], "title": "Tacotron: Towards end-to-end speech synthesis", "venue": "arXiv preprint arXiv:1703.10135,", "year": 2017 }, { "authors": [ "Wei Wen", "Yuxiong He", "Samyam Rajbhandari", "Minjia Zhang", "Wenhan Wang", "Fang Liu", "Bin Hu", "Yiran Chen", "Hai Li" ], "title": "Learning intrinsic sparse structures within long short-term memory, 2017", "venue": null, "year": 2017 }, { "authors": [ "Samuel Williams", "Andrew Waterman", "David Patterson" ], "title": "Roofline: An insightful visual performance model for floating-point programs and multicore architectures", "venue": "Technical report, Lawrence Berkeley National Lab.(LBNL),", "year": 2009 }, { "authors": [ "Felix Wu", "Angela Fan", "Alexei Baevski", "Yann Dauphin", "Michael Auli" ], "title": "Pay less attention with lightweight and dynamic convolutions", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yonghui Wu", "Mike Schuster", "Zhifeng Chen", "Quoc V Le", "Mohammad Norouzi", "Wolfgang Macherey", "Maxim Krikun", "Yuan Cao", "Qin Gao", "Klaus Macherey" ], "title": "Google’s neural machine translation system: Bridging the gap between human and machine translation", "venue": "arXiv preprint arXiv:1609.08144,", "year": 2016 }, { "authors": [ "Chen Xu", "Jianqiang Yao", "Zhouchen Lin", "Wenwu Ou", "Yuanbin Cao", "Zhirong Wang", "Hongbin Zha" ], "title": "Alternating multi-bit quantization for recurrent neural networks", "venue": "arXiv preprint arXiv:1802.00150,", "year": 2018 }, { "authors": [ "X. Zhang", "C. Xie", "J. Wang", "W. Zhang", "X. 
Fu" ], "title": "Towards memory friendly long-short term memory networks (lstms) on mobile gpus", "venue": "In 2018 51st Annual IEEE/ACM International Symposium on Microarchitecture (MICRO),", "year": 2018 }, { "authors": [ "Michael Zhu", "Suyog Gupta" ], "title": "To prune, or not to prune: exploring the efficacy of pruning for model compression", "venue": "arXiv preprint arXiv:1710.01878,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recurrent Neural Networks (RNNs) play a critical role in many natural language processing (NLP) tasks, such as machine translation (Bahdanau et al., 2014; Wu et al., 2016), speech recognition (Graves et al., 2013; He et al., 2019), and speech synthesis (Wang et al., 2017), owing to the capability of modeling sequential data. These RNN-based services deployed in both data-center and edge devices often process inputs in a streaming fashion, which demands a real-time interaction. For instance, in cloud-based translation tasks, multiple requests need to be served with very stringent latency limit, where inference runs concurrently and individually (Park et al., 2018). For on-device speech recognition as an automated assistant, latency is the primary concern to pursue a fast response (He et al., 2019).\nHowever, serving RNN-based models in latency-sensitive scenarios is challenging due to the low data reuse, and thus low resource utilization as memory-bound General Matrix-Vector multiplication (GEMV) is the core compute pattern of RNNs. Accessing weight matrix from off-chip memory is the bottleneck of GEMV-based RNN execution as the weight data almost always cannot fit in on-chip memory. Moreover, accessing weights repeatedly at each time-step, especially in sequenceto-sequence models, makes the memory-bound problem severer. Subsequently, the on-chip computing resources would be under-utilized. Although batching is a walk-around for low-utilization, using a large batch size is not favored in latency-sensitive scenarios such as speech recognition and translation.\nIn essence, the RNN inference is not a simple GEMV. With non-linearity followed the GEMV operation as the activation functions, the RNN inference operation is “activated” GEMV. These nonlinear activation functions as used in neural networks bring error resilience. As shown in Figure 1, sigmoid and tanh functions in Gated RNNs such as Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) and Gated Recurrent Unit (GRU) (Cho et al., 2014) have insensitive regions – green shaded regions – where the outputs are saturated and resilient to errors in pre-activation accumulated results. In other words, not all computations in RNNs need to be accurate. Can we leverage this error resilience in RNNs to reduce the memory access and eventually achieve speedup?\nTo this end, we propose a big-little dual-module inference that regarding the original RNN layer as the big module, and use a parameterized little module to approximate the big module to help reduce redundant weight accesses. The philosophy of dual-module inference is using approximated results computed by the memory-efficient little module in the insensitive region, and using accurate\nresults computed by the memory-intensive big module in the sensitive region. For this reason, the final outputs are the mixture of the big-little module. With the memory-efficient little module computes for the insensitive region, we can reduce the expensive data access and computation of the big module and thus reduce overall memory access and computation cost. The (in)sensitive region is dynamically determined using the little module results. 
Because of the error resilience, using approximated results in the insensitive region has a negligible impact on the overall model quality but creates a significant acceleration potential.
Given the trade-off between accuracy and efficiency, the little module needs to be sufficiently accurate while being as lightweight as possible. To achieve this, we first use a dimension reduction method – random projection – to reduce the parameter size of the little module and thus reduce data accesses. Then, we quantize the weights of the little module to lower the overhead further. Because we only need the little module outputs in the insensitive region, which is error-resilient, we can afford an aggressively low bit-width. Compared with common sparsification schemes, our hybrid approach avoids indexing overheads and therefore achieves practical speedup.
We evaluate our method on language modeling and neural machine translation using RNN-based models and measure the performance, i.e., wall-clock execution time, on a CPU-based server platform. With overall memory access data reduced by 40% on average, our method can achieve 1.54x to 1.75x speedup with negligible impact on model quality." }, { "heading": "2 MOTIVATION", "text": "In this section, we discuss the error resilience of RNNs. As shown in Fig. 1, the nonlinear activation functions – sigmoid and tanh – have insensitive regions where the output activations are resilient to errors introduced in the pre-activation accumulation results. We take a single LSTM layer for language modeling over the PTB dataset as an illustrative example. The baseline perplexity (PPL) is 80.64. We consider two cases: adding a random error vector drawn from a normal distribution to the pre-activation accumulation results in the sensitive regions of the four gates, and adding errors to the insensitive regions. We separate the (in)sensitive regions by 50% based on the activation magnitude.
As listed in Table 1, we report the PPL on the test set and the average cosine similarity between the activations of the baseline model and the error-introduced model. Before applying the nonlinear activation functions, the cosine similarity of the two cases – adding errors in the sensitive region or the insensitive region – is at the same level. However, we observe that after the nonlinear gates, the cosine similarity in the insensitive case is much closer to one (i.e., fewer output errors) than that in the sensitive case. We further compare the PPL of these two cases, and we observe that introducing errors in the insensitive region causes little quality degradation.
The selection of which neurons should be in the (in)sensitive region is dynamic and input-dependent, which can be seen in Figure 2. Unlike static weight sparsity, where we can prune the unused connections offline in advance, the dynamic region speculation requires a very lightweight criterion for real-time processing. Taking all these into account, we propose a dual-module inference method that efficiently determines the (in)sensitive regions and significantly saves memory access and computational cost." },
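To make the noise-injection experiment above concrete, here is a minimal PyTorch sketch of the comparison for the tanh case. The 50% magnitude split, the noise scale, and all names are our illustrative assumptions, not the paper's exact protocol:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def perturb(preact: torch.Tensor, region: str, noise_std: float = 0.2):
    # Split units 50/50 by |tanh| magnitude: the half saturated near +/-1
    # is treated as insensitive, the rest as sensitive (our simplification
    # of the paper's activation-magnitude split).
    act = torch.tanh(preact)
    order = act.abs().argsort(descending=True)
    half = preact.numel() // 2
    idx = order[:half] if region == "insensitive" else order[half:]
    noisy = preact.clone()
    noisy[idx] += noise_std * torch.randn(half)
    return torch.tanh(noisy)

z = 3.0 * torch.randn(4096)       # toy pre-activation accumulation results
ref = torch.tanh(z)
for region in ("insensitive", "sensitive"):
    sim = F.cosine_similarity(perturb(z, region), ref, dim=0)
    print(f"{region}: cosine similarity = {sim.item():.4f}")
```

Running this kind of probe shows the insensitive half tracking the reference activations much more closely, which is the effect Table 1 quantifies with PPL.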
{ "heading": "3 APPROACH", "text": "Firstly, we explain the dual-module inference by taking a fully-connected (FC) layer as an example and then extend it to LSTM and GRU. For an FC layer with unit batch size, the operation is typically formulated as a = ϕ(y), y = Wx + b, where W is a weight matrix (W ∈ R^{n×d}), x is an input vector (x ∈ R^d), b is a bias vector (b ∈ R^n), a is an activated output vector (a ∈ R^n), and ϕ is an activation function. The core computation is matrix-vector multiplication (GEMV), i.e., Wx. Both the amount of computation and the memory access are O(nd); therefore, it is memory-bound since the operation intensity is O(1) according to the Roofline model analysis (Williams et al., 2009). Accessing weights from off-chip memory is the bottleneck in terms of both latency and energy." }, { "heading": "3.1 OVERVIEW OF DUAL-MODULE PHILOSOPHY", "text": "Our work aims at reducing the memory access of weight matrices for GEMV-based RNN inference. We show in Section 2 that not all values in y need accurate computation, and those that belong to the insensitive region can afford some level of approximation. In other words, we only need accurate computation and expensive memory access in the sensitive region of y, and we can skip the computation and memory access to weights that contribute to the insensitive region of y. With that, we still need approximated results in the insensitive region. Therefore, we propose to learn a lightweight little module from the original trained layer; here we refer to the original layer as the big module. Essentially, our little module is executed in a low-dimensional and low-precision space, and is thus termed the LL module; by contrast, the original big module with high dimension and high precision is called the HH module. Let the outputs from these two modules be y^{LL} and y^{HH}, respectively. If the LL module approximates the HH module well, the final output vector – a mixture of results from the HH and the LL modules – can be assembled by
y = y^{HH} ⊙ m + y^{LL} ⊙ (1 − m) (1)
where m ∈ {0, 1}^n is a binary mask vector for the output switching and ⊙ denotes element-wise multiplication. m_i equals 1 in the sensitive region and switches to 0 in the insensitive region. The overall saving comes from skipping memory access to the big module while paying the overhead of accessing and computing the little module." },
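As a rough illustration of the assembly in Equation (1), a minimal NumPy sketch; all names are ours, and the mask here is a toy placeholder for the criterion of Section 3.3:

```python
import numpy as np

def assemble(y_hh: np.ndarray, y_ll: np.ndarray, m: np.ndarray) -> np.ndarray:
    """Equation (1): big-module result where m == 1 (sensitive region),
    little-module approximation where m == 0 (insensitive region)."""
    return y_hh * m + y_ll * (1 - m)

rng = np.random.default_rng(0)
n = 8
y_hh = rng.standard_normal(n)                 # accurate outputs (needed only where m == 1)
y_ll = y_hh + 0.05 * rng.standard_normal(n)   # cheap approximation
m = (np.abs(y_ll) > 0.5).astype(y_hh.dtype)   # toy sensitivity mask
print(assemble(y_hh, y_ll, m))
```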
{ "heading": "3.2 CONSTRUCT THE LL MODULE", "text": "As the HH module is the original pre-trained layer, we only need to construct the LL module. Delivering a lightweight little module at inference time is crucial to achieving real wall-clock time speedup. As discussed earlier, sparsification methods usually suffer from severe indexing overheads; therefore, we turn to other approaches. In this work, we propose a hybrid compression with dimension reduction and data quantization to keep the little module as efficient as possible in computation and storage. The low dimension and low precision give rise to the desired LL module. We emphasize two objectives that should be met in the design of the LL module: (1) much lower computation and memory overheads than the HH module; (2) approximating the outputs of the HH module accurately.
First, we introduce sparse random projection to reduce the dimension of x from R^d to R^k where k ≪ d. Subsequently, the parameter size of the LL module is O(nk), which is much smaller than the parameter size O(nd) of the HH module. Random projection is a common technique for dimension reduction that preserves distances in Euclidean space (Achlioptas, 2003; Bingham & Mannila, 2001; Li et al., 2006; Liu et al., 2019). The dimension reduction step can be formulated as
x^{LL} = P x^{HH} (2)
where P is a sparse random matrix (P ∈ (1/√3) · {−1, 0, 1}^{k×d}, where the probabilities of P_{ij} being −1, 0, and 1 are 1/6, 2/3, and 1/6, respectively). Note that k is configurable according to actual needs to balance the accuracy loss and inference cost. We choose the value of k according to Achlioptas (2003):
k = 4 log n / (ε²/2 − ε³/3) (3)
where n is the number of rows in W and ε is a real number in (0, 1).
Second, after the dimension reduction, we quickly construct a lightweight little module in the low-dimensional space to approximate the pre-trained big module. The parameters of the latter (i.e., W^{HH} and b^{HH}) are kept frozen while the parameters of the former (i.e., W^{LL} and b^{LL}) are updated by stochastic gradient descent (SGD) to minimize the following loss function:
L = (1/S) Σ_s ||y^{HH} − y^{LL}||²₂ = (1/S) Σ_s ||(W^{HH} x^{HH} + b^{HH}) − (W^{LL} x^{LL} + b^{LL})||²₂ (4)
where S is the mini-batch size. Essentially, for each pair of big-little modules, we apply linear regression on the little module to approximate the function of the big module and optimize the mean square error of the two. Apparently, the parameter size of W^{LL} is O(nk), much smaller than the original weight W^{HH} of O(nd) in the high-dimensional space. Even when further considering the projection cost of O(kd), the overhead is still much lower than the vanilla inference. In this way, the memory-bound issue in GEMV-based models can be greatly alleviated; the computational complexity is also reduced. The SGD overhead for constructing the above module can be amortized by the pattern of “construct-once-inference-forever”.
Finally, based on the constructed low-dimensional module, we also apply a data quantization technique to reduce the parameter precision. Data quantization can further shrink the storage space of the LL parameters due to the shorter bit-width. The input x is also quantized at run-time to reduce the computation cost. In our design, we apply one-time uniform quantization on W^{LL} to avoid complicated calculations. Although other, more accurate quantization methods are available as well, we find that one-time quantization works well in our dual-module inference given in Equation (1). This error tolerance benefits from the fact that the computation in the insensitive region has a small influence on the final outputs." },
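A minimal NumPy sketch of the LL-module construction above (Equations (2)-(4)). The closed-form least-squares solve standing in for SGD, the random probe inputs, and all names are our assumptions; real inputs would come from recorded activations:

```python
import numpy as np

def reduced_dim(n: int, eps: float = 0.5) -> int:
    # Equation (3): k = 4 log n / (eps^2/2 - eps^3/3); e.g. n = 6000
    # (four stacked LSTM gates of width 1500) gives k ~= 417.
    return int(4 * np.log(n) / (eps ** 2 / 2 - eps ** 3 / 3))

def sparse_projection(k: int, d: int, rng) -> np.ndarray:
    # Equation (2): entries -1/0/+1 with probabilities 1/6, 2/3, 1/6,
    # scaled by 1/sqrt(3) (Achlioptas-style sparse random projection).
    return rng.choice([-1.0, 0.0, 1.0], size=(k, d), p=[1/6, 2/3, 1/6]) / np.sqrt(3)

rng = np.random.default_rng(0)
n, d = 6000, 1500
k = reduced_dim(n)                                  # ~417
P = sparse_projection(k, d, rng)

# Fit W_ll, b_ll by linear regression against the frozen big module
# (Equation (4)); a least-squares solve stands in for the paper's SGD.
W_hh = rng.standard_normal((n, d)) / np.sqrt(d)
b_hh = 0.1 * rng.standard_normal(n)
X = rng.standard_normal((2048, d))                  # probe inputs x^HH
Y = X @ W_hh.T + b_hh                               # targets y^HH
A = np.hstack([X @ P.T, np.ones((len(X), 1))])      # x^LL plus a bias column
sol, *_ = np.linalg.lstsq(A, Y, rcond=None)
W_ll, b_ll = sol[:-1].T, sol[-1]
print("k =", k, "| fit MSE:", float(np.mean((A @ sol - Y) ** 2)))
```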
{ "heading": "3.3 DETERMINE THE INSENSITIVE REGION", "text": "The dual-module inference relies on a binary mask m to switch between outputs of the “accurate & costly” HH module and the “approximated & efficient” LL module. Hence, the generation of m is a crucial factor in controlling the overall performance by adjusting the trade-off between accuracy and efficiency. Thanks to the saturation regions of the nonlinear activation functions in RNNs, such as sigmoid and tanh, we observe a unipolar or bipolar distribution of their outputs, as depicted in Figure 3. This affords two excellent opportunities: (1) it is possible to remove the majority of the computation and access from the costly HH module by setting the peak areas in Figure 3 as insensitive regions; (2) the saturated output values in those regions, such as near 0 in sigmoid and near ±1 in tanh, additionally allow inaccurate computations because the outputs are insensitive to approximated values.
According to the above observations and analysis, we design a specific criterion for each activation function. In particular, they are governed by
sigmoid: if y_i^{LL} > θ_sigmoid, m_i = 1; otherwise, m_i = 0;  tanh: if θ⁻_tanh < y_i^{LL} < θ⁺_tanh, m_i = 1; otherwise, m_i = 0 (5)
where θ_sigmoid > 0, θ⁻_tanh < 0, and θ⁺_tanh > 0 are constant thresholds. Note that these thresholds can be searched for a target insensitive ratio using a validation dataset or tuned at run-time, acting as a knob for the accuracy-efficiency trade-off." }, { "heading": "3.4 OVERVIEW OF DUAL-MODULE INFERENCE ALGORITHM", "text": "The overall implementation is provided in Algorithm 1. After the construction of the LL module, the consequent dual-module inference needs five steps: (1) dimension reduction and data quantization for each dynamic input x as x_Q^{LL} = Q(P x^{HH}), where Q(·) is a quantization function; (2) obtain the approximated output y^{LL} by performing y^{LL} = ϕ(W_Q^{LL} x_Q^{LL} + b_Q^{LL}), where W_Q^{LL} and b_Q^{LL} are the stored quantized parameters; (3) calculate the switching mask vector m according to Equation (5); (4) obtain a fraction of the actual output y^{HH} by performing y_i^{HH} = ϕ(W^{HH}[i, :] x^{HH} + b_i^{HH}) if m_i = 1; (5) produce the final output y according to the assembling in Equation (1).
Algorithm 1: Dual-module Inference Algorithm
Data: HH module parameters W^{HH}, b^{HH}; quantized LL module parameters W_Q^{LL} and b_Q^{LL}; thresholds θ to determine m; random projection matrix P; current input x^{HH}
Result: Final output y
1 Step 1: x_Q^{LL} = Q(P x^{HH});
2 Step 2: y^{LL} = ϕ(W_Q^{LL} x_Q^{LL} + b_Q^{LL});
3 Step 3: generate m according to Equation (5);
4 Steps 4-5: foreach m_i ∈ m do
5   if m_i == 1 then y_i = y_i^{HH} = ϕ(W^{HH}[i, :] x^{HH} + b_i^{HH});
6   else y_i = y_i^{LL};
7 end" }, { "heading": "3.5 APPLY TO RECURRENT NEURAL NETWORKS", "text": "We discuss how to apply the proposed dual-module inference for an FC layer to RNNs, including LSTM and GRU. We will explain the LSTM implementation for illustration, while the extension to GRU is quite straightforward. The dynamics of an LSTM cell can be described as
f(t) = σ(b_f + W_fx x(t) + W_fh h(t−1))
i(t) = σ(b_i + W_ix x(t) + W_ih h(t−1))
o(t) = σ(b_o + W_ox x(t) + W_oh h(t−1))
g(t) = θ(b_g + W_gx x(t) + W_gh h(t−1))
c(t) = c(t−1) ⊙ f(t) + g(t) ⊙ i(t)
h(t) = θ(c(t)) ⊙ o(t) (6)
where f, i, o are the states of the forget, input, and output gates, respectively, and g is the input activation. Each of them has its own bias vector and weight matrices. c and h are the cellular and hidden states of the hidden layer, respectively. σ(·) and θ(·) are the sigmoid and tanh functions, respectively, and ⊙ denotes element-wise multiplication. The computation of each gate is similar to an FC-like layer; therefore, Algorithm 1 still holds. The first difference is the two GEMV computations in each gate; we apply dimension reduction, construction of the LL module, and data quantization to both GEMV computations. The second difference is that there is an additional temporal dimension in RNNs. We should guarantee the approximation performance of the LL module at all time steps. Taking the forget gate as an example, the linear map works for both x^{LL}(t) = P_x x^{HH}(t) and h^{LL}(t−1) = P_h h^{HH}(t−1). The loss function for constructing the LL module is slightly modified to
L = (1/(ST)) Σ_s Σ_t ||(b_f^{HH} + W_fx^{HH} x^{HH}(t) + W_fh^{HH} h^{HH}(t−1)) − (b_f^{LL} + W_fx^{LL} x^{LL}(t) + W_fh^{LL} h^{LL}(t−1))||²₂. (7)
Here the minimization considers not only the S training samples in each mini-batch but also the T time steps. The data quantization, switching mask (i.e., m) generation, and output assembling are the same as Algorithm 1 describes.
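As a rough Python rendering of Algorithm 1 for a single sigmoid FC layer: names and sizes are ours, the little module here is untrained so the outputs are illustrative only, and on real hardware the per-row loop would be a batched BLAS level-1 kernel over the selected rows:

```python
import numpy as np

def quantize(x: np.ndarray, bits: int = 8) -> np.ndarray:
    # One-time symmetric uniform quantization; returns dequantized floats
    # (a simplification of the paper's Q(.)).
    qmax = 2 ** (bits - 1) - 1
    s = np.abs(x).max() / qmax
    return np.clip(np.round(x / s), -qmax, qmax) * s

def dual_module_fc(x_hh, W_hh, b_hh, W_ll_q, b_ll_q, P, theta=0.5):
    # Algorithm 1 for one sigmoid FC layer; rows of W_hh are read only
    # where the mask marks the output as sensitive.
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    x_ll_q = quantize(P @ x_hh)                      # Step 1
    y_ll = sigmoid(W_ll_q @ x_ll_q + b_ll_q)         # Step 2
    m = y_ll > theta                                 # Step 3, Eq. (5)
    y = y_ll.copy()                                  # Steps 4-5, Eq. (1)
    for i in np.flatnonzero(m):                      # sensitive rows only
        y[i] = sigmoid(W_hh[i] @ x_hh + b_hh[i])
    return y, m

rng = np.random.default_rng(0)
n, d, k = 64, 32, 12
W_hh, b_hh = rng.standard_normal((n, d)), rng.standard_normal(n)
P = rng.choice([-1.0, 0.0, 1.0], size=(k, d), p=[1/6, 2/3, 1/6]) / np.sqrt(3)
# In practice W_ll/b_ll come from the regression of Eq. (4); random here.
W_ll_q, b_ll_q = quantize(rng.standard_normal((n, k))), quantize(b_hh)
y, m = dual_module_fc(rng.standard_normal(d), W_hh, b_hh, W_ll_q, b_ll_q, P)
print("insensitive ratio:", 1.0 - m.mean())
```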
Applying the same scheme to the other gates is similar, so we do not repeat it. Note that the input x and hidden state h can have different sizes, termed d_x and d_h, respectively. For simplicity, we set P_x ∈ R^{k×d_x} and P_h ∈ R^{k×d_h} so that x^{LL} and h^{LL} have the same length k. For the g gate with the tanh function, we set |θ⁻_tanh| = |θ⁺_tanh|, also for simplicity; however, different magnitudes are allowed." }, { "heading": "3.6 SAVING AND OVERHEAD ANALYSIS", "text": "The target of our dual-module inference method is to reduce the expensive off-chip memory access of the big module with the help of the little module. We introduce an insensitive ratio as the fraction of outputs taken from the little module over all outputs. The ratio can be interpreted as the ratio of zeros in the mask m of Equation (1). In other words, a higher insensitive ratio means less memory access to the big module. For example, obtaining a ratio of 50% results in reducing the weight matrix accesses in a GEMV operation by 50%. The choice of the ratio determines the model inference quality, and it is a knob for trading off model inference quality vs. latency at run-time.
The overhead of dual-module inference is small due to the use of dimension reduction and quantization. When choosing the reduced dimension k and the low-precision bit-width of the little module, we use Equation (3) with ε = 0.5 and INT8 quantization by default. We also explore different levels of dimension reduction and quantization in Section 4.3 and Section 4.4. As shown in Figure 4, we compare the memory access data and operations between the single-module baseline and the little module of dual-module inference using a set of LSTM and GRU layers. On average, the little module accounts for 10% storage overhead and 40% operation overhead compared with the base case. Note that we count the number of operations in Figure 4 regardless of precision, and the little module computation overhead can be further reduced using the low-precision compute kernels we used in the performance evaluation." }, { "heading": "4 EVALUATION", "text": "We first evaluate the model inference quality and execution time under different insensitive ratios and then conduct two sensitivity studies on dimension reduction and quantization.
Our method is evaluated on a CPU-based server platform (Intel(R) Xeon(R) CPU E5-2698 v4), as most inference workloads run on CPUs (Park et al., 2018). We use PyTorch to train the little module and evaluate inference quality. The baseline implementation is the PyTorch CPU version with Intel MKL (version 2019.4) as the back-end BLAS kernel library. Our custom kernel implementation uses a multi-threaded MKL dot-product kernel at BLAS level-1 to compute the big module instead of BLAS level-2 or level-3 kernels. The kernel-wise performance is measured as wall-clock time and averaged over 1000 runs, assuming a cold cache at the execution of each RNN cell, representing real-world cases, for example in the decoder of a seq2seq model.
We first evaluate our method on single-layer LSTM & GRU models used in language modeling tasks and then on the multi-layer stacked LSTM in the GNMT model used in machine translation tasks – a standard benchmark model for inference, as in MLPerf1. We train the little module while freezing the parameters of the big module, and we use the same training set and validation set to run the SGD optimization." }, { "heading": "4.1 LANGUAGE MODELING", "text": "We first evaluate our method on single-layer LSTMs/GRUs.
Our implementations are adapted from the word-level language modeling example from PyTorch, with the same hyper-parameters used to train the baseline models. We report word-level perplexity (PPL) as the measure of model quality. As listed in Table 2, the baseline LSTM model achieves 80.64 PPL at a latency of 1.477 ms. We then vary the insensitive ratio to show the quality-performance trade-off; a larger insensitive ratio indicates that more results come from the little module and less memory access is needed to compute the big module. As we increase the insensitive ratio, we observe a degradation of quality as the perplexity increases, alongside a gradual reduction in execution time. When the insensitive ratio is 50%, the perplexity is slightly increased to 81.36, which is negligible in language modeling tasks, while the inference speedup is 1.67x.
We observe a similar quality-performance trade-off for an LSTM with 750 hidden units. Comparing the base LSTM with 750 hidden units against the dual-module LSTM with 1500 hidden units and a 50% insensitive ratio, although the memory access reduction is at the same level, our proposed dual-module approach achieves much better model quality because we keep the expressive power of the larger LSTM layer.
We further report results using a single-layer GRU on word-level language modeling tasks in Table 3. Using dual-module inference on GRUs exhibits a similar quality-performance trade-off to that of LSTMs. Our dual-module method is generally applicable to both LSTMs and GRUs." }, { "heading": "4.2 NEURAL MACHINE TRANSLATION", "text": "Given the promising results on language modeling, we further investigate Neural Machine Translation (NMT), which is a promising end-to-end learning approach for automated translation (Wu et al., 2016). The base model2 consists of a four-layer stacked LSTM in both the encoder and the decoder of the sequence-to-sequence model. We focus on the speedup of the decoder since it is the most memory-intensive and the most time-consuming part (∼95%). The decoder has a four-layer unidirectional LSTM with hidden size 1024 and residual connections starting from the third layer, i.e., the input size of the third and fourth layers is 2048. We report the de-tokenized BLEU score to measure the model inference quality on the public WMT16 English-German dataset. The baseline model obtains a BLEU score of 24.32.
1 https://mlperf.org/inference-overview/
2 From https://github.com/NVIDIA/DeepLearningExamples
We replace the LSTM layers in the decoder with our proposed dual-module-based LSTM layers. Similar to the single-layer LSTM results, using the little module's results in the insensitive region can reduce overall memory access while maintaining model quality. As listed in Table 4, our method can achieve imperceptible BLEU score degradation while speeding up inference by 1.75x for the first two LSTM layers and 1.70x for the last two LSTM layers. When compromising more translation quality, i.e., decreasing the BLEU score by 2.4, our method can achieve more than 2x speedup." }, { "heading": "4.3 DISCUSSION ON DIMENSION REDUCTION", "text": "Dimension reduction is an integral part of our dual-module inference method to reduce the number of parameters and the memory footprint. Here, we study the impact of different levels of dimension reduction on the model quality and performance. We conduct experiments on language modeling using a single-layer LSTM with 1500 hidden units.
We quantize the little module to INT8 and reduce the hidden dimension from 1500 to three different levels, which are calculated by sparse random projection. We fix the insensitive ratio to 50% across this set of experiments. As we can see in Table 5, the higher the dimension of the little module, the better its approximation of the big module. For instance, when we reduce the hidden size to 966 and quantize to INT8, the dual-module inference can achieve slightly better quality – a PPL of 80.40 – and 1.37x speedup. More aggressive dimension reduction can deliver more speedup at the cost of more quality degradation: hidden dimensions reduced to 417 and 266 give 1.67x and 1.71x speedup but increase the PPL by 0.72 and 2.87, respectively.
We further show the overhead of performing the computation of the little module. As listed in the last three columns of Table 5, we measure the execution time of performing dimension reduction on inputs by sparse random projection, the computation of the little module, and the computation of the big module; the execution time is normalized to the baseline case, i.e., the execution time of the standard LSTM, to highlight the percentage of overheads. When the hidden dimension is reduced to 966, the overhead of the little module accounts for 22% while the execution time of the big module is cut by half3. In our experiments, we choose ε = 0.5 as the default parameter in sparse random projection as it demonstrated a good quality-speedup trade-off in our study. When further reducing the hidden dimension to 266, there is only a slight improvement in speedup compared with the hidden size of 417, where the overhead of the little module is already small enough, but the quality drops significantly." }, { "heading": "4.4 DISCUSSION ON QUANTIZATION", "text": "Quantizing the weights of the little module is another integral part of keeping the memory footprint small. We show the impact of different quantization levels on model quality and parameter size. After training the little module, we can quantize its weights to lower precision to reduce memory access on top of the dimension reduction. As we can see in Table 6, more aggressive quantization leads to a smaller parameter size, which reduces the overhead of computing the little module; on the other hand, the approximation quality of the little module is compromised by quantization. We can quantize the little module down to INT4 without significant quality degradation. Using lower precision would degrade the quality while decreasing the parameter size. For the performance evaluation, we choose INT8 as the quantization level since we leverage the off-the-shelf INT8 GEMM kernel in MKL. We expect more speedup once the little module overhead can be further reduced by leveraging INT4 compute kernels." },
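A minimal sketch of the one-time uniform quantization discussed here, sweeping bit-widths to expose the approximation error; symmetric per-tensor scaling and all names are our assumptions:

```python
import numpy as np

def uniform_quantize(w: np.ndarray, bits: int) -> np.ndarray:
    # One-time symmetric uniform quantization to the given bit-width;
    # returns the dequantized weights so the error is easy to inspect.
    qmax = 2 ** (bits - 1) - 1          # e.g. 127 for INT8, 7 for INT4
    scale = np.abs(w).max() / qmax
    return np.clip(np.round(w / scale), -qmax, qmax) * scale

rng = np.random.default_rng(0)
w = 0.05 * rng.standard_normal((417, 1500))   # toy little-module weights
for bits in (8, 4, 2):
    err = np.abs(uniform_quantize(w, bits) - w).mean()
    print(f"INT{bits}: mean abs error = {err:.5f}")
```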
{ "heading": "5 RELATED WORK", "text": "As we aim at the memory-bound problem of RNN-based inference applications, we limit the discussion of related work to RNN inference acceleration. Although we only evaluate our dual-module inference method on standard LSTMs/GRUs, we believe our method can be applied to many newly released sequence modeling networks (Shen et al., 2019; Wu et al., 2019), as we leverage the commonly observed error resilience of non-linear activation functions." }, { "heading": "5.1 MODEL COMPRESSION", "text": "Compressing DNN models via data quantization, weight sparsity, and knowledge distillation is promising for delivering efficient deployment for inference. Xu et al. (2018) propose a quantization method for RNNs where both weights and activations are quantized to binary or ternary. Wang et al. (2018) propose a hybrid ternary quantization method based on the different distributions of weights and activations.
Weight pruning, i.e., inducing weight sparsity, has been proposed to reduce the parameter size of a pre-trained model (Han et al., 2015b;a). While fine-grained, element-wise pruning can reduce the number of parameters (Narang et al., 2017; Zhu & Gupta, 2017; Dai et al., 2018), indexing non-zero weights causes extra memory cost and would offset the benefits of reducing the parameter size; it is hard to gain practical acceleration on general-purpose hardware without hardware specialization (Mao et al., 2017). Although structural pruning (Wen et al., 2017) and knowledge distillation (Polino et al., 2018) can achieve speedup, their applicability to more complicated tasks such as NMT on large-scale datasets is unstudied; besides, those methods require extensive retraining via regularization, which increases the training cost and makes a good solution harder to find.
Model compression would inevitably compromise the expressive power of RNNs. Our method is by no means meant to replace model compression but provides an orthogonal approach to accelerate RNN inference. Using the analogy of knowledge distillation, we do not simply deploy a student network learned from the teacher network. Instead, we let the teacher network, with model compression applied or not, help the student – the little module learned from the base module – and collaboratively perform inference with reduced memory access and computation." }, { "heading": "5.2 COMPUTATION SKIPPING", "text": "Instead of model compression, many works propose to skip computations dynamically based on certain criteria. Bolukbasi et al. (2017) propose dynamic execution with layer-wise early exit. Zhang et al. (2018) leverage a special feature of LSTMs: they apply threshold-based pruning on the output gates to generate a mask, and then use the mask to skip the computation as well as the data access of the masked-out neurons of the other three gates. Neil et al. (2017) utilize temporal input sparsity but need to enforce input similarity with threshold clipping. Campos et al. (2018) selectively skip updating the hidden states for some inputs. However, these works either depend on a special cell structure or rely on the temporal similarity of inputs, which is not evaluated on NLP tasks such as NMT. We are the first to propose a general and principled method to reduce the memory access and computation of gated RNNs, including both LSTMs and GRUs." }, { "heading": "6 CONCLUSION", "text": "In this paper, we describe a big-little dual-module inference method to mitigate the memory-bound problem in serving RNN-based models under latency-sensitive scenarios. We leverage the error resilience of nonlinear activation functions by using the lightweight little module to compute for the insensitive region and using the big module, with skipped memory access and computation, to compute for the sensitive region. With overall memory access reduced by nearly half, our method can achieve 1.54x to 1.75x wall-clock time speedup without significant degradation in model quality." }, { "heading": "APPENDIX A COMPARISON WITH WEIGHT PRUNING METHOD", "text": "We compare our proposed dual-module inference approach with the automated gradual pruning method (Zhu & Gupta, 2017), which is a popular pruning method with an open implementation4.
First, compared with weight pruning, our method achieves better quality with practical speedup – 1.54x to 1.75x wall-clock time speedup – on commodity CPUs, while element-wise weight pruning requires specialized hardware to gain real computational speedup given its irregular sparsity. Moreover, our dual-module inference method can be further applied on top of pruned models to reduce execution time by reducing memory access." } ]
2019
null
SP:b6ca7f80548c640f173512386883ce7e305dd96c
[ "The paper proposes a framework (Scoring-Aggregating-Planning (SAP)) for learning task-agnostic priors that allow generalization to new tasks without finetuning. The motivation for this is very clear - humans can perform much better than machines in zero-shot conditions because humans have learned priors about objects, semantics, physics, etc. This is achieved by learning a scoring function based on the final reward and a self-supervised learned dynamics model.", "The paper describes a method that aims to learn task-agnostic priors for zero-shot generalization. The main idea is to employ the following modeling approach on top of the model-based RL framework: a local convolution network is used to compute a score for each local state action pair, and then another network is used to aggregate all the scores. While the problem being studied is important and the experimental results seem positive, there are a few concerns." ]
Humans can learn task-agnostic priors from interactive experience and utilize the priors for novel tasks without any finetuning. In this paper, we propose Scoring-Aggregating-Planning (SAP), a framework that can learn task-agnostic semantics and dynamics priors from arbitrary-quality interactions with sparse reward and then plan on unseen tasks in a zero-shot condition. The framework finds a neural score function for local regional state and action pairs that can be aggregated to approximate the quality of a full trajectory; moreover, a dynamics model that is learned with self-supervision can be incorporated for planning. Many previous works that leverage interactive data for policy learning either need massive on-policy environmental interactions or assume access to expert data, while we can achieve a similar goal with purely off-policy, imperfect data. Instantiating our framework results in a policy that generalizes to unseen tasks. Experiments demonstrate that the proposed method can outperform baseline methods on a wide range of applications including gridworld, robotics tasks, and video games. 1
[]
[ { "authors": [ "Pieter Abbeel", "Andrew Y Ng" ], "title": "Apprenticeship learning via inverse reinforcement learning", "venue": "In Proceedings of the twenty-first international conference on Machine learning,", "year": 2004 }, { "authors": [ "Joshua Achiam", "Shankar Sastry" ], "title": "Surprise-based intrinsic motivation for deep reinforcement learning", "venue": "arXiv preprint arXiv:1703.01732,", "year": 2017 }, { "authors": [ "Rishabh Agarwal", "Chen Liang", "Dale Schuurmans", "Mohammad Norouzi" ], "title": "Learning to generalize from sparse and underspecified rewards", "venue": "arXiv preprint arXiv:1902.07198,", "year": 2019 }, { "authors": [ "Marcin Andrychowicz", "Filip Wolski", "Alex Ray", "Jonas Schneider", "Rachel Fong", "Peter Welinder", "Bob McGrew", "Josh Tobin", "OpenAI Pieter Abbeel", "Wojciech Zaremba" ], "title": "Hindsight experience replay", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Jose A Arjona-Medina", "Michael Gillhofer", "Michael Widrich", "Thomas Unterthiner", "Johannes Brandstetter", "Sepp Hochreiter" ], "title": "Rudder: Return decomposition for delayed rewards", "venue": "arXiv preprint arXiv:1806.07857,", "year": 2018 }, { "authors": [ "JA Bagnell", "Joel Chestnutt", "David M Bradley", "Nathan D Ratliff" ], "title": "Boosting structured prediction for imitation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2007 }, { "authors": [ "Marc Bellemare", "Sriram Srinivasan", "Georg Ostrovski", "Tom Schaul", "David Saxton", "Remi Munos" ], "title": "Unifying count-based exploration and intrinsic motivation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Kenneth Bogert", "Prashant Doshi" ], "title": "Multi-robot inverse reinforcement learning under occlusion with state transition estimation", "venue": "In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, pp. 1837–1838. 
International Foundation for Autonomous Agents and Multiagent Systems,", "year": 2015 }, { "authors": [ "Kenneth Bogert", "Jonathan Feng-Shun Lin", "Prashant Doshi", "Dana Kulic" ], "title": "Expectation-maximization for inverse reinforcement learning with hidden data", "venue": "In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems,", "year": 2016 }, { "authors": [ "Jacob Buckman", "Danijar Hafner", "George Tucker", "Eugene Brevdo", "Honglak Lee" ], "title": "Sample-efficient reinforcement learning with stochastic ensemble value expansion", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yuri Burda", "Harri Edwards", "Deepak Pathak", "Amos Storkey", "Trevor Darrell", "Alexei A Efros" ], "title": "Large-scale study of curiosity-driven learning", "venue": "arXiv preprint arXiv:1808.04355,", "year": 2018 }, { "authors": [ "Yuri Burda", "Harrison Edwards", "Amos Storkey", "Oleg Klimov" ], "title": "Exploration by random network distillation", "venue": "arXiv preprint arXiv:1810.12894,", "year": 2018 }, { "authors": [ "Jaedeug Choi", "Kee-Eung Kim" ], "title": "Inverse reinforcement learning in partially observable environments", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Leshem Choshen", "Lior Fox", "Yonatan Loewenstein" ], "title": "Dora the explorer: Directed outreaching reinforcement action-selection", "venue": "arXiv preprint arXiv:1804.04012,", "year": 2018 }, { "authors": [ "Kurtland Chua", "Roberto Calandra", "Rowan McAllister", "Sergey Levine" ], "title": "Deep reinforcement learning in a handful of trials using probabilistic dynamics models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Marc Deisenroth", "Carl E Rasmussen" ], "title": "Pilco: A model-based and data-efficient approach to policy search", "venue": "In Proceedings of the 28th International Conference on machine learning", "year": 2011 }, { "authors": [ "Marc Peter Deisenroth", "Carl Edward Rasmussen", "Dieter Fox" ], "title": "Learning to control a low-cost manipulator using data-efficient reinforcement learning", "venue": "Robotics: Science and Systems VII, pp", "year": 2011 }, { "authors": [ "Coline Devin", "Pieter Abbeel", "Trevor Darrell", "Sergey Levine" ], "title": "Deep object-centric representations for generalizable robot learning", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2018 }, { "authors": [ "Alexey Dosovitskiy", "Vladlen Koltun" ], "title": "Learning to act by predicting the future", "venue": "arXiv preprint arXiv:1611.01779,", "year": 2016 }, { "authors": [ "Yilun Du", "Karthik Narasimhan" ], "title": "Task-agnostic dynamics priors for deep reinforcement learning", "venue": "arXiv preprint arXiv:1905.04819,", "year": 2019 }, { "authors": [ "Rachit Dubey", "Pulkit Agrawal", "Deepak Pathak", "Thomas L Griffiths", "Alexei A Efros" ], "title": "Investigating human priors for playing video games", "venue": "arXiv preprint arXiv:1802.10217,", "year": 2018 }, { "authors": [ "Chelsea Finn", "Ian Goodfellow", "Sergey Levine" ], "title": "Unsupervised learning for physical interaction through video prediction", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Chelsea Finn", "Xin Yu Tan", "Yan Duan", "Trevor Darrell", "Sergey Levine", "Pieter Abbeel" ], "title": "Deep spatial autoencoders for visuomotor learning", "venue": "IEEE International 
Conference on Robotics and Automation (ICRA),", "year": 2016 }, { "authors": [ "Justin Fu", "John Co-Reyes", "Sergey Levine" ], "title": "Ex2: Exploration with exemplar models for deep reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "James J Gibson" ], "title": "The ecological approach to visual perception: classic edition", "venue": null, "year": 2014 }, { "authors": [ "Marek Grześ", "Daniel Kudenko" ], "title": "Online learning of shaping rewards in reinforcement learning", "venue": "Neural Networks,", "year": 2010 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Ian Fischer", "Ruben Villegas", "David Ha", "Honglak Lee", "James Davidson" ], "title": "Learning latent dynamics for planning from pixels", "venue": "arXiv preprint arXiv:1811.04551,", "year": 2018 }, { "authors": [ "Irina Higgins", "Arka Pal", "Andrei Rusu", "Loic Matthey", "Christopher Burgess", "Alexander Pritzel", "Matthew Botvinick", "Charles Blundell", "Alexander Lerchner. Darla" ], "title": "Improving zero-shot transfer in reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Shiyu Huang", "Hang Su", "Jun Zhu", "Ting Chen" ], "title": "Combo-action: Training agent for fps game with auxiliary tasks", "venue": null, "year": 2019 }, { "authors": [ "Zhiao Huang", "Fangchen Liu", "Hao Su" ], "title": "Mapping state space using landmarks for universal goal reaching", "venue": "arXiv preprint arXiv:1908.05451,", "year": 2019 }, { "authors": [ "K Jetal Hunt", "D Sbarbaro", "R Żbikowski", "Peter J Gawthrop" ], "title": "Neural networks for control systems—a survey", "venue": null, "year": 1992 }, { "authors": [ "Eric Jang", "Coline Devin", "Vincent Vanhoucke", "Sergey Levine" ], "title": "Grasp2vec: Learning object representations from self-supervised grasping", "venue": "arXiv preprint arXiv:1811.06964,", "year": 2018 }, { "authors": [ "Lukasz Kaiser", "Mohammad Babaeizadeh", "Piotr Milos", "Blazej Osinski", "Roy H Campbell", "Konrad Czechowski", "Dumitru Erhan", "Chelsea Finn", "Piotr Kozakowski", "Sergey Levine" ], "title": "Model-based reinforcement learning for atari", "venue": null, "year": 1903 }, { "authors": [ "Christian Kauten" ], "title": "Super Mario Bros for OpenAI Gym. 
GitHub, 2018", "venue": "URL https://github.com/ Kautenja/gym-super-mario-bros", "year": 2018 }, { "authors": [ "Ramtin Keramati", "Jay Whang", "Patrick Cho", "Emma Brunskill" ], "title": "Strategic object oriented reinforcement learning", "venue": "arXiv preprint arXiv:1806.00175,", "year": 2018 }, { "authors": [ "S Mohammad Khansari-Zadeh", "Aude Billard" ], "title": "Learning stable nonlinear dynamical systems with gaussian mixture models", "venue": "IEEE Transactions on Robotics,", "year": 2011 }, { "authors": [ "Jonathan Ko", "Dieter Fox" ], "title": "Gp-bayesfilters: Bayesian filtering using gaussian process prediction and observation models", "venue": "Autonomous Robots,", "year": 2009 }, { "authors": [ "Thanard Kurutach", "Ignasi Clavera", "Yan Duan", "Aviv Tamar", "Pieter Abbeel" ], "title": "Model-ensemble trust-region policy optimization", "venue": "arXiv preprint arXiv:1802.10592,", "year": 2018 }, { "authors": [ "Sergey Levine", "Pieter Abbeel" ], "title": "Learning neural network policies with guided policy search under unknown dynamics", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Sergey Levine", "Vladlen Koltun" ], "title": "Guided policy search", "venue": "In International Conference on Machine Learning, pp", "year": 2013 }, { "authors": [ "Minne Li", "Pranav Nashikkar", "Jun Wang" ], "title": "Optimizing object-based perception and control by free-energy principle", "venue": "CoRR, abs/1903.01385,", "year": 2019 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "arXiv preprint arXiv:1509.02971,", "year": 2015 }, { "authors": [ "Rudolf Lioutikov", "Alexandros Paraschos", "Jan Peters", "Gerhard Neumann" ], "title": "Sample-based informationltheoretic stochastic optimal control", "venue": "In 2014 IEEE International Conference on Robotics and Automation (ICRA),", "year": 2014 }, { "authors": [ "William Lotter", "Gabriel Kreiman", "David Cox" ], "title": "Deep predictive coding networks for video prediction and unsupervised learning", "venue": "arXiv preprint arXiv:1605.08104,", "year": 2016 }, { "authors": [ "Yuping Luo", "Huazhe Xu", "Yuanzhi Li", "Yuandong Tian", "Trevor Darrell", "Tengyu Ma" ], "title": "Algorithmic framework for model-based deep reinforcement learning with theoretical guarantees", "venue": "arXiv preprint arXiv:1807.03858,", "year": 2018 }, { "authors": [ "Marlos C Machado", "Marc G Bellemare", "Michael Bowling" ], "title": "Count-based exploration with the successor representation", "venue": "arXiv preprint arXiv:1807.11622,", "year": 2018 }, { "authors": [ "Maryam Marashi", "Alireza Khalilian", "Mohammad Ebrahim Shiri" ], "title": "Automatic reward shaping in reinforcement learning using graph analysis", "venue": "In 2012 2nd International eConference on Computer and Knowledge Engineering (ICCKE),", "year": 2012 }, { "authors": [ "Bhaskara Marthi" ], "title": "Automatic shaping and decomposition of reward functions", "venue": "In Proceedings of the 24th International Conference on Machine learning,", "year": 2007 }, { "authors": [ "David Q Mayne", "James B Rawlings", "Christopher V Rao", "Pierre OM Scokaert" ], "title": "Constrained model predictive control: Stability and optimality", "venue": "Automatica, 36(6in):789–814,", "year": 2000 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A 
Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Jun Morimoto", "Christopher G Atkeson" ], "title": "Minimax differential dynamic programming: An application to robust biped walking", "venue": "In Advances in neural information processing systems,", "year": 2003 }, { "authors": [ "Andrew Y Ng" ], "title": "Feature selection, l 1 vs. l 2 regularization, and rotational invariance", "venue": "In Proceedings of the twenty-first international conference on Machine learning,", "year": 2004 }, { "authors": [ "Andrew Y Ng", "Daishi Harada", "Stuart Russell" ], "title": "Policy invariance under reward transformations: Theory and application to reward shaping", "venue": "In ICML,", "year": 1999 }, { "authors": [ "Andrew Y Ng", "Stuart J Russell" ], "title": "Algorithms for inverse reinforcement learning", "venue": "In Icml,", "year": 2000 }, { "authors": [ "Junhyuk Oh", "Satinder Singh", "Honglak Lee", "Pushmeet Kohli" ], "title": "Zero-shot task generalization with multi-task deep reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine LearningVolume", "year": 2017 }, { "authors": [ "Georg Ostrovski", "Marc G Bellemare", "Aäron van den Oord", "Rémi Munos" ], "title": "Count-based exploration with neural density models", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Deepak Pathak", "Pulkit Agrawal", "Alexei A Efros", "Trevor Darrell" ], "title": "Curiosity-driven exploration by selfsupervised prediction", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2017 }, { "authors": [ "Matthias Plappert", "Marcin Andrychowicz", "Alex Ray", "Bob McGrew", "Bowen Baker", "Glenn Powell", "Jonas Schneider", "Josh Tobin", "Maciek Chociej", "Peter Welinder" ], "title": "Multi-goal reinforcement learning: Challenging robotics environments and request for research", "venue": "arXiv preprint arXiv:1802.09464,", "year": 2018 }, { "authors": [ "Nicholas Rhinehart", "Kris M Kitani" ], "title": "First-person activity forecasting with online inverse reinforcement learning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Martin Riedmiller", "Roland Hafner", "Thomas Lampe", "Michael Neunert", "Jonas Degrave", "Tom Van de Wiele", "Volodymyr Mnih", "Nicolas Heess", "Jost Tobias Springenberg" ], "title": "Learning by playing-solving sparse reward tasks from scratch", "venue": "arXiv preprint arXiv:1802.10567,", "year": 2018 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "A possibility for implementing curiosity and boredom in model-building neural controllers", "venue": "In Proc. 
of the international conference on simulation of adaptive behavior: From animals to animats,", "year": 1991 }, { "authors": [ "John Schulman", "Philipp Moritz", "Sergey Levine", "Michael Jordan", "Pieter Abbeel" ], "title": "High-dimensional continuous control using generalized advantage estimation", "venue": "arXiv preprint arXiv:1506.02438,", "year": 2015 }, { "authors": [ "David Silver", "Thomas Hubert", "Julian Schrittwieser", "Ioannis Antonoglou", "Matthew Lai", "Arthur Guez", "Marc Lanctot", "Laurent Sifre", "Dharshan Kumaran", "Thore Graepel" ], "title": "A general reinforcement learning algorithm that masters chess, shogi, and go through self-play", "venue": null, "year": 2018 }, { "authors": [ "Sungryull Sohn", "Junhyuk Oh", "Honglak Lee" ], "title": "Hierarchical reinforcement learning for zero-shot generalization with subtask dependencies", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Bradly C Stadie", "Sergey Levine", "Pieter Abbeel" ], "title": "Incentivizing exploration in reinforcement learning with deep predictive models", "venue": "arXiv preprint arXiv:1507.00814,", "year": 2015 }, { "authors": [ "Alexander L Strehl", "Michael L Littman" ], "title": "An analysis of model-based interval estimation for markov decision processes", "venue": "Journal of Computer and System Sciences,", "year": 2008 }, { "authors": [ "Richard S Sutton" ], "title": "Dyna, an integrated architecture for learning, planning, and reacting", "venue": "ACM Sigart Bulletin,", "year": 1991 }, { "authors": [ "Umar Syed", "Robert E Schapire" ], "title": "A game-theoretic approach to apprenticeship learning", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Haoran Tang", "Rein Houthooft", "Davis Foote", "Adam Stooke", "OpenAI Xi Chen", "Yan Duan", "John Schulman", "Filip DeTurck", "Pieter Abbeel" ], "title": "exploration: A study of count-based exploration for deep reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Voot Tangkaratt", "Syogo Mori", "Tingting Zhao", "Jun Morimoto", "Masashi Sugiyama" ], "title": "Model-based policy gradients with parameter-based exploration by least-squares conditional density estimation", "venue": "Neural networks,", "year": 2014 }, { "authors": [ "Aaron Tucker", "Adam Gleave", "Stuart Russell" ], "title": "Inverse reinforcement learning for video games", "venue": "arXiv preprint arXiv:1810.10593,", "year": 2018 }, { "authors": [ "Dequan Wang", "Coline Devin", "Qi-Zhi Cai", "Fisher Yu", "Trevor Darrell" ], "title": "Deep object-centric policies for autonomous driving", "venue": "In 2019 International Conference on Robotics and Automation (ICRA),", "year": 2019 }, { "authors": [ "Shaojun Wang", "Ronald Rosenfeld", "Yunxin Zhao", "Dale Schuurmans" ], "title": "The latent maximum entropy principle", "venue": "In Proceedings IEEE International Symposium on Information Theory,,", "year": 2002 }, { "authors": [ "Manuel Watter", "Jost Springenberg", "Joschka Boedecker", "Martin Riedmiller" ], "title": "Embed to control: A locally linear latent dynamics model for control from raw images", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Theophane Weber", "Sebastien Racaniere", "David P Reichert", "Lars Buesing", "Arthur Guez", "Danilo Jimenez Rezende", "Adria Puigdomenech Badia", "Oriol Vinyals", "Nicolas Heess", "Yujia Li" ], "title": "Imagination-augmented agents 
for deep reinforcement learning", "venue": "arXiv preprint arXiv:1707.06203,", "year": 2017 }, { "authors": [ "Yuxin Wu", "Yuandong Tian" ], "title": "Training agent for first-person shooter game with actor-critic curriculum", "venue": null, "year": 2016 }, { "authors": [ "Chris Xie", "Sachin Patil", "Teodor Moldovan", "Sergey Levine", "Pieter Abbeel" ], "title": "Model-based reinforcement learning with parametrized physical models and optimism-driven exploration", "venue": "IEEE international conference on robotics and automation (ICRA),", "year": 2016 }, { "authors": [ "Xu Xie", "Changyang Li", "Chi Zhang", "Yixin Zhu", "Song-Chun Zhu" ], "title": "Learning virtual grasp with failed demonstrations via bayesian inverse reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Guangxiang Zhu", "Zhiao Huang", "Chongjie Zhang" ], "title": "Object-oriented dynamics predictor", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "While deep Reinforcement Learning (RL) methods have shown impressive performance on video games (Mnih et al., 2015) and robotics tasks (Schulman et al., 2015; Lillicrap et al., 2015), they solve each problem tabula rasa. Hence, it will be hard for them to generalize to new tasks without re-training even due to small changes. However, humans can quickly adapt their skills to a new task that requires similar priors e.g. physics, semantics, affordances to past experience. The priors can be learned from a spectrum of examples ranging from perfect demonstrative ones that accomplish certain tasks to aimless exploration.\nA parameterized intelligent agent “Mario” which learns to move to the right in the upper level in Figure 1 would fail to transfer the priors from to the lower level in Figure 1 and further play the game in the new level because change of configurations and background, e.g. different shapes of ladder, new fence. When an inexperienced human player is controlling the Mario to move it to the right in the upper level, it might take many trials for him/her to realize the falling to a pit and approaching the “koopa”(turtle) from the left are harmful while standing on the top of the “koopa”(turtle) is not. However, once learned, s/he can infer similar mechanisms in the lower level in Figure 1 without additional trials because human have a variety of priors including the concept of object, similarity, semantics, affordance, etc (Gibson, 2014; Dubey et al., 2018). In this paper, we intend to teach machine agents to realize and utilize useful priors to generalize to new tasks without finetuning.\nToward addressing the generalization problem with learned priors, we follow the intuition that: (1) each trajectory in a video game or a robotics task is composed of state-action pairs with object interactions (2) terminal rewards can be approximated by the aggregation of the scores for each state-action pairs. With those intuitions in mind, we summarize our proposed method in broad strokes. Given a trajectory with a terminal sparse reward, we first parameterize the score of an action-local region pair with a convolutional neural network F and then aggregate the scores to approximate the final sparse reward. To further enable actionable agents to utilize the scores, a neural dynamics model\n1Project page: https://sites.google.com/view/sapnew/home.\ncan be learned from the interaction data using self-supervision. We show that how an agent can take advantage of the scoring function and the learned dynamics model with planning algorithms (Mayne et al., 2000). We adopt the sparse terminal reward setting because in most of the tasks, step-by-step rewards are hard to obtain while final evaluations for trajectories are relatively easy.\nReaders may argue that learning a dense score for every interaction step is reminiscent of Inverse Reinforcement Learning (Ng et al., 2000; Abbeel & Ng, 2004). The distinctions between the proposed method and IRL are threefold: First, instead of learning a reward function of state s, we learn a scoring function of a local state sl and an action a, which is sufficiently rich in a physical environment and experimentally can generalize well. Second, with the scoring function in hand, we use a dynamics model learned from passive data to obtain the actual policy in a model-based manner while IRL needs to re-train an agent that can be as data inefficient as model-free RL. 
However, IRL can have difficulty learning a useful model because the expert demonstrations usually only cover a small portion of the true dynamics. Third, we eliminate the assumption of expensive expert demonstrations at the cost of adding a relatively economical sparse reward at the end. This elimination not only reduces the cost of data collection, but also includes more diverse data to train a robust model.
The proposed scoring function, beyond being a cost function for planning, can also be treated as an indicator of the existence of objects that affect the evaluation of a trajectory. We empirically evaluate the scores for objects extracted in the context of human priors and hence find the potential of using our method as an unsupervised method for object discovery.
In this paper, we make three major contributions. First, we propose a framework that can learn task-agnostic priors that generalize to novel tasks. Second, we incorporate a self-supervised learned dynamics model with the scoring function to learn a useful policy. Third, we demonstrate the effectiveness of the proposed method on a didactic grid-world example, the well-known video game “Super Mario Bros”, and a robotics Blocked-Reach environment, and show our method outperforms various baselines. Last but not least, we find that objects emerge from our method in an unsupervised manner, which could be useful for other visual tasks." }, { "heading": "2 PRELIMINARIES", "text": "In this paper, we formulate each environment as a Markov Decision Process (MDP). We represent the MDP by a tuple (S, A, p, r, γ), where S is the state space and A is the action space. An MDP is fully specified by a state s ∈ S. An MDP evolves with an action a ∈ A by a probability distribution p(s′|s, a). The MDP emits a reward r(s, a) at each step. γ ∈ (0, 1) is the discount factor. Reinforcement learning aims to learn a conditional distribution over the action space given a state, π(·|s), that maximizes the discounted future rewards:
π* = argmax_π E_{a_t ∼ π(a_t|s_t), s_{t+1} ∼ p(s_{t+1}|s_t, a_t)} [ Σ_{t=0}^{∞} γ^t r(s_t, a_t) ]
The state transition probability p(s′|s, a) is treated as unknown in model-free RL problems. Model-based methods explicitly learn a dynamics model M(·|s, a), which specifies the conditional distribution of the next state given the current state s and action a, from environmental interactions.
With an environment model M, one can select an action by rolling out the model recurrently in order to maximize the discounted future reward. One method that approximately finds the optimal action is the Model Predictive Control (MPC) algorithm. It looks ahead for a horizon of H steps and selects an action sequence that maximizes the discounted reward for the future H steps:
argmax_{a_0, ..., a_{H−1}} Σ_{t=0}^{H−1} γ^t r(s_t, a_t)
where s_{t+1} = M(s_t, a_t). To simplify notation, here we assume the environment is deterministic and slightly abuse notation so that M(s_t, a_t) returns the next state instead of a state distribution. We note that MPC can also use the ground-truth environment dynamics p(·|s, a)." },
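As a rough illustration of the MPC procedure just described, a minimal random-shooting sketch in Python; the toy model, reward, and all names are ours, not the paper's implementation:

```python
import numpy as np

def mpc_action(s, model, reward, actions, H=5, n_samples=256, gamma=0.99, rng=None):
    """Random-shooting MPC: sample H-step action sequences, roll them out
    through a deterministic model, return the first action of the best."""
    rng = rng or np.random.default_rng()
    best_a, best_ret = None, -np.inf
    for _ in range(n_samples):
        seq = rng.choice(actions, size=H)
        s_t, ret = s, 0.0
        for t, a in enumerate(seq):
            ret += (gamma ** t) * reward(s_t, a)
            s_t = model(s_t, a)
        if ret > best_ret:
            best_ret, best_a = ret, seq[0]
    return best_a

# Toy 1-D chain: actions shift the state; reward prefers staying near +3.
model = lambda s, a: s + a
reward = lambda s, a: -abs(s + a - 3.0)
print(mpc_action(0.0, model, reward, actions=np.array([-1, 0, 1])))
```

In SAP the per-step reward in this loop is replaced by the learned scoring function, and the model by the learned dynamics model, as described in Section 3.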
" }, { "heading": "3 METHOD", "text": "" }, { "heading": "3.1 PROBLEM FORMULATION", "text": "An intelligent agent should be able to learn priors from its past experiences and to generalize to related yet unseen tasks. To facilitate such goals, we formulate the problem as follows.

The agent is first presented with a bank of exploratory trajectories $\{\tau_i\}, i = 1, 2, \cdots, N$ collected in a training environment E1 = (S, A, p). Each $\tau_i$ is a trajectory $\tau_i = \{(s_t, a_t)\}, t = 1, 2, \cdots, K_i$. These trajectories are random explorations/interactions with the environment. Instead of specifying the task by a per-step reward, as in the standard MDP setting, we propose to evaluate the performance of each trajectory only with a terminal evaluation r(τ) in E1 when a task T is given. At test time, we would like the agent to perform the task T with zero extra interaction in a new but related environment E2 = (S′, A, p′). We assume the test environment E2 can have a different state distribution and related but different dynamics from the experience environment E1; however, the actions an agent can perform stay the same. We evaluate an agent on task T by attaching a single terminal reward $r(\tau_i)$ per trajectory $\tau_i$ to its previous experiences. This reward is used only for evaluation, and the agent can never utilize it while carrying out the task. The proposed formulation requires much less information and is thus more realistic. In this paper, we focus on locomotion tasks with object interactions, such as Super Mario running in the presence of other objects, or a Reacher robot acting with obstacles around it." }, { "heading": "3.2 THE SCORING-AGGREGATING-PLANNING (SAP) FRAMEWORK", "text": "We propose the Scoring-Aggregating-Planning framework to solve this problem. As an overview, we learn a per-step neural scoring function Fθ that scores a sub-region, a subset of the observation space surrounding the agent. A local region score is simply a sum over all the sub-regions in the local region, and the local region scores are aggregated along the trajectory to approximate the terminal sparse reward. Meanwhile, a dynamics model Mφ is learned to approximate the true transition p(·|s, a) from the past experience in E1. After both the scoring function and the dynamics model are learned, we run a Model Predictive Control algorithm to obtain the final policy in the new environment E2 for the task T.

Scoring The per-step sub-region scoring function can be written as $F_\theta(W_l(s_t), a_t)$. Here θ denotes the parameters, W is a function that extracts local regions from states, and l is a sub-region indicator in a metric space L. We note that the neural network's parameters are shared across all sub-regions. The local region score is then $\sum_{l \in L} F_\theta(W_l(s_t), a_t)$. Intuitively, this function measures how well an action $a_t$ performs in the current task, based on the sub-region l extracted from state $s_t$.

We presume that a local region can be extracted for scoring because, in a physical environment, an agent can only interact with the world within its sensing capabilities. From another perspective, the scoring function can be seen as a decomposition of the sparse reward into per-step rewards.

Specifically, on the metric space associated with the problem, we divide the local region around the agent into n sub-regions l. For example, in a Cartesian coordinate system, we can divide each coordinate independently. For each l, we use a sub-region scoring network to produce a scoring table of size |A| × |L|, where |A| is the action dimension and |L| denotes the number of possible relative positions around the agent. One entry is selected from the table as the score for sub-region l, based on the action taken and the relative position of the sub-region to the agent.
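This table lookup can be sketched as follows (a hypothetical fragment for illustration; `score_net` and the data layout are assumptions, not our exact code):

```python
import numpy as np

def local_region_score(score_net, subregions, action_id):
    """Sum the scores of all sub-regions around the agent.  `score_net`
    maps one sub-region's features to a table of shape
    (num_actions, num_relative_positions); `subregions` is a list of
    (features, relative_position_id) pairs."""
    total = 0.0
    for features, pos_id in subregions:
        table = score_net(features)        # shape: |A| x |L|
        total += table[action_id, pos_id]  # entry for (action, position)
    return total
```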
Aggregating To approximate the terminal reward and learn the scoring network, we aggregate the per-step local region scores $\sum_{l \in L} F_\theta(W_l(s_t), a_t)$ into a single aggregated score J by an aggregating function G:

$$J_\theta(\tau) = \mathop{G}_{(s_t, a_t) \in \tau} \Big( \sum_{l \in L} F_\theta(W_l(s_t), a_t) \Big)$$

The aggregated score J is then fitted to the sparse terminal reward. In practice, G is chosen based on the form of the final sparse terminal reward, e.g., a max or a sum function. In the learning process, $F_\theta$ is learned by back-propagating the error between the terminal sparse reward and the predicted J through the aggregation function. In this paper, we use the $\ell_2$ loss, i.e., $\min_\theta \frac{1}{2}(J_\theta(\tau) - r(\tau))^2$.

Planning To solve the task, we use planning algorithms to find optimal actions based on the learned scoring function and a learned dynamics model. As shown in part (c) of Figure 2, we learn a forward dynamics model Mφ from the exploratory data with a supervised loss. Specifically, we train a neural network that takes in the action $a_t$ and state $s_t$ and outputs $\hat{s}_{t+1}$, an estimate of $s_{t+1}$, with the $\ell_2$ objective $\min_\phi \frac{1}{2}(M_\phi(s_t, a_t) - s_{t+1})^2$.

With the learned dynamics model and the scoring function, we solve an optimization problem with the Model Predictive Control (MPC) algorithm to find the best trajectory for a task T in environment E2. The objective is to minimize $-J_\theta(\tau')$, where τ′ is an H-step trajectory obtained by rolling out Mφ from some state $s_i$. Concretely, starting from the current state $s_i$, we randomly sample several action sequences of length up to H. With the dynamics model, we roll out the estimated states $\hat{s}_{i+1}, \cdots, \hat{s}_{i+H}$. The learned scoring function and the aggregation function give an aggregated score for each action sequence; we select the sequence with the best aggregated score, execute its first action in the environment, and repeat the procedure from the new state.
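The scoring loss is straightforward to express end-to-end. The sketch below (in PyTorch, with a sum aggregator; `score_net` and the trajectory layout are assumptions carried over from the previous sketch) back-propagates the terminal-reward error through the aggregation:

```python
import torch

def score_training_loss(score_net, trajectory, terminal_reward):
    """L2 loss between the aggregated score J(tau) and the terminal
    reward r(tau), with the aggregator G chosen as a sum.
    `trajectory` is a list of (subregions, action_id) pairs, where
    `subregions` is a list of (features, relative_position_id) and
    `score_net` returns a torch tensor of shape |A| x |L|."""
    J = 0.0
    for subregions, action_id in trajectory:
        for features, pos_id in subregions:
            table = score_net(features)      # |A| x |L| score table
            J = J + table[action_id, pos_id]
    return 0.5 * (J - terminal_reward) ** 2  # gradient flows through G
```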
" }, { "heading": "4 RELATED WORK", "text": "Inverse Reinforcement Learning. The seminal work of Ng et al. (2000) proposed inverse reinforcement learning (IRL), which aims to learn a reward function from a set of expert demonstrations. The original IRL was demonstrated on physical state representations, while recent work (Tucker et al., 2018; Rhinehart & Kitani, 2017) has attempted to extend it to visual states. Although IRL and SAP both learn functions from a set of off-policy data, they fundamentally study different problems — IRL learns, from expert demonstrations, a reward function that can be used by a model-free RL algorithm, while our method learns from exploratory data that is not necessarily related to any task. Some works deal with violations of the assumptions of IRL, such as inaccurate perception of the state (Bogert & Doshi, 2015; Wang et al., 2002; Bogert et al., 2016; Choi & Kim, 2011) or an incomplete dynamics model (Syed & Schapire, 2008; Bogert & Doshi, 2015; Levine & Abbeel, 2014; Bagnell et al., 2007; Ng, 2004); however, IRL does not study the case where the dynamics model is purely learned and the demonstrations are suboptimal. Recent work (Xie et al., 2019) proposed to leverage failed demonstrations with model-free IRL to perform grasping tasks; though it shares some intuition with ours, our work differs in its model-based nature.

Reward Shaping. Ng et al. (1999) studied the problem of reward shaping, i.e., how to change the form of the reward without affecting the optimal policy. The scoring-aggregating part of our method can also be thought of as a novel form of reward shaping where the reward functions are learned automatically. Most efforts in reward shaping require careful manual design (OpenAI, 2018; Wu & Tian, 2016). A corpus of literature (Marthi, 2007; Grześ & Kudenko, 2010; Marashi et al., 2012) tries to learn reward shaping automatically. Marthi (2007) assumes that the state space can be abstracted, such that one can form an abstracted MDP that can be solved exactly. Other automatic reward shaping methods, such as Grześ & Kudenko (2010) and Marashi et al. (2012), try to build a graph on top of discretized states. However, these methods do not apply to high-dimensional inputs such as images, while our SAP framework does. The recent RUDDER (Arjona-Medina et al., 2018) utilizes an LSTM to decompose rewards into per-step rewards; this method is orthogonal and complementary to our framework.

RL with Sparse Rewards. When only sparse rewards are provided, an RL agent suffers from a harder exploration problem. In the literature, there are mainly three categories of methods to deal with this problem in high-dimensional tasks: (1) unsupervised exploration strategies, such as curiosity-driven exploration (Pathak et al., 2017; Burda et al., 2018a;b; Schmidhuber, 1991; Stadie et al., 2015; Achiam & Sastry, 2017) or count-based exploration (Tang et al., 2017; Strehl & Littman, 2008; Bellemare et al., 2016; Fu et al., 2017; Ostrovski et al., 2017; Machado et al., 2018; Choshen et al., 2018), which solve the sparse reward problem through more efficient exploration; (2) in goal-conditioned tasks, such as pushing an object to some random location, Hindsight Experience Replay (Andrychowicz et al., 2017), which learns from experiences with different goals; (3) more generally, auxiliary tasks defined to learn meaningful intermediate representations (Huang et al., 2019a; Silver et al., 2018; Dosovitskiy & Koltun, 2016; Riedmiller et al., 2018; Agarwal et al., 2019). Unlike these methods, we approach the problem by learning a scoring function for each timestep based on the single terminal reward. This effectively converts the single terminal reward into a set of rich intermediate representations, on top of which we can apply planning algorithms such as MPC.

Model-Based RL. In the planning part of our SAP framework, we train a dynamics model; i.e., our method falls under the umbrella of model-based algorithms (Sutton, 1991). This idea has been widely studied in robotics (Deisenroth et al., 2013; Deisenroth & Rasmussen, 2011; Morimoto & Atkeson, 2003; Deisenroth et al., 2011). This line of work uses a variety of methods to learn an accurate dynamics model, ranging from Gaussian processes (Ko & Fox, 2009), time-varying linear models (Levine & Koltun, 2013; Lioutikov et al., 2014; Xie et al., 2016), and mixtures of Gaussians (Khansari-Zadeh & Billard, 2011) to neural networks (Hunt et al., 1992; Tangkaratt et al., 2014; Kurutach et al., 2018; Chua et al., 2018; Luo et al., 2018; Buckman et al., 2018).
This paradigm has been applied to high-dimensional spaces, such as simulated and real robotic applications (Watter et al., 2015; Finn et al., 2016b; Hafner et al., 2018) and Atari games (Kaiser et al., 2019; Weber et al., 2017). Although model-based RL has been studied extensively, none of the previous work has explored combining it with learning dense, task-agnostic scores from sparse signals.

Zero-Shot Generalization and Priors. Prior knowledge comes from previous experience, including interactions with objects. Recently, researchers have shown the importance of priors in playing video games (Dubey et al., 2018). More work has utilized visual priors such as objects in other domains, e.g., robotics, for generalization (Wang et al., 2019; Jang et al., 2018; Devin et al., 2018; Zhu et al., 2018; Du & Narasimhan, 2019). Keramati et al. (2018); Li et al. (2019); Higgins et al. (2017) explicitly extended RL to handle object-level learning. While our method does not explicitly model objects, we have shown that meaningful scores are learned for objects in our SAP framework, which explains why our method generalizes to new tasks without any finetuning. Other works (Sohn et al., 2018; Oh et al., 2017) try to learn compositional skills that can be transferred to new tasks, which is orthogonal and complementary to the proposed method." }, { "heading": "5 EXPERIMENT", "text": "In this section, we study how well our SAP framework performs compared to other methods, and the roles of the various components of the framework. We conduct experiments on three environments in different domains: a didactic gridworld task, the video game “Super Mario Bros” (Kauten, 2018), and a robotics blocked-reacher task (Huang et al., 2019b). Environment details, architectures, and hyper-parameters are described thoroughly in Appendix A." }, { "heading": "5.1 DIDACTIC EXAMPLE: HIDDEN REWARD GRIDWORLD", "text": "In order to investigate whether the proposed framework can learn meaningful scores and hence induce a correct policy, we start with a simple didactic task, Hidden Reward Gridworld, where the environment matches the assumptions of our method. This environment reveals to what extent our method can recover the per-step scores. Figure 3a shows an illustrative example of the Hidden Reward Gridworld. In the grid world, there is an object at each location, and each type of object is worth some unknown number of points. The agent has already explored some trajectories, and only the sum of points is known at the end of each trajectory. It needs to learn the value of each object and collect as much value as possible in a new environment with different object configurations. In our experiment, we use an 8 × 8 grid with 16 different types of objects, worth 0, 1, ..., 15 points respectively. To make the task more challenging, instead of giving the identity of each object as an object id, we generate a 16-dimensional noisy feature for each type of object. The noisy feature representation of objects mimics the output of a perception network on visual inputs.

On this task, our method operates as follows. We use a two-layer fully connected neural network to predict the per-step score from the 16-dimensional feature. The per-step scores are aggregated by a sum operator and fitted to the training data. We opt to use the true environment dynamics, since in this environment the dynamics could in any case be perfectly learned by a tabular function.
As shown in Table 1, we find that a neural network can fit the object values from the features with an error of 0.04 in the training environment and 0.34 on a new task, even in the presence of feature noise. To see how well our method performs on this task, we train two behavior cloning agents: one imitates the exploratory data (denoted BC-random) and the other imitates the SAP behavior in the training environment (denoted BC-SAP). As shown in Table 1, BC-random has far inferior performance, since it clones the exploration behavior, which does not maximize the value collected. BC-SAP performs as well as SAP in the training environment but worse than SAP in the new environment. This shows that even when allowed to clone the behavior of SAP, behavior cloning still does not generalize as well as our method in a new environment.

In this grid world environment, we have shown that SAP is capable of learning accurate scoring functions, even in the presence of object feature noise. We have also demonstrated that SAP generalizes better than the alternative algorithms." }, { "heading": "5.2 SAP ON SUPER MARIO BROS WITH SPARSE REWARDS", "text": "To evaluate our proposed algorithm in a more realistic environment, we run SAP and a few alternative methods in the Super Mario Bros environment. Here we have a clear definition of neither the dense per-step score nor the aggregation procedure. The environment also features high-dimensional visual observations, which is more challenging since the hypothesis space is larger. The original game has a 240 × 256 image input and a discrete action space with 5 choices. We wrap the environment following Mnih et al. (2015) and the details described in Appendix A.2, finally obtaining an 84 × 84, 4-frame-stacked gray-scale observation. The goal of the agent is to survive and move as far to the right as possible. The environment returns how far the agent traveled to the right at the end of a trajectory as the delayed terminal sparse reward.

We apply our SAP framework as follows. We first divide the local region around the agent into eight 12-by-12-pixel sub-regions based on relative position, as illustrated in Figure 7 in the Appendix. Each sub-region is scored by a CNN with a final FC layer that outputs a score matrix of shape dim(action) × dim(relative position), i.e., 5 × 8. An action selector and a sub-region selector then jointly select the row corresponding to the agent's action and the column corresponding to the relative position. The sum of all sub-region scores forms the local region score, and we minimize the $\ell_2$ loss between the local region scores aggregated along the trajectory and the terminal reward. A dynamics model is also learned by training another CNN. It takes in a 30 × 30 crop around the agent, the agent's location, as well as a one-hot action vector. Instead of outputting a full generated image, we only predict the future location of the agent, recursively. We avoid video-prediction models because they suffer from blurriness when predicting the long-term future (Lotter et al., 2016; Finn et al., 2016a). We plan with the learned scores and dynamics model using a standard MPC algorithm with random actions that looks ahead 10 steps.
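The sub-region extraction can be pictured with a short sketch (illustrative only; the exact layout of the eight relative positions is defined by Figure 7, and the offset convention below is an assumption):

```python
import numpy as np

def mario_subregions(frame, agent_xy, size=12):
    """Crop the eight 12x12 sub-regions surrounding the agent from one
    84x84 gray-scale frame, zero-padding at the borders.  Returns a
    list of (crop, relative_position_id) pairs."""
    x, y = agent_xy
    padded = np.pad(frame, size, mode="constant")
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    crops = []
    for pos_id, (dy, dx) in enumerate(offsets):
        cy, cx = y + size + dy * size, x + size + dx * size
        crops.append((padded[cy:cy + size, cx:cx + size], pos_id))
    return crops
```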
" }, { "heading": "5.2.1 COMPARISONS", "text": "We compare our method with the following methods:

Exploration Data The exploration data is the data from which we learn the scores and the dynamics model, and which we imitate. It is collected from a suboptimal policy described in Appendix A.2.5. The average reward on this dataset is a baseline for all other methods. It is omitted for new tasks because we only know the performance in the environment where the data was collected.

Behavioral Cloning Behavioral Cloning (BC) learns a mapping from a state to an action on the exploration data using supervised learning. We use a cross-entropy loss for predicting the actions.

DARLA (Higgins et al., 2017) DARLA relies on learning a latent state representation that can be transferred from the training environments to the testing environment. It achieves this goal by obtaining a disentangled representation of the environment's generative factors before learning to act. We use the latent representation as the observations for a behavioral cloning agent.

Naive Human Priors The Naive Human Priors method (NHP) combines model predictive control with predefined naive human priors: a score of +1 if the agent tries to move or jump toward the right, and 0 otherwise. NHP replaces the scoring-aggregating step of our method with a manually defined prior. We note that it might be hard to design such priors for other tasks.

In Figure 3b, the results show that the proposed SAP model outperforms all baselines by a large margin on the level it is trained on. We believe there are two major reasons for BC's unsatisfactory performance: (1) we only have access to the exploration data, which is suboptimal for the task; (2) it fails to generalize when it encounters rare events in the game (the exploration data hardly reaches the ending part). SAP also outperforms DARLA on both the training and generalization tasks. We believe this is because learning disentangled representations on the Mario games is hard, since observations can change dramatically between game settings. Comparing SAP with NHP, we demonstrate that the learned priors give the agent a better understanding of the world and thus stronger performance. We show qualitative results in subsection 5.2.3 to validate that the learned priors contain meaningful scores that lead to better actions. SAP also outperforms all baselines on an unseen level without any finetuning (Fig. 3c), which shows that it generalizes well." }, { "heading": "5.2.2 ABLATIVE STUDIES", "text": "Figure 4: Ablative results. (a), (c): SAP on W1S1 and W2S1, respectively, with the groundtruth dynamics model. (b), (d): the same as (a), (c) but without the true “done” signal. Error bars are 95% confidence intervals. With a perfect dynamics model, the performance of both model-based methods is boosted. However, even with the “done” signal disabled, SAP still works well, while NHP performs significantly worse than before.

Towards understanding the effect of the different components of SAP, we investigate the performance of an agent using learned scores with a perfect dynamics model, which shows the upper bound of the improvement obtainable from better models. We further investigate how much the “done” signal from the perfect model helps the performance. Hence, we perform two main ablative experiments:

Groundtruth Model: We apply the SAP method with groundtruth dynamics.
Note that this setting is only feasible for video games and simulated environments; however, it gives an upper bound on what the proposed method could achieve by continually improving the dynamics model. In Fig. 4a and Fig. 4c, we find that with a perfect dynamics model both NHP and SAP get a performance boost on the original task and on a novel task, while SAP still outperforms the baselines by a large margin. This suggests SAP can perform better if the dynamics model is further improved.

No Done Signal: We hypothesize that the “done” signal from the perfect model contributes to the superior performance because it naturally tells an MPC algorithm not to select a short trajectory when the reward at every step is positive (e.g., in NHP). In Fig. 4b and Fig. 4d, we see that when the “done” signal is not provided, SAP preserves its performance and outperforms all the baselines. However, NHP drops significantly below the return of the exploration data, which indicates that NHP heavily relies on the “done” signal.

More ablation studies, e.g., on the planning horizon and on the visual representation, are given in Appendix A.2.6 and A.2.7." }, { "heading": "5.2.3 VISUALIZATION OF LEARNED SCORES AND ACTIONS", "text": "In this section, we qualitatively study the actions induced by greedily maximizing the one-step score. These actions can be suboptimal and differ from the policy described above, because greedy actions only consider the one-step score; nevertheless, they still capture the behavior of the policy.

We visualize the computed actions on World 5 Stage 1 (Figure 5), which is visually different from the previous tasks. In this testing case, we see that the actions are reasonable, such as avoiding obstacles and monsters by jumping over them, even in the face of previously unseen configurations and different backgrounds. However, the “Piranha Plants” are not recognized, because all the prior scores are learned from W1S1, where they never appear. More visualizations of action maps are available in Appendix A.2.8.

Additionally, we visualize the prior scores for different representative local sub-regions in Appendix A.2.8, Figure 9. In this setting, we synthetically put the agent at different relative positions near an object. We find that our method learns meaningful scores, such as assigning low scores to walking toward a “koopa” (turtle) and a high score to jumping.

These qualitative studies further demonstrate that the SAP method can assign meaningful scores to different objects in an unsupervised manner. It also produces good actions even in a new environment." }, { "heading": "5.3 SAP ON THE 3-D ROBOTICS TASK", "text": "In this section, we further study the SAP method to understand its properties in a higher-dimensional observation space. We conduct experiments in a 3-D robotics environment, BlockedReacher-v0. In this environment, a robot hand is initialized at the left side of a table and tries to reach the right side. Between the robot hand and the goal there are a few blocks standing as obstacles. The task is to move the robot hand to reach a point on y = 1.0 as fast as possible. To test generalization, we create four different configurations of the obstacles, as shown in Figure 6. Figure 6a is the environment from which we collect exploration data, and Figures 6b, c, and d are the testing environments.

We apply the SAP framework as follows. The original observation is a 25-dimensional continuous state and the action space is a 3-dimensional continuous control. They are discretized into voxels and 8 discrete actions, as described in Appendix A.3.1.
In this environment, the local region is set to a 15 × 15 × 15 cube of voxels around the robot hand's end effector. We divide this cube into 27 sub-regions of size 5 × 5 × 5. The scoring function is a fully connected neural network that takes in a flattened voxel sub-region and outputs a score matrix of shape 26 × 8. The scores for each step are aggregated by a sum operator along the trajectory. We also train a 3D convolutional neural net as the dynamics model; it takes in a 15 × 15 × 15 local region as well as an action, and outputs the next robot hand location. With the learned scores and the dynamics model, we plan using the MPC method with a horizon of 8 steps.

We run baselines similar to those in the previous section, detailed in Appendix A.3.4. In Table 2, we compare our method with the NHP method on the 3D robot reaching task. We find that our method needs significantly fewer steps than NHP, both in the training environment and in the testing ones. SAP moves to the right significantly faster because it learns negative scores for the obstacles, whereas NHP, which receives +1 for each meter moved to the right, gets stuck at the obstacles for longer. We found the learned dynamics model to be relatively accurate in this domain, such that the performance with the learned dynamics is close to that with perfect dynamics. These experiments show that our method can be applied to robotics environments, which can be hard for some algorithms due to their 3-D nature. Moreover, we demonstrate again that using partial states (local regions) with SAP generalizes better than the baselines." }, { "heading": "6 CONCLUSION", "text": "We devise a novel Scoring-Aggregating-Planning (SAP) framework for designing algorithms that can learn generalizable priors from exploration and sparse rewards for novel tasks. We find that the proposed method captures transferable priors and takes advantage of them without any finetuning. Experimental results also show that algorithms designed following the SAP framework outperform a variety of baseline methods on both training and unseen testing tasks in different application domains.

While this paper explores some applications of the SAP framework, many compelling questions remain open. Improving each component of the framework is a welcome avenue for future work. For example, complex tasks may involve priors beyond contact forces and game dynamics, with much more complicated action spaces; how to extract relational priors from them to solve novel tasks is yet to be explored. Dubey et al. (2018) thoroughly studied existing human priors in video game playing; however, it is still not clear how to select the correct priors for real-world applications in an SAP framework (e.g., fire is useful at a distance but harmful when one is too close).

There are many interaction samples in the real world, but most of them are suboptimal; however, they can be evaluated with a task-specific score or by human evaluation. Our method excels in this setting. In theory, it can be extended to the case where a binary sparse reward is given, by carefully choosing an aggregator such as a logical operator, given sufficient samples. We leave these extensions for future work." }, { "heading": "A EXPERIMENT SPECS", "text": "" }, { "heading": "A.1 HIDDEN REWARD GRIDWORLD", "text": "" }, { "heading": "A.1.1 ENVIRONMENT.", "text": "In Figure 3a, we visualize a sample of the gridworld environment. Each entry corresponds to a noised feature vector based on the type of object in it. Each feature is a length-16 vector whose entries are uniformly sampled from [0, 1]. To each feature we add a small random noise drawn from a normal distribution with µ = 0, σ = 0.05. The outer-most entries correspond to padding objects whose rewards are 0. The action space consists of moves in the four directions: up, down, left, and right. If an agent attempts to take an action that leads outside the grid, the action is ignored by the environment.
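The environment's feature generation can be reproduced in a few lines (a sketch under the stated assumptions; the names and sampling layout are illustrative):

```python
import numpy as np

def make_gridworld(n=8, n_types=16, feat_dim=16, noise_std=0.05, seed=0):
    """Sample an 8x8 hidden-reward grid: each cell holds a noisy 16-dim
    feature of its object type, and an object of type k is worth k
    points (hidden from the agent)."""
    rng = np.random.RandomState(seed)
    prototypes = rng.uniform(0.0, 1.0, size=(n_types, feat_dim))
    types = rng.randint(n_types, size=(n, n))            # hidden object ids
    features = prototypes[types] + rng.normal(
        0.0, noise_std, size=(n, n, feat_dim))           # noisy observations
    values = types.astype(float)                          # per-cell points
    return features, values
```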
" }, { "heading": "A.1.2 ARCHITECTURES FOR SCORE FUNCTION AND DYNAMICS MODEL.", "text": "We train a two-layer fully connected neural network with 32 and 16 hidden units respectively and ReLU activations to approximate the score of each grid cell.

In this environment, we do not use a learned dynamics model.

Hyperparameters. During training, we use an Adam optimizer with learning rate 1e-3, β1 = 0.9, β2 = 0.999. The learning rate is reduced to 1e-4 after 30000 iterations. The batch size is 128. We use horizon = 4 as our planning horizon." }, { "heading": "A.2 SUPER MARIO BROS", "text": "" }, { "heading": "A.2.1 ENVIRONMENT.", "text": "We wrap the original Super Mario environments with additional wrappers. We wrap the action space into 5 discrete joypad actions: none, walk right, jump right, run right, and hyper jump right. Following Burda et al. (2018b), we add a sticky-action wrapper that repeats the last action with a probability of 20%. Besides this, we add the standard wrappers as in past work (Mnih et al., 2015)." }, { "heading": "A.2.2 ARCHITECTURES FOR SCORE FUNCTION AND DYNAMICS MODEL.", "text": "For the score function, we train a CNN that takes each 12px-by-12px sub-region as input, with 2 conv layers and 1 hidden fully connected layer. Each conv layer uses a 3-by-3 filter with stride 2, with 8 and 16 output channels respectively. “Same” padding is used for each conv layer. The fully connected layer has 128 units. ReLU activations are applied everywhere except the last layer.

For the dynamics model, we train a neural network with the following inputs: (a) the 30-by-30 local observation around Mario; (b) the current action along with the 3 most recent actions, encoded as one-hot vectors; (c) the 3 most recent position shifts; and (d) a one-hot encoding of the current planning step. Input (a) is encoded with 4 sequential conv layers with kernel size 3 and stride 2, whose output channels are 8, 16, 32, and 64 respectively; global max pooling follows the conv layers. Inputs (b), (c), and (d) are each encoded with a 64-unit FC layer. The encoded results are then concatenated and passed through a 128-unit hidden FC layer. This layer connects to two output heads: one predicting the shift in location and one predicting “done” with a sigmoid activation. ReLU activations are used for all intermediate layers." }, { "heading": "A.2.3 HYPERPARAMETERS.", "text": "During training, we use an Adam optimizer with learning rate 3e-4, β1 = 0.9, β2 = 0.999. The batch size is 256 for score function training and 64 for the dynamics model. We use horizon = 10 as our planning horizon, a discount factor γ = 0.95, and 128 environments in our MPC.
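A faithful-in-spirit sketch of this dynamics network is given below (PyTorch; the input sizes, padding choice, and one-hot widths are assumptions consistent with the description above, not released code):

```python
import torch
import torch.nn as nn

class MarioDynamics(nn.Module):
    """Conv encoder for the 30x30 local crop plus FC encoders for recent
    actions, recent position shifts, and the planning step; two heads
    predict the position shift and a 'done' probability."""
    def __init__(self, n_actions=5, horizon=10):
        super().__init__()
        chans = [1, 8, 16, 32, 64]
        self.conv = nn.Sequential(*[
            m for i in range(4) for m in
            (nn.Conv2d(chans[i], chans[i + 1], 3, stride=2, padding=1),
             nn.ReLU())
        ])
        self.enc_act = nn.Sequential(nn.Linear(4 * n_actions, 64), nn.ReLU())
        self.enc_pos = nn.Sequential(nn.Linear(3 * 2, 64), nn.ReLU())
        self.enc_step = nn.Sequential(nn.Linear(horizon, 64), nn.ReLU())
        self.fc = nn.Sequential(nn.Linear(4 * 64, 128), nn.ReLU())
        self.head_delta = nn.Linear(128, 2)   # predicted (dx, dy)
        self.head_done = nn.Linear(128, 1)    # 'done' logit

    def forward(self, crop, acts, shifts, step):
        z = self.conv(crop).amax(dim=(2, 3))  # global max pooling
        z = torch.cat([z, self.enc_act(acts),
                       self.enc_pos(shifts), self.enc_step(step)], dim=1)
        z = self.fc(z)
        return self.head_delta(z), torch.sigmoid(self.head_done(z))
```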
" }, { "heading": "A.2.4 MORE ON TRAINING", "text": "In the scoring function training, each data point is a tuple of a down-sampled trajectory and a calculated score. We down-sample the trajectories in the exploration data by taking data from every two steps. Half of the trajectories end with a “done” (death) event and half do not. For those ending with “done”, the score is the distance Mario has traveled by the end. For the other trajectories, the score is the distance traveled by the end plus a mean future score. The mean future score of a trajectory is defined as the average extra distance traveled by trajectories that are longer (in terms of distance) than the given one. We note that all of this information is contained in the exploration data." }, { "heading": "A.2.5 MORE DETAILS ON BASELINES", "text": "Behavioral Cloning (BC). As Super Mario is a deterministic environment, we noticed that pure behavior cloning trivially gets stuck at a tube at the very beginning of level 1-1 and dies at an early stage of 2-1. Thus we select actions by sampling from the output logits instead of taking the argmax.

Exploration Data. We train a policy with only curiosity as the reward (Pathak et al., 2017). However, we stop the training early, after 5e7 steps, which is far from the convergence at 1e9 steps. We further add ε-greedy noise when sampling demonstrations, with ε = 0.4 for 20000 episodes and ε = 0.2 for 10000 episodes." }, { "heading": "A.2.6 ADDITIONAL ABLATIONS", "text": "Ablation of Planning Steps. In this section, we conduct additional ablative experiments to evaluate the effect of the planning horizon in the MPC method. In Table 3, we see that our method fluctuates only within a relatively small range for different planning horizons and consistently outperforms the baselines. In the main paper, we choose horizon = 10. We find that for larger horizons, such as 12, the performance does not improve monotonically; this might be due to the difficulty of predicting the long-range future with a learned dynamics model.

Ablation of Visual Representation. In this section, we conduct experiments to evaluate the effect of the proposed visual representation — local sub-regions. As a comparison, we include a variant that takes the whole local region as input and outputs a score conditioned on actions. In Table 4, we see that the local sub-regions contribute to both the training performance and the zero-shot generalization performance. However, we also find that even without the sub-regions, SAP still outperforms our second strongest baseline, because the scoring and planning steps still give the agent the ability to learn and generalize.

Model Dissection: To further understand each component of SAP, we ablate the scoring-aggregating component and the planning component. The NHP method uses a manually designed scoring function with the original planning component. We further evaluate the SAP-3-step and Greedy methods, which use only 3 planning steps and no planning, respectively. In Table 5, we observe that without the scoring-aggregating component or the planning component the performance drops significantly. This shows that all components of SAP are critical to the performance." }, { "heading": "A.2.7 ADDITIONAL BASELINES", "text": "In this section, we compare SAP with more baselines. The first baseline is privileged BC: we collected 8000 near-optimal trajectories (average score 1833.0) from the training environment and then trained an imitation learning agent that mimics the near-optimal data. We note that this baseline is not a fair comparison, because SAP only utilizes random exploratory data; we present it to test the generalization ability of an imitative agent that performs well on the training environment.
The second baseline is a reinforcement learning agent trained with a curiosity-driven reward (Burda et al., 2018b) plus the final sparse reward. We limit training to 10M steps. This also violates our setting, as the agent interacts with the environment; we conduct this experiment to test the generalization ability of a reinforcement learning agent. In Table 6, we see that both baselines suffer a large drop on the generalization task compared to SAP." }, { "heading": "A.2.8 ADDITIONAL VISUALIZATION", "text": "In this section, we present additional visualizations for qualitative study. In Figure 8, we see that on a few randomly sampled frames, even the greedy action is meaningful in most cases: the agent intends to jump over obstacles and to avoid dangerous monsters.

In Figure 9, we show the scores of given state-action pairs and find that the scores are consistent with human priors. For example, in Figure 9a, we synthetically put the Mario agent at 8 relative positions around a “koopa”, conditioned on the action “move right”. The score is significantly lower when the agent is to the left of the “koopa” than at the other positions. Figure 9b uses the same setup as Figure 9a but is conditioned on the action “jump”. Comparing Figure 9a and Figure 9b, the left-position score in Figure 9b is smaller than that in Figure 9a, which is consistent with human priors. In Figure 9c and Figure 9d, we substitute the “koopa” with the ground. We find that in both figures the scores for the top position are similar, which means there is not much difference between the actions there." }, { "heading": "A.3 ROBOTICS BLOCKED REACH", "text": "" }, { "heading": "A.3.1 ENVIRONMENT.", "text": "In the Blocked Reach environment, a 7-DoF robot arm is manipulated for a specific task. For more details, we refer the reader to Plappert et al. (2018). We discretize the robot world into a 200 × 200 × 200 voxel cube. For the action space, we discretize the actions into two choices per dimension, moving 0.5 or -0.5; hence, in total there are 8 actions. We design four configurations for evaluating the different methods, as shown in Figure 6. In each configuration, three objects are placed in the middle as obstacles. The heights of the objects in the four configurations are (0.05, 0.1, 0.08), (0.1, 0.05, 0.08), (0.12, 1.12, 0.12), and (0.07, 0.11, 0.12)." }, { "heading": "A.3.2 ARCHITECTURES FOR SCORE FUNCTION AND DYNAMICS MODEL.", "text": "For the score function, we train a fully connected neural network with one hidden layer of 128 units. We use ReLU activations except for the last layer. Note that the input 5 × 5 × 5 voxels are flattened before being fed into the scoring network.

For the dynamics model, we train a 3D convolutional neural network that takes in a local region (voxels), an action, and the last three position changes. The 15 × 15 × 15 local voxels are encoded using three 3D convolutions with kernel size 3 and stride 2, with 16, 32, and 64 channels respectively. A 64-unit FC layer is connected to the flattened features after the convolutions. The action is encoded as a one-hot vector connected to a 64-unit FC layer. The last three δ-positions are also encoded with a 64-unit FC layer. The three encoded features are concatenated, passed through a 128-unit hidden FC layer, and the network outputs the predicted change in position. All intermediate layers use ReLU activations.
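The voxel sub-region split described above can be sketched as follows (an illustrative fragment; the position-indexing scheme is an assumption):

```python
import numpy as np

def voxel_subregions(local_voxels, sub=5):
    """Split the 15x15x15 voxel cube around the end effector into 27
    non-overlapping 5x5x5 sub-regions, each flattened for the scoring
    MLP.  Returns (features, relative_position_ids)."""
    feats, pos_ids = [], []
    for i in range(3):
        for j in range(3):
            for k in range(3):
                block = local_voxels[i * sub:(i + 1) * sub,
                                     j * sub:(j + 1) * sub,
                                     k * sub:(k + 1) * sub]
                feats.append(block.reshape(-1))  # 125-dim input
                pos_ids.append(i * 9 + j * 3 + k)
    return np.stack(feats), np.array(pos_ids)
```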
}, { "heading": "A.3.3 HYPERPARAMETERS.", "text": "During training, we use an adam optimizer with learning rate 3e-4, β1 = 0.9, β2 = 0.999. The batchsize is 128 for score function training and 64 for dynamics model. We use horizon = 8 as our planning horizon." }, { "heading": "A.3.4 BASELINES", "text": "Our naive human prior baseline in the blocked robot environment is a 8-step MPC where score for each step is the y component of the action vector at that step.\nWe omit the Behavioral cloning baselines, which imitates exploration data, as a consequence of two previous results." } ]
2019
SCORING-AGGREGATING-PLANNING: LEARNING TASK-AGNOSTIC PRIORS FROM INTERACTIONS AND SPARSE REWARDS FOR ZERO-SHOT GENERALIZATION
SP:9fdc1a88425fd5d103163f7bbcbafc7ca7fe81be
[ "This paper provides a novel solution to the variable sparsity problem, where the output of neural networks biased with respect to the number of missing inputs. The authors proposed a sparsity normalization algorithm to process the input vectors to encounter the bias. In experiments, the authors evaluated the proposed sparsity normalization model on multiple datasets: collaborative filtering datasets, electric medical records datasets, single-cell RNA sequence datasets and UCI datasets. Results show that the proposed normalization method improves the prediction performance and the predicted values of the neural network is more uniformly distributed according to the number of missing entries.", "This paper studies a very interesting phenomena in machine learning called VSP, that is the output of the model is highly affected via the level of missing values in its input. The authors demonstrate the existence of such phenomena empirically, analyze the root cause for it theoretically, and propose a simple yet effective normalization method to tackle the problem. Several experiments demonstrate the effectiveness of this method." ]
Handling missing data is one of the most fundamental problems in machine learning. Among many approaches, the simplest and most intuitive way is zero imputation, which treats the value of a missing entry simply as zero. However, many studies have experimentally confirmed that zero imputation results in suboptimal performances in training neural networks. Yet, none of the existing work has explained what brings such performance degradations. In this paper, we introduce the variable sparsity problem (VSP), which describes a phenomenon where the output of a predictive model largely varies with respect to the rate of missingness in the given input, and show that it adversarially affects the model performance. We first theoretically analyze this phenomenon and propose a simple yet effective technique to handle missingness, which we refer to as Sparsity Normalization (SN), that directly targets and resolves the VSP. We further experimentally validate SN on diverse benchmark datasets, to show that debiasing the effect of input-level sparsity improves the performance and stabilizes the training of neural networks.
[ { "affiliations": [], "name": "Joonyoung Yi" }, { "affiliations": [], "name": "Juhyuk Lee" }, { "affiliations": [], "name": "Kwang Joon Kim" }, { "affiliations": [], "name": "Sung Ju Hwang" }, { "affiliations": [], "name": "Eunho Yang" } ]
[ { "authors": [ "Philip Bachman", "Alessandro Sordoni", "Adam Trischler" ], "title": "Learning algorithms for active learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Rianne van den Berg", "Thomas N Kipf", "Max Welling" ], "title": "Graph convolutional matrix completion", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining Deep Learning Day. ACM,", "year": 2018 }, { "authors": [ "S van Buuren", "Karin Groothuis-Oudshoorn" ], "title": "mice: Multivariate imputation by chained equations in r", "venue": "Journal of statistical software,", "year": 2010 }, { "authors": [ "Wei Cao", "Dong Wang", "Jian Li", "Hao Zhou", "Lei Li", "Yitan Li" ], "title": "Brits: Bidirectional recurrent imputation for time series", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Zhengping Che", "Sanjay Purushotham", "Kyunghyun Cho", "David Sontag", "Yan Liu" ], "title": "Recurrent neural networks for multivariate time series with missing values", "venue": "Scientific reports,", "year": 2018 }, { "authors": [ "Chao Chen", "Dongsheng Li", "Qin Lv", "Junchi Yan", "Stephen M Chu", "Li Shang" ], "title": "Mpma: Mixture probabilistic matrix approximation for collaborative filtering", "venue": "In IJCAI,", "year": 2016 }, { "authors": [ "Chao Chen", "Dongsheng Li", "Qin Lv", "Junchi Yan", "Li Shang", "Stephen M Chu" ], "title": "Gloma: Embedding global information in local matrix approximation models for collaborative filtering", "venue": "In Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Djork-Arné Clevert", "Thomas Unterthiner", "Sepp Hochreiter" ], "title": "Fast and accurate deep network learning by exponential linear units (elus)", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Chao Du", "Chongxuan Li", "Yin Zheng", "Jun Zhu", "Bo Zhang" ], "title": "Collaborative filtering with user-item co-autoregressive models", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Charles Dugas", "Yoshua Bengio", "François Bélisle", "Claude Nadeau", "René Garcia" ], "title": "Incorporating second-order functional knowledge for better option pricing", "venue": "In Advances in neural information processing systems,", "year": 2001 }, { "authors": [ "Gintare Karolina Dziugaite", "Daniel M Roy" ], "title": "Neural network matrix factorization", "venue": "arXiv preprint arXiv:1511.06443,", "year": 2015 }, { "authors": [ "Mathieu Germain", "Karol Gregor", "Iain Murray", "Hugo Larochelle" ], "title": "Made: Masked autoencoder for distribution estimation", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Golnaz Ghiasi", "Tsung-Yi Lin", "Quoc V Le" ], "title": "Dropblock: A regularization method for convolutional networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In Proceedings of the thirteenth international conference on artificial intelligence and statistics,", "year": 2010 }, { "authors": [ "Xavier Glorot", "Antoine Bordes", "Yoshua Bengio" ], "title": "Deep sparse rectifier neural networks", "venue": "In Proceedings of the fourteenth international conference on artificial 
intelligence and statistics,", "year": 2011 }, { "authors": [ "Lovedeep Gondara", "Ke Wang" ], "title": "Mida: Multiple imputation using denoising autoencoders", "venue": "In Pacific-Asia Conference on Knowledge Discovery and Data Mining,", "year": 2018 }, { "authors": [ "F Maxwell Harper", "Joseph A Konstan" ], "title": "The movielens datasets: History and context", "venue": "Acm transactions on interactive intelligent systems (tiis),", "year": 2016 }, { "authors": [ "Elad Hazan", "Roi Livni", "Yishay Mansour" ], "title": "Classification with low rank and missing data", "venue": "In Proceedings of the 32nd International Conference on Machine Learning-Volume", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Xiangnan He", "Lizi Liao", "Hanwang Zhang", "Liqiang Nie", "Xia Hu", "Tat-Seng Chua" ], "title": "Neural collaborative filtering", "venue": "In Proceedings of the 26th International Conference on World Wide Web, pp. 173–182. International World Wide Web Conferences Steering Committee,", "year": 2017 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Günter Klambauer", "Thomas Unterthiner", "Andreas Mayr", "Sepp Hochreiter" ], "title": "Self-normalizing neural networks", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Yehuda Koren" ], "title": "Factorization meets the neighborhood: a multifaceted collaborative filtering model", "venue": "In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2008 }, { "authors": [ "Yehuda Koren", "Robert Bell", "Chris Volinsky" ], "title": "Matrix factorization techniques for recommender systems", "venue": "Computer, pp", "year": 2009 }, { "authors": [ "Yann LeCun" ], "title": "The mnist database of handwritten digits. http://yann", "venue": "lecun. 
com/exdb/mnist/,", "year": 1998 }, { "authors": [ "Joonseok Lee", "Seungyeon Kim", "Guy Lebanon", "Yoram Singer", "Samy Bengio" ], "title": "Llorma: Local low-rank matrix approximation", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Dongsheng Li", "Chao Chen", "Qin Lv", "Junchi Yan", "Li Shang", "Stephen Chu" ], "title": "Low-rank matrix approximation with stability", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Dongsheng Li", "Chao Chen", "Wei Liu", "Tun Lu", "Ning Gu", "Stephen Chu" ], "title": "Mixture-rank matrix approximation for collaborative filtering", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Steven Cheng-Xian Li", "Bo Jiang", "Benjamin Marlin" ], "title": "Misgan: Learning from incomplete data with generative adversarial networks", "venue": "arXiv preprint arXiv:1902.09599,", "year": 2019 }, { "authors": [ "Zachary C Lipton", "David C Kale", "Randall Wetzel" ], "title": "Modeling missing data in clinical time series with rnns", "venue": "Machine Learning for Healthcare,", "year": 2016 }, { "authors": [ "Etai Littwin", "Lior Wolf" ], "title": "Regularizing by the variance of the activations", "venue": "sample-variances. In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yonghong Luo", "Xiangrui Cai", "Ying Zhang", "Jun Xu" ], "title": "Multivariate time series imputation with generative adversarial networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Andrew L Maas", "Awni Y Hannun", "Andrew Y Ng" ], "title": "Rectifier nonlinearities improve neural network acoustic models", "venue": "In Proc. icml,", "year": 2013 }, { "authors": [ "Rahul Mazumder", "Trevor Hastie", "Robert Tibshirani" ], "title": "Spectral regularization algorithms for learning large incomplete matrices", "venue": "Journal of machine learning research,", "year": 2010 }, { "authors": [ "Tomas Mikolov", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ], "title": "Efficient estimation of word representations in vector space", "venue": "In International Conference on Learning Representations,", "year": 2013 }, { "authors": [ "Federico Monti", "Michael Bronstein", "Xavier Bresson" ], "title": "Geometric matrix completion with recurrent multi-graph neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Deepak Pathak", "Philipp Krahenbuhl", "Jeff Donahue", "Trevor Darrell", "Alexei A Efros" ], "title": "Context encoders: Feature learning by inpainting", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Ruslan Salakhutdinov", "Andriy Mnih", "Geoffrey Hinton" ], "title": "Restricted boltzmann machines for collaborative filtering", "venue": "In Proceedings of the 24th international conference on Machine learning,", "year": 2007 }, { "authors": [ "Tim Salimans", "Durk P Kingma" ], "title": "Weight normalization: A simple reparameterization to accelerate training of deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Suvash Sedhain", "Aditya Krishna Menon", "Scott Sanner", "Lexing Xie" ], "title": "Autorec: Autoencoders meet collaborative filtering", "venue": "In Proceedings of the 24th International Conference on World Wide Web,", "year": 2015 }, { "authors": [ "I. Silva", "George B. 
Moody", "Daniel J. Scott", "L.A. Celi", "R. Gritz Mark" ], "title": "Predicting in-hospital mortality of icu patients: The physionet/computing in cardiology challenge", "venue": "Computing in Cardiology,", "year": 2012 }, { "authors": [ "Marek Śmieja", "Łukasz Struski", "Jacek Tabor", "Bartosz Zieliński", "Przemysław Spurek" ], "title": "Processing of missing data by neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting", "venue": "The Journal of Machine Learning Research,", "year": 1929 }, { "authors": [ "Divyanshu Talwar", "Aanchal Mongia", "Debarka Sengupta", "Angshul Majumdar" ], "title": "Autoimpute: Autoencoder based imputation of single-cell rna-seq data", "venue": "Scientific reports,", "year": 2018 }, { "authors": [ "Volker Tresp", "Subutai Ahmad", "Ralph Neuneier" ], "title": "Training neural networks with deficient data", "venue": "In Advances in neural information processing systems,", "year": 1994 }, { "authors": [ "Oriol Vinyals", "Samy Bengio", "Manjunath Kudlur" ], "title": "Order matters: Sequence to sequence for sets", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Menghan Wang", "Mingming Gong", "Xiaolin Zheng", "Kun Zhang" ], "title": "Modeling dynamic missingness of implicit feedback for recommendation", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Baolin Yi", "Xiaoxuan Shen", "Hai Liu", "Zhaoli Zhang", "Wei Zhang", "Sannyuya Liu", "Naixue Xiong" ], "title": "Deep matrix factorization with implicit feedback embedding for recommendation system", "venue": "IEEE Transactions on Industrial Informatics,", "year": 2019 }, { "authors": [ "Jinsung Yoon", "James Jordon", "Mihaela Van Der Schaar" ], "title": "Gain: Missing data imputation using generative adversarial nets", "venue": "In Proceedings of the 35th International Conference on Machine Learning-Volume", "year": 2018 }, { "authors": [ "Shuai Zhang", "Lina Yao", "Xiwei Xu" ], "title": "Autosvd++: An efficient hybrid collaborative filtering model via contractive auto-encoders", "venue": "In Proceedings of the 40th International ACM SIGIR conference on Research and Development in Information Retrieval,", "year": 2017 }, { "authors": [ "Yin Zheng", "Bangsheng Tang", "Wenkui Ding", "Hanning Zhou" ], "title": "A neural autoregressive approach to collaborative filtering", "venue": "In Proceedings of the 33rd International Conference on Machine LearningVolume 69. JMLR. org,", "year": 2016 }, { "authors": [ "Fuzhen Zhuang", "Zhiqiang Zhang", "Mingda Qian", "Chuan Shi", "Xing Xie", "Qing He" ], "title": "Representation learning via dual-autoencoder for recommendation", "venue": "Neural Networks,", "year": 2017 }, { "authors": [ "Sedhain" ], "title": "CF-NADE (Zheng et al., 2016) We use two layer CF-NADE model with 500 hidden units. For fair comparisons, we tune the hyper-parameters for weight decay in all experiments to have only one significant digit, and use a learning rate8 of 0.001. Also, we use mini-batch (512) just following CF-NADE. 
Although the CF-NADE used weight sharing and averaging possible choices in addition", "venue": null, "year": 2015 }, { "authors": [ "SC", "YY Kim", "SK Park" ], "title": "Cohort profile: the National Health Insurance Service-National Health Screening Cohort (NHIS-HEALS) in Korea", "venue": "BMJ Open 2017;7:e016640. pmid:28947447", "year": 2017 } ]
[ { "heading": null, "text": "Handling missing data is one of the most fundamental problems in machine learning. Among many approaches, the simplest and most intuitive way is zero imputation, which treats the value of a missing entry simply as zero. However, many studies have experimentally confirmed that zero imputation results in suboptimal performances in training neural networks. Yet, none of the existing work has explained what brings such performance degradations. In this paper, we introduce the variable sparsity problem (VSP), which describes a phenomenon where the output of a predictive model largely varies with respect to the rate of missingness in the given input, and show that it adversarially affects the model performance. We first theoretically analyze this phenomenon and propose a simple yet effective technique to handle missingness, which we refer to as Sparsity Normalization (SN), that directly targets and resolves the VSP. We further experimentally validate SN on diverse benchmark datasets, to show that debiasing the effect of input-level sparsity improves the performance and stabilizes the training of neural networks." }, { "heading": "1 INTRODUCTION", "text": "Many real-world datasets often contain data instances whose subset of input features is missing. While various imputing techniques, from imputing using global statistics such as mean, to individually imputing by learning auxiliary models such as GAN, can be applied with their own pros and cons, the most simple and natural way to do this is zero imputation, where we simply treat a missing feature as zero. In neural networks, at first glance, zero imputation can be thought of as a reasonable solution since it simply drops missing input nodes by preventing the weights associated with them from being updated. Some what surprisingly, however, many previous studies have reported that this intuitive approach has an adverse effect on model performances (Hazan et al., 2015; Luo et al., 2018; Śmieja et al., 2018), and none of them has investigated the reasons of such performance degradations.\nIn this work, we find that zero imputation causes the output of a neural network to largely vary with respect to the number of missing entries in the input. We name this phenomenon Variable Sparsity Problem (VSP), which should be avoided in many real-world tasks. Consider a movie recommender system, for instance. It is not desirable that users get different average of predicted ratings just because they have rated different number of movies (regardless of their actual rating values). One might argue that people with less ratings do not like movies in general and it is natural to give higher predicted values to people with more ratings. This might be partially true for users of some sparsity levels, but it is not a common case uniformly applicable for a wider range of sparsity levels. This can be verified in real collaborative filtering datasets as shown in Figure 1 (upper left corner) where users have a similar average rating for test data regardless of the number of known ratings (see also other two examples in Figure 1). However, in standard neural networks with zero imputation, we observe that the model’s inference correlates with the number of known entries of the data instance as shown in\nthe second row of Figure 11. 
This could be fatal in safety-critical applications such as the medical domain: a patient's probability of developing a disease, for example, should not be evaluated differently depending on the number of medical tests they received (we do not want our model to predict a high probability of death just because a patient has been screened a lot!).
In addition, we theoretically analyze the existence of the VSP under several circumstances and propose a simple yet effective means to suppress the VSP while retaining the intuitive advantages of zero imputation: normalizing by the number of non-zero entries of each data instance. We refer to this regularization as Sparsity Normalization, and show that it effectively does away with the VSP, resulting in significant improvements in both the performance and the stability of training neural networks.
Our contribution in this paper is threefold:
• To the best of our knowledge, we are the first to explore the adverse effect of zero imputation, both theoretically and empirically.
• We identify the cause of the adverse effect of zero imputation, which we refer to as the variable sparsity problem, and formally describe how this problem actually affects training and inference of neural networks (Section 2). We further provide new perspectives using the VSP to understand phenomena that have not been clearly explained or that we have misunderstood (Sections 4 and 5).
• We present Sparsity Normalization (SN) and theoretically show that SN can solve the VSP under certain conditions (Section 3). We also experimentally reaffirm that simply applying SN can effectively alleviate or solve the VSP, yielding significant performance gains (Section 4).
¹Note that this tendency is very consistent with other test points and is observed throughout the entire learning process (even before training)." }, { "heading": "2 VARIABLE SPARSITY PROBLEM", "text": "We formally define the Variable Sparsity Problem (VSP) as follows: a phenomenon in which the expected value of the output layer of a neural network (over the weight and input distributions) depends on the sparsity (the number of zero values) of the input data (Figure 2a). With VSP, the activation values of a neural network can become largely different for exactly the same input instance, depending on the number of zero entries; this makes training more difficult and may mislead the model into incorrect predictions.
While zero imputation is intuitive in the sense that it drops the missing input features, we will show that it causes the variable sparsity problem in several example cases. Specifically, we show the VSP under assumptions of increasing generality: (Case 1) where the activation function is an identity mapping with no bias, (Case 2) where the activation function is an affine function, and (Case 3) where the activation function is a non-decreasing convex function such as ReLU (Glorot et al., 2011), leaky ReLU (Maas et al., 2013), ELU (Clevert et al., 2016), or Softplus (Dugas et al., 2001).
Here, we summarize the notation for clarity. For an $L$-layer deep network with non-linearity $\sigma$, we use $W^i \in \mathbb{R}^{n_i \times n_{i-1}}$ to denote the weight matrix of the $i$-th layer, $b^i \in \mathbb{R}^{n_i}$ the bias, and $h^i \in \mathbb{R}^{n_i}$ the activation vector. For simplicity, we use $h^0 \in \mathbb{R}^{n_0}$ and $h^L \in \mathbb{R}^{n_L}$ to denote the input and output layers, respectively. Then, we have
$h^i = \sigma(W^i h^{i-1} + b^i)$, for $i = 1, \cdots, L$.
Our goal in this section is to observe the change in $h^L$ as the sparsity of $h^0$ (the input $x$) changes.
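(Before the formal analysis, the dependence on sparsity is easy to see numerically. Below is a minimal NumPy simulation of the linear, bias-free setting analyzed as (Case 1) next; the layer widths, distributions, and seed are illustrative choices.)

```python
import numpy as np

rng = np.random.default_rng(0)
n0, n1, n2 = 100, 50, 10                         # layer widths
W1 = rng.normal(0.1, 0.05, (n1, n0))             # weights with non-zero mean, cf. Assumption 1 (iii)
W2 = rng.normal(0.1, 0.05, (n2, n1))

for mu_m in [0.2, 0.5, 1.0]:                     # mean of the MCAR missingness mask
    x_tilde = rng.normal(1.0, 1.0, (10000, n0))  # (possibly unobserved) feature values
    m = rng.binomial(1, mu_m, (10000, n0))       # binary masks, 1 = observed
    h = (x_tilde * m) @ W1.T @ W2.T              # identity activation, no bias
    # E[h] = n0 * n1 * mu_w^2 * mu_x * mu_m, i.e., linear in mu_m
    print(f"mu_m = {mu_m:.1f}  mean output = {h.mean():.2f}")
```

With these choices the printed means grow roughly linearly in $\mu_m$ (about $50\mu_m$ here), matching the dependence established in Theorem 1 below.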
To simplify the discussion, we consider the following assumption:
Assumption 1. (i) Every coordinate of the input vector, $h^0_l$, is generated by the element-wise multiplication of two random variables $\tilde{h}^0_l$ and $m_l$, where $m_l$ is a binary mask indicating a missing value and $\tilde{h}^0_l$ is a (possibly unobserved) feature value. Here, the missing mask $m_l$ is MCAR (missing completely at random), with no dependency on other mask variables or their values $\tilde{h}^0$. All $m_l$ follow some identical distribution with mean $\mu_m$. (ii) The elements of the matrix $W^i$ are mutually independent and follow an identical distribution with mean $\mu^i_w$. Similarly, $b^i$ and $\tilde{h}^0$ consist of i.i.d. coordinates with means $\mu^i_b$ and $\mu_x$, respectively. (iii) $\mu^i_w$ is non-zero for all $i$.
(i) assumes the simplest missing mechanism. (ii) is similarly defined in Glorot & Bengio (2010) and He et al. (2015) in studying weight initialization techniques. (iii) may not hold under some initialization strategies, but as learning progresses, it is very likely to hold.
(Case 1) For simplicity, let us first consider networks with neither the non-linearity nor the bias term. Theorem 1 shows that the average value of the output layer $E[h^L_l]$ is directly proportional to the expectation of the mask vector, $\mu_m$:
Theorem 1. Suppose that the activation $\sigma$ is an identity function and that $b^i_l$ is uniformly fixed as zero under Assumption 1. Then, we have $E[h^L_l] = \left(\prod_{i=1}^{L} n_{i-1}\mu^i_w\right)\mu_x\mu_m$.
(Case 2) When the activation function is affine but now with a possibly nonzero bias, $E[h^L_l]$ is influenced by $\mu_m$ in the following way:
Theorem 2. Suppose that the activation $\sigma$ is an affine function under Assumption 1. Suppose further that $f_i(x)$ is defined as $\sigma(n_{i-1}\mu^i_w x + \mu^i_b)$. Then, $E[h^L_l] = f_L \circ \cdots \circ f_1(\mu_x\mu_m)$.
(Case 3) Finally, when the activation function is non-linear but non-decreasing and convex, we can show that $E[h^L_l]$ is lower-bounded by some quantity involving $\mu_m$:
Theorem 3. Suppose that $\sigma$ is a non-decreasing convex function under Assumption 1. Suppose further that $f_i(x)$ is defined as $\sigma(n_{i-1}\mu^i_w x + \mu^i_b)$ and $\mu^i_w > 0$. Then, $E[h^L_l] \geq f_L \circ \cdots \circ f_1(\mu_x\mu_m)$.
If the expected value of the output layer (or its lower bound) depends on the level of sparsity/missingness as in Theorems 1-3, even similar data instances may have different output values depending on their sparsity levels, which would hinder fair and correct inference of the model. As shown in Figure 1 (second row), the VSP can easily occur even in practical settings of training neural networks where the above conditions do not hold." }, { "heading": "3 SPARSITY NORMALIZATION", "text": "Algorithm 1 Sparsity Normalization (SN)
Input: Dataset $D$, constant $K$. Output: Sparsity-normalized dataset $D_{SN}$.
Empty set $S = \emptyset$
for each $(h^0, m) \in D$ do
  $h^0_{SN} \leftarrow K \cdot h^0 / \|m\|_1$
  $S \leftarrow S \cup \{h^0_{SN}\}$
end for
$D_{SN} \leftarrow S$
In this section, we propose a simple yet surprisingly effective method to resolve the VSP. We first revisit (Case 2) to find a way of making the expected output independent of the input sparsity level, since the linearity of the activation simplifies the correction. Recalling the notation $h^0 = \tilde{h}^0 \odot m$ ($\odot$ represents the element-wise product), we find that simply normalizing via $h^0_{SN} = (\tilde{h}^0 \odot m) \cdot K_1/\mu_m$ for any fixed constant $K_1$ can debias the dependency on the input sparsity level. We name this simple normalizing technique Sparsity Normalization (SN) and describe it in Algorithm 1.
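(A minimal PyTorch sketch of Algorithm 1 follows; the function name and tensor shapes are illustrative choices, and the default $K$ is the batch average of $\|m\|_1$, whereas the text recommends the training-set average.)

```python
import torch

def sparsity_normalize(x, mask, K=None):
    """Algorithm 1: rescale each zero-imputed instance by K / ||m||_1.

    x, mask: (batch, n0) float tensors; mask is binary, 1 = observed.
    """
    counts = mask.sum(dim=1, keepdim=True)   # ||m||_1 per instance
    counts = counts.clamp(min=1.0)           # the paper treats ||m||_1 = 0 as 1
    if K is None:
        K = counts.mean()                    # K = E[||m||_1], ideally over the train set
    return x * (K / counts)
```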
Conceptually, this method scales each input value according to its sparsity level so that the output scale is less sensitive to the sparsity level (Figure 2b). The formal description of correcting the sparsity bias by SN in this particular case is as follows:
Theorem 4. (With Sparsity Normalization) Suppose that the activation $\sigma$ is an affine function under Assumption 1. Suppose further that $f_i(x) = \sigma(n_{i-1}\mu^i_w x + \mu^i_b)$ and replace the input layer using SN, i.e., $h^0_{SN} = (\tilde{h}^0 \odot m) \cdot K_1/\mu_m$ for any fixed constant $K_1$. Then, we have $E[h^L_l] = f_L \circ \cdots \circ f_1(\mu_x \cdot K_1)$.
Unlike in Theorem 2, SN in Theorem 4 makes the average activation independent of $\mu_m$, which determines the sparsity level of the input. It is not trivial to show the counterpart of (Case 3) using SN since $E[\sigma(x)] = \sigma(E[x])$ does not hold in general. However, we show through extensive experiments in Section 4 that SN is practically effective even in more general cases.
While Theorem 4 assumes that $\mu_m$ is known and fixed across all data instances, we relax this assumption in practice and allow $\mu_m$ to vary across data instances. Specifically, by the maximum likelihood principle, we can estimate $\mu_m$ for each instance by $\|h^0\|_0 / n_0 = \|m\|_1 / n_0$. Thus, we have $h^0_{SN} = K \cdot h^0 / \|m\|_1$ where $K = n_0 \cdot K_1$ (see Algorithm 1)². In practice, we recommend setting $K$ to the average of $\|m\|_1$ over all instances in the training set. We could encounter the dying ReLU phenomenon (He et al., 2015) if $K$ is too small (e.g., $K = 1$). Since the hyper-parameter $K$ can bring in a regularization effect by controlling the magnitude of the gradient (Salimans & Kingma, 2016), we define $K = E_{(h^0,m)\in D}[\|m\|_1]$ so that the average scale remains constant before and after the normalization, minimizing such side effects caused by SN.
²When $\|m\|_1$ is 0, the calculation is impossible. Hence, in this case, $\|m\|_1$ is assumed to be 1." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we empirically show that the VSP occurs in various machine learning tasks and that it can be alleviated by SN. In addition, we also show that resolving the VSP leads to improved model performance in diverse scenarios." }, { "heading": "4.1 COLLABORATIVE FILTERING (RECOMMENDATION) DATASETS", "text": "We identify the VSP and the effect of SN on several popular benchmark datasets for collaborative filtering with extremely high missing rates. We train an AutoRec (Sedhain et al., 2015) using the user vector on the Movielens (Harper & Konstan, 2016) 100K dataset to validate the VSP and SN. Going back to the first column of Figure 1, the prediction with SN is almost constant regardless of $\|m\|_1$. Another observed phenomenon in Figure 1 is that the higher the $\|m\|_1$, the smaller the variation in the prediction of the model with SN. Note that the same tendency has been observed regardless of test instances or datasets. This implies that models with SN yield more calibrated predictions; as more features are known for a particular instance, the variance of the prediction for that instance should decrease (since we generated independent masks in Figure 1).
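(Continuing the `sparsity_normalize` sketch from Section 3, a hypothetical usage on user-rating inputs; the shapes mimic Movielens 100K, which has 1682 movies, and the variable names are ours.)

```python
import torch

# Hypothetical user-rating input vectors (Movielens 100K has 1682 movies).
ratings = torch.zeros(4, 1682)
ratings[0, :50] = 4.0                      # a user with 50 known ratings
ratings[3, :500] = 3.0                     # a user with 500 known ratings
mask = (ratings != 0).float()              # 1 = observed rating
K = mask.sum(dim=1).mean()                 # average number of known ratings
inputs = sparsity_normalize(ratings, mask, K)   # fed to the AutoRec encoder
```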
It is also worthwhile to note that AutoRec is a sigmoid-based network and that the Movielens datasets are known not to satisfy the MCAR hypothesis (Wang et al., 2018), so Assumption 1 does not hold at all.
We also validate that performance gains can be obtained by adding SN to AutoRec (Sedhain et al., 2015), CF-NADE (Zheng et al., 2016), and CF-UIcA (Du et al., 2018), which are among the state-of-the-art neural network based collaborative filtering models, on several Movielens datasets (see Appendix B for detailed settings). In Table 1³, we consider three different-sized Movielens datasets. Note that AutoRec and CF-NADE allow two types of models according to the data encoding (user- or item-rating vector based) and we consider both types. While we obtain performance improvements with SN in most cases, the gains are more prominent in the user-rating based models of AutoRec and CF-NADE.
Furthermore, Table 2 compares our simple modification using SN on AutoRec and CF-UIcA with other state-of-the-art collaborative filtering models beyond neural networks. Unlike the experiments of AutoRec in Table 1, which use the same network architectures proposed in the original papers, here we could successfully learn a more expressive network due to the stability obtained by using SN⁴. For the Movielens 100K and 1M datasets, applying SN yields better or similar performance compared to other state-of-the-art collaborative filtering methods beyond neural network based models. It is important to note that all models outperforming AutoRec with SN on Movielens 10M are ensemble models, while AutoRec with SN is a single model and shows consistently competitive results across all datasets.
³We consider a CF-NADE without weight sharing and re-run the experiments for fair comparisons because applying SN with weight sharing is not trivial. We also exclude averaging possible choices because it does not make a big difference given the unnecessary extra computational cost.
⁴Because overfitting is lower with SN in AutoRec, we use twice the capacity of the existing AutoRec model." }, { "heading": "4.2 ELECTRONIC MEDICAL RECORDS (EMR) DATASETS", "text": "We further test the VSP and SN for clinical time-series prediction with two Electronic Medical Records (EMR) datasets, namely the PhysioNet Challenge 2012 (Silva et al., 2012) and the National Health Insurance Service (NHIS) datasets, which have intrinsic missingness since patients only receive medical examinations that are considered to be necessary. We identify whether the VSP exists with the PhysioNet Challenge 2012 dataset (Silva et al., 2012). We randomly select one test point and plot in-hospital death probabilities as the number of examinations varies (second column of Figure 1). Without SN, the in-hospital death probability increases as the number of examinations increases, even though there is no such tendency in the dataset statistics. However, SN corrects this bias so that the in-hospital death probability is consistent regardless of the number of examinations. We observe a similar tendency for examples from the NHIS dataset as well.
Although SN corrects the VSP in both datasets, we perceive different behaviors in the two cases in terms of actual performance changes. While SN significantly outperforms its counterpart without SN on the NHIS dataset as shown in Table 3, it performs only similarly on the PhysioNet dataset (results and detailed settings are deferred to Appendix C).
However, SN is still valuable for its ability to prevent biased predictions in this mission-critical area.
In addition, we compare SN with other missing-data handling techniques to show the efficacy of SN, although our main purpose is to provide a deeper understanding of, and a corresponding solution for, the issue that zero imputation, the simplest and most intuitive way of handling missing data, degrades the performance of training neural networks. Even with its simplicity, SN exhibits better or similar performance compared to other, more complex techniques. Detailed descriptions of the other missing-data handling techniques are deferred to Appendix H." }, { "heading": "4.3 SINGLE-CELL RNA SEQUENCE DATASETS", "text": "Single-cell RNA sequence datasets contain expression levels of specific RNAs for individual cells. AutoImpute (Talwar et al., 2018) is one of the state-of-the-art methods for imputing missing data on single-cell RNA sequence datasets. We reproduce their experiments using the authors' official implementation, and follow most of their experimental settings (see Appendix D for details).
As before, we first check whether the VSP occurs in the AutoImpute model. The third column in Figure 1 shows how the prediction of an AutoImpute model changes as the number of known entries changes. Although the number of RNAs found in a specific cell is only weakly related to cell characteristics (upper right corner in Figure 1), the prediction increases as the number of RNAs found in the cell increases. This tendency is dramatically reduced with SN.
Figure 3 shows how the imputation performance changes when SN is applied to several single-cell RNA sequence datasets, with respect to the portion of the train set (see Appendix D for more results). As we can see in Figure 3, SN significantly increases the imputation performance of the AutoImpute model. In particular, the smaller the train data, the better the effect of SN, consistently across all datasets. The AutoImpute model is a sigmoid-based network and the single-cell RNA datasets (Talwar et al., 2018) do not satisfy the MCAR hypothesis, unlike Assumption 1. Nevertheless, the VSP occurs even here and it can be successfully alleviated by SN with a huge performance gain. This implies that SN would also work for other neural network based imputation techniques." }, { "heading": "4.4 DROPOUT ON UCI DATASETS", "text": "While SN primarily targets fixing the VSP in the input layer, it can also be applied to any layer of a deep neural network to resolve the VSP. A typical example of heterogeneous sparsity in hidden layers is when we use dropout (Srivastava et al., 2014), which can be understood as another form of zero imputation but at the hidden layers; with Bernoulli dropout, the variance of the number of zero units across instances is $np(1-p)$ ($n$: the dimension of the hidden layer, $p$: drop rate). While dropout partially handles this VSP issue by scaling by $1/(1-p)$ in the training phase⁵, SN can exactly correct the VSP of hidden layers by considering individual-level sparsity (note that the scaling of dropout can be viewed as applying SN in an average sense: $E[\|m\|_1] = n(1-p)$ and $K = n$; see the sketch below).
⁵In almost all deep learning frameworks, such as PyTorch, TensorFlow, Theano and Caffe, this inverted dropout is supported.
Figure 4 shows how the RMSE changes as the drop rate changes, with and without SN, on three popular UCI regression datasets (Boston Housing, Diabetes, and California Housing)⁶.
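(A minimal sketch of this dropout variant, assuming a (batch, features) activation tensor; the function name and defaults are illustrative.)

```python
import torch

def dropout_sn(h, p, training=True):
    """Dropout where the usual inverted-dropout 1/(1-p) rescaling is
    replaced by an exact per-instance correction with K = n (Section 4.4)."""
    if not training or p == 0.0:
        return h
    n = h.shape[1]
    mask = torch.bernoulli(torch.full_like(h, 1.0 - p))   # 1 = kept unit
    kept = mask.sum(dim=1, keepdim=True).clamp(min=1.0)   # ||m||_1 per instance
    return h * mask * (n / kept)                          # K / ||m||_1 with K = n
```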
As illustrated in Figure 4, the larger the drop rate, the greater the difference in RMSE between with and without SN. To explain this phenomenon, we define the degree of VSP as the inverse of the signal-to-noise ratio with respect to the number of active units in the hidden layers: $\sqrt{p/(n(1-p))}$ (the standard deviation of the number of active units over its expectation). As can be seen from the figure, the larger the drop rate $p$, the more severe the degree of VSP, and thus the greater the protection by SN against performance degradation." }, { "heading": "4.5 DENSITY ESTIMATION", "text": "In the current literature on density estimation with deep models, inputs with missing features are in general not widely considered. However, we may still experience the VSP since the proportion of zeros in the data itself can vary greatly from instance to instance. In this experiment, we apply SN to MADE (Germain et al., 2015), which is one of the modern architectures in neural network-based density estimation. We reproduce the binarized MNIST (LeCun, 1998) experiments of Germain et al. (2015), measuring the negative log-likelihood (the lower the better) on the test dataset while increasing the number of masks. Figure 5 illustrates the effect of using SN. Note that MADE uses masks in the hidden layers that are designed to produce proper autoregressive conditional probabilities, so variable sparsity arises across hidden nodes. SN can be trivially extended to handle this case as well, and the corresponding result is given in the figure denoted as w/SN(all)⁷. We reaffirm that SN is effective even when the MCAR assumption does not hold." }, { "heading": "5 RELATED WORKS", "text": "Missing handling techniques Missing-data imputation can be understood as a technique to increase generalization performance by injecting plausible noise into the data. Noise injection using global statistics like mean or median values is the simplest way to do this (Lipton et al., 2016; Śmieja et al., 2018). However, it can lead to highly incorrect estimates since such statistics do not take into consideration the characteristics of each data instance (Tresp et al., 1994; Che et al., 2018). To overcome this limitation, researchers have proposed various ways to model individualized noise using autoencoders (Pathak et al., 2016; Gondara & Wang, 2018) or GANs (Yoon et al., 2018; Li et al., 2019). However, those model-based imputation techniques have not worked properly for high-dimensional datasets with a large number of features and/or extremely high missing rates (Yoon et al., 2018), because excessive noise can ruin the training of neural networks rather than increase generalization performance.
For this reason, in the case of high-dimensional datasets such as collaborative filtering or single-cell RNA sequences, different methods of handling missing data have been proposed. A line of work simply used zero imputation, minimizing the noise level, and achieved state-of-the-art performance on their target datasets (Sedhain et al., 2015; Zheng et al., 2016; Talwar et al., 2018). In addition, methods using low-rank matrix factorization have been proposed to reduce the input dimension, but these methods not only cause a large loss of information but also fail to capture the non-linearity of the input data (Hazan et al., 2015; Bachman et al., 2017; He et al., 2017). Vinyals et al. (2016); Monti et al. (2017) proposed recurrent neural network (RNN) based methods, but the computational costs of these methods are prohibitive for high-dimensional datasets.
Also, it is not natural to use RNN-based models for non-sequential datasets.
⁶Most experimental settings are adopted from Klambauer et al. (2017); Littwin & Wolf (2018)'s UCI experiments (see Appendix E for details).
⁷The detailed description of the extension is deferred to Appendix F.
Other forms of Sparsity Normalization We discuss other forms of SN that alleviate the VSP and are already in use, unwittingly, because of their empirical performance improvements. DropBlock (Ghiasi et al., 2018) compensates the activations for dropped features by exactly counting the mask vector, similar to SN (a similar approach is discussed in Section 4.4). It is remarkable that we can find models using SN-like normalization even when handling datasets without missing features. For example, in the CBOW model (Mikolov et al., 2013), where the number of words used as input depends on the position in the sentence, it was later revealed that SN-like normalization yields a practical performance improvement. As another example, Kipf & Welling (2017) applied Laplacian normalization, which is the standard way of representing a graph in graph theory; it naturally handles heterogeneous node degrees and precisely matches the SN operation. In this paper, we explicitly extend SN, which was previously limited and unintentionally applied in only a few settings, to a model-agnostic technique." }, { "heading": "6 CONCLUSION", "text": "We identified the variable sparsity problem (VSP) caused by zero imputation, which has not been explicitly studied before. To the best of our knowledge, this paper provides the first theoretical analysis of why zero imputation is harmful to the inference of neural networks. We showed that the variable sparsity problem actually exists in diverse real-world datasets. We also confirmed that the theoretically inspired normalization method, Sparsity Normalization, not only reduces the VSP but also improves the generalization performance and the stability of feed-forwarding of neural networks with missing values, even in areas that existing missing-data imputation techniques do not cover well (e.g., collaborative filtering, single-cell RNA datasets)." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grants (No.2016-0-00563, No.2017-0-01779, and No.2019-0-01371), the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) grants (No.2018R1A5A1059921 and No.2019R1C1C1009192), the Samsung Research Funding & Incubation Center of Samsung Electronics via SRFC-IT1702-15, the National IT Industry Promotion Agency grant funded by the Ministry of Science and ICT, and the Ministry of Health and Welfare (NO. S0310-19-1001, Development Project of The Precision Medicine Hospital Information System (P-HIS))." }, { "heading": "A PROOFS", "text": "" }, { "heading": "A.1 PROOF OF THEOREM 1", "text": "Proof. From the definitions of $h_l$, $w^1_l$, $h^0_l$, $\tilde{h}^0_l$, $m_l$, the following equation holds:
$E[h^1_l] = n_0 E[w^1_l h^0_l] = n_0 E[w^1_l \tilde{h}^0_l m_l]$
By Assumption 1, $w^1_l$, $\tilde{h}^0_l$, and $m_l$ are independent of each other. Thus,
$E[h^1_l] = n_0 E[w^1_l] E[\tilde{h}^0_l] E[m_l]$
Similarly, the following holds:
$E[h^i_l] = n_{i-1} E[w^i_l h^{i-1}_l]$ for $i = 1, \cdots, L$
Since $h^{i-1}_l$ and $w^i_l$ are independent of each other by Assumption 1 and the definition of $h^{i-1}_l$, $E[h^i_l] = n_{i-1} E[w^i_l] E[h^{i-1}_l]$. Therefore,
$E[h^L_l] = \left(\prod_{i=1}^{L} n_{i-1} E[w^i_l]\right) E[\tilde{h}^0_l] E[m_l] = \left(\prod_{i=1}^{L} n_{i-1} \mu^i_w\right) \mu_x \mu_m$" }, { "heading": "A.2 PROOF OF THEOREM 2", "text": "Proof.
From the definitions of $h_l$, $w^1_l$, $h^0_l$, $\tilde{h}^0_l$, $m_l$ and the property of an affine function, $\sigma(E[\cdot]) = E[\sigma(\cdot)]$, the following equation holds:
$E[h^1_l] = \sigma(n_0 E[w^1_l h^0_l] + E[b^1_l]) = \sigma(n_0 E[w^1_l \tilde{h}^0_l m_l] + E[b^1_l])$
By Assumption 1, $w^1_l$, $\tilde{h}^0_l$, and $m_l$ are independent of each other. Thus,
$E[h^1_l] = \sigma(n_0 E[w^1_l] E[\tilde{h}^0_l] E[m_l] + E[b^1_l]) = \sigma(n_0 \mu^1_w \mu_x \mu_m + \mu^1_b) = f_1(\mu_x\mu_m)$
Similarly, the following holds:
$E[h^i_l] = \sigma(n_{i-1} E[w^i_l h^{i-1}_l] + E[b^i_l])$ for $i = 1, \cdots, L$
Since $h^{i-1}_l$ and $w^i_l$ are independent of each other by Assumption 1 and the definition of $h^{i-1}_l$,
$E[h^i_l] = \sigma(n_{i-1} E[w^i_l] E[h^{i-1}_l] + E[b^i_l]) = \sigma(n_{i-1} \mu^i_w E[h^{i-1}_l] + \mu^i_b) = f_i(E[h^{i-1}_l])$
Therefore,
$E[h^L_l] = f_L \circ \cdots \circ f_1(\mu_x\mu_m)$" }, { "heading": "A.3 PROOF OF THEOREM 3", "text": "Proof. From the definitions of $h_l$, $w^1_l$, $h^0_l$, $\tilde{h}^0_l$, $m_l$ and the property of a convex function, $E[\sigma(\cdot)] \geq \sigma(E[\cdot])$, the following equation holds:
$E[h^1_l] \geq \sigma(n_0 E[w^1_l h^0_l] + E[b^1_l]) = \sigma(n_0 E[w^1_l \tilde{h}^0_l m_l] + E[b^1_l])$
By Assumption 1, $w^1_l$, $\tilde{h}^0_l$, and $m_l$ are independent of each other. Thus,
$E[h^1_l] \geq \sigma(n_0 E[w^1_l] E[\tilde{h}^0_l] E[m_l] + E[b^1_l]) = \sigma(n_0 \mu^1_w \mu_x \mu_m + \mu^1_b) = f_1(\mu_x\mu_m)$
Similarly, the following holds:
$E[h^i_l] \geq \sigma(n_{i-1} E[w^i_l h^{i-1}_l] + E[b^i_l])$ for $i = 1, \cdots, L$
Since $h^{i-1}_l$ and $w^i_l$ are independent of each other by Assumption 1 and the definition of $h^{i-1}_l$,
$E[h^i_l] \geq \sigma(n_{i-1} E[w^i_l] E[h^{i-1}_l] + E[b^i_l]) = \sigma(n_{i-1} \mu^i_w E[h^{i-1}_l] + \mu^i_b) = f_i(E[h^{i-1}_l])$
Since we assume that $\sigma$ is non-decreasing, we finally get
$E[h^L_l] \geq f_L \circ \cdots \circ f_1(\mu_x\mu_m)$" }, { "heading": "A.4 PROOF OF THEOREM 4", "text": "Proof. By Theorem 2, $E[h^L_l] = f_L \circ \cdots \circ f_1(E[h^0_{SN}])$. Since $E[h^0_{SN}] = E[h^0] \cdot K_1/\mu_m$,
$E[h^L_l] = f_L \circ \cdots \circ f_1(\mu_x\mu_m \cdot K_1/\mu_m) = f_L \circ \cdots \circ f_1(\mu_x \cdot K_1)$" }, { "heading": "B COLLABORATIVE FILTERING (RECOMMENDATION) DATASETS", "text": "" }, { "heading": "B.1 DETAILED EXPERIMENTAL SETTINGS OF TABLE 1", "text": "This subsection describes the detailed experimental settings of the collaborative filtering tasks in Section 4. As already mentioned, we follow the settings of AutoRec, CF-NADE, and CF-UIcA as much as possible. We perform each experiment 5 times and report the mean and 95% confidence intervals. We randomly select 10% of the ratings of each dataset for the test set (Harper & Konstan, 2016). As the Movielens 100K and 1M datasets are small, the confidence intervals tend to be large when the dataset split is changed. Hence, the same dataset split is used in all 5 experiments in Table 1.
AutoRec (Sedhain et al., 2015) We use a two-layer AutoRec model with 500 hidden units. For fair comparisons, we tune the hyper-parameters for weight decay in all experiments to have only one significant digit, and use a learning rate of $10^{-3}$ except on Movielens 10M, where we use a learning rate of $10^{-4}$. We use full batches on Movielens 100K and 1M, and mini-batches (1000) on Movielens 10M. Besides, we use the Adam optimizer instead of Resilient Propagation (RProp), unlike the AutoRec paper. The RProp optimizer shows fast convergence but can only be used in the full-batch scenario, and using a full batch to train on a large dataset such as Movielens 10M is not possible with 12GB of GPU memory. Thus, we decided to use the Adam optimizer rather than RProp. Fortunately, although the optimizer is changed to Adam, the prediction performance is not degraded in most cases.
The experimental results comparing both optimizers are summarized in Table 4.
CF-NADE (Zheng et al., 2016) We use a two-layer CF-NADE model with 500 hidden units. For fair comparisons, we tune the hyper-parameters for weight decay in all experiments to have only one significant digit, and use a learning rate⁸ of 0.001. Also, we use mini-batches (512), following CF-NADE. Although CF-NADE used weight sharing and averaging possible choices in addition to weight decay, we report the results without weight sharing and without averaging possible choices in Table 1, because it is not clear how to apply SN with weight sharing, and there is almost no performance gain from averaging possible choices despite its high computational cost. Furthermore, we do not experiment on Movielens 10M with item vector encoding because the authors of CF-NADE did not provide results for it due to the complexity of the model.
CF-UIcA (Du et al., 2018) We use the authors' official code and the train/test dataset splits⁹ for these experiments. Since CF-UIcA is a model that accepts both user and item vectors as input, it is not necessary to consider two types of encoding as in AutoRec or CF-NADE. On the other hand, it is reasonable to take different $K$ values for the user and item vectors with SN. We set $K$ to 66 for the user vector and 110 for the item vector. For models without SN, 0.0001 is used as the weight decay parameter $\lambda$, as suggested by CF-UIcA. It is natural to use different $\lambda$ values with SN because the optimal $\lambda$ changes along with SN. Hence, we use $\lambda = 0.0005$ for Movielens 100K and $\lambda = 0.00006$ for Movielens 1M in the case of SN. As with CF-NADE, we do not test Movielens 10M because the authors of CF-UIcA also did not report results for it owing to its high computational cost.
⁸The CF-NADE paper uses a learning rate of $5 \times 10^{-4}$ for Movielens 10M, but we use $10^{-3}$ for fast convergence. Therefore, the results can be somewhat different from the original paper.
⁹https://github.com/thu-ml/CF-UIcA." }, { "heading": "B.2 DETAILED EXPERIMENTAL SETTINGS OF TABLE 2", "text": "In comparison with other state-of-the-art models, we used 1000 hidden units in AutoRec (Sedhain et al., 2015) with SN. While Sedhain et al. (2015) claimed that they were able to achieve sufficient performance with only 500 hidden units, 500 hidden units did not achieve sufficient performance when applying SN. Figure 6 plots the test RMSE while changing the number of hidden units for Movielens 100K and 1M. We can see that 600 units for Movielens 100K and 900 units for Movielens 1M are necessary for better performance. Obviously, as datasets become more complex and larger, we need more network capacity. Therefore, we decided to use twice the network capacity (1000 hidden units) to get better performance. The number of hidden units can also be viewed and tuned as a hyper-parameter, and we did not tune it extensively. Note that, unlike Table 1, we report the results with five random splits for all datasets in Table 2 to compare fairly with other state-of-the-art methods." }, { "heading": "C ELECTRONIC MEDICAL RECORDS (EMR) DATASETS", "text": "" }, { "heading": "C.1 NHIS DATASET", "text": "The NHIS dataset, which comes from the National Health Insurance Service (NHIS), consists of medical diagnoses of around 300,000 people. The goal is to predict the occurrence of 5 diseases. Each patient takes 34 examinations over 5 years. We split the dataset into train and test sets with a 3:1 ratio.
We pre-process the input data with min-max normalization, which makes the min and max values of each feature zero and one, following GAIN (Yoon et al., 2018). We train 2-layer neural networks with 50 and 30 hidden units, respectively, and evaluate the model with AUROC. We use ReLU activations and a dropout rate of 0.8 (if applied). Since this dataset is highly imbalanced, we apply class weights to the loss function to handle the label imbalance. Besides, we use the Adam optimizer with learning rate $10^{-2}$, no weight decay, and full batches. We evaluate the model 5 times and report the mean and 95% confidence interval. We also observe that Sparsity Normalization yields performance gains even when dropout is integrated into the networks (see Table 5).
Data source This study used the National Health Insurance System-National Health Screening Cohort (NHIS-HEALS)¹⁰ data derived from a national health screening program and the national health insurance claim database in the National Health Insurance System (NHIS) of South Korea. Data from the NHIS-HEALS was fully anonymized for all analyses and informed consent was not specifically obtained from each participant. This study was approved and exempt from informed consent by the Institutional Review Board of Yonsei University, Severance Hospital in Seoul, South Korea (IRB no.4-2016-0383).
Data Availability Data cannot be shared publicly because of the provisions of the National Health Insurance Service (NHIS). Korean legal restrictions prohibit the authors from making the data publicly available, and the authority that implemented the restrictions is the NHIS (National Health Insurance Service), a government agency of the Republic of Korea. The NHIS provides a limited portion of anonymized data to researchers for purposes of public interest. However, it exclusively provides data to those who make direct contact with the NHIS and agree to its policies. Redistribution of the data by researchers is not permitted. The contact name and the information to which data requests can be sent: Haeryoung Park, Information Analysis Department, Big Data Operation Room, NHISS. Tel: +82-33-736-2430. E-mail: lumen77@nhis.or.kr.
¹⁰Seong SC, Kim YY, Park SK, et al. Cohort profile: the National Health Insurance Service-National Health Screening Cohort (NHIS-HEALS) in Korea. BMJ Open 2017;7:e016640. pmid:28947447." }, { "heading": "C.2 PHYSIONET CHALLENGE 2012 DATASET", "text": "The PhysioNet Challenge 2012 dataset (Silva et al., 2012) consists of 48-hour-long multivariate clinical time series from intensive care units (ICU). Most of our experimental settings follow BRITS (Cao et al., 2018). We divide the 48 hours into 288 timesteps, each containing 35 examinations. The goal of this task is to predict in-hospital death. We use the dataset splits given by the PhysioNet Challenge 2012 (Silva et al., 2012), each of which contains 4000 data points. In the preprocessing phase, we standardize the input features (making the mean zero and the standard deviation one for each feature) and fill zeros for missing values (zero imputation). We train a single-layer LSTM network with 108 hidden units and evaluate the model with AUROC. We apply class weights to handle the imbalance problem in the dataset, as in the setting above. We use the Adam optimizer with learning rate $2 \times 10^{-4}$, a batch size of 512, and early stopping based on the AUROC of the validation set. We evaluate the model 5 times and report the mean and 95% confidence interval.
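(Since each timestep carries its own 35-dimensional examination vector, SN can be applied independently per step, as noted next. A minimal sketch with illustrative shapes and names follows.)

```python
import torch

def sparsity_normalize_per_step(x, mask, K):
    """Apply SN independently at each timestep of a clinical time series.
    x, mask: (batch, timesteps, features) zero-imputed values and masks."""
    counts = mask.sum(dim=2, keepdim=True).clamp(min=1.0)  # ||m||_1 per (instance, step)
    return x * (K / counts)

# e.g., PhysioNet-like shapes: 48 hours split into 288 steps of 35 examinations
mask = torch.bernoulli(torch.full((2, 288, 35), 0.1))
x = torch.randn(2, 288, 35) * mask
out = sparsity_normalize_per_step(x, mask, K=mask.sum(dim=2).mean())
```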
Note that the input of the LSTM model at a specific timestamp is the set of results for 35 medical examinations. Hence, as in the sketch above, we apply SN separately for each timestamp. As stated in Section 4.2, SN could not produce a significant performance gain on the PhysioNet Challenge 2012 dataset (Silva et al., 2012), though Sparsity Normalization eases the VSP (see Table 6). Nevertheless, SN is still valuable for its ability to prevent biased predictions in this mission-critical area." }, { "heading": "D SINGLE-CELL RNA SEQUENCE DATASETS", "text": "In the AutoImpute experiments, we run the experiments using the authors' public code¹¹. Talwar et al. (2018) reported experimental results on eight datasets (Blakeley, Jurkat, Kolodziejczyk, Preimplantation, Quake, Usoskin, Zeisel, and PBMC). Since we can obtain preprocessed data for only seven datasets, all except PBMC, we run experiments on seven single-cell RNA sequence datasets¹². All experimental settings without SN exactly follow the authors' code and the hyper-parameter settings published in Table 2 of their paper. For the model integrated with SN, all experimental settings follow the original model, except that a smaller threshold for early stopping is taken, because models with SN tend to be underfitted when using the threshold suggested by the authors¹³. In addition, Talwar et al. (2018) conducted experiments only with test set ratios of {0.1, 0.2, 0.3, 0.4, 0.5}, but we explore more test set ratios, {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}, to show that SN works well under more sparsity (extremely high test set ratios). Following the authors' settings, we perform 10 experiments and report the mean and 95% confidence intervals. We change the test set split on every trial. It is remarkable that although the hyper-parameters are not favorable for SN, SN performs similarly or better on all seven datasets that we consider (see Figures 3 and 7)." }, { "heading": "E DROPOUT ON UCI DATASETS", "text": "Most experimental settings are adopted from Klambauer et al. (2017); Littwin & Wolf (2018)'s UCI experiments. We use ReLU (Glorot et al., 2011) networks of 4 hidden layers with 256 units. We use Mean Square Error (MSE) as the loss function and the Adam optimizer without weight decay, $\epsilon = 10^{-8}$, and learning rate $10^{-4}$. The batch size used for training is 128. We split off 10% of the dataset for testing and use 20% of the training set as the validation set. For all inputs, we apply min-max normalization, which makes the min and max values of each feature 0 and 1. Note that we use dense datasets without any missing attributes to focus on the effects of dropout. All datasets are used as provided in sklearn.datasets.
¹¹https://github.com/divyanshu-talwar/AutoImpute
¹²https://drive.google.com/drive/folders/1q2ho_cNfsQJNbdCt9j0nwlZv-Roj_yK1
¹³We slightly tune the weight decay hyper-parameter $\lambda$ only for the Preimplantation dataset with SN ($\lambda = 20$)." }, { "heading": "F DENSITY ESTIMATION", "text": "To reproduce the binarized MNIST experiments of MADE (Germain et al., 2015), we adopt and slightly modify a public implementation of MADE¹⁴. Figure 5 shows our reproduction of Figure 2 in the original paper. We follow most settings of MADE. We use a single-hidden-layer MADE network with 500 units and learning rate 0.005, and test on the authors' binarized MNIST dataset¹⁵. The only difference from the original authors' implementation is that we used Adam ($\epsilon = 10^{-8}$, no weight decay) rather than Adagrad,
because we observed that the model underfits with Adagrad, as the per-element learning rates of the weight matrix decay too rapidly.
Since there is no missingness in the binarized MNIST dataset, the input cannot be divided by $\|m\|_1$ as suggested by Algorithm 1. In this experiment, we regard $\|m\|_1$ as $\|h^0\|_0$ so as not to lose generality. That is, all pixels that are 0 in the binarized MNIST dataset are regarded as missing. We label the results of this with w/SN and plot them in Figure 5. As stated in Section 4.5, MADE can also cause variable sparsity through the mask matrices of each weight. In MADE, the connections between specific units are forcibly controlled through an element-wise product of a specific mask matrix $M^i$ with the weight $W^i \in \mathbb{R}^{n_i \times n_{i-1}}$. These mask matrices also cause variation in sparsity. We plot the results of using a new mask matrix $M^i_{SN}$ in place of the mask matrix $M^i$ used by MADE with w/SN(all) in Figure 5. The new mask matrix $M^i_{SN}$ is calculated from the existing mask matrix $M^i$ as follows:
$M^i_{SN} \leftarrow (\mathbf{1}^T M^i \mathbf{1} / n_i) \cdot M^i \oslash (M^i \mathbf{1}\mathbf{1}^T)$
where $\oslash$ denotes element-wise division and $\mathbf{1}$ is a column vector whose elements are all 1. $(\mathbf{1}^T M^i \mathbf{1} / n_i)$ corresponds to $K$ and $(M^i \mathbf{1}\mathbf{1}^T)$ to $\|m\|_1$ in Algorithm 1.
We also demonstrate the effectiveness of SN in situations where the learning rate is smaller. A smaller learning rate (0.001) shows the effect of SN even more clearly, despite the longer training time, as shown in Figure 8." }, { "heading": "G MACHINE DESCRIPTION", "text": "We perform all the experiments on a Titan X with 12GB of VRAM. 12GB of VRAM is not always necessary, and most experiments require less.
¹⁴https://github.com/karpathy/pytorch-made
¹⁵https://github.com/mgermain/MADE/releases/download/ICML2015/binarized_mnist.npz" }, { "heading": "H COMPARISON TO OTHER MISSING HANDLING TECHNIQUES", "text": "In this section, we compare Sparsity Normalization with other missing-data handling techniques. The main contribution of our paper is to provide a deeper understanding of, and a corresponding solution for, the issue that zero imputation, the simplest and most intuitive way of handling missing data, degrades performance in training neural networks. Nonetheless, we show that Sparsity Normalization is effective for high-dimensional datasets (with a large number of features and high missing rates) via a collaborative filtering dataset, while also showing that Sparsity Normalization achieves competitive results against other modern missing-data handling techniques on datasets in the non-high-dimensional setting (electronic medical records datasets, and UCI datasets w/ and w/o the MCAR assumption). For fair comparisons, we consider the tasks in Section 4, as well as the tasks used by modern missing-data handling techniques such as GAIN (Yoon et al., 2018) and Śmieja et al. (2018).
As baselines, we consider modern missing-data handling techniques such as GAIN and Śmieja et al. (2018), as well as their baselines.
• Zero Imputation w/o Sparsity Normalization: Missing values are replaced with zero.
• Zero Imputation w/ Sparsity Normalization (ours): Based on zero imputation, apply Sparsity Normalization (SN).
• Zero Imputation with Batch Normalization (Ioffe & Szegedy, 2015): Based on zero imputation, apply Batch Normalization (BN) only on the first layer.
• Zero Imputation with Layer Normalization (Lei Ba et al., 2016): Based on zero imputation, apply Layer Normalization (LN) only on the first layer.
• Dropout¹⁶: Missing values are replaced with zero and the remaining values are divided by $E_{(h^0,m)\in D}[\|m\|_1]/n_0$, like standard dropout (Srivastava et al., 2014). Dropout uses a single missing (drop) probability uniformly across all instances of the dataset, while SN normalizes each data instance with its own missing rate.
• Mean Imputation: Missing values are replaced with the mean of those features.
• Median Imputation: Missing values are replaced with the median of those features.
• k-Nearest Neighbors (k-NN): Missing values are replaced with the mean of those features from the $k$ nearest neighbor samples. We use $k = 5$ following Śmieja et al. (2018)'s experimental setting.
• Multivariate Imputation by Chained Equations (MICE): Proposed by Buuren & Groothuis-Oudshoorn (2010).
• SoftImpute (Mazumder et al., 2010)¹⁷
• Gaussian Mixture Model Compensator (GMMC): Proposed by Śmieja et al. (2018). In the case of GMMC, any activation function except ReLU and RBF (Radial Basis Function) is prohibited on the first hidden layer. Thus, the activation function of the first hidden layer is replaced by ReLU in all base architectures without ReLU.
• GAIN (Yoon et al., 2018)
We implement Mean Imputation, Median Imputation, MICE, k-NN, and SoftImpute with the Python package fancyimpute. We use the authors' official code for GMMC¹⁸ and GAIN¹⁹. Layer Normalization and Batch Normalization are not commonly considered in studies of handling missing data. However, we additionally take these as baselines because they have similarities to Sparsity Normalization in terms of stabilizing the statistics of a hidden layer²⁰ (see Appendix H.1.2 for a deeper analysis).
¹⁶The GMMC (Śmieja et al., 2018) used this method as their baseline.
¹⁷In Yoon et al. (2018), this method was named Matrix.
¹⁸https://github.com/lstruski/Processing-of-missing-data-by-neural-networks
¹⁹https://github.com/jsyoon0823/GAIN
²⁰We only consider LN and BN applied to the first hidden layer, because we find that the prediction performance is worse when LN or BN is applied to all the hidden layers, and it is then difficult to compare fairly with Sparsity Normalization." }, { "heading": "H.1 COLLABORATIVE FILTERING (RECOMMENDATION) DATASET", "text": "In this section, we compare SN with other missing-data handling techniques using the collaborative filtering dataset. Appendix H.1.1 compares the prediction performance of the baseline methods and SN, while Appendix H.1.2 analyzes in depth the characteristics of SN in comparison with Layer Normalization (Lei Ba et al., 2016) and Batch Normalization (Ioffe & Szegedy, 2015)." }, { "heading": "H.1.1 COMPARISON OF PREDICTION PERFORMANCE", "text": "We consider training an AutoRec (Sedhain et al., 2015) model on the Movielens 100K dataset. Most experimental settings are adopted from Section 4.1²¹. We evaluate each missing-data handling technique on both data encodings (user- or item-rating vector), as shown in Table 7. In both encodings, Sparsity Normalization performs better than or similarly to the other missing-data handling techniques. While some techniques perform worse than zero imputation depending on the encoding, Sparsity Normalization improves performance consistently for both data encodings.
It is worth mentioning that Sparsity Normalization performs statistically significantly better than all other baselines with the item vector encoding, which is considered the better encoding scheme in most collaborative filtering models (Salakhutdinov et al., 2007; Sedhain et al., 2015; Zheng et al., 2016).
H.1.2 IS BATCH NORMALIZATION OR LAYER NORMALIZATION ABLE TO SOLVE VSP?
Someone might wonder whether Batch Normalization (Ioffe & Szegedy, 2015) or Layer Normalization (Lei Ba et al., 2016) has a similar effect to Sparsity Normalization by alleviating the VSP. However, BN and LN cannot solve the VSP, even though these three methods have something in common in terms of stabilizing the statistics of the hidden layer. To validate this, we compare SN with LN and BN while controlling the strength of weight decay regularization. We use the AutoRec model with Movielens 100K for these experiments, as in Appendix H.1.1.
Figure 9 shows that the VSP occurs in all cases except SN with weak regularization (left column). The model's prediction correlates highly with the number of known entries for all methods except SN. On the other hand, strong regularization might seem to solve the VSP, as the model shows relatively constant inference regardless of the number of known entries. However, strong regularization is not an acceptable solution, because it gives the model's inference less freedom, making the prediction constant (right column). It cannot be natural for the predicted values of the model to be almost constant regardless of the input sample (we do not want a model that recommends the same movies no matter which movies a user likes/dislikes!). This trend can also be seen when tuning the hyper-parameter $\lambda$ for each model: the optimal $\lambda$ value of each model except LN and BN is found at around 500, whereas that of BN and LN is found above 500000 (inordinate regularization). In other words, unlike with SN, the VSP is not solved by LN or BN. Rather, strong regularization is able to suppress the VSP, but this is not a direct solution, since it forces the model to choose constant predicted values irrespective of the input. It is instructive to note that the trend in Figure 9 is extremely consistent across test points, as in Figure 1.
²¹Only when applying Batch Normalization and Layer Normalization, we set the number of early stopping iterations to 10,000 (10 times that of the other models) in order to prevent underfitting." }, { "heading": "H.2 ELECTRONIC MEDICAL RECORDS (EMR) DATASETS", "text": "We also compare Sparsity Normalization and the baselines on the five disease identification tasks in the NHIS dataset used in Section 4.2. The results are described in Table 3. Sparsity Normalization shows better or similar performance compared to the other baseline methods here as well." }, { "heading": "H.3 UCI DATASETS", "text": "We further compare Sparsity Normalization with other missing-data handling techniques on UCI datasets, which have relatively low missing rates and small feature dimensions (non-high-dimensional datasets). We consider the UCI datasets used in GAIN (Yoon et al., 2018) and GMMC (Śmieja et al., 2018). The datasets used in both papers can be divided into two categories: those where missing features are intentionally injected (w/ MCAR assumption) and those where missing features exist inherently (w/o MCAR assumption).
We adopt the settings of Klambauer et al.
(2017); Littwin & Wolf (2018)’s UCI exeperiments to use the same Multi Layer Perceptron (MLP) architecture as in Section 4.4: ReLU (Glorot et al., 2011) networks of 4 hidden layers with 256 units. The main purpose of imputation should be to improve prediction performance rather than imputation performance. In this reason, we just focus on the prediction performance of each missing handling techniques for UCI datasets. Because all UCI datasets used in this section are for imbalanced binary classification tasks, prediction performance is reported with AUROC rather than accuracy, and the class weight is considered in loss function. On top of that, we use Adam Optimizer in all experiments for fair comparison with baselines.\nThough we adopt datasets used in GAIN (Yoon et al., 2018) and GMMC (Śmieja et al., 2018), we report quite different performance from that of the papers. Several possible reasons are as follows. First, GAIN and GMMC did not publish the train/test dataset split, thus we use our own split which is made under the similar settings of both papers. Second, MLP is used rather than logistic regression or Radial Basis Function Network (RBFN) which are used in GAIN and GMMC respectively. It is because we think that MLP is more reasonable and widely acceptable architecture nowadays22 than the others. Furthermore, we use AUROC and class weights, unlike the GAIN and GMMC. The final possible reason is that we use Adam Optimizer for all models because SGD with learning rate decay is difficult for fair comparison when hyper-parameters are set in favor to a particular model. In these ways, we do our best to compare Sparsity Normalization and other missing handling techniques including GMMC and GAIN in the most fair and reasonable setting." }, { "heading": "H.3.1 UCI DATASETS WITH MCAR ASSUMPTION", "text": "We deliberately inject missing values into the datasets which don’t have any missing attributes internally (w/ MCAR assumption) to perform binary classification tasks. We consider Breast, Spam, and Credit datasets from GAIN (Yoon et al., 2018) and Crashes and Heart datasets from GMMC (Śmieja et al., 2018). We make 20% of all features be missing for the Breast, Spam, and Credit datasets following GAIN, and 50% for the Crashes and Heart datasets following GMMC. The summary of the datasets are described in Table 8. For Breast, Spam, and Credit datasets taken by GAIN, we perform min-max normalization which makes min and max values of each features be 0 and 1 following GAIN paper, and for Crashes and Heart datasets taken by GMMC, we perform another kind of min-max normalization which makes min and max values of each features be -1 and 1 following GMMC paper.\nThe experimental results are summarized in Table 9. It is difficult to find a significant difference in prediction performance among each missing handling techniques for datasets with small feature dimensions. The results of these experiments are also consistent with experiments of GAIN (Yoon et al., 2018). Though the GAIN showed significantly better imputation performance compared to their baseline methods, prediction performances were not statistically significant (See Table 3 of the GAIN paper, and note that they didn’t report 95% confidence interval but standard deviation). From these overall results, we conclude that SN is quite comparable for the datasets of low dimension/missing rate with MCAR assumption." 
}, { "heading": "H.3.2 UCI DATASETS WITHOUT MCAR ASSUMPTION", "text": "We compare SN and each missing handling techniques on the datasets that have internal missingness (w/o MCAR assumption). We consider Bands, Hepartitis, Horse, Mammographics, and Pima datasets experimented in the GMMC (Śmieja et al., 2018) paper (See Table 8 for statistics of the datasets). Following the GMMC paper, the min-max normalization is performed, which makes min and max values of each features be -1 and 1. As shown in Table 10, it is concluded that even without MCAR assumption, SN shows comparable results for the datasets of low dimension/missing rate." }, { "heading": "H.4 CONCLUSION", "text": "In conclusion, Sparsity Normalization is significantly superior to other missing handling techniques for the high dimensional/missing rate datasets. Sparsity Normalization performs well compared to other modern missing handling techniques even on non-high dimensional datasets. Sparsity Normalization is valuable in that it performs better than other models and does not require additional training or parameters. Moreover, Sparsity Normalization is computationally inexpensive compared to Mean or Median Imputation because sparse tensors can be used to save computational costs to calculate first hidden layer not with mean or median imputation but with zero imputation (w/ or w/o SN). The reduced computational cost by using sparse tensors is relatively higher when we deal with high-dimensional datasets." } ]
2020
null
SP:79ddf8eda1c2247a1fc928cd7f4ca3d1d95b6adc
[ "This paper proposes a semi-supervised approach to impute the labels of unlabeled samples such that a network achieves better generalization when it is trained on these labels. The proposed strategy can be easily used to improve the state-of-the-art semi-supervised methods. It mainly uses a validation data set to evaluate the updating rules of the unlabeled samples with pseudo-labels. The proposed method is applicable to both classification and regression problems including image classification and facial landmark detection tasks, which has shown in the experiments. But the following should be improved in the following aspects: ", "This paper uses a meta-learning approach to solve semi-supervised learning. The main idea is to simulate an SGD step on the loss of the meta-validation data and see how the model will perform if the pseudo-labels of unlabelled data are perturbed. Experiments on classification and regression problems show that the proposed method can improve over existing methods. The idea itself is intriguing but the derivation and some design choice are not very well-explained." ]
Recent semi-supervised learning methods have been shown to achieve results comparable to their supervised counterparts while using only a small portion of labels in image classification tasks, thanks to their regularization strategies. In this paper, we take a more direct approach to semi-supervised learning and propose learning to impute the labels of unlabeled samples such that a network achieves better generalization when it is trained on these labels. We pose the problem in a learning-to-learn formulation which can easily be incorporated into state-of-the-art semi-supervised techniques and boost their performance, especially when the labels are limited. We demonstrate that our method is applicable to both classification and regression problems including image classification and facial landmark detection tasks.
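(The full sections of this record are not included here, so the following is only a minimal sketch of the mechanism that the abstract and the reviews above describe: a simulated SGD step on pseudo-labeled data, after which the meta-validation loss is differentiated with respect to the pseudo-labels. The linear model, squared loss, learning rates, and tensor names are all illustrative assumptions.)

```python
import torch

def sq_loss(w, x, y):
    return ((x @ w - y) ** 2).mean()

torch.manual_seed(0)
w = torch.randn(5, requires_grad=True)             # model parameters
x_u, x_v = torch.randn(32, 5), torch.randn(8, 5)   # unlabeled / meta-validation inputs
y_v = torch.randn(8)                                # trusted meta-validation labels
y_u = (x_u @ w).detach().requires_grad_(True)       # initial pseudo-labels

inner_lr, label_lr = 0.1, 1.0
g = torch.autograd.grad(sq_loss(w, x_u, y_u), w, create_graph=True)[0]
w_step = w - inner_lr * g                           # simulated SGD step on pseudo-labels
meta_g = torch.autograd.grad(sq_loss(w_step, x_v, y_v), y_u)[0]
y_u = (y_u - label_lr * meta_g).detach()            # move labels to lower the validation loss
```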
[]
[ { "authors": [ "Marcin Andrychowicz", "Misha Denil", "Sergio Gomez", "Matthew W Hoffman", "David Pfau", "Tom Schaul", "Brendan Shillingford", "Nando De Freitas" ], "title": "Learning to learn by gradient descent by gradient descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Antreas Antoniou", "Amos Storkey" ], "title": "Learning to learn via self-critique", "venue": "arXiv preprint arXiv:1905.10295,", "year": 2019 }, { "authors": [ "Samy Bengio", "Yoshua Bengio", "Jocelyn Cloutier", "Jan Gecsei" ], "title": "On the optimization of a synaptic learning rule", "venue": "In Preprints Conf. Optimality in Artificial and Biological Neural Networks,", "year": 1992 }, { "authors": [ "David Berthelot", "Nicholas Carlini", "Ian Goodfellow", "Nicolas Papernot", "Avital Oliver", "Colin Raffel" ], "title": "Mixmatch: A holistic approach to semi-supervised learning", "venue": null, "year": 1905 }, { "authors": [ "Olivier Chapelle", "Bernhard Scholkopf", "Alexander Zien" ], "title": "Semi-supervised learning (chapelle, o. et al., eds.; 2006)[book reviews", "venue": "IEEE Transactions on Neural Networks,", "year": 2009 }, { "authors": [ "Terrance DeVries", "Graham W Taylor" ], "title": "Improved regularization of convolutional neural networks with cutout", "venue": "arXiv preprint arXiv:1708.04552,", "year": 2017 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Yves Grandvalet", "Yoshua Bengio" ], "title": "Semi-supervised learning by entropy minimization", "venue": "In Advances in neural information processing systems,", "year": 2005 }, { "authors": [ "Kyle Hsu", "Sergey Levine", "Chelsea Finn" ], "title": "Unsupervised learning via meta-learning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Neal Jean", "Sang Michael Xie", "Stefano Ermon" ], "title": "Semi-supervised deep kernel learning: Regression with unlabeled data by minimizing predictive variance", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Martin Koestinger", "Paul Wohlhart", "Peter M Roth", "Horst Bischof" ], "title": "Annotated facial landmarks in the wild: A large-scale, real-world database for facial landmark localization", "venue": "In International Conference on Computer Vision Workshops", "year": 2011 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Samuli Laine", "Timo Aila" ], "title": "Temporal ensembling for semi-supervised learning", "venue": "In Fifth International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Dong-Hyun Lee" ], "title": "Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks", "venue": "In Workshop on Challenges in Representation Learning, ICML,", "year": 2013 }, { "authors": [ "Xinzhe Li", "Qianru Sun", "Yaoyao Liu", "Shibao Zheng", "Tat-Seng Chua", "Bernt Schiele" ], "title": "Learning to self-train for semi-supervised few-shot classification", "venue": null, "year": 1906 }, { "authors": [ "Jonathan Lorraine", "David Duvenaud" ], "title": "Stochastic hyperparameter optimization through hypernetworks", "venue": "arXiv preprint 
{ "authors": [ "Matthew MacKay", "Paul Vicol", "Jon Lorraine", "David Duvenaud", "Roger Grosse" ], "title": "Self-tuning networks: Bilevel optimization of hyperparameters using structured best-response functions", "venue": "arXiv preprint,", "year": 2019 }, { "authors": [ "Takeru Miyato", "Shin-ichi Maeda", "Shin Ishii", "Masanori Koyama" ], "title": "Virtual adversarial training: a regularization method for supervised and semi-supervised learning", "venue": "Transactions on Pattern Analysis and Machine Intelligence,", "year": 2018 }, { "authors": [ "Avital Oliver", "Augustus Odena", "Colin A Raffel", "Ekin Dogus Cubuk", "Ian Goodfellow" ], "title": "Realistic evaluation of deep semi-supervised learning algorithms", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in PyTorch", "venue": null, "year": 2017 }, { "authors": [ "Aravind Rajeswaran", "Chelsea Finn", "Sham Kakade", "Sergey Levine" ], "title": "Meta-learning with implicit gradients", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Mengye Ren", "Eleni Triantafillou", "Sachin Ravi", "Jake Snell", "Kevin Swersky", "Joshua B Tenenbaum", "Hugo Larochelle", "Richard S Zemel" ], "title": "Meta-learning for semi-supervised few-shot classification", "venue": "arXiv preprint arXiv:1803.00676,", "year": 2018 }, { "authors": [ "Mengye Ren", "Wenyuan Zeng", "Bin Yang", "Raquel Urtasun" ], "title": "Learning to reweight examples for robust deep learning", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Evolutionary Principles in Self-referential Learning: On Learning how to Learn: the Meta-meta-meta...-hook", "venue": null, "year": 1987 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting", "venue": "The Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Flood Sung", "Yongxin Yang", "Li Zhang", "Tao Xiang", "Philip HS Torr", "Timothy M Hospedales" ], "title": "Learning to compare: Relation network for few-shot learning", "venue": "In International Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Antti Tarvainen", "Harri Valpola" ], "title": "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Andrew Gordon Wilson", "Zhiting Hu", "Ruslan Salakhutdinov", "Eric P Xing" ], "title": "Deep kernel learning", "venue": "In Artificial Intelligence and Statistics,", "year": 2016 }, { "authors": [ "Qizhe Xie", "Zihang Dai", "Eduard Hovy", "Minh-Thang Luong", "Quoc V Le" ], "title": "Unsupervised data augmentation for consistency training", "venue": "arXiv preprint arXiv:1904.12848,", "year": 2019 }, { "authors": [ "David Yarowsky" ], "title": "Unsupervised word sense disambiguation rivaling supervised methods", "venue": "In 33rd Annual Meeting of the Association for Computational Linguistics,", "year": 1995 },
{ "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Zhanpeng Zhang", "Ping Luo", "Chen Change Loy", "Xiaoou Tang" ], "title": "Learning deep representation for face alignment with auxiliary attributes", "venue": "Transactions on Pattern Analysis and Machine Intelligence,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Semi-supervised learning (SSL) (Chapelle et al., 2009) is one of the approaches to learn not only from labeled samples but also unlabeled ones. Under certain assumptions such as presence of smooth prediction functions that map data to labels, of low-dimensional manifolds that the high-dimensional data lies (Chapelle et al., 2009), SSL methods provide a way to leverage the information at unlabeled data and lessens the dependency on labels. Recent work (Tarvainen & Valpola, 2017; Miyato et al., 2018; Berthelot et al., 2019; Xie et al., 2019) have shown that semi-supervised learning by using only a small portion of labels can achieve competitive results with the supervised counterparts in image classification tasks (i.e. CIFAR10, SVHN). They built on a variation of the well-known iterative bootstrapping method (Yarowsky, 1995) where in each iteration a classifier is trained on the current set of labeled data, the learned classifier is used to generate label for unlabeled data. However, the generalization performance of this approach is known to suffer from fitting the model on wrongly labeled samples and overfitting into self-generated labels (Tarvainen & Valpola, 2017). Thus, they mitigate these issues by various regularization strategies.\nWhile there exist several regularization (Srivastava et al., 2014) and augmentation (Zhang et al., 2018; DeVries & Taylor, 2017) techniques in image recognition problems which are known to increase the generalization performance of deep networks, specific regularization strategies for semi-supervised classification are used to estimate correct labels for the unlabeled data by either encouraging the models to produce confident outputs (Lee, 2013; Grandvalet & Bengio, 2005) and/or consistent output distributions when its inputs are perturbed (Tarvainen & Valpola, 2017; Miyato et al., 2018; Berthelot et al., 2019). The assumption here is that if a good regularization strategy exists, it can enable the network to recover the correct labels for the unlabeled data and then the method can obtain similar performance with the supervised counterpart when trained on them. Though this ad-hoc paradigm is shown to be effective, it raises a natural question for a more direct approach: Can we instead encourage the network to label the unlabeled data such that the network achieves better generalization performance when trained with them?\nIn this paper, we propose a new learning-to-learn method for semi-supervised learning that can be put in a meta-learning framework to address this question. Our method involves learning an update rule to label unlabeled training samples such that training our model using these predicted labels improves its performance not only on itself but also on a meta-validation set. Crucially, our method is highly generic and can easily be incorporated to the state-of-the-art methods (Lee, 2013; Berthelot et al., 2019) and boost their performance, in particular, in the regime where the number of available labels is limited. Moreover, our method is not limited to classification problems, we show that it can\nbe extended to semi-supervised regression tasks where the output space is continuous and achieves significant performance gains." }, { "heading": "2 RELATED WORK", "text": "Semi-supervised classification. There is a rich body of literature in SSL (Chapelle et al., 2009) for classification. 
Most of the recent work (Lee, 2013; Tarvainen & Valpola, 2017; Miyato et al., 2018; Berthelot et al., 2019; Xie et al., 2019) builds on the idea of the bootstrapping technique of Yarowsky (1995), which involves iterative optimization of a classifier on a set of labeled data and refinement of its labels for the unlabeled data. This paradigm is known to overfit on noisy self-generated labels and thus to suffer from low generalization performance. To alleviate the sensitivity to inaccurate labeling, researchers have introduced various regularization strategies. Grandvalet & Bengio (2005) propose a minimum-entropy regularizer that encourages each unlabeled sample to be assigned to only one of the classes with high probability. Lee (2013) instead follows a more direct approach and uses the predicted label with the maximum probability for each unlabeled sample as the true label, called the "pseudo-label". An orthogonal regularization strategy is to encourage a classifier to be invariant to various stochastic factors and to produce consistent predictions for unlabeled data when noise is added to intermediate representations (Srivastava et al., 2014) or to the input in an adversarial manner (Miyato et al., 2018). In the same vein, Laine & Aila (2017) and Tarvainen & Valpola (2017) introduce the Π-model and Mean Teacher, which are regularized to be consistent over previous training iterations by using temporal ensembling and teacher/student networks, respectively. Recently, Berthelot et al. (2019) introduced the MixMatch algorithm, which further extends previous work by unifying consistency regularization with augmentation strategies (Zhang et al., 2018). While these recent techniques are shown to be effective on several classification benchmarks, the link between consistency regularization and generalization performance remains implicit. Here we take an orthogonal and more direct route: we learn to impute the labels of the unlabeled samples so as to improve the generalization performance of the classifier. We also show that our method can be combined with the recent SSL techniques.\nSemi-supervised regression. Some of the recent techniques in semi-supervised classification are shown to be applicable to regression problems. Jean et al. (2018) adapted various existing SSL methods such as label propagation, VAT (Miyato et al., 2018) and Mean Teacher (Tarvainen & Valpola, 2017) and studied their performance in regression. The same authors also proposed a Bayesian approach for SSL regression, based on the recent deep kernel learning method (Wilson et al., 2016), that aims at minimizing the predictive variance on unlabeled data. As in the classification task, these methods are typically ad-hoc and do not aim to generate labels for the unlabeled data that are optimized to improve generalization in regression.\nMeta-learning. Our method is also related to meta-learning (Schmidhuber, 1987; Bengio et al., 1992) and inspired by recent work (Andrychowicz et al., 2016; Finn et al., 2017) where the goal is typically to learn a new task quickly from a small amount of new data. Andrychowicz et al. (2016) and Finn et al. (2017) propose gradient-through-gradient approaches to train models that generalize well to unseen test data and can easily be adapted to new/unseen tasks. Sung et al.
(2018) introduce the relation network to learn an additional metric such that the learned model's feature embedding generalizes to unseen data and unseen tasks. Ren et al. (2018b) adopt meta-learning to learn the weights of samples to tackle the sample-imbalance problem. Lorraine & Duvenaud (2018) and MacKay et al. (2019) employ meta-learning to automatically learn the hyper-parameters of deep neural networks. Meta-learning has recently been applied to unsupervised learning (Hsu et al., 2019) and to SSL for few-shot learning (Ren et al., 2018a). Ren et al. (2018a) adapt the prototypical network to use unlabeled examples when producing prototypes, enabling it to exploit those unlabeled data to assist few-shot classification. Antoniou & Storkey (2019) propose to learn a label-free loss function, parameterized as a neural network, that enables the classifier to leverage the information in a validation set and achieves better performance in few-shot learning. Li et al. (2019) propose a meta-learning technique to initialize a self-training model and to filter out noisy pseudo-labels for semi-supervised few-shot learning. Similarly, our work also builds on the idea of optimizing for better generalization on unseen samples. However, in contrast to the existing meta-learning methods that are proposed for few-shot learning problems, we focus on semi-supervised learning in general classification and regression problems where the number of samples is not limited to a few." }, { "heading": "3 METHOD", "text": "Consider a dataset D that consists of |L| labeled samples L = {(x_1, y_1), (x_2, y_2), ..., (x_{|L|}, y_{|L|})}, which is further split into a training set T and a meta-validation set V with |T| and |V| samples respectively. We also have a set of unlabeled samples U = {x^u_i}_{i=1,...,|U|}. Further, we let Φ_θ denote a model (a function parameterized, for instance, as a deep neural network) with parameters θ that is trained to predict labels y from given samples x as y = Φ_θ(x).\nWe are interested in imputing the missing labels z of the unlabeled samples such that they are not only accurate but also improve performance at inference time when included in our training set for optimizing the model parameters θ. A straightforward approach to this problem is to first train a model to optimize the following cost function on the training set T:\n\\arg\\min_\\theta \\sum_{x \\in T} \\ell(\\Phi_\\theta(x), y) \\qquad (1)\nwhere \\ell is a task-specific loss, such as a softmax followed by a cross-entropy loss for classification, or a squared loss for regression tasks.\nOne can then use the trained model Φ_θ to impute labels for samples in the unlabeled set U as z = Φ_θ(x̃^u), as in Tarvainen & Valpola (2017) and Berthelot et al. (2019), where x̃^u denotes a randomly perturbed version of the input image x^u (e.g. random crops). Note that the label imputation procedure can be replaced with other pseudo-label prediction techniques.\nNow we can expand T by adding the unlabeled samples with their imputed pseudo-labels, i.e. A = T ∪ U, and further train θ by using Eq. (1). However, as many pseudo-labels z will likely be noisy, there is no guarantee that training on the augmented training set will improve the performance of the model on the meta-validation set V:\nC^V = \\sum_{x \\in V} \\ell(\\Phi_\\theta(x), y). \\qquad (2)\nNote that in all our experiments V is randomly sampled from T, which ensures that our method is not trained using any additional data.\nOur goal is also to minimize the expected value of the loss in Eq.
(1) with respect to the model parameters θ via an algorithm such as stochastic gradient descent (SGD). Here, we go a step further and consider how the label imputation affects the ability of SGD to optimize the loss in Eq. (2).\nWe pose this as a meta-learning problem, which is derived next. We start by considering the loss of the model Φ_θ on the augmented training set A:\nC^A = \\sum_{x \\in T} \\ell(\\Phi_\\theta(x), y) + \\sum_{x^u \\in U} \\ell(\\Phi_\\theta(x^u), z). \\qquad (3)\nWe first simulate a step of SGD using the loss in Eq. (3) to drive the update of the model parameters θ. At step t, SGD updates the parameters to minimize Eq. (3) as follows:\n\\hat\\theta_{t+1} = \\theta_t - \\eta \\nabla_\\theta \\Big( \\sum_{x \\in T} \\ell(\\Phi_\\theta(x), y) + \\sum_{x^u \\in U} \\ell(\\Phi_\\theta(x^u), z) \\Big), \\qquad (4)\nwhere \\nabla_\\theta is the gradient operator. We then wish to find z that minimizes the meta-validation objective C^V(\\hat\\theta_{t+1}) in Eq. (2), evaluated at \\hat\\theta_{t+1}. This corresponds to the following bilevel optimization problem:\n\\min_z \\sum_{x \\in V} \\ell(\\Phi_{\\hat\\theta_{t+1}}(x), y), \\quad \\text{subject to} \\quad \\hat\\theta_{t+1} = \\theta_t - \\eta \\nabla_\\theta \\Big( \\sum_{x \\in T} \\ell(\\Phi_\\theta(x), y) + \\sum_{x^u \\in U} \\ell(\\Phi_\\theta(x^u), z) \\Big). \\qquad (5)\nOur goal is to learn predicted pseudo-labels that minimize the meta-validation loss in Eq. (5). To this end, we propose two options, denoted as Option 1 and Option 2. In the former, we treat z as latent (or learnable) parameters and compute the gradients of the meta-validation loss w.r.t. z to update z, which is then used to update the model parameters θ. In the latter, z is considered as the output of the network Φ_θ, and the gradients are thus computed w.r.t. the model parameters θ for updating θ.\nOptimization. The gradients of C^V w.r.t. z for Option 1 and w.r.t. θ for Option 2 can be written respectively as:\n(Option 1) \\quad \\frac{\\partial C^V}{\\partial z} = \\sum_{x \\in V} \\frac{\\partial \\ell(\\Phi_{\\hat\\theta_{t+1}}(x), y)}{\\partial \\hat\\theta_{t+1}} \\cdot \\Big( \\frac{\\partial \\theta_t}{\\partial z} - \\eta \\nabla_z \\nabla_\\theta C^A \\Big) \\qquad (6)\n(Option 2) \\quad \\frac{\\partial C^V}{\\partial \\theta_t} = \\frac{\\partial C^V}{\\partial z} \\frac{\\partial z}{\\partial \\theta_t} = \\sum_{x \\in V} \\frac{\\partial \\ell(\\Phi_{\\hat\\theta_{t+1}}(x), y)}{\\partial \\hat\\theta_{t+1}} \\cdot \\big( I - \\eta \\nabla_\\theta \\nabla_\\theta C^A \\big) \\qquad (7)\nwhere I is the identity matrix and \\nabla_\\theta \\nabla_\\theta indicates a second-order derivative, which is supported by standard deep learning libraries such as PyTorch (Paszke et al., 2017) and TensorFlow (Abadi et al., 2015).\nWe then use the gradients in Eq. (6) or Eq. (7) to update the model parameters such that the updated model can predict pseudo-labels that minimize the meta-validation loss. In Option 1, we update the pseudo-labels as ẑ = z − η_z dC^V/dz, and the updated pseudo-labels are then used to update the model parameters θ by minimizing the loss C^U = \\sum_{x^u_i \\in U} \\ell(\\Phi_{\\theta_t}(x^u_i), \\hat z_i). In Option 2, the gradient in Eq. (7) is used to update the model parameters as θ_{t+1} = θ_t − η ∂C^V/∂θ_t.\nWe further depict the optimization details of Options 1 and 2 in Alg. 1. We first estimate z with the current model θ_t, update the model to optimize the loss C^A, and then re-estimate z with θ̂_t. This part is so far similar to the self-labeling method of Lee (2013). The loss function C^A can trivially be replaced by those used in recent work such as Mean Teacher (Tarvainen & Valpola, 2017) or MixMatch (Berthelot et al., 2019); we applied our method in conjunction with all three loss functions and report results in Section 4. Next we apply the updated model to a mini-batch of unlabeled samples, compute the loss, and simulate an SGD step w.r.t. this loss. In Option 1, we initialize z with the output of Φ_θ(x̃^u) and update it by using the gradient from Eq. (6). In Option 2, the meta-update is computed w.r.t. θ and used to update the model parameters by using the gradient from Eq. (7).
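Before the full pseudo-code in Algorithm 1 below, the two updates can be made concrete with a simplified PyTorch-style sketch of one training step. This is our own illustration, not the released implementation: the toy functional classifier, the Gaussian input perturbation, and the linear ramp-up helper for λ are stand-ins, and y_l, y_v are assumed to be class-index tensors.

```python
import torch
import torch.nn.functional as F

def linear_rampup(step, rampup_steps, max_weight=1.0):
    # Linear schedule for the unsupervised loss weight lambda (cf. Appendix A).
    return max_weight * min(step / float(rampup_steps), 1.0)

def forward(params, x):
    # Toy functional stand-in for the classifier Phi_theta (2 -> 4 -> 2).
    w1, b1, w2, b2 = params
    return F.leaky_relu(x @ w1 + b1) @ w2 + b2

def train_step(params, x_l, y_l, x_u, x_v, y_v, step, option=2,
               eta=0.03, alpha=0.03, eta_z=1.0, rampup_steps=100):
    # params: list of tensors with requires_grad=True (theta_t).
    lam = linear_rampup(step, rampup_steps)

    # Pseudo-labels from the current model on a perturbed input (fixed targets).
    with torch.no_grad():
        z = forward(params, x_u + 0.01 * torch.randn_like(x_u)).softmax(-1)

    # One SGD step on the augmented loss C^A (Eqs. (3)-(4)); keep the graph.
    loss_a = F.cross_entropy(forward(params, x_l), y_l) \
        + lam * F.mse_loss(forward(params, x_u).softmax(-1), z)
    grads = torch.autograd.grad(loss_a, params, create_graph=True)
    theta_hat = [p - eta * g for p, g in zip(params, grads)]

    # Re-estimate pseudo-labels with theta_hat; they stay differentiable so the
    # meta-gradient can flow through them (the dz/dtheta path of Eq. (7)).
    z_hat = forward(theta_hat, x_u + 0.01 * torch.randn_like(x_u)).softmax(-1)

    # Simulate an SGD step on the unlabeled loss C (Alg. 1).
    loss_c = F.mse_loss(forward(theta_hat, x_u).softmax(-1), z_hat)
    grads_c = torch.autograd.grad(loss_c, theta_hat, create_graph=True)
    theta_sim = [p - alpha * g for p, g in zip(theta_hat, grads_c)]

    # Meta-validation loss C^V at the simulated parameters (Eqs. (2)/(5)).
    loss_v = F.cross_entropy(forward(theta_sim, x_v), y_v)

    if option == 2:
        # Eq. (7): second-order gradient of C^V back to theta_hat.
        meta_grads = torch.autograd.grad(loss_v, theta_hat)
        new_params = [p - eta * g for p, g in zip(theta_hat, meta_grads)]
    else:
        # Eq. (6): correct the pseudo-labels, then retrain on them (C^U).
        (dz,) = torch.autograd.grad(loss_v, z_hat)
        z_corr = (z_hat - eta_z * dz).detach()
        loss_u = F.mse_loss(forward(theta_hat, x_u).softmax(-1), z_corr)
        grads_u = torch.autograd.grad(loss_u, theta_hat)
        new_params = [p - eta * g for p, g in zip(theta_hat, grads_u)]
    return [p.detach().requires_grad_() for p in new_params]
```

In this sketch, the Option 1 branch corrects the pseudo-labels before retraining, while Option 2 pushes the meta-gradient straight into the parameters; both rely on create_graph=True so that the second-order term in Eqs. (6)-(7) is available.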
Algorithm 1: Pseudo-code of our method for two variants.\nInput: T, U, V ▷ training, unlabeled, and meta-validation sets respectively\nRequired: model function Φ and its initial parameters θ_0; at each step t, a batch of training data B^T_t = {(x_i, y_i)}, a batch of unlabeled data B^U_t = {x^u_i}, and another batch of training data (treated as meta-validation data during training) B^V_t = {(x_i, y_i)}; learning rates α, η, η_z; weight λ.\nfor t = 0, ..., K − 1 do\n  z_i = Φ_{θ_t}(x̃^u_i) for all x^u_i ∈ B^U_t ▷ estimate pseudo-labels; x̃^u_i is a randomly perturbed version of x^u_i\n  C^A = Σ_i ℓ(Φ_{θ_t}(x_i), y_i) + λ Σ_i ℓ(Φ_{θ_t}(x^u_i), z_i)\n  θ̂_t = θ_t − η dC^A/dθ_t ▷ update the model and move to the meta update\n  z_i = Φ_{θ̂_t}(x̃^u_i) ▷ re-estimate pseudo-labels using the updated model\n  C = Σ_i ℓ(Φ_{θ̂_t}(x^u_i), z_i)\n  θ̂_{t+1} = θ̂_t − α ∂C/∂θ̂_t ▷ simulate an SGD step\n  C^V(θ̂_{t+1}) = Σ_i ℓ(Φ_{θ̂_{t+1}}(x_i), y_i) ▷ evaluate on meta-validation data\n  if option 1 then\n    ẑ = z − η_z dC^V/dz ▷ correct the pseudo-labels\n    C^U = Σ_i ℓ(Φ_{θ_t}(x^u_i), ẑ_i)\n    θ_{t+1} = θ̂_t − η ∂C^U/∂θ̂_t ▷ update the model with corrected pseudo-labels\n  else if option 2 then\n    θ_{t+1} = θ̂_t − η ∂C^V/∂θ̂_t ▷ update the model directly with the meta-gradient\nend for\nOutput: θ_K" }, { "heading": "4 EXPERIMENTS", "text": "We evaluate the performance of our method on multiple classification and regression benchmarks and analyze the results below. Note that we use the validation sets only for tuning hyperparameters, such as learning rates, and for early stopping. The meta-validation sets (C^V in Eq. (5)) are sampled from the training set in all our experiments, as in Rajeswaran et al. (2019); Finn et al. (2017); Vinyals et al. (2016); Sung et al. (2018). During training, we sample two mini-batches of data from the same training set at each iteration: one acts as the training set and the other acts as the meta-validation set, which ensures that our method is not trained on more data than the baselines. We train the model on the former and optimize the pseudo-labels on the latter.1" }, { "heading": "4.1 CLASSIFICATION RESULTS", "text": "Toy datasets. We first validate our method on a synthetic binary classification dataset, two noisy concentric circles,2 where each circle corresponds to a class. To this end, we generate 10,000 samples and randomly pick 50, 2,500 and 1,000 for the training, testing and validation sets respectively, and use the rest of the samples as unlabeled data. Figure 1 illustrates the dataset, where the labels for both test (top row) and unlabeled data (bottom row) are indicated in blue and orange. The labeled training samples for the two classes are shown in green and pink in GT.\nExperiment 1. In Fig. 1 we depict the predictions of two baselines, supervised learning (SL), which is trained only on the labeled training set (i.e. the green and pink samples), and Pseudo-Labeling (PL) (Lee, 2013), which iteratively first trains a network on the labeled and unlabeled data and then re-labels the unlabeled samples, as well as two variants of our method. For all the methods, we use a shallow, small-capacity network containing two fully-connected layers (2 → 4 → 2), one leaky ReLU activation layer in between, and a sigmoid function at the end. We first observe that SL misclassifies most of the outer circle points and predicts them as inner circle labels. This is due to the limited labeled data and its non-uniform spread over the input space. We see that PL significantly improves over SL by leveraging the unlabeled data.
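For reference, the data and classifier just described can be sketched in a few lines; the circles generator follows the scikit-learn footnote above, while the noise level and random seed are our assumptions:

```python
import torch
import torch.nn as nn
from sklearn.datasets import make_circles

# 10,000 points on two noisy concentric circles; each circle is one class.
X, y = make_circles(n_samples=10000, noise=0.1, factor=0.5, random_state=0)
X = torch.tensor(X, dtype=torch.float32)
y = torch.tensor(y, dtype=torch.long)

# Splits from Experiment 1: 50 train / 2,500 test / 1,000 validation;
# the remaining 6,450 points serve as unlabeled data.
perm = torch.randperm(10000)
train_idx, test_idx = perm[:50], perm[50:2550]
val_idx, unlabeled_idx = perm[2550:3550], perm[3550:]

# Shallow, small-capacity classifier: two fully-connected layers (2 -> 4 -> 2)
# with one leaky ReLU in between and a sigmoid at the end.
model = nn.Sequential(nn.Linear(2, 4), nn.LeakyReLU(), nn.Linear(4, 2), nn.Sigmoid())
```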
However, many of PL's predictions are still inaccurate, especially in regions of low label density and high ambiguity. This occurs because the iterative relabeling procedure is sensitive to the initial labeling of the unlabeled samples and thus gets stuck in a non-optimal minimum. Both variants of our method largely overcome the bias in the training data by labeling the unlabeled samples in a way that leads to accurate predictions on the meta-validation set. This shows that the meta-updates are successful in correcting the imputed labels based on the signal from the meta-validation set (see Fig. 1, bottom row) and in preventing our method from simply overfitting to the pseudo-labels. While both variants agree on most of the samples, they slightly differ in the low-density regions (i.e. the area between the circles).\nCIFAR-10 & -100. We also evaluate our method on the CIFAR image classification benchmarks (Krizhevsky et al., 2009), which are commonly used for the evaluation of both supervised and semi-supervised classification. Both datasets contain 50,000 training and 10,000 testing samples. In our experiments, we strictly follow the training and testing protocol for semi-supervised learning proposed by previous work (Oliver et al., 2018; Berthelot et al., 2019), where 5,000 training samples are used as validation data and the remaining 45,000 training samples are split into labeled and unlabeled training sets. As in Oliver et al. (2018), we randomly pick |T| samples as labeled data and the rest (|U| = 45,000 − |T| samples) are unlabeled data. We report results for multiple training data regimes, |T| = 250, 500, 1000, 2000, 4000 on CIFAR-10 and |T| = 1000, 2000, 3000, 4000, 5000 on CIFAR-100. Note that we do not use the validation data in the training of the model parameters, but only for hyperparameter selection.\n1 Our implementation in PyTorch will be available at https://anonymous.4open.science/r/a4721095-8266-4038-9cc6-8791ef61c610/.\n2 https://scikit-learn.org\nExperiment 2. First we compare our method to the SL and PL baselines for various numbers of training samples and report their performance in Table 1. We use a 13-layer conv-net as the classifier for all the methods (see Appendix A for details). We observe that PL improves over SL by leveraging the unlabeled data in all the settings. Both our methods (option 1/2), which build on PL, achieve substantial performance gains over PL. Interestingly, the relative improvements are higher in the more challenging cases where the labeled data is limited. We also observe that option 2, which involves updating the model parameters with the meta-gradient, is a better strategy for these benchmarks.\nExperiment 3. Here we show that our method can be incorporated into state-of-the-art methods such as Mean Teacher (MT) (Tarvainen & Valpola, 2017) and MixMatch (MM) (Berthelot et al., 2019), which use more sophisticated backbone networks, augmentation and regularization strategies, and also boosts their performance. For this experiment, we follow the implementation of Berthelot et al. (2019): we adopt a more competitive backbone, WideResNet-28-2, and use the Adam optimizer along with standard data augmentation techniques (see Appendix A for more details).\nTable 2 depicts the classification error rates for several state-of-the-art techniques, including the Π model (Laine & Aila, 2017), PL (Lee, 2013), VAT (Miyato et al., 2018), MT (Tarvainen & Valpola, 2017) and MM (Berthelot et al., 2019).
Note that some methods do not report results on CIFAR-100. All results except MT and MM are taken from the original papers. As our methods are built on MM and MT, we show the results of our own implementation, which are on par with the published ones. From the table, we see that our method achieves a significant improvement over the MT baseline, especially in the low-label regime: up to 11 points in the case of 250 labels on CIFAR-10. Again, our second variant consistently outperforms the first one when used with MT on both CIFAR-10 and -100, whereas the more competitive MM baseline already produces accurate pseudo-labels on CIFAR-10 and so the two options perform comparably. The reason is that in option 2 we obtain the meta-gradient on the model parameters and update the model directly, whereas in option 1 we first update the pseudo-labels and then train the model on them. Though updating the pseudo-labels can improve performance (e.g. MT + option 1 vs. MT), some of the labels can still be noisy after the update, and optimizing the model on the noisy ones may degrade performance. In contrast, in option 2 the meta-gradient is applied to the network parameters directly, which alleviates the potential pseudo-label noise.\nIn the case of MM, our method is able to boost its performance only with few labels (250 for CIFAR-10 and 1000 for CIFAR-100) and performs comparably or slightly worse with more labels. We believe that it is harder to improve the performance of MM on CIFAR, as its performance quickly approaches that of the supervised counterpart. We leave the evaluation of MM on a more challenging benchmark as future work, and below we show results for tasks where MM is not applicable." }, { "heading": "4.2 REGRESSION RESULTS", "text": "AFLW. Next we move to a regression task and use the Annotated Facial Landmarks in the Wild (AFLW) dataset (Koestinger et al., 2011; Zhang et al., 2015), where we aim at predicting the locations of 5 facial landmarks in images. AFLW is originally designed for supervised facial landmark detection. We use the official train and test splits, randomly pick 10% of the samples of the original training set as the validation set (used only for hyperparameter tuning and early stopping), and use the rest of this data as labeled and unlabeled data in our experiments. We evaluate the baselines and our method with 1%, 2%, 5% and 10% of the training data as labeled data, and report the standard Mean Square Error (MSE) normalized by the inter-ocular distance, as in Zhang et al. (2015).\nExperiment 4. For the regression task, we adopt the TCDCN backbone architecture of Zhang et al. (2015) with the SGD optimizer and standard data augmentation. We train all methods for 22,500 steps, set the initial learning rate to 0.03, and reduce it by a factor of 0.1 every 750 steps. The momentum and the weight decay are set to 0.9 and 0.0005. Here we use SL, PL and MT as the baselines and also build our method on both PL and MT. Note that MM is not applicable to this task, as mixing up two face images doubles the number of landmarks. Table 3 depicts the results for the baselines and ours in terms of mean error rate.\nFirst we observe that supervised learning on 1% of the labels is very challenging and obtains only 16.31%, which is on par with the performance of simply taking the mean of each facial landmark over all training samples (16.58%). As expected, using the unlabeled face images is beneficial, and both PL and MT significantly improve over SL.
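For context, the pseudo-labels that PL, MT and our method rely on for the unlabeled faces are built by the crop-and-shift procedure detailed in Appendix A.3; a minimal sketch follows, with hypothetical names and an assumed shift range:

```python
import random
import torch

def pseudo_landmarks(model, image, crop_size, max_shift=8):
    # `model` maps a (C, H, W) face crop to 5 (x, y) landmark coordinates;
    # `max_shift` is an assumed range (in pixels) for the random window shift.
    dx = random.randint(0, max_shift)
    dy = random.randint(0, max_shift)
    crop = image[:, dy:dy + crop_size, dx:dx + crop_size]

    # Predict on the shifted crop, then subtract the shift so the landmarks
    # are expressed in the original image's coordinate frame; the result is
    # the MSE target for the model's prediction on the original image.
    with torch.no_grad():
        pred = model(crop.unsqueeze(0)).view(5, 2)
    return pred - torch.tensor([float(dx), float(dy)])
```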
Our method achieves consistent improvement over PL and MT for different portions of labels. This strongly suggests that our method is able to refine the estimated landmarks of PL and MT on the unlabeled images and further improve their performance. We also analyze the effect of the meta-updates during training in Fig. 2. To this end, we first plot the regression loss on the meta-validation batch before and after the meta-update in the left figure. This clearly shows that updating the pseudo landmark positions in the unlabeled images leads to better accuracy on the meta-validation samples. Second, we show the test loss for the same models in the right plot. It is clear that the meta-updated model does not overfit to the meta-validation set and generalizes better to the test images. We also visualize the effect of the meta-updates on the landmarks on example test images and observe that the meta-updates help them get closer to the ground-truth ones.\nThough both options 1 and 2 improve over the baselines, here option 1 outperforms option 2 on the regression problem, in contrast to the classification tasks above. This is possibly due to the fact that the output space of the landmarks is continuous and less constrained than the label space for classification, which makes option 2 more prone to overfitting to the validation set. A promising direction worth investigating in the future is to alleviate the overfitting by using a regularizer that enforces structure in the output space.\nFigure 2: Illustration of the meta update's effect on facial landmark detection (left panel: meta loss vs. epoch; right panel: test loss vs. epoch). Cyan curves are the loss of the model after the meta update, while orange curves are the model's loss before the meta update. Best seen in color.\nWe also illustrate success/failure cases on the test images in Fig. 3 and depict the ground-truth and predicted landmarks of MT and our method when trained with 1% of the labeled data. The performance difference is visually significant, and our method outputs more accurate landmarks than MT. The bottom row shows cases of extreme pose variation and occlusion where both MT and ours fail to achieve accurate predictions." }, { "heading": "5 CONCLUSION", "text": "In this paper we have proposed a general semi-supervised learning framework that learns to impute the labels of unlabeled data such that training a deep network on these labels improves its generalization ability. Our method can easily be used in conjunction with several state-of-the-art semi-supervised methods and extended to multiple classification and regression tasks such as image classification and facial landmark detection. We show that our method achieves significant performance gains over competitive baselines on challenging benchmarks, especially when the labeled data is scarce. As future work, we plan to extend our method to semi-supervised learning in structured output problems." }, { "heading": "A APPENDIX", "text": "To compare our method with existing semi-supervised learning methods, we implement our method and the baselines in PyTorch." }, { "heading": "A.1 TOY EXPERIMENTS", "text": "Experiment 1. To conduct experiments on the concentric circles dataset, we use a small network containing two fully-connected layers (2 → 4 → 2) and one leaky ReLU activation layer. We compare our option 1 and option 2 to Pseudo-Label (PL) (Lee, 2013) and the supervised learning approach, and we train all methods for 20 × 20 steps (i.e. 20 epochs and 20 steps per epoch). We use Adam as the optimizer and the learning rate is 0.03. The maximum of the unsupervised loss weight λ for both PL and ours is set to 1, and we adopt the linear schedule for increasing the weight on the unsupervised loss. Specifically, λ is initialized at 0 and increases to 1 gradually over 20 × 5 steps by the linear schedule, as in Berthelot et al. (2019). During the meta update in both our option 1 and option 2, we estimate the pseudo-labels of unlabeled samples by Gumbel-softmax and compute the Mean Square Error (MSE) between the prediction and the pseudo-labels as the unlabeled loss. η_z is 1 for all experiments.
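A minimal sketch of this Gumbel-softmax unlabeled loss is given below; the temperature value and the detach on the pseudo-labels are our assumptions, and this is not the released code:

```python
import torch
import torch.nn.functional as F

def toy_unlabeled_loss(model, x_u, tau=1.0):
    # Pseudo-labels via a Gumbel-softmax over the model's logits; the MSE
    # between the model's (softmax) prediction and these pseudo-labels is the
    # unlabeled loss. Temperature and detach behavior are our assumptions.
    z = F.gumbel_softmax(model(x_u), tau=tau).detach()
    pred = model(x_u).softmax(dim=-1)
    return F.mse_loss(pred, z)
```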
We use the same consistency weight and ramp-up schedule (linear-schedule) to increase the unsupervised loss weight λ as Berthelot et al. (2019)." }, { "heading": "A.3 REGRESSION EXPERIMENTS ON AFLW", "text": "Experiment 4. We adopt the TCDCN proposed in Zhang et al. (2015) as the network and SGD as optimizer. An illustration of the TCDCN’s architecture is shown in Fig. 6. We adapt the Pseudo Label (PL), Mean Teacher (MT) and the supervised learning methods as the baselines and our method is built on PL as well as MT. To estimate the loss on an unlabeled image, we firstly crop an image from the original image by moving the cropping window a random number of pixels. We then estimate the location of landmarks on the augmented image and subtract the number of moving pixels, resulting in the pseudo label for the original image. We then apply MSE to the prediction of the original image and the pseudo labels to estimate the loss. We use the linear-schedule to update the unsupervised loss weight λ to 1 in 9000 steps." }, { "heading": "AFLW Dataset", "text": "" } ]
2019
null
SP:c57202d97644413a7a1586156e0ea2d88950cc80
[ "The paper introduces an interesting study that tries to explain why conditional text generation models with autoregressive decoders benefit from self-training on pseudo labels created from the same model. The paper introduces and verifies two hypotheses: 1) Decoding strategy: Since beam search is a biased estimator sampling using it doesn't reflect the learned distribution from the model and hence variations happen that benefit learning. (this partially help).", "This paper presents a self-training approach for improving sequence-to-sequence tasks. As a preliminary experiment, this study randomly sampled 100k sentences from WMT 2014 English-German dataset (WMT100K, hereafter), trained a baseline (Transformer) model on WMT100K, and applied self-training methods on the remaining English sentences as the unlabeled monolingual data. After exploring different procedures for self-training, this study uses the fine-tuning strategy: train a model on the supervision data; build pseudo parallel data by predicting translations for all unlabeled data using the trained model; train a new model on the pseudo parallel data; and fine-tune the new model on the supervision data. This strategy alone gave a 3 points improvement of BLEU." ]
Self-training is one of the earliest and simplest semi-supervised methods. The key idea is to augment the original labeled dataset with unlabeled data paired with the model’s prediction (i.e. the pseudo-parallel data). While self-training has been extensively studied on classification problems, in complex sequence generation tasks (e.g. machine translation) it is still unclear how self-training works due to the compositionality of the target space. In this work, we first empirically show that self-training is able to decently improve the supervised baseline on neural sequence generation tasks. Through careful examination of the performance gains, we find that the perturbation on the hidden states (i.e. dropout) is critical for self-training to benefit from the pseudo-parallel data, which acts as a regularizer and forces the model to yield close predictions for similar unlabeled inputs. Such an effect helps the model correct some incorrect predictions on unlabeled data. To further encourage this mechanism, we propose to inject noise into the input space, resulting in a “noisy” version of self-training. Empirical study on standard machine translation and text summarization benchmarks shows that noisy self-training is able to effectively utilize unlabeled data and improve the performance of the supervised baseline by a large margin.1
[ { "affiliations": [], "name": "REVISITING SELF-TRAINING" }, { "affiliations": [], "name": "Junxian He" }, { "affiliations": [], "name": "Jiatao Gu" }, { "affiliations": [], "name": "Jiajun Shen" } ]
[ { "authors": [ "Avrim Blum", "Tom Mitchell" ], "title": "Combining labeled and unlabeled data with co-training", "venue": "In Proceedings of the eleventh annual conference on Computational learning theory,", "year": 1998 }, { "authors": [ "Olivier Chapelle", "Alexander Zien" ], "title": "Semi-supervised classification by low density separation", "venue": "In Proceedings of AISTATS,", "year": 2005 }, { "authors": [ "Olivier Chapelle", "Bernhard Scholkopf", "Alexander Zien" ], "title": "Semi-supervised learning (chapelle, o. et al., eds.; 2006)[book reviews", "venue": "IEEE Transactions on Neural Networks,", "year": 2009 }, { "authors": [ "Kevin Clark", "Minh-Thang Luong", "Christopher D Manning", "Quoc V Le" ], "title": "Semi-supervised sequence modeling with cross-view training", "venue": "In Proceedings of EMNLP,", "year": 2018 }, { "authors": [ "Sergey Edunov", "Myle Ott", "Michael Auli", "David Grangier" ], "title": "Understanding back-translation at scale", "venue": "In Proceedings of EMNLP,", "year": 2018 }, { "authors": [ "Yves Grandvalet", "Yoshua Bengio" ], "title": "Semi-supervised learning by entropy minimization", "venue": "In Proceedings of NeurIPS,", "year": 2005 }, { "authors": [ "Francisco Guzmán", "Peng-Jen Chen", "Myle Ott", "Juan Pino", "Guillaume Lample", "Philipp Koehn", "Vishrav Chaudhary", "Marc’Aurelio Ranzato" ], "title": "The FLoRes evaluation datasets for low-resource machine translation: Nepali-english and sinhala-english", "venue": "In Proceedings of EMNLP,", "year": 2019 }, { "authors": [ "Geoffrey E Hinton", "Nitish Srivastava", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan R Salakhutdinov" ], "title": "Improving neural networks by preventing co-adaptation of feature detectors", "venue": "arXiv preprint arXiv:1207.0580,", "year": 2012 }, { "authors": [ "Zhongqiang Huang", "Mary Harper" ], "title": "Self-training pcfg grammars with latent annotations across languages", "venue": "In Proceedings of EMNLP,", "year": 2009 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Durk P Kingma", "Shakir Mohamed", "Danilo Jimenez Rezende", "Max Welling" ], "title": "Semi-supervised learning with deep generative models", "venue": "In Proceedings of NeurIPS,", "year": 2014 }, { "authors": [ "Samuli Laine", "Timo Aila" ], "title": "Temporal ensembling for semi-supervised learning", "venue": "In Proceedings of ICLR,", "year": 2017 }, { "authors": [ "Guillaume Lample", "Myle Ott", "Alexis Conneau", "Ludovic Denoyer" ], "title": "Phrase-based & neural unsupervised machine translation", "venue": "In Proceedings of EMNLP,", "year": 2018 }, { "authors": [ "Dong-Hyun Lee" ], "title": "Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks", "venue": "In Workshop on Challenges in Representation Learning,", "year": 2013 }, { "authors": [ "Chin-Yew Lin" ], "title": "Rouge: A package for automatic evaluation of summaries", "venue": "In Text summarization branches out,", "year": 2004 }, { "authors": [ "David McClosky", "Eugene Charniak", "Mark Johnson" ], "title": "Effective self-training for parsing", "venue": "In Proceedings of NAACL,", "year": 2006 }, { "authors": [ "Yishu Miao", "Phil Blunsom" ], "title": "Language as a latent variable: Discrete generative models for sentence compression", "venue": "In Proceedings of EMNLP,", "year": 2016 }, { "authors": [ "Takeru Miyato", "Andrew M Dai", "Ian Goodfellow" ], 
"title": "Adversarial training methods for semisupervised text classification", "venue": "In Proceedings of ICLR,", "year": 2017 }, { "authors": [ "Takeru Miyato", "Shin-ichi Maeda", "Masanori Koyama", "Shin Ishii" ], "title": "Virtual adversarial training: a regularization method for supervised and semi-supervised learning", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "Myle Ott", "Sergey Edunov", "Alexei Baevski", "Angela Fan", "Sam Gross", "Nathan Ng", "David Grangier", "Michael Auli" ], "title": "fairseq: A fast, extensible toolkit for sequence modeling", "venue": "In Proceedings of NAACL (Demo Track),", "year": 2019 }, { "authors": [ "Kishore Papineni", "Salim Roukos", "Todd Ward", "Wei-Jing Zhu" ], "title": "BLEU: a method for automatic evaluation of machine translation", "venue": "In Proceedings of ACL,", "year": 2002 }, { "authors": [ "Antti Rasmus", "Mathias Berglund", "Mikko Honkala", "Harri Valpola", "Tapani Raiko" ], "title": "Semisupervised learning with ladder networks", "venue": "In Proceedings of NeurIPS,", "year": 2015 }, { "authors": [ "Roi Reichart", "Ari Rappoport" ], "title": "Self-training for enhancement and domain adaptation of statistical parsers trained on small datasets", "venue": "In Proceedings of ACL,", "year": 2007 }, { "authors": [ "Alexander M Rush", "Sumit Chopra", "Jason Weston" ], "title": "A neural attention model for abstractive sentence summarization", "venue": "In Proceedings of EMNLP,", "year": 2015 }, { "authors": [ "H Scudder" ], "title": "Probability of error of some adaptive pattern-recognition machines", "venue": "IEEE Transactions on Information Theory,", "year": 1965 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Improving neural machine translation models with monolingual data", "venue": "In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics,", "year": 2015 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Neural machine translation of rare words with subword units", "venue": "In Proceedings of ACL,", "year": 2016 }, { "authors": [ "Kaitao Song", "Xu Tan", "Tao Qin", "Jianfeng Lu", "Tie-Yan Liu" ], "title": "MASS: Masked sequence to sequence pre-training for language generation", "venue": "In Proceedings of ICML,", "year": 2019 }, { "authors": [ "Nicola Ueffing" ], "title": "Using monolingual source-language data to improve mt performance", "venue": "In IWSLT,", "year": 2006 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Proceedings of NeurIPS,", "year": 2017 }, { "authors": [ "Qizhe Xie", "Zihang Dai", "Eduard Hovy", "Minh-Thang Luong", "Quoc V Le" ], "title": "Unsupervised data augmentation", "venue": "arXiv preprint arXiv:1904.12848,", "year": 2019 }, { "authors": [ "David Yarowsky" ], "title": "Unsupervised word sense disambiguation rivaling supervised methods", "venue": "In Proceedings of ACL,", "year": 1995 }, { "authors": [ "Pengcheng Yin", "Chunting Zhou", "Junxian He", "Graham Neubig" ], "title": "StructVAE: Tree-structured latent variable models for semi-supervised semantic parsing", "venue": "In Proceedings of EMNLP,", "year": 2018 }, { "authors": [ "Jiajun Zhang", "Chengqing Zong" ], "title": "Exploiting source-side monolingual data in neural machine translation", "venue": "In Proceedings of EMNLP,", "year": 
2016 }, { "authors": [ "Yan Zhou", "Sally Goldman" ], "title": "Democratic co-learning", "venue": "IEEE International Conference on Tools with Artificial Intelligence,", "year": 2004 }, { "authors": [ "Zhi-Hua Zhou", "Ming Li" ], "title": "Tri-training: Exploiting unlabeled data using three classifiers", "venue": "IEEE Transactions on Knowledge & Data Engineering,", "year": 2005 }, { "authors": [ "Xiaojin Zhu", "Andrew B Goldberg" ], "title": "Introduction to semi-supervised learning", "venue": "Synthesis lectures on artificial intelligence and machine learning,", "year": 2009 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks often require large amounts of labeled data to achieve good performance. However, acquiring labels is a costly process, which motivates research on methods that can effectively utilize unlabeled data to improve performance. Towards this goal, semi-supervised learning (Chapelle et al., 2009) methods that take advantage of both labeled and unlabeled data are a natural starting point. In the context of sequence generation problems, semi-supervised approaches have been shown to work well in some cases. For example, back-translation (Sennrich et al., 2015) makes use of the monolingual data on the target side to improve machine translation systems, latent variable models (Kingma et al., 2014) are employed to incorporate unlabeled source data to facilitate sentence compression (Miao & Blunsom, 2016) or code generation (Yin et al., 2018).\nIn this work, we revisit a much older and simpler semi-supervised method, self-training (ST, Scudder (1965)), where a base model trained with labeled data acts as a “teacher” to label the unannotated data, which is then used to augment the original small training set. Then, a “student” model is trained with this new training set to yield the final model. Originally designed for classification problems, common wisdom suggests that this method may be effective only when a good fraction of the predictions on unlabeled samples are correct, otherwise mistakes are going to be reinforced (Zhu & Goldberg, 2009). In the field of natural language processing, some early work have successfully applied self-training to word sense disambiguation (Yarowsky, 1995) and parsing (McClosky et al., 2006; Reichart & Rappoport, 2007; Huang & Harper, 2009).\nHowever, self-training has not been studied extensively when the target output is natural language. This is partially because in language generation applications (e.g. machine translation) hypotheses are often very far from the ground-truth target, especially in low-resource settings. It is natural to\n∗Equal Contribution. Most of the work is done during Junxian’s internship at FAIR. 1Code is available at https://github.com/jxhe/self-training-text-generation.\nAlgorithm 1 Classic Self-training\n1: Train a base model fθ on L = {xi,yi}li=1 2: repeat 3: Apply fθ to the unlabeled instances U 4: Select a subset S ⊂ {(x, fθ(x))|x ∈ U} 5: Train a new model fθ on S ∪ L 6: until convergence or maximum iterations are reached\nask whether self-training can be useful at all in this case. While Ueffing (2006) and Zhang & Zong (2016) explored self-training in statistical and neural machine translation, only relatively limited gains were reported and, to the best of our knowledge, it is still unclear what makes self-training work. Moreover, Zhang & Zong (2016) did not update the decoder parameters when using pseudo parallel data noting that “synthetic target parts may negatively influence the decoder model of NMT”.\nIn this paper, we aim to answer two questions: (1) How does self-training perform in sequence generation tasks like machine translation and text summarization? Are “bad” pseudo targets indeed catastrophic for self-training? (2) If self-training helps improving the baseline, what contributes to its success? 
What are the important ingredients to make it work?\nTowards this end, we first evaluate self-training on a small-scale machine translation task and empirically observe significant performance gains over the supervised baseline (§3.2), then we perform a comprehensive ablation analysis to understand the key factors that contribute to its success (§3.3). We find that the decoding method to generate pseudo targets accounts for part of the improvement, but more importantly, the perturbation of hidden states – dropout (Hinton et al., 2012) – turns out to be a crucial ingredient to prevent self-training from falling into the same local optimum as the base model, and this is responsible for most of the gains. To understand the role of such noise in self-training, we use a toy experiment to analyze how noise effectively propagates labels to nearby inputs, sometimes helping correct incorrect predictions (§4.1). Motivated by this analysis, we propose to inject additional noise by perturbing also the input. Comprehensive experiments on machine translation and text summarization tasks demonstrate the effectiveness of noisy self-training." }, { "heading": "2 SELF-TRAINING", "text": "Formally, in conditional sequence generation tasks like machine translation, we have a parallel dataset L = {xi,yi}li=1 and a large unlabeled dataset U = {xj} l+u j=l+1, where |U | > |L| in most cases. As shown in Algorithm 1, classic self-training starts from a base model trained with parallel data L, and iteratively applies the current model to obtain predictions on unlabeled instances U , then it incorporates a subset of the pseudo parallel data S to update the current model.\nThere are two key factors: (1) Selection of the subset S. S is usually selected based on some confidence scores (e.g. log probability) (Yarowsky, 1995) but it is also possible for S to be the whole pseudo parallel data (Zhu & Goldberg, 2009). (2) Combination of real and pseudo parallel data. A new model is often trained on the two datasets jointly as in back-translation, but this introduces an additional hyper-parameter to weigh the importance of the parallel data relative to the pseudo data (Edunov et al., 2018). Another way is to treat them separately – first we train the model on pseudo parallel data S, and then fine-tune it on real data L. In our preliminary experiments, we find that the separate training strategy with the whole pseudo parallel dataset (i.e. S = {(x, fθ(x))|x ∈ U}) produces better or equal performance for neural sequence generation while being simpler. Therefore, in the remainder of this paper we use this simpler setting. We include quantitative comparison regarding joint training, separate training, and pseudo-parallel data filtering in Appendix B, where separate training is able to match (or surpass) the performance of joint training.\nIn self-training, the unsupervised loss LU from unlabeled instances is defined as:\nLU = −Ex∼p(x)Ey∼pθ∗ (y|x) log pθ(y|x), (1)\nwhere p(x) is the empirical data distribution approximated with samples from S, pθ(y|x) is the conditional distribution defined by the model. θ∗ is the parameter from the last iteration (initially it\nMethods PT FT\nbaseline – 15.6 ST (scratch) 16.8 17.9 ST (baseline) 16.5 17.5\nTable 1: Test tokenized BLEU on WMT100K. Self-training results are from the first iteration. 
“Scratch” denotes that the system is initialized randomly and trained from scratch, while “baseline” means it is initialized with the baseline model.\nis set as the parameter of the supervised baseline), and fixed within the current iteration. Eq. 1 reveals the connection between self-training and entropy regularization (Grandvalet & Bengio, 2005). In the context of classification, self-training can be understood from the view of entropy regularization (Lee, 2013), which favors a low-density separation between classes, a commonly assumed prior for semi-supervised learning (Chapelle & Zien, 2005)." }, { "heading": "3 A CASE STUDY ON MACHINE TRANSLATION", "text": "To examine the effectiveness of self-training on neural sequence generation, we start by analyzing a machine translation task. We then perform ablation analysis to understand the contributing factors of the performance gains." }, { "heading": "3.1 SETUP", "text": "We work with the standard WMT 2014 English-German dataset consisting of about 3.9 million training sentence pairs after filtering long and imbalanced pairs. Sentences are encoded using 40K byte-pair codes (Sennrich et al., 2016). As a preliminary experiment, we randomly sample 100K sentences from the training set to train the model and use the remaining English sentences as the unlabeled monolingual data. For convenience, we refer to this dataset as WMT100K. Such synthetic setting allows us to have high-quality unlabeled data to verify the performance of self-training. We train with the Base Transformer architecture (Vaswani et al., 2017) and dropout rate at 0.3. Full training and optimization parameters can be found in Appendix A.1. All experiments throughout this paper including the transformer implementation are based on the fairseq toolkit (Ott et al., 2019), and all results are in terms of case-sensitive tokenized BLEU (Papineni et al., 2002). We use beam search decoding (beam size 5) to create the pseudo targets and to report BLEU on test set." }, { "heading": "3.2 OBSERVATIONS", "text": "In Figure 1, we use green bars to show the result of applying self-training for three iterations. We include both (1) pseudo-training (PT): the first step of self-training where we train a new model (from scratch) using only the pseudo parallel data generated by the current model, and (2) finetuning (FT): the fine-tuned system using real parallel data based on the pretrained model from the PT step. Note that in the fine-tuning step the system is re-initialized from scratch. Surprisingly, we find that the pseudo-training step at the first iteration is able to improve BLEU even if the model is only trained on its own predictions, and fine-tuning further boosts the performance. The test BLEU keeps improving over the first three iterations, until convergence to outperform the initial baseline by 3 BLEU points.\nThis behaviour is unexpected because no new information seems to be injected during this iterative process – target sentences of the monolingual data are from the base model’s predictions, thus translation errors are likely to remain, if not magnified. This is different from back-translation where new knowledge may originate from an additional backward translation model and real monolingual targets may help the decoder generate more fluent sentences.\nOne straightforward hypothesis is that the added pseudo-parallel data might implicitly change the training trajectory towards a (somehow) better local optimum, given that we train a new model from scratch at each iteration. 
To rule out this hypothesis, we perform an ablation experiment and initialize θ from the last iteration (i.e. θ∗). Formally, based on Eq. 1 we have:\n\nabla_\theta L_U \big|_{\theta=\theta^*} = -\mathbb{E}_{x \sim p(x)} \left[ \nabla_\theta \mathbb{E}_{y \sim p_{\theta^*}(y|x)} \log p_\theta(y|x) \big|_{\theta=\theta^*} \right] = 0, \quad (2)\nbecause the conditional log likelihood is maximized when pθ(y|x) matches the underlying data distribution pθ∗(y|x). Therefore, the parameter θ should not (at least not significantly) change if we initialize it with θ∗ from the last iteration.\nTable 1 shows the comparison results of these two initialization schemes at the first iteration. Surprisingly, continuing training from the baseline model also yields an improvement of 1.9 BLEU points, comparable to initializing from random. While stochastic optimization introduces randomness in the training process, it is startling that continuing training gives such a non-trivial improvement. Next, we investigate the underlying reasons for this." }, { "heading": "3.3 THE SECRET BEHIND SELF-TRAINING", "text": "To understand why continuing training contradicts Eq. 2 and improves translation performance, we examine possible discrepancies between our assumptions and the actual implementation, and formulate two new hypotheses:\nH1. Decoding Strategy. According to this hypothesis, the gains come from the use of beam search for decoding unlabeled data. Since our focus is a sequence generation task, we decode y with beam search to approximate the expectation in \mathbb{E}_{y \sim p_{\theta^*}(y|x)} \log p_\theta(y|x), yielding a biased estimate, while sampling decoding would result in an unbiased Monte Carlo estimator. The results in Table 2 demonstrate that the performance drops by 0.5 BLEU when we change the decoding strategy to sampling, which implies that beam search does contribute a bit to the performance gains. This phenomenon makes sense intuitively since beam search tends to generate higher-quality pseudo targets than sampling, and the subsequent cross-entropy training might benefit from implicitly learning the decoding process. However, the decoding strategy hypothesis does not fully explain it, as we still observe a gain of 1.4 BLEU points over the baseline from sampling decoding with dropout.\nH2. Dropout (Hinton et al., 2012). Eq. 1 and Eq. 2 implicitly ignore a (seemingly) small difference between the model used to produce the pseudo targets and the model used for training: at test/decoding time the model does not use dropout, while at training time dropout noise is injected in the model hidden states. At training time, the model is forced to produce the same (pseudo) targets given the same set of inputs and the same parameter set but various noisy versions of the hidden states. The conjecture is that the additional expectation over dropout noise renders Eq. 2 false. To verify this, we remove dropout in the pseudo-training step2. The results in Table 2 indicate that without dropout the performance of beam search decoding drops by 1.2 BLEU, just 0.7 BLEU higher than the baseline. Moreover, the pseudo-training performance of sampling without dropout is almost the same as the baseline, which finally agrees with our intuitions from Eq. 2.\nIn summary, Table 2 suggests that beam-search decoding contributes only partially to the performance gains, while the implicit perturbation – dropout – accounts for most of it. However, it is still mysterious why such a perturbation results in such large performance gains. 
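The asymmetry behind H2 can be made concrete with a short PyTorch sketch: pseudo targets are produced with dropout disabled, but the pseudo-training step then fits those same fixed targets with dropout enabled, so the model must map many noisy versions of its hidden states to the same outputs. Here `generate` and `cross_entropy` are hypothetical helpers, not specific library calls; only the `eval()`/`train()` mode switch is the real PyTorch mechanism.

```python
import torch

# Decoding time: dropout is off, hidden states are deterministic.
model.eval()
with torch.no_grad():
    pseudo_targets = generate(model, unlabeled_batch)  # hypothetical beam-search helper

# Pseudo-training time: dropout is on, so the same fixed pseudo targets must
# be produced from many noisy versions of the hidden states.
model.train()
logits = model(unlabeled_batch)                # dropout noise injected here
loss = cross_entropy(logits, pseudo_targets)   # hypothetical loss helper
loss.backward()
```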
If dropout is meant to avoid overfitting and fit the target distribution better in the pseudo-training step, why does it bring advantages over the baseline, given that the target distribution is from the baseline model itself? This is the subject of the investigation in the next section." }, { "heading": "4 NOISE IN SELF-TRAINING", "text": "4.1 THE ROLE OF NOISE\nOne hypothesis as to why noise (perturbation) is beneficial for self-training is that it enforces local smoothness for this task, that is, semantically similar inputs are mapped to the same or similar targets. Since the assumption that similar inputs should ideally produce similar targets largely holds for most tasks in practice, this smoothing effect of the pseudo-training step may provide a favorable regularization for the subsequent fine-tuning step. Unlike standard regularization in supervised training, which is local to the real parallel data, self-training smooths the data space covered by the additional and much larger monolingual data.\nTo verify this hypothesis more easily, we work with the toy task of summing two integers in the range 0 to 99. We concatenate the two integers and view them as a sequence of digits; the sum is also predicted at the digit level, thus this is still a sequence-to-sequence task. There are 10000 possible data points in the entire space, and we randomly sample 250 instances for training,3 100 for validation, 5000 for test, and 4000 as the unlabeled data. Test errors are computed as the absolute difference between the predicted integer and the ground-truth integer. We use an LSTM model to tackle this task. We perform self-training for one iteration on this toy sum dataset and initialize the model with the base model to rule out differences due to the initialization. Setup details are in Appendix A.1.\nFor any integer pair (x1, x2), we measure local smoothness as the standard deviation of the predictions in a 3 × 3 neighborhood of (x1, x2). These values are averaged over all the 10000 points to obtain the overall smoothness. We compare smoothness between the baseline and ST pseudo-training in Table 3. To demonstrate the effect of smoothing on the fine-tuning step, we also report test errors after fine-tuning. We observe that ST pseudo-training attains better smoothness, which helps reduce test errors in the subsequent fine-tuning step.\nOne natural question is whether we could further improve performance by encouraging an even lower smoothness value, although there is a clear trade-off, as a totally smooth model that outputs a constant value is also a bad predictor. One way to decrease the smoothness value is to increase the dropout probability in the pseudo-training step, but a large dropout (like 0.5) makes the model too unstable and slow to converge. Therefore, we consider a simple model-agnostic perturbation process – perturbing the input, which we refer to as noisy self-training (noisy ST).\n2During fine-tuning, we still use dropout. 3We choose 250 instances since we find that 500 training samples already yield perfect performance on this task. However, we want to mimic real seq2seq tasks where the supervised models are often far from perfect." }, { "heading": "4.2 NOISY SELF-TRAINING", "text": "If we perturb the input during the pseudo-training step, then Eq. 1 would be modified to:\nL_U = -\mathbb{E}_{x' \sim g(x),\, x \sim p(x)} \mathbb{E}_{y \sim p_{\theta^*}(y|x)} \log p_\theta(y|x'), \quad (3)\nwhere g(x) is a perturbation function. 
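One possible form of g(x), in the spirit of the synthetic noise of Lample et al. (2018) used later in §4.3 (random word dropping, blanking, and local shuffling), is sketched below; the probabilities and the shuffle window are illustrative assumptions, not the exact defaults used in the experiments.

```python
import random

def synthetic_noise(tokens, drop_prob=0.1, blank_prob=0.2, shuffle_window=3):
    """Perturb a token sequence by word drop, word blank, and local shuffle."""
    # randomly drop words
    out = [t for t in tokens if random.random() > drop_prob]
    # randomly blank (mask) words
    out = [t if random.random() > blank_prob else "<blank>" for t in out]
    # local shuffle: each token moves at most `shuffle_window` positions
    keys = [i + random.uniform(0, shuffle_window) for i in range(len(out))]
    return [t for _, t in sorted(zip(keys, out))]
```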
Note that we apply both input perturbation and dropout in the pseudo-training step for noisy ST throughout the paper, but include an ablation analysis in §4.3. We first validate noisy ST in the toy sum task. We shuffle the two integers in the input as the perturbation function. Such a perturbation is suitable for this task since it would help the model learn the commutative law as well. To check that, we also measure the symmetry of the output space. Specifically, for any point (x1, x2), we compute |f(x1, x2) − f(x2, x1)| and average it over all the points. Both smoothness and symmetry values are reported in Table 3. While we do not explicitly perturb the input at nearby integers, the shuffling perturbation greatly improves the smoothness metric as well. Furthermore, predictions are more symmetric and test errors are reduced.\nIn order to illustrate the effect of smoothness, in Figure 2 we show two examples of error heat maps.4 When a point with a large error is surrounded by points with small errors, the labels might propagate due to smoothing and its error is likely to become smaller, resulting in a “self-correcting” behaviour, as demonstrated in the left example of Figure 2. However, the prediction of some points might become worse due to the opposite phenomenon too, as shown in the right example of Figure 2. Therefore, the smoothing effect by itself does not guarantee a performance gain in the pseudo-training step, but fine-tuning benefits from it and seems to consistently improve the baseline in all datasets we experiment with." }, { "heading": "4.3 OBSERVATIONS ON MACHINE TRANSLATION", "text": "Next, we apply noisy self-training to the more realistic WMT100K translation task. We try two different perturbation functions: (1) Synthetic noise as used in unsupervised MT (Lample et al., 2018), where the input tokens are randomly dropped, masked, and shuffled. We use the default noising parameters as in unsupervised MT but study the influence of the noise level in §5.4. (2) Paraphrase. We translate the source English sentences to German and translate them back to obtain paraphrases as the perturbation. Figure 1 shows the results over three iterations. Noisy ST (NST) greatly outperforms the supervised baseline by over 6 BLEU points and normal ST by 3 BLEU points, while synthetic noise does not exhibit much difference from paraphrasing. Since synthetic noise is much simpler and more general, in the remaining experiments we use synthetic noise unless otherwise specified.\nNext, we report an ablation analysis of noisy ST when removing dropout at the pseudo-training step in Table 2. Noisy ST without dropout improves the baseline by 2.3 BLEU points and is comparable to normal ST with dropout. When combined, noisy ST with dropout produces another 1.4 BLEU improvement, indicating that the two perturbations are complementary.\n4Error heat maps for the entire space can be found in Appendix C." }, { "heading": "5 EXPERIMENTS", "text": "Our experiments below are designed to examine whether noisy self-training is generally useful across different sequence generation tasks and resource settings. To this end, we conduct experiments on two machine translation datasets and one text summarization dataset to test the effectiveness under both high-resource and low-resource settings." }, { "heading": "5.1 GENERAL SETUP", "text": "We run noisy self-training for three iterations or until performance converges. 
The model is trained from scratch in the pseudo-training step at each iteration since we found this strategy to work slightly better empirically. Full model and training details for all the experiments can be found in Appendix A.1. In some settings, we also include back-translation (BT, Sennrich et al., 2015) as a reference point, since this is probably the most successful semi-supervised learning method for machine translation. However, we want to emphasize that BT is not directly comparable to ST since they use different resources (ST utilizes the unlabeled data on the source side while BT leverages target monolingual data) and have different use cases. For example, BT is not very effective when we translate English to extremely low-resource languages where there is almost no in-domain target monolingual data available. We follow the practice in (Edunov et al., 2018) to implement BT, where we use unrestricted sampling to translate the target data back to the source. Then, we train on the real and pseudo parallel data jointly and tune the upsampling ratio of the real parallel data." }, { "heading": "5.2 MACHINE TRANSLATION", "text": "We test the proposed noisy self-training on a high-resource translation benchmark, WMT14 English-German, and a low-resource translation benchmark, FloRes English-Nepali.\n• WMT14 English-German: In addition to WMT100K, we also report results with all 3.9M training examples. For WMT100K we use the Base Transformer architecture, and the remaining parallel data as the monolingual data. For the full setting, we use the Big Transformer architecture (Vaswani et al., 2017) and randomly sample 20M English sentences from the News Crawl corpus for noisy ST.\n• FloRes English-Nepali: We evaluate noisy self-training on a low-resource machine translation dataset, FloRes (Guzmán et al., 2019), from English (en) to Nepali (ne), where we have 560K training pairs and a very weak supervised system that attains a BLEU score below 5 points. For this dataset we have 3.6M Nepali monolingual instances in total (for BT) but 68M English Wikipedia sentences.5 We randomly sample 5M English sentences for noisy ST. We use the same transformer architecture as in (Guzmán et al., 2019).\nThe overall results are shown in Table 4. For almost all cases in both datasets, noisy ST outperforms the baselines by a large margin (1 ∼ 5 BLEU points), and we see that noisy ST still improves the baseline even when the baseline is very weak.\nEffect of Domain Mismatch. Test sets of the FloRes benchmark were built with mixed original-translationese – some sentences are from English sources and some are from Nepali sources. Intuitively, English monolingual data should be more in-domain with English-origin sentences and Nepali monolingual data should help more for Nepali-origin sentences.\n5http://www.statmt.org/wmt19/parallel-corpus-filtering.html\nTo demonstrate this possible domain-mismatch effect, in Table 4 we report BLEU on the two different test sets separately.6 As expected, ST is very effective when the source sentences originate from English.\nComparison to Back-Translation. Table 4 shows that noisy ST is able to beat BT on WMT100K and on the en-origin test set of FloRes. In contrast, BT is more effective on the ne-origin test set according to BLEU, which is not surprising as the ne-origin test is likely to benefit more from Nepali than English monolingual data." 
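For reference, the back-translation reference point described in §5.1 can be sketched in a few lines. This is a hedged illustration only: `sample_decode` and the upsampling ratio are placeholders, and the actual implementation follows Edunov et al. (2018) in fairseq.

```python
# Hypothetical sketch of the BT reference point: sample-decode target-side
# monolingual data with a backward model, then train jointly on real and
# synthetic pairs, upsampling the real parallel data.
def build_bt_corpus(parallel, target_mono, backward_model, upsample_ratio=2):
    synthetic = [(sample_decode(backward_model, y), y) for y in target_mono]
    return parallel * upsample_ratio + synthetic
```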
}, { "heading": "5.3 TEXT SUMMARIZATION", "text": "We further evaluate noisy self-training on the Gigaword summarization dataset (Rush et al., 2015) that has 3.8M training sentences. We encode the data with 30K byte-pair codes and use the Base Transformer architecture. Similar to the setting of WMT100K, for Gigaword we create two settings where we sample 100K or 640K training examples and use the remaining as unlabeled data to compare with BT. We also consider the setting where all the 3.8M parallel samples are used and we mine in-domain monolingual data by revisiting the original preprocessing procedure7 and using the ∼4M samples that Rush et al. (2015) disregarded because they had low-quality targets. We report ROUGE scores (Lin, 2004) in Table 5. Noisy ST consistently outperforms the baseline in all settings, sometimes by a large margin (100K and 640K). It outperforms BT with 100K parallel data but underperforms with 640K parallel data. We conjecture that BT is still effective in this case because the task is still somewhat symmetric as Gigaword mostly contains short sentences and their compressed summaries. Notably, noisy ST in the full setting approaches the performance of state-of-the-art systems which use much larger datasets for pretraining (Song et al., 2019)." }, { "heading": "5.4 ANALYSIS", "text": "In this section, we focus on the WMT English-German dataset to examine the effect of three factors on noisy self-training: the size of the parallel dataset, the size of the monolingual dataset, and the noise level. All the noisy ST results are after the fine-tuning step.\n6Test set split is obtained through personal communication with the authors. 7https://github.com/facebookarchive/NAMAS\nParallel data size. We fix the monolingual data size as 20M from News Crawl dataset, and vary the parallel data size as shown in Figure 3(a). We use a small LSTM model for 10K, Base Transformer for 100K/640K, and Big Transformer for 3.9M.8 Noisy ST is repeated for three iterations. We see that in all cases noisy ST is able to improve upon the baseline, while the performance gain is larger for intermediate value of the size of the parallel dataset, as expected.\nMonolingual data size. We fix the parallel data size to 100K samples, and use the rest 3.8M English sentences from the parallel data as monolingual data. We sample from this set 100K, 500K, 1.5M, and 3.8M sentences. We also include another point that uses 20M monolingual sentences from a subset of News Crawl dataset. We report performance at the first iteration of noisy ST. Figure 3(b) illustrates that the performance keeps improving as the monolingual data size increases, albeit with diminishing returns.\nNoise level. We have shown that noisy ST outperforms ST, but intuitively larger noise must not always be better since at some point it may destroy all the information present in the input. We adopt the WMT100K setting with 100K parallel data and 3.8M monolingual data, and set the word blanking probability in the synthetic noise (Lample et al., 2018) to 0.2 (default number), 0.4, 0.6, and 0.8. We also include the baseline ST without any synthetic noise. Figure 3(c) demonstrates that performance is quite sensitive to noise level, and that intermediate values work best. It is still unclear how to select the noise level a priori, besides the usual hyper-parameter search to maximize BLEU on the validation set." 
}, { "heading": "5.5 NOISE PROCESS ON PARALLEL DATA ONLY", "text": "In this section, we justify whether the proposed noisy self-training process would help the supervised baseline alone without the help of any monolingual data. Similar to the training process on the monolingual data, we first train the model on the noisy source data (pseudo-training), and then finetune it on clean parallel data. Different from using monolingual data, there are two variations here in the “pseudo-training” step: we can either train with the fake target predicted by the model as on monolingual data, or train with the real target paired with noisy source. We denote them as “parallel + fake target” and “parallel + real target” respectively, and report the performance on WMT100K in Table 6. We use the same synthetic noise as used in previous experiments.\nWhen applying the same noise process to parallel data using fake target, the smoothing effect is not significant since it is restricted into the limited parallel data space, producing marginal improvement over the baseline (+0.4 BLEU). As a comparison, 100K monolingual data produces +1.0 BLEU and the effect is enhanced when we increase the monolingual data to 3.8M, which leads to +3.7 BLEU. Interestingly, pairing the noisy source with real target results in much worse performance than the baseline (-4.3 BLEU), which implies that the use of fake target predicted by the model (i.e. distillation) instead of real target is important for the success of noisy self-training, at least in the case where parallel data size is small. Intuitively, the distilled fake target is simpler and relatively easy for the model to fit, but the real target paired with noisy source makes learning even harder than training with real target and real source, which might lead to a bad starting point for fine-tuning. This issue would be particularly severe when the parallel data size is small, in that case the model would have difficulties to fit real target even with clean source." }, { "heading": "6 RELATED WORK", "text": "Self-training belongs to a broader class of “pseudo-label” semi-supervised learning approaches. These approaches all learn from pseudo labels assigned to unlabelled data, with different methods on how to assign such labels. For instance, co-training (Blum & Mitchell, 1998) learns models on two independent feature sets of the same data, and assigns confident labels to unlabeled data from one of the models. Co-training reduces modeling bias by taking into account confidence scores from two models. In the same spirit, democratic co-training (Zhou & Goldman, 2004) or tri-training (Zhou & Li, 2005) trains multiple models with different configurations on the same data feature set, and a subset of the models act as teachers for others.\n8These architectures are selected based on validation loss.\nAnother line of more recent work perturb the input or feature space of the student’s inputs as data augmentation techniques. Self-training with dropout or noisy self-training can be viewed as an instantiation of this. These approaches have been very successful on classification tasks (Rasmus et al., 2015; Miyato et al., 2017; Laine & Aila, 2017; Miyato et al., 2018; Xie et al., 2019) given that a reasonable amount of predictions of unlabeled data (at least the ones with high confidence) are correct, but their effect on language generation tasks is largely unknown and poorly understood because the pseudo language targets are often very different from the ground-truth labels. 
Recent work on sequence generation employs auxiliary decoders (Clark et al., 2018) when processing unlabeled data, overall showing rather limited gains." }, { "heading": "7 CONCLUSION", "text": "In this paper we revisit self-training for neural sequence generation, and show that it can be an effective method to improve generalization, particularly when labeled data is scarce. Through a comprehensive ablation analysis and synthetic experiments, we identify that noise injected during self-training plays a critical role for its success due to its smoothing effect. To encourage this behaviour, we explicitly perturb the input to obtain a new variant of self-training, dubbed noisy self-training. Experiments on machine translation and text summarization demonstrate the effectiveness of this approach in both low- and high-resource settings." }, { "heading": "ACKNOWLEDGEMENTS", "text": "We want to thank Peng-Jen Chen for helping set up the FloRes experiments, and Michael Auli, Kyunghyun Cho, and Graham Neubig for insightful discussion about this project." }, { "heading": "A EXPERIMENTS DETAILS", "text": "A.1 SETUP DETAILS\nFor all experiments, we optimize with Adam (Kingma & Ba, 2014) using β1 = 0.9, β2 = 0.98, ε = 1e−8. All implementations are based on fairseq (Ott et al., 2019), and we basically use the same learning rate schedule and label smoothing as in the fairseq examples to train the transformers.9 Except for the toy sum dataset, which we run on a single GPU with batches of 32 examples, all other experiments are run on 8 GPUs with an effective batch size of 33K tokens. All experiments are validated with loss on the validation set. For self-training or noisy self-training, the pseudo-training takes 300K synchronous updates while the fine-tuning step takes 100K steps.\nWe use the downloading and preprocessing scripts in fairseq to obtain the WMT 2014 English-German dataset,10 which holds out a small fraction of the original training data as the validation set.\nThe model architecture for the toy sum dataset is a single-layer LSTM with word embedding size 32, hidden state size 32, and dropout rate 0.3. The model architecture of the WMT10K baseline in Figure 3(a) is a single-layer LSTM with word embedding size 256, hidden state size 256, and dropout rate 0.3.\nA.2 JUSTIFICATION OF THE WMT100K BASELINE\nWe provide more details and evidence to show that our baseline model on the WMT100K dataset is trained properly. In all the experiments on the WMT100K dataset, including the baseline and self-training ones, we use the Adam optimizer with learning rate 0.0005, which is the default in fairseq. We do not use early stopping during training but select the best model in terms of the validation loss. We train with 30K update steps for the baseline model and (300K pseudo-training + 100K fine-tuning) update steps for self-training. In both cases we verified that the models are trained sufficiently to fully converge by observing the increase of the validation loss. Figure 4 shows the validation curve of the baseline model. Note that the model starts to overfit, and we select the model checkpoint at the lowest point. We also varied the learning rate hyperparameter over 0.0002, 0.0005, and 0.001, which produced BLEU scores of 15.0, 15.6 (reported in the paper), and 15.5 respectively – our baseline model in previous sections obtained the best performance.\n9https://github.com/pytorch/fairseq/blob/master/examples/translation. 10https://github.com/pytorch/fairseq/tree/master/examples/translation." 
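The optimizer settings from A.1 and A.2 map directly onto a PyTorch Adam instance. The following is a minimal sketch under the stated values; `model` is assumed to exist, and the learning rate is the 0.0005 used for WMT100K (the inverse-sqrt warmup schedule from the fairseq examples is omitted here).

```python
import torch

optimizer = torch.optim.Adam(
    model.parameters(),   # `model` is assumed to be the transformer being trained
    lr=5e-4,              # 0.0005, the WMT100K value from Appendix A.2
    betas=(0.9, 0.98),    # beta_1 and beta_2 from Appendix A.1
    eps=1e-8,
)
```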
}, { "heading": "B COMPARISON REGARDING SEPARATE TRAINING, JOINT TRAINING, AND FILTERING", "text": "In the paper we perform self-training with separate pseudo-training and fine-tuning steps and always use all monolingual data. However, there are other variants such as joint training or iteratively adding confident examples. Here we compare these variants on WMT100K dataset, noisy self-training uses paraphrase as the perturbation function. For joint training, we tune the upsampling ratio of parallel data just as in back-translation (Edunov et al., 2018). We perform noisy self-training for 3 iterations, and for filtering experiments we iteratively use the most confident 2.5M, 3M, and 3.8M monolingual data respectively in these 3 iterations. Table 7 shows that the filtering process helps joint training but still underperforms separate-training methods by over 1.5 BLEU points. Within separate training filtering produces comparable results to using all data. Since separate training with all data is the simplest method and produces the best performance, we stick to this version in the paper." }, { "heading": "C ADDITIONAL RESULTS ON THE TOY SUM DATASET", "text": "We additionally show the error heat maps of the entire data space on the toy sum datasets for the first two iterations. Here the model at pseudo-training step is initialized as the model from last iteration to clearly examine how the decodings change due to injected noise. As shown in Figure 5, for each iteration the pseudo-training step smooths the space and fine-tuning step benefits from it and greatly reduces the errors" } ]
2020
null
SP:f70ab30f1b31fa2dcf450582b6798c4da8841687
[ "This paper proposes a MIL clustering method. The proposed MIL setup is called \"unique class count (ucc)\", this is, for a bag os samples ucc is the number of clusters in the bag. The method learns the features of the samples using two losses: an autoender loss and the ucc loss. Once trained on a dataset the method can perform clustering on classes (classify) better than a fully unsupervised clustering algorithm and worse than a fully supervised model. The method is evaluated on MNIST, CIFAR10, CIFAR100 and on binary breast cancer segmentation.", "This paper proposes a new type of weakly supervised clustering / multiple instance learning (MIL) problem in which bags of instances (data points) are labeled with a \"unique class count (UCC)*, rather than any bag-level or instance-level labels. For example, a histopathology slide (the bag), consisting of many individual pixels to be labeled (the instances) could be labeled at the bag level only with UCC = 1 (for only healthy or only metastatic) or UCC = 2 (for mixed / border case). The paper then proposes an approach for clustering instances based on the following two-step approach: (1) a UCC model is trained to predict the UCC given an input bag, and (2) the features of this learned UCC model are used in an unsupervised clustering algorithm to the get the instance-level clusters / labels. The paper also provides a theoretical argument for why this approach is feasible." ]
A weakly supervised learning based clustering framework is proposed in this paper. As the core of this framework, we introduce a novel multiple instance learning task based on a bag level label called unique class count (ucc), which is the number of unique classes among all instances inside the bag. In this task, no annotations on individual instances inside the bag are needed during training of the models. We mathematically prove that with a perfect ucc classifier, perfect clustering of individual instances inside the bags is possible even when no annotations on individual instances are given during training. We have constructed a neural network based ucc classifier and experimentally shown that the clustering performance of our framework with our weakly supervised ucc classifier is comparable to that of fully supervised learning models where labels for all instances are known. Furthermore, we have tested the applicability of our framework to a real world task of semantic segmentation of breast cancer metastases in histological lymph node sections and shown that the performance of our weakly supervised framework is comparable to the performance of a fully supervised Unet model.
[ { "affiliations": [], "name": "Mustafa Umit Oner" }, { "affiliations": [], "name": "Hwee Kuan Lee" }, { "affiliations": [], "name": "Wing-Kin Sung" } ]
[ { "authors": [ "Stuart Andrews", "Ioannis Tsochantaridis", "Thomas Hofmann" ], "title": "Support vector machines for multiple-instance learning", "venue": "In Advances in neural information processing systems,", "year": 2003 }, { "authors": [ "Carlos Arteta", "Victor Lempitsky", "J Alison Noble", "Andrew Zisserman" ], "title": "Interactive object counting", "venue": "In European conference on computer vision,", "year": 2014 }, { "authors": [ "Boris Babenko", "Ming-Hsuan Yang", "Serge Belongie" ], "title": "Robust object tracking with online multiple instance learning", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2011 }, { "authors": [ "Babak Ehteshami Bejnordi", "Mitko Veta", "Paul Johannes Van Diest", "Bram Van Ginneken", "Nico Karssemeijer", "Geert Litjens", "Jeroen AWM Van Der Laak", "Meyke Hermsen", "Quirine F Manson", "Maschenka Balkenhol" ], "title": "Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer", "venue": "Jama, 318(22):2199–2210,", "year": 2017 }, { "authors": [ "James D Brierley", "Mary K Gospodarowicz", "Christian Wittekind" ], "title": "TNM classification of malignant tumours", "venue": null, "year": 2016 }, { "authors": [ "Nataly Brukhim", "Amir Globerson" ], "title": "Predict and constrain: Modeling cardinality in deep structured prediction", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Jianlong Chang", "Lingfeng Wang", "Gaofeng Meng", "Shiming Xiang", "Chunhong Pan" ], "title": "Deep adaptive image clustering", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Yixin Chen", "James Z Wang" ], "title": "Image categorization by learning and reasoning with regions", "venue": "Journal of Machine Learning Research,", "year": 2004 }, { "authors": [ "Yixin Chen", "Jinbo Bi", "James Ze Wang" ], "title": "Miles: Multiple-instance learning via embedded instance selection", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 1931 }, { "authors": [ "Thomas G Dietterich", "Richard H Lathrop", "Tomás Lozano-Pérez" ], "title": "Solving the multiple instance problem with axis-parallel rectangles", "venue": "Artificial intelligence,", "year": 1997 }, { "authors": [ "Nat Dilokthanakul", "Pedro AM Mediano", "Marta Garnelo", "Matthew CH Lee", "Hugh Salimbeni", "Kai Arulkumaran", "Murray Shanahan" ], "title": "Deep unsupervised clustering with gaussian mixture variational autoencoders", "venue": "arXiv preprint arXiv:1611.02648,", "year": 2016 }, { "authors": [ "Murat Dundar", "Balaji Krishnapuram", "RB Rao", "Glenn M Fung" ], "title": "Multiple instance learning for computer aided diagnosis", "venue": "In Advances in neural information processing systems,", "year": 2007 }, { "authors": [ "Pedro F Felzenszwalb", "Ross B Girshick", "David McAllester", "Deva Ramanan" ], "title": "Object detection with discriminatively trained part-based models", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2010 }, { "authors": [ "James Foulds", "Eibe Frank" ], "title": "A review of multi-instance learning assumptions", "venue": "The Knowledge Engineering Review,", "year": 2010 }, { "authors": [ "James Richard Foulds" ], "title": "Learning instance weights in multi-instance learning", "venue": "PhD thesis, The University of Waikato,", "year": 2008 }, { "authors": [ "Thomas Gärtner", "Peter A Flach", "Adam Kowalczyk", 
"Alexander J Smola" ], "title": "Multi-instance kernels", "venue": "In ICML,", "year": 2002 }, { "authors": [ "Kamran Ghasedi Dizaji", "Amirhossein Herandi", "Cheng Deng", "Weidong Cai", "Heng Huang" ], "title": "Deep clustering via joint convolutional autoencoder embedding and relative entropy minimization", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Haroon Idrees", "Imran Saleemi", "Cody Seibert", "Mubarak Shah" ], "title": "Multi-source multi-scale counting in extremely dense crowd images", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2013 }, { "authors": [ "Zach Jorgensen", "Yan Zhou", "Meador Inge" ], "title": "A multiple instance learning strategy for combating good word attacks on spam filters", "venue": "Journal of Machine Learning Research,", "year": 2008 }, { "authors": [ "Andreas Kipf", "Thomas Kipf", "Bernhard Radke", "Viktor Leis", "Peter Boncz", "Alfons Kemper" ], "title": "Learned cardinalities: Estimating correlated joins with deep learning", "venue": "arXiv preprint arXiv:1809.00677,", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Jianhua Lin" ], "title": "Divergence measures based on the shannon entropy", "venue": "IEEE Transactions on Information theory,", "year": 1991 }, { "authors": [ "Geert Litjens", "Peter Bandi", "Babak Ehteshami Bejnordi", "Oscar Geessink", "Maschenka Balkenhol", "Peter Bult", "Altuna Halilovic", "Meyke Hermsen", "Rob van de Loo", "Rob Vogels" ], "title": "h&estained sentinel lymph node sections of breast cancer", "venue": "patients: the camelyon dataset. 
GigaScience,", "year": 2018 }, { "authors": [ "Henry Liu", "Mingbin Xu", "Ziting Yu", "Vincent Corvinelli", "Calisto Zuzarte" ], "title": "Cardinality estimation using neural networks", "venue": "In Proceedings of the 25th Annual International Conference on Computer Science and Software Engineering,", "year": 2015 }, { "authors": [ "Lars Maaløe", "Casper Kaae Sønderby", "Søren Kaae Sønderby", "Ole Winther" ], "title": "Auxiliary deep generative models", "venue": "arXiv preprint arXiv:1602.05473,", "year": 2016 }, { "authors": [ "Emanuel Parzen" ], "title": "On estimation of a probability density function and mode", "venue": "The annals of mathematical statistics,", "year": 1962 }, { "authors": [ "Antti Rasmus", "Mathias Berglund", "Mikko Honkala", "Harri Valpola", "Tapani Raiko" ], "title": "Semisupervised learning with ladder networks", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Olaf Ronneberger", "Philipp Fischer", "Thomas Brox" ], "title": "U-net: Convolutional networks for biomedical image segmentation", "venue": "In International Conference on Medical image computing and computerassisted intervention,", "year": 2015 }, { "authors": [ "Burr Settles", "Mark Craven", "Soumya Ray" ], "title": "Multiple-instance active learning", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Jost Tobias Springenberg" ], "title": "Unsupervised and semi-supervised learning with categorical generative adversarial networks", "venue": "arXiv preprint arXiv:1511.06390,", "year": 2015 }, { "authors": [ "Jinhui Tang", "Haojie Li", "Guo-Jun Qi", "Tat-Seng Chua" ], "title": "Image annotation by graph-based inference with integrated multiple/single instance representations", "venue": "IEEE Transactions on Multimedia,", "year": 2010 }, { "authors": [ "Jianfeng Wang", "Jingdong Wang", "Jingkuan Song", "Xin-Shun Xu", "Heng Tao Shen", "Shipeng Li" ], "title": "Optimized cartesian k-means", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2015 }, { "authors": [ "Xinggang Wang", "Yongluan Yan", "Peng Tang", "Xiang Bai", "Wenyu Liu" ], "title": "Revisiting multiple instance neural networks", "venue": "Pattern Recognition,", "year": 2018 }, { "authors": [ "Jiajun Wu", "Yinan Yu", "Chang Huang", "Kai Yu" ], "title": "Deep multiple instance learning for image classification and auto-annotation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Junyuan Xie", "Ross Girshick", "Ali Farhadi" ], "title": "Unsupervised deep embedding for clustering analysis", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Jianwei Yang", "Devi Parikh", "Dhruv Batra" ], "title": "Joint unsupervised learning of deep representations and image clusters", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Lihi Zelnik-Manor", "Pietro Perona" ], "title": "Self-tuning spectral clustering", "venue": "In Advances in neural information processing systems,", "year": 2005 }, { "authors": [ "Cha Zhang", "John C Platt", "Paul A Viola" ], "title": "Multiple instance boosting for object detection", "venue": "In Advances in neural information processing systems,", "year": 2006 }, { "authors": [ 
"Cong Zhang", "Hongsheng Li", "Xiaogang Wang", "Xiaokang Yang" ], "title": "Cross-scene crowd counting via deep convolutional neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Min-Ling Zhang", "Zhi-Hua Zhou" ], "title": "Multi-instance clustering with applications to multi-instance prediction", "venue": "Applied Intelligence,", "year": 2009 }, { "authors": [ "Qi Zhang", "Sally A Goldman" ], "title": "Em-dd: An improved multiple-instance learning technique", "venue": "In Advances in neural information processing systems,", "year": 2002 }, { "authors": [ "Qi Zhang", "Sally A Goldman", "Wei Yu", "Jason E Fritts" ], "title": "Content-based image retrieval using multiple-instance learning", "venue": "In ICML,", "year": 2002 }, { "authors": [ "Yingying Zhang", "Desen Zhou", "Siqin Chen", "Shenghua Gao", "Yi Ma" ], "title": "Single-image crowd counting via multi-column convolutional neural network", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Zhi-Hua Zhou" ], "title": "A brief introduction to weakly supervised learning", "venue": "National Science Review,", "year": 2017 }, { "authors": [ "Zhi-Hua Zhou", "Min-Ling Zhang" ], "title": "Neural networks for multi-instance learning", "venue": "In Proceedings of the International Conference on Intelligent Information Technology, Beijing,", "year": 2002 }, { "authors": [ "Zhi-Hua Zhou", "Yu-Yin Sun", "Yu-Feng Li" ], "title": "Multi-instance learning by treating instances as noniid samples", "venue": "In Proceedings of the 26th annual international conference on machine learning,", "year": 2009 }, { "authors": [ "C DETAILS" ], "title": "ON EXPERIMENTS WITH MNIST AND CIFAR DATASETS C.1 DETAILS OF MODEL ARCHITECTURES Feature extractor module θfeature has convolutional blocks similar to the wide residual blocks in Zagoruyko & Komodakis (2016). However, the parameters of architectures, number of convolutional and fully connected layers, number of filters in convolutional layers, number of nodes", "venue": null, "year": 2016 } ]
[ { "heading": null, "text": "A weakly supervised learning based clustering framework is proposed in this paper. As the core of this framework, we introduce a novel multiple instance learning task based on a bag level label called unique class count (ucc), which is the number of unique classes among all instances inside the bag. In this task, no annotations on individual instances inside the bag are needed during training of the models. We mathematically prove that with a perfect ucc classifier, perfect clustering of individual instances inside the bags is possible even when no annotations on individual instances are given during training. We have constructed a neural network based ucc classifier and experimentally shown that the clustering performance of our framework with our weakly supervised ucc classifier is comparable to that of fully supervised learning models where labels for all instances are known. Furthermore, we have tested the applicability of our framework to a real world task of semantic segmentation of breast cancer metastases in histological lymph node sections and shown that the performance of our weakly supervised framework is comparable to the performance of a fully supervised Unet model." }, { "heading": "1 INTRODUCTION", "text": "In machine learning, there are two main learning tasks on two ends of scale bar: unsupervised learning and supervised learning. Generally, performance of supervised models is better than that of unsupervised models since the mapping between data and associated labels is provided explicitly in supervised learning. This performance advantage of supervised learning requires a lot of labelled data, which is expensive. Any other learning tasks reside in between these two tasks, so are their performances. Weakly supervised learning is an example of such tasks. There are three types of supervision in weakly supervised learning: incomplete, inexact and inaccurate supervision. Multiple instance learning (MIL) is a special type of weakly supervised learning and a typical example of inexact supervision (Zhou, 2017). In MIL, data consists of bags of instances and their corresponding bag level labels. Although the labels are somehow related to instances inside the bags, the instances are not explicitly labeled. In traditional MIL, given the bags and corresponding bag level labels, task is to learn the mapping between bags and labels while the goal is to predict labels of unseen bags (Dietterich et al., 1997; Foulds & Frank, 2010).\nIn this paper, we explore the feasibility of finding out labels of individual instances inside the bags only given the bag level labels, i.e. there is no individual instance level labels. One important application of this task is semantic segmentation of breast cancer metastases in histological lymph node sections, which is a crucial step in staging of breast cancer (Brierley et al., 2016). In this task, each pathology image of a lymph node section is a bag and each pixel inside that image is an instance. Then, given the bag level label that whether the image contains metastases or not, the task is to label each pixel as either metastases or normal. This task can be achieved by asking experts to exhaustively annotate each metastases region in each image. 
However, this exhaustive annotation process is tedious, time consuming and more importantly not a part of clinical workflow.\nIn many complex systems, such as in many types of cancers, measurements can only be obtained at coarse level (bag level), but information at fine level (individual instance level) is of paramount importance. To achieve this, we propose a weakly supervised learning based clustering framework. Given a dataset consisting of instances with unknown labels, our ultimate objective is to cluster the instances in this dataset. To achieve this objective, we introduce a novel MIL task based on a new kind of bag level label called unique class count (ucc), which is the number of unique classes or the number of clusters among all the instances inside the bag. We organize the dataset into non-empty bags, where each bag is a subset of individual instances from this dataset. Each bag is associated with a bag level ucc label. Then, our MIL task is to learn mapping between the bags and their associated bag level ucc labels and then to predict the ucc labels of unseen bags. We mathematically show that a ucc classifier trained on this task can be used to perform unsupervised clustering on individual instances in the dataset. Intuitively, for a ucc classifier to count the number of unique classes in a bag, it has to first learn discriminant features for underlying classes. Then, it can group the features obtained from the bag and count the number of groups, so the number of unique classes.\nOur weakly supervised clustering framework is illustrated in Figure 1. It consists of a neural network based ucc classifier, which is called as Unique Class Count (UCC) model, and an unsupervised clustering branch. The UCC model accepts any bag of instances as input and uses ucc labels for supervised training. Then, the trained UCC model is used as a feature extractor and unsupervised clustering is performed on the extracted features of individual instances inside the bags in the clustering branch. One application of our framework is the semantic segmentation of breast cancer metastases in lymph node sections (see Figure 4). The problem can be formulated as follows. The input is a set of images. Each image (bag) has a label of ucc1 (image is fully normal or fully metastases) or ucc2 (image is a mixture of normal and metastases). Our aim is to segment the pixels (instances) in the image into normal and metastases. A UCC model can be trained to predict ucc labels of individual images in a fully supervised manner; and the trained model can be used to extract features of pixels (intances) inside the images (bags). Then, semantic segmentation masks can be obtained by unsupervised clustering of the pixels (each is represented by the extracted features) into two clusters (metastases or normal). Note that ucc does not directly provide an exact label for each individual instance. Therefore, our framework is a weakly supervised clustering framework.\nFinally, we have constructed ucc classifiers and experimentally shown that clustering performance of our framework with our ucc classifiers is better than the performance of unsupervised models and comparable to performance of fully supervised learning models. We have also tested the performance of our model on the real world task of semantic segmentation of breast cancer metastases in lymph node sections. 
We have compared the performance of our model with the performance of popular medical image segmentation architecture of Unet (Ronneberger et al., 2015) and shown that our weakly supervised model approximates the performance of fully supervised Unet model1.\nHence, there are three main contributions of this paper:\n1. We have defined unique class count as a bag level label in MIL setup and mathematically proved that a perfect ucc classifier, in principle, can be used to perfectly cluster the individual instances inside the bags.\n1Code and trained models: http://bit.ly/uniqueclasscount\n2. We have constructed a neural network based ucc classifier by incorporating kernel density estimation (KDE) (Parzen, 1962) as a layer into our model architecture, which provided us with end-to-end training capability.\n3. We have experimentally shown that clustering performance of our framework is better than the performance of unsupervised models and comparable to performance of fully supervised learning models.\nThe rest of the paper is organized such that related work is in Section 2, details of our weakly supervised clustering framework are in Section 3, results of the experiments on MNIST, CIFAR10 and CIFAR100 datasets are in Section 4, results of the experiments in semantic segmentation of breast cancer metastases are in Section 5, and Section 6 concludes the paper." }, { "heading": "2 RELATED WORK", "text": "This work is partly related to MIL which was first introduced in (Dietterich et al., 1997) for drug activity prediction. Different types of MIL were derived with different assumptions (Gärtner et al., 2002; Zhang & Goldman, 2002; Chen et al., 2006; Foulds, 2008; Zhang & Zhou, 2009; Zhou et al., 2009), which are reviewed in detail in (Foulds & Frank, 2010), and they were used for many different applications such as, image annotation/categorization/retrieval (Chen & Wang, 2004; Zhang et al., 2002; Tang et al., 2010), text categorization (Andrews et al., 2003; Settles et al., 2008), spam detection (Jorgensen et al., 2008), medical diagnosis (Dundar et al., 2007), face/object detection (Zhang et al., 2006; Felzenszwalb et al., 2010) and object tracking (Babenko et al., 2011).\nIn MIL, different types of pooling layers are used to combine extracted features of instances inside the bags, such as max-pooling and log-sum-exp pooling (Ramon & De Raedt, 2000; Zhou & Zhang, 2002; Wu et al., 2015; Wang et al., 2018). On the other hand, our UCC model uses KDE layer in order to estimate the distribution of extracted features. The advantage of KDE over pooling layers is that it embeds the instance level features into distribution space rather than summarizing them.\nThere are also methods modeling cardinality and set distributions (Liu et al., 2015; Brukhim & Globerson, 2018; Kipf et al., 2018). However, cardinality of a set and ucc are completely different from each other. 
It is also important to state that ucc is obviously different from object/crowd counting (Idrees et al., 2013; Arteta et al., 2014; Zhang et al., 2015; 2016) since the task in object/crowd counting is to count the instances of the same type of object or people.\nLastly, we compare clustering accuracies of our models with clustering accuracies of unsupervised baseline models: K-means (Wang et al., 2015) and Spectral Clustering (Zelnik-Manor & Perona, 2005); state of the art unsupervised models: JULE (Yang et al., 2016), GMVAE (Dilokthanakul et al., 2016), DAC (Chang et al., 2017), DEPICT (Ghasedi Dizaji et al., 2017) and DEC (Xie et al., 2016); and state of the art semi-supervised models: AAE (Makhzani et al., 2015), CatGAN (Springenberg, 2015), LN (Rasmus et al., 2015) and ADGM (Maaløe et al., 2016)." }, { "heading": "3 WEAKLY SUPERVISED CLUSTERING FRAMEWORK", "text": "In this section, we state our machine learning objective and formally define our novel MIL task, which is the core of our weakly supervised clustering framework. Finally, we explain details of the two main components of our framework, namely UCC model and unsupervised clustering branch.\nObjective: Let X = {x1, x2, · · · , xn} be a dataset such that each instance xi ∈ X belongs to a class, but its label is unknown. In this paper, we assume that total number of classes K is known. Hence, each instance xi is endowed with an underlying, but unkown, label L(xi) = li ∈ {1, 2, · · · ,K}. Further assume that for each class k ∈ {1, 2, · · ·K}, there exist at least one element xi ∈ X such that L(xi) = li = k. Our eventual objective is to derive a predicted class label l̂i for each instance xi that tends towards underlying truth class li, i.e. l̂i → L(xi) = li." }, { "heading": "3.1 A NOVEL MIL TASK", "text": "In this novel MIL task, unique class count is used as an inexact, weak, bag level label and is defined in Definition 1. Assume that we are given subsets σζ ⊂ X , ζ = 1, 2, · · · , N and unique class counts\nησζ∀σζ . Hence, MIL dataset is D = {(σ1, ησ1), · · · , (σN , ησN )}. Then, our MIL task is to learn the mapping between the bags and their associated bag level ucc labels while the goal is to predict the ucc labels of unseen bags.\nDefinition 1 Given a subset σζ ⊂ X , unique class count, ησζ , is defined as the number of unique classes that all instances in the subset σζ belong to, i.e. ησζ = |{L(xi)|xi ∈ σζ}|. Recall that each instance belongs to an underlying unknown class.\nGiven a dataset D, our eventual objective is to assign a label to each instance xi ∈ X such that assigned labels and underlying unknown classes are consistent. To achieve this eventual objective, a deep learning model is designed such that the following intermediate objectives can be achieved while it is being trained on our MIL task:\n1. Unique class count: Given an unseen set σζ , the deep learning model, which is trained on D, can predict its unique class count ησζ correctly.\n2. Labels on sets: Let σpureζ and σ pure ξ be two disjoint pure sets (Definition 2) such that\nwhile all instances in σpureζ belong to one underlying class, all instances in σ pure ξ belong to another class. Given σpureζ and σ pure ξ , the deep learning model should enable us to develop an unsupervised learning model to label instances in σpureζ and σ pure ξ as belonging to different classes. Note that the underlying classes for instances in the sets are unknown.\n3. 
Labels on instances: Given individual instances xi ∈ X, the deep learning model should enable us to assign a label to each individual instance xi such that all instances with different/same underlying unknown classes are assigned different/same labels. This is the eventual unsupervised learning objective.\nDefinition 2 A set σ is called a pure set if its unique class count equals one. All pure sets are denoted by the symbol σpure in this paper." }, { "heading": "3.2 UNIQUE CLASS COUNT MODEL", "text": "In order to achieve the stated objectives, we have designed a deep learning based Unique Class Count (UCC) model. Our UCC model consists of three neural network modules (θfeature, θdrn, θdecoder) and can be trained end-to-end. The first module θfeature extracts features from individual instances; then distributions of features are constructed from the extracted features. The second module θdrn is used to predict the ucc label from these distributions. The last module θdecoder is used to construct an autoencoder together with θfeature so as to improve the extracted features by ensuring that they contain semantic information for reconstruction.\nFormally, for xi ∈ σζ, i = {1, 2, · · · , |σζ|}, the feature extractor module θfeature extracts J features \{f^{1,i}_{\sigma_\zeta}, f^{2,i}_{\sigma_\zeta}, \cdots, f^{J,i}_{\sigma_\zeta}\} = θfeature(xi) for each instance xi ∈ σζ. As a shorthand, we write the operator θfeature as operating element-wise on the set to generate a feature matrix θfeature(σζ) = fσζ with matrix elements f^{j,i}_{\sigma_\zeta} ∈ ℝ, representing the j-th feature of the i-th instance. After obtaining features for all instances in σζ, a kernel density estimation (KDE) module is used to accumulate feature distributions h_{\sigma_\zeta} = (h^1_{\sigma_\zeta}(v), h^2_{\sigma_\zeta}(v), \cdots, h^J_{\sigma_\zeta}(v)). Then, hσζ is used as input to the distribution regression module θdrn to predict the ucc label, η̃σζ = θdrn(hσζ), as a softmax vector (\tilde{\eta}^1_{\sigma_\zeta}, \tilde{\eta}^2_{\sigma_\zeta}, \cdots, \tilde{\eta}^K_{\sigma_\zeta}). Concurrently, the decoder module θdecoder in the autoencoder branch is used to reconstruct the input images from the extracted features in an unsupervised fashion, x̃i = θdecoder(θfeature(xi)). Hence, the UCC model, whose main modules are illustrated in Figure 2(a), optimizes two losses concurrently: ‘ucc loss’ and ‘autoencoder loss’. While the ‘ucc loss’ is a cross-entropy loss, the ‘autoencoder loss’ is a mean squared error loss. The loss for one bag is given in Equation 1:\n\alpha \underbrace{\left[ -\sum_{k=1}^{K} \eta^k_{\sigma_\zeta} \log \tilde{\eta}^k_{\sigma_\zeta} \right]}_{\text{ucc loss}} + (1-\alpha) \underbrace{\left[ \frac{1}{|\sigma_\zeta|} \sum_{i=1}^{|\sigma_\zeta|} (x_i - \tilde{x}_i)^2 \right]}_{\text{autoencoder loss}}, \quad \text{where } \alpha \in [0, 1] \quad (1)" }, { "heading": "3.2.1 KERNEL DENSITY ESTIMATION MODULE", "text": "In the UCC model, the input is a set σζ and the output is the corresponding ucc label η̃σζ, which does not depend on the permutation of the instances in σζ. The KDE module provides the UCC model with this permutation-invariant property. Moreover, the KDE module uses the Gaussian kernel and is differentiable, so our model can be trained end-to-end (Appendix A). The KDE module also enables our theoretical analysis thanks to its decomposability property (Appendix B). Lastly, the KDE module estimates the probability distribution of the extracted features and enables θdrn to fully utilize the information in the shape of the distribution rather than looking at point estimates of the distribution obtained by other types of pooling layers (Ramon & De Raedt, 2000; Zhou & Zhang, 2002; Wang et al., 2018) (Appendix C.6)." 
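To make the KDE module concrete, the following is a minimal differentiable sketch: per-feature Gaussian kernel density estimates accumulated over the instances of a bag. The number of sample points, the bandwidth sigma, and the assumption that features are squashed into [0, 1] are illustrative choices here, not the paper's exact settings.

```python
import torch

def kde_module(features, num_points=11, sigma=0.1):
    """features: (num_instances, J) tensor of per-instance features in [0, 1].

    Returns a (J, num_points) tensor: one estimated distribution per feature
    dimension, averaged over the bag (hence permutation-invariant) and built
    from differentiable Gaussian kernels, so gradients flow end-to-end.
    """
    v = torch.linspace(0.0, 1.0, num_points, device=features.device)  # sample points
    diff = features.unsqueeze(-1) - v              # (num_instances, J, num_points)
    kernels = torch.exp(-0.5 * (diff / sigma) ** 2)
    hist = kernels.mean(dim=0)                     # accumulate over instances
    return hist / (hist.sum(dim=-1, keepdim=True) + 1e-8)  # normalize per feature
```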
}, { "heading": "3.2.2 PROPERTIES OF UNIQUE CLASS COUNT MODEL", "text": "This section mathematically proves that the UCC model guarantees, in principle, to achieve the stated intermediate objectives in Section 3.1. Proof of propositions are given in Appendix B.\nProposition 1 Let σζ , σξ be disjoint subsets ofX with predicted unique class counts η̃σζ = η̃σξ = 1. If the predicted unique class count of σν = σζ ∪ σξ is η̃σν = 2, then hσζ 6= hσξ .\nDefinition 3 A perfect unique class count classifier takes in any set σ and output the correct predicted unique class count η̃σ = ησ .\nProposition 2 Given a perfect unique class count classifier. The dataset X can be perfectly clustered into K subsets σpureξ , ξ = 1, 2, · · · ,K, such that X = ⋃K ξ=1 σ pure ξ and σ pure ξ = {xi|xi ∈ X ,L(xi) = ξ}.\nProposition 3 Given a perfect unique class count classifier. Decompose the dataset X into K subsets σpureξ , ξ = 1, · · ·K, such that σ pure ξ = {xi|xi ∈ X ,L(xi) = ξ}. Then, hσpureξ 6= hσpureζ for ξ 6= ζ.\nSuppose we have a perfect ucc classifier. For any two pure sets σpureζ and σ pure ξ , which consist of instances of two different underlying classes, ucc labels must be predicted correctly by the perfect ucc classifier. Hence, the conditions of Proposition 1 are satisfied, so we have hσpureζ 6= hσpureξ . Therefore, we can, in principle, perform an unsupervised clustering on the distributions of the sets without knowing the underlying truth classes of the instances. Hence, the perfect ucc classifier enables us to achieve our intermediate objective of “Labels on sets”. Furthermore, given a perfect ucc classifier, Proposition 2 states that by performing predictions of ucc labels alone, without any\nknowledge of underlying truth classes for instances, one can in principle perform perfect clustering for individual instances. Hence, a perfect ucc classifier enables us to achieve our intermediate objective of “Labels on instances”." }, { "heading": "3.3 UNSUPERVISED INSTANCE CLUSTERING", "text": "In order to achieve our ultimate objective of developing an unsupervised learning model for clustering all the instances in dataset X , we add this unsupervised clustering branch into our framework. Theoreticallly, we have shown in Proposition 3 that given a perfect ucc classifier, distributions of pure subsets of instances coming from different underlying classes are different.\nIn practice, it may not be always possible (probably most of the times) to train a perfect ucc classifier, so we try to approximate it. First of all, we train our ucc classifier on our novel MIL task and save our trained model (θ̄feature, θ̄drn, θ̄decoder). Then, we use trained feature extractor θ̄feature to obtain feature matrix fX = θ̄feature(X ). Finally, extracted features are clustered in an unsupervised fashion, by using simple k-means and spectral clustering methods. Figure 2(b) illustrates the unsupervised clustering process in our framework. A good feature extractor θ̄feature is of paramount importance in this task. Relatively poor θ̄feature may result in a poor unsupervised clustering performance in practice even if we have a strong θ̄drn. To obtain a strong θ̄feature, we employ an autoencoder branch, so as to achieve high clustering performance in our unsupervised instance clustering task. The autoencoder branch ensures that features extracted by θ̄feature contain semantic information for reconstruction." 
}, { "heading": "4 EXPERIMENTS ON MNIST AND CIFAR DATASETS", "text": "This section analyzes the performances of our UCC models and fully supervised models in terms of our eventual objective of unsupervised instance clustering on MNIST (10 clusters) (LeCun et al., 1998), CIFAR10 (10 clusters) and CIFAR100 (20 clusters) datasets (Krizhevsky & Hinton, 2009)." }, { "heading": "4.1 MODEL ARCHITECTURES AND DATASETS", "text": "To analyze different characteristics of our framework, different kinds of unique class count models were trained during our experiments: UCC, UCC2+, UCCα=1 and UCC2+α=1. These unique class count models took sets of instances as inputs and were trained on ucc labels. While UCC and UCC2+ models had autoencoder branch in their architecture and they were optimized jointly over both autoencoder loss and ucc loss,UCCα=1 andUCC2+α=1 models did not have autoencoder branch in their architecture and they were optimized over ucc loss only (i.e. α = 1 in Equation 1). The aim of training unique class count models with and without autoencoder branch was to show the effect of autoencoder branch in the robustness of clustering performance with respect to ucc classification performance. UCC and UCCα=1 models were trained on bags with labels of ucc1 to ucc4. On the other hand, UCC2+ and UCC2+α=1 models were trained on bags with labels ucc2 to ucc4. Our models were trained on ucc labels up to ucc4 instead of ucc10 (ucc20 in CIFAR100) since the performance was almost the same for both cases and training with ucc1 to ucc4 was much faster (Appendix C.2). Please note that for perfect clustering of instances inside the bags, it is enough to have a perfect ucc classifier that can perfectly discriminate ucc1 and ucc2 bags from Proposition 2. The aim of traininig UCC2+ and UCC2+α=1 models was to experimentally check whether these models can perform as good as UCC and UCCα=1 models even if there is no pure subsets during training. In addition to our unique class count models, for benchmarking purposes, we also trained fully supervised models, FullySupervised, and unsupervised autoencoder models, Autoencoder. FullySupervised models took individual instances as inputs and used instance level ground truths as labels during training. On the other hand, Autoencoder models were trained in an unsupervised manner by optimizing autoencoder loss (i.e. α = 0 in Equation 1). It is important to note that all models for a dataset shared the same architecture for feature extractor module and all the modules in our models are fine tuned for optimum performance and training time as explained in Appendix C.1.\nWe trained and tested our models on MNIST, CIFAR10 and CIFAR100 datasets. We haveXmnist,tr, Xmnist,val andXmnist,test for MNIST;Xcifar10,tr,Xcifar10,val andXcifar10,test for CIFAR10; and Xcifar100,tr, Xcifar100,val and Xcifar100,test for CIFAR100. Note that tr, val and test subscripts stand for ‘training’, ‘validation’ and ‘test’ sets, respectively. All the results presented in this paper were obtained on hold-out test sets Xmnist,test, Xcifar10,test and Xcifar100,test. FullySupervised\nmodels took individual instances as inputs and were trained on instance level ground truths. Unique class count models took sets of instances as inputs, which were sampled from the power sets 2Xmnist,tr , 2Xcifar10,tr and 2Xcifar100,tr , and were trained on ucc labels (Appendix C.2). 
While all the models were trained in a supervised setup, either on ucc labels or on instance-level ground truths, all of them were used to extract features for unsupervised clustering of individual instances." }, { "heading": "4.2 UNIQUE CLASS COUNT PREDICTION", "text": "The preceding sections showed, in theory, that a perfect ucc classifier can perform 'weakly' supervised clustering perfectly. We evaluate the ucc prediction accuracy of our unique class count models in accordance with our first intermediate objective that unique class count models should predict the ucc labels of unseen subsets correctly. We randomly sampled subsets for each ucc label from the power sets of the test sets and predicted the ucc labels by using the trained models. Then, we calculated the ucc prediction accuracies by using the predicted and true ucc labels, which are summarized in Table 1 (Appendix C.3). We observed that as the task becomes harder (from MNIST to CIFAR100), it also becomes harder to approximate the perfect ucc classifier. Moreover, the $UCC$ and $UCC_{\alpha=1}$ models, in general, have higher scores than their counterpart models $UCC^{2+}$ and $UCC^{2+}_{\alpha=1}$, which is expected since the ucc prediction task becomes easier in the absence of pure sets and the models reach the early stopping condition (Appendix C.1) more easily. This is also supported by another interesting, yet reasonable, observation: $UCC^{2+}$ models have higher ucc accuracies than $UCC^{2+}_{\alpha=1}$ models thanks to the autoencoder branch, which makes it harder for $UCC^{2+}$ to reach the early stopping condition." }, { "heading": "4.3 LABELS ON SETS", "text": "The Jensen-Shannon (JS) divergence (Lin, 1991) between the feature distributions of two pure sets consisting of instances of two different underlying classes is defined as the inter-class JS divergence in this paper and is used for comparison on the 'Labels on sets' objective of assigning labels to pure sets. Higher values of inter-class JS divergence are desired, since they mean that the feature distributions of pure sets of underlying classes are far apart from each other. The features of all the instances in a particular class are extracted by using a trained model, and the feature distributions associated to that class are obtained by performing kernel density estimation on these extracted features. Then, for each pair of classes, inter-class JS divergence values are calculated (Appendix C.4). For a particular model used in feature extraction, the minimum of these pairwise inter-class JS divergence values is used as a metric in the comparison of models. We have observed that as the task gets more challenging and the number of clusters increases, there is a drop in the minimum inter-class JS divergence values, which is summarized in Table 1." }, { "heading": "4.4 LABELS ON INSTANCES", "text": "For our eventual objective of 'Labels on instances', we have used 'clustering accuracy' as a comparison metric, which is calculated similarly to Ghasedi Dizaji et al. (2017). By using our trained models, we extracted features of the individual instances of all classes in the test sets. Then, we performed unsupervised clustering over these features by using k-means and spectral clustering.
We used the number of classes in the ground truth as the number of clusters (MNIST: 10, CIFAR10: 10, CIFAR100: 20 clusters) during clustering and report the best clustering accuracy for each model in Table 1 (Appendix C.5).\nIn Table 1, we compare the clustering accuracies of our models together with baseline and state-of-the-art models in the literature: baseline unsupervised (K-means (Wang et al., 2015), Spectral Clustering (Zelnik-Manor & Perona, 2005)); state-of-the-art unsupervised (JULE (Yang et al., 2016), GMVAE (Dilokthanakul et al., 2016), DAC (Chang et al., 2017), DEPICT (Ghasedi Dizaji et al., 2017), DEC (Xie et al., 2016)); and state-of-the-art semi-supervised (AAE (Makhzani et al., 2015), CatGAN (Springenberg, 2015), LN (Rasmus et al., 2015), ADGM (Maaløe et al., 2016)). The clustering performance of our unique class count models is better than the performance of the unsupervised models on all datasets and comparable to the performance of the fully supervised learning models on the MNIST and CIFAR10 datasets. The performance gap gets larger on the CIFAR100 dataset as the task becomes harder. Although the semi-supervised methods use some part of the dataset with 'exact' labels during training, our models perform on par with the AAE and CatGAN models and comparably to the LN and ADGM models on the MNIST dataset. ADGM and LN even reach the performance of the $FullySupervised$ model since they exploit training with 'exact'-labeled data. On the CIFAR10 dataset, the LN and CatGAN models are slightly better than our unique class count models; however, they use 10% of the instances with 'exact' labels, which is not a small portion.\nIn general, our $UCC$ and $UCC_{\alpha=1}$ models have similar performance, and they are better than their counterpart models $UCC^{2+}$ and $UCC^{2+}_{\alpha=1}$ due to the absence of pure sets during the latter models' training. However, in real-world tasks, the absence of pure sets heavily depends on the nature of the problem. In our task of semantic segmentation of breast cancer metastases in histological lymph node sections, for example, there are many pure sets. Furthermore, we observed that there is a performance gap between the $UCC^{2+}$ and $UCC^{2+}_{\alpha=1}$ models: $UCC^{2+}$ models perform better than $UCC^{2+}_{\alpha=1}$ models thanks to the autoencoder branch. The effect of the autoencoder branch is also apparent in Figure 3, which shows clustering accuracy vs. ucc accuracy curves for the different datasets. For the MNIST dataset, while the $UCC$ model gives clustering accuracy values proportional to ucc accuracy, the $UCC_{\alpha=1}$ model cannot reach high clustering accuracy values until it reaches high ucc accuracies. The reason is that the autoencoder branch in $UCC$ helps the $\theta_{feature}$ module to extract better features during the initial phases of the training process, where the ucc classification accuracy is low. Compared to the other datasets, this effect is more significant on the MNIST dataset since the dataset itself is clusterable. Although the autoencoder branch helps on the CIFAR10 and CIFAR100 datasets as well, the improvements in clustering accuracy coming from the autoencoder branch seem to be limited, so the two models $UCC$ and $UCC_{\alpha=1}$ follow nearly the same trend in the plots. The reason is that the CIFAR10 and CIFAR100 datasets are more complex than the MNIST dataset, so the autoencoder is not powerful enough to contribute to the extraction of discriminant features, which is also confirmed by the limited improvements of the $Autoencoder$ models over the baseline performance on these datasets."
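For reference, the 'clustering accuracy' used above is commonly computed as the best one-to-one matching between cluster indices and class labels; a minimal sketch via the Hungarian algorithm, assuming integer label arrays (this mirrors the metric of Ghasedi Dizaji et al. (2017) but is not their exact code).

import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Best-match clustering accuracy between predicted clusters and true classes."""
    k = int(max(y_true.max(), y_pred.max())) + 1
    count = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        count[t, p] += 1
    rows, cols = linear_sum_assignment(-count)   # maximize matched counts
    return count[rows, cols].sum() / len(y_true)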
}, { "heading": "5 SEMANTIC SEGMENTATION OF BREAST CANCER METASTASES", "text": "Semantic segmentation of breast cancer metastases in histological lymph node sections is a crucial step in staging of breast cancer, which is the major determinant of the treatment and prognosis (Brierley et al., 2016). Given the images of lymph node sections, the task is to detect and locate, i.e. semantically segment out, metastases regions in the images. We have formulated this task in our novel MIL framework such that each image is treated as a bag and corresponding ucc label is obtained based on whether the image is from fully normal or metastases region, which is labeled by ucc1, or from boundary region (i.e. image with both normal and metastases regions), which is labeled by ucc2. We have shown that this segmentation task can be achieved by using our weakly supervised clustering framework without knowing the ground truth metastases region masks of images, which require experts to exhaustively annotate each metastases region in each image. This annotation process is tedious, time consuming and more importantly not a part of clinical workflow.\nWe have used 512 × 512 image crops from publicly available CAMELYON dataset (Litjens et al., 2018) and constructed our bags by using 32× 32 patches over these images. We trained our unique class count model UCCsegment on ucc labels. Then, we used the trained model as a feature extractor and conducted unsupervised clustering over the patches of the images in the hold-out test dataset to obtain semantic segmentation masks. For benchmarking purposes, we have also trained a fully supervised Unet model (Ronneberger et al., 2015), which is a well-known biomedical image segmentation architecture, by using the ground truth masks and predicted the segmentation maps in\nthe test set. The aim of this comparison was to show that at the absence of ground truth masks, our model can approximate the performance of a fully supervised model. Moreover, we have obtained semantic segmentation maps in the test dataset by using k-means clustering as a baseline study. Example images from test dataset with corresponding ground truth masks, ucc labels and predicted masks by different models are shown in Figure 4. (Please see Appendix D.1 for more details.)\nFurthermore, we have calculated pixel level gross statistics of TPR (True Positive Rate), FPR (False Positive Rate), TNR (True Negative Rate), FNR (False Negative Rate) and PA (Pixel Accuracy) over the images of hold-out test dataset and declared the mean values in Table 2 (Appendix D.2). When we look at the performance of unsupervised baseline method of K-means clustering, it is obvious that semantic segmentation of metastases regions in lymph node sections is not an easy task. Baseline method achieves a very low TPR value of 0.370 and almost random score of 0.512 in PA. On the other hand, both our weakly supervised model UCCsegment and fully supervised model Unet outperform the baseline method. When we compare our model UCCsegment with Unet model, we see that both models behave similarly. They have reasonably high TPR and TNR scores, and low FPR and FNR scores. Moreover, they have lower FPR values than FNR values, which is more favorable than vice-versa since pathologists opt to use immunohistochemistry (IHC) to confirm negative cases (Bejnordi et al., 2017). 
However, there is a performance gap between the two models, which is mainly due to the fact that the $Unet$ model is fully supervised and trained on ground truth masks, which require exhaustive annotations by experts. On the contrary, the $UCC_{segment}$ model is trained on ucc labels and approximates the performance of the $Unet$ model. A ucc label is obtained based on whether the image is metastatic, non-metastatic or a mixture, which is much cheaper and easier to obtain compared to exhaustive mask annotations. Another factor affecting the performance of the $UCC_{segment}$ model is that ucc1 labels can sometimes be noisy: it is possible to have some small portion of normal cells in cancer regions and vice-versa due to the nature of the cancer. However, our $UCC_{segment}$ is robust to this noise and gives reasonably good results, approximating the performance of the $Unet$ model." }, { "heading": "6 CONCLUSION", "text": "In this paper, we proposed a weakly supervised learning based clustering framework and introduced a novel MIL task as the core of this framework. We defined ucc as a bag-level label in the MIL setup and mathematically proved that a perfect ucc classifier can be used to perfectly cluster the individual instances inside the bags. We designed a neural network based ucc classifier and experimentally showed that the clustering performance of our framework with our ucc classifiers is better than the performance of unsupervised models and comparable to the performance of fully supervised learning models. Finally, we showed that our weakly supervised unique class count model, $UCC_{segment}$, can be used for semantic segmentation of breast cancer metastases in histological lymph node sections. We compared the performance of our model $UCC_{segment}$ with the performance of a $Unet$ model and showed that our weakly supervised model approximates the performance of the fully supervised $Unet$ model. In the future, we want to check the performance of our $UCC_{segment}$ model on other medical image datasets and use it to discover new morphological patterns in cancer that have been overlooked in the traditional pathology workflow." }, { "heading": "ACKNOWLEDGEMENTS", "text": "This work is supported by the Biomedical Research Council of the Agency for Science, Technology, and Research, Singapore and the National University of Singapore, Singapore." }, { "heading": "A KERNEL DENSITY ESTIMATION", "text": "Kernel density estimation is a statistical method to estimate the underlying unknown probability distribution in data (Parzen, 1962). It works by fitting kernels at sample points of an unknown distribution and adding them up to construct the estimated probability distribution. The kernel density estimation process is illustrated in Figure 5.\nA.1 KDE MODULE IS DIFFERENTIABLE\nThe distribution of the feature $h^j_{\sigma_\zeta}(v)$ is obtained by applying kernel density estimation on the extracted features $f^{j,i}_{\sigma_\zeta}$ as in Equation 2. In order to be able to train our unique class count model end-to-end, we need to show that the KDE module is differentiable, so that we can pass the gradients from $\theta_{drn}$ to $\theta_{feature}$ during back-propagation. The derivative of $h^j_{\sigma_\zeta}(v)$ with respect to the input of the KDE module, $f^{j,i}_{\sigma_\zeta}$, can be obtained as in Equation 3.\n$$h^j_{\sigma_\zeta}(v) = \frac{1}{|\sigma_\zeta|} \sum_{i=1}^{|\sigma_\zeta|} \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{1}{2\sigma^2}\left(v - f^{j,i}_{\sigma_\zeta}\right)^2} \qquad (2)$$\n$$\frac{\partial h^j_{\sigma_\zeta}(v)}{\partial f^{j,i}_{\sigma_\zeta}} = \frac{1}{|\sigma_\zeta|}\, \frac{\left(v - f^{j,i}_{\sigma_\zeta}\right)}{\sigma^2 \sqrt{2\pi\sigma^2}}\, e^{-\frac{1}{2\sigma^2}\left(v - f^{j,i}_{\sigma_\zeta}\right)^2} \qquad (3)$$\nAfter showing that the KDE module is differentiable, we can show the weight update process for the $\theta_{feature}$ module in our model.
The feature extractor module $\theta_{feature}$ is shared by both the autoencoder branch and the ucc branch in our model. During the back-propagation phase of the end-to-end training process, the weight updates of $\theta_{feature}$ comprise the gradients coming from both branches (Equation 5). Gradients coming from the autoencoder branch follow the traditional neural network back-propagation flow through the convolutional and fully connected layers. In contrast, gradients coming from the ucc branch (Equation 6) also back-propagate through the custom KDE layer according to Equation 3.\n$$Loss = \underbrace{\alpha\, Loss_{ucc}}_{\text{ucc loss}} + \underbrace{(1-\alpha)\, Loss_{ae}}_{\text{autoencoder loss}} \quad \text{where } \alpha \in [0, 1] \qquad (4)$$\n$$\underbrace{\frac{\partial Loss}{\partial \theta_{feature}}}_{\text{gradients for } \theta_{feature}} = \underbrace{\alpha \frac{\partial Loss_{ucc}}{\partial \theta_{feature}}}_{\text{gradients from ucc branch}} + \underbrace{(1-\alpha) \frac{\partial Loss_{ae}}{\partial \theta_{feature}}}_{\text{gradients from autoencoder branch}} \qquad (5)$$\n$$\frac{\partial Loss_{ucc}}{\partial \theta_{feature}} = \frac{\partial Loss_{ucc}}{\partial h_{\sigma_\zeta}} \times \underbrace{\frac{\partial h_{\sigma_\zeta}}{\partial f_{\sigma_\zeta}}}_{\text{back-propagation through KDE layer}} \times \frac{\partial f_{\sigma_\zeta}}{\partial \theta_{feature}} \qquad (6)$$"
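As a quick numerical sanity check of Equations 2 and 3, the following sketch compares PyTorch's autograd gradient of the KDE estimate with the analytic derivative; the bag size, query point and bandwidth are arbitrary illustrative choices.

import math
import torch

sigma = 0.1
f = torch.rand(32, requires_grad=True)   # one feature across a bag of 32 instances
v = torch.tensor(0.5)

# Equation 2: h(v) = (1/|bag|) sum_i N(v; f_i, sigma^2)
h = torch.exp(-0.5 * ((v - f) / sigma) ** 2).mean() / (sigma * math.sqrt(2 * math.pi))
h.backward()

# Equation 3, evaluated analytically
with torch.no_grad():
    g = (v - f) / (32 * sigma ** 3 * math.sqrt(2 * math.pi)) \
        * torch.exp(-0.5 * ((v - f) / sigma) ** 2)
print(torch.allclose(f.grad, g))         # True: the KDE layer back-propagates correctly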
}, { "heading": "B PROOFS OF PROPOSITIONS", "text": "Before proceeding to the formal proofs, it is helpful to emphasize the decomposability property of kernel density estimation here.\nFor any set $\sigma_\zeta$, one could partition it into a set of $M$ disjoint subsets $\sigma_\zeta = \sigma'_1 \cup \sigma'_2 \cup \cdots \cup \sigma'_M$ where $\sigma'_\lambda \cap \sigma'_\psi = \emptyset$ for $\lambda \neq \psi$. It is trivial to show that the distribution $h^j_{\sigma_\zeta}(v)$ is simply a linear combination of the distributions $h^j_{\sigma'_\lambda}(v)$, $\lambda = 1, 2, \cdots, M$ (Equation 7). As a direct consequence, one could decompose any set into its pure subsets. This is an important decomposition which will be used in the proofs of the propositions later.\n$$h^j_{\sigma_\zeta}(v) = \sum_{\lambda=1}^{M} w_{\sigma'_\lambda} h^j_{\sigma'_\lambda}(v), \;\forall j \quad \text{where } w_{\sigma'_\lambda} = \frac{|\sigma'_\lambda|}{|\sigma_\zeta|} \qquad (7)$$\nNow, we can proceed to formally state our propositions.\nDefinition 1 Given a subset $\sigma_\zeta \subset \mathcal{X}$, the unique class count $\eta_{\sigma_\zeta}$ is defined as the number of unique classes that all instances in the subset $\sigma_\zeta$ belong to, i.e. $\eta_{\sigma_\zeta} = |\{L(x_i) \mid x_i \in \sigma_\zeta\}|$. Recall that each instance belongs to an underlying unknown class.\nDefinition 2 A set $\sigma$ is called a pure set if its unique class count equals one. All pure sets are denoted by the symbol $\sigma^{pure}$ in this paper.\nProposition B.1 For any set $\sigma_\zeta \subset \mathcal{X}$, the unique class count $\eta_{\sigma_\zeta}$ of $\sigma_\zeta$ does not depend on the number of instances in $\sigma_\zeta$ belonging to a certain class.\nProof: This conclusion is obvious from the definition of unique class count in Definition 1.\nProposition B.2 $\theta_{drn}$ is non-linear.\nProof: We give a proof by contradiction using Proposition B.1. Suppose $\theta_{drn}$ is linear; then\n$$\theta_{drn}(h_{\sigma_\nu}) = \theta_{drn}(w_\zeta h_{\sigma_\zeta} + w_\xi h_{\sigma_\xi}) = w_\zeta \theta_{drn}(h_{\sigma_\zeta}) + w_\xi \theta_{drn}(h_{\sigma_\xi}) = w_\zeta \tilde{\eta}_{\sigma_\zeta} + w_\xi \tilde{\eta}_{\sigma_\xi} = \tilde{\eta}_{\sigma_\nu} \qquad (8)$$\nHence, $\theta_{drn}$ is linear only when Equation 8 holds. However, by Proposition B.1, $(\theta_{feature}, \theta_{drn})$ should count correctly regardless of the proportion of the sizes of the sets $|\sigma_\zeta|$ and $|\sigma_\xi|$. Hence, Equation 8 cannot hold true, and $\theta_{drn}$, by contradiction, cannot be linear.\nProposition B.3 Let $\sigma_\zeta, \sigma_\xi$ be disjoint subsets of $\mathcal{X}$ with predicted unique class counts $\tilde{\eta}_{\sigma_\zeta}$ and $\tilde{\eta}_{\sigma_\xi}$, respectively. Let $\tilde{\eta}_{\sigma_\nu}$ be the predicted unique class count of $\sigma_\nu = \sigma_\zeta \cup \sigma_\xi$. If $h_{\sigma_\zeta} = h_{\sigma_\xi}$, then $\tilde{\eta}_{\sigma_\nu} = \tilde{\eta}_{\sigma_\zeta} = \tilde{\eta}_{\sigma_\xi}$.\nProof: The distribution of the set $\sigma_\nu$ can be decomposed into the distributions of its subsets,\n$$h_{\sigma_\nu} = w_\zeta h_{\sigma_\zeta} + w_\xi h_{\sigma_\xi} \quad \text{where } w_\zeta + w_\xi = 1 \qquad (9)$$\n$$h_{\sigma_\zeta} = h_{\sigma_\xi} \implies h_{\sigma_\nu} = h_{\sigma_\zeta} \qquad (10)$$\nHence, $\tilde{\eta}_{\sigma_\nu} = \tilde{\eta}_{\sigma_\zeta} = \tilde{\eta}_{\sigma_\xi}$.\nProposition 1 Let $\sigma_\zeta, \sigma_\xi$ be disjoint subsets of $\mathcal{X}$ with predicted unique class counts $\tilde{\eta}_{\sigma_\zeta} = \tilde{\eta}_{\sigma_\xi} = 1$. If the predicted unique class count of $\sigma_\nu = \sigma_\zeta \cup \sigma_\xi$ is $\tilde{\eta}_{\sigma_\nu} = 2$, then $h_{\sigma_\zeta} \neq h_{\sigma_\xi}$.\nProof: The proof of this proposition follows immediately from the contrapositive of Proposition B.3.\nDefinition 3 A perfect unique class count classifier takes in any set $\sigma$ and outputs the correct predicted unique class count $\tilde{\eta}_\sigma = \eta_\sigma$.\nProposition 2 Given a perfect unique class count classifier, the dataset $\mathcal{X}$ can be perfectly clustered into $K$ subsets $\sigma^{pure}_\xi$, $\xi = 1, 2, \cdots, K$, such that $\mathcal{X} = \bigcup_{\xi=1}^{K} \sigma^{pure}_\xi$ and $\sigma^{pure}_\xi = \{x_i \mid x_i \in \mathcal{X}, L(x_i) = \xi\}$.\nProof: First note that this proposition holds because the "perfect unique class count classifier" is a very strong condition. Decompose $\mathcal{X}$ into subsets with a single instance each and then apply the unique class count to each subset; by definition, the unique class counts of all subsets are one. Randomly pair up the subsets and merge them if their union still yields a unique class count of one. Recursively apply merging under this condition until no subsets can be merged.\nProposition 3 Given a perfect unique class count classifier, decompose the dataset $\mathcal{X}$ into $K$ subsets $\sigma^{pure}_\xi$, $\xi = 1, \cdots, K$, such that $\sigma^{pure}_\xi = \{x_i \mid x_i \in \mathcal{X}, L(x_i) = \xi\}$. Then, $h_{\sigma^{pure}_\xi} \neq h_{\sigma^{pure}_\zeta}$ for $\xi \neq \zeta$.\nProof: Since in Proposition 1 the subsets are arbitrary, it holds for any two subsets with a unique class count of one. By pairing up all combinations, one arrives at this proposition. Note that for a perfect unique class count classifier, $\eta = \tilde{\eta}$."
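The merging argument in the proof of Proposition 2 can be written as a short procedure; a minimal sketch, assuming a hypothetical `ucc_oracle` callable that returns the true unique class count of any set (a stand-in for the perfect classifier).

def cluster_with_oracle(instances, ucc_oracle):
    """Start from singletons and merge any two sets whose union is still pure
    (ucc == 1) until no merge is possible; yields K pure sets for a perfect oracle."""
    sets = [[x] for x in instances]
    merged = True
    while merged:
        merged = False
        for i in range(len(sets)):
            for j in range(i + 1, len(sets)):
                if ucc_oracle(sets[i] + sets[j]) == 1:
                    sets[i] += sets.pop(j)
                    merged = True
                    break
            if merged:
                break
    return sets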
}, { "heading": "C DETAILS ON EXPERIMENTS WITH MNIST AND CIFAR DATASETS", "text": "C.1 DETAILS OF MODEL ARCHITECTURES\nThe feature extractor module $\theta_{feature}$ has convolutional blocks similar to the wide residual blocks in Zagoruyko & Komodakis (2016). However, the parameters of the architectures (the number of convolutional and fully connected layers, the number of filters in the convolutional layers, the number of nodes in the fully-connected layers, and the number of bins and the $\sigma$ value in the KDE module) were decided based on the models' performance and training times. While increasing the number of convolutional layers or filters did not improve the performance of the models substantially, it did put a heavy computational burden on training. For determining the architecture of $\theta_{drn}$, we checked the performance of different numbers of fully connected layers. As the number of layers increased, the ucc classification performance of the models increased. However, we want $\theta_{feature}$ to be the powerful component, so we stopped increasing the number of layers as soon as we got good results. For the KDE module, we tried parameters of 11 bins, 21 bins, $\sigma = 0.1$ and $\sigma = 0.01$. The best results were obtained with 11 bins and $\sigma = 0.1$. Similarly, we tested different numbers of features at the output of the $\theta_{feature}$ module and decided to use 10 features for the MNIST and CIFAR10 datasets and 16 features for the CIFAR100 dataset based on the clustering performance and computational burden.\nDuring training, the loss value on the validation sets was monitored as the early stopping criterion. The training of the models was stopped if the validation loss did not drop for a certain number of training iterations.\nFor the final set of hyperparameters and details of the architectures, please see the code for our experiments: http://bit.ly/uniqueclasscount\nC.2 DETAILS OF DATASETS\nWe trained and tested our models on the MNIST, CIFAR10 and CIFAR100 datasets. While the MNIST and CIFAR10 datasets have 10 classes, the CIFAR100 dataset has 20 classes. For MNIST, we randomly split 10,000 images from the training set as a validation set, so we had 50,000, 10,000 and 10,000 images in our training $\mathcal{X}_{mnist,tr}$, validation $\mathcal{X}_{mnist,val}$ and test $\mathcal{X}_{mnist,test}$ sets, respectively. In the CIFAR10 dataset, there are 50,000 and 10,000 images with an equal number of instances from each class in the training and testing sets, respectively. Similar to the MNIST dataset, we randomly split 10,000 images from the training set as a validation set. Hence, we had 40,000, 10,000 and 10,000 images in our training $\mathcal{X}_{cifar10,tr}$, validation $\mathcal{X}_{cifar10,val}$ and testing $\mathcal{X}_{cifar10,test}$ sets for CIFAR10, respectively. In the CIFAR100 dataset, there are likewise 50,000 and 10,000 images with an equal number of instances from each class in the training and testing sets, respectively. Similar to the other datasets, we randomly split 10,000 images from the training set as a validation set. Hence, we had 40,000, 10,000 and 10,000 images in our training $\mathcal{X}_{cifar100,tr}$, validation $\mathcal{X}_{cifar100,val}$ and testing $\mathcal{X}_{cifar100,test}$ sets for CIFAR100, respectively.\n$FullySupervised$ models took individual instances as inputs and were trained on instance-level ground truths. $\mathcal{X}_{mnist,tr}$, $\mathcal{X}_{cifar10,tr}$ and $\mathcal{X}_{cifar100,tr}$ were used for the training of the $FullySupervised$ models. Unique class count models took sets of instances as inputs and were trained on ucc labels. Inputs to the unique class count models were sampled from the power sets of the MNIST, CIFAR10 and CIFAR100 datasets, i.e. $2^{\mathcal{X}_{mnist,tr}}$, $2^{\mathcal{X}_{cifar10,tr}}$ and $2^{\mathcal{X}_{cifar100,tr}}$. For the MNIST and CIFAR10 datasets, subsets (bags) with 32 instances, and for the CIFAR100 dataset, subsets (bags) with 128 instances, were used in our experiments. While the $UCC$ and $UCC_{\alpha=1}$ models were trained on ucc1 to ucc4 labels, the $UCC^{2+}$ and $UCC^{2+}_{\alpha=1}$ models were trained on ucc2 to ucc4 labels.\nOur models were trained on ucc labels up to ucc4 instead of ucc10 (ucc20 in CIFAR100) since the performance was almost the same for both cases in our experiment on the MNIST dataset, the results of which are shown in Table 3. On the other hand, training with ucc1 to ucc4 was much faster than with ucc1 to ucc10 because, as the ucc label gets larger, the number of instances in a bag needs to be larger in order to represent each class, and the number of elements in the power set also grows exponentially. Please note that for perfect clustering of instances, it is enough to have a perfect ucc classifier that can discriminate ucc1 and ucc2, by Proposition 2.\nAll the results presented in this paper were obtained on the hold-out test sets $\mathcal{X}_{mnist,test}$, $\mathcal{X}_{cifar10,test}$ and $\mathcal{X}_{cifar100,test}$.\nC.3 CONFUSION MATRICES FOR ucc PREDICTIONS\nWe randomly sampled subsets for each ucc label from the power sets of the test sets and predicted the ucc labels by using the trained models. Then, we calculated the ucc prediction accuracies by using the predicted and true ucc labels, which are summarized in Table 1. Here, we show the confusion matrices of our $UCC$ and $UCC^{2+}$ models on the MNIST, CIFAR10 and CIFAR100 datasets as examples in Figures 6, 7 and 8, respectively.\nC.4 FEATURE DISTRIBUTIONS AND INTER-CLASS JS DIVERGENCE MATRICES\nThe features of all the instances in a particular class are extracted by using a trained model, and the feature distributions associated to that class are obtained by performing kernel density estimation on these extracted features. Then, for each pair of classes, inter-class JS divergence values are calculated. We show the inter-class JS divergence matrices for our $FullySupervised$ and $UCC$ models on the MNIST test dataset in Figure 9.
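A hedged sketch of this inter-class JS divergence computation; how the per-feature divergences are aggregated into one value per class pair is an assumption here (we sum them over the J features), and note that scipy's `jensenshannon` returns the square root of the divergence.

import numpy as np
from scipy.spatial.distance import jensenshannon

def min_interclass_js(class_distributions):
    """class_distributions: dict class -> (J, num_bins) feature histograms.
    Returns the minimum pairwise inter-class JS divergence, summed over features."""
    labels = list(class_distributions)
    best = np.inf
    for a in range(len(labels)):
        for b in range(a + 1, len(labels)):
            ha, hb = class_distributions[labels[a]], class_distributions[labels[b]]
            js = sum(jensenshannon(pa, pb) ** 2 for pa, pb in zip(ha, hb))
            best = min(best, js)
    return best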
We also show the underlying distributions for the $FullySupervised$ and $UCC$ models in Figures 10 and 11, respectively.\nFigure 10: Distributions of extracted features by our $FullySupervised$ model on the MNIST test dataset. Each column corresponds to a feature learned by the model and each row corresponds to an underlying class in the test dataset.\nFigure 11: Distributions of extracted features by our $UCC$ model on the MNIST test dataset. Each column corresponds to a feature learned by the model and each row corresponds to an underlying class in the test dataset.\nC.5 K-MEANS AND SPECTRAL CLUSTERING ACCURACIES OF OUR MODELS\nWe performed unsupervised clustering by using k-means and spectral clustering and gave the best clustering accuracy for each model on each dataset in Table 1 in the main text. Here, we present all the clustering accuracies for our models in Table 4.\nC.6 UCC MODELS WITH AVERAGING LAYER AND KDE LAYER\nThe KDE layer is chosen as the MIL pooling layer in the UCC model because of its four main properties, the first three of which are essential for the proper operation of the proposed framework and the validity of the propositions in the paper:\n1. The KDE layer is permutation-invariant, i.e. the output of the KDE layer does not depend on the permutation of its inputs, which is important for the stability of the $\theta_{drn}$ module.\n2. The KDE layer is differentiable, so the UCC model can be trained end-to-end.\n3. The KDE layer has the decomposability property, which enables our theoretical analysis (Appendix B).\n4. The KDE layer enables $\theta_{drn}$ to fully utilize the information in the shape of the distribution rather than looking at point estimates of the distribution.\nAn averaging layer (Wang et al., 2018) as an MIL pooling layer, which also has the first three properties, can be an alternative to the KDE layer in the UCC model. We have conducted additional experiments by replacing the KDE layer with an 'averaging layer' and compare the clustering accuracy values of the models with the averaging layer and the models with the KDE layer in Table 5."
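A tiny sketch illustrating property 1; the 11-bin grid and sigma = 0.1 mirror the hyperparameters reported in Appendix C.1, while the random bag is purely illustrative.

import torch

def kde_pool(f, num_bins=11, sigma=0.1):
    """KDE pooling over a bag: f is (bag_size, J), output is (J, num_bins)."""
    bins = torch.linspace(0.0, 1.0, num_bins)
    k = torch.exp(-0.5 * ((bins.view(1, 1, -1) - f.unsqueeze(-1)) / sigma) ** 2)
    return k.mean(dim=0)

bag = torch.rand(32, 10)   # 32 instances, 10 features
print(torch.allclose(kde_pool(bag), kde_pool(bag[torch.randperm(32)])))   # True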
}, { "heading": "D DETAILS ON SEMANTIC SEGMENTATION TASK", "text": "D.1 DETAILS OF MODEL AND DATASET\nOur model $UCC_{segment}$ has the same architecture as the $UCC$ model for the CIFAR10 dataset, but this time we used 16 features. We also constructed the $Unet$ model with the same blocks used in the $UCC_{segment}$ model in order to ensure a fair comparison. The details of the models can be seen in our code: http://bit.ly/uniqueclasscount\nWe used 512 × 512 image crops from the publicly available CAMELYON dataset (Litjens et al., 2018). The CAMELYON dataset is a public Whole Slide Image (WSI) dataset of histological lymph node sections. It also provides exhaustive annotations for the metastases regions inside the slides, which enables us to train fully supervised models for benchmarking our weakly supervised unique class count model.\nWe randomly crop 512 × 512 images over the WSIs of the CAMELYON dataset and associate a ucc label to each image based on whether it is fully metastatic/normal (ucc1) or a mixture (ucc2). We assigned ucc labels based on the provided ground truths since they are readily available. However, please note that in case no annotations are provided, obtaining ucc labels is much cheaper and easier compared to tedious and time-consuming exhaustive metastases region annotations. We assigned the ucc1 label to an image if the metastases region in the corresponding ground truth mask is either less than 20% (i.e. normal) or more than 80% (i.e. metastases). On the other hand, we assigned the ucc2 label to an image if the metastases region in the corresponding ground truth mask is more than 30% and less than 70% (i.e. mixture). This labeling scheme imitates the noise that would have been introduced if ucc labeling had been done directly by the user instead of using ground truth masks. Beyond that, ucc1 labels in this task can naturally be noisy since it is possible to have some small portion of normal cells in cancer regions and vice-versa due to the nature of the cancer. In this way, we constructed our segmentation dataset consisting of training, validation and testing sets. The images in the training and validation sets are cropped randomly over the WSIs in the training set of the CAMELYON dataset, and the images in the testing set are cropped randomly over the test set of the CAMELYON dataset. Then, the bags in our MIL dataset to train the $UCC_{segment}$ model are constructed by using 32 × 32 patches over these images. Each bag contains 32 instances, where each instance is a 32 × 32 patch. The details of our segmentation dataset are shown in Table 6.\nWe have provided the segmentation dataset under the “./data/camelyon/” folder inside our code folder. If you want to use this dataset for benchmarking purposes, please cite our paper (referenced later) together with the original CAMELYON dataset paper of Litjens et al. (2018).\nWe show the confusion matrix for ucc predictions of our $UCC_{segment}$ model in Figure 12. For the $Unet$ model, we show the loss curves of the training and validation sets during training in Figure 13.\nD.2 DEFINITIONS OF EVALUATION METRICS\nIn this section, we define the pixel-level evaluation metrics used for the performance comparison of our weakly supervised $UCC_{segment}$ model, the fully supervised $Unet$ model and the unsupervised baseline k-means model. Table 7 shows the structure of the pixel-level confusion matrix together with basic statistical terms. Then, our pixel-level evaluation metrics TPR (True Positive Rate), FPR (False Positive Rate), TNR (True Negative Rate), FNR (False Negative Rate) and PA (Pixel Accuracy) are defined in Equations 11, 12, 13, 14 and 15, respectively.\n$$TPR = \frac{TP}{TP + FN} \qquad (11)$$\n$$FPR = \frac{FP}{FP + TN} \qquad (12)$$\n$$TNR = \frac{TN}{TN + FP} \qquad (13)$$\n$$FNR = \frac{FN}{FN + TP} \qquad (14)$$\n$$PA = \frac{TP + TN}{TP + FP + TN + FN} \qquad (15)$$" } ]
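For completeness, a minimal sketch computing Equations 11-15 from binary prediction and ground truth masks; the function name and dictionary output are illustrative.

import numpy as np

def segmentation_stats(pred, gt):
    """Pixel-level TPR, FPR, TNR, FNR and PA from binary masks."""
    tp = np.sum((pred == 1) & (gt == 1))
    fp = np.sum((pred == 1) & (gt == 0))
    tn = np.sum((pred == 0) & (gt == 0))
    fn = np.sum((pred == 0) & (gt == 1))
    return {"TPR": tp / (tp + fn), "FPR": fp / (fp + tn),
            "TNR": tn / (tn + fp), "FNR": fn / (fn + tp),
            "PA": (tp + tn) / (tp + fp + tn + fn)}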
2020
WEAKLY SUPERVISED CLUSTERING BY EXPLOITING UNIQUE CLASS COUNT
SP:8a2a437441032f68341e305c568f59643a1c81e8
[ "The paper describes a method for accelerating MRI scans by proposing lines in k-space to acquire next. The proposals are based on posterior uncertainty estimates obtained from GAN-based reconstructions from parts of the k-space acquired thus far. The authors address an interesting and important problem of speeding up MRI scans and thus improving the subject's experience. The proposed method achieves better posterior uncertainty and SSIM scores than competing methods.", "The paper proposes an uncertainty driven acquisition for MRI reconstruction. Contrary to most previous approaches (which try to get best reconstruction for a fixed sampling pattern) the method incorporates an adaptive, on-the-fly masking building (which is similar in spirit to Zhang at al. 2019). The measurements to acquire are selected based on variance/uncertainty estimates coming from a conditional GAN model. This is mostly an \"application\" paper that is evaluated on one dataset." ]
This work proposes a closed loop, uncertainty-driven adaptive sampling framework (CLUDAS) for accelerating magnetic resonance imaging (MRI) via deep Bayesian inversion. By closed loop, we mean that our samples adapt in real-time to the incoming data. To our knowledge, we demonstrate the first generative adversarial network (GAN) based framework for posterior estimation over a continuum of sampling rates of an inverse problem. We use this estimator to drive the sampling for accelerated MRI. Our numerical evidence demonstrates that the variance estimate strongly correlates with the expected mean squared error (MSE) improvement for different acceleration rates, even with few posterior samples. Moreover, the resulting masks bring improvements over the state-of-the-art fixed and active mask design approaches across MSE, posterior variance and the structural similarity metric on real undersampled MRI scans.
[]
[ { "authors": [ "Daniel Y Abramovitch", "Sean B Andersson", "Lucy Y Pao", "Georg Schitter" ], "title": "A tutorial on the mechanisms, dynamics, and control of atomic force microscopes", "venue": "American Control Conference,", "year": 2007 }, { "authors": [ "Jonas Adler", "Ozan Öktem" ], "title": "Deep bayesian inversion", "venue": "arXiv preprint arXiv:1811.05910,", "year": 2018 }, { "authors": [ "Cagla Deniz Bahadir", "Adrian V Dalca", "Mert R Sabuncu" ], "title": "Learning-based optimization of the under-sampling pattern in MRI", "venue": "In International Conference on Information Processing in Medical Imaging,", "year": 2019 }, { "authors": [ "Claire Boyer", "Nicolas Chauffert", "Philippe Ciuciu", "Jonas Kahn", "Pierre Weiss" ], "title": "On the generation of sampling schemes for magnetic resonance imaging", "venue": "SIAM Journal on Imaging Sciences,", "year": 2016 }, { "authors": [ "Mark Bydder", "David J Larkman", "Joseph V Hajnal" ], "title": "Combination of signals from array coils using image-based estimation of coil sensitivity profiles", "venue": "Magnetic Resonance in Medicine: An Official Journal of the International Society for Magnetic Resonance in Medicine,", "year": 2002 }, { "authors": [ "Emmanuel J Candès", "Justin Romberg", "Terence Tao" ], "title": "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information", "venue": "IEEE Trans, on Inf. Theory,", "year": 2006 }, { "authors": [ "Nicolas Chauffert", "Philippe Ciuciu", "Jonas Kahn", "Pierre Weiss" ], "title": "Variable density sampling with continuous trajectories", "venue": "SIAM Journal on Imaging Sciences,", "year": 1962 }, { "authors": [ "Joseph Y Cheng", "Feiyu Chen", "Christopher Sandino", "Morteza Mardani", "John M Pauly", "Shreyas S Vasanawala" ], "title": "Compressed sensing: From research to clinical practice with data-driven learning", "venue": null, "year": 1903 }, { "authors": [ "David L Donoho" ], "title": "Compressed sensing", "venue": "IEEE transactions on Information Theory,", "year": 2006 }, { "authors": [ "Li Feng", "Robert Grimm", "Kai Tobias Block", "Hersh Chandarana", "Sungheon Kim", "Jian Xu", "Leon Axel", "Daniel K Sodickson", "Ricardo Otazo" ], "title": "Golden-angle radial sparse parallel MRI: Combination of compressed sensing, parallel imaging, and golden-angle radial sampling for fast and flexible dynamic volumetric MRI", "venue": "Magnetic Resonance in Medicine,", "year": 2014 }, { "authors": [ "Baran Gözcü", "Rabeeh K. 
Mahabadi", "Yen-Huan Li", "Efe Ilıcak", "Tolga Çukur", "Jonathan Scarlett", "Volkan Cevher" ], "title": "Learning-based compressive MRI", "venue": "IEEE Transactions on Medical Imaging,", "year": 2018 }, { "authors": [ "Baran Gözcü", "Thomas Sanchez", "Volkan Cevher" ], "title": "Rethinking sampling in parallel MRI: A datadriven approach", "venue": "In 27th European Signal Processing Conference (EUSIPCO),", "year": 2019 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of wasserstein gans", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Justin P Haldar", "Daeun Kim" ], "title": "Oedipus: An experiment design framework for sparsity-constrained mri", "venue": "IEEE transactions on medical imaging,", "year": 2019 }, { "authors": [ "Kerstin Hammernik", "Teresa Klatzer", "Erich Kobler", "Michael P Recht", "Daniel K Sodickson", "Thomas Pock", "Florian Knoll" ], "title": "Learning a variational network for reconstruction of accelerated mri data", "venue": "Magnetic resonance in medicine,", "year": 2018 }, { "authors": [ "Oren N Jaspan", "Roman Fleysher", "Michael L Lipton" ], "title": "Compressed sensing mri: a review of the clinical literature", "venue": "The British journal of radiology,", "year": 2015 }, { "authors": [ "Kyong Hwan Jin", "Dongwook Lee", "Jong Chul Ye" ], "title": "A general framework for compressed sensing and parallel MRI using annihilating filter based low-rank Hankel matrix", "venue": "IEEE Transactions on Computational Imaging,", "year": 2016 }, { "authors": [ "Kyong Hwan Jin", "Michael Unser", "Kwang Moo Yi" ], "title": "Self-supervised deep active accelerated MRI", "venue": "arXiv preprint arXiv:1901.04547,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Florian Knoll", "Christian Clason", "Clemens Diwoky", "Rudolf Stollberger" ], "title": "Adapted random sampling patterns for accelerated MRI", "venue": "Magnetic resonance materials in physics, biology and medicine,", "year": 2011 }, { "authors": [ "Libor Kovarik", "A Stevens", "A Liyu", "Nigel D Browning" ], "title": "Implementing an accurate and rapid sparse sampling approach for low-dose atomic resolution stem imaging", "venue": "Applied Physics Letters,", "year": 2016 }, { "authors": [ "Carole Lazarus", "Pierre Weiss", "Nicolas Chauffert", "Franck Mauconduit", "Michel Bottlaender", "Alexandre Vignaud", "Philippe Ciuciu" ], "title": "SPARKLING: Novel non-cartesian sampling schemes for accelerated 2D anatomical imaging at 7T using compressed sensing", "venue": "In 25th annual meeting of the International Society for Magnetic Resonance Imaging,", "year": 2017 }, { "authors": [ "Sajan Goud Lingala", "Mathews Jacob" ], "title": "Blind compressive sensing dynamic MRI", "venue": "IEEE transactions on medical imaging,", "year": 2013 }, { "authors": [ "Michael Lustig", "David Donoho", "John M Pauly" ], "title": "Sparse MRI: The application of compressed sensing for rapid MR imaging", "venue": "Magnetic Resonance in Medicine,", "year": 2007 }, { "authors": [ "Michael Lustig", "David L Donoho", "Juan M Santos", "John M Pauly" ], "title": "Compressed sensing MRI", "venue": "IEEE signal processing magazine,", "year": 2008 }, { "authors": [ "Matti Malinen", "Tomi Huttunen", "Jari P Kaipio", "Kullervo Hynynen" ], "title": "Scanning path 
optimization for ultrasound surgery", "venue": "Physics in Medicine & Biology,", "year": 2005 }, { "authors": [ "Ricardo Otazo", "Emmanuel Candès", "Daniel K Sodickson" ], "title": "Low-rank plus sparse matrix decomposition for accelerated dynamic MRI with separation of background and dynamic components", "venue": "Magnetic Resonance in Medicine,", "year": 2015 }, { "authors": [ "Saiprasad Ravishankar", "Yoram Bresler" ], "title": "Adaptive sampling design for compressed sensing MRI", "venue": "In Engineering in Medicine and Biology Society, EMBC,", "year": 2011 }, { "authors": [ "Saiprasad Ravishankar", "Yoram Bresler" ], "title": "MR image reconstruction from highly undersampled k-space data by dictionary learning", "venue": "IEEE Transactions on Medical Imaging,", "year": 2011 }, { "authors": [ "Olaf Ronneberger", "Philipp Fischer", "Thomas Brox" ], "title": "U-net: Convolutional networks for biomedical image segmentation", "venue": "In International Conference on Medical image computing and computerassisted intervention,", "year": 2015 }, { "authors": [ "Thomas Sanchez", "Baran Gözcü", "Ruud B van Heeswijk", "Efe Ilıcak", "Tolga Çukur" ], "title": "Scalable learning-based sampling optimization for compressive dynamic MRI", "venue": null, "year": 1902 }, { "authors": [ "Jo Schlemper", "Jose Caballero", "Joseph V Hajnal", "Anthony Price", "Daniel Rueckert" ], "title": "A deep cascade of convolutional neural networks for mr image reconstruction", "venue": "In International Conference on Information Processing in Medical Imaging,", "year": 2017 }, { "authors": [ "Jo Schlemper", "Jose Caballero", "Joseph V Hajnal", "Anthony N Price", "Daniel Rueckert" ], "title": "A deep cascade of convolutional neural networks for dynamic MR image reconstruction", "venue": "IEEE Transactions on Medical Imaging,", "year": 2018 }, { "authors": [ "Matthias Seeger", "Hannes Nickisch", "Rolf Pohmann", "Bernhard Schölkopf" ], "title": "Optimization of kspace trajectories for compressed sensing by bayesian experimental design", "venue": "Magn. Reson. 
Med.,", "year": 2010 }, { "authors": [ "Ferdia Sherry", "Martin Benning", "Juan Carlos De los Reyes", "Martin J Graves", "Georg Maierhofer", "Guy Williams", "Carola-Bibiane Schönlieb", "Matthias J Ehrhardt" ], "title": "Learning the sampling pattern for MRI", "venue": null, "year": 1906 }, { "authors": [ "Patrice Y Simard", "David Steinkraus", "John C Platt" ], "title": "Best practices for convolutional neural networks applied to visual document analysis", "venue": "In Icdar,", "year": 2003 }, { "authors": [ "Jaganathan Vellagoundar", "Ramasubba Reddy Machireddy" ], "title": "A robust adaptive sampling method for faster acquisition of MR images", "venue": "Magnetic resonance imaging,", "year": 2015 }, { "authors": [ "Shanshan Wang", "Zhenghang Su", "Leslie Ying", "Xi Peng", "Shun Zhu", "Feng Liang", "Dagan Feng", "Dong Liang" ], "title": "Accelerating magnetic resonance imaging via deep learning", "venue": "In 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI),", "year": 2016 }, { "authors": [ "Zhou Wang", "Alan C Bovik", "Hamid R Sheikh", "Eero P Simoncelli" ], "title": "Image quality assessment: from error visibility to structural similarity", "venue": "IEEE transactions on image processing,", "year": 2004 }, { "authors": [ "Tomer Weiss", "Sanketh Vedula", "Ortal Senouf", "Alex Bronstein", "Oleg Michailovich", "Michael Zibulevsky" ], "title": "Learning fast magnetic resonance imaging", "venue": null, "year": 1905 }, { "authors": [ "Shuo Zhang", "Kai Tobias Block", "Jens Frahm" ], "title": "Magnetic resonance imaging in real time: advances using radial flash", "venue": "Journal of Magnetic Resonance Imaging,", "year": 2010 }, { "authors": [ "Shuo Zhang", "Martin Uecker", "Dirk Voit", "Klaus-Dietmar Merboldt", "Jens Frahm" ], "title": "Real-time cardiovascular magnetic resonance at high temporal resolution: radial FLASH with nonlinear inverse reconstruction", "venue": "Journal of Cardiovascular Magnetic Resonance,", "year": 2010 }, { "authors": [ "Zizhao Zhang", "Adriana Romero", "Matthew J Muckley", "Pascal Vincent", "Lin Yang", "Michal Drozdzal" ], "title": "Reducing uncertainty in undersampled MRI reconstruction with active acquisition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Simard" ], "title": "y-axes, rotations of [0, 2π), reflection along the x-axis with 50% probability. We also apply elastic deformations using the implementation", "venue": null, "year": 2003 } ]
[ { "heading": null, "text": "This work proposes a closed loop, uncertainty-driven adaptive sampling framework (CLUDAS) for accelerating magnetic resonance imaging (MRI) via deep Bayesian inversion. By closed loop, we mean that our samples adapt in real-time to the incoming data. To our knowledge, we demonstrate the first generative adversarial network (GAN) based framework for posterior estimation over a continuum sampling rates of an inverse problem. We use this estimator to drive the sampling for accelerated MRI. Our numerical evidence demonstrates that the variance estimate strongly correlates with the expected mean squared error (MSE) improvement for different acceleration rates even with few posterior samples. Moreover, the resulting masks bring improvements to the state-of-the-art fixed and active mask designing approaches across MSE, posterior variance and structural similarity metric on real undersampled MRI scans." }, { "heading": "1 INTRODUCTION", "text": "Myriad of applications in control, data processing, and learning—from platform navigation to data mining and from channel estimation to compressive sensing (CS)—involve a linear projection of signals or data points into lower-dimensional space. In this dimensionality reduction, the measurement or the sensing matrix determines how much information we acquire per measurement, the ensuing computational ease of processing (since algorithms use the sensing matrix and its adjoint as subroutines), and the recovery guarantees.\nIn a resource constrained setting, these utilities create a Pareto trade-off wherein improving one worsens another. To impact all these fronts simultaneously, adaptive sensing (or sequential experimental design, active learning, etc.) aims to close the loop between the data acquisition and the inference, for instance, by exploiting information collected in past samples to adjust the future sampling process. While adaptive procedures promise great improvements over non-adaptive methods, they are too computationally demanding for real-time online response.\nIn the context of magnetic resonance imaging (MRI), the dimensionality reduction process (i.e., undersampling in the Fourier domain, often referred to as k-space) directly correlates with patient comfort, as it results in shorter scanning times. For the last decade, approaches motivated by compressed sensing have enabled successful reconstruction from highly accelerated (i.e., subsampled) data (see Lustig et al. (2007); Ravishankar & Bresler (2011b); Lingala & Jacob (2013); Otazo et al. (2015); Jin et al. (2016) and references therein). While compressed sensing prescribed fully random sampling (Candès et al., 2006; Donoho, 2006) for recovery, most CS-inspired approaches to MRI departed from this paradigm and relied on the heuristics of variable-density sampling (VDS) (Lustig et al., 2007).\nThere, the sampling pattern (or mask) is picked at random from a probability distribution that reasonably imitates the energy distribution in Fourier space, whereas fully random sampling of the Fourier space ignores this important structure in the signal, which leads to practically poor results (Lustig et al., 2008). VDS appears as a heuristic middle ground for a sampling pattern to incorporate the structure of energy distribution in Fourier space, while preserving the benefits of incoherent sampling. 
In VDS, the probability distribution considered has traditionally been parametric (Lustig et al., 2007; Chauffert et al., 2014; Boyer et al., 2016) or constructed from data (Knoll et al., 2011; Vellagoundar & Machireddy, 2015; Bahadir et al., 2019).\nThe CS-inspired methods shift the burden from acquisition to reconstruction, as most of these methods are iterative, preventing online reconstruction of accelerated data. This slow reconstruction rendered the problem of optimizing Fourier space sampling prohibitively expensive, so little work was devoted to general methods of designing sampling patterns for generic reconstruction methods (Gözcü et al., 2018).\nIn recent years, deep learning applied to MRI has enabled high quality reconstruction at unprecedented acceleration rates (Wang et al., 2016; Schlemper et al., 2017; Hammernik et al., 2018), as well as near-online reconstruction times. This also led to a renewed interest in the problem of optimizing the Fourier space sampling pattern (Bahadir et al., 2019; Weiss et al., 2019; Sherry et al., 2019; Jin et al., 2019; Zhang et al., 2019), although most of the research energy is still focused on developing more efficient reconstruction methods.\nIn its current stage, deep learning applied to MRI suffers from three main drawbacks: (i) as mentioned, most of the research energy has been devoted to more efficient reconstruction methods, despite recent results showing that the sampling masks used have a significant effect on the quality of reconstruction (Gözcü et al., 2018; Gözcü et al., 2019); (ii) assessing the reliability of the prediction of a reconstructed image is difficult for a clinician, due to the black-box nature of deep learning methods; and (iii) the commonly used metrics for assessing the quality of the reconstruction (e.g. MSE, PSNR, SSIM) do not align with what clinicians see as valuable (Cheng et al., 2019).\nA recent work by Adler & Öktem (2018) successfully demonstrated that a conditional Wasserstein GAN (cWGAN) (Arjovsky et al., 2017) can be used to learn the posterior distribution of images given undersampled measurements in a tractable fashion while only relying on samples from the joint distribution. This result provides a key opportunity to address all three drawbacks in the context of MRI by using the posterior variance of the reconstruction as an uncertainty estimate, which is a more natural criterion for image quality.\nTo this end, we show that a conditional WGAN can be trained on a continuum of inverse problems over various sampling rates, yielding an estimator of the posterior variance in the Fourier domain that can be used to drive the whole sampling process in a closed loop fashion. Despite the model being trained only for reconstruction and not for sampling, the resulting variance estimator can reliably be used to guide a closed loop sampling procedure, thus providing patient-adapted sampling masks.\nIn particular, we demonstrate that the generated masks that minimize the uncertainty estimates in an online fashion reach reconstruction MSE similar to that of the state-of-the-art fixed-mask approaches of Gözcü et al. (2018); Gözcü et al. (2019).
While these approaches explicitly focus on minimizing MSE, we show that CLUDAS naturally outperforms them in terms of key visual metrics such as SSIM (Wang et al., 2004) without being trained on these metrics.\nIn addition, we investigate the reliability of our estimator in a wide range of undersampling regimes and show that even when using only a few samples from the cWGAN posterior, the variance estimate is reliable enough to be used to drive the design of the mask. This makes it feasible to use CLUDAS in a closed loop adaptive setting, where our approach is competitive with an approach using an MSE-oracle on the testing set (which is not feasible on real problems without the ground truth available), and also beats strong open loop adaptive baselines while being easier to train and easier to apply." }, { "heading": "Contributions.", "text": "• We show how to train a cWGAN to generate posterior samples across a continuum of sampling rates; we solve in a Bayesian fashion not only a single inverse problem, but a continuum of inverse problems. • We demonstrate the first posterior-based mask design method for MRI. • We propose to use the variance of the posterior distribution of images given measurements as a quality metric for mask performance. • We show that despite the network being trained as a reconstruction method and not as a sampling method, our adaptive approach CLUDAS is competitive in both settings and a strong contender for being used in active settings, as it matches computationally expensive state-of-the-art approaches.\nImplications. We contend that our uncertainty driven sampling framework can extend to many similar problems where one wants to reduce acquisition times, such as atomic force microscopy (Abramovitch et al., 2007), transmission electron microscopy (Kovarik et al., 2016), or trajectory optimization for ultrasound acquisitions (Malinen et al., 2005). Our results open the door to leveraging Bayesian experimental design methods for designing adaptive sampling patterns in clinical settings and beyond.\nRelated works. We especially want to highlight the work of Zhang et al. (2019), which bears several similarities to our method. It is important to note that their approach is not generative, as they only have point estimates of the mean and some learned uncertainty metric. Moreover, they assume the reconstructed image to be normally distributed with a diagonal covariance, a practically unrealistic assumption, which is not required in Adler & Öktem (2018)." }, { "heading": "2 NOTATION AND PROBLEM SETTING", "text": "Notation. Throughout this paper, we will refer to vectors as boldface lowercase letters $\mathbf{e}, \mathbf{x}, \mathbf{y}$, and assume that they correspond to $N \times N$ images that are vectorized to a $p$-dimensional space. In particular, we will assume that these vectors belong to some appropriately defined spaces $\mathcal{X}, \mathcal{Y} \subseteq \mathbb{C}^p$. We use vectors in $\mathbb{C}^p$ as MR images are inherently complex. Boldface uppercase letters $\mathbf{X}, \mathbf{Y}$ will in general refer to random variables (formally, they are random vectors), with the exception of $F$ and $P_\omega$, which will respectively refer to the discrete Fourier transform operator and the sampling operator, defined below. We will use $\omega \in \mathcal{O} = \{0, 1\}^p$ to be a $p$-dimensional binary vector, and we will refer to it interchangeably as a sampling, subsampling or undersampling pattern/mask. Finally, we will refer to the distribution of a random variable $\mathbf{X}$ as $P_{\mathbf{X}}$, and extend the notation to conditioned random variables such as $P_{\mathbf{X}|\mathbf{y}} \equiv P_{\mathbf{X}|\mathbf{Y}=\mathbf{y}}$. Problem setting.
An inverse problem is the task of recovering the ground truth $\mathbf{x} \in \mathcal{X}$ from measurements $\mathbf{y} \in \mathcal{Y}$. In the case of MRI, we consider the following acquisition model\n$$\mathbf{y}_\omega = P_\omega F \mathbf{x} + \mathbf{e}, \qquad (1)$$\nwhere $\mathbf{y}_\omega \in \mathcal{Y}$ corresponds to the measurements obtained with a given sampling mask $\omega$, and where the sampling operator satisfies $(P_\omega)_{ii} = 1$ if $i \in \omega$ and $0$ otherwise. $\omega \subseteq [p] := \{1, \ldots, p\}$ is a set containing the sampled locations. Note that while the ground truth $\mathbf{x}$ is an image living in the image space $\mathcal{X}$, the measurements $\mathbf{y}_\omega$ live in the Fourier space. The Fourier space is often referred to as k-space in the MRI literature.\nIn this paper, we restrict ourselves to the setting where $\omega$ is composed of lines in the Fourier space, also known as Cartesian sampling in the MRI literature (this kind of structured acquisition originates from physical considerations and has the benefit of being easily implementable in practice), and is usually constrained by a maximal number $n$ of lines that can be acquired. $F$ denotes the Fourier transform, $\mathbf{x} \in \mathcal{X}$ is the ground truth image and $\mathbf{e} \in \mathcal{Y}$ is white additive noise. Without loss of generality, neglecting basic sampling effects such as magnetic field inhomogeneity and spin relaxation, we assume $\mathbf{e} = 0$ in the sequel.\nAs we will be working in a Bayesian framework, we define a random variable $\mathbf{X} \sim P_{\mathbf{X}}$ from which ground truth, complete measurements are generated, distributed according to the unknown prior $P_{\mathbf{X}}$. From this we also define the distribution $P_{\mathbf{Y}_\omega}$ as well as the joint distribution $P_{\mathbf{X},\mathbf{Y}_\omega}$ and the posterior $P_{\mathbf{X}|\mathbf{Y}_\omega}$, where $\mathbf{Y}_\omega = P_\omega F \mathbf{X}$. The posterior is especially of interest as, for a fixed $\mathbf{y}_\omega \in \mathcal{Y}$, $P_{\mathbf{X}|\mathbf{y}_\omega}$ (short for $P_{\mathbf{X}|\mathbf{Y}_\omega=\mathbf{y}_\omega}$) represents the probability distribution of ground truths that are likely to have generated the observed data $\mathbf{y}_\omega$ with a given mask $\omega$."
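To fix ideas, a minimal NumPy sketch of the noiseless acquisition model of Equation 1 with a Cartesian line mask; representing lines as k-space rows and the function name are illustrative assumptions.

import numpy as np

def acquire(x, lines):
    """Cartesian acquisition (Equation 1 with e = 0): keep only the k-space rows in `lines`."""
    k = np.fft.fft2(x)              # F x
    y = np.zeros_like(k)
    y[lines, :] = k[lines, :]       # P_omega: zero out unsampled rows
    return y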
Provided that m is large enough and that the training and testing data indeed originate from PX, statistical learning theory guarantees that a mask performing well on the training set will adequately generalize.

In the literature, two trends are noticeable. A first body of work focused on constructing a good distribution from which to sample (Ravishankar & Bresler, 2011a; Vellagoundar & Machireddy, 2015; Bahadir et al., 2019; Sherry et al., 2019). Other approaches tried to directly design a fixed sampling mask that performs well on training data (Seeger et al., 2010; Gözcü et al., 2018; Gözcü et al., 2019; Haldar & Kim, 2019). More recent approaches tried to jointly train the mask with a deep network (Weiss et al., 2019; Bahadir et al., 2019).

Adaptive masks. Until the rebirth of deep learning, most reconstruction methods relying on CS suffered from long reconstruction times due to their iterative nature. However, recent works leveraging the online reconstruction speed of deep learning achieved mask design in a closed loop fashion, i.e., developed algorithms that can be used in an online fashion to adapt to patients.

For a fixed, unknown image x, the adaptive approach aims at leveraging the information from the already measured frequencies to guide what should be acquired next. Instead of a fixed mask ω, we build up partial masks ωt as unions of individual Fourier space lines vt, with ω = ωT being the largest mask satisfying |ωT| ≤ n. The optimization now happens at runtime as a multistage problem which has to choose each new element vt such that

min_{vt | v0,...,vt−1} ℓ(x, y_{ωT} = P_{ωT}Fx), with ωT = ⋃_{i=1}^T vi, (3)

that is, we want to take at each time step t the sample that will allow us to reach the lowest final error, while being constrained by our previous acquisitions and by the fact that we cannot look into the future to know where we should sample. The problem requires developing an online sampling method that uses the partial information yωt at each t to decide on its next action.

Two approaches have been proposed for this problem in the literature. Jin et al. (2019) took a self-supervised learning approach, where a sampling network learns to imitate a Monte-Carlo tree search method and predicts vt|v0, . . . , vt−1 for all t. Zhang et al. (2019) leveraged adversarial training to jointly train a reconstruction algorithm and an evaluator that scores the quality of reconstructed lines in Fourier space. The sampling procedure then simply iteratively adds to the mask the lines with the lowest reconstruction score.

Other types of sampling. While our work here focuses on Cartesian sampling, which is by far the most widely used trajectory in MRI (Lustig et al., 2007), many other physically feasible trajectories have been investigated over the years for accelerated MRI. Radial trajectories have mainly been used in the context of dynamic MRI (Zhang et al., 2010a; Feng et al., 2014), and non-structured trajectories have also been explored and validated on real acquisitions (Lazarus et al., 2017). In particular, we note that some CS-based methods have shown online reconstruction times in the context of dynamic MRI (Zhang et al., 2010b) using radial trajectories, but such trajectories are rarely used in the context of static MRI." 
}, { "heading": "4 METHODOLOGY", "text": "" }, { "heading": "4.1 DEEP BAYESIAN INVERSION", "text": "In (Adler & Öktem, 2018), the authors propose a framework to estimate the posterior distribution PX|Y, i.e., the distribution of original images x that are likely to have generated the observed data y. They formulate the problem as finding a parametrized generator Gθ∗ : Y → PX that allows to minimize the Wasserstein distance with the unknown posterior PX|Y over all the observations, namely minimizing\nmin θ∈Θ\nEY∼PY [ W(PX|Y,Gθ(Y)) ] . (4)\nHere, PX is the space of probability measures on X . As this approach in not tractable in practice, they show that Equation 4 can equivalently be formulated as\nmin θ∈Θ { max φ∈Φ E(X,Y)∼PX,Y Z∼η [ Dφ(X,Y)−Dφ(Gθ(Z,Y),Y) ]} (5)\nAfter finding the optimal parameters (θ∗, φ∗), the conditional generator G(z,y) : X × Y → X approximates the posterior distribution PX|y and different values of z yield different samples from PX|y.\nNote that Adler & Öktem (2018) applies this method to reconstruction data from ultra-low dose 3D helical computed tomography (CT). This differs from MRI in several regards, but it is significant in our case that we can generate different instances of Equation 4 by selecting different ω in Equation 1. Minimizing Equation 4 without giving ω explicitely would amount to generating a posterior distribution from data y acquired when using any possible mask ω sampled from a random vector Ω with a possibly unknown distribution. This is why the approach of (Adler & Öktem, 2018) minimizes over the distance for observation in Equation 4 and mask designs approaches consider minimization over original data (cf., equation 2).\nUncertainty estimation. Once the generator has been trained, we can sample from Gθ∗(Z,yω) which approximates PX|yω . Let {x̂i = Gθ∗(zi,yω)} ns i=1 be samples of the posterior PX|yω , where zi are iid samples from Z and ns and the number of samples taken. Then, as in (Adler & Öktem, 2018), the ground truth image x can be estimated by the empirical point-wise mean of these samples, namely x̄ = 1ns ∑ns i=1 x̂i. The corresponding empirical point-wise variance can be defined σ̄\n2 = 1 ns−1 ∑ns i=1 |x̂i − x̄|2, where | · | denotes the modulus.\nEqually, one can estimate the ground truth Fourier spectrum Fx using the empirical average estimator Fx̄. The empirical point-wise variance in the Fourier space can be obtained as σ̄2F =\n1 ns−1 ∑ns i=1 |Fx̂i − Fx̄|2. This feature is specific to generative models, as getting samples from PX|yω allows to transform these to a different domain, enabling to have simultaneous variance estimates in both the image and Fourier spaces. This is not possible with methods that only provide point-wise estimates of the mean and the variance in image space, such as the one used by Zhang et al. (2019) or the direct estimation of Adler & Öktem (2018)." }, { "heading": "4.2 UNCERTAINTY DRIVEN SAMPLING", "text": "Due to the ability of constructing estimates of both the spatial and Fourier space pixel-wise variances of PX|yω , the approach of (Adler & Öktem, 2018) can be leveraged to produce both fixed and adaptive sampling patterns by acquiring the frequencies with the highest empirical pixel-wise variance in the Fourier domain. As we constrained ourselves to acquiring full lines in the Fourier domain, we will consider the total estimated variance in the Fourier space along the i-th line vi ∈ S and define\nu1Di (yω) = ∑ j∈vi σ̄2F,j(yω). 
Note that u^{1D}(yω) ∈ R^N contains the variances of the N possible lines in the Fourier domain. We will refer to u^{1D}(yω) as the aggregated variance (along the x-dimension in Figure TODO).

Fixed sampling. Using the aggregated variance as a loss function, we can reformulate Equation 2 as

min_{ω∈A} E_{X∼PX} [∑_i u^{1D}_i(Yω = PωFX)], (7)

where samples {x_s}_{s=1}^m are obtained from the unknown prior distribution PX, and Yω contains partial information on X through the model of Equation 1.

In practice, as PX is not available, one seeks to solve the empirical risk minimization (ERM) problem min_{ω∈A} (1/m) ∑_{s=1}^m [∑_i u^{1D}_i(y_{ω,s} = PωFx_s)]. The aggregated point-wise variance of the posterior can be seen as a cost that one seeks to minimize on a training set, and consequently it can replace traditional cost functions such as the ℓ2-norm in most fixed sampling optimization methods.

Adaptive sampling. Ideally, we aim at making a series of sampling decisions v1, . . . , vT that minimize the total final uncertainty for a given ground truth image x once our sampling budget of n lines is spent, i.e., for each t,

min_{vt | v0,...,vt−1} ∑_{i=1}^N u^{1D}_i(y_{ωT}), (8)

where ωT = ⋃_{t=1}^T vt. Due to causality, we do not have access to this final posterior, or even to the posterior of the mask which will result from choosing the next innovation vt. We can only make use of the partial observations y_{ωt} corresponding to the partial masks ωt = ⋃_{i=1}^{t−1} vi up to the time step t. We choose to adopt a greedy approach and approximately solve

min_{vt∈𝒮} ∑_i u^{1D}_i(y_{ωt∪vt}) at each time step t (9)

by simply choosing as vt the line with the largest aggregated variance u^{1D}_i. The overall flow is then: at each time t, (i) observe y_{ωt}; (ii) select the line vt = v_{i∗}, where i∗ = argmax_i u^{1D}_i(y_{ωt}); (iii) update ωt+1 = ωt ∪ vt; and (iv) iterate until the cardinality constraint is met. As no assumptions are made on the underlying distribution of the posterior, this application is only made possible by leveraging a generative framework that can estimate the posterior at all sampling rates considered and for widely different mask designs. It is rendered tractable by the fact that even two samples from the posterior allow us to construct an empirical variance estimate that can efficiently drive the sampling procedure." }, { "heading": "5 IMPLEMENTATION", "text": "Training data. The data set used in the first three experiments (subsections) below consists of a proprietary dataset of 2D T1-weighted brain scans. In our experiments, we use 100 slices of size 256×256 from five such subjects, 20 per subject. Three subjects (60 slices) were used for training the network, two subjects (40 slices) for testing. The data were then massively augmented with both rigid transformations and elastic deformations to counter overfitting, as our dataset is very small, following the recommendations of (Ronneberger et al., 2015; Schlemper et al., 2018). Exact details on the dataset and augmentation methods used can be found in Appendix A.1.

Architecture. For posterior sampling, we used the same discriminator architecture as described in (Adler & Öktem, 2018). For the conditional generator, we used the cascading network of (Schlemper et al., 2018), where the data-consistency layer enforced perfect consistency. Perfect data consistency means that at the end of each block, one replaces the reconstructed value with the corresponding measured value in the Fourier space. This ensures that the reconstruction is consistent with the observations where measurements were acquired. 
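To make this perfect data-consistency step concrete, a minimal PyTorch sketch (our own illustration of the idea, not the authors' implementation; we assume complex-valued tensors and a boolean sampling mask):

```python
import torch

def data_consistency(x_rec, y_omega, mask):
    """Replace reconstructed k-space values by the measured ones wherever sampled.

    x_rec:   complex image estimate produced by a CNN block, shape (N, N)
    y_omega: measured k-space (zero outside the mask), shape (N, N)
    mask:    boolean sampling mask, True on acquired locations
    """
    k_rec = torch.fft.fft2(x_rec)
    k_dc = torch.where(mask, y_omega, k_rec)  # perfect consistency on sampled entries
    return torch.fft.ifft2(k_dc)
```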
We used 3 CNN blocks, where each block contained 5 convolutional layers followed by ReLU.

As our data are complex, we split the real and imaginary parts into two channels and add two channels of Gaussian white noise to the conditional generator.

Training. We use the same loss as in (Adler & Öktem, 2018). The loss is reproduced in Appendix A.4 for completeness. We use Adam (Kingma & Ba, 2014) with β1 = 0.5, β2 = 0.9, and learning rate 2 · 10⁻⁴ as in Adler & Öktem (2018), although, for simplicity, we do not use noisy linear cosine decay. The model is trained for 6 · 10⁵ backpropagations, a number chosen ad hoc to account for the fact that a combinatorial number of masks is observed for each image (our reference point Adler & Öktem (2018) uses 5 · 10⁴ for a larger dataset). Every 5 iterations, the generator was trained once and the discriminator was trained four times. In order to allow calculating the posterior throughout the sampling process, we generate observations of subsampled images at various rates by randomly generating horizontal Cartesian masks for sampling rates ∈ [0.025, 0.5], as described in detail in Appendix A.2.

Metrics. We will use mean squared error (MSE) and structural similarity (SSIM) (Wang et al., 2004), as well as the posterior variance, for comparisons. MSE and SSIM are computed between the reconstructed image and the corresponding original, ground truth image. The posterior variance is estimated through a pixel-wise empirical variance estimate, and is averaged over the whole image to produce a single scalar. This metric does not require a reference." }, { "heading": "6 EXPERIMENTS", "text": "Throughout our experiments, we use the empirical mean obtained from two posterior samples, as well as the corresponding empirical standard deviation. We show exhaustively in Appendix B that while using 10 samples from the posterior improves the quality of reconstruction, it is sufficient to use the variance estimate from 2 posterior samples in the CLUDAS method." }, { "heading": "6.1 CONSISTENCY OF THE UNCERTAINTY ESTIMATE", "text": "As can be seen in Figure 1, the average MSE correlates well with the average posterior variance, both in image space and in Fourier space, which suggests that the posterior variance could serve as an approximate MSE oracle. While training was only performed in the image domain, the generated samples show consistency both in Fourier and in image space, showing that the uncertainty-based approach does provide meaningful information on the error in the reconstruction. The consistency in Fourier space is crucial for the sampling procedure, as our sampling method leverages the variance estimates in Fourier space, while the consistency in image space gives valuable information for interpreting the reconstructed data, which is important for clinicians." }, { "heading": "6.2 RECONSTRUCTION QUALITY", "text": "In order to assess the reconstruction obtained by the adaptive masks, we define a closed loop oracle MSE driven adaptive sampling method (CLOMDAS), which leverages MSE instead of uncertainty at inference time. While CLOMDAS is not feasible in practice, it remains an interesting baseline showing how the mask design could be improved by having access to the actual MSE at testing time.

Figure 2 shows how the CLUDAS method compares against the CLOMDAS method on a sample from the testing set. CLUDAS is competitive with CLOMDAS at every sampling rate considered, even without having access to any oracle information. 
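For reference, the closed-loop CLUDAS procedure of Section 4.2 can be sketched as follows, reusing the helpers from the earlier sketches (again our own illustration; the authors' actual implementation may differ):

```python
import numpy as np

def cludas(generator, k_space_full, n_lines, n_samples=2):
    """Greedy closed-loop mask design driven by the aggregated posterior variance."""
    N = k_space_full.shape[0]
    mask = np.zeros((N, N), dtype=bool)
    for _ in range(n_lines):
        y_omega = k_space_full * mask                 # (i) observe current measurements
        _, _, var_f = posterior_stats(generator, y_omega, n_samples)
        u = aggregated_variance(var_f)                # u^1D over candidate lines
        u[mask[:, 0]] = -np.inf                       # never re-acquire a sampled line
        i_star = int(np.argmax(u))                    # (ii) pick the most uncertain line
        mask[i_star, :] = True                        # (iii) update the partial mask
    return mask                                       # (iv) stop once the budget is met
```

In a real closed-loop acquisition, `y_omega` would of course be updated by actually measuring the new line on the scanner; masking a fully sampled k-space, as above, is only possible in simulation.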
This behaviour is consistently observed on the whole testing set, as shown in Table 1 below." }, { "heading": "6.3 COMPARISON WITH OTHER METHODS", "text": "We compare our method to the following:

• Learning based compressed sensing (LBC) (Gözcü et al., 2018; Sanchez et al., 2019): This method incrementally builds up ω by computing an expected improvement of a loss ℓ at each step. This expected improvement is simply obtained by searching which line will add the largest improvement at the next step, and once it has been found, it is permanently added to the mask. The algorithm then proceeds until the cardinality constraint |ω| = n is met. When trained with MSE, we will refer to the method as LBC-M, and when trained to minimize variance, we will refer to it as LBC-V.

• Vellagoundar & Machireddy (2015): This method proposed the simple heuristic approach of (i) constructing a PDF from training data and (ii) sampling at random from the obtained PDF. Our implementation used the spectrum of the whole averaged training set for the PDF.

We were not able to compare our method to the closed loop method of Zhang et al. (2019), since their code was not publicly available at the time of writing.

When comparing the reconstruction performance of the different mask design methods on the modified generator of Adler and Öktem, we observe that the heuristic baseline of Vellagoundar & Machireddy (2015) performs significantly worse at any sampling rate and for any metric. This is not surprising, as this method simply samples at random from a constructed PDF. Comparing the variations of the LBC methods, we notice that increasing the number of averaged samples during the training phase of the mask translates into a uniform improvement of the performance at any sampling rate. This is more exhaustively discussed in Appendix B. Focusing on the LBC methods trained with 10 posterior sample averaging, we see that the LBC-M method outperforms the LBC-V method. It is worth noting that LBC-M uses the full ground truth to build its mask, while the LBC-V method only leverages the variance estimation in the Fourier domain. This again highlights the reliability of the uncertainty estimate as a mask design technique. This conclusion is also supported by the CLUDAS approach remaining competitive with the CLOMDAS one, which is infeasible in practice due to requiring oracle MSE calls at test time. Note also that our method does not incur the heavy computational burden of generating the sampling mask ahead of time, and can immediately be used on-the-fly after training.

The CLUDAS method is the most effective at reducing posterior variance, even if LBC-M(10) and CLOMDAS(2) are close runners-up. More surprisingly, our CLUDAS method is found to yield the best SSIM performance, a metric designed to match the human perception of quality better than MSE." }, { "heading": "7 DISCUSSION AND FUTURE WORKS", "text": "Posterior distribution for a continuum of sampling rates. Successfully modelling the continuum of sampling rates stems from the fact that these inverse problems depend on each other in a highly structured and regular fashion. This enabled us to demonstrate for the first time that a principled Bayesian approach to closed loop mask optimization with rigorous variance estimates is feasible.

Generically trained generative reconstruction method. 
The current generative model was trained in a generic fashion and not specifically to optimize the quality of masks designed through it. The ability to design masks stems purely from training it as a rigorous Bayesian model of the continuum of inverse problems. This allows the posterior to be conditioned on incrementally collected information in a closed loop fashion. However, our method could easily be incorporated into a reinforcement learning-based framework aimed at jointly training reconstruction and sampling, such as Jin et al. (2019). This would give the best of both worlds, providing principled uncertainty estimates to the RL sampler, moving beyond greedy sampling, and possibly speeding up the training of the reconstruction method by focusing on regions with less reliable variance estimates instead of using masks sampled from distributions as in this work.

Limitations of the posterior estimation. We leveraged the approach of Adler & Öktem (2018), which is the first of its kind to allow constructing a posterior estimator from samples of the joint distribution. While it works well empirically, the authors did not provide any analysis or guarantees on how well the generator captures the tails of the posterior distribution. Our observations suggest that unusual images, i.e. those far away from the mean of the learned distribution, might not be accurately captured, i.e. the estimated variance is lower than expected. This could be due to the limited training data available, but might also be an artifact of the cWGAN approach, which tends to struggle with capturing weaker modes of the distribution. Specifically, the problematic examples might be an indication that while the loss shown in Eq. (11) avoids mode collapse, there might still be some "mode deflation" leading the network to underestimate the diversity of the data distribution.

Adaptive vs fixed sampling. Adaptive and fixed sampling methods both have advantages and limitations from a practical perspective. The main advantages of a fixed mask approach lie in the ease of deployment of the obtained mask, as it simply needs to be programmed into a scanner. We also have a simple generalization bound for the obtained mask, relying on a simple application of Hoeffding's inequality. In contrast, adaptive methods are currently difficult to deploy on scanners, as this would require hardware capable of running a neural network guiding the sampling procedure. They are also harder to train and currently lack rigorous reliability guarantees. However, if successfully trained and deployed, they avoid rigid assumptions about the problem and are able to incorporate partial data into the acquisition process, which in turn leads to improved performance. In this work, we found that using the adaptive method also increased robustness to the noise in the quality criterion used to drive the mask design.

Reliability guarantees beyond the variance estimate presented in this work (i.e. quantifying the uncertainty of the uncertainty estimate) are an important future direction of research.

Extensions to the current model. We showed that it is possible to use the uncertainty estimate to design fixed masks as in LBC-V, for settings where we are only interested in using the posterior variance as a natural criterion for reconstruction quality, e.g. masks for MRI systems where incorporating a neural network at scanning time is not feasible. 
In this setting, there is low-hanging fruit for improving the method by making use of the available ground truth information, i.e. jointly using the posterior variance with other metrics which require a ground truth." }, { "heading": "A.1 TRAINING DATA", "text": "The data set used in all experiments consists of 2D T1-weighted brain scans of seven healthy subjects, which were scanned with a FLASH pulse sequence and a 12-channel receive-only head coil. In our experiments, we use 20 slices of size 256×256 from five such subjects, for a total of 100 slices. We use three subjects (60 slices) for training and two subjects (40 slices) for testing. Data from individual coils were processed via a complex linear combination, where coil sensitivities were estimated from an 8×8 central calibration region of Fourier space (Bydder et al., 2002). The acquisition used a field of view (FOV) of 220×220 mm² and a resolution of 0.9×0.7 mm². The slice thickness was 4.0 mm. The imaging protocol comprised a flip angle of 70°, a TR/TE of 250.0/2.46 ms, and a scan time of 2 minutes and 10 seconds. Following the recommendations of (Ronneberger et al., 2015; Schlemper et al., 2018), we then massively augmented the dataset to counter overfitting.

We apply both rigid transformations and elastic deformations. Specifically, at training time, each image was dynamically augmented with a randomly applied translation of ±6 pixels along the x- and y-axes, rotations in [0, 2π), and reflection along the x-axis with 50% probability. We also apply elastic deformations using the implementation of Simard et al. (2003) with α ∈ [0, 40] and σ ∈ [5, 8]." }, { "heading": "A.2 GENERATING OBSERVATIONS AT VARIOUS RATES OF SUBSAMPLING", "text": "We also dynamically generate horizontal Cartesian masks for sampling rates ∈ [0.025, 0.5] by randomly selecting lines following suitably deformed Gaussians (see Fig. 4).

This generation of training masks is biased towards the lower end of the frequency spectrum, and also does not consider extreme acceleration rates beyond 0.025. Ideally, one would use fully random subsampling across the whole range of subsampling rates, ensuring equally reliable variance estimates. In practice, extreme acceleration rates are unlikely to be used, and most of the information is found in the lower frequencies, meaning these would almost surely be selected first. The present method represents a reasonable tradeoff between generalization across frequencies and training time." }, { "heading": "A.3 ARCHITECTURE", "text": "For posterior sampling, we used the same discriminator architecture as described in (Adler & Öktem, 2018). For the conditional generator, we used the cascading network of (Schlemper et al., 2018), where the data-consistency layer enforced perfect consistency. We used 3 CNN blocks, where each block contained 5 convolutional layers followed by ReLU.

As our data are complex, we split the real and imaginary parts into two channels and add two channels of Gaussian white noise to the conditional generator. This also means that the discriminator of (Adler & Öktem, 2018) needs to be adapted to 6 input channels instead of 3." }, { "heading": "A.4 LOSS FUNCTION", "text": "The conditional WGAN must have its discriminator and generator trained alternately, and Adler and Öktem proposed a novel discriminator loss that empirically avoids mode collapse. This discriminator takes three inputs instead of the two in Equation 5, and is the one that we use in practice in the experiments. 
The general GAN loss reads

L_W(θ, φ) = E_{(X,Y)∼PX,Y, Z1,Z2∼η} [ Dφ(Gθ(Z1, Y), Gθ(Z2, Y), Y) − (1/2) ( Dφ(Gθ(Z1, Y), X, Y) + Dφ(X, Gθ(Z2, Y), Y) ) ]. (10)

Then, for a fixed θ, the discriminator loss can be formulated as

L_D(φ) = L_W(θ, φ) + 10 L_grad(θ, φ) + 10⁻³ L_drift(θ, φ) + 10⁻⁴ ‖φ‖², (11)

where L_grad is the gradient penalty term for the 1-Lipschitz constraint introduced in Gulrajani et al. (2017). The drift loss is used to stabilize training, preventing the discriminator from being shifted to large values, as its performance is invariant to constant shifts. If the discriminator has a large constant offset, the overall loss (11) will not be influenced by that constant, so the drift loss penalizes the expected squared norm of the discriminator.

For the generator, given a fixed φ, the loss is defined as

L_G(θ) = −E_{(X,Y)∼PX,Y, Z1,Z2∼η} [ Dφ(Gθ(Z1, Y), Gθ(Z2, Y), Y) ] + 10⁻⁴ ‖θ‖². (12)

We refer the reader to Appendix D.3 of (Adler & Öktem, 2018) for the full discussion of the loss function." }, { "heading": "B GREEDY VS. ADAPTIVE", "text": "As can be seen in Figure 7, in the fixed mask (greedy) setting we require ten samples from the posterior before the uncertainty estimation yields competitive masks, while in the adaptive setting two samples suffice. The reason for this can be understood by considering the decision process and information flow of each algorithm, visualized in Fig. 5. The adaptive algorithm uses only a single uncertainty estimate to make a decision, as it aggregates the estimate u^{1D}_i for each candidate line in the Fourier space from a single pointwise uncertainty estimate. Due to the online nature of the sampling, if this estimate overshoots or undershoots, the algorithm will receive immediate feedback and can thus revert to the mean, making it robust against a noisy estimator. In contrast to this, the greedy algorithm a) requires a separate uncertainty estimate for each candidate (since we want to look into the future and choose the candidate line which yields the greatest improvement) and b) is run on the training set only and then fixed. This means there is a good chance for a noisy estimator to spike in the uncertainty estimate and give the illusion of large improvements in uncertainty during the precomputation of the mask, with no way to compensate for mistakes at test time. This means we need a much more reliable estimator, increasing the number of posterior samples required.

Figure 5: CLUDAS acquisition decision (closed loop) vs. LBC sampling decision (open loop)." }, { "heading": "C ADDITIONAL RESULTS", "text": "" }, { "heading": "C.1 EFFECT OF INCREASING THE NUMBER OF SAMPLES IN THE AVERAGE", "text": "Comparing adaptive baselines with 10 averaged samples in Figure 6, we see that the posterior variance magnitude is not affected by the increase in the number of samples, while the MSE is simply shifted almost uniformly for each data point. While the larger number of samples might affect how the variance is distributed on the image, it does not affect the variance obtained from the sampling procedure too much.

Tables 2, 3 and 4 show the second experiment reproduced with averages computed from 10 posterior samples. The results are consistent with those of Section 6.3." }, { "heading": "C.2 SUPPLEMENTARY RESULTS", "text": "Figure 7 shows the performance across all sampling rates of the methods considered in the main paper." } ]
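As a supplement to Appendix A.4, a minimal PyTorch sketch of the loss terms in Eqs. (10)–(12) (our own hedged illustration, not the authors' released code; `D` takes three inputs as described above, the gradient-penalty value is assumed to be computed elsewhere, and the ‖φ‖², ‖θ‖² terms are left to the optimizer's weight decay):

```python
import torch

def wasserstein_term(D, G, x, y, z1, z2):
    """Core expectation of Eq. (10), estimated on one batch."""
    fake1, fake2 = G(z1, y), G(z2, y)
    return (D(fake1, fake2, y)
            - 0.5 * (D(fake1, x, y) + D(x, fake2, y))).mean()

def discriminator_loss(D, G, x, y, z1, z2, grad_penalty):
    """Eq. (11): Wasserstein term plus gradient penalty and drift regularizer."""
    l_w = wasserstein_term(D, G, x, y, z1, z2)
    drift = (D(x, G(z2, y), y) ** 2).mean()   # keeps critic outputs near zero (assumed form)
    return l_w + 10.0 * grad_penalty + 1e-3 * drift

def generator_loss(D, G, y, z1, z2):
    """Eq. (12): the generator maximizes the critic score of its sample pairs."""
    fake1, fake2 = G(z1, y), G(z2, y)
    return -D(fake1, fake2, y).mean()
```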
2019
null
SP:0c4124acde5770c92f5afb3a0f12d2f70eead48d
[ "This paper takes the reference-game setup of Lazaridou et al. (2018), as a means of enabling emergent communication, and adds an auxiliary task to demonstrate that this helps with language emergence. The auxiliary task is to enable the speaker to predict the hidden state of the listener, after the message has been received. This is (not unreasonably) likened to providing the speaker with some empathy, in that it enables the speaker to try and predict what the effect of the message will be on the listener.", "This paper aims to take insight from human language acquisition and the importance of empathic connection to learn better models for emergent language. The authors propose an approach to introduce the notion of empathy to multi-agent deep RL by extending existing approaches on referential games with an auxiliary task for the speaker to predict the listener’s empathy/mind. Experiments show that this gives some improvement with faster convergence." ]
The emergence of language in multi-agent settings is a promising research direction to ground natural language in simulated agents. If an AI were able to understand the meaning of language through using it, it could also transfer it flexibly to other situations. That is seen as an important step towards achieving general AI. The scope of emergent communication is so far, however, still limited. It is necessary to enhance the learning possibilities for skills associated with communication to increase the complexity that can emerge. We take an example from human language acquisition and the importance of the empathic connection in this process. We propose an approach to introduce the notion of empathy to multi-agent deep reinforcement learning. We extend existing approaches on referential games with an auxiliary task for the speaker to predict the listener's mind change, improving the learning time. Our experiments show the high potential of this architectural element by doubling the learning speed of the test setup.
[]
[ { "authors": [ "Kai Arulkumaran", "Marc Peter Deisenroth", "Miles Brundage", "Anil Anthony Bharath" ], "title": "Deep reinforcement learning: A brief survey", "venue": "IEEE Signal Processing Magazine,", "year": 2017 }, { "authors": [ "Abhishek Das", "Satwik Kottur", "José MF Moura", "Stefan Lee", "Dhruv Batra" ], "title": "Learning cooperative visual dialog agents with deep reinforcement learning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Katrina Evtimova", "Andrew Drozdov", "Douwe Kiela", "Kyunghyun Cho" ], "title": "Emergent language in a multi-modal, multi-step referential game", "venue": "arXiv preprint arXiv:1705.10369,", "year": 2017 }, { "authors": [ "Jakob Foerster", "Ioannis Alexandros Assael", "Nando de Freitas", "Shimon Whiteson" ], "title": "Learning to communicate with deep multi-agent reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Serhii Havrylov", "Ivan Titov" ], "title": "Emergence of language with multi-agent games: Learning to communicate with sequences of symbols", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Karl Moritz Hermann", "Felix Hill", "Simon Green", "Fumin Wang", "Ryan Faulkner", "Hubert Soyer", "David Szepesvari", "Wojciech Marian Czarnecki", "Max Jaderberg", "Denis Teplyashin" ], "title": "Grounded language learning in a simulated 3d world", "venue": "arXiv preprint arXiv:1706.06551,", "year": 2017 }, { "authors": [ "Pablo Hernandez-Leal", "Bilal Kartal", "Matthew E Taylor" ], "title": "Is multiagent deep reinforcement learning the answer or the question? a brief survey", "venue": "arXiv preprint arXiv:1810.05587,", "year": 2018 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Jakob Hohwy" ], "title": "The predictive mind", "venue": null, "year": 2013 }, { "authors": [ "Emilio Jorge", "Mikael Kågebäck", "Fredrik D Johansson", "Emil Gustavsson" ], "title": "Learning to play guess who? and inventing a grounded language as a consequence", "venue": "arXiv preprint arXiv:1611.03218,", "year": 2016 }, { "authors": [ "Tatsuya Kasai", "Hiroshi Tenmoto", "Akimoto Kamiya" ], "title": "Learning of communication codes in multiagent reinforcement learning problem", "venue": "In Soft Computing in Industrial Applications,", "year": 2008 }, { "authors": [ "Tatsuya Kasai", "Hayato Kobayashi", "Ayumi Shinohara" ], "title": "The size of message set needed for the optimal communication policy", "venue": "In Proceedings of the 7th european workshop on multi-agent systems (EUMAS", "year": 2009 }, { "authors": [ "Eugene Kharitonov", "Rahma Chaabouni", "Diane Bouchacourt", "Marco Baroni" ], "title": "EGG: a toolkit for research on Emergence of lanGuage in Games. 
arXiv preprint arXiv:1907.00852", "venue": null, "year": 1907 }, { "authors": [ "Angeliki Lazaridou", "Alexander Peysakhovich", "Marco Baroni" ], "title": "Multi-agent cooperation and the emergence of (natural) language", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Angeliki Lazaridou", "Karl Moritz Hermann", "Karl Tuyls", "Stephen Clark" ], "title": "Emergence of linguistic communication from referential games with symbolic and pixel input", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "David Lewis" ], "title": "Convention: A philosophical study", "venue": null, "year": 2008 }, { "authors": [ "Bruce MacLennan" ], "title": "Synthetic ethology: An approach to the study of communication", "venue": "University of Tennessee, Computer Science Department,", "year": 1990 }, { "authors": [ "Kozue Noro", "Hiroshi Tenmoto", "Akimoto Kamiya" ], "title": "Signal learning with messages by reinforcement learning in multi-agent pursuit problem", "venue": "Procedia Computer Science,", "year": 2014 }, { "authors": [ "Junhyuk Oh", "Xiaoxiao Guo", "Honglak Lee", "Richard L Lewis", "Satinder Singh" ], "title": "Action-conditional video prediction using deep networks in atari games", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "David Premack", "Guy Woodruff" ], "title": "Does the chimpanzee have a theory of mind", "venue": "Behavioral and brain sciences,", "year": 1978 }, { "authors": [ "Phil Robbins" ], "title": "The effect of parasitism on the evolution of a communication protocol an artificial life simulation. In Proceedings of the third international conference on Simulation of adaptive behavior: from animals to animats 3: from animals to animats", "venue": null, "year": 1994 }, { "authors": [ "Sara M Schaafsma", "Donald W Pfaff", "Robert P Spunt", "Ralph Adolphs" ], "title": "Deconstructing and reconstructing theory of mind", "venue": "Trends in cognitive sciences,", "year": 2015 }, { "authors": [ "Carina Silberer", "Vittorio Ferrari", "Mirella Lapata" ], "title": "Models of semantic representation with visual attributes", "venue": "In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2013 }, { "authors": [ "Sainbayar Sukhbaatar", "Rob Fergus" ], "title": "Learning multiagent communication with backpropagation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Introduction to reinforcement learning, volume 2", "venue": "MIT press Cambridge,", "year": 1998 }, { "authors": [ "Richard S Sutton", "David A McAllester", "Satinder P Singh", "Yishay Mansour" ], "title": "Policy gradient methods for reinforcement learning with function approximation", "venue": "In Advances in neural information processing systems,", "year": 2000 }, { "authors": [ "Kyle Wagner", "James A Reggia", "Juan Uriagereka", "Gerald S Wilkinson" ], "title": "Progress in the simulation of emergent communication and language", "venue": "Adaptive Behavior,", "year": 2003 }, { "authors": [ "G.M. Werner", "M.G. Dyer" ], "title": "Evolution of communication in artificial systems", "venue": "Artificial Life II,", "year": 1991 }, { "authors": [ "Gregory M. Werner", "Michael G. 
Dyer" ], "title": "Evolution of herding behavior in artificial animals", "venue": "In From Animals to Animats 2: Proceedings of the Second International Conference on Simulation of Adaptive Behavior,", "year": 1993 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Ludwig Wittgenstein. Philosophical investigations." ], "title": "Trans", "venue": "GEM Anscombe. Oxford: Blackwell, 1958.", "year": 1953 } ]
[ { "heading": "1 INTRODUCTION", "text": "Natural language is not as rule-based as researchers in supervised language learning would prefer. There are limitless context-dependent notions to it, and flexible language use is considered as a necessary aspect of general AI. Originally, natural language emerged through a necessity to achieve successful coordination. Hence, a general AI would need to understand the functional aspects of language and learn communication through interaction (Wittgenstein, 1958; Wagner et al., 2003). These considerations led to the research field of emergent communication and the attempt to ground natural language through reinforcement learning.\nDeep reinforcement learning has achieved some impressive results over the last years (Arulkumaran et al., 2017). One of its principal aspects is the ability to extract features from high dimensional input data without manual preprocessing. This capability is especially useful if the necessary representation is unknown to the designer.\nClassical deep reinforcement learning approaches rely on a large number of training examples, mainly because the sparse reward hardly provides enough feedback to shape the deep layers. These deep layers are responsible for the embedding of input data into a meaningful representation. Therefore, it takes many training steps before a useful representation emerges; if it converges at all. According to the theory of the predictive mind (Hohwy, 2013), the human brain generates richer feedback through learning several unsupervised prediction tasks while training on the main task. The purpose of these predictions is to produce more and more expressive models and representations of the world.\nOh et al. (2015) achieved a far more expressive representation of their visual inputs by learning an auxiliary prediction task. The sole purpose of the auxiliary net is to predict the change in the visual input given the last movement action. Training this net does not directly affect the original task, but it refines the visual representation to reflect the concepts of a 3D world. Hermann et al. (2017) used predictive tasks to ground natural language, but only focused on better understanding an existent language. We transfer the auxiliary prediction to the task of active communication. This goes along with the theory of mind (Premack & Woodruff, 1978; Schaafsma et al., 2015) stating that an essential part of intelligence in interaction emerges through predicting the mental state of the interaction partner.\nWe let the speaker train an auxiliary net that tries to predict how the speaker’s utterance will change the listener’s hidden state. That resembles humans empathetic way of understanding what a message will do to the listener. We assume this leads to a more communication effective representation of the sensory input; in other words, the input encoding becomes more communicatable. The effect is visible in the essential acceleration of learning successes in developing a shared language.\nOur main contribution is an elegant extension to multi-agent deep reinforcement learning (MADRL) algorithms aiming to emerge a communication. It resembles an empathic connection between speaker and listener, which leads to faster convergence to a shared language. We doubled the learning speed of a MADRL algorithm playing a referential game by introducing this auxiliary prediction task to the speaking agent. We attribute the improvement to the richer gradients in the lower layers of the neural network to embed the input." 
}, { "heading": "2 BACKGROUND", "text": "Reinforcement Learning (RL) An agent in a reinforcement learning setting can fully or partially observe its current state s ∈ S and is able to choose an action a ∈ A through a policy π(s) = a. The chosen action will lead to receiving a reward R. The agent’s goal in its environment is to maximize the expected reward (Sutton et al., 1998).\nJ(θ) = E[R(s, a)] (1)\nRL with neural networks (NN) Using neural networks as a policy representation for reinforcement learning has the benefit of being able to represent any policy function and the downside of needing a huge number of data samples to learn. In our case, the policy outputs a direct probability for taking each action. Such policies can be updated by using Policy Gradient methods (Sutton et al., 2000). The policy parameters θ, in this case, the parameters of the neural net, are updated according to their effect on the objective J with a learning rate β:\n∆θ ≈ β δJ(θ) δθ\n(2)\nUsing the REINFORCE algorithm (Williams, 1992) the effect on the objective can be estimated as the following:\n∇θJ(θ) = ∇θ log π(a|s)R(s, a) (3)\nLong Short-Term Memory Network (LSTM) Recurrent neural networks (RNN) can accumulate input in an internal representation over time as well as produce a consistent output over several time steps from it. LSTMs are RNNS that are specifically created to remember information over an extended period of steps (Hochreiter & Schmidhuber, 1997).\nAuxiliary tasks in RL Auxiliary unsupervised tasks were introduced into RL with NNs by Oh et al. (2015). They proposed an architecture that predicts the next visual input, given the internal representation of the last visual inputs and the last taken action. The unsupervised task of correctly predicting the next visual input leads to better performance on the main task, which was playing an atari game. They assume that the auxiliary task enforces a more expressive internal representation of the visual input, which then aids the main task. Hermann et al. (2017) transferred this auxiliary task to natural language acquisition by predicting the next word spoken by an instructor." }, { "heading": "3 RELATED WORK", "text": "MacLennan (1990) started the field of learning communication in artificial agents with the aim to research the mechanisms by which language and communication emerge. Werner & Dyer (1991) contributed by using classical genetic scenarios, where ”male” agents had to find ”female” agents based on signals they emitted. They extended their setting in 1993 to include predator and prey agents and showed that known prey strategies as herding emerge if the agents have the possibility to communicate(Werner & Dyer, 1993). Robbins (1994) achieved the emergence of more robust signals by introducing lying agents (parasites) in this setting. The successive advances in the field of learning communication can be assigned to the progress of learning algorithms for neural networks for a big part. Kasai et al. (2008) started using Q-learning\non the pursuit problem, including learned communication, but also follow up work only reached simple information sharing about the prey position (Kasai et al., 2009; Noro et al., 2014).\nThe field was then alleviated to a new level of complexity by Sukhbaatar et al. (2016) and Foerster et al. (2016). They transferred the new progress in deep learning to multi-agent coordination to emerge even more complex communication patterns. 
In previous work, we have shown that these algorithms can be further improved even to solve tasks that lie outside the communication range [blind].\n\nThough many multi-agent learning setups rely on communication for success, only some focus on the emerging protocols and their properties (Hernandez-Leal et al., 2018). Of those focusing on the emergence of communication or language, a significant number of publications used referential games (Lewis, 2008) as a testbed (Jorge et al., 2016; Lazaridou et al., 2017; Havrylov & Titov, 2017; Das et al., 2017; Evtimova et al., 2017; Lazaridou et al., 2018).\n\nEspecially interesting in that context is the work of Lazaridou et al. (2017), as they could vividly show that this approach to language emergence can lead to a flexible language use which could be understood by humans even when applied to objects not yet known to the algorithm." }, { "heading": "4 CONTRIBUTIONS", "text": "We introduce the idea of auxiliary tasks into the field of language emergence. The speaking agent is equipped with an auxiliary single-layer perceptron to predict the hidden state of the listener agent after it has fully encoded the message. The input for this prediction is the hidden state of the speaker right before it starts forming the message. The aim is to achieve a strong correspondence between the hidden states of both agents, signifying that the speaker can communicate its meaning well. We state that in this application, the auxiliary prediction resembles empathy in humans, as the speaker tries to predict how its utterance will affect the listener's mindset.\n\nThe prediction task is unsupervised and can be trained on the same samples and at the same time as the main task. Training the main RL task automatically generates the samples for the unsupervised task. The gradients can be backpropagated into the encoding layers of the speaker, where they are added to the gradients of the RL task and optimized together. With our approach, we further enhance the possibilities in language emergence by providing richer feedback to form the internal communicatable representation in the speaking agent. We provide experimental evidence that these extensions can lead to a doubled learning speed when added to an existing approach to language emergence in referential games." }, { "heading": "5 EXPERIMENTS", "text": "To test the potential of the auxiliary prediction task, we used the referential game setup proposed by Lazaridou et al. (2018), shown in Fig. 1. Out of the existing implementations, we chose this one because the setup has proven to converge to an emergent communication at a relatively low computational cost.\n\nDataset We use the Visual Attributes for Concepts Dataset (VisA) of Silberer et al. (2013). It contains attribute annotations for 500 concrete concepts (like cat or sofa), annotated with 636 general attributes (like is black or made of wood). The annotations are human-made and therefore carry an inherent structure that can be seen as disentangled.\n\nAgent Setup A speaker agent is shown a target concept t that is realized as a binary vector with as many entries as possible attributes. The speaker then uses a policy πS to produce a message m out of an alphabet of discrete symbols (numbers 1 to 100 in our case). The message is then interpreted by a listener agent that observes several candidate concepts C at the same time. The listener uses a pointing policy πL to decide which of the candidate concepts the speaker agent is describing. 
Both agents receive a shared reward R if the listener correctly identifies the described concept:\n\nR(t′) = 1 if t = t′, and 0 otherwise. (4)\n\nThe speaker agent consists of a single encoding layer to encode the input vector into a dense representation hS and an LSTM to produce a message out of hS. The listener agent encodes the message with an embedding layer and an LSTM into a dense representation hL. The listener contains an encoding layer as well, which it applies to every candidate concept respectively to generate a set of representations. It calculates the compliance between message and candidate concepts via the dot product between the message representation and each concept representation. The result is treated as a Gibbs distribution. Both policies πS and πL output a probability distribution over all possible actions. For the speaker, the possible actions are the elements of the alphabet, once for every symbol over the length of the message. For the listener, the actions consist of choosing each of the candidate concepts in C.\n\nFor more details see Lazaridou et al. (2018).\n\nLearning As part of the reinforcement learning setting, the agents try to maximize the expected reward. They do not share any parameters, but each tries to maximize the probability of its own actions that resulted in a positive reward. Together, they therefore maximize the following objective function in each training instance:\n\nR(t′) ( ∑_{l=1}^{L} log p_{πS}(m_t^l | m_t^{<l}, hS) + log p_{πL}(t′ | hL, C) ) (5)\n\nEmpathy Extension To generate richer gradients for shaping the deep encoding layers of the speaker, we assign an auxiliary unsupervised prediction task to it. We add a single-layer MLP to the graph, which predicts the activation of the listener's hidden layer hL after hearing the full message. The input is the activation of the speaker's hidden layer hS before starting the utterance. That corresponds to predicting the effect of the to-be-made sentence on the mindset of the listener. We use the mean absolute error as the loss function for the prediction task:\n\nloss = α |σ(wθ(hS)) − hL|, (6)\n\nwhere α is a weighting factor that ensures that the unsupervised task does not corrupt the main reinforcement learning task. An α close to 1 would mean that effectively manipulating the listener's mind is as important to the speaker as communicating the target concept. wθ is the linear transformation through the MLP with sigmoid activation function σ. The gradients of the unsupervised task are calculated on the same trial and added to the gradients of the reinforcement task; hence, no additional training steps are necessary. The optimization then uses the summed gradients.\n\nWe implemented the setup using the EGG toolkit (Kharitonov et al., 2019).\n\nFigure 2: Integration of the auxiliary prediction task of the speaker into the neural network architecture." }, { "heading": "6 RESULTS", "text": "We found that with an α of around 0.1, i.e. weighting the prediction gradients at 10% relative to the main task, we can double or triple the learning speed. For comparability, we used the same initialization and sampling seeds for both options. Good or bad initialization can make a difference of up to half the learning speed, but the relative learning speed improvement from using the prediction task stays consistent over different initializations. In Fig. 3 we compare the learning curves with and without the prediction task. 
For a game setup with two candidate concepts and a maximum message length of two, all marks are reached in half the time. For a more complex game setup with five candidate concepts and a maximum message length of five, some marks are even reached in a third of the time when using the prediction task." }, { "heading": "7 CONCLUSION", "text": "Using an auxiliary predictive task on a communication learning task has proven promising. Sample efficiency is highly desirable when acquiring language, so the fact that our auxiliary task doubles the learning speed is of high significance. Our experiments so far only feature a small part of the potential of this elegant mechanism. Higher sample efficiency at no computational cost now allows acquiring more complicated language tasks that were previously impossible to learn in a reasonable time. We plan to apply our algorithm to much more challenging tasks in the future. For example, we only tested disentangled input due to computational limitations. The mechanism would be even more useful when applied to entangled input, because developing an expressive representation is then of higher importance. For future research, we propose the use of an auxiliary prediction task for the listener as well, to align with the word usage of the speaker. We hope that this simple but powerful mechanism brings the field of language emergence a big step forward." } ]
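To make the training objective of Eq. (5) and the empathy term of Eq. (6) concrete, here is a minimal PyTorch-style sketch for one episode (our own hedged illustration; the hidden dimension, the `predictor` layer and all variable names are assumptions, and we detach the listener state so that the empathy gradient only shapes the speaker, as the paper's description suggests):

```python
import torch
import torch.nn.functional as F

# Hypothetical single-layer predictor mapping the speaker state h_S to a guess of h_L.
HIDDEN = 64
predictor = torch.nn.Sequential(torch.nn.Linear(HIDDEN, HIDDEN), torch.nn.Sigmoid())

def episode_loss(reward, speaker_logps, listener_logp, h_S, h_L, alpha=0.1):
    """REINFORCE objective of Eq. (5) plus the alpha-weighted empathy loss of Eq. (6)."""
    # Eq. (5): reward-weighted log-probabilities of both agents' chosen actions
    # (negated, since optimizers minimize).
    reinforce = -reward * (speaker_logps.sum() + listener_logp)
    # Eq. (6): mean absolute error between predicted and actual listener hidden state.
    empathy = alpha * F.l1_loss(predictor(h_S), h_L.detach())
    return reinforce + empathy
```

Both terms are computed on the same episode, so minimizing their sum adds the empathy gradients to the RL gradients in a single backward pass, exactly in the spirit of the "summed gradients" described in Section 5.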
2019
null
SP:6bf1569771191ea913217f527173f454d50e266c
[ "This paper propose to modify the existing work [1] of self-training framework for graph convolutional networks. It tracks three limitations of [1] and propose three  use a threshold-based rule to insert new pseudo-labels and dynamic change the pseudo-label set. Moreover personalized weight are assigned to each activate pseudo-label proportional to its current classification margin. Evaluation of the proposed framework is performed on four networks for semi-supervised node classification task with varying label rates.", "This paper proposes a generalised self-training framework to build a Graph Neural Network to label graphs. Of importance is the dynamic nature of the self-training. The authors do not change the GCN but extend the self-training portion as per the prior GCN paper by introducing Dynamic Self-Training that keeps a confidence score of labels predicted for unlabelled nodes." ]
Graph neural networks (GNNs) such as GCN, GAT, and MoNet have achieved state-of-the-art results on semi-supervised learning on graphs. However, when the number of labeled nodes is very small, the performance of GNNs degrades dramatically. Self-training has proved to be effective for resolving this issue; however, the performance of self-trained GCN is still inferior to that of G2G and DGI in many settings. Moreover, the additional model complexity makes it more difficult to tune the hyper-parameters and do model selection. We argue that the power of self-training is still not fully explored for the node classification task. In this paper, we propose a unified end-to-end self-training framework called Dynamic Self-training, which generalizes and simplifies prior work. A simple instantiation of the framework based on GCN is provided, and empirical results show that our framework outperforms all previous methods, including GNNs, embedding-based methods and self-trained GCNs, by a noticeable margin. Moreover, compared with standard self-training, hyper-parameter tuning for our framework is easier.
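The kind of threshold-based, dynamically weighted pseudo-labeling described above can be illustrated with a minimal sketch of one training step (assembled from the abstract and the summaries, not the authors' code; the GCN interface, the threshold τ, and the margin-based weighting are assumptions on our part):

```python
import torch
import torch.nn.functional as F

def dynamic_self_training_step(model, optimizer, x, adj, y, labeled, tau=0.9):
    """One step: supervised loss plus a margin-weighted pseudo-label loss.

    labeled is a boolean mask over nodes; y holds the true labels where available.
    """
    logits = model(x, adj)
    loss = F.cross_entropy(logits[labeled], y[labeled])

    with torch.no_grad():                        # re-select pseudo-labels every step
        probs = F.softmax(logits, dim=1)
        top2 = probs.topk(2, dim=1).values
        margin = top2[:, 0] - top2[:, 1]         # current classification margin
        pseudo = (top2[:, 0] > tau) & ~labeled   # threshold rule, unlabeled nodes only

    if pseudo.any():                             # weight each pseudo-label by its margin
        ce = F.cross_entropy(logits[pseudo], probs[pseudo].argmax(1), reduction="none")
        loss = loss + (margin[pseudo] * ce).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```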
[]
[ { "authors": [ "James Atwood", "Don Towsley" ], "title": "Diffusion-convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Deep gaussian embedding of graphs: Unsupervised inductive learning via ranking", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Joan Bruna", "Wojciech Zaremba", "Arthur Szlam", "Yann LeCun" ], "title": "Spectral networks and locally connected networks on graphs", "venue": "International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Eliav Buchnik", "Edith Cohen" ], "title": "Bootstrapped graph diffusions: Exposing the power of nonlinearity", "venue": "In Abstracts of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems,", "year": 2018 }, { "authors": [ "Jie Chen", "Tengfei Ma", "Cao Xiao" ], "title": "Fastgcn: fast learning with graph convolutional networks via importance sampling", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Minmin Chen", "Kilian Q Weinberger", "John Blitzer" ], "title": "Co-training for domain adaptation", "venue": "In Advances in neural information processing systems,", "year": 2011 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Brett Drury", "Luis Torgo", "Jose Joao Almeida" ], "title": "Guided self training for sentiment classification", "venue": "In Proceedings of Workshop on Robust Unsupervised and Semisupervised Methods in Natural Language Processing,", "year": 2011 }, { "authors": [ "David K Duvenaud", "Dougal Maclaurin", "Jorge Iparraguirre", "Rafael Bombarell", "Timothy Hirzel", "Alán Aspuru-Guzik", "Ryan P Adams" ], "title": "Convolutional networks on graphs for learning molecular fingerprints", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Aditya Grover", "Jure Leskovec" ], "title": "node2vec: Scalable feature learning for networks", "venue": "In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Marti Hearst" ], "title": "Noun homograph disambiguation using local context in large text corpora", "venue": "Using Corpora,", "year": 1991 }, { "authors": [ "Mikael Henaff", "Joan Bruna", "Yann LeCun" ], "title": "Deep convolutional networks on graph-structured data", "venue": "arXiv preprint arXiv:1506.05163,", "year": 2015 }, { "authors": [ "Zhongqiang Huang", "Mary Harper" ], "title": "Self-training pcfg grammars with latent annotations across languages", "venue": "In Proceedings of the 2009 conference on empirical methods in natural language processing: Volume 2-Volume", "year": 2009 }, { "authors": [ "Zhongqiang Huang", "Vladimir Eidelman", "Mary Harper" ], "title": "Improving a simple bigram hmm partof-speech tagger by latent annotation and self-training", "venue": "In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association 
for Computational Linguistics, Companion Volume: Short Papers,", "year": 2009 }, { "authors": [ "Diederik Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Zornitsa Kozareva", "Boyan Bonev", "Andres Montoyo" ], "title": "Self-training and co-training applied to spanish named entity recognition", "venue": "In Mexican International conference on Artificial Intelligence,", "year": 2005 }, { "authors": [ "Dong-Hyun Lee" ], "title": "Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks", "venue": "In Workshop on Challenges in Representation Learning, ICML,", "year": 2013 }, { "authors": [ "Jurica Levatić", "Michelangelo Ceci", "Dragi Kocev", "Sašo Džeroski" ], "title": "Self-training for multi-target regression with tree ensembles", "venue": "Knowledge-Based Systems,", "year": 2017 }, { "authors": [ "Qimai Li", "Zhichao Han", "Xiao-Ming Wu" ], "title": "Deeper insights into graph convolutional networks for semi-supervised learning", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Qian Liu", "Bingyang Liu", "Dayong Wu", "Yue Liu", "Xueqi Cheng" ], "title": "A self-learning template approach for recognizing named entities from web text", "venue": "In Proceedings of the Sixth International Joint Conference on Natural Language Processing,", "year": 2013 }, { "authors": [ "Zhiguang Liu", "Xishuang Dong", "Yi Guan", "Jinfeng Yang" ], "title": "Reserved self-training: A semi-supervised sentiment classification method for chinese microblogs", "venue": "In Proceedings of the Sixth International Joint Conference on Natural Language Processing,", "year": 2013 }, { "authors": [ "David McClosky", "Eugene Charniak", "Mark Johnson" ], "title": "Effective self-training for parsing", "venue": "In Proceedings of the main conference on human language technology conference of the North American Chapter of the Association of Computational Linguistics,", "year": 2006 }, { "authors": [ "David McClosky", "Eugene Charniak", "Mark Johnson" ], "title": "When is self-training effective for parsing", "venue": "In Proceedings of the 22nd International Conference on Computational Linguistics-Volume", "year": 2008 }, { "authors": [ "Federico Monti", "Davide Boscaini", "Jonathan Masci", "Emanuele Rodola", "Jan Svoboda", "Michael M Bronstein" ], "title": "Geometric deep learning on graphs and manifolds using mixture model cnns", "venue": "In Proc. 
CVPR,", "year": 2017 }, { "authors": [ "Kamal Nigam", "Andrew Kachites McCallum", "Sebastian Thrun", "Tom Mitchell" ], "title": "Text classification from labeled and unlabeled documents using em", "venue": "Machine learning,", "year": 2000 }, { "authors": [ "Bryan Perozzi", "Rami Al-Rfou", "Steven Skiena" ], "title": "Deepwalk: Online learning of social representations", "venue": "In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2014 }, { "authors": [ "Yanjun Qi", "Pavel Kuksa", "Ronan Collobert", "Kunihiko Sadamasa", "Koray Kavukcuoglu", "Jason Weston" ], "title": "Semi-supervised sequence labeling with self-learned features", "venue": "Ninth IEEE International Conference on Data Mining,", "year": 2009 }, { "authors": [ "Ellen Riloff", "Rosie Jones" ], "title": "Learning dictionaries for information extraction by multi-level bootstrapping", "venue": "In AAAI/IAAI, pp", "year": 1999 }, { "authors": [ "Ellen Riloff", "Janyce Wiebe", "Theresa Wilson" ], "title": "Learning subjective nouns using extraction pattern bootstrapping", "venue": "In Proceedings of the seventh conference on Natural language learning at HLTNAACL 2003-Volume", "year": 2003 }, { "authors": [ "Chuck Rosenberg", "Martial Hebert", "Henry Schneiderman" ], "title": "Semi-supervised self-training of object detection", "venue": "models. WACV/MOTION,", "year": 2005 }, { "authors": [ "Kenji Sagae" ], "title": "Self-training without reranking for parser domain adaptation and its impact on semantic role labeling", "venue": "In Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing,", "year": 2010 }, { "authors": [ "H Scudder" ], "title": "Probability of error of some adaptive pattern-recognition machines", "venue": "IEEE Transactions on Information Theory,", "year": 1965 }, { "authors": [ "Prithviraj Sen", "Galileo Namata", "Mustafa Bilgic", "Lise Getoor", "Brian Galligher", "Tina Eliassi-Rad" ], "title": "Collective classification in network data", "venue": "AI magazine,", "year": 2008 }, { "authors": [ "Oleksandr Shchur", "Maximilian Mumme", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Pitfalls of graph neural network evaluation", "venue": "CoRR, abs/1811.05868,", "year": 2018 }, { "authors": [ "Ke Sun", "Zhanxing Zhu", "Zhouchen Lin" ], "title": "Multi-stage self-supervised learning for graph convolutional networks", "venue": "arXiv preprint arXiv:1902.11038,", "year": 2019 }, { "authors": [ "Jian Tang", "Meng Qu", "Mingzhe Wang", "Ming Zhang", "Jun Yan", "Qiaozhu Mei" ], "title": "Line: Large-scale information network embedding", "venue": "In Proceedings of the 24th International Conference on World Wide Web,", "year": 2015 }, { "authors": [ "Vincent Van Asch", "Walter Daelemans" ], "title": "Predicting the effectiveness of self-training: Application to sentiment classification", "venue": "arXiv preprint arXiv:1601.03288,", "year": 2016 }, { "authors": [ "Petar Veličković", "William Fedus", "William L Hamilton", "Pietro Liò", "Yoshua Bengio", "R Devon Hjelm" ], "title": "Deep graph infomax", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Daixin Wang", "Peng Cui", "Wenwu Zhu" ], "title": "Structural deep network embedding", "venue": "In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "Wen Wang", "Zhongqiang Huang", "Mary Harper" ], "title": "Semi-supervised learning for part-of-speech 
tagging of mandarin transcribed speech", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing-ICASSP’07,", "year": 2007 }, { "authors": [ "Felix Wu", "Amauri Souza", "Tianyi Zhang", "Christopher Fifty", "Tao Yu", "Kilian Weinberger" ], "title": "Simplifying graph convolutional networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Keyulu Xu", "Chengtao Li", "Yonglong Tian", "Tomohiro Sonobe", "Ken-ichi Kawarabayashi", "Stefanie Jegelka" ], "title": "Representation learning on graphs with jumping knowledge networks", "venue": "International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "David Yarowsky" ], "title": "Unsupervised word sense disambiguation rivaling supervised methods", "venue": "In 33rd annual meeting of the association for computational linguistics,", "year": 1995 }, { "authors": [ "Yan Zhou", "Murat Kantarcioglu", "Bhavani Thuraisingham" ], "title": "Self-training with selection-by-rejection", "venue": "IEEE 12th international conference on data mining,", "year": 2012 }, { "authors": [ "Xiaojin Zhu", "Zoubin Ghahramani", "John D Lafferty" ], "title": "Semi-supervised learning using gaussian fields and harmonic functions", "venue": "In Proceedings of the 20th International conference on Machine learning,", "year": 2003 } ]
[ { "heading": "1 INTRODUCTION", "text": "Graphs or networks can be used to model any interactions between entities such as social interactions (Facebook, Twitter), biological networks (protein-protein interaction), and citation networks. There has been an increasing research interest in deep learning on graph structured data, e.g., (Bruna et al., 2014; Defferrard et al., 2016; Monti et al., 2017; Kipf & Welling, 2017; Hamilton et al., 2017; Velickovic et al., 2018; Tang et al., 2015; Perozzi et al., 2014).\nSemi-supervised node classification on graphs is a fundamental learning task with many applications. Classic methods rely on some underly diffusion process to propagate label information. Recently, network embedding approaches have demonstrate outstanding performance on node classification (Tang et al., 2015; Grover & Leskovec, 2016; Bojchevski & Günnemann, 2018). This approach first learns a lower-dimensional embedding for each node in an unsupervised manner, and then the embeddings are used to train a supervised classifier for node classification, e.g., logistic regression or multi-layer perceptron (MLP). Graph neural networks (GNN) are semi-supervised models and have achieved state-of-the-art performance on many benchmark data sets (Monti et al., 2017; Kipf & Welling, 2017; Velickovic et al., 2018). GNNs generalize convolution to graph structured data and typically have a clear advantage when the number of training examples is reasonably large. However, when there are very few labeled nodes, GNNs is outperformed by embedding based method (as shown by our experimental results), e.g., G2G from (Bojchevski & Günnemann, 2018) and DGI from (Veličković et al., 2019).\nTo overcome this limitation of GCNs (Kipf & Welling, 2017), Li et al. (Li et al., 2018) propose to apply self-training and co-training techniques (Scudder, 1965). The idea of these techniques is to augment the original training set by adding in some unlabeled examples together with their label predictions. Such “pseudo-label” information is either from the base model trained on the original training set (self-training) or another learning algorithm (co-training). The results from (Li et al., 2018) demonstrate the effectiveness of co-training and self-training. However, among the four variants implemented in (Li et al., 2018), there is not a single one that achieves the best performance across different settings; and from our experiments, G2G and DGI outperforms all the four variants when the number of labels from each class is less than 10. There are clear restrictions in prior self-training approaches. First, the pseudo-label set is incremental only, i.e., after an unlabeled\nexample is added to the training set, it will never be deleted and its pseudo-label will never change even if its prediction and/or the corresponding margin has changed drastically. Secondly, all the pseudo-labels are considered equal, although they may have very different classification margins. Furthermore, it introduces extra hyper-parameters such as the number of unlabeled nodes to be added into the training set and the total number of self-training iterations. The performance gain is sensitive to such parameters and their optimal values may differ for different data sets and label rates (Buchnik & Cohen, 2018).\nTo fully understand and explore the power of self-training on the node classification task, we propose a novel self-training framework, named Dynamic Self-training, which is general, flexible, and easy to use. 
We provide a simple instantiation of the framework based on GCN (Kipf & Welling, 2017) and empirically show that it outperforms state-of-the-art methods including GNNs, self-trained GCN (Li et al., 2018), and embedding-based methods. Our framework has the following distinguishing features compared with (Li et al., 2018; Buchnik & Cohen, 2018).\n1. We augment the training set and recalculate the pseudo-labels after each epoch. Thus the number of self-training iterations is the same as the number of epochs, and the pseudo-label assigned to an unlabeled example may change during the training process.\n2. Instead of inserting a fixed number of new pseudo-labels with the highest margin in each iteration, we use a threshold-based rule, i.e., insert an unlabeled node if and only if its classification margin is above the threshold.\n3. The pseudo-label set is dynamic. When the margin of an unlabeled node is above the threshold, we activate it by adding it to the loss function; but if the margin of this node drops below the threshold in a later epoch, we deactivate it.\n4. We assign a (dynamic) personalized weight to each active pseudo-label, proportional to its current classification margin. The total pseudo-label loss is thus the weighted sum of the losses corresponding to all pseudo-labels." }, { "heading": "2 PRELIMINARIES", "text": "" }, { "heading": "2.1 GRAPH NOTATION AND PROBLEM DEFINITION", "text": "In this problem, we are given an undirected graph with node attributes G = (V, E, X), where V is the vertex set and E is the edge set. Here, X is the feature matrix, the i-th row of which, denoted x_i, is the feature vector of node i. We assume each node belongs to exactly one class and use y_i to denote the class label of the i-th node. The aim is to design learning algorithms to predict the labels of all nodes based on the labels of a small set of training nodes provided in the beginning. We use N_k(i) to denote the set of nodes whose distance to node i is at most k. L ⊂ V is the set of labeled nodes and U = V \\ L is the set of unlabeled nodes." }, { "heading": "2.2 GRAPH CONVOLUTIONAL NETWORKS", "text": "GCN, introduced in (Kipf & Welling, 2017), is a graph neural network model for semi-supervised classification. GCN learns the representation of each node by iteratively aggregating the embeddings of its neighbors. Specifically, GCN consists of L > 0 layers, each with the same propagation rule defined as follows. In the l-th layer, the hidden representations H^{(l−1)} are averaged among one-hop neighbors as:\nH^{(l)} = σ( D̃^{−1/2} Ã D̃^{−1/2} H^{(l−1)} W^{(l)} ).  (1)\nHere, Ã = A + I_n is the adjacency matrix of G after adding self-loops (I_n is the identity matrix), D̃ is a diagonal matrix with D̃_ii = Σ_j Ã_ij, W^{(l)} is the trainable weight matrix of the l-th layer, and σ is a nonlinear activation function; H^{(l)} ∈ R^{n×d_l} denotes the hidden feature matrix of the l-th layer, H^{(0)} = X, and f_i = H^{(L)}_i represents the output of the i-th node.\nWe use l(y_i, f_i) to denote the classification loss of node i, which is typically the cross-entropy function. Thus, the loss function used by GCN is of the form:\nL = Σ_{i∈L} l(y_i, f_i).  (2)
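To make the propagation rule in Eq. (1) concrete, here is a minimal Python sketch of a single GCN layer. It is only an illustration, not the authors' implementation: it assumes a dense numpy adjacency matrix for readability (practical implementations use sparse operations), and the choice of ReLU for the nonlinearity σ is an assumption.

import numpy as np

def gcn_layer(A, H, W):
    # Eq. (1): H_next = sigma(D~^{-1/2} A~ D~^{-1/2} H W)
    A_tilde = A + np.eye(A.shape[0])            # add self-loops
    d = A_tilde.sum(axis=1)                     # degrees of A_tilde
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # D~^{-1/2}
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # symmetrically normalized adjacency
    return np.maximum(A_hat @ H @ W, 0.0)       # sigma = ReLU (illustrative choice)

Stacking L such layers, with H^{(0)} = X, yields the node outputs f_i used in the loss of Eq. (2).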
For a k-layer GCN, the receptive field of each training example is its order-k neighborhood. When there are only a few training samples, we need to increase the number of layers in order to cover most of the unlabeled nodes. However, a deeper GCN causes the problem of over-smoothing, i.e., critical features of the vertices may be smoothed out through the iterative averaging process, which makes nodes from different classes indistinguishable (Xu et al., 2018; Li et al., 2018)." }, { "heading": "2.3 SELF TRAINING", "text": "Recently, (Li et al., 2018) applied self-training to overcome these limitations of GCNs. Self-training is a natural and general approach to semi-supervised learning, which is particularly well-motivated in the context of node classification (Buchnik & Cohen, 2018; Li et al., 2018). Assume we have a base model/algorithm for the learning problem, which takes as input a set of labeled examples and makes predictions for other examples. Typically, for each unlabeled node, the base algorithm will also return an associated margin or confidence score. The self-training framework trains and applies the base model in rounds, where at the end of each round the highest-confidence predictions are converted into new labeled examples for the next round of training and prediction. Thus, the receptive fields of all the labeled nodes increase and will eventually cover the entire graph, which resolves the issue of GCNs without adding more layers." }, { "heading": "3 OUR METHOD", "text": "" }, { "heading": "3.1 A GENERALIZED SELF-TRAINING FRAMEWORK", "text": "Algorithm 1: Dynamic Self-training Framework\n1 Generate initial parameters θ0 for model f(·, ·), and the initial confidence score vector SV.\n2 for each epoch t = 1, 2, ..., T do\n3 Compute predictions fV ← f(G, θ_{t−1})\n4 Update confidence scores SV ← UC(fV)\n5 Update model parameters using the confidence scores: θt ← UP(fV, SV, f)\n6 if the stopping criterion is met then\n7 break\n8 end\n9 end\nSun et al. (Sun et al., 2019) proposed the Multi-stage Training Framework as a generalization of the self-training method in (Li et al., 2018). Inspired by this, we propose a more general end-to-end self-training framework named the Dynamic Self-training Framework, shown in Algorithm 1. Instead of operating on a data split, we maintain a confidence score at each iteration. There are no specified training stages here; instead, we update the confidence value for each unlabeled node after every epoch.\nConsider the original model f(·, ·) as a forward predicting function with trainable parameters. The graph data G and the trainable parameters θt are the inputs of this function, and the output of this model is collected into fV ∈ R^{n×C}, where f_v denotes the output vector (before label assignment) of node v ∈ V, and C = d_L is the number of classes. Then we construct the confidence score vector SV ∈ R^n from the model output f_v using a function UC, which can be instantiated in many forms. For example, Algorithm 2 illustrates how the standard multi-stage self-training GCN implements this part. Finally, we update the model parameters using a specified algorithm such as gradient descent, in which the confidence score vector plays a role. The confidence score participates in the parameter-updating process in an end-to-end manner. An example of this part can be seen in Section 3.3.
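The following Python sketch illustrates the shape of Algorithm 1: confidence scores are recomputed every epoch and feed directly into the loss, rather than freezing a pseudo-label set between stages. The model, optimizer, loss_fn, and update_confidence objects are assumed PyTorch-style placeholders; all names here are illustrative, not part of the paper.

def dynamic_self_training(model, optimizer, G, y, labeled, T, loss_fn, update_confidence):
    # model(G) plays the role of f(G, theta_t); update_confidence is UC;
    # the confidence-weighted loss below is the end-to-end UP step.
    for t in range(T):
        f = model(G)                          # predictions f_V
        S = update_confidence(f)              # UC: recompute scores every epoch
        loss = loss_fn(f, y, labeled, S)      # confidence-weighted loss (see Eq. 5)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model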
}, { "heading": "3.2 PSEUDO LABEL METHOD", "text": "Define the pseudo label ỹi ∈ RdL of i-th node which satisfies :\nỹij = { 1 if j = argmaxj′ fij′ 0 otherwise\n(3)\nAlgorithm 2: Update confidence score for Multi-stage Self-training GCN 1 if the stage is currently switched then 2 for each class k do 3 Find the top m vertices v in fV and v ∈ U 4 Change the value of v in SV to 1 5 end 6 return SV 7 end\n(Lee, 2013) introduced a pseudo label version of semi-supervised losses: L = ∑ i∈L l(yi, fi) + λ ∑ i∈U l(ỹi, fi), (4) where λ = nn′ γ, n = |L|, n ′ = |U|, γ ∈ R is a hyper-parameter and the additive term ∑ i∈U l(ỹi, fi) is the pseudo label loss. Here, λ measures how much the pseudo label term influence the training process. This is equivalent to Entropy Regularization for classification problems (Lee, 2013)." }, { "heading": "3.3 SOFT LABEL CONFIDENCE", "text": "In standard multi-stage self-training methods, a node just has two states: in the training set or not, which corresponds to binary-valued confidences {0, 1}; and in most cases, if a node is added in training set, it will be kept there. This simple setting hinders learning in some cases. For instance, if the classifier puts a wrongly labeled node into the training set, which is of high possibility in preliminary training epochs, it will persistently learn wrong knowledge from this node. Worse still, another wrongly adding is more possible. This negative feedback loop may contribute to a extremely poor classifier. Moreover, original labeled nodes and added nodes in the training are treated equally, which is too restricted and may harm the learning; explicitly distinguishing them in the training process could be beneficial. To resolve these problems, we introduce a mechanism named Soft Label Confidence as the confidence updating component in algorithm 1, which computes a personalized confidence value for each node, and the training set is dynamically changing except the ground truth labels. Based on the pseudo label loss (4), we propose the loss wrapped by soft label confidence:\nL = ∑ i∈L l(yi, fi) + λ ∑ i∈U α(fi)l(ỹi, fi). (5)\nHere α is a function mapping from RdL to R, defined as confidence function. While there are other possible choices for α, in our method we adopt a threshold based function:\nα(fi) = 1\nn′ci max(ReLU(fi − β · 1)), (6)\nHere β ∈ (0, 1) is a hyper-parameter as threshold, n′ci denotes the number of nodes whose pseudo label belongs to class ci, ci is the class which i-th node’s pseudo label belongs to, and 1 is the all 1 vector. We introduce n′ci here to balance the categories of pseudo labels, because pseudo labels could be initially extremely unbalanced and lead to a poor classifier in practice.\nAlthough α(fi) depends on fi, and thus a function of network’s weights, we will block the flow of gradient through α(fi) for the following reasons: Firstly, confidence function is non-differentiable in most cases. Secondly, if we allow the gradient to flow through α(fi), the optimizer may tend to find a solution that satisfies max(fi) < β,∀i ∈ V , since for such a solution, α(fi) = 0 for all i and the pseudo label loss is zero, which does no good to self-supervised learning. So we use the following way to compute the gradient:\n∂L ∂W ls,t = ∑ i∈L ∂l(yi, fi) ∂W ls,t + λ ∑ i∈U α(fi) ∂l(ỹi, fi) ∂W ls,t (7)" }, { "heading": "4 RELATED WORK", "text": "Graph Convolutional Network The work of GNNs seeks generalizations of the convolution operator to graph structured data. 
" }, { "heading": "4 RELATED WORK", "text": "Graph Convolutional Networks. Work on GNNs seeks generalizations of the convolution operator to graph-structured data. One way to do this is to apply convolution in the spectral domain, where the eigenvectors of the graph Laplacian are considered as the Fourier basis (Bruna et al., 2014; Henaff et al., 2015; Defferrard et al., 2016; Kipf & Welling, 2017). Such spectral methods learn hidden-layer representations that encode both graph structure and node features simultaneously. Kipf and Welling (Kipf & Welling, 2017) simplify previous spectral techniques by restricting the propagation to a 1-hop neighborhood in each layer. (Chen et al., 2018) propose FastGCN, which improves the training speed of the original GCN. The GAT of (Velickovic et al., 2018) allows assigning different importance to nodes of the same neighborhood via attention mechanisms. (Xu et al., 2018) introduce JK networks, which adjust the influence radius of each node adaptively. Another direction that generalizes convolutions to graph-structured data, namely non-spectral approaches, defines convolutions directly in the spatial domain (Duvenaud et al., 2015; Atwood & Towsley, 2016; Monti et al., 2017). Such methods are easier to adapt to inductive learning (Hamilton et al., 2017; Velickovic et al., 2018; Bojchevski & Günnemann, 2018). However, few-shot learning remains a challenge for this class of methods.\nLabel Propagation. Unlike GNNs, which propagate node representations, the classic Label Propagation (LP) method (Zhu et al., 2003) iteratively propagates (soft) labels. More specifically, in each iteration, each unlabeled node obtains a new soft label that is the aggregation of the soft labels of its neighbors from the previous iteration. The key to LP is to design an effective propagation rule; for some propagation rules, the algorithm may not converge and/or the accuracy may not improve over iterations. Thus, one often needs to specify a stopping criterion and a validation set for model selection. LP can also be used as the base algorithm in the self-training framework.\nSelf-training. Self-training is a natural and general approach to semi-supervised learning (Scudder, 1965) and has been widely used in the NLP literature. Self-training is used by (Yarowsky, 1995; Hearst, 1991) for word sense disambiguation. (Riloff et al., 1999) used self-training in the form of bootstrapping for information extraction, and later (Riloff et al., 2003) for learning subjective nouns; (Nigam et al., 2000) used EM for text classification. Self-training has been used for object recognition (Rosenberg et al., 2005; Zhou et al., 2012). (McClosky et al., 2006; 2008; Huang & Harper, 2009; Sagae, 2010) show how effective self-training can be for parsing. (Wang et al., 2007; Huang et al., 2009; Qi et al., 2009) introduce self-training techniques for part-of-speech tagging, and (Kozareva et al., 2005; Liu et al., 2013a) adopt self-training for named entity recognition. (Van Asch & Daelemans, 2016; Drury et al., 2011; Liu et al., 2013b) used self-training for sentiment classification. Recently, self-training has also been successfully applied to node classification. Li et al. (Li et al., 2018) study self-training GCNs; Buchnik and Cohen (Buchnik & Cohen, 2018) mainly consider the effect of self-training for diffusion-based techniques. In the pseudo-label method of (Lee, 2013), the pseudo-labels of unlabeled data are recalculated at every weight update. 
However, they do not assign a weight to each unlabeled example.\nAs for the self-training algorithm itself, (Chen et al., 2011) show that selecting highly confident instances with a pre-defined threshold may not perform well. (McClosky et al., 2006) produce a ranked list of the n-best predicted parses and select the best one. (Rosenberg et al., 2005) show that a training-data selection metric defined independently of the detector greatly outperforms a selection metric based on the detection confidence generated by the detector. (Zhou et al., 2012) suggest that selecting more informative unlabelled data using a guided search algorithm can significantly improve performance over the standard self-training framework. Most recently, (Levatić et al., 2017) proposed an algorithm to automatically select an appropriate threshold.\nNetwork Embedding. Node classification is also one of the main applications of network embedding methods, which learn a lower-dimensional representation for each node in an unsupervised manner, followed by a supervised classifier for node classification (Perozzi et al., 2014; Tang et al., 2015; Grover & Leskovec, 2016; Wang et al., 2016; Bojchevski & Günnemann, 2018). A recent work (Bojchevski & Günnemann, 2018) proposes Graph2Gauss. This method embeds each node as a Gaussian distribution according to a novel ranking similarity based on the shortest-path distances between nodes. A distribution embedding naturally captures the uncertainty about the representation. DGI (Veličković et al., 2019) is an embedding method based on GCNs whose unsupervised objective is to maximize mutual information. Embedding approaches achieve competitive performance in node classification tasks, while the learned representations also prove to be extremely useful for other downstream applications." }, { "heading": "5 EVALUATION", "text": "" }, { "heading": "5.1 DATASET", "text": "We conduct the evaluation on four benchmark citation datasets: Cora, Citeseer, Pubmed (Sen et al., 2008), and Cora-full (Bojchevski & Günnemann, 2018). Each of these four datasets is an undirected graph with node features. Each node is a document and the edges denote citation relationships; the feature of a node is the bag-of-words representation of the document. The number of layers in GCN is two by default, and thus the receptive field of each labeled node is its order-2 neighborhood. We measure the fraction of nodes that are covered by the 2-hop neighbors of all labeled nodes, i.e., |∪_{s∈S} N_2(s)| / |V|, where S is the set of labeled nodes randomly sampled from V. Here we report the 2-hop coverage ratio on the four datasets when the label rates are 1% and 0.5%, respectively. We summarize the information of the datasets in Table 1." }, { "heading": "5.2 EXPERIMENT SETTINGS", "text": "We evaluate models on semi-supervised node classification tasks with varying label rates. Instead of evaluating on a fixed data split as in (Kipf & Welling, 2017; Velickovic et al., 2018), we mainly consider random splits, as (Li et al., 2018) does. In detail, for a given label rate, we randomly generate 100 different splits of each dataset. In each split, there is a labeled set of prespecified size for training, and in this set each class contains the same number of labeled nodes. As in (Li et al., 2018), we do not use a validation set, and all the remaining nodes are used for testing. For simplicity, we will refer to a task in the form dataset-l, where l is the number of labeled nodes per class. 
For example, Cora-1 denotes the classification task on the Cora dataset with one seed per class." }, { "heading": "5.3 IMPLEMENTATION DETAILS", "text": "For all the models (Perozzi et al., 2014; Tang et al., 2015; Grover & Leskovec, 2016; Wang et al., 2016; Bojchevski & Günnemann, 2018; Velickovic et al., 2018; Monti et al., 2017) except the GCN-based methods, the hyper-parameter settings are the same as suggested in the original papers. All GCN-based methods, including GCN, Self-training GCN, Co-training GCN, Intersection GCN, Union GCN, and DSGCN, share the same hyper-parameter settings following (Shchur et al., 2018): one hidden layer with 64 units, dropout rate 0.8, the Adam optimizer (Kingma & Ba, 2015) with learning rate 10^{-2}, and L2 regularization with weight 10^{-3}. We train the other GCN-based methods for a fixed 200 epochs, while DSGCN is trained for 600 epochs in few-label tasks (1, 3, 5, and 10 labels per class). Because 20 or 50 labels per class imply ample supervised information, we train DSGCN for 200 epochs in these tasks. The four variants of (Li et al., 2018), Self-training GCN, Co-training GCN, Intersection GCN, and Union GCN, follow the original self-training settings in (Li et al., 2018). For DSGCN, we use a threshold of 0.6 when the number of labels per class is below 3, and set the threshold to 0.75 for label rates above 3 but below 10. Otherwise, the threshold is 0.9 by default." }, { "heading": "5.4 RESULT ANALYSIS", "text": "The numerical results are summarized in Table 2 and Table 3. The highest accuracy in each column is highlighted in bold and the top 3 are underlined. We group all models into three categories: GNN variants (GCN, GAT, MoNet), unsupervised embedding methods (DeepWalk, DGI, LINE, G2G), and GCN with self-training (Co-training, Self-training, Union and Intersection, DSGCN).\nComparison Between GNN Variants and Embedding Methods. As unsupervised methods, G2G and DGI outperform all GNN variants in very-few-label cases, e.g., 1 and 3 labels per class on both Cora and Citeseer. Observing that LP performs well on Cora-1 while other feature-propagation methods do not, we can naturally conclude that on datasets with graph structure, at low label rates, concentrating on the unsupervised information (both the strong manifold structure (Li et al., 2018) and feature patterns) improves semi-supervised models compared to just utilizing supervised information. As the label rate grows, all GNN variants enjoy better accuracies compared to unsupervised models. Hence we empirically verify the strong generalization ability of GNNs when the supervised information is sufficient. Sun et al. (Sun et al., 2019) have demonstrated the limitation of GCN in the few-label case, and here we find that these convolution-based methods suffer from inefficient propagation of label information as well, which can be seen as an intrinsic drawback of semi-supervised graph-convolution-based methods.\nComparison Between Self-training GCNs and All Other Models. In all few-label tasks, self-training strategies improve over GCN by a remarkable margin. Except for tasks with 50 labels per class, the best accuracy is always obtained by a self-training GCN. Even in the extreme one-label case, where unsupervised information is more vital, DSGCN outperforms G2G by a margin of 6.2% on Cora and 9.2% on Citeseer. We conclude that the self-training strategy is capable of utilizing unsupervised information more effectively. Thus it significantly helps classification. 
Additionally, the four naive self-training GCNs implemented in (Li et al., 2018) are worse than GCN at higher label rates, e.g., Cora-50 and Cora-full-5, which shows that inappropriate self-training strategies will sometimes degrade the performance of the base model. Hence there is a trade-off between capturing unsupervised signals and learning the supervised information well. However, DSGCN strikes a good balance here. It shows little decrease compared to GCN even in the worst-case task, Cora-full-50, where the accuracy only decreases by 0.6%; in all other cases it is always better than GCN. This demonstrates that the dynamic self-training framework not only helps the original model capture unsupervised information, but also retains the learning ability when there are enough labels.\nComparison of Self-training GCNs. By applying a simpler and more general self-training strategy, DSGCN outperforms other self-training-based GCNs by considerable margins in most cases. On Citeseer-1, the margin even reaches 14.1% compared with the best strategy among Co-training, Self-training, Union, and Intersection. This empirically supports the advantage of DSGCN over conventional self-training methods for tackling a wide range of classification tasks.\nEffect of Threshold. Here we discuss how the important hyper-parameter β influences the performance of DSGCN. We train DSGCN with different thresholds (0.45, 0.6, 0.75, 0.9, 1.0) for 1000 epochs on Cora and Citeseer, using the same split and the same initial weights. We conduct these experiments on tasks with different seed numbers; the results are presented in Figure 1. As shown in Figure 1, when labels are very few, DSGCN with a relatively low threshold β demonstrates a clear improvement in accuracy over the original GCN. Besides, GCN's accuracy curve fluctuates erratically while the curve of DSGCN with a low threshold does not. Thus, we observe that the stability of the base model is also improved by wrapping it into the dynamic self-training framework. When more labels are provided, all models tend to be stable and a low threshold can harm the training process." }, { "heading": "6 CONCLUSION", "text": "In this paper, we first introduce a novel self-training framework. This framework generalizes and simplifies prior work, providing customizable modules as extensions of multi-stage self-training. We then instantiate this framework based on GCN and empirically compare this model with a number of methods on different dataset splits. Experimental results suggest that when labels are few, the proposed DSGCN not only outperforms all previous models by noticeable margins in accuracy but also enjoys better stability in the training process. Overall, the Dynamic Self-training Framework is powerful for few-label tasks on graph data, and provides a novel perspective on self-training techniques." }, { "heading": "A APPENDIX: ADDITIONAL EXPERIMENTS", "text": "We also test our self-training method on other GNNs, e.g., SGC (Wu et al., 2019), GAT (Velickovic et al., 2018), and GraphSage (Hamilton et al., 2017). For the three GNN models, the hyper-parameter settings are the same as suggested in the original papers. Our dynamic self-training framework shares the same hyper-parameter settings: one hidden layer with 32 units, dropout rate 0.7, the Adam optimizer (Kingma & Ba, 2015), L2 regularization with weight 5×10^{-4}, and threshold 0.9. Clearly, our dynamic self-training framework achieves similar improvements on all three base models. 
The numerical results are summarized in Table 4. We can see that, equipped with our DS framework, these models enjoy a noticeable increase in performance.\nTo evaluate the computational overhead introduced by the dynamic self-training framework, we test the total training time of various models. Intuitively, the computational cost should only increase slightly. The reason is that the computational cost of the original GCN model is dominated by the earlier layers, which involve the entire graph. So even if all nodes become pseudo-labeled, the size of the computation grows by at most a factor of 2, and the number of parameters remains the same. Therefore, the computational cost increases by at most a small constant factor in theory. We have also verified this empirically. We record the training time of the base models before and after applying our framework. In the experiments, the training size is 20 per class, the number of epochs is 200, and the time is the average time (in seconds) over 25 runs. The numerical results can be seen in Table 5." } ]
2019
null
SP:bcf9ed060b00d47720785afbdfd540a4c98715d4
[ "The authors prove lower bounds on the number of queries required for optimizing sums of convex functions. They consider more powerful queries than the usual queries that provide function evaluation/gradient pairs for chosen summands. As was done in [1] (which is cited in the submission), in this work algorithms can also get the answer to a", "This paper proves a better complexity lower bound for stochastic PIFO optimizers on the problem of finite-sum minimization. The paper assumes that the objective function is the sum of n individual loss functions. It further assumes that (1) the optimizer initializes at a fixed point, and (2) at each iteration, it randomly and independently selects one loss function to update the parameter vector. " ]
This paper studies the lower bound complexity for the optimization problem whose objective function is the average of n individual smooth convex functions. We consider algorithms which have access to gradient and proximal oracles for each individual component. For the strongly convex case, we prove that such an algorithm cannot reach an ε-suboptimal point in fewer than Ω((n + √(κn)) log(1/ε)) iterations, where κ is the condition number of the objective function. This lower bound is tighter than previous results and perfectly matches the upper bound of the existing proximal incremental first-order oracle algorithm Point-SAGA. We develop a novel construction to show the above result, which partitions the tridiagonal matrix of classical examples into n groups to make the problem difficult enough for stochastic algorithms. This construction is friendly to the analysis of the proximal oracle and can also be used in the general convex and average smooth cases naturally.
[]
[ { "authors": [ "Alekh Agarwal", "Leon Bottou" ], "title": "A lower bound for the optimization of finite sums", "venue": "In ICML,", "year": 2015 }, { "authors": [ "Zeyuan Allen-Zhu" ], "title": "Katyusha X: Practical momentum method for stochastic sum-of-nonconvex optimization", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Yossi Arjevani", "Ohad Shamir" ], "title": "Communication complexity of distributed convex learning and optimization", "venue": "In NIPS,", "year": 2015 }, { "authors": [ "Yair Carmon", "John C. Duchi", "Oliver Hinder", "Aaron Sidford" ], "title": "Lower bounds for finding stationary points I", "venue": "arXiv preprint arXiv:1710.11606,", "year": 2017 }, { "authors": [ "Aaron Defazio" ], "title": "A simple practical accelerated method for finite sums", "venue": "In NIPS,", "year": 2016 }, { "authors": [ "Aaron Defazio", "Francis Bach", "Simon Lacoste-Julien" ], "title": "SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives", "venue": "In NIPS,", "year": 2014 }, { "authors": [ "Cong Fang", "Chris Junchi Li", "Zhouchen Lin", "Tong Zhang" ], "title": "Spider: Near-optimal non-convex optimization via stochastic path-integrated differential estimator", "venue": null, "year": 2018 }, { "authors": [ "Robert Hannah", "Yanli Liu", "Daniel O’Connor", "Wotao Yin" ], "title": "Breaking the span assumption yields fast finite-sum minimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Rie Johnson", "Tong Zhang" ], "title": "Accelerating stochastic gradient descent using predictive variance reduction", "venue": "In NIPS,", "year": 2013 }, { "authors": [ "Guanghui Lan", "Yi Zhou" ], "title": "An optimal randomized incremental gradient method", "venue": "Mathematical programming,", "year": 2017 }, { "authors": [ "Yurii Nesterov" ], "title": "A method for solving the convex programming problem with convergence rate o(1/kˆ2)", "venue": "In Dokl. akad. nauk Sssr,", "year": 1983 }, { "authors": [ "Yurii Nesterov" ], "title": "Introductory lectures on convex optimization: A basic course, volume 87", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Mark Schmidt", "Nicolas Le Roux", "Francis Bach" ], "title": "Minimizing finite sums with the stochastic average gradient", "venue": "Mathematical Programming,", "year": 2017 }, { "authors": [ "Blake Woodworth", "Nathan Srebro" ], "title": "Tight complexity bounds for optimizing composite objectives", "venue": "In NIPS,", "year": 2016 }, { "authors": [ "Lin Xiao", "Tong Zhang" ], "title": "A proximal stochastic gradient method with progressive variance reduction", "venue": "SIAM Journal on Optimization,", "year": 2014 }, { "authors": [ "Lijun Zhang", "Mehrdad Mahdavi", "Rong Jin" ], "title": "Linear convergence with condition number independent access of full gradients", "venue": "In NIPS,", "year": 2013 }, { "authors": [ "Dongruo Zhou", "Quanquan Gu" ], "title": "Lower bounds for smooth nonconvex finite-sum optimization", "venue": "In ICML,", "year": 2019 } ]
[ { "heading": null, "text": "√ κn) log(1/ε))\niterations, where κ is the condition number of the objective function. This lower bound is tighter than previous results and perfectly matches the upper bound of the existing proximal incremental first-order oracle algorithm Point-SAGA. We develop a novel construction to show the above result, which partitions the tridiagonal matrix of classical examples into n groups to make the problem difficult enough to stochastic algorithms. This construction is friendly to the analysis of proximal oracle and also could be used in general convex and average smooth cases naturally.\n1 INTRODUCTION\nWe consider the minimization of the following optimization problem\nmin x∈Rd\nf(x) , 1\nn n∑ i=1 fi(x), (1)\nwhere the fi(x) are L-smooth and µ-strongly convex. Accordingly, the condition number is defined as κ = L/µ, which is typically larger than n in real-world applications. Many machine learning models can be formulated as the above problem such as ridge linear regression, ridge logistic regression, smoothed support vector machines, graphical models, etc. This paper focuses on the first order methods for solving Problem (1), which access to the Proximal Incremental First-order Oracle (PIFO) for each individual component, that is,\nhf (x, i, γ) , [ fi(x),∇fi(x),proxγfi(x) ] , (2)\nwhere i ∈ {1, . . . , n}, γ > 0, and the proximal operation is defined as\nproxγfi(x) = arg min u\n{ fi(u) + 1\n2γ ‖x− u‖22\n} .\nWe also define the Incremental First-order Oracle (IFO)\ngf (x, i, γ) , [fi(x),∇fi(x)] .\nPIFO provides more information than IFO and it would be potentially more powerful than IFO in first order optimization algorithms. Our goal is to find an ε-suboptimal solution x̂ such that\nf(x̂)− min x∈Rd f(x) ≤ ε\nby using PIFO or IFO.\nThere are several first-order stochastic algorithms to solve Problem (1). The key idea to leverage the structure of f is variance reduction which is effective for ill-conditioned problems. For example, SVRG (Zhang et al., 2013; Johnson and Zhang, 2013; Xiao and Zhang, 2014) can\nfind an ε-suboptimal solution in O((n+κ) log(1/ε)) IFO calls, while the complexity of the classical Nesterov’s acceleration (Nesterov, 1983) is O(n √ κ log(1/ε)). Similar results1 also hold for SAG (Schmidt et al., 2017) and SAGA (Defazio et al., 2014). In fact, there exists an accelerated stochastic gradient method with √ κ dependency. Defazio (2016) introduced a simple and practical accelerated method called Point SAGA, which reduces the iteration complexity to O((n + √ κn) log(1/ε)). The advantage of Point SAGA is in that it has only one parameter to be tuned, but the iteration depends on PIFO rather than IFO. Allen-Zhu (2017) proposed the Katyusha momentum to accelerate variance reduction algorithms, which achieves the same iteration complexity as Point-SAGA but only requires IFO calls.\nThe lower bound complexities of IFO algorithms for convex optimization have been well studied (Agarwal and Bottou, 2015; Arjevani and Shamir, 2015; Woodworth and Srebro, 2016; Carmon et al., 2017; Lan and Zhou, 2017; Zhou and Gu, 2019). Specifically, Lan and Zhou (2017) showed that at least Ω((n+ √ κn) log(1/ε)) IFO calls2 are needed to obtain an ε-suboptimal solution for some complicated objective functions. This lower bound is optimal because it matches the upper bound complexity of Katyusha (Allen-Zhu, 2017).\nIt would be interesting whether we can establish a more efficient PIFO algorithm than IFO one. 
Woodworth and Srebro (2016) provided a lower bound Ω(n + √(κn) log(1/ε)) for PIFO algorithms, while the known upper bound of the PIFO algorithm Point SAGA (Defazio, 2016) is O((n + √(κn)) log(1/ε)). This difference in the dependency on n implies that the existing theory of PIFO algorithms is incomplete. The gap cannot be ignored because the number of components n is typically very large in many machine learning problems. A natural question is: can we design a PIFO algorithm whose upper bound complexity matches Woodworth and Srebro's lower bound, or can we improve the lower bound complexity of PIFO to match the upper bound of Point SAGA? In this paper, we prove that the lower bound complexity of PIFO algorithms is Ω((n + √(κn)) log(1/ε)) for smooth and strongly convex f_i, which means the existing Point-SAGA (Defazio, 2016) has achieved the optimal complexity and PIFO cannot lead to a tighter upper bound than IFO. We provide a novel construction, showing the above result by decomposing the classical tridiagonal matrix (Nesterov, 2013) into n groups. This technique is quite different from previous lower bound complexity analyses (Agarwal and Bottou, 2015; Woodworth and Srebro, 2016; Lan and Zhou, 2017; Zhou and Gu, 2019). Moreover, it is very friendly to the analysis of the proximal operation and easy to follow. We also use this technique to study the general convex and average smooth cases (Allen-Zhu, 2018; Zhou and Gu, 2019), and extend our result to non-convex problems (see Appendix J).\n\n2 OUR ANALYSIS FRAMEWORK\n\nIn this paper, we consider the Proximal Incremental First-order Oracle (PIFO) algorithm for smooth convex finite-sum optimization. Detailed proofs for this section can be found in Appendices C and D. We analyze the lower bounds of the algorithms when the objective functions are respectively strongly convex, general convex, smooth, and average smooth (Zhou and Gu, 2019).\nDefinition 2.1. For any differentiable function f : R^m → R,\n• f is convex, if for any x, y ∈ R^m it satisfies f(y) ≥ f(x) + ⟨∇f(x), y − x⟩.\n• f is µ-strongly convex, if for any x, y ∈ R^m it satisfies f(y) ≥ f(x) + ⟨∇f(x), y − x⟩ + (µ/2) ‖x − y‖₂².\n• f is L-smooth, if for any x, y ∈ R^m it satisfies ‖∇f(x) − ∇f(y)‖₂ ≤ L ‖x − y‖₂.\nDefinition 2.2. We say differentiable functions {f_i}_{i=1}^n, f_i : R^m → R, are L-average smooth if for any x, y ∈ R^m they satisfy\n(1/n) Σ_{i=1}^n ‖∇f_i(x) − ∇f_i(y)‖₂² ≤ L² ‖x − y‖₂².  (3)\nRemark 2.3. We point out that\n1. if each f_i is L-smooth, then {f_i}_{i=1}^n are L-average smooth;\n2. if {f_i}_{i=1}^n are L-average smooth, then f(x) = (1/n) Σ_{i=1}^n f_i(x) is L-smooth.\nWe present the formal definition of a PIFO algorithm.\nDefinition 2.4. Consider a stochastic optimization algorithm A to solve Problem (1). Let x_t be the point obtained at time-step t and suppose the algorithm starts with x_0. The algorithm A is said to be a PIFO algorithm if for any t ≥ 0 we have\nx_t ∈ span{ x_0, ..., x_{t−1}, ∇f_{i_1}(x_0), ..., ∇f_{i_t}(x_{t−1}), prox^{γ_1}_{f_{i_1}}(x_0), ..., prox^{γ_t}_{f_{i_t}}(x_{t−1}) },  (4)
where i_t is a random variable supported on [n] with P(i_t = j) = p_j for each t ≥ 0 and 1 ≤ j ≤ n, and Σ_{j=1}^n p_j = 1.\nWithout loss of generality, we assume x_0 = 0 and p_1 ≤ p_2 ≤ ... ≤ p_n to simplify our analysis. Otherwise, we can consider {f̂_i(x) = f_i(x + x_0)}_{i=1}^n instead. On the other hand, suppose that p_{s_1} ≤ p_{s_2} ≤ ... ≤ p_{s_n}, where {s_i}_{i=1}^n is a permutation of [n]. Define {f̃_i}_{i=1}^n such that f̃_{s_i} = f_i; then A takes component f̃_{s_i} with probability p_{s_i}, i.e., A takes f_i with probability p_{s_i}.\nTo demonstrate the construction of adversarial functions, we first introduce the following class of matrices: B(m, ω) ∈ R^{m×m} is the bidiagonal matrix whose first m − 1 rows have entries B_{l,l} = −1 and B_{l,l+1} = 1, and whose last row has the single entry ω in the first column. Then we define\nA(m, ω) ≜ B(m, ω)ᵀ B(m, ω), the tridiagonal matrix with diagonal (ω² + 1, 2, ..., 2, 1) and off-diagonal entries −1.  (5)\nThe matrix A(m, ω) is widely used in the analysis of lower bounds for convex optimization (Nesterov, 2013; Agarwal and Bottou, 2015; Lan and Zhou, 2017; Carmon et al., 2017; Zhou and Gu, 2019). We now present a decomposition of A(m, ω) based on Eq. (5). Denote the l-th row of the matrix B(m, ω) by b_l(m, ω)ᵀ and let\nL_i = { l : 1 ≤ l ≤ m, l ≡ i − 1 (mod n) }, i = 1, 2, ..., n.\nOur construction is based on the following class of functions\nr(x; λ0, λ1, λ2, m, ω) ≜ (1/n) Σ_{i=1}^n r_i(x; λ0, λ1, λ2, m, ω),\nwhere\nr_i(x; λ0, λ1, λ2, m, ω) = λ1 Σ_{l∈L_1} ‖b_l(m, ω)ᵀ x‖₂² + λ2 ‖x‖₂² − λ0 ⟨e_m, x⟩, for i = 1,\nand r_i(x; λ0, λ1, λ2, m, ω) = λ1 Σ_{l∈L_i} ‖b_l(m, ω)ᵀ x‖₂² + λ2 ‖x‖₂², for i = 2, 3, ..., n.  (6)\nWe can determine the smoothness and strong-convexity coefficients of the r_i as follows.\nProposition 2.5. For any λ1 > 0, λ2 ≥ 0, ω < √2, we have that the r_i are (4λ1 + 2λ2)-smooth and 2λ2-strongly convex, and {r_i}_{i=1}^n are L′-average smooth where\nL′ = 2 √( (4/n) [ (λ1 + λ2)² + λ1² ] + λ2² ).\nWe define the subspaces {F_k}_{k=0}^m as F_k = span{ e_m, e_{m−1}, ..., e_{m−k+1} } for 1 ≤ k ≤ m, and F_0 = {0}.\nThe following technical lemma plays a crucial role in our proof.\nLemma 2.6. For any λ0 ≠ 0, λ1 > 0, λ2 ≥ 0 and x ∈ F_k, 0 ≤ k < m, we have that ∇r_i(x; λ0, λ1, λ2, m, ω) and prox^γ_{r_i}(x) lie in F_{k+1} if k ≡ i − 1 (mod n), and in F_k otherwise.\nIn short, if x ∈ F_k and f_i(x) ≜ r_i(x; λ0, λ1, λ2, m, ω), then there exists only one i ∈ {1, ..., n} such that h_f(x, i, γ) can (and only it can) provide additional information in F_{k+1}. This "only one" property is important to the lower bound analysis for first-order stochastic optimization algorithms (Lan and Zhou, 2017; Zhou and Gu, 2019), but those prior constructions only work for IFO rather than PIFO.\nLemma 2.6 implies that x_t = 0 will hold until algorithm A draws the component f_1. Then, for any t < T_1 = min_t {t : i_t = 1}, we have x_t ∈ F_0 and x_{T_1} ∈ F_1. The value T_1 can be regarded as the smallest time at which x_{T_1} ∈ F_1 can hold. Similarly, we can define T_k to be the smallest time at which x_{T_k} ∈ F_k can hold. We give the formal definition of T_k recursively and connect it to geometrically distributed random variables in the following corollary.\nCorollary 2.7. Let T_0 = 0, and\nT_k = min_t { t : t > T_{k−1}, i_t ≡ k (mod n) } for k ≥ 1.  (7)\nThen for any k ≥ 1 and t < T_k, we have x_t ∈ F_{k−1}. Moreover, T_k can be written as a sum of k independent random variables {Y_l}_{1≤l≤k}, i.e., T_k = Σ_{l=1}^k Y_l, where Y_l follows a geometric distribution with success probability q_l = p_{l′}, where l′ ≡ l (mod n), 1 ≤ l′ ≤ n.\nThe basic idea of our analysis is to guarantee that the minimizer of r lies in F_m while ensuring that the PIFO algorithm expands the space span{x_0, x_1, ..., x_t} slowly as t increases. 
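The zero-chain behavior of Lemma 2.6 can be checked numerically; a minimal sketch follows. All parameter values are illustrative, and the row ordering used here (b_l supported on coordinates m − l and m − l + 1, with the ω row last) is a permutation of the rows of B(m, ω) above chosen so that the mod-n indexing of the lemma works out; permuting rows leaves A(m, ω) = BᵀB unchanged. Since each r_i is quadratic, both the gradient and the prox reduce to linear solves.

import numpy as np

m, n = 7, 3
lam0, lam1, lam2, omega, gamma = 1.0, 1.0, 0.5, 1.0, 0.1

B = np.zeros((m, m))
for l in range(1, m):                       # row l couples coordinates m-l and m-l+1
    B[l - 1, m - l] = 1.0
    B[l - 1, m - l - 1] = -1.0
B[m - 1, 0] = omega                         # last row: omega * e_1

def grad_and_prox(i, x):
    rows = [l - 1 for l in range(1, m + 1) if l % n == (i - 1) % n]   # group L_i
    Bi = B[rows]
    H = 2 * lam1 * Bi.T @ Bi + 2 * lam2 * np.eye(m)    # Hessian of r_i
    c = lam0 * np.eye(m)[-1] if i == 1 else np.zeros(m)  # linear term -lam0 <e_m, x>
    grad = H @ x - c
    prox = np.linalg.solve(H + np.eye(m) / gamma, c + x / gamma)
    return grad, prox

x = np.zeros(m)                             # x in F_0
for i in range(1, n + 1):
    g, p = grad_and_prox(i, x)
    print(i, np.flatnonzero(np.abs(g) > 1e-12), np.flatnonzero(np.abs(p) > 1e-12))
# Only i = 1 produces vectors with a nonzero e_m component, i.e., only the
# query to f_1 leaves F_0, matching Lemma 2.6 with k = 0.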
We know that span{x_0, x_1, ..., x_{T_k}} ⊆ F_k by Corollary 2.7. Hence, T_k is exactly the quantity that reflects how span{x_0, x_1, ..., x_t} grows. Because T_k can be written as a sum of geometrically distributed random variables, we need to introduce some properties of such random variables, from which the lower bounds of our construction are derived.\nLemma 2.8. Let {Y_i}_{1≤i≤N} be independent random variables, where Y_i follows a geometric distribution with success probability p_i. Then\nP( Σ_{i=1}^N Y_i > N² / (4 Σ_{i=1}^N p_i) ) ≥ 1 − 16/(9N).  (8)\nBuilding on Lemma 2.8, the following result implies how many PIFO calls we need.\nLemma 2.9. If M ≥ 1 satisfies min_{x∈F_M} f(x) − min_{x∈R^m} f(x) ≥ 9ε and N = n(M + 1)/4, then we have\nmin_{t≤N} E f(x_t) − min_{x∈R^m} f(x) ≥ ε.\n\n3 MAIN RESULTS\n\nWe present our lower bound results for PIFO algorithms and summarize all of them in Tables 1 and 2. We start with the smooth and strongly convex setting, then consider the general convex and average smooth cases.\nTheorem 3.1. For any PIFO algorithm A and any L, µ, n, ∆, ε such that κ = L/µ ≥ 2 and ε/∆ ≤ 0.5, there exist a dimension d = O(1 + √(κ/n) log(∆/ε)) and n L-smooth and µ-strongly convex functions {f_i : R^d → R}_{i=1}^n such that f(x_0) − f(x*) = ∆. In order to find x̂ ∈ R^d such that E f(x̂) − f(x*) < ε, A needs at least N queries to h_f, where\nN = Ω( (n + √(κn)) log(∆/ε) ) for n = O(κ), and N = Ω( n + ( n / (1 + (log(n/κ))_+) ) log(∆/ε) ) for κ = O(n).\nRemark 3.2. In fact, the lower bound in Theorem 3.1 perfectly matches the upper bound of the PIFO algorithm Point SAGA (Defazio, 2016) in the n = O(κ) case and matches the upper bound of the IFO algorithm prox-SVRG (Hannah et al., 2018) in the κ = O(n) case. (Defazio (2016) proves that Point SAGA requires O((n + √(κn)) log(1/ε)) PIFO calls to find x̂ such that E‖x̂ − x*‖₂² < ε‖x_0 − x*‖₂², which is not identical to the condition E f(x̂) − f(x*) < ε in Theorem 3.1; however, this causes no trouble because we also establish a PIFO lower bound Ω((n + √(κn)) log(1/ε)) for E‖x̂ − x*‖₂² < ε‖x_0 − x*‖₂² in Theorem F.1. Note also that an IFO algorithm is apparently a PIFO algorithm as well.) Hence, the lower bound in Theorem 3.1 is tight, while Woodworth and Srebro (2016) only provided the lower bound Ω(n + √(κn) log(1/ε)) in the n = O(κ) case. The theorem also shows that PIFO algorithms cannot be more powerful than IFO algorithms in the worst case, because Hannah et al. (2018) proved the same lower bound for IFO algorithms.\nNext we give the lower bound when the objective function is not strongly convex.\nTheorem 3.3. For any PIFO algorithm A and any L, n, B, ε such that ε ≤ LB²/4, there exist a dimension d = O(1 + B√(L/(nε))) and n L-smooth and convex functions {f_i : R^d → R}_{i=1}^n such that ‖x_0 − x*‖₂ ≤ B. In order to find x̂ ∈ R^d such that E f(x̂) − f(x*) < ε, A needs at least Ω(n + B√(nL/ε)) queries to h_f.\nRemark 3.4. The lower bound in Theorem 3.3 is the same as that of Woodworth and Srebro's result. However, our construction only requires the dimension to be O(1 + B√(L/(nε))), which is much smaller than the O((L²B⁴/ε²) log(nLB²/ε)) of (Woodworth and Srebro, 2016).\nThen we extend our results to a weaker assumption, namely that the objective function F is L-average smooth (Zhou and Gu, 2019). We start with the case that F is strongly convex.\nTheorem 3.5. 
For any PIFO algorithm A and any L, µ, n, ∆, ε such that κ = L/µ ≥ √(3/n) (n/2 + 1) and ε/∆ ≤ 0.00327, there exist a dimension d = O(n^{−1/4} √κ log(∆/ε)) and n functions {f_i : R^d → R}_{i=1}^n, where {f_i}_{i=1}^n are L-average smooth and f is µ-strongly convex, such that f(x_0) − f(x*) = ∆. In order to find x̂ ∈ R^d such that E f(x̂) − f(x*) < ε, A needs at least Ω((n + n^{3/4} √κ) log(∆/ε)) queries to h_f.\nRemark 3.6. Compared with Zhou and Gu's lower bound Ω(n + n^{3/4} √κ log(∆/ε)) for IFO algorithms, Theorem 3.5 shows a tighter dependency on n and additionally covers PIFO algorithms.\nWe also give the lower bound for the general convex case under the L-average smooth condition.\nTheorem 3.7. For any PIFO algorithm A and any L, n, B, ε such that ε ≤ LB²/4, there exist a dimension d = O(1 + B n^{−1/4} √(L/ε)) and n functions {f_i : R^d → R}_{i=1}^n, where {f_i}_{i=1}^n are L-average smooth and f is convex, such that ‖x_0 − x*‖₂ ≤ B. In order to find x̂ ∈ R^d such that E f(x̂) − f(x*) < ε, A needs at least Ω(n + B n^{3/4} √(L/ε)) queries to h_f.\nRemark 3.8. The lower bound in Theorem 3.7 is comparable to that of Zhou and Gu's result, but our construction only requires the dimension to be O(1 + B n^{−1/4} √(L/ε)), which is much smaller than the O(n + B n^{3/4} √(L/ε)) of (Zhou and Gu, 2019).\n\n4 CONSTRUCTIONS IN PROOF OF MAIN THEOREMS\n\nWe demonstrate the detailed constructions for the PIFO lower bounds in this section. All omitted proofs for this section can be found in the Appendix.\n\n4.1 STRONGLY CONVEX CASE\n\nThe analysis of the lower bound complexity for the strongly convex case depends on the following construction.\nDefinition 4.1. For fixed L, µ, ∆, n, let α = √( 2(L/µ − 1)/n + 1 ). We define f_{SC,i} : R^m → R as follows:\nf_{SC,i}(x) = r_i( x; √(2(L − µ)n∆) / (α − 1), (L − µ)/4, µ/2, m, √2 / (α + 1) ), for 1 ≤ i ≤ n,  (9)\nand\nF_{SC}(x) ≜ (1/n) Σ_{i=1}^n f_{SC,i}(x) = ((L − µ)/(4n)) ‖ B(m, √2/(α + 1)) x ‖₂² + (µ/2) ‖x‖₂² − √(2(L − µ)∆/n) (1/(α − 1)) ⟨e_m, x⟩.\nNote that the f_{SC,i} are L-smooth and µ-strongly convex, and F_{SC}(x_0) − F_{SC}(x*) = ∆ (see Proposition E.1 in the Appendix for more details). Next we show that the functions {f_{SC,i}}_{i=1}^n are "hard enough" for any PIFO algorithm A, and deduce the conclusion of Theorem 3.1.\nTheorem 4.2. Suppose that\nε ≤ (∆/9) ((α − 1)/(α + 1))², and m = (1/4) √( 2(L/µ − 1)/n + 1 ) log(∆/(9ε)) + 1,\nwhere α = √( 2(L/µ − 1)/n + 1 ). In order to find x̂ ∈ R^m such that E F_{SC}(x̂) − F_{SC}(x*) < ε, a PIFO algorithm A needs at least N queries to h_{F_SC}, where\nN = Ω( (n + √(nL/µ)) log(∆/(9ε)) ) for L/µ ≥ n/2 + 1, and N = Ω( n + ( n / (1 + log(nµ/L)) ) log(∆/(9ε)) ) for 2 ≤ L/µ < n/2 + 1.\nFor larger ε, we can apply the following lemma.\nLemma 4.3. For any PIFO algorithm A and any L, µ, n, ∆, ε such that ε ≤ ∆/2, there exist n L-smooth and µ-strongly convex functions {f_i : R → R}_{i=1}^n such that F(x_0) − F(x*) ≤ ∆. In order to find x̂ ∈ R such that E F(x̂) − F(x*) < ε, A needs at least Ω(n) queries to h_F.\nAs we explain in Remark H.1, the lower bound in Lemma 4.3 is the same as the lower bound in Theorem 4.2 for ε > (∆/9) ((α − 1)/(α + 1))². In conclusion, we obtain Theorem 3.1.\n\n4.2 CONVEX CASE\n\nThe analysis of the lower bound complexity for the non-strongly-convex case depends on the following construction.\nDefinition 4.4. For fixed L, B, n, we define f_{C,i} : R^m → R as follows:\nf_{C,i}(x) = r_i( x; √(3/2) BL / (m + 1)^{3/2}, L/4, 0, m, 1 )  (10)\nand\nF_C(x) ≜ (1/n) Σ_{i=1}^n f_{C,i}(x) = (L/(4n)) ‖B(m, 1) x‖₂² − √(3/2) ( BL / ((m + 1)^{3/2} n) ) ⟨e_m, x⟩.\nNote that the f_{C,i} are L-smooth and convex, and ‖x_0 − x*‖₂ ≤ B (see Proposition G.1 in the Appendix for more details). 
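The claimed smoothness and strong-convexity constants of Definitions 4.1 and 4.4 can be checked numerically through the Hessians of the components, which are 2λ1 B_iᵀB_i + 2λ2 I with B_i the rows of B(m, ω) in group L_i. A minimal sketch follows; the sizes m, n, L, µ are illustrative, and the B construction mirrors the sketch after Lemma 2.6.

import numpy as np

def group_hessians(m, n, lam1, lam2, omega):
    B = np.zeros((m, m))
    for l in range(1, m):
        B[l - 1, m - l], B[l - 1, m - l - 1] = 1.0, -1.0
    B[m - 1, 0] = omega
    for i in range(1, n + 1):
        Bi = B[[l - 1 for l in range(1, m + 1) if l % n == (i - 1) % n]]
        yield 2 * lam1 * Bi.T @ Bi + 2 * lam2 * np.eye(m)

m, n, L, mu = 9, 3, 10.0, 1.0
alpha = np.sqrt(2 * (L / mu - 1) / n + 1)
# Definition 4.1: each f_SC,i should have Hessian eigenvalues in [mu, L]
for H in group_hessians(m, n, (L - mu) / 4, mu / 2, np.sqrt(2) / (alpha + 1)):
    ev = np.linalg.eigvalsh(H)
    assert mu - 1e-9 <= ev.min() and ev.max() <= L + 1e-9
# Definition 4.4: lam2 = 0 and omega = 1, so eigenvalues should lie in [0, L]
for H in group_hessians(m, n, L / 4, 0.0, 1.0):
    ev = np.linalg.eigvalsh(H)
    assert -1e-9 <= ev.min() and ev.max() <= L + 1e-9
print("component Hessians match the claimed smoothness and strong convexity")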
Next we show the lower bound for functions fC,i defined above. Theorem 4.5. Suppose that\nε ≤ B 2L\n384n and m =\n⌊√ B2L\n24nε\n⌋ − 1.\nIn order to find x̂ ∈ Rm such that EFC(x̂)−FC(x∗) < ε,A needs at least Ω ( n+B √ nL ε ) queries to hFC .\nTo derive Theorem 3.3, we also need the following lemma in the case ε > B 2L\n384n .\nLemma 4.6. For any PIFO algorithm A and any L, n,B, ε such that ε ≤ LB2/4, there exist n L-smooth and convex functions {fi : R → R}ni=1 such that |x0 − x∗| ≤ B. In order to find x̂ ∈ R such that EF (x̂)− F (x∗) < ε, A needs at least Ω(n) queries to hF .\nIt is worth noting that if ε > B 2L 384n , then Ω(n) = Ω ( n+B √ nL ε ) . Thus combining Theorem 4.5 and Lemma 4.6, we obtain Theorem 3.3.\n4.3 AVERAGE SMOOTH CASE\nZhou and Gu (2019) established lower bounds of IFO complexity under the average smooth assumption. Here we demonstrate that our technique can also develop lower bounds of PIFO algorithm under this assumption.\n4.3.1 F IS STRONGLY CONVEX For fixed L′, µ,∆, n, ε, we set L = √\nn(L′2−µ2) 2 − µ2, and consider {fSC,i} n i=1 and FSC defined in\nDefinition 4.1. Proposition 4.7. For n ≥ 2, we have that\n1. FSC(x) is µ-strongly convex and {fSC,i}ni=1 is L′-average smooth.\n2. If L ′ µ ≥ √ 3 n ( n 2 + 1), then we have √ n 3L ′ ≤ L ≤ √ n 2L ′ and L/µ ≥ n/2 + 1.\nTheorem 4.8. Suppose that\nL′ µ ≥ √ 3 n (n 2 + 1 ) , ε ≤ ∆ 9 (√ 2− 1√ 2 + 1 )2 , and m = 1 4 √√ 2 n L′ µ + 1 log(∆ 9ε ) + 1.\nIn order to find x̂ ∈ Rm such that EFSC(x̂) − FSC(x∗) < ε, PIFO algorithm A needs at least Ω (( n+ n3/4 √ L′\nµ\n) log (\n∆ ε )) queries to hFSC .\n4.3.2 F IS CONVEX For fixed L′, B, n, ε, we set L = √\nn 2L ′, and consider {fC,i}ni=1 and FC defined in Definition 4.4.\nIt follows from Proposition 2.5 that {fC,i}ni=1 is L′-average smooth. Theorem 4.9. Suppose that\nε ≤ √ 2\n768 B2L′√ n and m =\n⌊ 4 √\n18\n12 Bn−1/4\n√ L′\nε\n⌋ − 1.\nIn order to find x̂ ∈ Rm such that EFC(x̂) − FC(x∗) < ε, A needs at least Ω ( n+Bn3/4 √ L′\nε ) queries to hFC . Similar to Lemma 4.6, we also need the following lemma for the case ε > √\n2 768 B2L′√ n .\nLemma 4.10. For any PIFO algorithm A and any L, n,B, ε such that ε ≤ LB2/4, there exist n functions {fi : R→ R}ni=1 which is L-average smooth, such that F (x) is convex and ‖x0−x∗‖2 ≤ B. In order to find x̂ ∈ R such that EF (x̂)− F (x∗) < ε, A needs at least Ω(n) queries to hF .\nSimilarly, note that if ε > √\n2 768 B2L′√ n\n, then Ω(n) = Ω ( n+Bn3/4 √ L′\nε\n) . In summary, we obtain\nTheorem 3.7.\n5 CONCLUSION AND FUTURE WORK\nIn this paper we have studied lower bound of PIFO algorithm for smooth convex finite-sum optimization. We have given a tight lower bound of PIFO algorithms in the strongly convex case. We have proposed a novel construction framework that is very useful to the analysis of proximal algorithms. Based on this framework, we can extended our result to non-strongly convex, average smooth and non-convex problems easily (Appendix J). It would be interesting to prove tight lower bounds in more general setting, such as F is of (σ, L)-smoothness while each fi is (l, L)-smoothness.\nREFERENCES\nAlekh Agarwal and Leon Bottou. A lower bound for the optimization of finite sums. In ICML, 2015.\nZeyuan Allen-Zhu. Katyusha: The first direct acceleration of stochastic gradient methods. Journal of Machine Learning Research, 18(1):8194–8244, 2017.\nZeyuan Allen-Zhu. Katyusha X: Practical momentum method for stochastic sum-of-nonconvex optimization. In ICML, 2018.\nYossi Arjevani and Ohad Shamir. 
Communication complexity of distributed convex learning and optimization. In NIPS, 2015.\nYair Carmon, John C. Duchi, Oliver Hinder, and Aaron Sidford. Lower bounds for finding stationary points I. arXiv preprint arXiv:1710.11606, 2017.\nAaron Defazio. A simple practical accelerated method for finite sums. In NIPS, 2016.\nAaron Defazio, Francis Bach, and Simon Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In NIPS, 2014.\nCong Fang, Chris Junchi Li, Zhouchen Lin, and Tong Zhang. Spider: Near-optimal non-convex optimization via stochastic path-integrated differential estimator. In NIPS, 2018.\nRobert Hannah, Yanli Liu, Daniel O’Connor, and Wotao Yin. Breaking the span assumption yields fast finite-sum minimization. In Advances in Neural Information Processing Systems, pages 2312–2321, 2018.\nRie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS, 2013.\nGuanghui Lan and Yi Zhou. An optimal randomized incremental gradient method. Mathematical programming, pages 1–49, 2017.\nYurii Nesterov. A method for solving the convex programming problem with convergence rate o(1/kˆ2). In Dokl. akad. nauk Sssr, volume 269, pages 543–547, 1983.\nYurii Nesterov. Introductory lectures on convex optimization: A basic course, volume 87. Springer Science & Business Media, 2013.\nMark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic average gradient. Mathematical Programming, 162(1-2):83–112, 2017.\nBlake Woodworth and Nathan Srebro. Tight complexity bounds for optimizing composite objectives. In NIPS, 2016.\nLin Xiao and Tong Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4):2057–2075, 2014.\nLijun Zhang, Mehrdad Mahdavi, and Rong Jin. Linear convergence with condition number independent access of full gradients. In NIPS, 2013.\nDongruo Zhou and Quanquan Gu. Lower bounds for smooth nonconvex finite-sum optimization. In ICML, 2019.\nA COMPARISON OF REQUIRED NUMBER OF DIMENSIONS\nB COMPARISON WITH EXISTING PROOFS\nRecall the adversary function we used is (please see detailed defintion in Section 2)\nr(x;λ0, λ1, λ2,m, ω) , 1\nn n∑ i=1 ri(x;λ0, λ1, λ2,m, ω) (11)\n= λ1 n x>A(m,ω)x + λ2 ‖x‖22 − λ0 n 〈em,x〉, (12)\nwhere\nri(x;λ0, λ1, λ2,m, ω) = λ1 ∑ l∈L1 ∥∥bl(m,ω)>x∥∥22 + λ2 ‖x‖22 − λ0〈em,x〉, for i = 1, λ1 ∑ l∈Li\n∥∥bl(m,ω)>x∥∥22 + λ2 ‖x‖22 , for i = 2, 3, . . . , n. The constructions in previous work (Lan and Zhou, 2017; Zhou and Gu, 2019) for IFO algorithms employ an aggregation of r , that is,\nf(x) , 1\nn n∑ i=1 fi(x), where fi(x) = nr(xi),\nx = x1 x2 ... xn ∈ Rmn, and xi ∈ Rm for i = 1, . . . , n.\nThe disadvantage of their construction is in that the important property from Lemma 2.6, which we can obtain information of only one extra dimension at each PIFO query, cannot be held. Note that the framework in this paper is the first lower bound analysis that utilize the decomposition form (11), which makes the “only one” property also hold for PIFO query. The previous works only consider the presentation (12) and are unaware of the decomposition (11). 
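As a concrete numerical illustration of this "only one" property (an illustrative sketch we add here, not code from the paper; all parameter values are arbitrary), one can verify that, starting from any point of F_k, only the component f_i with k ≡ i−1 (mod n) produces a gradient that leaves F_k; by Lemma 2.6, the proximal oracle behaves in the same way:

```python
import numpy as np

def build_B(m, omega):
    """B(m, omega): row l couples coordinates (m-l, m-l+1); last row is omega*e_1."""
    B = np.zeros((m, m))
    for l in range(1, m):
        B[l - 1, m - l - 1], B[l - 1, m - l] = -1.0, 1.0
    B[m - 1, 0] = omega
    return B

def grad_f_i(B, rows, lam0, lam1, lam2, i, x):
    """Gradient of f_i in (12): 2*lam1*B_i^T B_i x + 2*lam2*x - eta_i*e_m,
    with eta_1 = lam0 and eta_i = 0 for i >= 2."""
    Bi = B[rows]
    g = 2 * lam1 * Bi.T @ (Bi @ x) + 2 * lam2 * x
    if i == 1:
        g[-1] -= lam0
    return g

def smallest_k(v, tol=1e-12):
    """Smallest k such that v lies in F_k = span{e_m, ..., e_{m-k+1}}."""
    nz = np.flatnonzero(np.abs(v) > tol)
    return 0 if nz.size == 0 else v.size - nz[0]

rng = np.random.default_rng(0)
m, n, k = 12, 3, 4
lam0, lam1, lam2 = 1.0, 0.25, 0.05
B = build_B(m, omega=0.7)
groups = [[l - 1 for l in range(1, m + 1) if l % n == (i - 1) % n]
          for i in range(1, n + 1)]
x = np.zeros(m)
x[m - k:] = rng.standard_normal(k)  # a generic point of F_k
for i in range(1, n + 1):
    g = grad_f_i(B, groups[i - 1], lam0, lam1, lam2, i, x)
    # Lemma 2.6: only the component with k = i-1 (mod n) reaches F_{k+1}.
    assert (smallest_k(g) > k) == (k % n == (i - 1) % n)
```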
Moreover, the fact r : Rm → R and f : Rmn → R provide an intuitive understanding why our construction requires a smaller dimension (see Table 2).\nThe analysis in (Woodworth and Srebro, 2016; Fang et al., 2018) considers a very complicated approach to dealing with the proximal operator (completely different from how to deal with gradient operator). In contrast, our construction holds “only one” property (Lemma 2.6) both for proximal and gradient operator, which leads the proof is more concise. Our construction more clearly shows that PIFO algorithms are not more powerful than IFO algorithms in the sense of lower complexity bound. We also use our technique to prove the tight lower bound of PIFO algorithm when κ = O(n), which is a new result.\nC DETAILED PROOF FOR SECTION 2\nIn this section, we use ‖A‖ to denote the spectral radius of A. For simplicity, let\nB = B(m,ω) = −1 1 −1 1 . . . . . .\n−1 1 ω ∈ Rm×m, b>l is the l-th row of B, and fi(x) = ri(x;λ0, λ1, λ2,m, ω). Recall that Li = {l : 1 ≤ l ≤ m, l ≡ i− 1(modn)}, i = 1, 2, · · · , n. For 1 ≤ i ≤ n, let Bi be a submatrix which is formed from rows Li of B, that is Bi = B[Li; ]\nThen fi can be wriiten as\nfi(x) = λ1 ‖Bix‖22 + λ2 ‖x‖ 2 2 − ηi〈em,x〉,\nwhere η1 = λ0, ηi = 0, i ≥ 2.\nProof of Proposition 2.5. Note that\n〈u,B>i Biu〉 = ‖Biu‖ 2 2 = ∑ l∈Li (b>l u) 2\n=\n{∑ l∈Li\\{m}(um−l − um−l+1)\n2 + ω2u2m (if m ∈ Li)∑ l∈Li(um−l − um−l+1) 2\n≤ 2 ‖u‖22 ,\nwhere the last inequality is according to (x+y)2 ≤ 2(x2 +y2), and |l1− l2| ≥ n ≥ 2 for l1, l2 ∈ Li. Hence, ∥∥B>i Bi∥∥ ≤ 2, and∥∥∇2fi(x)∥∥ = ∥∥2λ1B>i Bi + 2λ2I∥∥ ≤ 4λ1 + 2λ2. Next, observe that\n‖∇fi(x)−∇fi(y)‖22 = ∥∥(2λ1B>i Bi + 2λ2I)(x− y)∥∥22\nLet u = x− y. Note that\nblb > l u = { (um−l − um−l+1)(em−l − em−l+1), l < m, ω2u1e1, l = m.\nThus, if m /∈ Li, then∥∥(2λ1B>i Bi + 2λ2I)u∥∥22 =\n∥∥∥∥∥2λ1 ∑ l∈Li (um−l − um−l+1)(em−l − em−l+1) + 2λ2u ∥∥∥∥∥ 2\n2 = ∑\nm−l∈Li\n[ (2λ1(ul − ul+1) + 2λ2ul)2 + (−2λ1(ul − ul+1) + 2λ2ul+1)2 ] + ∑ m−l/∈Li m−l+1/∈Li (2λ2ul) 2\n≤ ∑\nm−l∈Li\n8 [ (λ1 + λ2) 2 + λ21 ] (u2l + u 2 l+1) + 4λ 2 2 ‖u‖ 2 2 .\nSimilarly, if m ∈ Li, then\n∥∥(2λ1B>i Bi + 2λ2I)u∥∥22 ≤\n∑ m−l∈Li l 6=0 8 [ (λ1 + λ2) 2 + λ21 ] (u2l + u 2 l+1) + 4(λ1ω 2 + λ2) 2u21 + 4λ 2 2 ‖u‖ 2 2 .\nTherefore, we have\n1 n n∑ i=1 ‖∇fi(x)−∇fi(y)‖22\n≤ 1 n [ m−1∑ l=1 8 [ (λ1 + λ2) 2 + λ21 ] (u2l + u 2 l+1) + 4(2λ1 + λ2) 2u21 ] + 4λ22 ‖u‖ 2 2 ≤ 16 n [ (λ1 + λ2) 2 + λ21 ] ‖u‖22 + 4λ 2 2 ‖u‖ 2 2 ,\nwhere we have used (2λ1 + λ2)2 ≤ 2 [ (λ1 + λ2) 2 + λ21 ] . In summary, we get that {fi}1≤i≤n is L′-average smooth, where\nL′ = 2\n√ 4\nn [(λ1 + λ2)2 + λ21] + λ 2 2.\nProof of Lemma 2.6. For x ∈ Fk (k ≥ 1), we have\nb>l x = 0 for l > k, bl ∈ Fk for l < k, bk ∈ Fk+1.\nConsequently, for l 6= k, blb>l x = (b>l x)bl ∈ Fk, and bkb>k x ∈ Fk+1. For k = 0, we have x = 0, and\n∇f1(x) = λ0em ∈ F1, ∇fj(x) = 0 (j ≥ 2).\nMoreover, we suppose k ≥ 1, k ∈ Li. Since ∇fj(x) = 2λ1B>j Bjx + 2λ2x− ηjem\n= 2λ1 ∑ l∈Lj b>l blx + 2λ2x− ηjem.\nHence, ∇fi(x) ∈ Fk+1 and ∇fj(x) ∈ Fk (j 6= i).\nNow, we turn to consider u = proxγfj (x). We have( 2λ1B > j Bj + ( 2λ2 + 1\nγ\n) I ) u = ηjem + 1\nγ x,\ni.e.,\nu = c1(I + c2B > j Bj) −1y,\nwhere c1 = 12λ2+1/γ , c2 = 2λ1 2λ2+1/γ , and y = ηjem + 1γx. Note that\n(I + c2B > j Bj) −1 = I −B>j ( 1\nc2 I + BjB\n> j )−1 Bj .\nIf k = 0 and j > 1, we have y = 0 and u = 0. If k = 0 and j = 1, we have y = λ0em. On this case, B1em = 0, so u = c1y ∈ F1. For k ≥ 1, we know that y ∈ Fk. 
And observe that if |l − l′| ≥ 2, then b>l bl′ = 0, and consequently BjB>j is a diagonal matrix, so we can assume that 1 c2 I+BjB > j = diag(βj,1, · · · , βj,|Lj |). Therefore,\nu = c1y − c1 |Lj |∑ s=1 βj,sblj,sb > lj,sy,\nwhere we assume that Lj = {lj,1, · · · , lj,|Lj |}. Thus, we have proxγfi(x) ∈ Fk+1 for k ∈ Li and prox γ fj (x) ∈ Fk (j 6= i).\nProof of Corollary 2.7. Denote span{∇fi1(x0), · · · ,∇fit(xt−1),prox\nγ1 fi1 (x0), · · · ,proxγtfit (xt−1)}\nbyMt. We know that xt ∈Mt. Suppose thatMT ⊆ Fk−1 for some T and let T ′ = arg min t : t > T, it ≡ k(mod n). By Lemma 2.6, for T < t < T ′, we can use a simple induction to obtain that\nspan{∇fit(xt−1),prox γt fit (xt−1)} ⊆ Fk−1 andMt ⊆ Fk−1. Moreover, since iT ′ ≡ k(mod n), we have\nspan{∇fiT ′ (xT ′−1),prox γT ′ fi T ′ (xT ′−1)} ⊆ Fk andMT ′ ⊆ Fk. Following from above statement, it is easily to check that for t < Tk, we have xt ∈Mt ⊆ Fk−1. Next, note that\nP (Tk − Tk−1 = s) = P ( iTk−1+1 6≡ k(mod n), · · · , iTk−1+s−1 6≡ k(mod n), iTk−1+s ≡ k(mod n) ) = P ( iTk−1+1 6= k′, · · · , iTk−1+s−1 6= k′, iTk−1+s = k′\n) = (1− pk′)s−1pk′ ,\nwhere k′ ≡ k(mod n), 1 ≤ k′ ≤ n. So Tk − Tk−1 is a geometric random variable with success probability pk′ . On the other hand, Tk − Tk−1 is just dependent on iTk−1+1, · · · , iTk , thus for l 6= k, Tl − Tl−1 is independent with Tk − Tk−1. Therefore,\nTk = k∑ l=1 (Tl − Tl−1) = k∑ i=1 Yl,\nwhere Yl follows a geometric distribution with success probability ql = pl′ where l′ ≡ l(mod n), 1 ≤ l′ ≤ n.\nProof of Remark 2.3. If each fi is L-smooth, then for any x,y ∈ Rm we have\n‖∇fi(x)−∇fi(y)‖22 ≤ L 2 ‖x− y‖22 ,\nand consequently,\n1 n n∑ i=1 ‖∇fi(x)−∇fi(y)‖22 ≤ L 2 ‖x− y‖22 . (13)\nIf {fi}ni=1 is L-average smooth, then for any x,y ∈ Rm we have\n‖∇f(x)−∇f(y)‖22 = 1\nn2 ∥∥∥∥∥ n∑ i=1 (∇fi(x)−∇fi(y)) ∥∥∥∥∥ 2\n2\n≤ 1 n2 ( n∑ i=1 ‖∇fi(x)−∇fi(y)‖2 )2\n≤ 1 n n∑ i=1 ‖∇fi(x)−∇fi(y)‖22 ≤ L2 ‖x− y‖22 .\nProof of Lemma 2.9. Denote minx∈Rm f(x) by f∗. For t ≤ N , we have Ef(xt)− f∗ ≥ E[f(xt)− f∗|N < TM+1]P (N < TM+1)\n≥ E[ min x∈FM f(x)− f∗|N < TM+1]P (N < TM+1)\n≥ 9εP (TM+1 > N) , where TM+1 is defined in (7), and the second inequality follows from Corollary 2.7 (if N < TM+1, then xt ∈ FM for t ≤ N ).\nBy Corollary 2.7, TM+1 can be written as TM+1 = ∑M+1 l=1 Yl, where {Yl}1≤l≤M+1 are independent random variables, and Yl follows a geometric distribution with success probability ql = pl′ (l′ ≡ l(mod n), 1 ≤ l′ ≤ n). Moreover, recalling that p1 ≤ p2 ≤ · · · ≤ pn, we have ∑M+1 l=1 ql ≤ M+1 n . Therefore, by Lemma 2.8, we have\nP (TM+1 > N) = P ( M+1∑ l=1 Yl > (M + 1)n 4 ) ≥ 1− 16 9(M + 1) ≥ 1 9 ,\nHence, we can conclude that Ef(xN )− f∗ ≥ 9εP (TM+1 > N) ≥ ε.\nRemark In fact, a more strong conclusion hosts: E [ min t≤N f(xt) ] − min x∈Rm f(x) ≥ ε.\nD RESULTS ABOUT SUM OF GEOMETRIC DISTRIBUTED RANDOM VARIABLES\nLemma D.1. Let X1 ∼ Geo(p1), X2 ∼ Geo(p2) be independent random variables. For any positive integer j, if p1 6= p2, then\nP (X1 +X2 > j) = p2(1− p1)j − p1(1− p2)j\np2 − p1 , (14)\nand if p1 = p2, then\nP (X1 +X2 > j) = jp1(1− p1)j−1 + (1− p1)j . (15)\nProof.\nP (X1 +X2 > j) = j∑ l=1 P (X1 = l)P (X2 > j − l) + P (X1 > j)\n= j∑ l=1 (1− p1)l−1p1(1− p2)j−l + (1− p1)j\n= p1(1− p2)j−1 j∑ l=1 ( 1− p1 1− p2 )l−1 + (1− p1)j\nThus if p1 = p2, P (X1 +X2 > j) = jp1(1− p1)j−1 + (1− p1)j . For p1 6= p2,\nP (X1 +X2 > j) = p1 (1− p1)j − (1− p2)j\np2 − p1 + (1− p1)j\n= p2(1− p1)j − p1(1− p2)j\np2 − p1 .\nLemma D.2. For x ≥ 0 and j ≥ 2,\n1− j − 1 x+ j/2\n≤ ( x\nx+ 1\n)j−1 . (16)\nProof. 
We just need to show that (x+ 1)j−1(x+ j/2)− (j − 1)(x+ 1)j−1 ≤ xj−1(x+ j/2), that is\n(x+ 1)j − j(x+ 1)j−1/2− xj−1(x+ j/2) ≤ 0,\ni.e., j−2∑ l=0 [( j l ) − j 2 ( j − 1 l )] xl ≤ 0.\nNote that for l ≤ j − 2, ( j\nl\n) − j\n2 ( j − 1 l ) = ( 1− j − l 2 )( j l ) ≤ 0,\nthus inequality (16) hosts for x ≥ 0 and j ≥ 2.\nLemma D.3. Let X1 ∼ Geo(p1), X2 ∼ Geo(p2), Y1, Y2 ∼ Geo ( p1+p2\n2\n) be independent random\nvariables with 0 < p1 ≤ p2 ≤ 1. Then for any positive integer j, we have\nP (X1 +X2 > j) ≥ P (Y1 + Y2 > j) .\nProof. If j = 1, then P (X1 +X2 > j) = 1 = P (Y1 + Y2 > j). If p1 = p2 = 1, then P (X1 +X2 > j) = 0 = P (Y1 + Y2 > j) for j ≥ 2.\nLet j ≥ 2, and c , p1 + p2 < 2 be a given constant.\nWe prove that f(p1) , P (X1 +X2 > j) is a decreasing function.\nEmploying equation (14), for p1 < c/2, we have\nf(p1) = (c− p1)(1− p1)j − p1(1 + p1 − c)j\nc− 2p1 ,\nand f ′(p1) = −(1− p1)j − j(c− p1)(1− p1)j−1 − (1 + p1 − c)j − jp1(1 + p1 − c)j−1\nc− 2p1\n+ 2 (c− p1)(1− p1)j − p1(1 + p1 − c)j\n(c− 2p1)2\n= [c(1− p1)− j(c− p1)(c− 2p1)](1− p1)j−1 − [c(1 + p1 − c) + jp1(c− 2p1)](1 + p1 − c)j−1\n(c− 2p1)2 .\nHence f ′(p1) < 0 is equivalent to\nc(1− p1)− j(c− p1)(c− 2p1) c(1 + p1 − c) + jp1(c− 2p1) <\n( 1 + p1 − c\n1− p1\n)j−1 . (17)\nNote that c(1− p1)− j(c− p1)(c− 2p1) c(1 + p1 − c) + jp1(c− 2p1)\n= 1− (j − 1)c(c− 2p1) c(1 + p1 − c) + jp1(c− 2p1) = 1− j − 11+p1−c c−2p1 + j p1 c\nDenote x = 1+p1−cc−2p1 . If c ≤ 1, then p1 > 0 and x > 1−c c ≥ 0. And if c > 1, then p1 ≥ c − 1 and x ≥ 1+c−1−c2−c = 0. Rewrite inequality (17) as\n1− j − 1 x+ jp1/c <\n( x\nx+ 1\n)j−1 .\nRecall inequality (16), we have( x\nx+ 1\n)j−1 ≥ 1− j − 1\nx+ j/2 > 1− j − 1 x+ jp1/c .\nConsequently, f ′(p1) < 0 hosts for p1 < c/2 and j ≥ 2. With the fact that limp1→c/2 f(p1) = f(c/2) according to equation (15), we have\nP (X1 +X2 > j) ≥ P (Y1 + Y2 > j) .\nfor any positive integer j and 0 < p1 ≤ p2 ≤ 1.\nCorollary D.4. LetX1 ∼ Geo(p1), X2 ∼ Geo(p2), Y1, Y2 ∼ Geo ( p1+p2\n2\n) be independent random\nvariables with 0 < p1 ≤ p2 ≤ 1. Suppose Z is a random variable that takes nonnegative integer values, and Z is independent with X1, X2, Y1, Y2. Then for any positive integer j, we have\nP (Z +X1 +X2 > j) ≥ P (Z + Y1 + Y2 > j) .\nProof. With applying Lemma D.3, we have\nP (Z +X1 +X2 > j) = j−1∑ l=0 P (Z = l)P (X1 +X2 > l − j) + P (Z > j − 1)\n≥ j−1∑ l=0 P (Z = l)P (Y1 + Y2 > l − j) + P (Z > j − 1) = P (Z + Y1 + Y2 > j) .\nCorollary D.5. Let {Xi}1≤i≤m be independent variables, and Xi follow a geometric distribution with success probability pi. For any positive integer j, we have\nP ( m∑ i=1 Xi ≥ j ) ≥ P ( m∑ i=1 Yi ≥ j ) ,\nwhere {Yi}1≤i≤m are i.i.d. random variables, Yi ∼ Geo( ∑m i=1 pi/m), and Yi is independent with Xi′(1 ≤ i′ ≤ m).\nProof. Let\nf(p1, p2, · · · , pm) , P ( m∑ i=1 Xi ≥ j ) .\nOur goal is to minimize f(p1, p2, · · · , pm) such that ∑m i=1 pi = S < 1. By Corollary D.4, we know that\nf(p1, p2, · · · , pi, · · · , pj , · · · , pm) ≥ f(p1, p2, · · · , pi + pj 2 , · · · , pi + pj 2 , · · · , pm).\nThis fact implies that (p1, p2, · · · , pm) such that p1 = p2 = · · · = pm = S/m is a minimizer of the function f .\nLemma D.6. Let {Xi}1≤i≤m be i.i.d. random variables, and Xi follows a geometric distribution with success probability p. We have\nP ( m∑ i=1 Xi > m 4p ) ≥ 1− 16 9m (18)\nProof. Denote ∑m i=1Xi by τ . 
We know that\nEτ = m\np , Var(τ) = m(1− p) p2 .\nHence, we have\nP ( τ > 1 4 Eτ ) = P ( τ − Eτ > −3 4 Eτ )\n= 1− P ( τ − Eτ ≤ −3 4 Eτ )\n≥ 1− P ( |τ − Eτ | ≥ 3 4 Eτ )\n≥ 1− 16Var(τ) 9(Eτ)2 = 1− 16m(1− p) 9m2 ≥ 1− 16 9m .\nCorollary D.7. Let {Xi}1≤i≤m be independent random variables, and Xi follows a geometric distribution with success probability pi. Then\nP ( m∑ i=1 Xi > m2 4( ∑m i=1 pi) ) ≥ 1− 16 9m .\nE PROOF OF THEOREM 4.2\nProposition E.1. For any n ≥ 2, m ≥ 2, fSC,i and FSC in Definition 4.1 satisfy:\n1. fSC,i is L-smooth and µ-strongly convex.\n2. The minimizer of the function FSC is\nx∗ = arg min x∈Rm FSC(x) =\n√ 2∆n(α+ 1)2\n(L− µ)(α− 1) (qm, qm−1, · · · , q)>,\nwhere q = α−1α+1 . Moreover, FSC(x ∗) = −∆.\n3. For 1 ≤ k ≤ m− 1, we have min x∈Fk FSC(x)− FSC(x∗) ≥ ∆q2k. (19)\nProof.\n1. Just recall Proposition 2.5. 2. Denote ξ = √ 2∆n(α+1)2\n(L−µ)(α−1) .\nLet ∇FSC(x) = 0, that is( L− µ\n2n A\n(√ 2\nα+ 1\n) + µI ) x =\nL− µ n(α+ 1) ξem,\nor ω2 + 1 + 2nµL−µ −1 −1 2 + 2nµL−µ −1 . . . . . .\n−1 2 + 2nµL−µ −1 −1 1 + 2nµL−µ\nx = \n0 0 ... 0 2ξ α+1\n (20)\nNote that q = α−1α+1 is a root of the equation\nz2 − ( 2 + 2nµ\nL− µ\n) z + 1 = 0,\nand ω2 + 1 + 2nµ\nL− µ =\n1 q ,\n2 α+ 1 = 1− q = −q2 + (1 + 2nµ L− µ )q.\nHence, it is easily to check that the solution to Equation (20) is\nx∗ = ξ(qm, qm−1, · · · , q)>,\nand FSC(x\n∗) = − L− µ 2n(α+ 1) ξ2q = −∆.\n3. If x ∈ Fk, 1 ≤ k < m, then x1 = x2 = · · · = xm−k = 0. Let y = xm−k+1:m ∈ Rk and Ak be last k rows and columns of the matrix in Equation (21). Then we can rewrite F (x) as\nFk(y) , FSC(x) = L− µ\n4n y>Aky − L− µ n(α+ 1) ξ〈em,y〉.\nLet ∇Fk(y) = 0, that is 2 + 2nµL−µ −1 −1 2 + 2nµL−µ −1 . . . . . . −1 2 + 2nµL−µ −1\n−1 1 + 2nµL−µ\ny = \n0 0 ... 0 2ξ α+1\n . (21)\nBy some calculation, the solution to above equation is\nξqk+1 1 + q2k+1 ( q−1 − q, q−2 − q2, · · · , q−k − qk )> .\nThus\nmin x∈Fk FSC(x) = min y∈Rk\nFk(y) = − L− µ\n2n(α+ 1) ξ2q\n1− q2k\n1 + q2k+1 = ∆\n1− q2k\n1 + q2k+1 ,\nand\nmin x∈Fk\nFSC(x)− FSC(x∗) = ∆ ( 1− 1− q 2k\n1 + q2k+1 ) = ∆q2k 1 + q\n1 + q2k+1 ≥ ∆q2k.\nProof of Theorem 4.2. Let M = ⌊\nlog(9ε/∆) 2 log q\n⌋ , then we have\narg min x∈FM\nFSC(x)− FSC(x∗) ≥ ∆q2M ≥ 9ε,\nwhere the first inequality is according to the third property of Proposition E.1.\nFollowing from Lemma 2.9, for M ≥ 1 and N = (M + 1)n/4, we have min t≤N EFSC(xt)− FSC(x∗) ≥ ε. Therefore, in order to find x̂ ∈ Rm such that EFSC(x̂)− FSC(x∗) < ε, A needs at least N queries to hFSC . We estimate − log(q) and N in two cases.\n1. If L/µ ≥ n/2 + 1, then α = √ 2L/µ−1n + 1 ≥ √\n2. Observe that function h(β) = 1\nlog( β+1β−1 ) − β2 is increasing when β > 1. Thus, we have\n− 1 log(q) = 1 log ( α+1 α−1 ) ≥ α 2 + h( √ 2)\n= 1\n2\n√ 2 L/µ− 1\nn + 1 + h(\n√ 2)\n≥ √ 2\n4\n(√ 2 L/µ− 1\nn + 1\n) + h( √ 2)\n≥ 1 2\n√ L/µ− 1\nn +\n√ 2\n4 + h( √ 2),\nand\nN = (M + 1)n/4 = n\n4\n(⌊ log(9ε/∆)\n2 log q\n⌋ + 1 ) ≥ n\n8\n( − 1\nlog(q)\n) log ( ∆\n9ε ) ≥ n\n8\n( 1\n2\n√ L/µ− 1\nn +\n√ 2\n4 + h( √ 2)\n) log ( ∆\n9ε\n)\n= Ω (( n+ √ nL\nµ\n) log ( ∆\n9ε\n))\n2. If 2 ≤ L/µ < n/2 + 1, then we have − log(q) = log ( α+ 1\nα− 1\n) = log ( 1 +\n2(α− 1) α2 − 1\n)\n= log 1 + √\n2L/µ−1n + 1− 1 L/µ−1 n ≤ log(1 + (√2− 1)n L/µ− 1 )\n≤ log\n( ( √\n2− 1/2)n L/µ− 1\n) ≤ log ( (2 √\n2− 1)n L/µ\n) , (22)\nwhere the first inequality and second inequality follow from L/µ − 1 < n/2 and the last inequality is according to 1x−1 ≤ 2 x for x ≥ 2. Note that n ≥ 2, thus nn−1 ≤ 2 ≤ n L/µ−1 , and hence n ≥ L/µ, i.e. 
log(nµ/L) ≥ 0.\nTherefore,\nN = (M + 1)n/4 ≥ n 8\n( − 1\nlog(q)\n) log ( ∆\n9ε ) = Ω (( n\n1 + log(nµ/L)\n) log ( ∆\n9ε\n)) .\nRecalling that we assume that 9ε/∆ ≤ q2, thus we have\nN ≥ n 8\n( − 1\nlog(q)\n) log ( ∆\n9ε\n) ≥ n\n8\n( − 1\nlog(q)\n) (−2 log(q)) = n\n4 .\nTherefore, N = Ω ( n+ ( n\n1+log(nµ/L)\n) log (\n∆ 9ε\n)) .\nAt last, we must to ensure that 1 ≤M < m, that is\n1 ≤ log(9ε/∆) 2 log q < m. (23)\nNote that limβ→+∞ h(β) = 0, so −1/ log(q) ≤ α/2. Thus the above conditions are satisfied when\nm = log(∆/(9ε)) 2(− log q) + 1 ≤ 1 4\n(√ 2 L/µ− 1\nn + 1\n) log ( ∆\n9ε\n) + 1 = O (√ L\nnµ log\n( ∆\nε\n)) ,\nand\nε ∆ ≤ 1 9 ( α− 1 α+ 1 )2 .\nF LOWER BOUND FOR ANOTHER FORM OF SUBOPTIMAL SOLUTION\nDefazio (2016) showed that the PIFO algorithm Point SAGA has the convergence result E ‖xt − x∗‖22 ≤ (q′)t ‖x0 − x∗‖2, where q′ satisfies −1/ log(q′) = O ( n+ √ nL/µ ) . To match this form of upper bound, we point out that a similar result holds for {fSC,i}ni=1. Theorem F.1. Suppose that\nL µ ≥ n 2 + 1, ε ≤ 1 18 (√ 2− 1√ 2 + 1 )2 , and m = 1 2 (√ 2 L/µ− 1 n + 1 ) log ( 1 18ε ) + 1.\nIn order to find x̂ ∈ Rm such that E ‖x̂− x∗‖22 < ε ‖x0 − x∗‖ 2 2, PIFO algorithm A needs at least Ω (( n+ √ nL µ ) log ( 1 ε )) queries to hFSC . Proof. Denote ξ = √ 2∆n(α+1)2 (L−µ)(α−1) , and M = ⌊ log(18ε) 2 log q ⌋ . For 1 ≤M ≤ m/2, N = n(M + 1)/4 and t ≤ N , we have\nE ‖xt − x∗‖22 ≥ E [ ‖xt − x∗‖22 ∣∣∣∣N < TM+1]P (N < TM+1) ≥ E [ min\nx∈FM ‖x− x∗‖22 ∣∣∣∣N < TM+1]P (N < TM+1) ≥ 1\n9 min x∈FM ‖x− x∗‖22 .\nwhere TM+1 is defined in (7), the second inequality follows from Corollary 2.7 (if N < TM+1, then xt ∈ FM for t ≤ N ), and the last inequality is established because of our Corollary 2.7 (More detailed explanation refer to our proof of Lemma 2.9). By Proposition E.1, we know that x∗ = ξ(qm, qm−1, · · · , q)>, and\n‖x0 − x∗‖22 = ‖x ∗‖22 = ξ\n2 q 2 − q2(m+1)\n1− q2 .\nNote that if x ∈ FM , then x1 = x2 = · · · = xm−M = 0, thus\nmin x∈FM\n‖x− x∗‖22 = ξ 2 m∑ l=m−M q2(m−l+1) = ξ2 q2(M+1) − q2(m+1) 1− q2 .\nThus, for t ≤ N and M ≤ m/2, we have\nE ‖xt − x∗‖22 ‖xt − x∗‖22 ≥ 1 9 q2M − q2m 1− q2m\n≥ 1 18 q2M = 1 18 q2b log(18ε) 2 log q c ≥ ε,\nwhere the second inequality is due to\nq2M − q2m\n1− q2m − q\n2M 2 = q2M − 2q2m + q2(m+M) 2(1− q2m)\n= q2M\n2(1− q2m) (1− 2q2(m−M) + q2m)\n≥ q 2M\n2(1− q2m) (1− 2qm + q2m) ≥ 0.\nTherefore, in order to find x̂ ∈ Rm such that E‖x̂−x ∗‖22\n‖x0−x∗‖22 < ε, A needs at least N queries to hFSC .\nAs we have showed in proof of Theorem 4.2, for L/µ ≥ n/2 + 1, we have\n1 2\n√ 2 L/µ− 1\nn + 1 ≥ − 1 log(q) ≥ c1\n(√ L/µ− 1\nn + 1\n) ,\nand\nN = n 4 (M + 1) ≥ n 4 log(18ε) 2 log q\n≥ c1 8\n( n+ √ n(L/µ− 1) ) log ( 1\n18ε ) = Ω (( n+ √ nL\nµ\n) log ( 1\nε\n)) .\nAt last, we have to ensure that 1 ≤M ≤ m/2, that is\n1 ≤ log(18ε) 2 log q < m/2.\nThe above conditions are satisfied when\nm = log(1/(18ε)) − log q + 1 ≤ 1 2\n(√ 2 L/µ− 1\nn + 1\n) log ( 1\n18ε\n) + 1 = O (√ L\nnµ log\n( 1\nε\n)) ,\nand\nε ≤ 1 18 q2.\nObserve that when L/µ ≤ n/2 + 1, we have α ≥ √ 2 and q = α−1α+1 ≥ √ 2−1√ 2+1 . Hence, we just need\nε ≤ 118 (√ 2−1√ 2+1 )2 ≈ 0.00164.\nG PROOF OF THEOREM 4.5\nProposition G.1. For any n ≥ 2, m ≥ 2, following properties hold:\n1. fC,i is L-smooth and convex.\n2. The minimizer of the function FC is\nx∗ = arg min x∈Rm\nFC(x) = 2ξ L (1, 2, · · · ,m)> ,\nwhere ξ = √\n3 2 BL (m+1)3/2\n. Moreover, FC(x∗) = −mξ 2 nL and ‖x0 − x ∗‖22 ≤ B2.\n3. For 1 ≤ k ≤ m, we have\nmin x∈Fk\nFC(x)− FC(x∗) = ξ2\nnL (m− k). (24)\nProof.\n1. Just recall Proposition 2.5. 2. Denote ξ = √\n3 2 BL (m+1)3/2n . 
Let ∇FC(x) = 0, that is\nL\n2n A(1)x =\nξ n em,\nor 2 −1 −1 2 −1 . . . . . . −1 2 −1\n−1 1\nx = \n0 0 ... 0 2ξ L\n . (25)\nHence, it is easily to check that the solution to Equation (25) is\nx∗ = 2ξ\nL (1, 2, · · · ,m)>,\nand\nFC(x ∗) = −mξ\n2\nnL .\nMoreover, we have\n‖x0 − x∗‖22 = 4ξ2 L2 m(m+ 1)(2m+ 1) 6\n≤ 4ξ 2\n3L2 (m+ 1)3 = B2.\n3. By similar calculation to above proof, we have\narg min x∈Fk\nFC(x) = 2ξ\nL (1, 2, · · · , k)>,\nand\nmin x∈Fk\nFC(x) = − kξ2\nnL .\nThus\nmin x∈Fk\nFC(x)− FC(x∗) = ξ2\nnL (m− k).\nProof of Theorem 4.5. Since ε ≤ B 2L 384n , we have m ≥ 3. Let ξ = √ 3 2 BL (m+1)3/2 .\nFor M = ⌊ m−1\n2\n⌋ ≥ 1, we have m−M ≥ (m+ 1)/2, and\nmin x∈FM\nFC(x)− FC(x∗) = ξ2 nL (m−M) = 3B 2L 4n m−M (m+ 1)3\n≥ 3B 2L\n8n\n1\n(m+ 1)2 ≥ 9ε,\nwhere the first equation is according to the 3rd property in Proposition G.1 and the last inequality follows from m+ 1 ≤ B √ L/(24nε).\nSimilar to the proof of Theorem 4.2, by Lemma 2.9, we have\nmin t≤N\nEFC(xt)− FC(x∗) ≥ ε.\nIn other words, in order to find x̂ ∈ Rm such that EFC(x̂)−FC(x∗) < ε,A needs at leastN queries to hF .\nAt last, observe that\nN = (M + 1)n/4 = n\n4\n⌊ m+ 1\n2 ⌋ ≥ n(m− 1)\n8\n≥ n 8\n(√ B2L\n24nε − 2\n)\n= Ω ( n+B √ nL\nε\n) ,\nwhere we have recalled ε ≤ B 2L\n384n in last equation.\nH PROOF OF LEMMA 4.3, LEMMA 4.6 AND LEMMA 4.10\nProof of Lemma 4.6. Consider the following functions {gi}1≤i≤n, gi : R→ R, where\ng1(x) = L\n2 x2 − nLBx,\ngi(x) = L\n2 x2,\nG(x) = 1\nn n∑ i=1 gi(x) = L 2 x2 − LBx.\nFirst observe that\nx∗ = arg min x∈R G(x) = B,\nG(0)−G(x∗) = LB 2\n2 ,\nand |x0 − x∗| = B.\nFor i > 1, we have dgi(x)dx |x=0 = 0 and prox γ gi(0) = 0. Thus xt = 0 will host till our first-order method A draws the component f1. That is, for t < T = arg min{t : it = 1}, we have xt = 0.\nHence, for t ≤ 12p1 , we have EG(xt)− F (x∗) ≥ E [ G(xt)−G(x∗) ∣∣∣ 1 2p1 < T ] P ( 1 2p1 < T ) = LB2 2 P ( 1 2p1 < T ) .\nNote that T follows a geometric distribution with success probability p1 ≤ 1/n, and P ( T > 1\n2p1\n) = P ( T > ⌊ 1\n2p1\n⌋) = (1− p1) ⌊ 1 2p1 ⌋\n≥ (1− p1) 1 2p1 ≥ (1− 1/n)n/2 ≥ 1 2 ,\nwhere the second inequality follows from h(z) = log(1−z)2z is a decreasing function. Thus, for t ≤ 12p1 , we have\nEG(xt)− F (x∗) ≥ LB2\n4 ≥ ε\nThus, in order to find x̂ ∈ R such that EF (x̂) − F (x∗) < ε, A needs at least 12p1 ≥ n/2 = Ω (n) queries to hG.\nProof of Lemma 4.10. Note that {gi}ni=1 defined in proof of Lemma 4.6 is also L-average smooth, so Lemma 4.10 hosts for the same reason. Proof of Lemma 4.3. Let B = √\n2∆/L. Then ε/∆ ≤ 1/2 is equivalent to ε ≤ LB2/4. Note that {gi}ni=1 defined in proof of Lemma 4.6 is also µ-strongly convex for any µ ≤ L, and satisfy |G(0)−G(x∗)| = ∆. Therefore Lemma 4.3 hosts for the same reason.\nRemark H.1. Suppose that ε\n∆ >\n1 9 ( α− 1 α+ 1 )2 , α = √ 2 κ− 1 n + 1.\n1. If κ ≥ n/2 + 1, then we have α ≥ √\n2 and( n+ √ κn ) log\n( ∆\n9ε\n) ≤ 2 ( n+ √ κn ) log ( α+ 1\nα− 1 ) ≤ 4 (n+ √ κn)\nα− 1 = O(n) + 4\n√ κn\n(1− √ 2/2)α\n≤ O(n) + 4√ 2− 1 √ κn√ κ/n = O(n),\nwhere the second inequality follows from log(1+x) ≤ x and the last inequality is according to α ≥ √ 2κ/n. That is\nΩ(n) = Ω (( n+ √ κn ) log ( ∆\n9ε\n)) .\n2. If 2 ≤ L/µ < n/2 + 1, then we have( n\n1 + log(nµ/L)\n) log ( ∆\n9ε\n) ≤ (\nn\n1 + log(nµ/L)\n)( 2 log ( α+ 1\nα− 1 )) ≤ ( n\n1 + log(nµ/L)\n)( 2 log ( (2 √\n2− 1)n L/µ\n)) = O(n),\nwhere the second inequality follows from (22). That is\nΩ(n) = Ω\n(( n\n1 + log(nµ/L)\n) log ( ∆\n9ε\n) + n ) .\nI DETAILED PROOF FOR SECTION 4.3\nProof of Proposition 4.7.\n1. 
It is easily to check that FSC(x) is µ-strongly convex. Following from Proposition 2.5, then {fSC,i}ni=1 is L̂-average smooth, where\nL̂ = √√√√16 n [( L+ µ 4 )2 + ( L− µ 4 )2] + µ2\n=\n√ 2(L2 + µ2)\nn + µ2 = L′.\n2. Clearly, L = √\nn(L′2−µ2) 2 − µ2 ≤ √ n 2L ′.\nFurthermore, according to L ′ µ ≥ √ 3 n ( n 2 + 1), we have\nL2 − n 3 L′2 = n 2 (L′2 − µ2)− µ2 − n 3 L′2\n= 1\n2 (n 2 + 1 )2 µ2 − n+ 2 2 µ2\n=\n( n2\n8 − 1 2\n) µ2 ≥ 0,\nand, L/µ ≥ √\nn 3L ′/µ ≥ n/2 + 1.\nProof of Theorem 4.8. By 2nd property of Proposition 4.7, we know that L/µ ≥ n/2 + 1. Moreover,\nm = 1\n4 √√ 2 n L′ µ + 1 log(∆ 9ε ) + 1\n≥ 1 4\n(√ 2 L/µ− 1\nn + 1\n) log ( ∆\n9ε\n) + 1,\nThen, by Theorem 4.2 5, in order to find x̂ ∈ Rm such that EFSC(x̂) − FSC(x∗) < ε, A needs at least N queries to hFSC , where\nN = Ω (( n+ √ nL\nµ\n) log ( ∆\nε\n))\n= Ω n+ √ n √ n/3L′\nµ\n log(∆ ε ) = Ω (( n+ n3/4 √ L′\nµ\n) log ( ∆\nε\n)) .\n5By the proof of Theorem 4.2, a larger dimension m does not affect the conclusion of the theorem.\nProof of Theorem 4.9. Note that\nε ≤ √ 2\n768 B2L′√ n = B2L 384n ,\nm =\n⌊ 4 √\n18\n12 Bn−1/4\n√ L′\nε\n⌋ − 1 = ⌊√ B2L\n24nε\n⌋ − 1.\nBy Theorem 4.5, in order to find x̂ ∈ Rm such that EFC(x̂) − FC(x∗) < ε, A needs at least N queries to hFC , where\nN = Ω ( n+B √ nL\nε\n)\n= Ω n+B √ n √ n/2L′\nε = Ω ( n+Bn3/4 √ L′\nε\n) .\nJ NON-CONVEX CASE\nIn non-convex case, our goal is to find an ε-approximate stationary point x̂ of our objective function f , which satisfies\n‖∇f(x̂)‖2 ≤ ε. (26)\nJ.1 PRELIMINARIES\nWe first introduce a general concept about smoothness. Definition J.1. For any differentiable function f : Rm+1 → R, we say f is (l, L)-smooth, if for any x,y ∈ Rm we have\nl 2 ‖x− y‖22 ≤ f(x)− f(y)− 〈∇f(y),x− y〉 ≤ L 2 ‖x− y‖22 ,\nwhere L > 0, l ∈ R.\nEspecially, if f is L-smooth, then it can be checked that f is (−L,L)-smooth. If f is (−σ, L)-smooth, in order to make the operator proxγf valid, we set 1 γ > σ to ensure the function\nf̂(u) , f(u) + 1\n2γ ‖x− u‖22\nis a convex function.\nNext, we introduce a class of function which is original proposed in (Carmon et al., 2017). Let GNC : Rm+1 → R be\nGNC(x;α,m) = 1\n2\n∥∥B(m+ 1, 4√α)x∥∥2 2 − √ α〈e1,x〉+ α m∑ i=1 Γ(xi),\nwhere the non-convex function Γ : R→ R is Γ(x) , 120 ∫ x\n1\nt2(t− 1) 1 + t2 dt. (27)\nWe need following properties about GNC(x;α,m).\nProposition J.2 (Lemmas 3,4, Carmon et al. (2017)). For any 0 < α ≤ 1, it holds that\n1. Γ(x) is (−45( √ 3 − 1), 180)-smooth and GNC(x;α,m) is (−45( √\n3 − 1)α, 4 + 180α)smooth.\n2. GNC(0;α,m)−minx∈Rm+1 GNC(x;α,m) ≤ √ α/2 + 10αm.\n3. For x which satisfies that xm = xm+1 = 0, we have\n‖∇GNC(x;α,m)‖2 ≥ α 3/4/4.\nJ.2 OUR RESULT\nTheorem J.3. For any PIFO algorithmA and any L, σ, n,∆, ε such that ε2 ≤ ∆Lα81648n , there exist a dimension d =\n⌊ ∆L √ α\n40824nε2 ⌋ + 1 and n (−σ, L)-smooth nonconvex functions {fi : Rd → R}ni=1 such\nthat f(x0) − f(x∗) ≤ ∆. In order to find x̂ ∈ Rd such that E ‖∇f(x̂)‖2 < ε, A needs at least Ω ( ∆L √ α\nε2\n) queries to hf , where we set α = min { 1, ( √ 3+1)nσ 30L , n 180 } .\nRemark J.4. For n > 180, wehave\nΩ\n( ∆L √ α\nε2\n) = Ω ∆ ε2 min L, √√ 3 + 1 30 √ nσL, √ nL√ 180 = Ω(∆ ε2 min{L, √ nσL} ) .\nThus, our result is comparable to the one of Zhou and Gu’s result (their result only related to IFO algorithms, so our result is more strong), but our construction only requires the dimension be O ( 1 + ∆ε2 min{L/n, √ σL/n} ) , which is much smaller than O ( ∆ ε2 min{L, √ nσL} ) in (Zhou and Gu, 2019).\nJ.3 CONSTRUCTIONS\nConsider\nF (x;α,m, λ, β) = λGNC(x/β;α,m). 
(28)\nSimilar to our construction we introduced in Section 2, we denote the l-th row of the matrix B(m+ 1, 4 √ α) by bl and\nLi = {l : 1 ≤ l ≤ m,m+ 1− l ≡ i(mod n)}, i = 1, 2, · · · , n. (29)\nLet Gk = span{e1, e2, · · · , ek}, 1 ≤ k ≤ m, G0 = {0} and compose F (x;α,m, λ, β) to f1(x;α,m, λ, β) = λn 2β2 ∑ l∈Li ∥∥b>l x∥∥22 − λn√αβ 〈e1,x〉+ λα m∑ i=1 Γ(xi/β), fi(x;α,m, λ, β) = λn 2β2 ∑ l∈Li ∥∥b>l x∥∥22 + λα m∑ i=1 Γ(xi/β), for i ≥ 2. (30) Clearly, F (x;α,m, λ, β) = 1n ∑n i=1 fi(x;α,m, λ, β). Moreover, by Proposition J.2, we have following properties about F (x;α,m, λ, β) and {fi(x;α,m, λ, β)}ni=1. Proposition J.5. For any 0 < α ≤ 1, it holds that\n1. fi(x;α,m, λ, β) is ( −45( √ 3−1)αλ β2 , (2n+180α)λ β2 ) -smooth. 2. F (0;α,m, λ, β)−minx∈Rm+1 F (x;α,m, λ, β) ≤ λ( √ α/2 + 10αm).\n3. For x which satisfies that xm = xm+1 = 0, we have\n‖∇F (x;α,m, λ, β)‖2 ≥ α3/4λ\n4β .\nSimilar to Lemma 2.6, similar conclusion hosts for {fi(x;α,m, λ, β)}ni=1. Lemma J.6. For x ∈ Fk, 0 ≤ k < m and γ < √ 2+1 60 β2 λα , we have\n∇fi(x;α,m, λ, β),proxγfi(x) ∈ { Gk+1, if k ≡ i− 1(mod n), Gk, otherwise.\nProof. Let G(x) , m∑ i=1 Γ(xi) and Γ′(x) be the derivative of Γ(x). First note that Γ′(0) = 0, so if x ∈ Gk, then\n∇G(x) = ( Γ′(x1),Γ ′(x2), · · · ,Γ′(xm) )> ∈ Gk.\nMoreover, for x ∈ FG (k ≥ 1), we have\nb>l x = 0 for l < m− k, bl ∈ Gk for l > m− k,\nbm−k ∈ Gk+1.\nConsequently, for l 6= m− k, blb>l x = (b>l x)bl ∈ Gk, and bm−kb>m−kx ∈ Gk+1. For k = 0, we have x = 0, and\n∇f1(x) = λn √ α/β e1 ∈ G1,\n∇fj(x) = 0 (j ≥ 2).\nFor k ≥ 1, we suppose that m− k ∈ Li. Since\n∇fj(x) = λn\nβ2 ∑ l∈Lj b>l blx + λα β ∇G(x/β)− ηje1,\nwhere η1 = λn √ α/β, ηj = 0 for j ≥ 2. Hence, ∇fi(x) ∈ Fk+1 and ∇fj(x) ∈ Fk (j 6= i).\nNow, we turn to consider v = proxγfj (x).\nWe have\n∇fj(v) + 1\nγ (v − x) = 0,\nthat is λn β2 ∑ l∈Lj b>l bl + 1 γ I v + λα β ∇G(v/β) = ηje1 + 1 γ x. (31)\nDenote\nA = λn\nβ ∑ l∈Lj b>l bl + β γ I, u = 1 β v, y = ηje1 + 1 γ x,\nthen we have\nAu + λα\nβ ∇G(u) = y. (32)\nNext, if s satisfies { s > max{1, k} for j = 1, s > k for j > 1,\n(33)\nthen we know that the s-th element of y is 0.\nIf s satisfies (33) and m − s ∈ Lj , then the s-th and (s + 1)-th elements of Au is ((ξ + β/γ)us − ξus+1) and (−ξus + (ξ + β/γ)us+1) respectively where ξ = λn/β. So by Equation (32), we have β γ us + ξ(us − us+1) + 120λα β u2s(us−1) 1+u2s = 0.\nβ γ us+1 + ξ(us+1 − us) + 120λα β u2s+1(us+1−1) 1+u2s+1 = 0.\nFollowing from Lemma J.9, for 120λαβ < (2+2\n√ 2)β\nγ , we have us = us+1 = 0. That is\n1. if m− s ∈ Lj and s satisfies (33), then us = 0.\n2. if m− s+ 1 ∈ Lj and s− 1 satisfies (33), then us = 0.\nFor s which satisfies (33), if m − s 6∈ Lj and m − s + 1 6∈ Lj , then the s-th element of Au is (β/γ us). Similarly, by Equation (32), we have\nβ γ us + 120λα β u2s(us − 1) 1 + u2s = 0.\nFollowing from Lemma J.8, for 120λαβ < (2+2\n√ 2)β γ , we have us = 0.\nTherefore, we can conclude that\n1. if s− 1 satisfies (33), then us = 0.\n2. if s satisfies (33) and m− s+ 1 6∈ Lj , then us = 0.\nMoreover, we have that\n1. if k = 0 and j = 1, then m− 1,m− 2 6∈ Lj , so u2 = 0.\n2. if k = 0 and j > 1, then for s = 1, we have m− s+ 1 6∈ Lj , so u1 = 0.\n3. if k = 0, then for s > 2, we have s− 1 > 1 satisfies (33), so us = 0.\n4. if k > 0, then for s > k + 1, we have s− 1 > k satisfies (33), so us = 0.\n5. if k > 0 and m− k 6∈ Lj , then for s = k + 1, we have m− s+ 1 6∈ Lj , so uk+1 = 0.\nIn short,\n1. if k = 0 and j > 1, then u ∈ G0.\n2. if k = 0 and j = 1, then u ∈ G1.\n3. if k > 1 and m− k 6∈ Lj , then u ∈ Gk.\n4. 
if k > 1 and m− k ∈ Lj , then u ∈ Gk+1.\nRemark J.7. In order to make the operator proxγfi valid, γ need to satisfy\nγ <\n√ 3 + 1\n90\nβ2 λα <\n√ 2 + 1\n60\nβ2 λα .\nSo for any valid PIFO call, the condition about γ in Lemma J.6 must be satisfied.\nLemma J.8. Suppose that 0 < λ2 < (2 + 2 √\n2)λ1, then z = 0 is the only real solution to the equation\nλ1z + λ2 z2(z − 1)\n1 + z2 = 0. (34)\nProof. Since 0 < λ2 < (2 + 2 √ 2)λ1, we have\nλ22 − 4λ1(λ1 + λ2) < 0,\nand consequently, for any z, (λ1 + λ2)z2 − λ2z + λ1 > 0. On the other hand, we can rewrite Equation (34) as\nz ( (λ1 + λ2)z 2 − λ2z + λ1 ) = 0.\nClearly, z = 0 is the only real solution to Equation (34).\nLemma J.9. Suppose that 0 < λ2 < (2 + 2 √\n2)λ1 and λ3 > 0, then z1 = z2 = 0 is the only real solution to the equation λ1z1 + λ3(z1 − z2) + λ2 z21(z1−1) 1+z21 = 0.\nλ1z2 + λ3(z2 − z1) + λ2 z 2 2(z2−1) 1+z22\n= 0. (35)\nProof. If z1 = 0, then z2 = 0. So let assume that z1z2 6= 0. Rewrite the first equation of Equation (35) as\nλ1 + λ3 λ3 + λ2 λ3 z1(z1 − 1) 1 + z21 = z2 z1\nNote that\n1− √ 2\n2 ≤ z(z − 1) 1 + z2 .\nThus, we have\nλ1 + λ3 λ3 + λ2 λ3\n1− √ 2\n2 ≤ z2 z1 .\nSimilarly, it also holds\nλ1 + λ3 λ3 + λ2 λ3\n1− √ 2\n2 ≤ z1 z2 .\nBy 0 < λ2 < (2 + 2 √ 2)λ1, we know that λ1 + 1− √ 2 2 λ2 > 0. Thus\nλ1 + λ3 λ3 + λ2 λ3\n1− √ 2\n2 > 1.\nSince z1/z2 > 1 and z2/z1 > 1 can not hold at the same time, so we get a contradiction.\nFollowing from Lemma J.6, we know following Lemma which is similar to Lemma 2.9.\nLemma J.10. If M ≥ 1 satisfies minx∈GM ‖∇F (x)‖2 ≥ 9ε and N = n(M + 1)/4, then we have\nmin t≤N E ‖∇F (xt)‖2 ≥ ε.\nTheorem J.11. Set\nα = min { 1, ( √ 3 + 1)nσ\n30L , n 180\n} ,\nλ = 3888nε2\nLα3/2 , β = √ 3λn/L,\nm =\n⌊ ∆L √ α\n40824nε2 ⌋ Suppose that ε2 ≤ ∆Lα81648n . In order to find x̂ ∈ R\nm+1 such that E ‖∇F (x̂)‖2 < ε, PIFO algorithm A needs at least Ω\n( ∆L √ α\nε2\n) queries to hF .\nProof. First, note that fi is (−l1, l2)-smooth, where\nl1 = 45( √\n3− 1)αλ β2\n= 45( √\n3− 1)L 3n\nα ≤ 45( √\n3− 1)L 3n\n( √ 3 + 1)nσ\n30L = σ,\nl2 = (2n+ 180α)λ\nβ2 =\nL\n3n (2n+ 180α) ≤ L.\nThus each fi is (−σ, L)-smooth. Next, observe that\nF (x0)− F (x∗) ≤ λ( √ α/2 + 10αm) = 1944nε2\nLα +\n38880nε2\nL √ α m\n≤ 1944 40824 ∆ + 38880 40824 ∆ = ∆.\nFor M = m− 1, we know that\nmin x∈GM\n‖∇F (x)‖2 ≥ α3/4λ\n4β =\nα3/4λ 4 √ 3λn/L =\n√ λL\n3n\nα3/4\n4 = 9ε.\nWith recalling Lemma J.10, in order to find x̂ ∈ Rm+1 such that E ‖∇F (x̂)‖2 < ε, PIFO algorithm A needs at least N queries to hF , where\nN = n(M + 1)/4 = nm/4 = Ω\n( ∆L √ α\nε2\n) .\nAt last, we need to ensure that m ≥ 2. By ε2 ≤ ∆Lα81648n , we have\n∆L √ α\n40824nε2 ≥ ∆Lα 40824nε2 ≥ 2,\nand consequently m ≥ 2." } ]
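As a final sanity check (an illustrative snippet added here, not code from the paper), the smoothness constants of the non-convex component Γ claimed in Proposition J.2 can be verified numerically from the closed form of its second derivative:

```python
import numpy as np

def gamma_pp(x):
    # Differentiating Gamma'(x) = 120*x^2*(x-1)/(1+x^2), from (27), gives
    # Gamma''(x) = 120*(x^4 + 3x^2 - 2x)/(1 + x^2)^2.
    return 120.0 * (x**4 + 3.0 * x**2 - 2.0 * x) / (1.0 + x**2) ** 2

xs = np.linspace(-20.0, 20.0, 400_001)
lo, hi = -45.0 * (np.sqrt(3.0) - 1.0), 180.0
vals = gamma_pp(xs)
# Gamma is (-45(sqrt(3)-1), 180)-smooth: lo <= Gamma''(x) <= hi everywhere,
# with both bounds attained, at x = 2 - sqrt(3) and x = -1 respectively.
assert vals.min() >= lo - 1e-9 and vals.max() <= hi + 1e-9
print(gamma_pp(2.0 - np.sqrt(3.0)), lo, gamma_pp(-1.0), hi)
```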
2019
null
SP:3e3e429ab3ba27875731c7cecf2d00bd959973b6
[ "This paper addresses some challenges of following natural language instructions for navigating in visual environments. The main challenge in such tasks is the scarcity of available training data, which results in generalization problems where the agent has difficulty navigating in unseen environments. Therefore, the authors propose two key ideas to tackle this issue and incorporate them in the reinforced cross-modal matching (RCM) model (Wang et al, 2019). First, they use a generalized multitask learning model to transfer knowledge across two different tasks: Vision-Language Navigation (VLN) and Navigation from Dialog History (NDH). This results in learning features that explain both tasks simultaneously and hence generalize better. Moreover, by training on both tasks the effective size of training data is increased significantly. Second, they propose an environment-agnostic learning technique in order to learn invariant representations that are still efficient for navigation. This prevents overfitting to specific visual features of the environment and therefore helps improving generalization. The contribution of this paper is combining and incorporating these two key ideas in the RCM framework and verifying it on VLN and NDH tasks. This approach is novel compared to prior results in tackling the VLN task. Their experimental results show that the two proposed techniques improve generalization in a complementary fashion, measured by decreased performance gap between seen and unseen environments. They demonstrate that their technique outperforms state-of-the-art methods on unseen data on some evaluation metrics.", "This paper aims to apply the model of Wang 2019 to the new NDH task of Thomason '19. Both of these datasets are built on the same room-to-room environment and both are for natural language instruction following. Thomason's work extends the R2R paradigm to include a dialogue history which is collapsed into a single instruction. The contribution of this paper is to build a single model which alternatingly samples trajectories from each of the two datasets to train a more general actor and the authors also believe that the presence of an environment classifier assists in generalization." ]
Recent research efforts enable study for natural language grounded navigation in photo-realistic environments, e.g., following natural language instructions or dialog. However, existing methods tend to overfit training data in seen environments and fail to generalize well in previously unseen environments. In order to close the gap between seen and unseen environments, we aim at learning a generalizable navigation model from two novel perspectives: (1) we introduce a multitask navigation model that can be seamlessly trained on both Vision-Language Navigation (VLN) and Navigation from Dialog History (NDH) tasks, which benefits from richer natural language guidance and effectively transfers knowledge across tasks; (2) we propose to learn environment-agnostic representations for navigation policy that are invariant among environments, thus generalizing better on unseen environments. Extensive experiments show that our environment-agnostic multitask navigation model significantly reduces the performance gap between seen and unseen environments and outperforms the baselines on unseen environments by 16% (relative measure on success rate) on VLN and 120% (goal progress) on NDH, establishing the new state of the art for the NDH task.
[]
[ { "authors": [ "Peter Anderson", "Angel Chang", "Devendra Singh Chaplot", "Alexey Dosovitskiy", "Saurabh Gupta", "Vladlen Koltun", "Jana Kosecka", "Jitendra Malik", "Roozbeh Mottaghi", "Manolis Savva", "Amir R. Zamir" ], "title": "On evaluation of embodied navigation agents. arXiv, 2018a", "venue": null, "year": 2018 }, { "authors": [ "Peter Anderson", "Qi Wu", "Damien Teney", "Jake Bruce", "Mark Johnson", "Niko Sünderhauf", "Ian Reid", "Stephen Gould", "Anton van den Hengel" ], "title": "Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Michael Bain", "Claude Sammut" ], "title": "A framework for behavioural cloning. In Machine Intelligence 15, Intelligent Agents [St", "venue": "Catherine’s College,", "year": 1995 }, { "authors": [ "Richard Caruana" ], "title": "Multitask learning: A knowledge-based source of inductive bias", "venue": "In Proceedings of the Tenth International Conference on Machine Learning,", "year": 1993 }, { "authors": [ "Angel Chang", "Angela Dai", "Thomas Funkhouser", "Maciej Halber", "Matthias Niessner", "Manolis Savva", "Shuran Song", "Andy Zeng", "Yinda Zhang" ], "title": "Matterport3d: Learning from rgb-d data in indoor environments", "venue": "International Conference on 3D Vision", "year": 2017 }, { "authors": [ "Howard Chen", "Alane Suhr", "Dipendra Misra", "Noah Snavely", "Yoav Artzi" ], "title": "Touchdown: Natural language navigation and spatial reasoning in visual street environments", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Ronan Collobert", "Jason Weston" ], "title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", "venue": "In Proceedings of the 25th International Conference on Machine Learning,", "year": 2008 }, { "authors": [ "Abhishek Das", "Satwik Kottur", "Khushi Gupta", "Avi Singh", "Deshraj Yadav", "José M.F. Moura", "Devi Parikh", "Dhruv Batra" ], "title": "Visual Dialog", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Abhishek Das", "Samyak Datta", "Georgia Gkioxari", "Stefan Lee", "Devi Parikh", "Dhruv Batra" ], "title": "Embodied question answering", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2018 }, { "authors": [ "Harm de Vries", "Kurt Shuster", "Dhruv Batra", "Devi Parikh", "Jason Weston", "Douwe Kiela" ], "title": "Talk the walk: Navigating new york city through grounded dialogue", "venue": "arXiv preprint arXiv:1807.03367,", "year": 2018 }, { "authors": [ "L. Deng", "G. Hinton", "B. 
Kingsbury" ], "title": "New types of deep neural network learning for speech recognition and related applications: an overview", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing,", "year": 2013 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine LearningVolume", "year": 2017 }, { "authors": [ "Daniel Fried", "Ronghang Hu", "Volkan Cirik", "Anna Rohrbach", "Jacob Andreas", "Louis-Philippe Morency", "Taylor Berg-Kirkpatrick", "Kate Saenko", "Dan Klein", "Trevor Darrell" ], "title": "Speaker-follower models for vision-and-language navigation", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Yaroslav Ganin", "Victor Lempitsky" ], "title": "Unsupervised domain adaptation by backpropagation", "venue": "In Proceedings of the 32Nd International Conference on International Conference on Machine Learning - Volume 37,", "year": 2015 }, { "authors": [ "R. Girshick" ], "title": "Fast r-cnn", "venue": "IEEE International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Sachithra Hemachandra", "Felix Duvallet", "Thomas M Howard", "Nicholas Roy", "Anthony Stentz", "Matthew R Walter" ], "title": "Learning models for following natural language directions in unknown environments", "venue": "In 2015 IEEE International Conference on Robotics and Automation (ICRA),", "year": 2015 }, { "authors": [ "Haoshuo Huang", "Vihan Jain", "Harsh Mehta", "Jason Baldridge", "Eugene Ie" ], "title": "Multi-modal discriminative model for vision-and-language navigation. In Proceedings of the Combined Workshop on Spatial Language Understanding (SpLU) and Grounded Communication for Robotics (RoboNLP)", "venue": null, "year": 2019 }, { "authors": [ "Haoshuo Huang", "Vihan Jain", "Harsh Mehta", "Alexander Ku", "Gabriel Magalhães", "Jason Baldridge", "Eugene Ie" ], "title": "Transferable representation learning in vision-and-language navigation", "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision178(ICCV),", "year": 2019 }, { "authors": [ "Vihan Jain", "Gabriel Magalhães", "Alexander Ku", "Ashish Vaswani", "Eugene Ie", "Jason Baldridge" ], "title": "Stay on the path: Instruction fidelity in vision-and-language navigation", "venue": null, "year": 2019 }, { "authors": [ "Liyiming Ke", "Xiujun Li", "Yonatan Bisk", "Ari Holtzman", "Zhe Gan", "Jingjing Liu", "Jianfeng Gao", "Yejin Choi", "Siddhartha Srinivasa" ], "title": "Tactical rewind: Self-correction via backtracking in vision-andlanguage navigation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Eric Kolve", "Roozbeh Mottaghi", "Winson Han", "Eli VanderBilt", "Luca Weihs", "Alvaro Herrasti", "Daniel Gordon", "Yuke Zhu", "Abhinav Gupta", "Ali Farhadi" ], "title": "AI2-THOR: An Interactive 3D Environment for Visual AI", "venue": null, "year": 2017 }, { "authors": [ "Yitong Li", "Timothy Baldwin", "Trevor Cohn" ], "title": "What’s in a Domain? 
Learning Domain-Robust Text Representations using Adversarial Training", "venue": "In NAACL-HLT,", "year": 2018 }, { "authors": [ "Alexander H Liu", "Yen-Cheng Liu", "Yu-Ying Yeh", "Yu-Chiang Frank Wang" ], "title": "A unified feature disentangler for multi-domain image translation and manipulation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Chih-Yao Ma", "Jiasen Lu", "Zuxuan Wu", "Ghassan AlRegib", "Zsolt Kira", "Richard Socher", "Caiming Xiong" ], "title": "Self-monitoring navigation agent via auxiliary progress estimation", "venue": "arXiv preprint arXiv:1901.03035,", "year": 2019 }, { "authors": [ "Chih-Yao Ma", "Zuxuan Wu", "Ghassan AlRegib", "Caiming Xiong", "Zsolt Kira" ], "title": "The regretful agent: Heuristic-aided navigation through progress estimation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Manolis Savva", "Abhishek Kadian", "Oleksandr Maksymets", "Yili Zhao", "Erik Wijmans", "Bhavana Jain", "Julian Straub", "Jia Liu", "Vladlen Koltun", "Jitendra Malik", "Devi Parikh", "Dhruv Batra" ], "title": "Habitat: A Platform for Embodied AI Research", "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Piotr Mirowski", "Matt Grimes", "Mateusz Malinowski", "Karl Moritz Hermann", "Keith Anderson", "Denis Teplyashin", "Karen Simonyan", "Koray Kavukcuoglu", "Andrew Zisserman", "Raia Hadsell" ], "title": "Learning to navigate in cities without a map", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Piotr W. Mirowski", "Razvan Pascanu", "Fabio Viola", "Hubert Soyer", "Andy Ballard", "Andrea Banino", "Misha Denil", "Ross Goroshin", "Laurent Sifre", "Koray Kavukcuoglu", "Dharshan Kumaran", "Raia Hadsell" ], "title": "Learning to navigate in complex", "venue": "environments. ArXiv,", "year": 2016 }, { "authors": [ "Khanh Nguyen", "Hal Daumé III" ], "title": "Help, anna! visual navigation with natural multimodal assistance via retrospective curiosity-encouraging imitation learning", "venue": null, "year": 1909 }, { "authors": [ "Khanh Nguyen", "Debadeepta Dey", "Chris Brockett", "Bill Dolan" ], "title": "Vision-based navigation with language-based assistance via imitation learning with indirect intervention", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Xingchao Peng", "Zijun Huang", "Ximeng Sun", "Kate Saenko" ], "title": "Domain agnostic learning with disentangled representations", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Bharath Ramsundar", "Steven M. Kearnes", "Patrick Riley", "Dale Webster", "David E. Konerding", "Vijay S. Pande" ], "title": "Massively multitask networks for drug", "venue": "discovery. ArXiv,", "year": 2015 }, { "authors": [ "Rob Romijnders", "Panagiotis Meletis", "Gijs Dubbelman" ], "title": "A domain agnostic normalization layer for unsupervised adversarial domain adaptation", "venue": "IEEE Winter Conference on Applications of Computer Vision (WACV),", "year": 2019 }, { "authors": [ "Mike Schuster", "Kuldip K. Paliwal" ], "title": "Bidirectional recurrent neural networks", "venue": "IEEE Trans. 
Signal Processing,", "year": 1997 }, { "authors": [ "Hao Tan", "Licheng Yu", "Mohit Bansal" ], "title": "Learning to navigate unseen environments: Back translation with environmental dropout", "venue": null, "year": 2019 }, { "authors": [ "Yee Teh", "Victor Bapst", "Wojciech M Czarnecki", "John Quan", "James Kirkpatrick", "Raia Hadsell", "Nicolas Heess", "Razvan Pascanu" ], "title": "Distral: Robust multitask reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Jesse Thomason", "Michael Murray", "Maya Cakmak", "Luke Zettlemoyer" ], "title": "Vision-and-dialog navigation", "venue": "In Conference on Robot Learning (CoRL),", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Xin Wang", "Wenhan Xiong", "Hongmin Wang", "William Yang Wang" ], "title": "Look before you leap: Bridging model-free and model-based reinforcement learning for planned-ahead vision-and-language navigation", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Xin Wang", "Qiuyuan Huang", "Asli Celikyilmaz", "Jianfeng Gao", "Dinghan Shen", "Yuan-Fang Wang", "William Yang Wang", "Lei Zhang" ], "title": "Reinforced cross-modal matching and self-supervised imitation learning for vision-language navigation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Ronald J. Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine Learning,", "year": 1992 }, { "authors": [ "Yi Wu", "Yuxin Wu", "Georgia Gkioxari", "Yuandong Tian" ], "title": "Building generalizable agents with a realistic and rich 3d environment", "venue": "arXiv preprint arXiv:1801.02209,", "year": 2018 }, { "authors": [ "Yi Wu", "Yuxin Wu", "Aviv Tamar", "Stuart Russell", "Georgia Gkioxari", "Yuandong Tian" ], "title": "Learning and planning with a semantic model", "venue": "arXiv preprint arXiv:1809.10842,", "year": 2018 }, { "authors": [ "Fei Xia", "Amir R. Zamir", "Zhi-Yang He", "Alexander Sax", "Jitendra Malik", "Silvio Savarese. Gibson" ], "title": "env: real-world perception for embodied agents", "venue": "In Computer Vision and Pattern Recognition (CVPR),", "year": 2018 } ]
[ { "heading": null, "text": "Recent research efforts enable study for natural language grounded navigation in photo-realistic environments, e.g., following natural language instructions or dialog. However, existing methods tend to overfit training data in seen environments and fail to generalize well in previously unseen environments. In order to close the gap between seen and unseen environments, we aim at learning a generalizable navigation model from two novel perspectives: (1) we introduce a multitask navigation model that can be seamlessly trained on both Vision-Language Navigation (VLN) and Navigation from Dialog History (NDH) tasks, which benefits from richer natural language guidance and effectively transfers knowledge across tasks; (2) we propose to learn environment-agnostic representations for navigation policy that are invariant among environments, thus generalizing better on unseen environments. Extensive experiments show that our environment-agnostic multitask navigation model significantly reduces the performance gap between seen and unseen environments and outperforms the baselines on unseen environments by 16% (relative measure on success rate) on VLN and 120% (goal progress) on NDH, establishing the new state of the art for the NDH task." }, { "heading": "1 INTRODUCTION", "text": "Navigation in visual environments by following natural language guidance (Hemachandra et al., 2015) is a fundamental capability of intelligent robots that simulate human behaviors, because humans can easily reason about the language guidance and navigate efficiently by interacting with the visual environments. Recent efforts (Anderson et al., 2018b; Das et al., 2018; Thomason et al., 2019) empower large-scale learning of natural language grounded navigation that is situated in photorealistic simulation environments.\nNevertheless, the generalization problem commonly exists for these tasks, especially indoor navigation: the agent usually performs poorly on unknown environments that have never been seen during training. One of the main causes for such behavior is data scarcity as it is expensive and time-consuming to extend either visual environments or natural language guidance. The number of scanned houses for indoor navigation is limited due to high expense and privacy concerns. Besides, unlike vision-only navigation tasks (Mirowski et al., 2018; 2016; Xia et al., 2018; Manolis Savva* et al., 2019; Kolve et al., 2017) where episodes can be exhaustively sampled in simulation, natural language grounded navigation is supported by human demonstrated interaction and communication in natural language. It is impractical to fully collect and cover all the samples for individual tasks.\nTherefore, it is essential though challenging to efficiently learn a more generalized policy for natural language grounded navigation tasks from existing data (Wu et al., 2018a;b). In this paper, we study how to resolve the generalization and data scarcity issues from two different angles. First, previous methods are trained for one task at the time, so each new task requires training a brand new agent instance that can only solve the one task it was trained on. 
In this work, we propose a generalized multitask model for natural language grounded navigation tasks such as Vision-Language Navigation (VLN) and Navigation from Dialog History (NDH), aiming at efficiently transferring knowledge across tasks and effectively solving both tasks with one agent simultaneously.\nMoreover, although there are thousands of trajectories paired with language guidance, the underlying house scans are restricted. For instance, the popular Matterport3D dataset (Chang et al., 2017) contains only 61 unique house scans in the training set. The current models perform much better in seen environments by taking advantage of the knowledge of specific houses they have acquired over multiple task completions during training, but fail to generalize to houses not seen during training. Hence we propose an environment-agnostic learning method to learn a visual representation that is invariant to specific environments but still able to support navigation. Endowed with the learned environment-agnostic representations, the agent is further prevented from the overfitting issue and generalizes better on unseen environments.\nTo the best of our knowledge, we are the first to introduce natural language grounded multitask and environment-agnostic training regimes and validate their effectiveness on VLN and NDH tasks. Extensive experiments demonstrate that our environment-agnostic multitask navigation model can not only efficiently execute different language guidance in indoor environments but also outperform the single-task baseline models by a large margin on both tasks. Besides, the performance gap between seen and unseen environments is significantly reduced. We also set a new state of the art on NDH with over 120% improvement in terms of goal progress." }, { "heading": "2 BACKGROUND", "text": "Vision-and-Language Navigation. Vision-and-Language Navigation (Anderson et al., 2018b; Chen et al., 2019) task requires an embodied agent to navigate in photo-realistic environments to carry out natural language instructions. The agent is spawned at an initial pose p0 = (v0, φ0, θ0), which includes the spatial location, heading and elevation angles. Given a natural language instruction X = {x1, x2, ..., xn}, the agent is expected to perform a sequence of actions {a1, a2, ..., aT } and arrive at the target position vtar specified by the language instruction X , which describes stepby-step instructions from the starting position to the target position. In this work, we consider VLN task defined for Room-to-Room (R2R) (Anderson et al., 2018b) dataset which contains instructiontrajectory pairs across 90 different indoor environments (houses).\nPrevious VLN methods have studied various aspects to improve the navigation performance, such as planning (Wang et al., 2018), data augmentation (Fried et al., 2018; Tan et al., 2019), cross-modal alignment (Wang et al., 2019; Huang et al., 2019b), progress estimation (Ma et al., 2019a), error correction (Ma et al., 2019b; Ke et al., 2019), interactive language assistance (Nguyen et al., 2019; Nguyen & Daumé III, 2019) etc. This work tackles VLN via multitask learning and environmentagnostic learning, which is orthogonal to all these prior arts.\nNavigation from Dialog History. 
Different from Visual Dialog (Das et al., 2017), which involves dialog grounded in a single image, the recently introduced Cooperative Vision-and-Dialog Navigation (CVDN) dataset (Thomason et al., 2019) includes interactive language assistance for indoor navigation and consists of over 2,000 embodied, human-human dialogs situated in photo-realistic home environments. The task of Navigation from Dialog History (NDH) is defined as follows: given a target object t0 and a dialog history between humans cooperating to perform the task, the embodied agent must infer navigation actions towards the goal room that contains the target object. The dialog history is denoted as <t0, Q1, A1, Q2, A2, ..., Qi, Ai>, including the target object t0 and the questions Q and answers A up to turn i (0 ≤ i ≤ k, where k is the total number of Q-A turns from the beginning to the goal room). The agent, located at p0, tries to move closer to the goal room by inferring from the dialog history observed so far.
Multitask Learning. The basis of Multitask (MT) learning is the notion that tasks can serve as mutual sources of inductive bias for each other (Caruana, 1993). When multiple tasks are trained jointly, MT learning causes the learner to prefer hypotheses that explain all the tasks simultaneously, hence leading to more generalized solutions. MT learning has been successful in natural language processing (Collobert & Weston, 2008), speech recognition (Deng et al., 2013), computer vision (Girshick, 2015), drug discovery (Ramsundar et al., 2015), and Atari games (Teh et al., 2017). The deep reinforcement learning methods that have become very popular for training models on natural language grounded navigation tasks (Wang et al., 2019; Huang et al., 2019a;b; Tan et al., 2019) are known to be data inefficient. In this work, we introduce multitask reinforcement learning for such tasks to improve data efficiency via positive transfer across related tasks.
Environment-agnostic Learning. A few studies on agnostic learning have been proposed recently. For example, Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017) aims to train a model on a variety of learning tasks and solve a new task using only a few training examples. Liu et al. (2018) propose a unified feature disentangler that learns domain-invariant representations across multiple domains for image translation. Other domain-agnostic techniques have also been proposed for supervised (Li et al., 2018) and unsupervised domain adaptation (Romijnders et al., 2019; Peng et al., 2019). In this work, we pair the environment classifier with a gradient reversal layer (Ganin & Lempitsky, 2015) to learn an environment-agnostic representation that generalizes better to unseen environments in a zero-shot fashion where no adaptation is involved.
Distributed Actor-Learner Navigation Learning Framework. To train models for the various language grounded navigation tasks like VLN and NDH, we develop a distributed actor-learner learning infrastructure1. The framework design is inspired by IMPALA (Espeholt et al., 2018) and uses its off-policy correction method called V-trace to efficiently scale reinforcement learning methods to thousands of machines. The framework additionally supports a variety of supervision strategies important for navigation tasks such as teacher-forcing (Anderson et al., 2018b), student-forcing (Anderson et al., 2018b) and mixed supervision (Thomason et al., 2019).
The framework is built using TensorFlow (Abadi et al., 2016) and supports ML accelerators (GPU, TPU)." }, { "heading": "3 ENVIRONMENT-AGNOSTIC MULTITASK LEARNING", "text": "" }, { "heading": "3.1 OVERVIEW", "text": "Our environment-agnostic multitask navigation model is illustrated in Figure 1. First, we adapt the reinforced cross-modal matching (RCM) model (Wang et al., 2019) and make it seamlessly transfer across tasks by sharing all the learnable parameters for both NDH and VLN, including the joint word embedding layer, language encoder, trajectory encoder, cross-modal attention module (CM-ATT), and action predictor. Furthermore, to learn the environment-agnostic representation zt, we equip the navigation model with an environment classifier whose objective is to predict which house the agent is in. Note that a gradient reversal layer (Ganin & Lempitsky, 2015) is introduced between the trajectory encoder and the environment classifier to reverse the gradients backpropagated to the trajectory encoder, making it learn representations that are environment-agnostic and thus more generalizable in unseen environments. During training, the environment classifier minimizes the environment classification loss Lenv, while the trajectory encoder maximizes Lenv and minimizes the navigation loss Lnav. The other modules are optimized with the navigation loss Lnav simultaneously. Below we introduce multitask reinforcement learning and environment-agnostic representation learning. A more detailed model architecture is presented in Section 4.
1The identity is not disclosed to respect the anonymity of the submission." }, { "heading": "3.2 MULTITASK REINFORCEMENT LEARNING", "text": "Interleaved Multitask Data Sampling. To avoid overfitting to individual tasks, we adopt an interleaved multitask data sampling strategy to train the model. Particularly, each data sample within a mini-batch can be from either task, so the VLN instruction-trajectory pairs and NDH dialog-trajectory pairs are interleaved in a mini-batch even though they may have different learning objectives.
Reward Shaping. Following prior art (Wang et al., 2018; 2019), we first implement a discounted cumulative reward function R for the VLN and NDH tasks:
$$R(s_t, a_t) = \sum_{t'=t}^{T} \gamma^{t'-t} r(s_{t'}, a_{t'}), \quad \text{where } r(s_{t'}, a_{t'}) = \begin{cases} d(s_{t'}, v_{tar}) - d(s_{t'+1}, v_{tar}) & \text{if } t' < T \\ \mathbb{1}[d(s_T, v_{tar}) \leq d_{th}] & \text{if } t' = T \end{cases} \quad (1)$$
where γ is the discount factor, d(s_{t'}, v_{tar}) is the distance between state s_{t'} and the target location v_{tar}, and d_{th} is the maximum distance from v_{tar} at which the agent is allowed to terminate for success.
Different from VLN, NDH is essentially room navigation rather than point navigation, because the agent is expected to reach a room that contains the target object. Suppose the goal room is occupied by a set of nodes {v_i}_1^N; we then replace the distance function d(s_t, v_{tar}) in Equation 1 with the minimum distance to the goal room, d_room(s_t, {v_i}_1^N), for NDH:
$$d_{room}(s_t, \{v_i\}_1^N) = \min_{1 \leq i \leq N} d(s_t, v_i) \quad (2)$$
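To make the shaped reward concrete, the following is a minimal Python sketch of Eqs. (1)-(2) for NDH; the graph-distance callable `dist`, the threshold and discount values, and all names are illustrative assumptions rather than the authors' released code.

```python
from typing import Callable, List, Sequence

def make_ndh_reward(goal_room_nodes: Sequence[str],
                    dist: Callable[[str, str], float],  # assumed shortest-path distance on the nav graph
                    d_th: float = 3.0,                  # success threshold (value assumed)
                    gamma: float = 0.95):               # discount factor (value assumed)
    """Builds the shaped NDH reward of Eqs. (1)-(2): progress toward the goal ROOM."""

    def d_room(node: str) -> float:
        # Eq. (2): minimum distance to any node inside the goal room.
        return min(dist(node, v) for v in goal_room_nodes)

    def step_reward(path: List[str], t: int) -> float:
        # Eq. (1): distance delta at intermediate steps, success indicator at the end.
        if t < len(path) - 1:
            return d_room(path[t]) - d_room(path[t + 1])
        return 1.0 if d_room(path[t]) <= d_th else 0.0

    def discounted_return(path: List[str], t: int) -> float:
        T = len(path) - 1
        return sum(gamma ** (tp - t) * step_reward(path, tp) for tp in range(t, T + 1))

    return discounted_return
```

For VLN, the same construction applies with `goal_room_nodes` replaced by the single target node, recovering the point-navigation reward of Eq. (1).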
Navigation Loss. Since human demonstrations are available for both VLN and NDH tasks, we use behavior cloning to constrain the learning algorithm to model state-action spaces that are most relevant to each task. Following previous works (Wang et al., 2019), we also use reinforcement learning to aid the agent's ability to recover from erroneous actions in unseen environments. During multitask navigation model training, we adopt a mixed training strategy of reinforcement learning and behavior cloning, so the navigation loss function is:
$$\mathcal{L}_{nav} = -\mathbb{E}_{a_t \sim \pi}[R(s_t, a_t) - b] - \mathbb{E}[\log \pi(a_t^* | s_t)] \quad (3)$$
where we use REINFORCE policy gradients (Williams, 1992) and supervised learning gradients to update the policy π. Here b is the estimated baseline to reduce the variance and a*t is the human-demonstrated action." }, { "heading": "3.3 ENVIRONMENT-AGNOSTIC REPRESENTATION LEARNING", "text": "To further improve the generalizability of the navigation policy, we propose to learn a latent environment-agnostic representation that is invariant among seen environments. We would like to get rid of environment-specific features that are irrelevant to general navigation (e.g. unique house appearances), preventing the model from overfitting to specific seen environments. We can reformulate the navigation policy as
$$\pi(a_t|s_t) = p(a_t|z_t, s_t)\, p(z_t|s_t) \quad (4)$$
where zt is a latent representation.
As shown in Figure 1, p(at|zt, st) is modeled by the policy module (including CM-ATT and the action predictor) and p(zt|st) is modeled by the trajectory encoder. In order to learn the environment-agnostic representation, we employ an environment classifier and a gradient reversal layer (Ganin & Lempitsky, 2015). The environment classifier is parameterized to predict the identity of the house where the agent is, so its loss function Lenv is defined as
$$\mathcal{L}_{env} = -\mathbb{E}[\log p(y = y^*|z_t)] \quad (5)$$
where y* is the ground-truth house label. The gradient reversal layer has no parameters. It acts as an identity transform during forward-propagation, but multiplies the gradient by −λ and passes it to the trajectory encoder during back-propagation. Therefore, in addition to minimizing the navigation loss Lnav, the trajectory encoder also maximizes the environment classification loss Lenv, trying to increase the entropy of the classifier in an adversarial manner, while the classifier minimizes the classification loss conditioned on the latent representation zt.
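Since the framework is built with TensorFlow (Abadi et al., 2016), the gradient reversal layer described above can be sketched with `tf.custom_gradient`; this is a generic rendering of the Ganin & Lempitsky (2015) construction with placeholder names, not the authors' code.

```python
import tensorflow as tf

def gradient_reversal(lam: float = 1.0):
    """Identity on the forward pass; multiplies incoming gradients by -lam on the backward pass."""
    @tf.custom_gradient
    def reverse(x):
        def grad(dy):
            return -lam * dy  # flip and scale gradients reaching the trajectory encoder
        return tf.identity(x), grad
    return reverse

# Usage sketch: z_t is the trajectory-encoder output; the classifier's gradients
# reach the encoder with flipped sign, so the encoder effectively maximizes L_env.
# env_logits = env_classifier(gradient_reversal(lam)(z_t))
```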
" }, { "heading": "4 MODEL ARCHITECTURE", "text": "Language Encoder. The natural language guidance (instruction or dialog) is tokenized and embedded as a sequence of word vectors X = {x1, x2, ..., xn}, where the word vectors xi are initialized randomly. The vocabulary is restricted to tokens that occur at least five times in the training instructions (the vocabulary used when jointly training VLN and NDH tasks is the union of the two tasks' vocabularies). All out-of-vocabulary tokens are mapped to a single out-of-vocabulary identifier. The token sequence is encoded using a bi-directional LSTM (Schuster & Paliwal, 1997) to create HX as follows:
$$H^X = [h^X_1; h^X_2; ...; h^X_n], \quad h^X_t = \sigma(\overrightarrow{h}^X_t, \overleftarrow{h}^X_t) \quad (6)$$
$$\overrightarrow{h}^X_t = \mathrm{LSTM}(x_t, \overrightarrow{h}^X_{t-1}), \quad \overleftarrow{h}^X_t = \mathrm{LSTM}(x_t, \overleftarrow{h}^X_{t+1}) \quad (7)$$
where the forward and backward hidden states are those of the forward and backward LSTM layers at time step t respectively, and the σ function is used to combine them into hXt.
Trajectory Encoder. Similar to benchmark models (Fried et al., 2018; Wang et al., 2019; Huang et al., 2019b), at each time step t, the agent perceives a 360-degree panoramic view at its current location. The view is discretized into k view angles (k = 36 in our implementation, 3 elevations by 12 headings at 30-degree intervals). The image at view angle i, heading angle φ and elevation angle θ is represented by a concatenation of the pre-trained CNN image features with the 4-dimensional orientation feature [sin φ; cos φ; sin θ; cos θ] to form vt,i. The visual input sequence V = {v1, v2, ..., vm} is encoded using an LSTM to create HV as follows:
$$H^V = [h^V_1; h^V_2; ...; h^V_m], \quad \text{where } h^V_t = \mathrm{LSTM}(v_t, h^V_{t-1}) \quad (8)$$
Here vt = Attention(hVt−1, vt,1..k) is the attention-pooled representation of all view angles using the previous agent state as the query. We use dot-product attention (Vaswani et al., 2017) hereafter.
Policy Module. The policy module comprises a cross-modal attention (CM-ATT) unit as well as an action predictor. The agent learns a policy πθ over parameters θ that maps the natural language instruction X and the initial visual scene v1 to a sequence of actions [a1, a2, ..., an]. The action space, which is common to VLN and NDH tasks, consists of navigable directions from the current location. The available actions at time t are denoted as ut,1..l, where ut,j is the representation of the navigable direction j from the current location, obtained similarly to vt,i. The number of available actions, l, varies per location, since graph node connectivity varies. As in Wang et al. (2019), the model predicts the probability pd of each navigable direction d using a bilinear dot product:
$$p_d = \mathrm{softmax}([h^V_t; c^{text}_t; c^{visual}_t]\, W_c\, (u_{t,d} W_u)^{T}) \quad (9)$$
where c^text_t = Attention(h^V_t, h^X_{1..n}) and c^visual_t = Attention(c^text_t, v_{t,1..k}). Wc and Wu are learnable parameters.
Environment Classifier. The environment classifier is a two-layer perceptron with a SoftMax layer as the last layer. Given the latent representation zt (which is hVt in our setting), the classifier generates a probability distribution over the house labels.
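To ground the encoder and policy interfaces above, here is a hedged TensorFlow sketch of the dot-product attention pooling and the bilinear action scoring of Eq. (9); all tensor shapes and variable names are illustrative assumptions.

```python
import tensorflow as tf

def dot_product_attention(query, keys):
    """Pools `keys` [B, n, d] into a context vector [B, d] given `query` [B, d]."""
    scores = tf.einsum('bd,bnd->bn', query, keys)          # similarity per key
    weights = tf.nn.softmax(scores, axis=-1)
    return tf.einsum('bn,bnd->bd', weights, keys)

def action_logits(h_v, c_text, c_visual, u, W_c, W_u):
    """Bilinear scoring of Eq. (9): one logit per navigable direction in `u` [B, l, d_u]."""
    state = tf.concat([h_v, c_text, c_visual], axis=-1)    # [B, d_s]
    left = tf.matmul(state, W_c)                           # [B, d], with W_c [d_s, d]
    right = tf.einsum('bld,dk->blk', u, W_u)               # [B, l, d], with W_u [d_u, d]
    return tf.einsum('bd,bld->bl', left, right)            # softmax over l yields p_d
```

Note that the same attention primitive serves for both c^text_t (state attending to language states) and c^visual_t (text context attending to view angles), which is why a single helper suffices here.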
" }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 EXPERIMENTAL SETUP", "text": "Implementation Details. In the experiments, we use a 2-layer bi-directional LSTM for the instruction encoder, where the size of the LSTM cells is 256 units in each direction. The inputs to the encoder are 300-dimensional embeddings initialized randomly. For the visual encoder, we use a 2-layer LSTM with a cell size of 512 units. The encoder inputs are image features derived as mentioned in Section 4. The cross-modal attention layer size is 128 units. The environment classifier has one hidden layer of size 128 units followed by an output layer of size equal to the number of classes. During training, some episodes in the batch are identical to available human demonstrations in the training dataset, where the objective is to increase the agent's likelihood of choosing human actions (behavioral cloning (Bain & Sammut, 1999)). The rest of the episodes are constructed by sampling from the agent's own policy. In the experiments, unless otherwise stated, we use the entire dialog history from the NDH task for model training. All the reported results in subsequent studies are averages of at least 3 independent runs.
Evaluation Metrics. The agents are evaluated on two datasets, namely Validation Seen, which contains new paths from the training environments, and Validation Unseen, which contains paths from previously unseen environments. The evaluation metrics for the VLN task are as follows: Path Length (PL) measures the total length of the predicted path; Navigation Error (NE) measures the distance between the last nodes in the predicted and the reference paths; Success Rate (SR) measures how often the last node in the predicted path is within some threshold distance of the last node in the reference path; Success weighted by Path Length (SPL) (Anderson et al., 2018a) measures Success Rate weighted by the normalized Path Length; and Coverage weighted by Length Score (CLS) (Jain et al., 2019) measures the predicted path's conformity to the reference path weighted by length score. For the NDH task, the agent's progress is defined as the reduction (in meters) of the distance to the goal region between the agent's first position and its last position (Thomason et al., 2019)." }, { "heading": "5.2 ENVIRONMENT-AGNOSTIC MULTITASK LEARNING", "text": "Table 1 shows the results of training the navigation model using environment-agnostic learning (EnvAg) as well as multitask learning (MT-RCM). First, both learning methods independently help the agent learn a more generalized navigation policy, as evidenced by the significant reduction in the agent's performance gap between seen and unseen environments. For instance, the performance gap in the agent's goal progress on the NDH task drops from 3.85m to 0.92m using multitask learning, and the agent's success rate gap on the VLN task between seen and unseen datasets drops from 9.26% to 8.39% using environment-agnostic learning. Second, the two techniques are complementary—the agent's performance when trained with both techniques simultaneously improves on unseen environments compared to when trained with either separately. Finally, we note here that MT-RCM + EnvAg outperforms the state-of-the-art goal progress of 2.10m (Thomason et al., 2019) on the NDH validation unseen dataset by more than 120%. At the same time, it outperforms the equivalent RCM baseline (Wang et al., 2019) of 40.6% success rate by more than 16% (relative measure) on the VLN validation unseen dataset." }, { "heading": "5.3 MULTITASK LEARNING", "text": "Next, we conduct studies to examine cross-task transfer using multitask learning alone. One of the main advantages of multitask learning is that under-represented tokens in each of the individual tasks get a significant boost in the number of training samples.2 Figure 2 illustrates that tokens with fewer than 40 occurrences sometimes end up with more than 300 occurrences during joint training.
2We report the performance of the equivalent RCM model without intrinsic reward as the benchmark.
To examine the impact of dialog history in the NDH task, we conduct studies with access to different parts of the dialog—the target object t0, the last oracle answer Ai, the prefacing navigator question Qi, and the full dialog history. Table 2 shows the results of jointly training the MT-RCM model on VLN and NDH tasks. The MT-RCM model learns a generalized policy that consistently outperforms the competing model with access to similar parts of the dialog on previously unseen environments. As noted before, multitask learning significantly reduces the gap between the agent's performance on previously seen and unseen environments for both tasks. Furthermore, we see a consistent and gradual increase in the success rate of MT-RCM on the VLN task as it is trained on paths with richer dialog history from the NDH task.
This shows that the agent benefits from more complete information about the path, implying the importance the agent gives to the language instructions in the task.
We also investigate the impact of parameter sharing of the language encoder for both tasks. As shown in Table 3, the model with a shared language encoder for NDH and VLN tasks outperforms the model that has separate language encoders for the two tasks, hence demonstrating the importance of parameter sharing during multitask learning. A more detailed analysis can be found in the Appendix." }, { "heading": "5.4 ENVIRONMENT-AGNOSTIC LEARNING", "text": "From Table 1, it can be seen that both VLN and NDH tasks benefit from environment-agnostic learning independently. To further examine the generalization property of the environment-agnostic objective, we train a model with the opposite objective—learning to correctly predict the navigation environments by removing the gradient reversal layer (environment-aware learning). Interesting results are observed in Table 4: environment-aware learning leads to overfitting on the training dataset (performance on environments seen during training consistently increases for both tasks), while environment-agnostic learning leads to a more generalizable policy which performs better on previously unseen environments. Figure 3 further shows that due to the environment-aware objective, the model learns to represent visual inputs from the same environment closer to each other while the representations of different environments are farther from each other, resulting in a clustering effect. On the other hand, the environment-agnostic objective leads to more general representations across different environments, which results in better performance on unseen environments." }, { "heading": "5.5 REWARD SHAPING FOR NDH TASK", "text": "As discussed in Section 3.2, we conducted studies to shape the reward for the NDH task. The results in Table 5 indicate that incentivizing the agent to get closer to the goal room is better than incentivizing it to get closer to the exact goal location, because the former is aligned with the objective of the NDH task, which is to reach the room containing the goal object. A detailed ablation presented in the Appendix shows that the same holds true consistently as the agent is provided access to different parts of the dialog history." }, { "heading": "6 CONCLUSION", "text": "In this work, we show that a model trained using the environment-agnostic multitask learning approach learns a generalized policy for the two natural language grounded navigation tasks. It closes the gap between seen and unseen environments, learns more generalized environment representations, and effectively transfers knowledge across tasks, outperforming baselines on both tasks simultaneously by a significant margin. At the same time, the two approaches independently benefit agent learning and are complementary to each other. There are possible future extensions to our work—the MT-RCM can further be adapted to other language-grounded navigation datasets, such as those using Street View (e.g., Touchdown (Chen et al., 2019), TalkTheWalk (de Vries et al., 2018)); and complementary techniques like environmental dropout (Tan et al., 2019) can be combined with environment-agnostic learning to learn more general representations." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 REWARD SHAPING FOR NDH TASK", "text": "Table 6 presents a more detailed ablation of Table 5 using different parts of the dialog history.
The results show that agents rewarded for getting closer to the goal room consistently outperform agents rewarded for getting closer to the exact goal location." }, { "heading": "A.2 DETAILED ABLATION ON PARAMETER SHARING OF LANGUAGE ENCODER", "text": "Table 7 presents a more detailed analysis of Table 3 with access to different parts of the dialog history. The models with a shared language encoder consistently outperform those with separate encoders." }, { "heading": "A.3 PERFORMANCE GAP BETWEEN SEEN AND UNSEEN ENVIRONMENTS", "text": "As mentioned in Section 5.2, both the multitask learning and environment-agnostic learning methods reduce the agent's performance gap between seen and unseen environments, as demonstrated in Figure 4." } ]
2019
null
SP:a0ba8e10e93f74cf923317f94b7dcd7f880d04c3
[ "This paper proposed a dual graph representation method to learn the representation of nodes in a graph. In particular, it learns the embedding of paired nodes simultaneously for multiple times, and use the mean values as the final representation. The experimental result demonstrates some improvement over existing methods. Overall, the idea is presented clearly and the writing is well structured. But the novelty is limited. Specifically, ", "This paper extends GraphSAGE in several dimensions: 1) applying attention when aggregating neighbors (already used by GAT and many other approached); 2) Ensembling node embedding by applying DualENC multiple times on positive pairs selected by random walk (this is doing aggregation of neighborhood again); and 3) adding global bias. All this makes the proposed method an incremental extension of existing solutions. There is no theoretical justification why these extensions should work." ]
Graph representation learning embeds nodes in large graphs as low-dimensional vectors and benefits many downstream applications. Most embedding frameworks, however, are inherently transductive and unable to generalize to unseen nodes or learn representations across different graphs. Inductive approaches, such as GraphSAGE, neglect different contexts of nodes and cannot learn node embeddings dually. In this paper, we present a context-aware unsupervised dual encoding framework, CADE, that generates representations of nodes by combining real-time neighborhood structure with neighbor-attentioned representation, while preserving extra memory of known nodes. Experimentally, we show that our approach is effective in comparison to state-of-the-art methods.
[]
[ { "authors": [ "Talwar", "Paul A. Tucker", "Vincent Vanhoucke", "Vijay Vasudevan", "Fernanda B. Viégas", "Oriol Vinyals", "Pete Warden", "Martin Wattenberg", "Martin Wicke", "Yuan Yu", "Xiaoqiang Zheng" ], "title": "Tensorflow: Large-scale machine learning on heterogeneous distributed systems", "venue": null, "year": 2016 }, { "authors": [ "Sami Abu-El-Haija", "Bryan Perozzi", "Rami Al-Rfou", "Alexander A. Alemi" ], "title": "Watch your step: Learning node embeddings via graph attention", "venue": "In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Smriti Bhagat", "Graham Cormode", "S. Muthukrishnan" ], "title": "Node classification in social networks", "venue": "In Social Network Data Analytics,", "year": 2011 }, { "authors": [ "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Deep gaussian embedding of attributed graphs: Unsupervised inductive learning via ranking", "venue": null, "year": 2017 }, { "authors": [ "HongYun Cai", "Vincent W. Zheng", "Kevin Chen-Chuan Chang" ], "title": "A comprehensive survey of graph embedding: Problems, techniques, and applications", "venue": "IEEE Trans. Knowl. Data Eng.,", "year": 2018 }, { "authors": [ "Jan K Chorowski", "Dzmitry Bahdanau", "Dmitriy Serdyuk", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Attention-based models for speech recognition", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Tyler Derr", "Yao Ma", "Jiliang Tang" ], "title": "Signed graph convolutional networks", "venue": "In IEEE International Conference on Data Mining, ICDM 2018,", "year": 2018 }, { "authors": [ "Robert Desimone", "John Duncan" ], "title": "Neural mechanisms of selective visual attention", "venue": "Annual review of neuroscience,", "year": 1995 }, { "authors": [ "Chris H.Q. Ding", "Xiaofeng He", "Hongyuan Zha", "Ming Gu", "Horst D. Simon" ], "title": "A min-max cut algorithm for graph partitioning and data clustering", "venue": "In Proceedings of the 2001 IEEE International Conference on Data Mining,", "year": 2001 }, { "authors": [ "David K. Duvenaud", "Dougal Maclaurin", "Jorge Aguilera-Iparraguirre", "Rafael Gómez-Bombarelli", "Timothy Hirzel", "Alán Aspuru-Guzik", "Ryan P. Adams" ], "title": "Convolutional networks on graphs for learning molecular fingerprints", "venue": "In NIPS,", "year": 2015 }, { "authors": [ "Hongyang Gao", "Zhengyang Wang", "Shuiwang Ji" ], "title": "Large-scale learnable graph convolutional networks", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2018,", "year": 2018 }, { "authors": [ "Palash Goyal", "Emilio Ferrara" ], "title": "Graph embedding techniques, applications, and performance: A survey", "venue": "Knowl.-Based Syst.,", "year": 2018 }, { "authors": [ "Aditya Grover", "Jure Leskovec" ], "title": "node2vec: Scalable feature learning for networks", "venue": "In KDD,", "year": 2016 }, { "authors": [ "William L. Hamilton", "Rex Ying", "Jure Leskovec" ], "title": "Representation learning on graphs: Methods and applications", "venue": "IEEE Data Eng. Bull.,", "year": 2017 }, { "authors": [ "William L. 
Hamilton", "Rex Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Laurent Itti", "Christof Koch", "Ernst Niebur" ], "title": "A model of saliency-based visual attention for rapid scene analysis", "venue": "IEEE Transactions on pattern analysis and machine intelligence,", "year": 1998 }, { "authors": [ "Mital Kinderkhedia" ], "title": "Learning representations of graph data - A survey", "venue": "CoRR, abs/1906.02989,", "year": 2019 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "CoRR, abs/1412.6980,", "year": 2014 }, { "authors": [ "Thomas N. Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Qimai Li", "Zhichao Han", "Xiao-Ming Wu" ], "title": "Deeper insights into graph convolutional networks for semi-supervised learning", "venue": "In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Minh-Thang Luong", "Hieu Pham", "Christopher D Manning" ], "title": "Effective approaches to attentionbased neural machine translation", "venue": "arXiv preprint arXiv:1508.04025,", "year": 2015 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "Feiping Nie", "Wei Zhu", "Xuelong Li" ], "title": "Unsupervised large graph embedding", "venue": "In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February", "year": 2017 }, { "authors": [ "Shirui Pan", "Jia Wu", "Xingquan Zhu", "Chengqi Zhang", "Yang Wang" ], "title": "Tri-party deep network representation", "venue": "In IJCAI,", "year": 2016 }, { "authors": [ "Bryan Perozzi", "Rami Al-Rfou", "Steven Skiena" ], "title": "Deepwalk: online learning of social representations", "venue": "In KDD, pp", "year": 2014 }, { "authors": [ "Gerard Salton", "Clement T. 
Yu" ], "title": "On the construction of effective vocabularies for information retrieval", "venue": "In Proceedings of the 1973 meeting on Programming languages and information retrieval,", "year": 1973 }, { "authors": [ "Prithviraj Sen", "Galileo Namata", "Mustafa Bilgic", "Lise Getoor", "Brian Gallagher", "Tina Eliassi-Rad" ], "title": "Collective classification in network data", "venue": "AI Magazine,", "year": 2008 }, { "authors": [ "Min Joon Seo", "Aniruddha Kembhavi", "Ali Farhadi", "Hannaneh Hajishirzi" ], "title": "Bidirectional attention flow for machine comprehension", "venue": null, "year": 2016 }, { "authors": [ "Xiaofei Sun", "Jiang Guo", "Xiao Ding", "Ting Liu" ], "title": "A general framework for content-enhanced network representation learning", "venue": null, "year": 2016 }, { "authors": [ "Jian Tang", "Meng Qu", "Mingzhe Wang", "Ming Zhang", "Jun Yan", "Qiaozhu Mei" ], "title": "LINE: large-scale information network embedding", "venue": "In WWW,", "year": 2015 }, { "authors": [ "Rakshit Trivedi", "Hanjun Dai", "Yichen Wang", "Le Song" ], "title": "Know-evolve: Deep temporal reasoning for dynamic knowledge graphs", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Daixin Wang", "Peng Cui", "Wenwu Zhu" ], "title": "Structural deep network embedding", "venue": "In KDD,", "year": 2016 }, { "authors": [ "Hongwei Wang", "Jia Wang", "Jialin Wang", "Miao Zhao", "Weinan Zhang", "Fuzheng Zhang", "Xing Xie", "Minyi Guo" ], "title": "Graphgan: Graph representation learning with generative adversarial nets", "venue": "In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Xiao Wang", "Peng Cui", "Jing Wang", "Jian Pei", "Wenwu Zhu", "Shiqiang Yang" ], "title": "Community preserving network embedding", "venue": "In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February", "year": 2017 }, { "authors": [ "Xiaokai Wei", "Linchuan Xu", "Bokai Cao", "Philip S. Yu" ], "title": "Cross view link prediction by learning noise-resilient representation consensus", "venue": "In Proceedings of the 26th International Conference on World Wide Web, WWW 2017,", "year": 2017 }, { "authors": [ "Kelvin Xu", "Jimmy Ba", "Ryan Kiros", "Kyunghyun Cho", "Aaron C. Courville", "Ruslan Salakhutdinov", "Richard S. Zemel", "Yoshua Bengio" ], "title": "Show, attend and tell: Neural image caption generation with visual attention", "venue": "In ICML, pp. 2048–2057,", "year": 2015 }, { "authors": [ "Cheng Yang", "Zhiyuan Liu", "Deli Zhao", "Maosong Sun", "Edward Y. Chang" ], "title": "Network representation learning with rich text information", "venue": "In IJCAI,", "year": 2015 }, { "authors": [ "Rex Ying", "Ruining He", "Kaifeng Chen", "Pong Eksombatchai", "William L. Hamilton", "Jure Leskovec" ], "title": "Graph convolutional neural networks for web-scale recommender systems", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2018,", "year": 2018 }, { "authors": [ "Zhitao Ying", "Jiaxuan You", "Christopher Morris", "Xiang Ren", "William L. 
Hamilton", "Jure Leskovec" ], "title": "Hierarchical graph representation learning with differentiable pooling", "venue": "In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Marinka Zitnik", "Jure Leskovec" ], "title": "Predicting multicellular function through multi-layer tissue", "venue": "networks. Bioinformatics,", "year": 2017 } ]
[ { "heading": null, "text": "Graph representation learning embeds nodes in large graphs as low-dimensional vectors and benefit to many downstream applications. Most embedding frameworks, however, are inherently transductive and unable to generalize to unseen nodes or learn representations across different graphs. Inductive approaches, such as GraphSAGE, neglect different contexts of nodes and cannot learn node embeddings dually. In this paper, we present a context-aware unsupervised dual encoding framework, CADE, to generate representation of nodes by combining real-time neighborhood structure with neighbor-attentioned representation, and preserving extra memory of known nodes. Experimently, we exhibit that our approach is effective by comparing to state-of-the-art methods." }, { "heading": "1 INTRODUCTION", "text": "The study of real world graphs, such as social network analysis (Hamilton et al. (2017a)), molecule screening (Duvenaud et al. (2015)), knowledge base reasoning (Trivedi et al. (2017)), and biological protein-protein networks (Zitnik & Leskovec (2017)), evolves with the development of computing technologies. Learning vector representations of graphs is effective for a variety of prediction and graph analysis tasks (Grover & Leskovec (2016); Tang et al. (2015)). High-dimensional information about neighbors of nodes are represented by dense vectors, which can be fed to off-the-shelf approaches to tasks, such as node classification (Wang et al. (2017); Bhagat et al. (2011)), link prediction (Perozzi et al. (2014); Wei et al. (2017)), node clustering (Nie et al. (2017); Ding et al. (2001)), recommender systems (Ying et al. (2018a)) and visualization (Maaten & Hinton (2008)).\nThere are mainly two types of models for graph representation learning. Transductive approaches (Perozzi et al. (2014); Grover & Leskovec (2016); Tang et al. (2015)) are able to learn representations of existing nodes but unable to generalize to new nodes. However, in real-world evolving graphs such as social networks, new users will join and must be represented. Inductive approaches were proposed to address this issue. GraphSAGE (Hamilton et al. (2017b)), a hierarchical sampling and aggregating framework, successfully leverages feature information to generate embeddings of the new nodes. However, GraphSAGE has its own faults. Firstly, it samples all neighborhood nodes randomly and uniformly; secondly, it treats the output of encoder as the final representation of node.\nBased on the hierarchical framework of GraphSAGE, GAT (Velickovic et al. (2017)) uses given class labels to guide attention over neighborhood so as to aggregate useful feature information. However, without knowledge of ground-truth class labels, it is difficult for unsupervised approaches to apply attention. To address this issue, we introduce a dual encoding framework for unsupervised inductive representation learning of graphs. Instead of learning self-attention over neighborhoods of nodes, we exploit the bi-attention between representations of two nodes that co-occur in a short random-walk (which we call a positive pair).\nIn Figure 1, we illustrate how nodes are embedded into low-dimensional vectors, where each node v has an optimal embeddings ov . Yet the direct output of encoder zv of GraphSAGE could be located anywhere. 
Specifically, given feature input from both sides of a positive pair (v, vp), a neural network is trained to encode the pair into K different embeddings zkv and zkvp through different sampled neighborhoods or different encoding functions. Then, a bi-attention layer is applied to generate the most adjacent matches zv|vp and zvp|v, which will be referred to as dual representations. By putting most of the attention on the pair of embeddings with the smallest difference, dual representations of nodes with less deviation are generated, which can be visualized as zv|· in Figure 1.
GraphSAGE naively assumes that unseen graph structure should be (easily) represented by known graph data. We combine the ground-truth structure and the learned dual encoder to generate the final representation. Unseen nodes can be represented based on their neighborhood structure. Current inductive approaches have no direct memory of the training nodes. We combine the ideas of both transductive and inductive approaches by associating an additive global embedding bias with each node, which can be seen as a memorable global identification of each node in the training set.
Our contributions can be summarized as follows:
• we introduce a dual encoding framework to produce context-aware representations for nodes, and conduct experiments to demonstrate its efficiency and effectiveness;
• we apply a bi-attention mechanism for dual graph representation learning, managing to learn dual representations of nodes more precisely;
• we combine the training of a transductive global bias with the inductive encoding process, as memory of nodes that have already been used for training." }, { "heading": "2 RELATED WORK", "text": "Following Cai et al. (2018), Kinderkhedia (2019) and Goyal & Ferrara (2018), there are mainly two types of approaches:" }, { "heading": "2.1 NETWORK EMBEDDING", "text": "For unsupervised embedding learning, DeepWalk (Perozzi et al. (2014)) and node2vec (Grover & Leskovec (2016)) are based on random walks extending the Skip-Gram model; LINE (Tang et al. (2015)) seeks to preserve first- and second-order proximity and trains the embedding via negative sampling; SDNE (Wang et al. (2016)) jointly uses unsupervised components to preserve second-order proximity and exploits first-order proximity in its supervised components; TRIDNR (Pan et al. (2016)), CENE (Sun et al. (2016)), TADW (Yang et al. (2015)) and GraphSAGE (Hamilton et al. (2017b)) utilize node attributes and potentially node labels. Convolutional neural networks have also been applied to graph-structured data. For instance, GCN (Kipf & Welling (2017)) proposed a simplified graph convolutional network. These graph convolutional network based approaches are (semi-)supervised. Recently, inductive graph embedding learning approaches (Hamilton et al. (2017b); Velickovic et al. (2017); Bojchevski & Günnemann (2017); Derr et al. (2018); Gao et al. (2018); Li et al. (2018); Wang et al. (2018); Ying et al. (2018b)) have produced impressive performance across several large-scale benchmarks." }, { "heading": "2.2 ATTENTION", "text": "Attention mechanisms in neural processes have been extensively studied in neuroscience and computational neuroscience (Itti et al. (1998); Desimone & Duncan (1995)) and are frequently applied in deep learning for speech recognition (Chorowski et al. (2015)), translation (Luong et al. (2015)), question answering (Seo et al. (2016)) and visual identification of objects (Xu et al. (2015)). Inspired by (Seo et al. (2016) and Abu-El-Haija et al.
(2018)), we construct a bi-attention layer upon aggregators to capture useful parts of the neighborhood." }, { "heading": "3 MODEL", "text": "Let G = {V, E, X} be an undirected graph, where a set of nodes V is connected by a set of edges E, and X ∈ R^{|V|×f} is the attribute matrix of the nodes. A global embedding bias matrix is denoted by B ∈ R^{|V|×d}, where a row of B represents the d-dimensional global embedding bias of a node. The hierarchical layer number, the embedding output of the l-th layer, and the final output embedding are denoted by L, h^l and z, respectively." }, { "heading": "3.1 CONTEXT-AWARE INDUCTIVE EMBEDDING ENCODING", "text": "The embedding generation process is described in Algorithm 1. Assume that the dual encoder has been trained and its parameters are fixed.
Algorithm 1 Context-Aware Dual-Encoding (CADE)
input: the whole graph G = (V, E); the feature matrix X; the trained DualENC
output: learned embeddings z;
1: Run random walks on G to gain a set of positive pairs P;
2: Zv ← ∅, ∀v ∈ V
3: for (v, vp) ∈ P do
4:     zv, zvp = DualENC(v, vp, G, X);
5:     Zv ← Zv ∪ zv
6:     Zvp ← Zvp ∪ zvp
7: end for
8: for v ∈ V do
9:     zv = Mean(Zv);
10: end for
After training, positive pairs are collected by random walks over the whole dataset. The features of each positive pair are passed through the dual encoder. Embeddings of nodes are generated so that the components of a pair are related and adjacent to each other; a sketch of this procedure is given below.
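The following is a minimal, illustrative Python sketch of Algorithm 1; the `dual_enc` callable stands in for the trained DualENC of Algorithm 2, and the walk sampler, adjacency format and all names are assumptions for illustration rather than the authors' released code.

```python
import random
from collections import defaultdict

import numpy as np

def sample_positive_pairs(adj, walks_per_node=100, walk_len=4):
    """Collects co-occurring node pairs from short random walks (placeholder sampler)."""
    pairs = []
    for v in adj:
        for _ in range(walks_per_node):
            cur = v
            for _ in range(walk_len):
                cur = random.choice(adj[cur])
                pairs.append((v, cur))  # start node pairs with every node on its walk
    return pairs

def cade_embeddings(adj, X, dual_enc):
    """Algorithm 1: average a node's dual representations over all its positive pairs."""
    Z = defaultdict(list)
    for v, vp in sample_positive_pairs(adj):
        z_v, z_vp = dual_enc(v, vp, adj, X)  # trained DualENC (Algorithm 2)
        Z[v].append(z_v)
        Z[vp].append(z_vp)
    return {v: np.mean(zs, axis=0) for v, zs in Z.items()}
```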
" }, { "heading": "3.2 DUAL-ENCODER WITH MULTI-SAMPLING", "text": "In this subsection, we explain the dual encoder.
Algorithm 2 DualENC
input: training graph G(V, E); node attributes X; global embedding bias matrix B; sampling times K; positive node pair (v, vp);
output: adjacent embeddings zv and zvp;
1: For nodes v and vp, generate K embeddings each, hv and hvp, using the base encoder SAGB
2: for i, j ∈ {1, ..., K} do
3:     S_{i,j} ← α(h_{vi}, h_{vpj})
4: end for
5: softmax over the flattened similarity matrix S: S_{i,j} ← exp(S_{i,j}) / Σ_{i,j} exp(S_{i,j})
6: calculate attentions av and avp: a_{vi} ← Σ_{j=1}^{K} S_{i,j}, a_{vpj} ← Σ_{i=1}^{K} S_{i,j}
7: zv ← Σ_{k=1}^{K} a_{vk} h^L_{vk}
8: zvp ← Σ_{k=1}^{K} a_{vpk} h^L_{vpk}
In the hierarchical sampling and aggregating framework (Hamilton et al. (2017b)), it is challenging and vital to select relevant neighbors as the layers go deeper and deeper. For example, given the word "mouse" and its positive node "PC", it is better to sample "keyboard", instead of "cat", as a neighbor node. However, sampling satisfying nodes according to heuristic rules layer by layer is very time-consuming, and it is difficult to learn attention over the neighborhood in unsupervised embedding learning.
As a matter of fact, such neighbor nodes are considered useful because they are more likely, when sampled as input, to produce more relevant outputs from the dual encoder. Therefore, instead of physically sampling these neighbor nodes, in Steps 2 to 6 of Algorithm 2 we directly apply a bi-attention layer on the two sets of embedding outputs, each produced from differently sampled neighborhood features, so as to locate the most relevant representation match, as a more efficient approach to exploring the most useful neighborhood.
We use the hierarchical sampling and aggregating framework as the base encoder in our experiments, but it can also be designed in many other ways.
Given node v and node vp as a positive node pair, after sampling and aggregating K times, we have K different representations hvk/hvpk corresponding to K different sampled neighborhoods. Their similarity matrix can be calculated by
$$S_{i,j} = \alpha(h_{vi}, h_{vpj}), \quad i, j = 1, ..., K \quad (1)$$
where α represents a dot-product operation.
Our goal is to find the closest neighborhood match between v and vp among the K × K possibilities, so we apply a softmax over the flattened similarity matrix and sum up by row (column). In this way, we gain different attention over the K neighborhoods of v (vp) with respect to vp (v):
$$S_{ij} \leftarrow \frac{\exp(S_{ij})}{\sum_{i,j=0}^{K,K} \exp(S_{ij})} \quad (2)$$
$$a_{vi} = \sum_{j=1}^{K} S_{ij} \quad (3)$$
$$a_{vpj} = \sum_{i=1}^{K} S_{ij} \quad (4)$$
We then sum up the l-th layer representations with attention as the dual-encoder outputs:
$$z_v = \sum_{k=1}^{K} a_{vk} h_{vk} \quad (5)$$
$$z_{vp} = \sum_{k=1}^{K} a_{vpk} h_{vpk} \quad (6)$$
To train our encoder before using it to generate the final representations of nodes, we apply a typical pair reconstruction loss function with negative sampling (Hamilton et al. (2017b)):
$$J_G(z_v) = -\log(\sigma(z_v^{T} z_{vp})) - Q \cdot \mathbb{E}_{v_n \sim P_n(v)} \log(\sigma(-z_v^{T} z_{v_n})) \quad (7)$$
where node vp co-occurs with v on a fixed-length random walk (Perozzi et al. (2014)), σ is the sigmoid function, Pn is a negative sampling distribution, and Q defines the number of negative samples. Note that zv and zvp are dual representations of each other, while zvn represents the direct encoder output of the negative sample vn.
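As a concrete reading of Steps 2-8 of Algorithm 2 (Eqs. (1)-(6) above), the following is a small NumPy sketch; shapes and names are illustrative, and the max-subtraction is only a standard numerical-stability detail not spelled out in the text.

```python
import numpy as np

def bi_attention_match(h_v, h_vp):
    """Steps 2-8 of Algorithm 2: h_v, h_vp are [K, d] stacks of candidate embeddings."""
    S = h_v @ h_vp.T                 # Eq. (1): K x K dot-product similarities
    S = np.exp(S - S.max())
    S = S / S.sum()                  # Eq. (2): softmax over the flattened matrix
    a_v = S.sum(axis=1)              # Eq. (3): attention over v's candidates
    a_vp = S.sum(axis=0)             # Eq. (4): attention over vp's candidates
    z_v = a_v @ h_v                  # Eq. (5): attention-weighted dual representation
    z_vp = a_vp @ h_vp               # Eq. (6)
    return z_v, z_vp
```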
" }, { "heading": "3.3 DUAL-ENCODER WITH MULTI-AGGREGATING", "text": "Besides learning dual representations with multiple sampling, we introduce another version of our dual encoder with multiple aggregator functions. The intuition is that, viewed from different perspectives, a node can be represented differently corresponding to different kinds of positive nodes.
In Step 1 of Algorithm 2, for a node v, we sample the neighborhood once and aggregate features with K sets of parameters, gaining K different representations hvk corresponding to K different characters of v. Given a positive node pair, v and vp, their dual representations are calculated by applying bi-attention as described in the last section. The one difference is that we use a weight vector A ∈ R^{2d} as the parameter instead of a dot product to calculate the K × K attention matrix between node v and node vp:
$$S_{ij} \leftarrow \frac{\exp(A^{T}[h_{vi} \| h_{vpj}])}{\sum_{i,j=0}^{K,K} \exp(A^{T}[h_{vi} \| h_{vpj}])} \quad (8)$$
where ·^T represents transposition and || is the concatenation operation. The rest of the calculation of the dual representations is the same as in Section 3.2.
Another difference arises during training. With K sets of parameters for aggregating, the negative sample vn is now also represented by K different embeddings. As shown in Figure 2, we set K = 5 and use different shapes to represent the embeddings of the positive node pair and the negatively sampled nodes.
As we can see in Figure 2, making sure that all embeddings of node vn are as far away as possible from those of node v is equal to maximizing the distance between their support embeddings, which are the closest pair of embeddings of v and vn. The support embeddings can be calculated by the learned dual encoder. In conclusion, our loss function can be modified as follows:
$$J_G(z_v) = -\log(\sigma(z_v^{T} z_{vp})) - Q \cdot \mathbb{E}_{v_n \sim P_n(v)} \log(\sigma(-z_v'^{T} z_{v_n})) \quad (9)$$
$$z_v, z_{vp} = \mathrm{DualENC}(v, v_p, A) \quad (10)$$
$$z_v', z_{v_n} = \mathrm{DualENC}(v, v_n, A^{*}) \quad (11)$$
where A* denotes that we stop back-propagation through A in the dual encoding for the negative sample node, since A is supposed to learn bi-attention between the positive node pair and be reused only to capture the support embeddings of v and its negative sample nodes." }, { "heading": "3.4 MEMORABLE GLOBAL BIAS IN HIERARCHICAL ENCODING", "text": "In this section, we first explain the base encoder used in our proposed dual encoding framework, and then we introduce how we apply the memorable global bias within this framework.
The general intuition of GraphSAGE is that at each iteration, nodes aggregate information from their local neighbors, and as this process iterates, nodes incrementally gather more and more information from further reaches of the graph. For generating the embedding of one specific node u, we describe the process below. First, we construct a neighborhood tree with node u as the root, Nu, by iteratively sampling the immediate neighborhood of the nodes of the last layer as children. Nodes at the l-th layer are represented by the symbol Nlu, with N0u = {u}. Then, at each iteration, each node i aggregates the representations of its children j, {h^{l−1}_j}, and of itself, h^{l−1}_i, into a single vector h^l_i, as the representation of the next layer. After L iterations, we gain the L-th layer representation of u as the final output.
While this framework generates good representations for nodes, it cannot preserve sufficient embedding information for known nodes. More specifically, for nodes that are known but trained less than average, the learned model would treat them like nodes never met before. Therefore, we intuitively apply a distinctive and trainable global bias to each node, as follows:
$$h^{l-1}_{S(i)} \leftarrow \mathrm{AGGREGATE}_l(\{h^{l-1}_j, \forall j \in S(i)\}) \quad (12)$$
$$h^l_i \leftarrow \sigma(W^l \cdot [h^{l-1}_i \| h^{l-1}_{S(i)}]) \quad (13)$$
$$h^l_i \leftarrow h^l_i + b_i, \quad l < L \quad (14)$$
$$b_i \leftarrow \mathrm{one\_hot}(i)^{T} B \quad (15)$$
where B ∈ R^{|V|×d} is the trainable global bias matrix, S(i) represents the sampled neighborhood and also the children of node i in the neighborhood tree, AGGREGATE represents the neighborhood aggregator function, and || is the vector concatenation operator. On one hand, B can be reused to produce embeddings for known nodes, or for unknown nodes connected with known ones, as a supplement to the neural network encoder. On the other hand, the global bias vectors can partially offset the uncertainty of the encoding brought by the random sampling, not only during training but also during the final generation. Last but not least, we use only one set of global biases for all nodes, which means that for any node, its hidden-layer representations are all shifted by the same bias vector. As a result, we are able to update the parameters of the aggregator function in the lowest layer together with the globally updated biases of nodes, highly increasing the training efficiency.
It is important that we apply no global bias to the last output layer, which is also the candidate dual-encoder output of a node before attention is applied. The reason is that applying an extra bias to the last layer would directly change the embedding distribution of known nodes, making it unequal to the embedding distribution of unseen nodes. In general, the implementation of the base encoder with global bias is shown in Algorithm 3 below.
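As an illustration of the per-layer update in Eqs. (12)-(15), here is a minimal NumPy sketch under assumed choices (mean aggregator, tanh as σ); Algorithm 3 below gives the full procedure.

```python
import numpy as np

def sagb_layer(h_prev_self, h_prev_children, W, B, node_idx, is_last_layer,
               aggregate=lambda H: H.mean(axis=0)):
    """One update of Eqs. (12)-(15): aggregate children, combine with self, add global bias."""
    h_nbr = aggregate(h_prev_children)                      # Eq. (12), mean aggregator assumed
    h = np.tanh(np.concatenate([h_prev_self, h_nbr]) @ W)   # Eq. (13), tanh as sigma (assumed)
    if not is_last_layer:
        h = h + B[node_idx]                                 # Eqs. (14)-(15): shared trainable bias row
    return h
```

Note how the sketch reflects the design choice discussed above: the bias row B[node_idx] is added at every hidden layer but skipped at the last layer, so the final embedding distribution stays comparable between seen and unseen nodes.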
The * in Step 8 denotes the children of node i in the neighborhood tree Nu.
Algorithm 3 SAGB: sampling and aggregating with global bias
input: node u; hierarchical depth L; weight matrices W^l; non-linearity σ; differentiable neighbor aggregator AGGREGATE_l; fixed-size uniform sampler S : v → 2^V
output: embedding zu;
1: N0u = {u};
2: for l = 1...L do
3:     Nlu ← {S(i), ∀i ∈ N(l−1)u};
4: end for
5: for l = 1...L do
6:     for i ∈ N0u ∪ N1u ∪ ... ∪ N(L−l)u do
7:         h^{l−1}_{S*(i)} ← AGGREGATE_l({h^{l−1}_j, ∀j ∈ S*(i)})
8:         h^l_i ← σ(W^l · [h^{l−1}_i || h^{l−1}_{S*(i)}])
9:         if l < L: h^l_i ← h^l_i + one_hot(i)^T B
10:     end for
11: end for
12: return zu ← h^L_u" }, { "heading": "4 EXPERIMENTS", "text": "In this section, we compare CADE against two strong baselines in an inductive and unsupervised setting, on the challenging benchmark tasks of node classification and link prediction. We also perform further studies of the proposed model in Section 4.5." }, { "heading": "4.1 DATASETS", "text": "The following graph datasets are used in the experiments; their statistics are summarized in Table 1:
• Pubmed: The PubMed Diabetes (Sen et al. (2008))1 dataset is a citation dataset consisting of scientific publications from the PubMed database pertaining to diabetes, classified into one of three classes. Each publication in the dataset is described by a TF/IDF (Salton & Yu (1973)) weighted word vector from a dictionary.
• Blogcatalog: BlogCatalog2 is a social blog directory which manages bloggers and their blogs; bloggers following each other form the network dataset.
• Reddit: Reddit3 is an internet forum where users can post or comment on any content. We use the exact dataset constructed by Hamilton et al. (2017b), where each link connects two posts if the same user comments on both of them.
• PPI: The protein-protein interaction (PPI) networks dataset contains 24 graphs corresponding to different human tissues (Zitnik & Leskovec (2017)). We use the preprocessed data also provided by Hamilton et al. (2017b)." }, { "heading": "4.2 EXPERIMENTAL SETTINGS", "text": "We compare CADE against the following approaches in a fully unsupervised and inductive setting:
• GraphSAGE: In our proposed model, CADE, the base encoder mainly originates from GraphSAGE, a hierarchical neighbor sampling and aggregating encoder for inductive learning. Three alternative aggregators are used in GraphSAGE and CADE: (1) the Mean aggregator, which simply takes the elementwise mean of the vectors in h^{k−1}_{u∈N(v)}; (2) the LSTM aggregator, which adapts LSTMs to encode a random permutation of a node's neighbors' h^{k−1}; (3) the Maxpool aggregator, which applies an elementwise max-pooling operation to aggregate information across the neighbor nodes.
• Graph2Gauss (Bojchevski & Günnemann (2017)): Unlike GraphSAGE and our method, G2G only uses the attributes of nodes to learn their representations, with no need for link information. Here we compare against G2G to show that a certain trade-off between sampling granularity control and embedding effectiveness does exist in the inductive learning scenario.
Besides the above two models, we also include the experimental results of raw features as baselines. In comparison, we call the version of the dual encoder with multiple sampling CADE-MS, and the version with multiple aggregator functions CADE-MA.
1Available at https://linqs.soe.ucsc.edu/data.
2http://www.blogcatalog.com/
3http://www.reddit.com/
4PPI is a multi-label dataset.
For CADE-MS, CADE-MA and GraphSAGE, we set the depth of hierarchical aggregation to L = 2, the neighbor sampling sizes to s1 = 20 and s2 = 10, the number of random walks per node to 100, and the walk length to 4. The number of sampling rounds in CADE-MS and the number of aggregators in CADE-MA are both set to K = 10. For all embedding learning models, the dimension of the embeddings is set to 256; for raw features, we use all the dimensions. Our approach is implemented in TensorFlow (Abadi et al. (2016)) and trained with the Adam optimizer (Kingma & Ba (2014)) at an initial learning rate of 0.0001." }, { "heading": "4.3 INDUCTIVE NODE CLASSIFICATION", "text": "We evaluate the node classification performance of the methods on the four datasets. On Reddit and PPI, we follow the same training/validation/testing split used in GraphSAGE. On Pubmed and Blogcatalog, we randomly select 10%/20%/30% of the nodes for training while the rest remain unseen. We report the averaged results over 10 random splits.
After splitting the graph dataset, the model is trained in an unsupervised manner; the learned model then computes the embeddings for all nodes, a node classifier is trained with the embeddings of the training nodes, and finally the learned classifier is evaluated with the learned embeddings of the testing nodes, i.e. the unseen nodes.
Comparisons of node classification performance on the Pubmed and Blogcatalog datasets with respect to varying ratios of unseen nodes are reported in Table 2. CADE-MS and CADE-MA outperform other approaches on Pubmed. On the Blogcatalog dataset, however, RawFeats performs best, mainly because the node features in Blogcatalog are not only directly extracted from a set of user-defined tags but are also of very high dimensionality (up to 8,189). Hence extra neighborhood information is not needed. As shown in Table 2, CADE-MA performs better than CADE-MS, and both outperform GraphSAGE and G2G. CADE-MA is capable of reducing high dimensionality while losing less information than CADE-MS, as CADE-MA is more likely to find the best aggregator function that can focus on the important features of nodes. As a result, the 256-dimensional embedding learned by CADE-MA shows the closest node classification performance to the 8k-dimensional raw features.
Comparisons among GraphSAGE and CADE with the different aggregator functions are reported in Figure 3. Each dataset contains 30% unseen nodes. In general, the CADE model shows a significant advantage over the two state-of-the-art embedding learning models in node classification on four different challenging graph datasets." }, { "heading": "4.4 INDUCTIVE LINK PREDICTION", "text": "The link prediction task evaluates how much network structural information is preserved by the embeddings. We perform the following steps: (1) mark some nodes as unseen from the training of the embedding learning models.
For Pubmed, 20% of nodes are marked as unseen; (2) randomly hide a certain percentage of edges and an equal number of non-edges as the testing edge set for link prediction, making sure not to produce any dangling node; (3) the rest of the edges are then used to form the input graph for embedding learning, and together with an equal number of non-edges form the training edge set for the link predictor; (4) after training and the inductive generation of embeddings, the training edge set and the corresponding embeddings are used to train a link predictor; (5) finally, evaluate the performance on the testing edges by the area under the ROC curve (AUC) and the average precision (AP) scores.
Comparisons of performance with respect to varying percentages of hidden edges are reported in Table 3. CADE shows the best link prediction performance on both datasets." }, { "heading": "4.5 MODEL STUDY", "text": "" }, { "heading": "4.5.1 SAMPLING COMPLEXITY IN CADE-MS", "text": "Our proposed CADE-MS requires multiple rounds of neighborhood sampling, which increases the complexity of embedding learning. Yet when comparing CADE-MS against GraphSAGE with the same quantity of sampled neighbors per node, the superiority of the CADE model over existing models is still vast. In practice, we set the number of sampling layers to L = 2 and the first-layer sampling size to 20. For the second layer, denote by s′2 the sampling size in GraphSAGE, and by s2 and K the sampling size and number of sampling rounds in CADE-MS. We compare the two methods with s′2 = s2 × K. A variant of CADE, called CADE-gb, which applies only the memorable global bias and no dual-encoding framework, has the same sampling complexity as GraphSAGE. For experimental efficiency, we conduct node classification experiments on a small subset of PPI, denoted subPPI, which includes 3 training graphs plus one validation graph and one test graph. Results are reported in Figure 4. With a much smaller sampling width, CADE-MS still outperforms the original framework significantly.
This implies that searching for the best representation match through multiple sampling and bi-attention is efficient for filtering useful neighbor nodes without supervision from any node labels, and that the context-aware dual-encoding framework is capable of improving the inductive embedding learning ability without increasing sampling complexity. We also observe that CADE-gb, the variant simply adding the memorable global bias, consistently shows an advantage across different sampling sizes." }, { "heading": "4.5.2 THE NECESSITY OF HIDDEN MEMORY", "text": "It is necessary not to apply the global bias to the encoder output. We compare the original GraphSAGE framework and CADE with the variants of our method: applying the global bias only to the last layer, and only to the former layers. As Table 4 shows, CADE-gl performs poorly, while CADE-gb demonstrates the effect of keeping memory of the hidden representation of each node." }, { "heading": "5 CONCLUSION", "text": "In this paper, we proposed CADE, an unsupervised and inductive network embedding approach which is capable of preserving local connectivity distinctively, as well as learning and memorizing global identities for seen nodes while generalizing to unseen nodes. We applied a bi-attention architecture upon hierarchical aggregating layers to capture the most relevant representations dually for any positive pair. We also presented an effective way of combining inductive and transductive ideas by allowing a trainable global embedding bias to be retrieved in the hidden layers within the hierarchical aggregating framework.
Experiments demonstrate the superiority of CADE over state-of-the-art baselines on unsupervised and inductive tasks. In the future, we will explore several possibilities, such as expanding dual encoding from pair-wise to n-wise, using the dual-encoding framework in supervised embedding learning, or combining dual encoding with G2G by learning distribution representations dually for positive pairs." } ]
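A minimal sketch of steps (4)-(5) of the link prediction protocol above. The edge feature construction and the choice of classifier are not specified in the text; the Hadamard product of endpoint embeddings and a logistic regression predictor are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, average_precision_score

def edge_features(emb, pairs):
    # Hadamard product of the two endpoint embeddings (an assumed choice).
    return emb[pairs[:, 0]] * emb[pairs[:, 1]]

def evaluate_link_prediction(emb, train_pairs, y_train, test_pairs, y_test):
    """emb: [num_nodes, d] learned node embeddings; pairs: [m, 2] integer
    arrays of node index pairs; labels are 1 for edges, 0 for non-edges."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(edge_features(emb, train_pairs), y_train)             # step (4)
    scores = clf.predict_proba(edge_features(emb, test_pairs))[:, 1]
    return (roc_auc_score(y_test, scores),                        # step (5): AUC
            average_precision_score(y_test, scores))              # step (5): AP
```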
2019
null
SP:6ebcd4fc6279bf7662a6691dae25f1bf4616432d
[ "The paper proposes an Energy-Based-Model (EBM) for scoring the possible configurations of amino acid side chain conformations in protein structures with known amino acid backbone structure. The energy of the side-chain conformation (the chi-angle) for a given amino acid in the structure is calculated as a function of a local neighbourhood of atoms (A), where each atom is embedded into a 256d vector using its cartesian coordinates, atom identity, atom side-chain position and amino acid identity. The model is trained using approximate likelihood where the model samples are generated using precalculated table (from literature) of possible Chi angles conformations conditioned on the back-bone amino acid identity and back-bone angles. The results seem comprehensive comparing the transformer based energy function parameterization with two sensible baselines as well as the Rosetta energy function which is the de facto standard tool for these types of calculations. Using rotamer recovery accuracy as the benchmark measure the empirical results are close to performance as the Rosetta energy model however always slightly worse. Further visualizations of the energy levels for different Chi angles seems to support that the learned energy function captures well known characteristics of the rotamer configuration energy landscape.", "The authors propose a predictive model based on the energy based model that uses a Transformer architecture for the energy function. It accepts as input an atom and its neighboring atoms and computes an energy for their configuration. The input features include representations of physical properties (atom identity, atom location within a side chain and amino-acid type) and spatial coordinates (x, y, z). A set of 64 atoms closest to the beta carbon of the target residue are selected and each is projected to a 256-dimensional vector. The predictive model computes an energy for the configuration of these 64 atoms surrounding a residue under investigation. The model is reported to achieve a slightly worse but comparable performance to the Rosetta energy function, the state-of-the-art method widely used in protein structure prediction and design. The authors investigate model’s outputs and hidden representations and conclude that it captures physicochemical properties relevant to the protein energy in general." ]
We propose an energy-based model (EBM) of protein conformations that operates at atomic scale. The model is trained solely on crystallized protein data. By contrast, existing approaches for scoring conformations use energy functions that incorporate knowledge of physical principles and features that are the complex product of several decades of research and tuning. To evaluate the model, we benchmark on the rotamer recovery task, the problem of predicting the conformation of a side chain from its context within a protein structure, which has been used to evaluate energy functions for protein design. The model achieves performance close to that of the Rosetta energy function, a state-of-the-art method widely used in protein structure prediction and design. An investigation of the model’s outputs and hidden representations finds that it captures physicochemical properties relevant to protein energy.
[ { "affiliations": [], "name": "Yilun Du" }, { "affiliations": [], "name": "Joshua Meier" } ]
[ { "authors": [ "Rebecca F Alford", "Andrew Leaver-Fay", "Jeliazko R Jeliazkov", "Matthew J O’Meara", "Frank P DiMaio", "Hahnbeom Park", "Maxim V Shapovalov", "P Douglas Renfrew", "Vikram K Mulligan", "Kalli Kappel" ], "title": "The rosetta all-atom energy function for macromolecular modeling and design", "venue": "Journal of chemical theory and computation,", "year": 2017 }, { "authors": [ "Ethan C. Alley", "Grigory Khimulya", "Surojit Biswas", "Mohammed AlQuraishi", "George M. Church" ], "title": "Unified rational protein engineering with sequence-only deep representation learning", "venue": "bioRxiv,", "year": 2019 }, { "authors": [ "Mohammed AlQuraishi" ], "title": "End-to-end differentiable learning of protein structure", "venue": "Cell Systems,", "year": 2019 }, { "authors": [ "Xavier I Ambroggio", "Brian Kuhlman" ], "title": "Computational design of a single amino acid sequence that can switch between two distinct protein folds", "venue": "Journal of the American Chemical Society,", "year": 2006 }, { "authors": [ "Namrata Anand", "Possu Huang" ], "title": "Generative modeling for protein structures", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Tristan Bepler", "Bonnie Berger" ], "title": "Learning protein sequence embeddings using information from structure", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "F Edward Boas", "Pehr B Harbury" ], "title": "Potential energy functions for protein design", "venue": "Current opinion in structural biology,", "year": 2007 }, { "authors": [ "Michael J Bower", "Fred E Cohen", "Roland L Dunbrack Jr." ], "title": "Prediction of protein side-chain rotamers from a backbone-dependent rotamer library: a new homology modeling tool", "venue": "Journal of molecular biology,", "year": 1997 }, { "authors": [ "Scott E Boyken", "Zibo Chen", "Benjamin Groves", "Robert A Langan", "Gustav Oberdorfer", "Alex Ford", "Jason M Gilmore", "Chunfu Xu", "Frank DiMaio", "Jose Henrique Pereira" ], "title": "De novo design of protein homooligomers with modular hydrogen-bond network–mediated specificity", "venue": null, "year": 2016 }, { "authors": [ "Miguel A Carreira-Perpinan", "Geoffrey E Hinton" ], "title": "On contrastive divergence learning", "venue": "In Aistats,", "year": 2005 }, { "authors": [ "Wendy D Cornell", "Piotr Cieplak", "Christopher I Bayly", "Ian R Gould", "Kenneth M Merz", "David M Ferguson", "David C Spellmeyer", "Thomas Fox", "James W Caldwell", "Peter A Kollman" ], "title": "A second generation force field for the simulation of proteins, nucleic acids, and organic molecules", "venue": "Journal of the American Chemical Society,", "year": 1995 }, { "authors": [ "R. Das" ], "title": "Four small puzzles that Rosetta doesn’t solve", "venue": "PLoS ONE,", "year": 2011 }, { "authors": [ "Peter Dayan", "Geoffrey E Hinton", "Radford M Neal", "Richard S Zemel" ], "title": "The helmholtz machine", "venue": "Neural computation,", "year": 1995 }, { "authors": [ "Ken A Dill" ], "title": "Dominant forces in protein", "venue": "folding. 
Biochemistry,", "year": 1990 }, { "authors": [ "Yilun Du", "Igor Mordatch" ], "title": "Implicit generation and generalization in energy-based models", "venue": "arXiv 1903.08689,", "year": 2019 }, { "authors": [ "Melissa A Edeling", "Luke W Guddat", "Renata A Fabianek", "Linda Thöny-Meyer", "Jennifer L Martin" ], "title": "Structure of ccmg/dsbe at 1.14 å resolution: high-fidelity reducing activity in an indiscriminately oxidizing environment", "venue": null, "year": 2002 }, { "authors": [ "Evan N. Feinberg", "Debnil Sur", "Zhenqin Wu", "Brooke E. Husic", "Huanghao Mai", "Yang Li", "Saisai Sun", "Jianyi Yang", "Bharath Ramsundar", "Vijay S. Pande" ], "title": "Potentialnet for molecular property prediction", "venue": "ACS Central Science,", "year": 2018 }, { "authors": [ "Justin Gilmer", "Samuel S. Schoenholz", "Patrick F. Riley", "Oriol Vinyals", "George E. Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "Advances in Neural Information Processing Systems", "year": 2014 }, { "authors": [ "Hong Guo", "Dennis R Salahub" ], "title": "Cooperative hydrogen bonding and enzyme catalysis", "venue": "Angewandte Chemie International Edition,", "year": 1998 }, { "authors": [ "Geoffrey E Hinton", "Ruslan R Salakhutdinov" ], "title": "Reducing the dimensionality of data with neural networks", "venue": null, "year": 2006 }, { "authors": [ "Lisa Holm", "Chris Sander" ], "title": "Fast and simple monte carlo algorithm for side chain optimization in proteins: application to model building by homology. Proteins: Structure, Function, and Bioinformatics", "venue": null, "year": 1992 }, { "authors": [ "Po-Ssu Huang", "Scott E Boyken", "David Baker" ], "title": "The coming of age of de novo protein", "venue": "design. 
Nature,", "year": 2016 }, { "authors": [ "John Ingraham", "Adam Riesselman", "Chris Sander", "Debora Marks" ], "title": "Learning protein structure with a differentiable simulator", "venue": null, "year": 2018 }, { "authors": [ "John Ingraham", "Vikas K Garg", "Regina Barzilay", "Tommi Jaakkola" ], "title": "Generative models for graph-based protein", "venue": null, "year": 2019 }, { "authors": [ "Matthew P Jacobson", "George A Kaminski", "Richard A Friesner", "Chaya S Rapp" ], "title": "Force field validation using protein side chain prediction", "venue": "The Journal of Physical Chemistry B,", "year": 2002 }, { "authors": [ "Joel Janin", "Shoshanna Wodak", "Michael Levitt", "Bernard Maigret" ], "title": "Conformation of amino acid side-chains in proteins", "venue": "Journal of molecular biology,", "year": 1978 }, { "authors": [ "Lin Jiang", "Eric A Althoff", "Fernando R Clemente", "Lindsey Doyle", "Daniela Röthlisberger", "Alexandre Zanghellini", "Jasmine L Gallaher", "Jamie L Betker", "Fujie Tanaka", "Carlos F Barbas" ], "title": "De novo computational design of retro-aldol enzymes", "venue": null, "year": 2008 }, { "authors": [ "William L Jorgensen", "David S Maxwell", "Julian Tirado-Rives" ], "title": "Development and testing of the opls all-atom force field on conformational energetics and properties of organic liquids", "venue": "Journal of the American Chemical Society,", "year": 1996 }, { "authors": [ "Neil P King", "Jacob B Bale", "William Sheffler", "Dan E McNamara", "Shane Gonen", "Tamir Gonen", "Todd O Yeates", "David Baker" ], "title": "Accurate design of co-assembling multi-component protein", "venue": "nanomaterials. Nature,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Brian Kuhlman", "Gautam Dantas", "Gregory C Ireton", "Gabriele Varani", "Barry L Stoddard", "David Baker" ], "title": "Design of a novel globular protein fold with atomic-level", "venue": "accuracy. 
science,", "year": 2003 }, { "authors": [ "Themis Lazaridis", "Martin Karplus" ], "title": "Effective energy functions for protein structure prediction", "venue": "Current opinion in structural biology,", "year": 2000 }, { "authors": [ "Andrew Leaver-Fay", "Matthew J O’Meara", "Mike Tyka", "Ron Jacak", "Yifan Song", "Elizabeth H Kellogg", "James Thompson", "Ian W Davis", "Roland A Pache", "Sergey Lyskov" ], "title": "Scientific benchmarks for guiding macromolecular energy function improvement", "venue": "In Methods in enzymology,", "year": 2013 }, { "authors": [ "Yann LeCun", "Sumit Chopra", "Raia Hadsell", "M Ranzato", "F Huang" ], "title": "A tutorial on energy-based learning", "venue": "Predicting structured data,", "year": 2006 }, { "authors": [ "Dennis R Livesay", "Dang H Huynh", "Sargis Dallakyan", "Donald J Jacobs" ], "title": "Hydrogen bond networks determine emergent mechanical and thermodynamic properties across a protein family", "venue": "Chemistry Central Journal,", "year": 2008 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "Alex D MacKerell Jr.", "Donald Bashford", "MLDR Bellott", "Roland Leslie Dunbrack Jr.", "Jeffrey D Evanseck", "Martin J Field", "Stefan Fischer", "Jiali Gao", "H Guo", "Sookhee Ha" ], "title": "All-atom empirical potential for molecular modeling and dynamics studies of proteins", "venue": "The journal of physical chemistry B,", "year": 1998 }, { "authors": [ "Jack B Maguire", "Scott E Boyken", "David Baker", "Brian Kuhlman" ], "title": "Rapid sampling of hydrogen bond networks for computational protein design", "venue": "Journal of chemical theory and computation,", "year": 2018 }, { "authors": [ "Elman Mansimov", "Omar Mahmood", "Seokho Kang", "Kyunghyun Cho" ], "title": "Molecular geometry prediction using a deep generative graph neural network", "venue": null, "year": 1904 }, { "authors": [ "CA McPhalen", "MNG James" ], "title": "Crystal and molecular structure of the serine proteinase inhibitor ci-2 from barley", "venue": "seeds. Biochemistry,", "year": 1987 }, { "authors": [ "Carl Pabo" ], "title": "Molecular technology: designing proteins and peptides", "venue": "Nature, 301(5897):200,", "year": 1983 }, { "authors": [ "Jasmina S Redzic", "Bruce E Bowler" ], "title": "Role of hydrogen bond networks and dynamics in positive and negative cooperative stabilization of a protein", "venue": null, "year": 2005 }, { "authors": [ "Jane S Richardson", "David C Richardson" ], "title": "Principles and patterns of protein conformation. In Prediction of protein structure and the principles of protein conformation", "venue": null, "year": 1989 }, { "authors": [ "Alexander Rives", "Siddharth Goyal", "Joshua Meier", "Demi Guo", "Myle Ott", "C. Lawrence Zitnick", "Jerry Ma", "Rob Fergus" ], "title": "Biological structure and function emerge from scaling unsupervised learning to 250 million protein", "venue": "sequences. bioRxiv,", "year": 2019 }, { "authors": [ "Andrew Senior", "John Jumper", "Demis Hassabis" ], "title": "AlphaFold: Using AI for scientific discovery, 12 2018", "venue": "URL https://deepmind.com/blog/alphafold/", "year": 2018 }, { "authors": [ "Maxim V Shapovalov", "Roland L Dunbrack Jr." 
], "title": "A smoothed backbone-dependent rotamer library for proteins derived from adaptive kernel density estimates and regressions", "venue": null, "year": 2011 }, { "authors": [ "Manfred J Sippl" ], "title": "Calculation of conformational ensembles from potentials of mena force: an approach to the knowledge-based prediction of local structures in globular proteins", "venue": "Journal of molecular biology,", "year": 1990 }, { "authors": [ "Seiji Tanaka", "Harold A Scheraga" ], "title": "Medium-and long-range interaction parameters between amino acids for predicting three-dimensional structures of proteins. Macromolecules", "venue": null, "year": 1976 }, { "authors": [ "P Tuffery", "C Etchebest", "Serge Hazout", "R Lavery" ], "title": "A new approach to the rapid determination of protein side chain conformations", "venue": "Journal of Biomolecular structure and dynamics,", "year": 1991 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Samy Bengio", "Manjunath Kudlur" ], "title": "Order matters: Sequence to sequence for sets", "venue": "arXiv preprint arXiv:1511.06391,", "year": 2015 }, { "authors": [ "G. Wang", "Jr. R.L. Dunbrack" ], "title": "Pisces: a protein sequence culling", "venue": "server. Bioinformatics,", "year": 2003 }, { "authors": [ "Jingxue Wang", "Huali Cao", "John Z.H. Zhang", "Yifei Qi" ], "title": "Computational protein design with deep learning neural networks", "venue": "Scientific Reports,", "year": 2018 }, { "authors": [ "Jinbo Xu" ], "title": "Distance-based protein folding powered by deep learning", "venue": "arXiv preprint arXiv:1811.03481,", "year": 2018 }, { "authors": [ "Kevin K. Yang", "Zachary Wu", "Frances H. Arnold" ], "title": "Machine-learning-guided directed evolution for protein engineering", "venue": "Nature Methods,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Methods for the rational design of proteins make use of complex energy functions that approximate the physical forces that determine protein conformations (Cornell et al., 1995; Jorgensen et al., 1996; MacKerell Jr et al., 1998), incorporating knowledge about statistical patterns in databases of protein crystal structures (Boas & Harbury, 2007). The physical approximations and knowledge-derived features that are included in protein design energy functions have been developed over decades, building on results from a large community of researchers (Alford et al., 2017).\nIn this work1, we investigate learning an energy function for protein conformations directly from protein crystal structure data. To this end, we propose an energy-based model using the Transformer architecture (Vaswani et al., 2017), that accepts as inputs sets of atoms and computes an energy for their configuration. Our work is a logical extension of statistical potential methods (Tanaka & Scheraga, 1976; Sippl, 1990; Lazaridis & Karplus, 2000) that fit energetic terms from data, which, in combination with physically motivated force\n∗Work performed during an internship at Facebook 1Data and code for experiments are available at https://github.com/facebookresearch/\nprotein-ebm\nfields, have contributed to the feasibility of de novo design of protein structures and functions (Kuhlman et al., 2003; Ambroggio & Kuhlman, 2006; Jiang et al., 2008; King et al., 2014).\nTo date, energy functions for protein design have incorporated extensive feature engineering, encoding knowledge of physical and biochemical principles (Boas & Harbury, 2007; Alford et al., 2017). Learning from data can circumvent the process of developing knowledge-based potential functions by automatically discovering features that contribute to the protein’s energy, including terms that are unknown or are difficult to express with rules or simple functions. Since energy functions are additive, terms learned by neural energy-based models can be naturally composed with those derived from physical knowledge.\nIn principle, neural networks have the ability to identify and represent non-additive higher order dependencies that might uncover features such as hydrogen bonding networks. Such features have been shown to have important roles in protein structure and function (Guo & Salahub, 1998; Redzic & Bowler, 2005; Livesay et al., 2008), and are important in protein design (Boyken et al., 2016). Incorporation of higher order terms has been an active research area for energy function design (Maguire et al., 2018).\nEvaluations of molecular energy functions have used as a measure of fidelity, the ability to identify native side chain configurations (rotamers) from crystal structures where the ground-truth configuration has been masked out (Jacobson et al., 2002; Bower et al., 1997). Leaver-Fay et al. (2013) introduced a set of benchmarks for the Rosetta energy function that includes the task of rotamer recovery. In the benchmark, the ground-truth configuration of the side chain is masked and rotamers (possible configurations of the side chain) are sampled and evaluated within the surrounding molecular context (the rest of the atoms in the protein structure not belonging to the side chain). 
The energy function is scored by comparing the lowest-energy rotamer (as determined by the energy function) against the rotamer that was observed in the empirically-determined crystal structure.\nThis work takes an initial step toward fully learning an atomic-resolution energy function from data. Prediction of native rotamers from their context within a protein is a restricted problem setting for exploring how neural networks might be used to learn an atomic-resolution energy function for protein design. We compare the model to the Rosetta energy function, as detailed in Leaver-Fay et al. (2013), and find that on the rotamer recovery task, deep learning-based models obtain results approaching the performance of Rosetta. We investigate the outputs and representations of the model toward understanding its representation of molecular energies and exploring relationships to physical properties of proteins.\nOur results open for future work the more general problem settings of combinatorial side chain optimization for a fixed backbone (Tuffery et al., 1991; Holm & Sander, 1992) and the inverse folding problem (Pabo, 1983) – the recovery of native sequences for a fixed backbone – which has also been used in benchmarking and development of molecular energy functions for protein design (Leaver-Fay et al., 2013)." }, { "heading": "2 BACKGROUND", "text": "Protein conformation Proteins are linear polymers composed of an alphabet of twenty canonical amino acids (residues), each of which shares a common backbone moiety responsible for formation of the linear polymeric backbone chain, and a differing side chain moiety with biochemical properties that vary from amino acid to amino acid. The energetic interplay of tight packing of side chains within the core of the protein and exposure of polar residues at the surface drives folding of proteins into stable molecular conformations (Richardson & Richardson, 1989; Dill, 1990).\nThe conformation of a protein can be described through two interchangeable coordinate systems. Each atom has a set of spatial coordinates, which up to an arbitrary rotation and translation of all coordinates describes a unique conformation. In the internal coordinate system, the conformation is described by a sequence of rigid-body motions from each atom to the next, structured as a kinematic tree. The major degrees of freedom in protein conformation are the dihedral rotations about the backbone bonds, termed phi (φ) and psi (ψ) angles, and the dihedral rotations about the side chain bonds, termed chi (χ) angles (Richardson & Richardson, 1989).\nWithin folded proteins, the side chains of amino acids preferentially adopt configurations that are determined by their molecular structure. A relatively small number of configurations separated by high energetic barriers are accessible to each side chain (Janin et al., 1978). These configurations are called rotamers. In Rosetta and other protein design methods, rotamers are commonly represented by libraries that estimate a probability distribution over side chain configurations, conditioned on the backbone φ and ψ torsion angles. We use the Dunbrack library (Shapovalov & Dunbrack Jr, 2011) for rotamer configurations.\nEnergy-based models A variety of methods have been proposed for learning distributions of high-dimensional data, e.g. generative adversarial networks (Goodfellow et al., 2014) and variational autoencoders (Kingma & Welling, 2013). 
In this work, we adopt energy-based models (EBMs) (Dayan et al., 1995; Hinton & Salakhutdinov, 2006; LeCun et al., 2006). This is motivated by their simplicity and scalability, as well as their compelling results in other domains, such as image generation (Du & Mordatch, 2019).\nIn EBMs, a scalar parametric energy function Eθ(x) is fit to the data, with θ set through a learning procedure such that the energy is low in regions around the data and high elsewhere. The energy function maps to a probability density using the Boltzmann distribution: pθ(x) = exp(−Eθ(x))/Z(θ), where Z(θ) = ∫ exp(−Eθ(x)) dx denotes the partition function.\nEBMs are typically trained using the maximum-likelihood method (ML), in which θ is adjusted to minimize KL(pD(x)||pθ(x)), the Kullback-Leibler divergence between the data and the model distribution. This corresponds to maximizing the log-likelihood of the data under the model:\nLML(θ) = Ex∼pD[log pθ(x)] = Ex∼pD[−Eθ(x) − log Z(θ)]\nFollowing Carreira-Perpinan & Hinton (2005), the gradient of the corresponding loss can be written as:\n∇θLML ≈ Ex+∼pD[∇θEθ(x+)] − Ex−∼pθ[∇θEθ(x−)]\nIntuitively, descending this gradient decreases the energy of samples from the data distribution x+ and increases the energy of samples drawn from the model x−. Sampling from pθ can be done in a variety of ways, such as Markov chain Monte Carlo or Gibbs sampling (Hinton & Salakhutdinov, 2006), possibly accelerated using Langevin dynamics (Du & Mordatch, 2019). Our method uses a simpler scheme to approximate ∇θLML, detailed in Section 3.4." }, { "heading": "3 METHOD", "text": "Our goal is to score molecular configurations of the protein side chains given a fixed target backbone structure. We define an architecture for an energy-based model and describe its training procedure." }, { "heading": "3.1 MODEL", "text": "The model calculates scalar functions, fθ(A), of size-k subsets, A, of atoms within a protein.\nSelection of atom subsets In our experiments, we choose A to be nearest-neighbor sets around the residues of the protein and set k = 64. For a given residue, we construct A to be the k atoms that are nearest to the residue’s beta carbon.\nAtom input representations Each atom in A is described by its 3D Cartesian coordinates and categorical features: (i) the identity of the atom (N, C, O, S); (ii) an ordinal label of the atom in the side chain (i.e. which specific carbon, nitrogen, etc. atom it is in the side chain); and (iii) the amino acid type (which of the 20 types of amino acids the atom belongs to). The coordinates are normalized to have zero mean across the k atoms. Each categorical feature is embedded into 28 dimensions, and the spatial coordinates are projected into 172 dimensions², which are then concatenated into a 256-dimensional atom representation. The parameters for the input embeddings and projections of spatial information are learned via training. During training, a random rotation is applied to the coordinates in order to encourage rotational invariance of the model. For visualizations, a fixed number of random rotations (100) is applied and the results are averaged.\nArchitecture In our proposed approach, fθ(A) takes the form of a Transformer model (Vaswani et al., 2017) that processes a set of atom representations. The self-attention layers allow each atom to attend to the representations of other atoms in the set, modeling the energy of the molecular configuration as a non-linear function of single, pairwise, and higher-order interactions between the atoms; a schematic sketch of this architecture (including the pooling head described next) follows. 
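A minimal sketch of the architecture, written in PyTorch (an assumption; the released code may differ). The layer sizes follow the text: three categorical embeddings of 28 dimensions plus a 172-dimensional coordinate projection give 256-dimensional atoms, processed by a 6-layer pre-LN Transformer (8 heads, feed-forward 1024, no dropout). The mean pooling and the number of ordinal side-chain labels (n_positions=30) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AtomTransformerSketch(nn.Module):
    def __init__(self, n_elements=4, n_positions=30, n_aa_types=20):
        super().__init__()
        self.elem_emb = nn.Embedding(n_elements, 28)   # atom identity: N, C, O, S
        self.pos_emb = nn.Embedding(n_positions, 28)   # ordinal side-chain label
        self.aa_emb = nn.Embedding(n_aa_types, 28)     # amino acid type
        self.coord_proj = nn.Linear(3, 172)            # zero-centered coordinates
        layer = nn.TransformerEncoderLayer(d_model=256, nhead=8,
                                           dim_feedforward=1024,
                                           dropout=0.0, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)
        self.head = nn.Sequential(nn.Linear(256, 256), nn.ReLU(),
                                  nn.Linear(256, 1))   # two-layer MLP -> scalar

    def forward(self, coords, elem, pos, aa):
        # coords: [batch, k, 3]; categorical ids: [batch, k] long tensors.
        x = torch.cat([self.elem_emb(elem), self.pos_emb(pos),
                       self.aa_emb(aa), self.coord_proj(coords)], dim=-1)
        h = self.encoder(x.transpose(0, 1)).transpose(0, 1)  # [batch, k, 256]
        return self.head(h.mean(dim=1)).squeeze(-1)          # pooled energy
```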
The final hidden representations of the Transformer are pooled across the atoms to produce a single vector, which is finally passed to a two-layer multilayer perceptron (MLP) that produces the scalar output of the model. Figure 1 illustrates the model.\nFor all experiments, we use a 6-layer Transformer with embedding dimension of 256 (split over 8 attention heads) and feed-forward dimension of 1024. The final MLP contains 256 hidden units. The models are trained without dropout. Layer normalization (Ba et al., 2016) is applied before the attention blocks." }, { "heading": "3.2 PARAMETERIZATION OF PROTEIN CONFORMATIONS", "text": "The structure of a protein can be represented by two parameterizations: (1) absolute Cartesian coordinates of the set of atoms, and (2) internal coordinates of the atoms encoded as a set of in-plane/out-of-plane rotations and displacements relative to each atom’s reference frame. Out-of-plane rotations are parameterized by χ angles, which are the primary degrees of freedom in the rotamer configurations. The coordinate systems are interchangeable.\n²The high dimensionality of the spatial projection was important to ensure a high weighting on the spatial coordinates, which proved necessary for the model to train reliably." }, { "heading": "3.3 USAGE AS AN ENERGY FUNCTION", "text": "We specify our energy function Eθ(x, c) to take an input set composed of two parts: (1) the atoms belonging to a rotamer to be predicted, x, and (2) the atoms of the surrounding molecular context, c. The energy function is defined as follows:\nEθ(x, c) = fθ(A(x, c))\nwhere A(x, c) is the set of embeddings from the k atoms nearest to the rotamer’s beta carbon." }, { "heading": "3.4 TRAINING AND LOSS FUNCTIONS", "text": "In all experiments, the energy function is trained to learn the conditional distribution of the rotamer given its context by approximately maximizing the log likelihood of the data:\nL(θ) = −Eθ(x, c) − log Zθ(c)\nTo estimate the partition function, we note that:\nlog Zθ(c) = log ∫ exp(−Eθ(x, c)) dx = log(Eq(x|c)[exp(−Eθ(x, c))/q(x|c)])\nfor some importance sampler q(x|c). Furthermore, if we assume q(x|c) is uniformly distributed on supported configurations, we obtain a simplified maximum likelihood objective given by\nL(θ) = −Eθ(x, c) − log(Eq(xi|c)[exp(−Eθ(xi, c))])\nfor some context-dependent importance sampler q(x|c). We choose our sampler q(x|c) to be an empirically collected rotamer library (Shapovalov & Dunbrack Jr, 2011) conditioned on the amino acid identity and the backbone φ and ψ angles. We write the importance sampler as a function of atomic coordinates, which are interchangeable with the angular coordinates in the rotamer library. The library consists of lists of means and standard deviations of possible χ angles for each 10-degree interval for both φ and ψ. We sample rotamers uniformly from this library, given by a continuous φ and ψ, by sampling from a weighted mixture of Gaussians of χ angles at each of the four surrounding bins, with weights given by distance to the bins via bilinear interpolation. Every candidate rotamer at each bin is assigned uniform probability. To ensure our context-dependent importance sampler effectively samples high likelihood areas in the model, we further add the real rotamer as a sample from q(x|c).\nTraining setup Models were trained for 180 thousand parameter updates using 32 NVIDIA V100 GPUs, a batch size of 16,384, and the Adam optimizer (α = 2 · 10^−4, β1 = 0.99, β2 = 0.999); a schematic sketch of the resulting per-residue training loss follows. 
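A minimal sketch (in PyTorch, an assumption) of the loss implied by the simplified objective above. The expectation over q is replaced by a Monte Carlo sum over candidates, which differs from the objective only by an additive constant log N, and the real rotamer is included among the candidates as stated.

```python
import torch

def rotamer_nll_loss(energy_fn, real_rotamer, sampled_rotamers, context):
    """Minimizing this loss maximizes L(θ) = −Eθ(x, c) − log Σ_i exp(−Eθ(x_i, c)),
    where the candidates x_i are library samples plus the real rotamer."""
    candidates = [real_rotamer] + list(sampled_rotamers)
    energies = torch.stack([energy_fn(x, context) for x in candidates])
    # Equivalent to a cross-entropy over candidates:
    # E(real) + logsumexp(-E(all candidates)).
    return energies[0] + torch.logsumexp(-energies, dim=0)
```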
We evaluated training progress using a held-out 5% subset of the training data as a validation set." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 DATASETS", "text": "We constructed a curated dataset of high-resolution PDB structures using the CullPDB database, with the following criteria: resolution finer than 1.8 Å; sequence identity less than 90%; and R value less than 0.25 as defined in Wang & R. L. Dunbrack (2003). To test the model on rotamer recovery, we use the test set of structures from Leaver-Fay et al. (2013). To prevent training on structures that are similar to those in the test set, we ran BLAST on sequences derived from the PDB structures and removed all train structures with more than 25% sequence identity to sequences in the test dataset. Ultimately, our train dataset consisted of 12,473 structures and our test dataset consisted of 129 structures." }, { "heading": "4.2 BASELINES", "text": "We compare to three baseline neural network architectures: a fully-connected network; the architecture for embedding sets in the set2set paper (Vinyals et al., 2015); and a graph neural network (Veličković et al., 2017). All models have around 10 million parameters. Details of the baseline architectures are given in Appendix A.1.2.\nResults are also compared to Rosetta. We ran Rosetta using the score12 and ref15 energy functions with the rotamer trials and rtmin protocols under default settings." }, { "heading": "4.2.1 EVALUATION", "text": "For the comparison of the model to Rosetta in Table 1, we reimplement the sampling scheme that Rosetta uses for rotamer trials evaluation. We take discrete samples from the rotamer library, with bilinear interpolation of the mean and standard deviations using the four grid points surrounding the backbone φ and ψ angles for the residue. We take discrete samples of the rotamers at µ, except that for buried residues we sample χ1 and χ2 at µ and µ ± σ as was done in Leaver-Fay et al. (2013). We define buried residues to have ≥24 Cβ neighbors within 10 Å of the residue’s Cβ (Cα for glycine residues). For buried positions we accumulate rotamers up to 98% of the distribution, and for other positions the accumulation is to 95%. We score a rotamer as recovered correctly if all χ angles are within 20° of the ground-truth residue.\nWe also use a continuous sampling scheme which approximates the empirical conditional distribution of the rotamers using a mixture of Gaussians with means and standard deviations computed by bilinear interpolation as above. Instead of sampling discretely, the component rotamers are sampled with the probabilities given by the library, and a sample is generated with the corresponding mean and standard deviation. This is the same sampling scheme used to train models, but with component rotamers now weighted by probability as opposed to uniform sampling." }, { "heading": "4.3 ROTAMER RECOVERY RESULTS", "text": "Table 1 directly compares our EBM model (which we refer to as the Atom Transformer) with two versions of the Rosetta energy function. We run Rosetta on the set of 152 proteins from the benchmark of Leaver-Fay et al. (2013). We also include published performance on the same test set from Leaver-Fay et al. (2013). As discussed above, comparable sampling strategies are used to evaluate the models, enabling a fair comparison of the energy functions. We find that a single model evaluated on the benchmark performs slightly worse than both versions of the Rosetta energy function. 
An ensemble of 10 models improves the results.\nTable 2 evaluates the performance of the energy function under alternative sampling strategies with the goal of optimizing recovery rates. We indicate performance of the Rosetta energy function on recovery rates using the rtmin protocol for continuous minimization. We evaluate the learned energy function with the continuous sampling from a mixture of Gaussians conditioned on the φ/ψ settings of the backbone angles as detailed above. We find that with ensembling the model performance is close to that of the Rosetta energy functions. We also compare to three baselines for embedding sets with similar numbers of parameters to the Atom Transformer model and find that they have weaker performance.\nBuried residues are more constrained in their configurations by tight packing of the side chains within the core of the protein. In comparison, surface residues are more free to vary. Therefore we also report performance separately on both categories. We find that the ensembled Atom Transformer has a 91.2% rotamer recovery rate for buried residues, compared to 59.5% for surface residues.\nTable 3 reports recovery rates by residue, comparing the Rosetta score12 results reported in Leaver-Fay et al. (2013) to the Atom Transformer model using the Rosetta discrete sampling method. The Atom Transformer model appears to perform well on smaller rotameric amino acids as well as polar amino acids such as glutamate/aspartate, while Rosetta performs better on larger amino acids like phenylalanine and tryptophan and more common ones like leucine." }, { "heading": "4.4 VISUALIZING ENERGIES", "text": "In this section, we visualize and understand how the Atom Transformer models the energy of rotamers in their native contexts. We explore the response of the model to perturbations in the configuration of side chains away from their native state. We retrieve all protein structures in the test set and individually perturb rotameric χ angles across the unit circle, plotting results in Figures 2, 3, and 4.\nCore/Surface Energies Figure 2 shows that a steeper response to variations away from the native state is observed for residues in the core of the protein (having ≥24 contacting side chains) than for residues on the surface (≤16), consistent with the observation that buried side chains are tightly packed (Richardson & Richardson, 1989).\nRotameric Energies Figure 3 shows a relation between the residue size and the depth of the energy well, with larger amino acids having steeper wells (more sensitive to perturbations). Furthermore, Figure 4 shows that the model learns the symmetries of amino acids. We find that responses to perturbations of the χ2 angle for the residues Tyr, Asp, and Phe are symmetric about χ2. A 180° periodicity is observed, in contrast to the non-symmetric residues.\nEmbeddings of Atom Sets Building on the observation of a relation between the depth of the residue and its response to perturbation from the native state, we ask whether core and surface residues are clustered within the representations of the model. To visualize the final hidden representation of the molecular contexts within a protein, we compute the final vector embedding for the 64-atom context around the carbon-β atom (or for glycine, the carbon-α atom) for each residue. We find that a projection of these representations by t-SNE (Maaten & Hinton, 2008) into 2 dimensions shows a clear clustering between representations of core residues and surface residues; a minimal sketch of this projection follows. 
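A minimal sketch of the projection step, using scikit-learn's t-SNE. The names context_vectors and burial are illustrative assumptions; context_vectors stands for the pooled per-residue Transformer representations (e.g. the mean-pooled hidden states in the model sketch above), and burial for the ≥24-neighbor criterion defined in Section 4.2.1.

```python
import numpy as np
from sklearn.manifold import TSNE

def project_contexts(context_vectors, seed=0):
    # context_vectors: [num_residues, 256] pooled context embeddings.
    return TSNE(n_components=2, random_state=seed).fit_transform(context_vectors)

# xy = project_contexts(context_vectors)
# Plotting xy colored by the boolean `burial` mask shows the core/surface
# clustering described in the text (cf. Figure 5).
```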
A representative example is shown in Figure 5." }, { "heading": "5 RELATED WORK", "text": "Energy functions have been widely used in the modeling of protein conformations and the design of protein sequences and structures (Boas & Harbury, 2007). Rosetta, for example, uses a combination of physically motivated terms and knowledge-based potentials (Alford et al., 2017) to model proteins and other macromolecules.\nLeaver-Fay et al. (2013) proposed optimizing the feature weights and parameters of the terms of an energy function for protein design; however, their method used physical features designed with expert knowledge and data analysis. Our work draws on their development of rigorous benchmarks for energy functions, but in contrast automatically learns complex features from data.\nNeural networks have also been explored for protein folding. Xu (2018) developed a deep residual network that predicts the pairwise distances between residues in the protein structure from evolutionary covariation information. Senior et al. (2018) used evolutionary covariation to predict pairwise distance distributions, using maximization of the probability of the backbone structure with respect to the predicted distance distribution to fold the protein. Ingraham et al. (2018) proposed learning an energy function for protein folding by backpropagating through a differentiable simulator. AlQuraishi (2019) investigated predicting protein structure from sequence without using co-evolution.\nDeep learning has shown practical utility in the related field of small molecule chemistry. Gilmer et al. (2017) achieved state-of-the-art performance on a suite of molecular property benchmarks. Similarly, Feinberg et al. (2018) achieved state-of-the-art performance on predicting the binding affinity between proteins and small molecules using graph convolutional networks. Mansimov et al. (2019) used a graph neural network to learn an energy function for small molecules. In contrast to our work, these methods operate over small molecular graphs and were not applied to large macromolecules like proteins.\nIn parallel, recent work proposes that generative models pre-trained on protein sequences can transfer knowledge to downstream supervised tasks (Bepler & Berger, 2019; Alley et al., 2019; Yang et al., 2019; Rives et al., 2019). These methods have also been explored for protein design (Wang et al., 2018).\nGenerative models of protein structures have also been proposed for generating protein backbones (Anand & Huang, 2018) and for the inverse protein folding problem (Ingraham et al., 2019)." }, { "heading": "6 DISCUSSION", "text": "In this work we explore the possibility of learning an energy function of protein conformations at atomic resolution. We develop and evaluate the method in the benchmark problem setting of recovering protein side chain conformations from their native context, finding that a learned energy function approaches, in this restricted domain, the performance of energy functions that have been developed through years of research into approximation of the physical forces guiding protein conformation and engineering of statistical terms.\nThe method developed here models sets of atoms and can discover and represent the energetic contribution of high-order dependencies within its inputs. 
We find that learning an energy function from the data of protein crystal structures automatically discovers features relevant to computing molecular energies; and we observe that the model responds to its inputs in ways that are consistent with an intuitive understanding of protein conformation and energy.\nGenerative biology proposes that the design principles used by nature can be automatically learned from data and can be harnessed to generate and design new biological molecules and systems (Rives et al., 2019). High-fidelity generative modeling for proteins, operating at the level of structures and sequences, can enable generative protein design. To create new proteins outside the space of those discovered by nature, it is necessary to use design principles that generalize to all proteins. Huang et al. (2016) have argued that since the physical principles that govern protein conformation apply to all proteins, encoding knowledge of these physical and biochemical principles into an energy function will make it possible to design de novo new protein structures and functions that have not appeared before in nature.\nLearning features from data with generative methods is a possible direction for realizing this goal to enable design in the space of sequences not visited by evolution. The generalization of neural energy functions to harder problem settings used in the protein design community, e.g. combinatorial side chain optimization (Tuffery et al., 1991; Holm & Sander, 1992), and inverse-folding (Pabo, 1983), is a direction for future work. The methods explored here have the potential for extension into these settings." }, { "heading": "ACKNOWLEDGMENTS", "text": "We thank Kyunghyun Cho, Siddharth Goyal, Andrew Leaver-Fay, and Yann LeCun for helpful discussions. Alexander Rives was supported by NSF Grant #1339362." } ]
2020
ENERGY-BASED MODELS FOR ATOMIC-RESOLUTION PROTEIN CONFORMATIONS
SP:b71f0c38b5308ce902baeff4d457745c50034894
[ "This paper basically built upon [1]. The authors propose to do sampling in the high-frequency domain to increase the sample efficiency. They first argue that the high-frequency part of the function is hard to approximate (i.e., needs more sample points) in section 3.1. They argue that the gradient and Hessian can be used to identify the high-frequency region. And then they propose to use g(x)=||gradient||+||Hessian || as the sampling metric as illustrated in Algorithm 1. To be noticed that, they actually hybrid the proposed metric (6) and the value-based metric (7, proposed in [1]) in their algorithm.", "This paper proposes a new way to select states from which do do transitions in dyna algorithm (which trains policy from model experience as if it was a real experience). It proposes to look for states where frequency of value function as a function of a real valued state is large, because these are the states where the function is harder to approximate. The paper also shows that such frequency is large where the gradient of the function is large in magnitude which allows for finding such states in practice. In more detail, similar to previous algorithms, this algorithm keeps both an experience replay buffer as well as another buffer of states (search-control queue) and uses a hill climbing strategy to find states with both higher frequency and higher value. The paper tests the algorithm on toy domains - the mountain car and a maze with doors." ]
Model-based reinforcement learning has been empirically demonstrated as a successful strategy to improve sample efficiency. In particular, Dyna is an elegant model-based architecture integrating learning and planning that provides great flexibility in using a model. One of the most important components in Dyna is called search-control, which refers to the process of generating state or state-action pairs from which we query the model to acquire simulated experiences. Search-control is critical in improving learning efficiency. In this work, we propose a simple and novel search-control strategy by searching high frequency regions of the value function. Our main intuition is built on the Shannon sampling theorem from signal processing, which indicates that a high frequency signal requires more samples to reconstruct. We empirically show that a high frequency function is more difficult to approximate. This suggests a search-control strategy: we should use states from high frequency regions of the value function to query the model to acquire more samples. We develop a simple strategy to locally measure the frequency of a function by gradient and Hessian norms, and provide theoretical justification for this approach. We then apply our strategy to search-control in Dyna, and conduct experiments to show its properties and effectiveness on benchmark domains.
[ { "affiliations": [], "name": "FREQUENCY-BASED SEARCH-CONTROL" }, { "affiliations": [], "name": "IN DYNA" }, { "affiliations": [], "name": "Yangchen Pan" }, { "affiliations": [], "name": "Jincheng Mei" } ]
[ { "authors": [ "Martı́n Abadi", "Ashish Agarwal", "Paul Barham", "Eugene Brevdo", "Zhifeng Chen" ], "title": "TensorFlow: Large-scale machine learning on heterogeneous systems", "venue": "Software available from tensorflow.org,", "year": 2015 }, { "authors": [ "Sander Adam", "Lucian Busoniu", "Robert Babuska" ], "title": "Experience replay for real-time reinforcement learning control", "venue": "IEEE Transactions on Systems, Man, and Cybernetics,", "year": 2012 }, { "authors": [ "Dane S. Corneil", "Wulfram Gerstner", "Johanni Brea" ], "title": "Efficient model-based deep reinforcement learning with variational state tabulation", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Nathaniel D. Daw" ], "title": "Model-based reinforcement learning as cognitive search: Neurocomputational theories. Cognitive search: Evolution, algorithms and the brain", "venue": null, "year": 2012 }, { "authors": [ "M Deisenroth", "C E Rasmussen" ], "title": "PILCO: A model-based and data-efficient approach to policy search", "venue": "In International Conference on Machine Learning,", "year": 2011 }, { "authors": [ "Amir-massoud Farahmand", "André M.S. Barreto", "Daniel N. Nikovski" ], "title": "Value-aware loss function for model-based reinforcement learning", "venue": "In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "Steffen Grunewalder", "Guy Lever", "Luca Baldassarre", "Massi Pontil", "Arthur Gretton" ], "title": "Modelling transition dynamics in MDPs with RKHS embeddings", "venue": "In International Conference on Machine Learning,", "year": 2012 }, { "authors": [ "Shixiang Gu", "Timothy P. 
Lillicrap", "Ilya Sutskever", "Sergey Levine" ], "title": "Continuous Deep Q-Learning with Model-based Acceleration", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Ha", "David", "Schmidhuber", "Jürgen" ], "title": "Recurrent world models facilitate policy evolution", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Steve Hanneke" ], "title": "Theory of disagreement-based active learning", "venue": "Foundations and Trends in Machine Learning,", "year": 2014 }, { "authors": [ "Hui Jiang" ], "title": "A new perspective on machine learning: How to do perfect supervised learning", "venue": null, "year": 1901 }, { "authors": [ "Joshua Joseph", "Alborz Geramifard", "John W Roberts", "Jonathan P How", "Nicholas Roy" ], "title": "Reinforcement learning with misspecified model classes", "venue": "In Proceedings of IEEE International Conference on Robotics and Automation,", "year": 2013 }, { "authors": [ "Łukasz Kaiser", "Mohammad Babaeizadeh", "Piotr Miłos", "Błażej Osiński", "Roy H Campbell", "Konrad Czechowski", "Dumitru Erhan", "Chelsea Finn", "Piotr Kozakowski", "Sergey Levine", "Afroz Mohiuddin", "Ryan Sepassi", "George Tucker", "Henryk Michalewski" ], "title": "Model based reinforcement learning for atari", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Diederik Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Weiwei Li", "Emanuel Todorov" ], "title": "Iterative linear quadratic regulator design for nonlinear biological movement systems", "venue": "In International Conference on Informatics in Control, Automatin and Robotics,", "year": 2004 }, { "authors": [ "Long-Ji Lin" ], "title": "Self-Improving Reactive Agents Based On Reinforcement Learning, Planning and Teaching", "venue": "Machine Learning,", "year": 1992 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A. Rusu" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Andrew W. Moore", "Christopher G. 
Atkeson" ], "title": "Prioritized sweeping: Reinforcement learning with less data and less time", "venue": "Machine learning,", "year": 1993 }, { "authors": [ "Junhyuk Oh", "Satinder Singh", "Honglak Lee" ], "title": "Value prediction network", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Yangchen Pan", "Muhammad Zaheer", "Adam White", "Andrew Patterson", "Martha White" ], "title": "Organizing experience: a deeper look at replay mechanisms for sample-based planning in continuous state domains", "venue": "In Proceedings of International Joint Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Yangchen Pan", "Hengshuai Yao", "Amir-massoud Farahmand", "Martha White" ], "title": "Hill climbing on value estimates for search-control in dyna", "venue": "In Proceedings of International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Tom Schaul", "John Quan", "Ioannis Antonoglou", "David Silver" ], "title": "Prioritized Experience Replay", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Burr Settles" ], "title": "Active learning literature survey", "venue": "Technical report, University of Wisconsin–Madison,", "year": 2010 }, { "authors": [ "David Silver", "Hado van Hasselt", "Matteo Hessel", "Tom Schaul", "Arthur Guez", "Tim Harley", "Gabriel Dulac-Arnold", "David Reichert", "Neil Rabinowitz", "André M.S. Barreto", "Thomas Degris" ], "title": "The predictron: End-to-end learning and planning", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Steve Smale", "Ding-Xuan Zhou" ], "title": "Shannon sampling and function reconstruction from point values", "venue": "Bulletin of the American Mathematical Society,", "year": 2004 }, { "authors": [ "Steve Smale", "Ding-Xuan Zhou" ], "title": "Shannon sampling II: Connections to learning theory", "venue": "Applied and Computational Harmonic Analysis,", "year": 2005 }, { "authors": [ "Jonathan Sorg", "Satinder Singh" ], "title": "Linear options", "venue": "In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems,", "year": 2010 }, { "authors": [ "E.M. Stein", "R. Shakarchi" ], "title": "Fourier Analysis: An Introduction", "venue": null, "year": 2003 }, { "authors": [ "R.S. Sutton", "David McAllester", "Satinder Singh", "Yishay Mansour" ], "title": "Policy gradient methods for reinforcement learning with function approximation", "venue": "In Proceedings of the 12th International Conference on Neural Information Processing Systems. MIT Press,", "year": 1999 }, { "authors": [ "Richard S. Sutton" ], "title": "Dyna, an integrated architecture for learning, planning, and reacting", "venue": "SIGART Bulletin,", "year": 1991 }, { "authors": [ "Richard S. Sutton" ], "title": "Integrated modeling and control based on reinforcement learning and dynamic programming", "venue": "In Advances in Neural Information Processing Systems,", "year": 1991 }, { "authors": [ "Richard S. Sutton", "Andrew G. Barto" ], "title": "Reinforcement Learning: An Introduction", "venue": null, "year": 2018 }, { "authors": [ "Richard S. 
Sutton", "Csaba Szepesvári", "Alborz Geramifard", "Michael Bowling" ], "title": "Dyna-style planning with linear function approximation and prioritized sweeping", "venue": "In UAI, pp", "year": 2008 }, { "authors": [ "Csaba Szepesvári" ], "title": "Algorithms for Reinforcement Learning", "venue": "Morgan Claypool Publishers,", "year": 2010 }, { "authors": [ "Erik Talvitie" ], "title": "Model regularization for stable sample rollouts", "venue": "In Uncertainty in Artificial Intelligence,", "year": 2014 }, { "authors": [ "Erik Talvitie" ], "title": "Self-Correcting Models for Model-Based Reinforcement Learning", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Y. Tassa", "T. Erez", "E. Todorov" ], "title": "Synthesis and stabilization of complex behaviors through online trajectory optimization", "venue": "International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "E. Todorov", "T. Erez", "Y. Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Harm van Seijen", "Richard S. Sutton" ], "title": "A deeper look at planning as learning from replay", "venue": "In International Conference on Machine Learning,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Model-based reinforcement learning (MBRL) (Lin, 1992; Sutton, 1991b; Daw, 2012; Sutton & Barto, 2018) methods have successfully been applied to many benchmark domains (Gu et al., 2016; Ha, David and Schmidhuber, Jürgen, 2018; Kaiser et al., 2020). The Dyna architecture, introduced by Sutton (1991a), is one of the classical MBRL architectures, which integrates model-free and model-based policy updates in an online RL setting (Algorithm 2 in Appendix A.3). At each time step, a Dyna agent uses the real experience to learn a model and to perform model-free policy update, and during the planning stage, simulated experiences are acquired from the model to further improve the policy. A closely related method in model-free learning setting is experience replay (ER) (Lin, 1992; Adam et al., 2012), which utilizes a buffer to store experiences. An agent using the ER buffer randomly samples the recorded experiences at each time step to update the policy. Though ER can be thought of as a simplified form of MBRL (van Seijen & Sutton, 2015), a model provides more flexibility in acquiring simulated experiences.\nA crucial aspect of the Dyna architecture is the search-control mechanism. It is the mechanism for selecting states or state-action pairs to query the model in order to generate simulated experiences (cf. Section 8.2 of Sutton & Barto 2018). We call the corresponding data structure for storing those states or state-action pairs the search-control queue. Search-control is of vital importance in Dyna, as it can significantly affect the model-based agent’s sample efficiency. A simple approach to searchcontrol is to sample visited states or state-action pairs, i.e., use the initial state-action pairs stored in the ER buffer as the search-control queue. This approach, however, does not lead to an agent that outperforms a model-free agent that uses ER. To see this, consider a deterministic environment, and assume that we have the exact model. If we simply sample visited state-action pairs for searchcontrol, the next-state and reward would be the same as those in the ER buffer. In practice, we have\n∗Equal contribution.\nmodel errors too, which causes some performance deterioration (Talvitie, 2014; 2017). Without an elegant search-control mechanism, we are not likely to benefit from the flexibility given by a model.\nSeveral search-control mechanisms have already been explored. Prioritized sweeping (Moore & Atkeson, 1993) is one such method that is designed to speed up the value iteration process: the simulated transitions are updated based on the absolute temporal difference error. It has been adopted to continuous domains with function approximation too (Sutton et al., 2008; Pan et al., 2018; Corneil et al., 2018). Gu et al. (2016) utilizes local linear models to generate optimal trajectories through iLQR (Li & Todorov, 2004). Pan et al. (2019) suggest a method to generate states for the searchcontrol queue by hill climbing on the value function estimate.\nThis paper proposes an alternative perspective to design search-control strategy: we can sample more frequently from the state space where the value function is more difficult to estimate. We first review some basic background in MBRL (Section 2). Afterwards, we review some concepts in signal processing and conduct experiments in the supervised learning setting to show that a high frequency function is more difficult to approximate (Section 3). 
In order to quantify the difficulty of estimation, we borrow a crucial idea from the signal processing literature: a signal with higher frequency terms requires more samples for accurate reconstruction. We then propose a method to locally measure the frequency of a point in a function’s domain and provide a theoretical justification for our method (Theorem 1 in Section 3.2). We use the hill climbing approach of Pan et al. (2019) to adapt our method to design a search-control mechanism for the Dyna architecture (Section 4). We conduct experiments on benchmark and challenging domains to illustrate the properties and utilities of our method (Section 5)." }, { "heading": "2 BACKGROUND", "text": "Reinforcement learning (RL) problems are typically formulated as Markov Decision Processes (MDPs) (Sutton & Barto, 2018; Szepesvári, 2010). An MDP (S,A,P, R, γ) is determined by state space S, action space A, transition function P, reward function R : S × A × S → R, and discount factor γ ∈ [0, 1]. At each step t, an agent observes a state st ∈ S , and takes an action at ∈ A. The environment receives at, and transits to the next state st+1 ∼ P(·|st, at). The agent receives a reward scalar rt+1 = R(st, at, st+1). The agent maintains a policy π : S × A → [0, 1] that determines the probability of choosing an action at a given state. For a given state-action pair (s, a), the action-value function of policy π is defined as Qπ(s, a) = E[Gt|St = s,At = a;At+1:∞ ∼ π] where Gt def = ∑∞ t=0 γ\ntR(st, at, st+1) is the return of s0, a0, s1, a1, ... following the policy π and transition P. Value-based RL methods learn the action-value function (Watkins & Dayan, 1992), and act greedily w.r.t. the action-value function. Policy-based RL methods perform gradient update of parameters to learn policies with high expected rewards (Sutton et al., 1999). Both value and policy-based RL methods can be easily adopted in the Dyna framework.\nModel-based RL. A model is a mapping that takes a state-action pair as its input and outputs some projection of the future state. A model can be local (Tassa et al., 2012; Gu et al., 2016) or global (Ha, David and Schmidhuber, Jürgen, 2018; Pan et al., 2018), deterministic (Sutton et al., 2008) or stochastic (Deisenroth & Rasmussen, 2011; Ha, David and Schmidhuber, Jürgen, 2018), feature-to-feature (Corneil et al., 2018; Ha, David and Schmidhuber, Jürgen, 2018) or observationto-observation (Gu et al., 2016; Pan et al., 2018; Kaiser et al., 2020), one-step (Gu et al., 2016; Pan et al., 2018), or multi-step (Sorg & Singh, 2010; Oh et al., 2017), or decision-aware (Joseph et al., 2013; Farahmand et al., 2017; Silver et al., 2017). Modelling the environment dynamics through a reproducing kernel Hilbert space (RKHS) embedding has been also studied (Grunewalder et al., 2012), where the Bellman operator is approximated in an RKHS. The model we consider in this work is a one-step environment dynamics model, which takes a state-action pair as its input and returns the next-state and reward. Our proposed search-control approach, however, can be naturally used for different types of models.\nThe most relevant work to ours is hill climbing Dyna (Pan et al., 2019). Pan et al. (2019) proposes a search-control mechanism based on hill climbing on the value estimates (see Algorithm 3 in Appendix A.3). We briefly review the key steps of their algorithm, which is called (Hill Climbing)HCDyna, as it helps to understand ours. HC-Dyna maintains an ER buffer. 
At each step, a state is randomly sampled from the ER buffer and is used as the initial state to perform hill climbing (i.e.\ngradient ascent) on the learned value function. The states along the trajectory are stored in the search-control queue.1 During the planning stage, states are sampled from the search-control queue and are paired with their corresponding on-policy actions (i.e., actions selected by the current Q network at the sampled states). Afterwards, the model is queried for each of the state-action pairs to get the next-state and reward. These simulated transitions are then mixed with samples from the ER buffer, which are observed by the agent during its interaction with the real environment, to train the value function estimator, e.g., a deep neural network.\nThe heuristic idea behind the search-control mechanism of HC-Dyna is that the magnitude of the value function provides useful information for guiding where to query the model. This heuristic can intuitively be understood by noticing that an RL agent tends to move towards high-value regions of the state space; by performing gradient ascent on the (estimated) value function, we provide the agent with more samples from regions where it may move towards in the future. Even if the estimated value function is incorrect and the samples are indeed from the low-value regions of the state space, these extra samples lead to the fast correction of the estimated value in those regions. Nevertheless, the magnitude of the value function is only one source of extra information from which we can design a search-control mechanism. This work suggests a different perspective: we should sample more from the regions of the state space where learning the value function is more difficult." }, { "heading": "3 UNDERSTANDING THE DIFFICULTY OF FUNCTION APPROXIMATION", "text": "In a regular regression setting, we illustrate that high frequency regions of a function is difficult to approximate. We show that by assigning more training data to those regions, the learning performance considerably improves. To make this insight practically useful, we employ the sum of gradient and hessian norms of a function as a measure of the local frequency of a function. We establish a theoretical connection between our proposed criterion and the local frequency of a function. This would be the foundation of our frequency-based search-control method in Section 4." }, { "heading": "3.1 WHAT TYPE OF FUNCTION IS DIFFICULT TO APPROXIMATE?", "text": "Consider the standard regression problem with the mean square loss. Given a training set D = {(xi, yi)}i=1:n, our goal is to learn an unknown target function f∗(x) = E[Y |X = x] by empirical risk minimization. Formally, we aim to solve\nf = arg min f∈H\n1\nn n∑ i=1 (f(xi)− yi)2,\nwhere H is some hypothesis space. Suppose that we can choose the distributions of samples {xi}. How should we select them in order to improve the quality of the learned function? One intuitive heuristic is that if we know the regions in the domain of f∗ that are more difficult to approximate, we can assign more training data there in order to help the learning process. The important question is how to quantify the difficulty of approximating a function. 
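The question just posed is studied empirically in the next subsection. As a reference for that setup, here is a minimal sketch of the regression testbed: Appendix A.6 reports a 16x16 tanh network trained with Adam at learning rate 0.001 and mini-batch size 128, while the iteration count and evaluation protocol below are assumptions.

```python
import torch
import torch.nn as nn

# Fit the reported 16x16 tanh network on a training set drawn under
# some sampling scheme and report the l2 test error.
def fit_and_test(x_train, y_train, x_test, y_test, iters=2000):
    net = nn.Sequential(nn.Linear(1, 16), nn.Tanh(),
                        nn.Linear(16, 16), nn.Tanh(),
                        nn.Linear(16, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    xt = torch.as_tensor(x_train, dtype=torch.float32).reshape(-1, 1)
    yt = torch.as_tensor(y_train, dtype=torch.float32).reshape(-1, 1)
    for _ in range(iters):
        idx = torch.randint(len(xt), (128,))            # mini-batch size 128 (A.6)
        loss = ((net(xt[idx]) - yt[idx]) ** 2).mean()   # l2 regression loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        xs = torch.as_tensor(x_test, dtype=torch.float32).reshape(-1, 1)
        ys = torch.as_tensor(y_test, dtype=torch.float32).reshape(-1, 1)
        return ((net(xs) - ys) ** 2).mean().item()      # test error
```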
We borrow an idea from the field of signal processing to suggest a method.\nThe Nyquist-Shannon sampling theorem in signal processing states that given a band-limited function (or signal) f : R → R with the highest frequency (in the Fourier domain) of ωbandwidth, we can perfectly reconstruct it based on regular samples (in the time domain) obtained at the sampling rate of 2ωbandwidth (Zayed, 1993).2 Therefore, if the Fourier transform of a function has high frequency terms, more samples are required to reconstruct it accurately. We note that the sampling theory has been applied in the sample complexity analysis of machine learning algorithms (Smale & Zhou, 2004; 2005; Jiang, 2019). Although the problem setting in machine learning is somewhat different from this result in signal processing, it still provides a high-level intuition for us: regions with more high frequency signals require more learning data.\n1According to the original paper, natural gradient is used to guarantee a certain level of coordinate invariance property, so it can handle state variables with vastly different numerical scales.\n2Sampling rate refers to number of samples per second used to reconstruct continuous signals.\nTo make this high-level intuition concrete, we consider the following function:\nfsin(x) = { sin(8πx) x ∈ [−2, 0), sin(πx) x ∈ [0, 2]. (1)\nIt is easy to check that the regions [−2, 0) and [0, 2] contain signals with frequency ratio 8 : 1. Based on the intuition from the sampling theorem, the [−2, 0) interval requires more training data than the [0, 2] interval. Given the same amount of training data, and the same learning algorithm, we would expect that assigning more fraction of the training data on [−2, 0) to perform better than distributing them uniformly or assigning more samples to the [0, 2] interval.\nAn illustrative experiment. To empirically verify the intuition, we conduct a simple regression task, with fsin as the target function. The training set D = {(xi, yi)}i=1:n is generated by sampling x ∈ [−2, 2], and adding Gaussian noise N(0, σ2) on Eq. (1), where the standard deviation is set to be σ = 0.1. We present the `2 regression learning curves of training datasets with different biased sampling ratios pb ∈ {60%, 70%, 80%}, as shown in Fig. 1 (a)-(c). We observe that biased training data sampling ratios towards high frequency region clearly speeds up learning. This is consistent with the intuitive insight and suggests that our heuristic to assign more data to high frequency regions leads to faster learning." }, { "heading": "3.2 IDENTIFYING HIGH FREQUENCY REGIONS OF A FUNCTION", "text": "Identifying the high frequency region of fsin in the previous toy problem was easy, as each region contained a signal with a constant known frequency. In practice, we face two main difficulties to identify the high frequency regions of a function. The first is that we do not have access to the underlying target function, but only to data or possibly an approximate function that is estimated using data, e.g., a trained neural network. The second is that frequency is a global property rather than a local one. The value of the function at each (non-zero measure) region of the domain has impact on its global frequency representation. To make the high frequency heuristic practically useful, we need a simple criterion that (a) uses function approximation, (b) characterizes local frequency information, and (c) can be efficiently calculated. Inspired by the function fsin in Eq. 
(1), a natural idea is to calculate the first order derivative f'(x) := df(x)/dx or the second order derivative f''(x) := d^2 f(x)/dx^2, because they both satisfy (a) and (c). As a “sanity check” for property (b), consider the following examples.
Example 1. For f_sin defined in Eq. (1), the integrals of the squared first order derivative f'_sin on the high frequency region [−2, 0) and the low frequency region [0, 2] are, respectively,
\int_{-2}^{0} |f'_{\sin}(x)|^2 \, dx = 64\pi^2, \qquad \int_{0}^{2} |f'_{\sin}(x)|^2 \, dx = \pi^2.
Example 2. Let f : [−π, π] → R be a band-limited real valued function defined as
f(x) = \frac{a_0}{2} + \sum_{n=1}^{N} a_n \cos(nx) + b_n \sin(nx),
where a_0, a_n, b_n \in \mathbb{R}, n = 1, 2, \ldots, N are Fourier coefficients of frequency \frac{n}{2\pi}. Then,
\int_{-\pi}^{\pi} |f'(x)|^2 \, dx = \pi \cdot \sum_{n=1}^{N} n^2 (a_n^2 + b_n^2), \qquad \int_{-\pi}^{\pi} |f''(x)|^2 \, dx = \pi \cdot \sum_{n=1}^{N} n^4 (a_n^2 + b_n^2).
Example 1 shows that the ratio of the integrals of the squared first order derivative is 64 : 1 (the frequency ratio is 8 : 1), and the region with large derivative magnitude is indeed the high frequency region. Moreover, Example 2 indicates that for one dimensional real-valued functions over a bounded domain, the integral of a derivative magnitude is closely related to the frequency information. For the squared first derivative, the integral is the same as weighting the frequency terms a_n and b_n proportionally to n^2, and for the squared second-order derivative, proportionally to n^4. Both weighting schemes emphasize the higher frequency terms.
Empirical demonstration. The calculations in the above examples imply that regions with large gradient and Hessian norms correspond to high frequency regions. In the same spirit as the l2 regression task in Section 3.1, we empirically verify this insight. Our expectation is that biasing the training dataset towards high gradient norm and Hessian norm achieves better learning results. In Fig. 2(a), Biased-GradientNorm corresponds to uniformly sampling x ∈ [−2, 2] for 60% of the training data and sampling proportionally to the gradient norm (i.e., p(x) ∝ |f'_sin(x)|) for the remaining 40%, while Biased-HessianNorm corresponds to sampling proportionally to the Hessian norm (i.e., p(x) ∝ |f''_sin(x)|) for the remaining 40% of the training data. In Fig. 2(b)(c), we visualize the two types of biased training points. Sampling according to the gradient norm or the Hessian norm leads to a denser point distribution in the high frequency region [−2, 0): 65.35% and 68.97% of the training points fall in [−2, 0) in Fig. 2(b) and (c), respectively. An important difference between Fig. 2(b) and (c) is that sampling according to the Hessian norm leads to denser points around spikes: 18.17% of the points fall in the yellow area in (b) and 27.45% in (c). Those areas around spikes should be more difficult to approximate as the underlying function changes sharply, which explains the superior performance on the dataset biased by the Hessian norm. Fig. 2(a) shows that such biased training datasets provide fast learning, similar to the high frequency biased training datasets in Fig. 1.
Given a function f : X → Y and a point x ∈ X, we propose to measure the frequency of f around a small neighborhood of x (we call this local frequency) using the following function:
g(x) := \|\nabla_x f(x)\|^2 + \|H_f(x)\|_F^2, \qquad (2)
where \|\nabla_x f(x)\| is the l2-norm of the gradient at x, and \|H_f(x)\|_F is the Frobenius norm of the Hessian matrix of f at x. We claim that the local frequency of f around x is proportional to g(x). We theoretically justify this claim.
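Before the formal justification, the biased-sampling construction just described can be sketched as follows; the grid resolution, sample count, and seed are assumptions, and swapping grad_mag for the analogous |f''_sin| gives the Hessian-norm variant.

```python
import numpy as np

# 60% of inputs drawn uniformly on [-2, 2]; the remaining 40% drawn
# with probability proportional to |f'_sin| on a dense grid.
def f_sin(x):
    return np.where(x < 0, np.sin(8 * np.pi * x), np.sin(np.pi * x))

def grad_mag(x):        # |f'_sin(x)|, piecewise over [-2, 0) and [0, 2]
    return np.where(x < 0, np.abs(8 * np.pi * np.cos(8 * np.pi * x)),
                    np.abs(np.pi * np.cos(np.pi * x)))

def biased_dataset(n=4000, bias=0.4, sigma=0.1, rng=np.random.default_rng(0)):
    grid = np.linspace(-2, 2, 10_000)
    p = grad_mag(grid)
    p /= p.sum()
    x = np.concatenate([rng.uniform(-2, 2, int(n * (1 - bias))),
                        rng.choice(grid, int(n * bias), p=p)])
    y = f_sin(x) + rng.normal(0, sigma, x.size)     # noisy targets, sigma = 0.1
    return x, y
```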
For real-valued functions in Euclidean spaces, our theory connects local gradient and Hessian norms and the local function “energy”³ to the local frequency distribution. The proof of our theorem and its connection to the well-known uncertainty principle are in Appendix A.2.
Theorem 1. Given any function f : \mathbb{R}^n \to \mathbb{R}, for any frequency vector k \in \mathbb{R}^n, define its local Fourier transform around x \in \mathbb{R}^n,
\hat{f}(k) := \int_{y \in B(x,1)} f(y) \exp\{-2\pi i \cdot y^\top k\} \, dy,
for the local function around x, i.e., y \in B(x,1) := \{y : \|y - x\| < 1\}. Assume the local function “energy” is finite,
\int_{y \in B(x,1)} [f(y)]^2 \, dy = \int_{\mathbb{R}^n} \|\hat{f}(k)\|^2 \, dk < \infty, \quad \forall x \in \mathbb{R}^n. \qquad (3)
³ We consider the notion of energy in signal processing terminology: the energy of a continuous-time signal x(t) is defined as \int x(t)^2 dt. In our theory, the function f is the signal.
Define the “local frequency distribution” of f around x as:
\pi_{\hat{f}}(k) := \frac{\|\hat{f}(k)\|^2}{\int_{\mathbb{R}^n} \|\hat{f}(\tilde{k})\|^2 \, d\tilde{k}}, \quad \forall k \in \mathbb{R}^n. \qquad (4)
Then, for any x \in \mathbb{R}^n, we have: 1) the first order connection,
\int_{y \in B(x,1)} \|\nabla f(y)\|^2 \, dy = 4\pi^2 \cdot \Big[\int_{y \in B(x,1)} [f(y)]^2 \, dy\Big] \cdot \Big[\int_{\mathbb{R}^n} \pi_{\hat{f}}(k) \cdot \|k\|^2 \, dk\Big], \qquad (5)
and 2) the second order connection,
\int_{y \in B(x,1)} \|H_f(y)\|_F^2 \, dy = 16\pi^4 \cdot \Big[\int_{y \in B(x,1)} [f(y)]^2 \, dy\Big] \cdot \Big[\int_{\mathbb{R}^n} \pi_{\hat{f}}(k) \cdot \|k\|^4 \, dk\Big]. \qquad (6)
Remark 1. Note that \pi_{\hat{f}} defined in Eq. (4) is a probability distribution over \mathbb{R}^n, since \int_{k \in \mathbb{R}^n} \pi_{\hat{f}}(k) \, dk = 1 and \pi_{\hat{f}}(k) \ge 0 for all k \in \mathbb{R}^n. We use such a distribution to characterize local frequency behaviour for two reasons. First, comparing frequencies of regions is more naturally captured by a distribution than by one single scalar, since signals are usually spread over a range of frequencies. Second, to eliminate the impact of the function energy in Eq. (3), we normalize the Fourier coefficient \hat{f} to get \pi_{\hat{f}}.
Remark 2. For a frequency vector k \in \mathbb{R}^n, the larger its norm \|k\| is, the higher its frequency is. Given any x and its local function (i.e., f(\cdot) around x), \pi_{\hat{f}}(k) is the proportion that frequency k occupies. Therefore, the integral of \pi_{\hat{f}}(k) \cdot \|k\|^2 reflects the contribution of high frequency terms to the local frequency distribution of the function. Remark 3. Consider f as a value function in the reinforcement learning setting. Theorem 1 indicates that regions with large gradient norm can either have a large absolute value function, or high local frequency, or both. To prevent finding regions that only have a large negative value function, our theory implies that it is reasonable to take both the gradient norm and the value function into account, as our proposed method does in the next section." }, { "heading": "4 FREQUENCY-BASED SEARCH-CONTROL IN DYNA", "text": "We present the Dyna architecture with frequency-based search-control (Algorithm 1) in this section. It combines the idea that samples from high-frequency regions of the state space are important,
as discussed in the previous section, with the hill climbing process that effectively draws samples from those regions, as introduced by Pan et al. (2019). We omit implementation details such as preconditioning and noisy gradients for the hill climbing process, and refer readers to Appendix A.6 and A.7.
Our goal is to query the model more often from the regions of the state space where the local frequency of the value function is higher. The intuition behind this search-control mechanism, as discussed in the previous section in the context of supervised learning, is that those regions correspond to where learning the (value) function is more difficult, hence more samples from the model might be helpful.
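For concreteness, the quantity g applied to a learned value function — which the search-control rule below climbs on — can be computed with double backpropagation. A minimal sketch, assuming a small differentiable (e.g., tanh) Q-network `q_net`; it is built with create_graph=True so that g itself remains differentiable, which hill climbing on g requires:

```python
import torch

# g(s) = ||grad_s V(s)||^2 + ||H_V(s)||_F^2 from Eq. (2), with
# V(s) = max_a Q(s, a); s is a 1-D tensor with requires_grad=True.
def g(q_net, s):
    v = q_net(s).max()                # V(s) = max_a Q(s, a)
    (gv,) = torch.autograd.grad(v, s, create_graph=True)
    rows = [torch.autograd.grad(gv[i], s, create_graph=True)[0]
            for i in range(s.numel())]
    hess = torch.stack(rows)          # Hessian of V at s, row by row
    return (gv ** 2).sum() + (hess ** 2).sum()
```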
To populate the search-control queue with states from those regions, we can do hill climbing on g(s) = ‖∇sV (s)‖2 + ‖Hv(s)‖2F . Theorem 1, however, suggests that states with large gradient norm can either have large absolute value, or high local frequency, or both. We want to avoid many samples from regions with large negative value states, as those states may be rarely visited under the optimal policy anyway. A sensible strategy to get around this problem is to combine the proposed hill climbing method with the previous hill climbing on the value function (Pan et al., 2019), as the latter tends to generate samples from high value states.\nWe propose the following method for combining those approaches. At each time step, with certain probability, we perform hill climbing by either\ns← s+ α { ∇sg(s) with probability of p (7a) ∇sV (s) with probability of 1− p (7b)\nand store states along the gradient trajectory in the search-control queue. When hill climbing on the value function (7b), we sample the initial state from the ER buffer as suggested by the previous work (Pan et al., 2019). This populates the search-control queue with states from the high value regions of the state space. When hill climbing on g(s) (7a), however, we sample the initial state from the search-control queue itself (instead of the ER buffer). This way ensures that the initial state for searching high frequency region has relatively high value. Hill climbing on g(s) from an initial state with a high value populates the search-control queue with high frequency samples around high value regions of the state space. We discuss some other intuitive mechanisms that we have tested in Appendix A.4.\nSimilar to the previous work (Pan et al., 2019), we obtain the state-value function in both (7a) and (7b) by taking the maximum of the estimated action-value, i.e. V (s) = maxaQ(s, a) ≈ maxaQθ(s, a) where θ is the parameter of the Q-network. Similar to the Dyna architecture (Algorithm 2), during planning stage, we sample multiple mixed mini-batches to update the parameters (i.e. we call multiple planning steps/updates). The mixed mini-batch was also used in the work by Gu et al. (2016) and can alleviate off-policy sampling issue as studied by Pan et al. (2019)." }, { "heading": "5 EXPERIMENTS", "text": "In the experiments, we carefully study the properties of our algorithm on the MountainCar benchmark domain. Then we illustrate the utility of our algorithm on a challenging self-designed MazeGridWorld domain, by which we illustrate the practical implication of having samples from the high frequency regions. Though we mainly focuses on search-control instead of how to learn a model, we include the result of using an online learned model for our algorithm. We refer readers to Appendix A.5 for additional experiments and Appendix A.6 for the reproducibility detail." }, { "heading": "5.1 UTILITY OF FREQUENCY-BASED SEARCH-CONTROL", "text": "The MountainCar (Brockman et al., 2016) domain is well-studied, and it is known that the value function under the optimal value function has sharp changing regions (Sutton & Barto, 2018), which is the setting where our algorithm should be more effective. The agent needs to learn to reach the goal state within as few steps as possible since the reward is −1 per time step. 
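As a supplement to the search-control specification above (and used unchanged in the experiments that follow), the combined rule of Eqs. (7a)/(7b) can be sketched as follows, reusing the g sketch above; the step size is illustrative, and the distance-based acceptance threshold of Algorithm 1 is omitted for brevity.

```python
import random
import torch

# With probability p, climb on g(s) from an initial state drawn from
# the search-control queue (7a); otherwise climb on V(s) from a state
# drawn from the ER buffer (7b). States along the trajectory are
# stored in the queue.
def search_control_step(q_net, er_buffer, sc_queue, p=0.5, alpha=0.01, m=20):
    use_freq = len(sc_queue) > 0 and random.random() < p
    s = random.choice(sc_queue if use_freq else er_buffer)
    s = s.detach().clone().requires_grad_(True)
    for _ in range(m):
        obj = g(q_net, s) if use_freq else q_net(s).max()   # (7a) or (7b)
        (grad,) = torch.autograd.grad(obj, s)
        s = (s + alpha * grad).detach().requires_grad_(True)
        sc_queue.append(s.detach().clone())
    return sc_queue
```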
The purposes of experimenting on this domain are: 1) verify that our search-control can outperform several natural competitors under different number of planning updates; 2) show that our search-control is robust to environment noise.\nWe use the following competitors. Dyna-Frequency is the Dyna architecture using the proposed search-control strategy (Algorithm 1); Dyna-Value is Algorithm 3 from the previous work (Pan et al., 2019); PrioritizedER is DQN with prioritized experience replay (Schaul et al., 2016); ER\nAlgorithm 1 Dyna architecture with Frequency-based search-control B: the ER buffer, Bs: search-control queue M : S ×A → S × R, the model outputs the next-state and reward m: number of states we want to fetch by hill climbing, d: number of planning steps a: the threshold for accepting a state Q,Q′: current and target Q networks, respectively b: the mini-batch size, β ∈ (0, 1): the proportion of simulated samples in a mini-batch τ : update target network Q′ every τ updates to Q t← 0 is the time step, nτ is the number of parameter updates while true do\nObserve st, take action at (i.e. -greedy w.r.t. Q) Observe st+1, rt+1, add (st, at, st+1, rt+1) to B // Gradient ascent hill climbing With probability p, 1− p, choose hill climbing rule (7a) or (7b) respectively; sample s from Bs if choose rule (7a), or from B otherwise; set c← 0, s̃← s while c < m do\nupdate s by executing the chosen hill climbing rule if s is out of state space then: // resample the initial state and hill climbing rule\nWith probability p, 1− p, choose hill climbing rule (7a) or (7b) respectively; sample s from Bs if choose (7a), or from B otherwise; set c← 0, s̃← s continue\nif ||s− s̃||2/ √ n > a then: // n is the state dimension, i.e. S ⊂ Rn\nadd s to Bs, s̃← s, c← c+ 1 for d times do // d planning updates: sample d mini-batches\ndraw βb sample states from the search-control queue Bs, pair them with their corresponding on-policy action, and queryM to get the corresponding next-states and rewards draw (1 − β)b sample transitions from the ER buffer B and add them to the simulated transitions\nuse the mixed mini-batch for parameter update of the estimator, e.g., DQN nτ ← nτ + 1 if mod (nτ , τ) == 0 then:\nQ′ ← Q t← t+ 1\nis simply DQN with experience replay (ER) (Mnih et al., 2015). Figure 3 shows the learning curves of all those algorithms using 10 planning updates (a)(b) and 30 planning updates (c)(d) under different stochasticity. In Figure 3(b)(d), the reward is sampled from the Gaussian distribution N(−1, σ2), σ ∈ {0.0, 0.1}. We make several important observations: 1) With increased number of planning updates, these algorithms do not necessarily perform better, as shown in Figure 3(c). The proposed algorithm, however, appears to gain more through more number of updates since the difference between DynaFrequency and Dyna-Value seems to be clearer in Figure 3(c) than in Figure 3(a). 2) Since both Dyna-Value and our algorithm fetch the same number of states (i.e. m = 20) by hill climbing, the superior performance of our algorithm indicates the advantage of using samples from the high frequency regions. 3) PrioritizedER clearly performs worse than our algorithm and Dyna-Value, which probably implies the utility of the generalization power of the value function to acquire additional samples. 4) Our algorithm maintains superior performance in the presence of noise. One reason is that, noisy perturbation leads to more “energy” in all frequencies. When taking derivative, those high frequency terms are amplified. 
Hence, even with perturbation, high frequency region remains while the value estimate itself may get affected in an unpredictable manner." }, { "heading": "5.2 A CASE STUDY: MAZEGRIDWORLD", "text": "We now illustrate the utility of our method on a challenging MazeGridWorld domain as shown in Figure 4(a). The domain has continuous state space S = [0, 1]2 and four discrete actions {up,down, left, right}. There are three walls in the middle, each of which has a hole for the agent\nto go through. Each episode starts from the bottom left and ends at top right and the agent receives a reward of −1 at each time step, hence the agent should learn to use as few steps as possible to reach the goal area. On this domain, we mainly study our algorithm and the Dyna-Value algorithm.\nFigure 4(b) shows the evaluation curves of the two algorithms. An important difference between our algorithm and the previous work is in the variance of the evaluation curve, which implies a robust policy learned by our method. In Figure 5, we further investigate the state distribution in searchcontrol queues of the two algorithms by uniformly sampling 1000 states from the two queues. Notice that a very important difference between the two distributions is that our search-control queue has a clearly high density around the bottleneck area, i.e., the hole areas where the agent can go across the walls. Learning a stable policy around such areas is extremely important: the agent simply fails to reach the goal state if they cannot pass any one of the holes. This distinguishes our algorithm with the previous work, which appears to acquire states near the goal area." }, { "heading": "6 DISCUSSION", "text": "We motivated and studied a new category of methods for search-control by considering the approximation difficulty of a function. We provided a method for identifying the high frequency regions of a function, and justified it theoretically. We conducted experiments to illustrate our theory. We incorporated the proposed method into the Dyna architecture and empirically investigated its benefits. The method achieved competitive learning performances on a difficult domain.\nThere are several promising future research directions. First, it is worth exploring the combination of different search-control strategies. Second, we can explore the use of active learning methods (Settles, 2010; Hanneke, 2014) in the design of search-control mechanisms, since active learning con-\ncerns about learning a function with as few samples as possible. This directly corresponds to our main purpose of using a smart search-control method in Dyna: to improve policy by using as few planning steps as possible." }, { "heading": "ACKNOWLEDGMENTS", "text": "We would like to thank the anonymous reviewers for their helpful feedback. Amir-massoud Farahmand acknowledges the funding from the the Canada CIFAR AI Chairs program. Yangchen Pan acknowledges the funding from Amii." }, { "heading": "A APPENDIX", "text": "We provide calculations for Example 1 and Example 2 in Section A.1; theoretical proof of Theorem 1 in Section A.2. Background of Dyna is reviewed in Section A.3 and additional discussions regarding search-control are provided in Section A.4. Additional experiments regarding separate criterion of hill climbing, and on continuous control problems are shown in Section A.5. Experimental details for reproducing our empirical results are in Section A.6." }, { "heading": "A.1 CALCULATIONS FOR EXAMPLE 1 AND EXAMPLE 2", "text": "Example 1. 
For fsin defined in Eq. (1), calculate the integrals of squared first order derivative f ′sin on high frequency region [−2, 0) and low frequency region [0, 2], respectively:∫ 0\n−2 [f ′sin(x)] 2 dx = 64π2, ∫ 2 0 [f ′sin(x)] 2 dx = π2.\nProof. Taking derivative and integral,∫ 0 −2 [f ′(x)] 2 dx = 64π2 ∫ 0 −2\n[cos (8πx)] 2 dx = 64π2,∫ 2\n0\n[f ′(x)] 2 dx = π2 ∫ 2 0 [cos (πx)] 2 dx = π2.\nExample 2. Let f : [−π, π]→ R be a band-limited real valued function defined as\nf(x) = a0 2 + N∑ n=1 an cos (nx) + bn sin (nx),\nwhere a0, an, bn ∈ R, n = 1, 2, . . . , N are Fourier coefficients of frequency n2π . Then,∫ π −π |f ′(x)|2 dx = π · N∑ n=1 n2 ( a2n + b 2 n ) , ∫ π −π |f ′′(x)|2 dx = π · N∑ n=1 n4 ( a2n + b 2 n ) .\nProof. Taking derivative of f ,\nf ′(x) = N∑ n=1 [−nan sin (nx)] + N∑ n=1 [nbn cos (nx)].\nTaking square of f ′,\n[f ′(x)] 2 = N∑ n=1 N∑ m=1 [nmanam sin (nx) sin (mx)]− N∑ n=1 N∑ m=1 [nmanbm sin (nx) cos (mx)]\n− N∑ n=1 N∑ m=1 [mnambn sin (mx) cos (nx)] + N∑ n=1 N∑ m=1 [nmbnbm cos (nx) cos (mx)].\nTaking integral,∫ π −π [f ′(x)] 2 dx = ∫ π −π N∑ n=1 N∑ m=1 [nmanam sin (nx) sin (mx)]dx− ∫ π −π N∑ n=1 N∑ m=1 [nmanbm sin (nx) cos (mx)]dx\n− ∫ π −π N∑ n=1 N∑ m=1 [mnambn sin (mx) cos (nx)]dx+ ∫ π −π N∑ n=1 N∑ m=1 [nmbnbm cos (nx) cos (mx)]dx\n= N∑ n=1 N∑ m=1 [nmanamπδn,m − 0− 0 + nmbnbmπδn,m]\n= π · N∑ n=1 n2 ( a2n + b 2 n ) ,\nwhere\nδn,m def = { 1, if n = m, 0, otherwise.\nUsing similar arguments, taking derivative of f ′(x),\nf ′′(x) = N∑ n=1 [ −n2an cos (nx) ] + N∑ n=1 [ −n2bn sin (nx) ] .\nTaking integral, ∫ π −π [f ′′(x)] 2 dx = π · N∑ n=1 n4 ( a2n + b 2 n ) ." }, { "heading": "A.2 PROOF FOR THEOREM 1", "text": "Notations. For any vector norm || · ||, we mean l2 norm and we ignore the subscript unless clarification is needed. We use Frobenius norm || · ||F for matrix. We use subscript yl to denote the lth element in vector y. Let Hf (y) be the Hessian matrix of f(y). We write H for short unless clarification is needed. Let Hl,: be the lth row of the Hessian matrix.\nProof description. We establish the connection between local gradient norm, Hessian norm and local frequency. To build such connection, we introduce a definition of πf̂ as shown below and we call it “local frequency distribution” of f(x). πf̂ is a probability distribution over R\nn, i.e.,∫ k∈Rn πf̂ (k)dk = 1, and πf̂ (k) ≥ 0, ∀k ∈ R\nn. Within an open subset of domain (an unit ball), this distribution characterizes the proportion of a particular frequency component occupies. The proof can be described by three key steps:1) We use a local Fourier transform to express a function locally (i.e. within an unit ball). 2) we calculate the gradient/Hessian norm based on this local Fourier transform; 3) we take integration over the unit ball of the gradient/Hessian norm to build the connection with the local frequency distribution πf̂ and function energy.\nTheorem 1. Given any function f : Rn → R, for any frequency vector k ∈ Rn, define its local Fourier transform of x ∈ Rn,\nf̂(k) def = ∫ y∈B(x,1) f(y) exp { −2πi · y>k } dy,\nfor local function around x, i.e., y ∈ B(x, 1) def= {y : ‖y − x‖ < 1}. 
Assume the local function “energy” is finite, ∫\ny∈B(x,1) [f(y)]\n2 dy = ∫ Rn ‖f̂(k)‖2dk <∞, ∀x ∈ Rn.\nDefine “local frequency distribution” of f(x) as:\nπf̂ (k) def = ‖f̂(k)‖2∫ Rn ‖f̂(k̃)‖2dk̃ , ∀k ∈ Rn.\nThen, ∀x ∈ Rn, we have: 1) the first order connection:∫\ny∈B(x,1) ‖∇f(y)‖2 dy = 4π2 · [∫ y∈B(x,1) [f(y)] 2 dy ] · [∫ Rn πf̂ (k) · ‖k‖ 2 dk ] ,\n2) the second order connection:∫ y∈B(x,1) ||H(y)||2F dy = 16π4 [∫ y∈B(x,1) [f(y)] 2 dy ] · [∫ Rn πf̂ (k) · ‖k‖ 4 dk ]\nProof. 1) We first prove the first order connection. Consider the following function defined locally around x,\nfx(y) def = { f(y), if y ∈ B(x, 1), 0, otherwise.\nBy definition, the Fourier transform of fx is\nf̂(k) = ∫ y∈B(x,1) f(y) exp { −2πi · y>k } dy\n= ∫ Rn fx(y) exp { −2πi · y>k } dy.\nAnd the inverse Fourier transform of fx(y), ∀y ∈ B(x, 1) is,\nfx(y) = ∫ Rn f̂(k) exp { 2πi · y>k } dk,\nand then the gradient ∀y ∈ B(x, 1) is ∇f(y) = ∇fx(y) = ∫ Rn f̂(k) exp { 2πi · y>k } (2πi · k) dk. (8)\nTo calculate gradient norm, we use complex conjugate, ∇f∗(y) = ∫ Rn f̂∗(k′) exp { −2πi · y>k′ } (−2πi · k′) dk′,\nwhere\nf̂∗(k′) = ∫ Rn fx(y ′) exp { 2πi · y′>k′ } dy′\nis the complex conjugate of f̂(k). Therefore,\n‖∇f(y)‖2 = 〈∇f(y),∇f∗(y)〉\n= ∫ Rn ∫ Rn f̂(k)f̂∗(k′) exp { 2πi · y> (k − k′) } ( 4π2k>k′ ) dkdk′.\n(9)\nTaking integral of ‖∇f(y)‖2 within the unit ball centered at x,∫ y∈B(x,1) ‖∇f(y)‖2 dy = ∫ Rn ‖∇fx(y)‖2 dy, by function definition (10a)\n= ∫ Rn ∫ Rn f̂(k)f̂∗(k′) [∫ Rn exp { 2πi · y> (k − k′) } dy ] ( 4π2k>k′ ) dkdk′ (10b)\n= ∫ Rn ∫ Rn f̂(k)f̂∗(k′)δk−k′,0 ( 4π2k>k′ ) dkdk′ (10c)\n= 4π2 ∫ Rn ‖f̂(k)‖2 · ‖k‖2 dk. (10d)\nRecall the definition of local function “energy” around x,∫ Rn ‖f̂(k)‖2dk = ∫ Rn 〈 f̂(k), f̂∗(k) 〉 dk (11a)\n= ∫ y∈Rn ∫ y∈Rn fx(y)fx(y ′) [∫ Rn exp { 2πi · k> (y′ − y) } dk ] dydy′ (11b)\n= ∫ y∈Rn ∫ y∈Rn fx(y)fx(y ′)δy′−y,0dydy ′ (11c)\n= ∫ y∈Rn f2x(y)dy (11d)\n= ∫ y∈B(x,1) f2(y)dy <∞. (by definition of fx(y) and finite energy assumption)\n(11e)\nFor y ∈ B(x, 1), the local gradient information is related to local energy and frequency distribution,\n∫ y∈B(x,1) ‖∇f(y)‖2 dy = 4π2 ∫ Rn ‖f̂(k)‖2 · ‖k‖2 ∫ Rn ‖f̂(k̃)‖ 2dk̃∫ Rn ‖f̂(k̃)‖2dk̃ dk (12a)\n= 4π2 ∫ Rn πf̂ (k) ‖k‖ 2 ∫ Rn ‖f̂(k̃)‖2dk̃dk (12b)\n= 4π2 · [∫ y∈B(x,1) f2(y)dy ] · [∫ Rn πf̂ (k) · ‖k‖ 2 dk ] , (12c)\nwhere the last equality follows by ∫ Rn ‖f̂(k̃)‖ 2dk̃ = ∫ y∈B(x,1) f\n2(y)dy which is established in the derivation (11).\n2) Now we prove the second order connection. To show the second order connection, we start from Eq. (8). Then the lth row of the Hessian matrix Hl,: can be written as:\nHl,: = ∂∇f(y) ∂yl\n>\nwhere we use the notation ∂∇f(y)∂yl to denote the vector formed by taking partial derivative of each element in the gradient vector∇f(y) w.r.t. yl. Then,\n∂∇f(y) ∂yl = ∫ Rn f̂(k) exp { 2πi · y>k } ( 4π2i2(e>l k)k ) dk,\nwhere el is standard basis vector where the lth element is one. To calculate the norm of the vector Hl,: =\n∂∇f(y) ∂yl\n> , we use complex conjugate again and follow the similar derivation as done\nin Eq. (9):\n||Hl,:||22 = 〈Hl,:, Hl,:〉\n= ∫ Rn ∫ Rn f̂(k)f̂∗(k′) exp { 2πi · y>(k − k′) } ( 16π4i4(e>l k)(e > l k ′)k>k′ ) dkdk′\nNote that the square of Frobenius norm of the Hessian matrix can be written as ||H||2F =∑ i,j H 2 i,j = ∑n l=1 ||Hl,:||22. 
Then,\n||H||2F = n∑ l=1 ||Hl,:||22\n= ∫ Rn ∫ Rn f̂(k)f̂∗(k′) exp { 2πi · y>(k − k′) }( 16π4i4 n∑ l=1 (e>l k)(e > l k ′)k>k′ ) dkdk′\n= 16π4 ∫ Rn ∫ Rn f̂(k)f̂∗(k′) exp { 2πi · y>(k − k′) } (k>k′)2dkdk′\nTaking the integration of ||H||2F over y variable within a ball with center x and unit radius, we acquire: ∫\ny∈B(x,1) ||H(y)||2F dy = 16π4 ∫ Rn ||f̂(k)||2||k||4dk\n= 16π4 [∫ y∈B(x,1) [f(y)] 2 dy ] · [∫ Rn πf̂ (k) · ‖k‖ 4 dk ] where the derivation process for the first equation is a simple modification from the derivation (10) and the second equation follows the same derivation (12).\nAlgorithm 2 Generic Dyna Architecture: Tabular Setting Initialize Q(s, a) and modelM(s, a), ∀(s, a) ∈ S ×A while true do\nobserve s, take action a by -greedy w.r.t Q(s, ·) execute a, observe reward R and next state s′ Q-learning update for Q(s, a) update modelM(s, a) (i.e. by counting) store (s, a) into search-control queue for i=1:d do\nsample (s̃, ã) from search-control queue (s̃′, R̃)←M(s̃, ã) // simulated transition Q-learning update for Q(s̃, ã) // planning update\nDiscussion with Uncertainty Principle. We now provide an intuitive interpretation of our theorem from the well-known uncertainty principle. The Uncertainty Principle says that a function cannot be simultaneously concentrated in both spatial and frequency space. That is, the more concentrated a function is, the more spread out its Fourier transform must be, indicating that more high frequency terms are needed to express the function. For example, by uncertainty principle (Stein & Shakarchi, 2003), ∀x ∈ R we have[∫\n‖y−x‖≤1 (y − x)2 · [fx(y)]2 dy\n] · [∫\nRn ‖f̂(k)‖2 · ‖k‖2 dk\n] ≥ 1\n16π2 . (13)\nNote that the first term [∫ ‖y−x‖≤1 (y − x) 2 · [fx(y)]2 dy ]\nmeasures the dispersion of the function around the point x. Hence the smaller this term is, the more concentration of the function is on the specified domain. Plug into our Eq. (5) and replace the function energy term by Fourier transform and cancel out the normalization term, we get:[∫\n‖y−x‖≤1 (y − x)2 · [fx(y)]2 dy\n] · [∫ ‖y−x‖≤1 ‖∇f(y)‖2 dy ] ≥ 1 4 ,\nwhich means the more concentrated f is locally around x, the larger the lower bound of the local gradient norm must be." }, { "heading": "A.3 BACKGROUND IN DYNA", "text": "In this section, we provide the vanilla (tabular) Dyna (Sutton, 1991a; Sutton & Barto, 2018) in Algorithm 2, and Hill Climbing Dyna by Pan et al. (2019) in Algorithm 3. Dyna is a classic modelbased reinforcement learning architecture. As described in Algorithm 2, at each time step, the real experience is used to directly improve policy/value estimates, and is also used to learn the environment dynamics model. During planning stage, simulated experiences are acquired from the learned model and are used to further improve the policy. The critical component in Dyna is the search-control mechanism, which decides what simulated experiences to use during planning. This area is relatively unexplored, though abundant literature is available regarding how to learn a model." }, { "heading": "A.4 A DISCUSSION ON SEARCH-CONTROL DESIGN BASED ON HILL CLIMBING", "text": "There are different ways to combine hill climbing strategies. Here are some unsuccessful trials. For example, climbing on direct combinations of V (s) (value function) and g(s) (frequency criterion), such as V (s) + g(s), or V (s)g(s), did not work well. The reasons are as following. First, such combination can lead to unpredictable gradient behaviour. 
It can alter the trajectory solely based on either g(s) or V (s), and the effect is unclear. It may lead to states with neither high value or high frequency. Last, and probably the most important, hill climbing on V (s) and on g(s) have fundamentally different insights. The former is based on the intuition that the value information should be propagated from the high value region to low value region; as a result, it requires to store states along the whole trajectory, including those in low value region. This is empirically verified by Pan et al. (2019). However, the latter is based on the insight that the function value\nAlgorithm 3 HC-Dyna architecture Bs: search-control queue, B: the experience replay buffer m: number of states to fetch by search-control b: the mini-batch size while true do\nObserve st, take action at (i.e. -greedy w.r.t. action value function) Observe st+1, rt+1, add (st, at, st+1, rt+1) to B sample s from visited states, i.e. ER buffer B // Hill climbing by gradient ascent while get less than m states do\ns← s+∇sV (s), V (s) = maxsQθ(s, a) store s into search control queue Bs\n// Planning stage for d times do\n// sample states from Bs and pair them with on-policy actions, query the model to get next states and rewards // mix simulated and real experiences into a mini-batch and use it to update parameters (i.e. DQN update)\nt← t+ 1\nin high frequency region is more difficult to approximate and needs more samples, while there is no obvious reason to propagate those information back to low frequency region. As a result, this approach does not emphasize on recording states throughout the whole hill climbing trajectory." }, { "heading": "A.5 ADDITIONAL EXPERIMENTS", "text": "In this section, we briefly study the effect of doing hill climbing on only gradient norm or Hessian norm. Then we demonstrate that our search-control strategy can be also used for continuous control algorithms.\nHill climbing on only gradient norm or Hessian norm. Throughout our paper, we use the form of g(s) = ‖∇sV (s)‖2 + ‖Hv(s)‖2F to search states from high (local) frequency region of the value function. Besides the theoretical reason, there is a practical demand of such design. On value function surface, regions which have low (or even zero) gradient magnitude may have high Hessian magnitude, and vice versa. Hence, it can help move along the gradient trajectory in case that one of the term vanished at some point. Such cases can be a result of function approximation (smoothness/differentiability), or of the nature of the task, or both. In Fig. 6, we show the results of using only either gradient norm or Hessian norm. The reason we choose MountainCar and GridWorld (the same domain as described by Pan et al. (2019)) is that, the former has a value function surface with lots of variations; while the latter’s value function increases smoothly from the initial state to the goal state, which indicates a small magnitude second-order derivative. Indeed, we empirically observe that the term ∇s‖Hv(s)‖2F frequently gives a zero vector on GridWorld. This explains the bad performance of Dyna-HessNorm in Fig. 6(b). In contrast, Fig. 6(a) shows slightly better performance of Dyna-HessNorm and Dyna-GradNorm. Notice that, an intuitive and more general form of g(x) can be g(s) = η1‖∇sV (s)‖2 + η2‖Hv(s)‖2F , at the cost that additional meta-parameters are introduced.\nContinuous Control. 
In this section, we show a simple demonstration where our method is adapted to two continuous control tasks: Hopper-v2 and Walker2d-v2 from Mujoco (Todorov et al., 2012) by using a continuous Q learning algorithm called NAF (Normalized Advantage Function) (Gu et al., 2016). The algorithm parameterizes the action value function as Q(s, a) = V (s) − (a − µ(s))TP (a − µ(s)) where P is a positive semi-definite matrix and hence the action with maximum value can be easily found: arg maxaQ(s, a) = µ(s). Our search-control strategy naturally applies here by utilizing the value function V (s). From Fig. 7, one can see that our algorithm (DynaNAF-Frequency) finds a better policy comparing with the model-free NAF." }, { "heading": "A.6 REPRODUCIBILITY DETAIL", "text": "All of our implementations are based on tensorflow with version 1.13.0 (Abadi et al., 2015). For DQN update, we use Adam optimizer (Kingma & Ba, 2014). We use mini-batch size b = 32 except on the supervised learning experiment where we use 128. For reinforcement learning experiment, we use buffer size 100k. All activation functions are tanh except the output layer of the Q-value is linear. Except the output layer parameters which were initialized from a uniform distribution [−0.003, 0.003], all other parameters are initialized using Xavier initialization (Glorot & Bengio, 2010). For model learning, we use a 64 × 64 relu units neural network to predict s′ − s given a state-action pair with mini-batch size 128 and learning rate 0.0001.\nFor the supervised learning experiment shown in Section 3, we use 16 × 16 tanh units neural network, with learning rate 0.001 for all algorithms. The learning curve is plotted by computing the testing error every 20 iterations. When generating Fig. 2, in order to sample points according to p(x) ∝ |f ′(x)| or p(x) ∝ |f ′′(x)|, we use 10, 000 even spaced points on the domain [−2, 2] and the probabilities are computed by normalization across the 10k points.\nThe experiment on MountainCar is based on the implementation from OpenAI (Brockman et al., 2016), we use 32 × 32 tanh layer, with target network moving rate 1000 and learning rate 0.001. Exploration noise is 0.1 without decaying. For all algorithms, we use warm up steps = 5000 (i.e. random action is taken in the first 5k time steps). Prioritized experience replay (PrioritizedER) is implemented as the proportional version with sum tree data structure. We use prioritized ER without importance ratio but half of mini-batch samples are uniformly sampled from the buffer as a strategy for bias correction. For Dyna-Value and Dyna-Frequency, we use: gradient ascent step size (in\nsearch-control) 0.01, mixing rate β = 0.5 and m = 20, i.e., at each environment time step we fetch 20 states by hill climbing. We fix p = 0.5 across all experiments, hence the hill climbing rules (7a) and (7b) are selected with equal probability. We use natural projected gradient ascent for hill climbing as introduced by Pan et al. (2019).\nFor the experiment on MazeGridWorld, each wall’s width is 0.1 and each hole has height 0.1. The left-top point of the hole in the first wall (counting from left to right) has coordinate (0.2, 0.5); the hole in the second wall has coordinate (0.4, 1.0) and the third one is 0.7, 0.2. Each action leads to 0.05 unit move perturbed by a Gaussian noise from N(0, 0.01). 
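The environment just specified can be sketched as follows. Reading N(0, 0.01) as a standard deviation of 0.01, treating each stated coordinate as the left-top corner of a wall's hole, and the stay-in-place collision rule are all assumptions about details not fixed by the text.

```python
import numpy as np

# MazeGridWorld sketch: S = [0, 1]^2, four actions, three walls of
# width 0.1 with holes of height 0.1 at the coordinates given above;
# reward is -1 per step.
WALLS = [(0.2, 0.5), (0.4, 1.0), (0.7, 0.2)]    # (left x of wall, top y of hole)
MOVES = {0: (0.0, 0.05), 1: (0.0, -0.05),       # up, down
         2: (-0.05, 0.0), 3: (0.05, 0.0)}       # left, right

def step(s, a, rng=np.random.default_rng()):
    nxt = np.clip(s + np.array(MOVES[a]) + rng.normal(0, 0.01, 2), 0.0, 1.0)
    for wx, hole_top in WALLS:
        in_wall = wx <= nxt[0] <= wx + 0.1
        in_hole = hole_top - 0.1 <= nxt[1] <= hole_top
        if in_wall and not in_hole:
            return s, -1.0                      # blocked by the wall: stay put
    return nxt, -1.0
```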
On this domain, for both DynaValue and Dyna-Frequency, all parameters are set the same with that used on MountainCar except that we use 64 × 64 tanh units for Q network, and number of search-control samples is set as m = 50, number of planning updates is 30. As a supplement to the Section 5.2, we also provide the state distribution from ER buffer in Figure 8. One can see that ER buffer has very different state distribution with search-control queue." }, { "heading": "A.7 ALGORITHMIC DETAILS", "text": "We provide the pseudo-code in Algorithm 4 with sufficient details to recreate our experimental results. The hill climbing rules we used is the same as introduced by Pan et al. (2019). Define\nvs def = ∇s max\na Q(s, a),\nthen\ngs def = ∇sg(s) = ∇s(||∇s max a Q(s, a)||22 + ||Hv(s)||2F ) = ∇s(||vs||22 + ||∇svs||2F ).\nNote that we use a squared norm to ensure numerical stability when taking gradient. Then for value-based search-control, we use\ns← s+ α ||Σ̂svs|| Σ̂svs +Xi, Xi∼ N(0, ηΣ̂s) (14)\nand for frequency-based search-control, we use\ns← s+ α ||Σ̂sgs|| Σ̂sgs +Xi, Xi∼ N(0, ηΣ̂s) (15)\nwhere Σ̂s is empirical covariance matrix estimated from visited states, and we set η = 0.01, α = 0.01 across all experiments. Notice that comparing with the previous work, we omitted the projection step as we found it is unnecessary in our experiments.\nAlgorithm 4 Dyna architecture with Frequency-based search-control with additional details Bs: search-control queue, B: the experience replay buffer M : S ×A → S × R, the environment model m: number of search-control samples to fetch at each step p: probability of choosing value-based hill climbing rule (we set p = 0.5 for all experiments) β ∈ [0, 1]: mixing factor in a mini-batch, i.e. βb samples in a mini-batch are simulated from model n: number of state variables, i.e. S ⊂ Rn a: empirically learned threshold as sample average of ||st+1 − st||2/ √ n\nd: number of planning steps Q,Q′: current and target Q networks, respectively b: the mini-batch size τ : update target network Q′ every τ updates to Q t← 0 is the time step nτ ← 0 is the number of parameter updates // Gradient ascent hill climbing With probability p, 1− p, choose hill climbing Eq. (14) o Eq. (15) respectively; sample s from Bs if choose rule Eq. (14), or from B otherwise; set c← 0, s̃← s while c < m do\nupdate s by executing the chosen hill climbing rule if s is out of state space then: // resample the initial state and hill climbing rule\nWith probability p, 1− p, choose hill climbing rule Eq. (14) or Eq. (15) respectively; sample s from Bs if choose Eq. (7), or from B otherwise; set c← 0, s̃← s continue\nif ||s− s̃||2/ √ n > a then:\nadd s to Bs, s̃← s, c← c+ 1 // d planning updates: sample d mini-batches for d times do // d planning updates\nsample βb states from Bs and pair them with on-policy actions, and query M to get next states and rewards\nsample b(1− β) transitions from B an stack these with the simulated transitions use the mixed mini-batch for parameter (i.e. DQN) update nτ ← nτ + 1 if mod(nτ , τ) == 0 then:\nQ′ ← Q t← t+ 1" } ]
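As a compact restatement of Eqs. (14)/(15), one hill-climbing step with the empirical-covariance preconditioner and Gaussian noise might look like the sketch below; the covariance is assumed positive definite, and grad_fn returns v_s or g_s depending on the chosen rule.

```python
import torch

# One preconditioned, noisy ascent step (alpha = eta = 0.01 as reported).
def hill_climb_step(s, grad_fn, cov, alpha=0.01, eta=0.01):
    d = grad_fn(s)                               # v_s (Eq. 14) or g_s (Eq. 15)
    step = cov @ d
    step = alpha * step / step.norm()            # alpha * Sigma d / ||Sigma d||
    noise = torch.distributions.MultivariateNormal(
        torch.zeros_like(s), covariance_matrix=eta * cov).sample()
    return s + step + noise                      # X ~ N(0, eta * Sigma)
```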
2020
null
SP:7c36047790a8d3e229748fea4d9ff7572a97fd0a
[ "This paper studies initialization techniques for deep ReLU networks from a theoretical standpoint and derives finite layer width concentration bounds to show that with the He initialization scheme, deep ReLU networks preserve the norm of the input sample during a forward pass and the norm of the gradient with respect to the output during a backward pass. The concentration bounds also suggest lower bounds on the width of the ReLU layers. The authors verify their theory with experiments on synthetic data.", "This work considers random parameter initialization in neural networks (In particular the initialization presented in He et al.) and develops non-asymptotic bounds for the norms and gradients of neural networks during initialization. The authors show that the norms of the outputs and gradients (for gradients, under a different assumption on the dimension of the matrix) remain constant through the different layers. The results presented differ from previous work in that they give nice concentration bounds for such output and gradient norms. In addition the authors prove results in the case of infinite samples under the assumption that they arise from a finite dimensional space." ]
It has been noted in existing literature that over-parameterization in ReLU networks generally improves performance. While there could be several factors involved behind this, we prove some desirable theoretical properties at initialization which may be enjoyed by ReLU networks. Specifically, it is known that He initialization in deep ReLU networks asymptotically preserves variance of activations in the forward pass and variance of gradients in the backward pass for infinitely wide networks, thus preserving the flow of information in both directions. Our paper goes beyond these results and shows novel properties that hold under He initialization: i) the norm of hidden activation of each layer is equal to the norm of the input, and, ii) the norm of weight gradient of each layer is equal to the product of norm of the input vector and the error at output layer. These results are derived using the PAC analysis framework, and hold true for finitely sized datasets such that the width of the ReLU network only needs to be larger than a certain finite lower bound. As we show, this lower bound depends on the depth of the network and the number of samples, and by the virtue of being a lower bound, over-parameterized ReLU networks are endowed with these desirable properties. For the aforementioned hidden activation norm property under He initialization, we further extend our theory and show that this property holds for a finite width network even when the number of data samples is infinite. Thus we overcome several limitations of existing papers, and show new properties of deep ReLU networks at initialization.
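The activation norm property stated above is easy to probe numerically before any training; a minimal sketch follows, where the depth, constant width, and random input are arbitrary assumptions (with constant width, the 2/fan-out initialization used here coincides with 2/fan-in).

```python
import numpy as np

# With W_ij ~ N(0, 2/n_l), the norm of each hidden activation stays
# near the input norm across depth; fluctuations shrink as width grows.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
h, n_prev = x, x.size
for layer in range(1, 21):                          # a 20-layer ReLU stack
    n = 500                                         # constant width
    W = rng.normal(0, np.sqrt(2 / n), size=(n, n_prev))
    h = np.maximum(W @ h, 0)                        # biases initialized to zero
    n_prev = n
    if layer % 5 == 0:
        print(layer, np.linalg.norm(h) / np.linalg.norm(x))  # stays near 1
```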
[]
[ { "authors": [ "Sanjeev Arora", "Nadav Cohen", "Elad Hazan" ], "title": "On the optimization of deep networks: Implicit acceleration by overparameterization", "venue": "arXiv preprint arXiv:1802.06509,", "year": 2018 }, { "authors": [ "Devansh Arpit", "Stanislaw Jastrzebski", "Nicolas Ballas", "David Krueger", "Emmanuel Bengio", "Maxinder S Kanwal", "Tegan Maharaj", "Asja Fischer", "Aaron Courville", "Yoshua Bengio" ], "title": "A closer look at memorization in deep networks", "venue": "arXiv preprint arXiv:1706.05394,", "year": 2017 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In Proceedings of the thirteenth international conference on artificial intelligence and statistics,", "year": 2010 }, { "authors": [ "Boris Hanin" ], "title": "Which neural net architectures give rise to exploding and vanishing gradients", "venue": "arXiv preprint arXiv:1801.03744,", "year": 2018 }, { "authors": [ "Boris Hanin", "David Rolnick" ], "title": "How to start training: The effect of initialization and architecture", "venue": "arXiv preprint arXiv:1803.01719,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Sham Kakade", "Greg Shakhnarovich" ], "title": "URL http://ttic. uchicago.edu/ ̃gregory/courses/LargeScaleLearning/lectures/jl.pdf", "venue": "Random projections,", "year": 2009 }, { "authors": [ "Behnam Neyshabur", "Ryota Tomioka", "Nathan Srebro" ], "title": "In search of the real inductive bias: On the role of implicit regularization in deep learning", "venue": "arXiv preprint arXiv:1412.6614,", "year": 2014 }, { "authors": [ "Behnam Neyshabur", "Zhiyuan Li", "Srinadh Bhojanapalli", "Yann LeCun", "Nathan Srebro" ], "title": "Towards understanding the role of over-parametrization in generalization of neural networks", "venue": null, "year": 1805 }, { "authors": [ "Jeffrey Pennington", "Samuel Schoenholz", "Surya Ganguli" ], "title": "Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Jeffrey Pennington", "Samuel S Schoenholz", "Surya Ganguli" ], "title": "The emergence of spectral universality in deep networks", "venue": "arXiv preprint arXiv:1802.09979,", "year": 2018 }, { "authors": [ "Ben Poole", "Subhaneil Lahiri", "Maithra Raghu", "Jascha Sohl-Dickstein", "Surya Ganguli" ], "title": "Exponential expressivity in deep neural networks through transient chaos", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Tamas Sarlos" ], "title": "Improved approximation algorithms for large matrices via random projections", "venue": "In Foundations of Computer Science,", "year": 2006 }, { "authors": [ "Andrew M Saxe", "James L McClelland", "Surya Ganguli" ], "title": "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks", "venue": "arXiv preprint arXiv:1312.6120,", "year": 2013 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep rectifier (ReLU) networks are popular in deep learning due to their ease of training and stateof-the-art generalization. This success of deep rectifier networks can be partly attributed to good initialization strategies (for example Glorot & Bengio (2010); He et al. (2015)). Essentially, good parameter initializations guarantee that there is no exploding or vanishing of information across hidden layers. These properties help gradient descent based optimization methods in navigating the complex non-linear loss landscape of deep networks by initializing them at a good starting point where training can begin. Such favorable properties promised by these initialization strategies are (in most cases) shown to hold true in asymptotic settings where the network width tends to infinity and/or under strict assumptions made about the distribution of the input data. A detailed account of these existing papers and a contrast between these papers and our work is discussed in section 2.\nOur paper relaxes the aforementioned assumptions made in previous papers. Further, we show novel properties that hold for deep ReLU networks at initialization when using the He initialization scheme (He et al., 2015). Specifically, we show that deep ReLU networks obey the following properties in the forward (Eq. 1) back backward (Eq. 2) pass (see section 3 for notations),\n‖hl‖2 ≈ ‖x‖2 ∀l ∈ {1, 2, · · · , L} (1)\n‖∂`(fθ(x),y) ∂Wl ‖F ≈ ‖δ(x,y)‖2 · ‖x‖2 ∀l ∈ {1, 2, · · · , L} (2)\nWe refer to the above properties as as the the activation norm equality and the gradient norm equality property.\nFurther, we derive a finite lower bound on the width of the hidden layers for which the above results hold (i.e., the network needs to be sufficiently over-parameterized) in contrast to a number of previous papers that assume infinitely wide layers.\nWe do not make any assumption on the data distribution as done in a number of previous papers that study initialization. Further, our results hold for an infinite stream of data for the activation norm equality property and for any finite dataset in the backward pass.\nThus we have relaxed a number of assumptions made in previous research work that focus on deriving initialization strategies for deep ReLU networks. Our results showing the connection between activation norm and input norm (and similarly the property for gradients) for deep ReLU networks can be utilized in further research studies." }, { "heading": "2 RELATION WITH EXISTING WORK", "text": "The seminal work of Glorot & Bengio (2010) studied for the first time a principled way to initialize deep networks to avoid exploding/vanishing gradient problem (EVGP). Their analysis however is done for deep linear networks. The analysis by He et al. (2015) follows the derivation strategy of Glorot & Bengio (2010) except they tailor their derivation for deep ReLU networks. However, both these papers make a strong assumption that the dimensions of the input are statistically independent and that the network width is infinite. Our results do not make these assumptions.\nSaxe et al. (2013) introduce the notion of dynamical isometry which is achieved when all the singular values of the input-output Jacobian of the network is 1. They show that deep linear networks achieve dynamical isometry when initialized using orthogonal weights and this property allows fast learning in such networks.\nPoole et al. 
(2016) study how the norm of hidden activations evolves when propagating an input through the network. Pennington et al. (2017; 2018) study the exploding and vanishing gradient problem in deep ReLU networks using tools from free probability theory. Under the assumption of an infinitely wide network, they show that the average squared singular value of the input-output Jacobian for a deep ReLU network is 1 when initialized appropriately. Our paper on the other hand shows that deep ReLU networks are norm-preserving maps at appropriate initialization. Further, we show there exists a finite lower bound on the width of the network for which the Frobenius norm of the hidden layer-output Jacobian (equivalently, the sum of its squared singular values) is equal across all hidden layers.

Hanin & Rolnick (2018) show that for a fixed input, the variance of the squared norm of hidden layer activations is bounded from above and below for deep ReLU networks to be near the squared norm of the input, such that the bound depends on the sum of reciprocals of the layer widths of the network. Our paper shows a similar result in a PAC bound sense, but as an important difference, we show that these results hold even for an infinite stream of data by making the bound depend on the dimensionality of the input.

Hanin (2018) shows that sufficiently wide deep ReLU networks with appropriately initialized weights prevent EVGP in the sense that the fluctuation between the elements of the input-output Jacobian matrix of the network is small. This avoids EVGP because a large fluctuation between the elements of the input-output Jacobian implies a large variation in its singular values. Our paper shows that sufficiently wide deep ReLU networks avoid EVGP in the sense that the norm of the gradient for the weights of each layer is roughly equal to a fixed quantity that depends on the input and target.

Over-parameterization in deep networks has previously been shown to have advantages. Neyshabur et al. (2014); Arpit et al. (2017) show empirically that wider networks train faster (in number of epochs) and have better generalization performance. From a theoretical viewpoint, Neyshabur et al. (2018) derive a generalization bound for a two layer ReLU network where they show that a wider network has a lower complexity. Lee et al. (2017) show that infinitely wide deep networks act as a Gaussian process. Arora et al. (2018) show that over-parameterization in deep linear networks acts as a conditioning on the gradient leading to faster convergence, although in this case over-parameterization in terms of depth is studied. Our analysis complements this line of work by showing another advantage of over-parameterization in deep ReLU networks." }, { "heading": "3 THEORETICAL RESULTS", "text": "Let D = {x_i, y_i}_{i=1}^N be N training sample pairs of input vectors x_i ∈ R^{n_0} and target vectors y_i ∈ R^K, where the x_i's are sampled from a distribution with support X. Define an L layer deep ReLU network f_θ(x) = h^L with the l-th hidden layer's activation given by,

h^l := ReLU(a^l)
a^l := W^l h^{l−1} + b^l,  l ∈ {1, 2, · · · , L}  (3)

where h^l ∈ R^{n_l} are the hidden activations, h^0 is the input to the network and can be one of the input vectors x_i, W^l ∈ R^{n_l×n_{l−1}} are the weight matrices, b^l ∈ R^{n_l} are the bias vectors which are initialized as 0s, a^l are the pre-activations, and θ = {(W^l, b^l)}_{l=1}^L. Define a loss on the deep network function for any given training data sample (x, y) as,

ℓ(f_θ(x), y)  (4)

where ℓ(·) is any desired loss function. For instance, ℓ(·) 
can be log loss for a classification problem, in which case f_θ(x) is transformed using a weight matrix to have dimensions equal to the number of classes and the softmax activation is applied to yield class probabilities (i.e., a logistic regression like model on top of f_θ(x)). However, for our purpose we do not need to restrict ℓ(·) to a specific choice; we only need it to be differentiable. We will make use of the notation,

δ(x, y) := ∂ℓ(f_θ(x), y)/∂a^L  (5)

We organize our theoretical results as follows. We first derive the activation norm equality property for finite datasets and then extend these results to the infinite dataset setting in section 3.1. We then derive the gradient norm equality property for finite datasets in section 3.2. All formal proofs, if not shown in the main text, are available in the appendix." }, { "heading": "3.1 ACTIVATION NORM EQUALITY", "text": "Consider an L layer deep ReLU network and data x ∈ X. We show in this section that the norm of the hidden layer activation of any layer is roughly equal to the norm of the input at initialization for all x ∈ X, if the network weights are initialized appropriately and the network width is sufficiently large but finite. Specifically we show ∀ l ∈ [L] and x ∈ X,

‖h^l‖² ≈ ‖x‖²  (6)

To achieve this goal, we start with a very simple result: in expectation, the ReLU transformation in each layer preserves the norm of its corresponding input if the weights are sampled appropriately. Evaluating this expectation also helps determine the scale of the random initialization that leads to norm preservation.

Lemma 1 Let v = ReLU(Ru), where u ∈ R^n, R ∈ R^{m×n}. If R_ij i.i.d.∼ N(0, 2/m), then for any fixed vector u, E[‖v‖²] = ‖u‖².

The proof of the above lemma involves simply computing the expectation analytically, by exploiting the fact that each dimension of the vector Ru is a weighted sum of independent Gaussian random variables. The above result thus shows that for each layer, initializing its weights from an i.i.d. Gaussian distribution with 0 mean and 2/fan-out variance (viz. He initialization (He et al., 2015)) preserves the norm of its input in expectation. We now derive a lower bound on the width of a ReLU layer so that it can preserve the norm of the input for a single fixed input within an error margin ε.

Lemma 2 Let v = ReLU(Ru), where u ∈ R^n, R ∈ R^{m×n}. If R_ij i.i.d.∼ N(0, 2/m), and ε ∈ [0, 1), then for any fixed vector u,

Pr(|‖v‖² − ‖u‖²| ≤ ε‖u‖²) ≥ 1 − 2 exp(−m(ε/4 + log(2/(1 + √(1 + ε)))))  (7)

The proof of this lemma involves a direct application of the Chernoff bounding technique. Now we use the above lemma to show that the norm of hidden activations equals the norm of the input within a specified margin, for a finite size dataset and a deep ReLU network.

Theorem 1 Let D be a fixed dataset with N samples and define an L layer ReLU network f_θ(·) as shown in Eq. 3 such that each weight matrix W^l ∈ R^{n_l×n_{l−1}} has its elements sampled as W^l_ij i.i.d.∼ N(0, 2/n_l) and biases b^l are set to zeros. Then for any sample (x, y) ∈ D and ε ∈ [0, 1), we have that,

Pr((1 − ε)^L ‖x‖² ≤ ‖f_θ(x)‖² ≤ (1 + ε)^L ‖x‖²) ≥ 1 − Σ_{l′=1}^{L} 2N exp(−n_{l′}(ε/4 + log(2/(1 + √(1 + ε)))))  (8)

While the statement of the above theorem only talks about the norm of the final output of the network, it equally applies to any hidden layer l as well, since the theorem can be applied equivalently to an l layer network.

Having proved the activation norm equality property in the finite dataset setting above, we now turn our attention to the infinite dataset case. 
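To make Lemma 1 and Lemma 2 concrete before moving to the infinite dataset setting, the following is a minimal numpy sketch (not part of the original paper; the dimensions, seed, and trial count are illustrative choices) that checks how well a single He-initialized ReLU layer preserves the squared norm of a fixed input:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, trials = 500, 2000, 200      # input dim, layer width, Monte Carlo trials

u = rng.standard_normal(n)         # a fixed input vector u
sq_norms = []
for _ in range(trials):
    # He-style initialization from Lemma 1: R_ij ~ N(0, 2/m)
    R = rng.normal(0.0, np.sqrt(2.0 / m), size=(m, n))
    v = np.maximum(R @ u, 0.0)     # v = ReLU(Ru)
    sq_norms.append(v @ v)

sq_norms = np.array(sq_norms)
print("||u||^2                 :", u @ u)
print("mean ||ReLU(Ru)||^2     :", sq_norms.mean())                       # ~ ||u||^2 (Lemma 1)
print("max relative distortion :", np.abs(sq_norms / (u @ u) - 1).max())  # the eps of Lemma 2
```

Increasing the width m shrinks the worst observed distortion, consistent with the exponential dependence on m in Eq. (7).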
To handle the infinite dataset case, we first prove a non-trivial result where we use lemma 2 to show how a lower bound on the width of an individual ReLU layer can be computed such that this layer preserves the norm of an infinite stream of inputs.

Lemma 3 Let X be a d ≤ n dimensional subspace of R^n and R ∈ R^{m×n}. If R_ij i.i.d.∼ N(0, 2/m), ε ∈ [0, 1), and,

m ≥ (d log(2/Δ) + log(4/δ)) / (ε/12 − log(0.5(1 + √(1 + ε/3))))  (9)

then with probability at least 1 − δ,

(1 − ε)‖u‖² ≤ ‖ReLU(Ru)‖² ≤ (1 + ε)‖u‖²  ∀ u ∈ X  (10)

where Δ := min{ε/(3√d), √(ε/3)/d}.

Proof Sketch: The core idea behind the proof is inspired by lemma 10 of Sarlos (2006). Without any loss of generality, we show the norm preserving property for any unit vector u in the d dimensional subspace X of R^n. This is because for any arbitrary length vector u, ‖ReLU(Ru)‖ = ‖u‖ · ‖ReLU(Rû)‖. The idea then is to define a grid of finite points over X on [−1, 1]^d with interval size depending on ε, such that every unit vector û in X is close enough to one of the grid points. Then, if we choose the width of the layer to be large enough to approximately preserve the length of the finite number of grid points, we can guarantee that the length of any arbitrary unit vector approximately remains preserved as well, within the derived margin of error. The formal proof can be found in the appendix.

We now extend the above lemma to a deep ReLU network and show our main result for the forward pass: the norm of the hidden activations equals the norm of the input within some distortion margin, for an infinite stream of input data, for a sufficiently large (but finite) width deep ReLU network.

Theorem 2 Define an L layer ReLU network f_θ(·) as shown in Eq. 3 such that each weight matrix W^l ∈ R^{n_l×n_{l−1}} has its elements sampled as W^l_ij i.i.d.∼ N(0, 2/n_l) and biases b^l are set to zeros. Let X be a d ≤ n dimensional subspace of R^n. If ε ∈ [0, 1), and,

n_l ≥ (d log(2/Δ) + log(4L/δ)) / (ε/12 − log(0.5(1 + √(1 + ε/3))))  ∀ l ∈ [L]  (11)

then with probability at least 1 − δ,

(1 − ε)^l ‖x‖² ≤ ‖h^l‖² ≤ (1 + ε)^l ‖x‖²  ∀ x ∈ X, ∀ l ∈ [L]  (12)

Proof: Since the input lies on a d dimensional subspace of R^n, we apply lemma 3 to the first layer and get the guarantee that the norm of all inputs on the d dimensional subspace is preserved by this layer. Next, we show that since each layer is a linear transform followed by a pointwise ReLU non-linearity, and the input takes values in a set defined by the d dimensional subspace of R^n, the output of the first layer will take values in a set that is strictly a subset of a d dimensional subspace. To see this, let B ∈ R^{n×d} denote a matrix with orthonormal columns describing the basis of the subspace X on which the input lies, and let z ∈ R^d. Then we have that,

x ∈ {Bz | z ∈ R^d}  (13)

The first layer transforms any input x as

h^1 = ReLU(W^1 x)  (14)
    = ReLU(W^1 Bz)  (15)

Denote B′ = W^1 B. Then note that rank(B′) ≤ d. Let S_1 denote the set of values that h^1 can take. Then we have that,

S_1 = {B′z | z ∈ R^d} ∩ R^{n_1}_+  (16)

where R^{n_1}_+ denotes the subset of R^{n_1} where all dimensions take non-negative values. This shows that h^1 takes values in a set that is strictly a subset of a d dimensional subspace of R^{n_1}.

Having proved this for the first layer, we can recursively apply this strategy to all higher layers, since the output of each layer lies on a subset of a subspace. Notice that while doing so, the lower bound on the width of each layer depends only on the subspace dimensionality d. 
Applying union bound over the result of lemma 3 for the L layers proves the claim.

We note that the lower bound on width derived above depends on two quantities: the depth of the network L, and the dimensionality d of the subspace on which the input lies. Specifically, the lower bound on the width becomes larger for larger input dimensionality d and larger network depth L, irrespective of the number of data samples, meaning that a wider network is needed as the depth of the network and/or the intrinsic input dimensionality increases." }, { "heading": "3.2 GRADIENT NORM EQUALITY", "text": "Consider any given loss function ℓ(·) and a data sample (x, y). We show in this section that the norm of the gradient for the parameter W^l of the l-th layer depends only on the input and output. Specifically, for a wide enough network, the following holds at initialization for all l ∈ {1, 2, . . . , L} and ∀ x ∈ D,

‖∂ℓ(f_θ(x), y)/∂W^l‖_F ≈ ‖δ(x, y)‖₂ · ‖x‖₂  ∀ l  (17)

As a first step, we note that the gradient for a parameter W^l for a sample (x, y) is given by,

∂ℓ(f_θ(x), y)/∂W^l = diag(∂ℓ(f_θ(x), y)/∂a^l) · M_{n_l}(h^{l−1})  (18)

where M_{n_l}(h^{l−1}) is a matrix of size n_l × n_{l−1} such that each row is the vector h^{l−1}. Therefore, a simple algebraic manipulation shows that,

‖∂ℓ(f_θ(x), y)/∂W^l‖_F = ‖∂ℓ(f_θ(x), y)/∂a^l‖₂ · ‖h^{l−1}‖₂  (19)

In the previous section, we showed that for a sufficiently wide network, ‖h^l‖² ≈ ‖x‖² ∀ l with high probability. To show that gradient norms of parameters are preserved in the sense shown in Eq. (17), we essentially show that ‖∂ℓ(f_θ(x), y)/∂a^l‖₂ ≈ ‖δ(x, y)‖₂ ∀ l with high probability for sufficiently wide networks.

Note that ‖∂ℓ(f_θ(x), y)/∂a^L‖₂ = ‖δ(x, y)‖₂ by definition. To show the norm is preserved for all layers, we begin by noting that,

∂ℓ(f_θ(x), y)/∂a^l = (∂h^l/∂a^l)((∂a^{l+1}/∂h^l)^T ∂ℓ(f_θ(x), y)/∂a^{l+1}) = 1(a^l) ⊙ (W^{l+1,T} ∂ℓ(f_θ(x), y)/∂a^{l+1})  (20)

where ⊙ is the point-wise product (or Hadamard product) and 1(·) is the Heaviside step function. The following proposition shows that 1(·) follows a Bernoulli distribution w.r.t. the weights, given any fixed input at the previous layer.

Proposition 1 If network weights are sampled i.i.d. from a Gaussian distribution with mean 0 and biases are 0 at initialization, then conditioned on h^{l−1}, each dimension of 1(a^l) follows an i.i.d. Bernoulli distribution with probability 0.5 at initialization.

Given this property of 1(a^l), we show below that a transformation of the type shown in Eq. (20) is norm preserving in expectation.

Lemma 4 Let v = (Ru) ⊙ z, where u ∈ R^n, R ∈ R^{m×n} and z ∈ R^m. If R_ij i.i.d.∼ N(0, 1/(pm)) and z_i i.i.d.∼ Bernoulli(p), then for any fixed vector u, E[‖v‖²] = ‖u‖².

The proof of this lemma involves analytically computing the expectation of the vector norm, by exploiting the fact that each dimension of v is a sum of Gaussian random variables multiplied by an independent Bernoulli random variable. This lemma reveals the variance of the 0 mean Gaussian distribution from which the weights must be sampled in order for the vector norm to be preserved in expectation. Since 1(a^l) is sampled from a 0.5 probability Bernoulli, we have that the weights must be sampled from a Gaussian with variance 2/m. We now show this property holds for a finite width network.

Lemma 5 Let v = (Ru) ⊙ z, where u ∈ R^n, z ∈ R^m, and R ∈ R^{m×n}. 
If R_ij i.i.d.∼ N(0, 1/(0.5m)), z_i i.i.d.∼ Bernoulli(0.5) and ε ∈ [0, 1), then for any fixed vector u,

Pr(|‖v‖² − ‖u‖²| ≤ ε‖u‖²) ≥ 1 − 2 exp(−m(ε/4 + log(2/(1 + √(1 + ε)))))  (21)

The proof of this lemma involves a direct application of the Chernoff bounding technique. Having shown that a finite width ReLU layer can preserve the gradient norm, we now note that we need to apply this result to Eq. (20). In this case, we must substitute the matrix R in the above lemma with the network's weight matrix W^{l+1,T}. In the previous subsection, we showed that each element of the matrix W^{l+1} must be sampled from N(0, 2/n_{l+1}) in order for the norm of the input vector to be preserved. However, in order for the Jacobian norm to be preserved, we require W^{l+1} to be sampled from N(0, 2/n_l) as per the above lemma. This suggests that if we want the norms to be preserved in the forward and backward pass for a single layer simultaneously, it is beneficial for the width of the network to be close to uniform. The reason we want them to hold simultaneously is that, as shown in Eq. (19), in order for the parameter gradient norm to be the same for all layers, we need the norm of both the Jacobian ‖∂ℓ(f_θ(x), y)/∂a^l‖₂ as well as the hidden activation ‖h^{l−1}‖₂ to be preserved throughout the hidden layers. Therefore, assuming the network has a uniform width, we now prove that in deep ReLU networks with He initialization, the norm of the weight gradient for each layer is simply a product of the norm of the input and the norm of the error at the output.

Theorem 3¹ Let D be a fixed dataset with N samples and define an L layer ReLU network as shown in Eq. 3 such that each weight matrix W^l ∈ R^{n×n} has its elements sampled as W^l_ij i.i.d.∼ N(0, 2/n) and biases b^l are set to zeros. Then for any sample (x, y) ∈ D, ε ∈ [0, 1), and for all l ∈ {1, 2, . . . , L}, with probability at least,

1 − 4NL exp(−n(ε/4 + log(2/(1 + √(1 + ε)))))  (22)

the following hold true,

(1 − ε)^L ‖x‖² · ‖δ(x, y)‖² ≤ ‖∂ℓ(f_θ(x), y)/∂W^l‖²_F ≤ (1 + ε)^L ‖x‖² · ‖δ(x, y)‖²  (23)

¹Similar to He et al. (2016), we have assumed that ∂ℓ(f_θ(x), y)/∂a^{l+1} is independent from 1(a^l) and W^{l+1} at initialization.

and

(1 − ε)^l ‖x‖² ≤ ‖h^l‖² ≤ (1 + ε)^l ‖x‖²  (24)

We note that even though the theorem relies on the specified independence assumption, similar to He et al. (2016), we show that our predictions hold in practice in the next section." }, { "heading": "4 EMPIRICAL VERIFICATION", "text": "" }, { "heading": "4.1 NORM PRESERVATION OF ACTIVATION AND GRADIENTS", "text": "In this section, we verify that the hidden activations have the same norm as the input norm, ‖h^i‖₂/‖x‖₂ ≈ 1 (Eq. 6), and that the parameter gradient norm approximately equals the product of the input norm and the output error norm, ‖∂ℓ(f_θ(x), y)/∂W^i‖_F / (‖δ(x, y)‖₂ · ‖x‖₂) ≈ 1 (Eq. 17), for all layer indices i for sufficiently wide deep ReLU networks. For this experiment we choose a 10 layer network with 2000 randomly generated input samples in R^500, randomly generated target labels in R^20, and cross-entropy loss. We add a linear layer along with a softmax activation to the ReLU network's outputs to make the final output lie in R^20. We use network widths from the set {100, 500, 2000, 4060}. We show results both for He initialization (He et al., 2015), which we theoretically show is optimal, as well as for Glorot initialization (Glorot & Bengio, 2010), which is not optimal for deep ReLU nets. As can be seen in figure 1 (left), the mean ratio of hidden activation norm to the input norm over the dataset is roughly 1 with a small standard deviation for He initialization. 
This approximation becomes better with larger width. On the other hand, Glorot initialization fails at preserving the activation norm for deep ReLU nets. A similar result can be seen for parameter gradient norms (figure 1 (right)). In the figure we denote ∂ℓ(f_θ(x), y)/∂W^i by ∂W^i. Here we find for He initialization that the norm of the weight gradient for each layer is roughly equal to the product of the norm of the input and the norm of the error at the output, and this approximation becomes stronger for wider networks. Once again, Glorot initialization does not have this property." }, { "heading": "4.2 TIGHTNESS OF BOUND", "text": "In the following experiment we verify the tightness of the bound in lemma 2 (for the forward pass) and lemma 5 (for the backward pass). To do so, we vary the network width of a one hidden layer ReLU transformation from 500 to 4000, and feed 2000 randomly sampled inputs x through it. For each sample we measure the distortion defined as,

ε := |1 − ‖h‖/‖x‖|  (25)

for the forward pass, and,

ε := |1 − ‖∂ℓ(f_θ(x), y)/∂W^i‖_F / (‖δ(x, y)‖₂ · ‖x‖₂)|  (26)

for the backward pass. Here h is the output of the one hidden layer ReLU transformation. We compute the mean value of ε for the 2000 examples and plot it against the network width used. We call this the empirical estimate. We simultaneously plot the values of ε predicted by lemma 2 and lemma 5 for failure probability δ = 0.05. We call this the theoretical value. The plot for the forward pass is shown in figure 2 (left). As can be seen, our lower bound on width is an over-estimation but becomes tighter for smaller values of ε. A similar result can be seen for lemma 5 in figure 2 (right). Thus our proposed bounds can be improved, and we leave that as future work." }, { "heading": "4.3 EFFECT OF NON-UNIFORMITY OF WIDTH ON GRADIENT NORM EQUALITY", "text": "As discussed in section 3.2, the gradient norm equality property holds more accurately when deep networks have a more uniform width throughout the layers. To verify this, we construct a 20 layer deep ReLU network such that the width of each layer is determined by independently sampling uniformly between 1000 − v and 1000 + v, where v denotes the amount of width variation chosen for a particular experiment. Once the network architecture is fixed, we initialize the weights with He initialization. We then generate 1000 pairs of input samples and output errors, similar to the process described in section 4.1, and compute the ratio ‖∂ℓ(f_θ(x), y)/∂W^i‖_F / (‖δ(x, y)‖₂ · ‖x‖₂). The mean and standard deviation of this value across samples are shown in figure 3 for v ∈ {1, 200, 500}. It can be seen that the ratio is closer to 1 with smaller variance when the width variation v is small, thus verifying our theoretical prediction." }, { "heading": "5 CONCLUSION", "text": "We derived novel properties that are possessed by deep ReLU networks initialized with He initialization. Specifically, we show that the norm of hidden activations and the norm of weight gradients are a function of the norm of the input data and the error at the output. While deriving these properties, we relaxed most of the assumptions (such as those on the input distribution and the width of the network) made by previous work that studies weight initialization in deep ReLU networks. Thus our work establishes that He initialization optimally preserves the flow of information in the forward and backward directions in a stronger setting, and uncovers novel properties." }, { "heading": "A PROOFS", "text": "A.1 PROOFS FOR FORWARD PASS

Lemma 1 Let v = ReLU(Ru), where u ∈ R^n and R ∈ R^{m×n}. 
If Rij i.i.d.∼ N (0, 2m ), then for any fixed vector u, E[‖v‖2] = ‖u‖2. Proof: Define ai = RTi u, where Ri denotes the ith row of R. Since each element Rij is an independent sample from Gaussian distribution, each ai is essentially a weighted sum of these independent random variables. Thus, each ai ∼ N ( 0, 2m‖u‖ 2 )\nand independent from one another. Thus each element vi = ReLU(ai) ∼ NR ( 0, 2m‖u‖ 2 )\nwhere NR denotes the rectified Normal distribution. Our goal is to compute,\nE[‖v‖2] = E[ m∑ i=1 v2i ] (27)\n= mE[v2i ] (28)\nFrom the definition of vi,\nE[vi] = 1 2 · 0 + 1 2 E[Z] (29)\nwhere Z follows a half-Normal distribution corresponding to the Normal distributionN ( 0, 2m‖u‖ 2 ) .\nThus E[Z] = √\n2‖u‖2 m · √ 2 π = 2 √ ‖u‖2 mπ . Similarly,\nE[v2i ] = 0.5E[Z2] (30) = 0.5(var(Z) + E[Z]2) (31)\nSince var(Z) = 2m‖u‖ 2(1− 2π ), we get,\nE[v2i ] = 0.5\n( 2\nm ‖u‖2(1− 2 π ) + (2 √ ‖u‖2 mπ )2 ) (32)\n= ‖u‖2\nm (33)\nThus,\nmE[v2i ] = ‖u‖2 (34)\nwhich proves the claim.\nLemma 2 Let v = ReLU (Ru), where u ∈ Rn, R ∈ Rm×n. If Rij i.i.d.∼ N (0, 2m ), and ∈ [0, 1), then for any fixed vector u,\nPr ( |‖v‖2 − ‖u‖2| ≤ ‖u‖2 ) ≥ 1− 2 exp ( −m (\n4 + log\n2\n1 + √ 1 +\n)) (35)\nProof: Define ṽ = √\n0.5m ‖u‖ v. Then we have that each element ṽi ∼ N R (0, 1) and independent from one another since vi = ReLU(ai) ∼ NR ( 0, 2m‖u‖ 2 )\nwhere NR denotes the rectified Normal distribution. Thus to bound the probability of failure for the R.H.S.,\nPr ( ‖v‖2 ≥ (1 + )‖u‖2 ) = Pr\n( ‖u‖2\n0.5m ‖ṽ‖2 ≥ (1 + )‖u‖2\n) (36)\n= Pr ( ‖ṽ‖2 ≥ 0.5m(1 + ) ) (37)\nUsing Chernoff’s bound, we get for any λ > 0, Pr ( ‖ṽ‖2 ≥ 0.5m(1 + ) ) = Pr ( exp(λ‖ṽ‖2) ≥ exp(λ0.5m(1 + )) ) (38)\n≤ E[exp(λ‖ṽ‖ 2)]\nexp(0.5mλ(1 + )) (39)\n= E[exp(\n∑m i=1 λṽi 2)]\nexp(0.5mλ(1 + )) (40)\n= Πmi=1E[exp(λṽi 2)]\nexp(0.5mλ(1 + )) (41)\n=\n( E[exp(λṽi2)]\nexp(0.5λ(1 + ))\n)m (42)\nDenote p(ṽi) as the probability distribution of the rectified Normal random variable ṽi. Then, E[exp(λṽi2)] = ∫ ∞ −∞ exp(λṽi 2)p(ṽi) (43)\nWe know that the mass at vi = 0 is 0.5 and the density between vi = 0 and vi = ∞ follows the Normal distribution. Thus,\nE[exp(λṽi2)] = 0.5 exp(0) + 1√ 2π ∫ ∞ 0 exp(λṽi 2 − ṽi2/2) (44)\n= 0.5 + 1 2 √ (1− 2λ) √ 2√ π/(1− 2λ) ∫ ∞ 0 exp(− ṽi 2 2 (1− 2λ)) (45)\nNote that ∫∞\n0\n√ 2√\nπ/(1−2λ)\n∫∞ 0 exp(− ṽi 2\n2 (1− 2λ)) is the integral of a half Normal distribution corresponding to the Normal distribution N (0, 1/(1− 2λ)). Thus,\nE[exp(λṽi2)] = 0.5 + 1 2 √ (1− 2λ) (46)\nHence, we get,\nPr ( ‖ṽ‖2 ≥ 0.5m(1 + ) ) ≤ ( 0.5 ( 1 +\n1√ (1− 2λ)\n) exp(−0.5λ(1 + )) )m (47)\nThe above failure probability can be bounded to be smaller by finding an appropriate value of λ. We find that λ ≈ 0.5 1+ approximately minimizes the above bound. Substituting this value of λ above, we get,\nPr ( ‖ṽ‖2 ≥ 0.5m(1 + ) ) ≤ ( 0.5 ( 1 + √ 1 + )\nexp(− 4\n) )m\n(48)\n= exp ( −m (\n4 + log\n2\n1 + √ 1 +\n)) (49)\nThus,\nPr ( ‖v‖2 ≤ (1 + )‖u‖2 ) ≥ 1− exp ( −m (\n4 + log\n2\n1 + √ 1 +\n)) (50)\nSimilarly, to prove the L.H.S. 
by bounding the probability of failure from the other side, Pr ( ‖v‖2 ≤ (1− )‖u‖2 ) = Pr ( −‖v‖2 ≥ −(1− )‖u‖2 ) (51)\n= Pr ( −‖u‖ 2\n0.5m ‖ṽ‖2 ≥ −(1− )‖u‖2\n) (52)\n= Pr ( −‖ṽ‖2 ≥ −0.5m(1− ) ) (53)\nUsing Chernoff’s bound, we get for any λ > 0, Pr ( −‖ṽ‖2 ≥ −0.5m(1− ) ) = Pr ( exp(−λ‖ṽ‖2) ≥ exp(−λ0.5m(1− )) ) (54)\n≤ E[exp(−λ‖ṽ‖ 2)]\nexp(−0.5mλ(1− )) (55)\n= E[exp(−\n∑m i=1 λṽi 2)]\nexp(−0.5mλ(1− )) (56)\n= Πmi=1E[exp(−λṽi2)] exp(−0.5mλ(1− ))\n(57)\n=\n( E[exp(−λṽi2)]\nexp(−0.5λ(1− ))\n)m (58)\nPerforming computations similar to those above to compute the expectation term, we get,\nE[exp(−λṽi2)] = 0.5 + 1 2 √ (1 + 2λ) (59)\nHence, we get,\nPr ( ‖ṽ‖2 ≤ 0.5m(1− ) ) ≤ ( 0.5 ( 1 +\n1√ (1 + 2λ)\n) exp(0.5λ(1− )) )m (60)\nSimilar to the R.H.S. case, we find that λ ≈ 0.5 1− approximately minimizes the failure probability,\nPr ( ‖ṽ‖2 ≤ 0.5m(1− ) ) ≤ ( 0.5 ( 1 + √ 1− ) exp( 4 ) )m\n(61)\n= exp ( m (\n4 − log 2 1 + √ 1−\n)) (62)\nIt can be shown that,\nexp ( m (\n4 − log 2 1 + √ 1−\n)) ≤ exp ( −m (\n4 + log\n2\n1 + √ 1 +\n)) (63)\nThus,\nPr ( ‖v‖2 ≥ (1− )‖u‖2 ) ≥ 1− exp ( −m (\n4 + log\n2\n1 + √ 1 +\n)) (64)\nUsing union bound, Eq. (50) and (64) hold together with probability,\nPr ( (1− )‖u‖2 ≤ ‖v‖2 ≤ (1 + )‖u‖2 ) ≥ 1− 2 exp ( −m (\n4 + log\n2\n1 + √ 1 +\n)) (65)\nThis proves the claim.\nTheorem 1 LetD be a fixed dataset with N samples and define a L layer ReLU network as shown in Eq. 3 such that each weight matrix Wl ∈ Rnl×nl−1 has its elements sampled as W lij\ni.i.d.∼ N (0, 2nl ) and biases bl are set to zeros. Then for any sample (x,y) ∈ D and ∈ [0, 1), we have that, Pr ( (1− )L‖x‖2 ≤ ‖fθ(x)‖2 ≤ (1 + )L‖x‖2 ) ≥ 1−\nL∑ l′=1 2N exp ( −nl′ ( 4 + log\n2\n1 + √ 1 + )) (66)\nProof: When feed-forwarding a fixed input through the layers of a deep ReLU network, each hidden layer’s activation corresponding to the given input is also fixed because the network is\ndeterministic. Thus applying lemma 2, on each layer’s transformation, the following holds true for each l ∈ {1, 2, · · ·L},\nPr ( (1− )‖hl−1‖2 ≤ ‖hl‖2 ≤ (1 + )‖hl−1‖2 ) ≥ 1− 2 exp ( −nl (\n4 + log\n2\n1 + √ 1 + )) (67)\nThus, using union bound, we have the lengths of all the layers until layer l are simultaneously preserved with probability at least,\n1− l∑\nl′=1\n2 exp ( −nl′ (\n4 + log\n2\n1 + √ 1 +\n)) (68)\nApplying union bound again, all the lengths until layer l are preserved simultaneously for N inputs with probability,\n1− l∑\nl′=1\n2N exp ( −nl′ (\n4 + log\n2\n1 + √ 1 +\n)) (69)\nFinally, we note that the following hold true with the above probability,\n(1− )‖x‖2 ≤ ‖h1‖2 ≤ (1 + )‖x‖2 (70) (1− )‖h1‖2 ≤ ‖h2‖2 ≤ (1 + )‖h1‖2 (71)\nSubstituting ‖h1‖2 ≤ (1 + )‖x‖2 in the R.H.S. of the last equation, and (1− )‖x‖2 ≤ ‖h1‖2 in the L.H.S. of the last equation, we get,\n(1− )2‖x‖2 ≤ ‖h2‖2 ≤ (1 + )2‖x‖2 (72)\nPerforming substitutions for higher layers similarly yields the claim.\nLemma 3 Let X be a d ≤ n dimensional subspace of Rn and R ∈ Rm×n. If Rij i.i.d.∼ N (0, 2m ),\n∈ [0, 1), and,\nm ≥ 1 /12− log(0.5(1 + √ 1 + /3))\n· ( d log 2\n∆ + log\n4\nδ\n) (73)\nthen for all vectors u ∈ X , with probability at least 1− δ,∣∣∣‖ReLU(Ru)‖ − ‖u‖∣∣∣ ≤ ‖u‖ (74) where ∆ := min{\n3 √ d , √ √ 3d }.\nProof: The core idea behind the proof is inspired by lemma 10 of Sarlos (2006). Without any loss of generality, we will show the norm preserving property for any unit vector u in the d dimensional subspace X of Rn. This is because for any arbitrary length vector u, ‖ReLU(Ru)‖ = ‖u‖ · ‖ReLU(Rû)‖. 
The idea then is to define a grid of finite points over X such that every unit vector û in X is close enough to one of the grid points. Then, if we choose the width of the layer to be large enough to approximately preserve the length of the finite number of grid points, we essentially guarantee that the length of any arbitrary vector approximately remains preserved. To this end, we define a grid G on [−1, 1]d with interval of size ∆ := min{ / √ d, √ /d}. Note the\nnumber of points on this grid is (\n2 ∆\n)d . Also, let column vectors of B ∈ Rn×d be the orthonormal\nbasis of X . We now prove the R.H.S. of the bound in the claim. If we consider any unit vector û in X , we can find a point g on the grid G such that ‖g‖ ≤ 1, and it is closest to û in `2 norm, and define r′ := û− g. Thus the vector û can essentially be decomposed as,\nû = g + r′ (75)\nAlso note that since r′ lies in the span of X , we can represent r′ := Br for some vector r.\nNow consider the norm of the vector û after the ReLU transformation give by ‖ReLU(Rû)‖. Then we have,\n‖ReLU(Rû)‖ = ‖ReLU(R(g + r′))‖ (76) ≤ ‖ReLU(Rg) +ReLU(Rr′))‖ (77) ≤ ‖ReLU(Rg)‖+ ‖ReLU(Rr′))‖ (78) ≤ ‖ReLU(Rg)‖+ ‖Rr′‖ (79)\nSimilarly, we have,\n‖ReLU(Rg)‖ = ‖ReLU(R(g + û− û))‖ (80) ≤ ‖ReLU(Rû) +ReLU(R(g − û)))‖ (81) ≤ ‖ReLU(Rû)‖+ ‖ReLU(−Rr′))‖ (82) ≤ ‖ReLU(Rû)‖+ ‖Rr′‖ (83)\nTherefore,\n‖ReLU(Rg)‖ − ‖Rr′‖ ≤ ‖ReLU(Rû)‖ ≤ ‖ReLU(Rg)‖+ ‖Rr′‖ (84)\nApplying union bound on all the points in G, from lemma 2, we know that with probability at least 1− ( 2 ∆ )d exp ( −m ( 4 + log 2 1+ √ 1+ )) ,\n‖ReLU(Rg)‖2 ≤ (1 + )‖g‖2\n≤ 1 + (85) ≤ (1 + )2 (86)\nThis can be substituted in the R.H.S. of Eq. (84). Now we only need to upper bound ‖Rr′‖. To this end, we rewrite ‖Rr′‖ = ‖RBr‖. Then,\n‖RBr‖2 = d∑ i=1 d∑ j=1 < RBiri,RBjrj > (87)\n≤ 2 d∑ i=1 d∑ j=1 |ri| · |rj |· < 1√ 2 RBi, 1√ 2 RBj > (88)\nNote that 1√ 2 R is a matrix whose entries are sampled from N (0, 1). Invoking lemma 6 on the d2 terms in the above sum, we have that with probability at least 1− 2d2 exp ( −m4 ( 2 − 3 )) ,\n2 d∑ i=1 d∑ j=1 |ri| · |rj |· < 1√ 2 RBi, 1√ 2 RBj > ≤ 2 d∑ i=1 d∑ j=1 |ri| · |rj | · (< Bi,Bj > + ) (89)\n= 2 d∑ i=1 r2i ‖Bi‖2 + 2 d∑ i=1 d∑ j=1 |ri| · |rj | · (90)\n= 2‖r‖2 + 2 ‖r‖21 (91)\nSince r′, and hence r is a point inside one of the grid cells containing the origin, its length can be at most the length of the main diagonal of the grid cell. Formally, ‖r‖ ≤ √ d∆ ≤ , and ‖r‖1 ≤ d∆ ≤ √ . Subsituting these inequalities in the above equations, we get,\n‖RBr‖2 ≤ 4 2 (92)\nLooking back at the R.H.S. of Eq. (84), we have that with probability at least 1 −( 2 ∆ )d exp ( −m ( 4 + log 2 1+ √ 1+ )) − 2d2 exp ( −m4 ( 2 − 3 )) ,\n‖ReLU(Rû)‖ ≤ 1 + + 2 (93) = 1 + 3 (94)\nTo prove the L.H.S. of the claimed bound, we can similarly find a point g on the grid G such that ‖g‖ ≥ 1, and it is closest to û in `2 norm, and define r′ := û− g. Then invoking lemma 2, we know that with probability at least 1− ( 2 ∆ )d exp ( −m ( 4 + log 2 1+ √ 1+ )) ,\n‖ReLU(Rg)‖2 ≥ (1− )‖g‖2\n≥ 1− (95) ≥ (1− )2 (96)\nThis can be substituted in the L.H.S. of Eq. (84). 
We then substitute the previously computed upper bound of ‖RBr‖2 once again and have that with probability at least 1 − 2 (\n2 ∆\n)d exp ( −m ( 4 + log 2 1+ √ 1+ )) − 2d2 exp ( −m4 ( 2 − 3 )) ,\n1− 3 ≤ ‖ReLU(Rû)‖ ≤ 1 + 3 (97)\nScaling û arbitrarily, we equivalently have,\n(1− 3 )‖u‖ ≤ ‖ReLU(Ru)‖ ≤ (1 + 3 )‖u‖ (98)\nFinally, since,( 2\n∆\n)d exp ( −m (\n4 + log\n2\n1 + √ 1 +\n)) ≥ d2 exp ( −m\n4\n( 2 − 3 )) (99)\nWe can further lower bound the success probability of Eq. (98) for mathematical ease as,\n1− 4 ( 2\n∆\n)d exp ( −m (\n4 + log\n2\n1 + √ 1 +\n)) (100)\nTherefore to guarantee a success probability of at least 1− δ, we bound,\n1− 4 ( 2\n∆\n)d exp ( −m (\n4 + log\n2\n1 + √ 1 +\n)) ≥ 1− δ (101)\nRearranging the terms in the equality to get a lower bound on m and rescaling proves the claim.\nA.2 PROOFS FOR BACKWARD PASS\nProposition 1 If network weights are sampled i.i.d. from a Gaussian distribution with mean 0 and biases are 0 at initialization, then conditioned on hl−1, each dimension of 1(al) follows an i.i.d. Bernoulli distribution with probability 0.5 at initialization.\nProof: Note that al := Wlhl−1 at initialization (biases are 0) and Wl are sampled i.i.d. from a random distribution with mean 0. Therefore, each dimension ali is simply a weighted sum of i.i.d. zero mean Gaussian, which is also a 0 mean Gaussian random variable.\nTo prove the claim, note that the indicator operator applied on a random variable with 0 mean and symmetric distribution will have equal probability mass on both sides of 0, which is the same as a Bernoulli distributed random variable with probability 0.5. Finally, each dimension of al is i.i.d. simply because all the elements of Wl are sampled i.i.d., and hence each dimension of al is a weighted sum of a different set of i.i.d. random variables.\nLemma 4 Let v = (Ru) z, where u ∈ Rn, R ∈ Rm×n and z ∈ Rm. If Rij i.i.d.∼ N (0, 1pm ) and zi i.i.d.∼ Bernoulli(p), then for any fixed vector u, E[‖v‖2] = ‖u‖2.\nProof: Define ai = RTi u, where Ri denotes the ith row of R. Since each element Rij is an independent sample from Gaussian distribution, each ai is essentially a weighted sum of these independent random variables. Thus, each ai ∼ N ( 0, 1pm‖u‖ 2 ) and independent from one another.\nOur goal is to compute,\nE[‖v‖2] = m∑ i=1 E[(aizi)2] (102)\n= m∑ i=1 E[a2i ]E[z2i ] (103)\n= mE[a2i ]E[z2i ] (104) = mp(var(ai) + E[ai]2) (105) = ‖u‖2 (106)\nwhich proves the claim.\nLemma 5 Let v = (Ru) z, where u ∈ Rn, z ∈ Rm, and R ∈ Rm×n. If Rij i.i.d.∼ N (0, 10.5m ), zi i.i.d.∼ Bernoulli(0.5) and ∈ [0, 1), then for any fixed vector u,\nPr ( |‖v‖2 − ‖u‖2| ≤ ‖u‖2 ) ≥ 1− 2 exp ( −m (\n4 + log\n2\n1 + √ 1 +\n)) (107)\nProof: Define ai = RTi u, where Ri denotes the ith row of R. Then, each ai ∼ N ( 0, 10.5m‖u‖ 2 ) and independent from one another. Define ã = √\n0.5m ‖u‖ a. Then we have that each element ãi ∼ N (0, 1).\nDefine ṽ such that ṽi = ãizi. Thus to bound the probability of failure for the R.H.S., Pr ( ‖v‖2 ≥ (1 + )‖u‖2 ) = Pr\n( ‖u‖2\n0.5m ‖ṽ‖2 ≥ (1 + )‖u‖2\n) (108)\n= Pr ( ‖ṽ‖2 ≥ 0.5m(1 + ) ) (109)\nUsing Chernoff’s bound, we get for any λ > 0, Pr ( ‖ṽ‖2 ≥ 0.5m(1 + ) ) = Pr ( exp(λ‖ṽ‖2) ≥ exp(λ0.5m(1 + )) ) (110)\n≤ E[exp(λ‖ṽ‖ 2)]\nexp(0.5mλ(1 + )) (111)\n= E[exp(\n∑m i=1 λṽi 2)]\nexp(0.5mλ(1 + )) (112)\n= Πmi=1E[exp(λṽi 2)]\nexp(0.5mλ(1 + )) (113)\n=\n( E[exp(λṽi2)]\nexp(0.5λ(1 + ))\n)m (114)\nDenote p(ãi) and p(zi) as the probability distribution of the random variables ãi and zi respectively. 
Then,\nE[exp(λṽi2)] = ∑ zi p(zi) ∫ ãi p(ãi) exp(λãi 2z2i ) (115)\nSubstituting p(ãi) with a standard Normal distribution, we get, E[exp(λṽi2)] = ∑ zi p(zi) ∫ ãi 1√ 2π exp(λãi 2z2i − ãi 2 2 ) (116)\n= ∑ zi p(zi) ∫ ãi 1√ 2π exp(− ãi 2 2 (1− 2λz2i )) (117)\n= ∑ zi p(zi) ∫ ãi 1√ 2π · √ 1− 2λz2i√ 1− 2λz2i exp(− ãi 2 2 (1− 2λz2i )) (118)\n= ∑ zi p(zi) · 1√ 1− 2λz2i (119)\nwhere the last equality holds because the integral of a Gaussian distribution over its domain is 1. Finally, summing over the Bernoulli random variable zi, we get,\nE[exp(λṽi2)] = (1− 0.5) + 1√\n1− 2λ (120)\nHence, we get,\nPr ( ‖ṽ‖2 ≥ 0.5m(1 + ) ) ≤ ( 0.5 ( 1 +\n0.5√ (1− 2λ)\n) exp(−0.5λ(1 + )) )m (121)\n≤ ( 0.5 ( 1 +\n1√ (1− 2λ)\n) exp(−0.5λ(1 + )) )m (122)\nWe find that the above inequality is identical to that in Eq. (47). Thus λ ≈ 0.5 1+ approximately minimizes the above bound as before. Substituting this value of λ above, we get,\nPr ( ‖ṽ‖2 ≥ 0.5m(1 + ) ) ≤ ( 0.5 ( 1 + √ 1 + )\nexp(− 4\n) )m\n(123)\n= exp ( −m (\n4 + log\n2\n1 + √ 1 +\n)) (124)\nThus,\nPr ( ‖v‖2 ≤ (1 + )‖u‖2 ) ≥ 1− exp ( −m (\n4 + log\n2\n1 + √ 1 +\n)) (125)\nSimilarly, to prove the L.H.S. by bounding the probability of failure from the other side, Pr ( ‖v‖2 ≤ (1− )‖u‖2 ) = Pr ( −‖v‖2 ≥ −(1− )‖u‖2 ) (126)\n= Pr ( −‖u‖ 2\n0.5m ‖ṽ‖2 ≥ −(1− )‖u‖2\n) (127)\n= Pr ( −‖ṽ‖2 ≥ −0.5m(1− ) ) (128)\nUsing Chernoff’s bound, we get for any λ > 0, Pr ( −‖ṽ‖2 ≥ −0.5m(1− ) ) = Pr ( exp(−λ‖ṽ‖2) ≥ exp(−λ0.5m(1− )) ) (129)\n≤ E[exp(−λ‖ṽ‖ 2)]\nexp(−0.5mλ(1− )) (130)\n= E[exp(−\n∑m i=1 λṽi 2)]\nexp(−0.5mλ(1− )) (131)\n= Πmi=1E[exp(−λṽi2)] exp(−0.5mλ(1− ))\n(132)\n=\n( E[exp(−λṽi2)]\nexp(−0.5λ(1− ))\n)m (133)\nPerforming computations similar to those above to compute the expectation term, we get,\nE[exp(−λṽi2)] = 0.5 + 1√\n(1 + 2λ) (134)\nHence, we get,\nPr ( ‖ṽ‖2 ≤ 0.5m(1− ) ) ≤ ( 0.5 ( 1 +\n0.5√ (1 + 2λ)\n) exp(0.5λ(1− )) )m (135)\n≤ ( 0.5 ( 1 +\n1√ (1 + 2λ)\n) exp(0.5λ(1− )) )m (136)\nSimilar to the R.H.S. case, we find that λ ≈ 0.5 1− approximately minimizes the failure probability,\nPr ( ‖ṽ‖2 ≤ 0.5m(1− ) ) ≤ ( 0.5 ( 1 + √ 1− ) exp( 4 ) )m\n(137)\n= exp ( m (\n4 − log 2 1 + √ 1−\n)) (138)\nIt can be shown that,\nexp ( m (\n4 − log 2 1 + √ 1−\n)) ≤ exp ( −m (\n4 + log\n2\n1 + √ 1 +\n)) (139)\nThus,\nPr ( ‖v‖2 ≥ (1− )‖u‖2 ) ≥ 1− exp ( −m (\n4 + log\n2\n1 + √ 1 +\n)) (140)\nUsing union bound, Eq. (125) and (140) hold together with probability,\nPr ( (1− )‖u‖2 ≤ ‖v‖2 ≤ (1 + )‖u‖2 ) ≥ 1− 2 exp ( −m (\n4 + log\n2\n1 + √ 1 +\n)) (141)\nThis proves the claim.\nTheorem 2 LetD be a fixed dataset with N samples and define a L layer ReLU network as shown in Eq. 3 such that each weight matrix Wl ∈ Rn×n has its elements sampled as W lij\ni.i.d.∼ N (0, 2n ) and biases bl are set to zeros. Then for any sample (x,y) ∈ D, ∈ [0, 1), and for all l ∈ {1, 2, . . . , L} with probability at least,\n1− 4NL exp ( −n (\n4 + log\n2\n1 + √ 1 +\n)) (142)\nthe following hold true,\n(1− )L‖x‖2 · ‖δ(x,y)‖2 ≤ ‖∂`(fθ(x),y) ∂Wl ‖2 ≤ (1 + )L‖x‖2 · ‖δ(x,y)‖2 (143)\nand\n(1− )l‖x‖2 ≤ ‖hl‖2 ≤ (1 + )l‖x‖2 (144)\nProof: From theorem 1, we know that the following holds for all l,\nPr ( (1− )l‖x‖2 ≤ ‖hl‖2 ≤ (1 + )l‖x‖2 ) ≥ 1− 2NL exp ( −n (\n4 + log\n2\n1 + √ 1 + )) (145)\nOn the other hand, we have that,\n∂`(fθ(x),y)\n∂aL−1 = 1(aL−1)\n( WL T δ(x,y) )\n(146)\nFrom proposition 1, we know that each element of 1(aL−1) follows a Bernoulli distribution with probability 0.5. 
Thus applying lemma 5 to the above equation (under the assumption that δ(x,y) and 1(al) are statistically independent), the following holds for a fixed data sample (x,y),\nPr ( (1− )‖δ(x,y)‖2 ≤ ‖∂`(fθ(x),y)\n∂aL−1 ‖2 ≤ (1 + )‖δ(x,y)‖2\n) ≥ 1− 2 exp ( −n (\n4 + log\n2\n1 + √ 1 + )) (147)\nApplying union bound on N fixed samples, the following holds for all N samples,\nPr ( (1− )‖δ(x,y)‖2 ≤ ‖∂`(fθ(x),y)\n∂aL−1 ‖2 ≤ (1 + )‖δ(x,y)‖2\n) ≥ 1− 2N exp ( −n (\n4 + log\n2\n1 + √ 1 + )) (148)\nSimilarly,\nPr ( (1− )‖∂`(fθ(x),y)\n∂aL−1 ‖2 ≤ ‖∂`(fθ(x),y) ∂aL−2 ‖2 ≤ (1 + )‖∂`(fθ(x),y) ∂aL−1 ‖2 ) ≥ 1− 2N exp ( −n ( 4 + log\n2\n1 + √ 1 + )) (149)\nCombining the the above two results and applying union bound, we get,\nPr ( (1− )2‖δ(x,y)‖2 ≤ ‖∂`(fθ(x),y)\n∂aL−2 ‖2 ≤ (1 + )2‖δ(x,y)‖2\n) ≥ 1− 4N exp ( −n (\n4 + log\n2\n1 + √ 1 + )) (150)\nExtending this to all L layers, we have for all l that,\nPr ( (1− )L−l‖δ(x,y)‖2 ≤ ‖∂`(fθ(x),y)\n∂al ‖2 ≤ (1 + )L−l‖δ(x,y)‖2\n) ≥ 1− 2NL exp ( −n (\n4 + log\n2\n1 + √ 1 + )) (151)\nCombining the above result with Eq. (145) using union bound, we get for all l,\nPr ( (1− )L−1‖δ(x,y)‖2‖x‖2 ≤ ‖∂`(fθ(x),y)\n∂al ‖2‖hl−1‖2 ≤ (1 + )L−1‖δ(x,y)‖2‖x‖2 ) ≥ 1− 4NL exp ( −n (\n4 + log\n2\n1 + √ 1 + )) (152)\nSince,\n‖∂`(fθ(x),y) ∂Wl ‖2 = ‖ ∂`(fθ(x),y) ∂al ‖2 · ‖hl−1‖2 ∀l (153)\nwe have proved the claim.\nLemma 6 (Corollary 2.1 of Kakade & Shakhnarovich (2009)) Let u1,u2 ∈ Rn be any two fixed vectors such that ‖u1‖ ≤ 1 and ‖u2‖ ≤ 1, R ∈ Rm×n be a projection matrix where each element of R is drawn i.i.d. from a standard Gaussian distribution, Rij ∼ N (0, 1m ) and any ∈ (0, 1/2)\nPr (| < Ru1,Ru2 > − < u1,u2 > | ≤ ) ≥ 1− 4 exp ( −m\n4\n( 2 − 3 )) (154)\nLemma 7 Let v1 = (Ru1) z and v2 = (Ru2) z, where u1,u2 ∈ Rn, z ∈ Rm, and R ∈ Rm×n. If Rij i.i.d.∼ N (0, 10.5m ), zi i.i.d.∼ Bernoulli(0.5) and ∈ [0, 1), then for any fixed vectors u1 and u2 s.t. ‖u1‖ ≤ 1 and ‖u2‖ ≤ 1,\nPr (| < v1,v2 > − < u1,u2 > | ≤ ) ≥ 1− 4 exp ( −m (\n4 + log\n2\n1 + √ 1 +\n)) (155)\nProof: Applying lemma 5 to vectors u1 + u2 and u1 − u2, we have with probability at least 1− 4 exp ( −m ( 4 + log 2 1+ √ 1+ )) ,\n(1− ) · ‖u1 + u2‖2 ≤ ‖z Ru1 + z Ru2‖2 ≤ (1 + ) · ‖u1 + u2‖2 (156) (1− ) · ‖u1 − u2‖2 ≤ ‖z Ru1 − z Ru2‖2 ≤ (1 + ) · ‖u1 − u2‖2 (157)\nThen notice, 4 < v1,v2 > = 4 < z Ru1, z Ru2 > (158)\n= ‖z Ru1 + z Ru2‖2 − ‖z Ru1 − z Ru2‖2 (159) ≥ (1− ) · ‖u1 + u2‖2 − (1 + ) · ‖u1 − u2‖2 (160) = 4· < u1,u2 > −2 · (‖u1‖2 + ‖u2‖2) (161) ≥ 4· < u1,u2 > −4 (162)\nEquivalently, · < u1,u2 > − < v1,v2 >≤ (163)\nThe other side of the claim can be proved similarly." } ]
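As a companion to the empirical verification in section 4.1 above, here is a minimal numpy sketch (an illustration written for this text, not the authors' code; the depth, width, and the particular Glorot variant are assumptions) contrasting how He and Glorot initializations propagate the activation norm through a deep ReLU network:

```python
import numpy as np

rng = np.random.default_rng(1)
depth, width, n_in = 10, 2000, 500   # illustrative sizes

def activation_norm_ratios(init):
    """Ratio ||h^l|| / ||x|| after each layer of a randomly initialized ReLU net."""
    x = rng.standard_normal(n_in)
    h, fan_in, ratios = x, n_in, []
    for _ in range(depth):
        if init == "he":             # variance 2/fan-out, the scaling of Lemma 1
            std = np.sqrt(2.0 / width)
        else:                        # Glorot: variance 2/(fan-in + fan-out)
            std = np.sqrt(2.0 / (fan_in + width))
        W = rng.normal(0.0, std, size=(width, fan_in))
        h = np.maximum(W @ h, 0.0)   # h^l = ReLU(W^l h^{l-1}), zero biases
        ratios.append(np.linalg.norm(h) / np.linalg.norm(x))
        fan_in = width
    return np.round(ratios, 3)

print("He    :", activation_norm_ratios("he"))      # stays close to 1 at every depth
print("Glorot:", activation_norm_ratios("glorot"))  # shrinks layer after layer
```

Under He initialization the per-layer ratio stays near 1, whereas under Glorot each square ReLU layer contracts the expected squared norm by roughly m/(n + m), so the ratio decays with depth, matching the behavior reported in figure 1.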
2019
null
SP:2034dffa26a8e68c466d835ada625fe635a71b66
[ "This paper proposes a new attack framework AdvCodec for adversarial text generation. The main idea is to use a tree-based autoencoder to embed text data into the continuous vector space and then optimize to find the adversarial perturbation in the vector space. The authors consider two types of attacks: concat attack and scatter attack. Experimental results on sentiment analysis and question answering, together with human evaluation on the generated adversarial text, are provided. ", "Motivated by recent development of attack/defense methods addressing the vulnerability of deep CNN classifiers for images, this paper proposes an attack framework for adversarial text generation, in which an autoencoder is employed to map discrete text to a high-dimensional continuous latent space, standard iterative optimization based attack method is performed in the continuous latent space to generate adversarial latent embeddings, and a decoder generates adversarial text from the adversarial embeddings. Different generation strategies of perturbing latent embeddings at sentence level or masked word level are both explored. Adversarial text generation can take either a form of appending an adversarial sentence or a form of scattering adversarial words into different specified positions. Experiments on both sentiment classification and question answering show that the proposed attack framework outperforms some baselines. Human evaluations are also conducted." ]
While there has been great interest in generating imperceptible adversarial examples in continuous data domains (e.g. image and audio) to explore model vulnerabilities, generating adversarial text in the discrete domain is still challenging. The main contribution of this paper is to propose a general targeted attack framework AdvCodec for adversarial text generation which addresses the challenge of the discrete input space and is easily adapted to general natural language processing (NLP) tasks. In particular, we propose a tree-based autoencoder to encode discrete text data into a continuous vector space, upon which we optimize the adversarial perturbation. A tree-based decoder is then applied to ensure the grammar correctness of the generated text. It also enables the flexibility of making manipulations on different levels of text, such as the sentence (AdvCodec(Sent)) and word (AdvCodec(Word)) levels. We consider multiple attacking scenarios, including appending an adversarial sentence or adding unnoticeable words to a given paragraph, to achieve arbitrary targeted attacks. To demonstrate the effectiveness of the proposed method, we consider the two most representative NLP tasks: sentiment analysis and question answering (QA). Extensive experimental results and human studies show that AdvCodec-generated adversarial text can successfully attack neural models without misleading humans. In particular, our attack causes a BERT-based sentiment classifier's accuracy to drop from 0.703 to 0.006, and a BERT-based QA model's F1 score to drop from 88.62 to 33.21 (with the best targeted attack F1 score being 46.54). Furthermore, we show that the white-box generated adversarial texts can transfer across other black-box models, shedding light on an effective way to examine the robustness of existing NLP models.
[]
[ { "authors": [ "Moustafa Alzantot", "Yash Sharma", "Ahmed Elgohary", "Bo-Jhang Ho", "Mani B. Srivastava", "Kai-Wei Chang" ], "title": "Generating natural language adversarial examples", "venue": null, "year": 2018 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "CoRR, abs/1409.0473,", "year": 2015 }, { "authors": [ "Nicholas Carlini", "David A. Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Minhao Cheng", "Jinfeng Yi", "Huan Zhang", "Pin-Yu Chen", "Cho-Jui Hsieh" ], "title": "Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "In NAACL-HLT,", "year": 2019 }, { "authors": [ "Krishnamurthy Dvijotham", "Sven Gowal", "Robert Stanforth", "Relja Arandjelovic", "Brendan O’Donoghue", "Jonathan Uesato", "Pushmeet Kohli" ], "title": "Training verified learners with learned", "venue": "verifiers. ArXiv,", "year": 2018 }, { "authors": [ "Kevin Eykholt", "Ivan Evtimov", "Earlence Fernandes", "Bo Li", "Amir Rahmati", "Chaowei Xiao", "Atul Prakash", "Tadayoshi Kohno", "Dawn Xiaodong Song" ], "title": "Robust physical-world attacks on deep learning", "venue": null, "year": 2017 }, { "authors": [ "Ian J. Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "CoRR, abs/1412.6572,", "year": 2015 }, { "authors": [ "Luheng He", "Kenton Lee", "Mike Lewis", "Luke S. Zettlemoyer" ], "title": "Deep semantic role labeling: What works and what’s next", "venue": null, "year": 2017 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Po-Sen Huang", "Robert Stanforth", "Johannes Welbl", "Chris Dyer", "Dani Yogatama", "Sven Gowal", "Krishnamurthy Dvijotham", "Pushmeet Kohli" ], "title": "Achieving verified robustness to symbol substitutions via interval bound propagation", "venue": null, "year": 1909 }, { "authors": [ "Mohit Iyyer", "Jordan L. Boyd-Graber", "Hal Daumé" ], "title": "Generating sentences from semantic vector space representations", "venue": null, "year": 2014 }, { "authors": [ "Mohit Iyyer", "John Wieting", "Kevin Gimpel", "Luke S. Zettlemoyer" ], "title": "Adversarial example generation with syntactically controlled paraphrase networks", "venue": "In NAACL-HLT,", "year": 2018 }, { "authors": [ "Robin Jia", "Percy Liang" ], "title": "Adversarial examples for evaluating reading comprehension systems", "venue": "In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing,", "year": 2017 }, { "authors": [ "Robin Jia", "Aditi Raghunathan", "Kerem Göksel", "Percy Liang" ], "title": "Certified robustness to adversarial word", "venue": "substitutions. ArXiv,", "year": 2019 }, { "authors": [ "Di Jin", "Zhijing Jin", "Joey Tianyi Zhou", "Peter Szolovits" ], "title": "Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Diederik P. 
Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "CoRR, abs/1412.6980,", "year": 2014 }, { "authors": [ "Qi Lei", "Lingfei Wu", "Pin-Yu Chen", "Alexand ros G. Dimakis", "Inderjit S. Dhillon", "Michael Witbrock" ], "title": "Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "Jinfeng Li", "Shouling Ji", "Tianyu Du", "Bo Li", "Ting Wang" ], "title": "Textbugger: Generating adversarial text against real-world applications", "venue": "arXiv preprint arXiv:1812.05271,", "year": 2018 }, { "authors": [ "Jiwei Li", "Thang Luong", "Daniel Jurafsky", "Eduard H. Hovy" ], "title": "When are tree structures necessary for deep learning of representations", "venue": "In EMNLP,", "year": 2015 }, { "authors": [ "Bin Liang", "Hongcheng Li", "Miaoqiang Su", "Pan Bian", "Xirong Li", "Wenchang Shi" ], "title": "Deep text classification can be fooled", "venue": "arXiv preprint arXiv:1704.08006,", "year": 2017 }, { "authors": [ "Zhouhan Lin", "Minwei Feng", "Cı́cero Nogueira dos Santos", "Mo Yu", "Bing Xiang", "Bowen Zhou", "Yoshua Bengio" ], "title": "A structured self-attentive sentence", "venue": "embedding. ArXiv,", "year": 2017 }, { "authors": [ "Christopher D. Manning", "Mihai Surdeanu", "John Bauer", "Jenny Rose Finkel", "Steven Bethard", "David McClosky" ], "title": "The stanford corenlp natural language processing toolkit", "venue": "In ACL,", "year": 2014 }, { "authors": [ "Takeru Miyato", "Andrew M. Dai", "Ian Goodfellow" ], "title": "Adversarial Training Methods for SemiSupervised Text Classification", "venue": "arXiv e-prints, art", "year": 2016 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Pascal Frossard" ], "title": "Deepfool: A simple and accurate method to fool deep neural networks. 2016", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Ananthram Swami", "Richard Harang" ], "title": "Crafting Adversarial Input Sequences for Recurrent Neural Networks", "venue": "arXiv e-prints, art", "year": 2016 }, { "authors": [ "Nicolas Papernot", "Patrick D. McDaniel", "Xi Wu", "Somesh Jha", "Ananthram Swami" ], "title": "Distillation as a defense to adversarial perturbations against deep neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2016 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher D. 
Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In EMNLP,", "year": 2014 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": null, "year": 2019 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang" ], "title": "SQuAD: 100,000+ questions for machine comprehension of text", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Suranjana Samanta", "Sameep Mehta" ], "title": "Towards crafting text adversarial samples", "venue": "arXiv preprint arXiv:1707.02812,", "year": 2017 }, { "authors": [ "Min Joon Seo", "Aniruddha Kembhavi", "Ali Farhadi", "Hannaneh Hajishirzi" ], "title": "Bidirectional attention flow for machine comprehension", "venue": "CoRR, abs/1611.01603,", "year": 2016 }, { "authors": [ "Kai Sheng Tai", "Richard Socher", "Christopher D. Manning" ], "title": "Improved semantic representations from tree-structured long short-term memory networks", "venue": "In ACL,", "year": 2015 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": null, "year": 2017 }, { "authors": [ "Eric Wallace", "Shi Feng", "Nikhil Kandpal", "Matt Gardner", "Sameer Singh" ], "title": "Universal Adversarial Triggers for Attacking and Analyzing NLP", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Z. Zhao", "D. Dua", "S. Singh" ], "title": "Generating Natural Adversarial Examples", "venue": "ArXiv e-prints,", "year": 2017 }, { "authors": [ "Donald Trump" ], "title": "Input (Italic = Inserted or appended tokens, underline = Model prediction, red = Ground truth) Question: Who ended the series in 1989? Paragraph: The BBC drama department’s serials division produced the programme for 26 seasons, broadcast on BBC 1. Falling viewing numbers, a decline in the public perception of the show and a less-prominent", "venue": null, "year": 1989 }, { "authors": [ "the Boss" ], "title": "Just the Ten of Us, The Wonder Years, Full House and Perfect Strangers. donald trump networks regain a rating leads on american", "venue": null, "year": 1985 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recent studies have demonstrated that deep neural networks (DNNs) are vulnerable to carefully crafted adversarial examples (Goodfellow et al., 2015; Papernot et al., 2016; Eykholt et al., 2017; Moosavi-Dezfooli et al., 2016). While there are a lot of successful attacks proposed in the continuous data domain including images, audios, and videos, how to effectively generate adversarial examples in the discrete text domain still remains a hard problem. There are several challenges for generating adversarial text: 1) most existing gradient-based adversarial attack approaches are not directly applicable to the discrete structured data; 2) it is less clear how to appropriately measure the naturalness of the generated text compared to the original ones; 3) the manipulation space of text is limited, and it is unclear whether generating a new appended sentence or manipulating individual words will affect human judgements.\nSo far, existing works on adversarial text generation either leverage heuristic solutions such as genetic algorithms (Jin et al., 2019) to search for potential adversarial sentences, or are limited to attacking specific NLP tasks (Cheng et al., 2018; Lei et al., 2018). In addition, effective targeted attacks have not been achieved by current attacks for any task. In this paper, we aim to provide more insights towards solving these challenges by proposing a unified optimization framework AdvCodec to generate adversarial text against general NLP tasks. In particular, the core component of AdvCodec is a tree based autoencoder which converts discrete text tokens into continuous semantic embedding, upon which the adversarial perturbation will be optimized regarding the chosen adversarial target. Finally, a tree based decoder will decode the generated adversarial continuous embedding vector back to the sentence level based on the tree grammar rules, aiming to both pre-\nserve the original semantic meaning and linguistic coherence. An iterative process can be applied here to ensure the attack success rate.\nIn addition to the general adversarial text generation framework AdvCodec, this paper also aims to explore several scientific questions: 1) Since AdvCodec allows the flexibility of manipulating on different hierarchies of the tree structures, which is more attack effective and which way preserves better grammatical correctness? 2) Is it possible to achieve targeted attack for general NLP tasks such as sentiment classification and QA, given the limited degree of freedom for manipulation? 3) Is it possible to perform blackbox attack in general NLP tasks? 4) Is BERT robust in practice? 5) Do these adversarial examples affect human reader performances?\nTo address the above questions, we explore two types of tree based autoencoders on the word (AdvCodec(Word)) and sentence level (AdvCodec(Sent)). For each encoding scenario, we generate adversarial text against different sentiment classification and QA models. Compared with the state-of-the-art adversarial text generation methods, our approach achieves significantly higher untargeted and targeted attack success rate. In addition, we perform both whitebox and blackbox settings for each attack to evaluate the model vulnerabilities. Within each attack setting, we evaluate attack strategies as appending an additional adversarial sentence or adding scatter of adversarial words to a paragraph, to evaluate the quantitative attack effectiveness. 
To provide thorough adversarial text quality assessment, we also perform 7 groups of human studies to evaluate the quality of generated adversarial text compared with the baselines methods, and whether human can still get the ground truth answers for these tasks based on adversarial text. We find that: 1) both word and sentence level attacks can achieve high attack success rate, while the sentence level manipulation can consider the global grammatical constraints and generate high quality adversarial sentences. 2) various targeted attacks on general NLP tasks are possible (e.g. when attacking QA, we can ensure the target to be a specific answer or a specific location within a sentence); 3) the transferability based blackbox attacks are successful in NLP tasks. Transferring adversarial text from stronger models (in terms of performances) to weaker ones is more successful; 4) Although BERT has achieved state-ofthe-art performances, we observe the performance drops are also larger than other standard models when confronted with adversarial examples, which indicates BERT is not robust under the adversarial settings; 5) Most human readers are not sensitive to our adversarial examples and can still answer the right answers when confronted with the adversary-injected paragraphs.\nIn summary, our main contribution lies on: (1) We propose a general adversarial text generation framework AdvCodec that addresses the challenge of discrete text input to achieve targeted attacks against general NLP tasks (e.g. sentiment classification and QA) while preserving the semantic meaning and linguistic coherence; (2) we propose a novel tree-based text autoencoder that ensures the grammar correctness of generated text; (3) we conduct extensive experiments and successfully attack different sentiment classifiers and QA models with significant higher attack success rate than the state-of-the-art baseline methods; (4) we also perform comprehensive ablation studies including evaluating the attack scenarios of appending an adversarial sentence or adding scatter of adversarial words, as well as appending the adversarial sentence at different positions within a paragraph, and draw several interesting conclusions; (5) we leverage extensive human studies to show that the adversarial text generated by AdvCodec is natural and effective to attack neural models, while barely affecting human’s judgement." }, { "heading": "2 RELATED WORK", "text": "A large body of works on adversarial examples focus on perturbing the continuous input space. Though some progress has been made on generating adversarial perturbations in the discrete space, several challenges still remain unsolved. For example, Zhao et al. (2017) exploit the generative adversarial network (GAN) to generate natural adversarial text. However, this approach cannot explicitly control the quality of the generated instances. Most existing methods (Liang et al., 2017; Samanta & Mehta, 2017; Jia & Liang, 2017; Li et al., 2018; Jin et al., 2019) apply heuristic strategies to synthesize adversarial text: 1) first identify the features (e.g. characters, words, and sentences) that have the influence on the prediction, 2) follow different search strategies to perturb these features with the constructed perturbation candidates (e.g. typos, synonyms, antonyms, frequent words). For instance, Liang et al. 
(2017) employ the loss gradient ∇L to select important characters and phrases to perturb, while Samanta & Mehta (2017) use typos, synonyms, and important adverbs/adjectives as candidates for insertion and replacement. Once the influential features are obtained, the strategies to apply the perturbation generally include insertion, deletion, and replacement. Such adversarial text generation approaches cannot guarantee the grammatical correctness of the generated text. For instance, text generated by Liang et al. (2017) is almost a random stream of characters. To generate grammatically correct perturbations, Jia & Liang (2017) adopt another heuristic strategy that adds manually constructed, legitimate distracting sentences to the paragraph to introduce fake information. These heuristic approaches are in general not scalable and cannot achieve targeted attacks, where the adversarial text leads to a chosen adversarial target (e.g. an adversarial label in classification). Recent work searches for a universal trigger (Wallace et al., 2019) to be applied to arbitrary sentences to fool the learner, but the reported attack success rate is rather low. In contrast, with the tree-based autoencoder, the proposed AdvCodec framework is able to generate grammatically correct adversarial text efficiently, achieving high attack success rates against different models." }, { "heading": "3 THE ADVCODEC FRAMEWORK FOR ADVERSARIAL TEXT GENERATION", "text": "We describe the AdvCodec framework in this section. As illustrated in Figure 1, the key component of the AdvCodec framework is a tree-based autoencoder. The hierarchical and discrete nature of language motivates us to use a tree-based autoencoder to map discrete text into a high-dimensional latent space, which empowers us to leverage existing optimization-based attack methods such as Carlini & Wagner (2016) to generate adversarial text both efficiently and effectively.
Let X be the domain of text and S be the domain of dependency parsing trees over elements in X. Formally, a tree-based autoencoder consists of an encoder E : X × S → Z that encodes text x ∈ X along with its dependency parsing tree s ∈ S into a high-dimensional latent representation z ∈ Z, and a decoder G : Z × S → X that generates the corresponding text x from the given context vector z and the expected dependency parsing tree s. Given a dependency tree s, E and G form an autoencoder. We thus have the following reconstruction loss to train our tree-based autoencoder:
L = −Ex∼X [log pG(x | s, E(x, s))] (1)
As Figure 1 suggests, AdvCodec can operate on different granularity levels to generate either word-level or sentence-level contextual representations and decode them into adversarial text. We refer to the sentence-level AdvCodec as AdvCodec(Sent) and to the word-level one as AdvCodec(Word). Both are described in more detail later in this section, and a code sketch of the reconstruction objective in Equation (1) follows.
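The following is a minimal sketch of the reconstruction objective in Equation (1), assuming the tree decoder exposes per-node vocabulary logits; the function and tensor names are illustrative rather than the actual AdvCodec interfaces.

```python
import torch.nn.functional as F

def reconstruction_loss(decoder_logits, gold_token_ids):
    # decoder_logits: (num_tree_nodes, vocab_size) scores emitted by the
    # tree decoder G while traversing the dependency tree s.
    # gold_token_ids: (num_tree_nodes,) gold word id at each tree node.
    # cross_entropy averages -log p_G(word | s, z) over the tree nodes,
    # i.e. a Monte Carlo estimate of Eq. (1) for one sample x.
    return F.cross_entropy(decoder_logits, gold_token_ids)
```

Minimizing this loss over a large unlabeled corpus trains E and G jointly; it is the decoder's differentiability that later allows gradient-based search over the latent vector z.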
3.1 OVERVIEW OF THE ADVCODEC FRAMEWORK
Before diving into details, we provide a high-level overview of AdvCodec in terms of the attack scenario and the attack capability supported by this framework.
Attack Scenario. Different from previous adversarial text generation works (Lei et al., 2018; Cheng et al., 2018; Papernot et al., 2016; Miyato et al., 2016; Alzantot et al., 2018) that directly modify critical words in place and thus risk changing the semantic meaning or editing the ground-truth answers, we instead generate concatenative adversaries. First proposed by Jia & Liang (2017), a concatenative adversary does not change any words in the original paragraph or question, but instead appends a new adversarial sentence to the paragraph to fool the model. However, the concatenative attack also needs to ensure that the appended sentence is compatible (Jia & Liang, 2017) with the original paragraph, which means it should not contradict any stated facts in the paragraph, especially the correct answer. In our work, we push the concept of concatenative adversaries further and propose a more general notion called the scatter attack, in which we inject adversarial words sporadically over the whole paragraph. The concatenative adversarial example falls into our framework when those adversarial tokens form a sentence and, at the same time, the semantic meaning of the sentence does not contradict the original paragraph. Examples of the concatenative attack and the scatter attack are shown in Table 1.
Attack Capability. AdvCodec is essentially an optimization-based framework for finding adversarial text, with the optimization goal set to achieve targeted attacks. For the sentiment classification task, AdvCodec can perform a targeted attack to make originally positive reviews be classified as the most negative one, and vice versa. Particularly for the QA task, we design and implement two kinds of targeted attacks: the position targeted attack and the answer targeted attack. A successful position targeted attack means the model can be fooled to output answers at specific targeted positions in the paragraph, but the content of the targeted span cannot be guaranteed. In contrast, a successful answer targeted attack is a stronger targeted attack, in which the model always outputs the preset targeted answer pair for the target no matter what the question looks like. An example of the answer targeted attack can be found in Table 1. Although our framework is designed as a whitebox attack, our experimental results demonstrate that our whitebox-generated adversarial words can transfer to other blackbox models with a high attack success rate. Finally, because AdvCodec is a unified adversarial text generation framework whose outputs are discrete tokens, it can be applied to different downstream NLP tasks. In this paper, we perform adversarial evaluation on sentiment classification and QA as examples to demonstrate how our framework is adapted to different tasks.
3.2 ADVCODEC(SENT)
In this subsection, we describe AdvCodec(Sent) and explain how to utilize it to attack sentiment classification models and question answering systems. The main idea comes from the fact that tree structures sometimes perform better than sequential recurrent models (Li et al., 2015; Iyyer et al., 2014; 2018) and that it is inherently flexible to add perturbations to hierarchical nodes of tree structures. Motivated by this, we design a novel tree-based autoencoder that simultaneously preserves similar semantic meaning and the original syntactic structure.
Encoder. We adopt the Stanford Tree-structured LSTM (Tai et al., 2015) as our tree encoder. In the encoding phase, features are extracted and summed from bottom (leaf node, i.e., word) to top (root node) along the dependency tree, extracted by the Stanford CoreNLP Parser (Manning et al., 2014). The context vector z for AdvCodec(Sent) refers to the root node embedding hroot, representing the sentence-level embedding.
Decoder.
Following the same dependency tree, we design the text decoder as illustrated in Figure 2. In the decoding phase, we start from the root node and traverse the dependency tree in level-order. The hidden state hj of the next node j comes from (i) the hidden state hi of the current tree node, (ii) the predicted word embedding wi of the current node, and (iii) the dependency embedding dij between the current node i and the next node j based on the dependency tree. The word yj corresponding to the next node is generated based on the output oj of the LSTM cell via a linear layer that maps the hidden representation oj to logits representing a probability distribution over the tree's vocabulary.
oj, hj = LSTM([hi; wi; dij]) (2)
yj = W · oj + b (3)" }, { "heading": "3.2.1 ATTACK SENTIMENT CLASSIFICATION MODEL", "text": "Initial Seed. Following our pipeline, to optimize the adversarial sentence (AdvSentence) appended to the paragraph, we first need an initial seed for optimization. Such an initial seed for the sentiment classification task can be arbitrary. For example, we can simply sample a sentence no shorter than 3 words from the original paragraph and append it to the start of the paragraph when attacking BERT. The append position does have an influence on the attack success rate, and a more detailed ablation analysis is provided in the next section.
Optimization Procedure. To find the optimal perturbation z∗ on the context vector z, we aim to solve
minimize ||z∗||p + c · f(z + z∗), (4)
where f is the objective function for the targeted attack and c is the constant balancing the perturbation magnitude against the attack target. Specifically, we use the objective function f proposed in Carlini & Wagner (2016), as follows:
f(z′) = max(max{Z(G(z′, s))i : i ≠ t} − Z(G(z′, s))t, −κ) (5)
where z′ = z + z∗, t is the target class, Z(·) is the logit output of the classification model before softmax, and κ is the confidence score that adjusts the misclassification rate. The optimal solution is searched iteratively via the Adam optimizer (Kingma & Ba, 2014); a sketch of this search is given below." },
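Below is a minimal PyTorch-style sketch of the search in Equations (4) and (5). The callable decode_logits, which composes the tree decoder with the victim classifier's logit output Z(G(·, s)), is an assumed differentiable interface, not the released AdvCodec API.

```python
import torch

def attack_latent(z, decode_logits, target, c=1.0, kappa=0.0,
                  steps=100, lr=0.6):
    # Optimize a perturbation z* on the context vector z (Eqs. 4-5).
    # decode_logits: hypothetical callable mapping a perturbed context
    # vector z' to the victim model's pre-softmax scores Z(G(z', s)).
    delta = torch.zeros_like(z, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = decode_logits(z + delta)
        others = logits.clone()
        others[target] = float('-inf')     # exclude the target class
        f = torch.clamp(others.max() - logits[target], min=-kappa)
        loss = delta.norm(p=2) + c * f     # Eq. (4) with p = 2
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (z + delta).detach()
```

The learning rate of 0.6 and the 100 steps mirror the settings reported in Appendix A.2; in practice the loop would stop early once the decoded sentence achieves the targeted prediction.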
{ "heading": "3.2.2 ATTACK QUESTION ANSWERING SYSTEM", "text": "Initial Seed. Different from attacking sentiment analysis, it is important to choose a good initial seed that is semantically close to the context or the question when attacking a QA model. In this way, we can reduce the number of iteration steps and attack the QA model more efficiently. Based on the heuristic experiments conducted in Appendix A.4, we choose to use the question words to craft an initial seed. We design a set of coarse-grained rules to convert a question sentence into a meaningful declarative statement and assign a target fake answer. The fake answer can be crafted according to the perturbed model's predicted answer, or it can be manually chosen by adversaries. As for the location where we append the sentence, we follow the setting in Jia & Liang (2017) and add the adversary to the end of the paragraph so that we can make a fair comparison with their results.
It is worth noting that, unlike Jia & Liang (2017), which uses complicated rules to ensure that the adversarial sentence does not change the ground truth, this heuristic step is only the very first step of our framework and is followed by a series of optimization steps that ensure the ground truth is not changed. Specifically, we ensure that our appended adversarial sentences are not contradictory to the ground truth by (a) choosing an initial sentence as the initial seed of the optimization, (b) adding a perturbation to the sentence, (c) searching for the optimal adversarial sentence, and (d) ensuring that the adversarial sentence and the context sentence are disjoint; otherwise, we continue the iteration steps. If the maximum number of steps is reached, the optimization is regarded as a failure.
Optimization Procedure. We follow the same optimization procedure as for the sentiment classification task, except for a subtle change in the objective function f due to the difference between the QA model and the classification model:
f(z′) = ∑j=1..2 max(max{Zj(G(z′, s))i : i ≠ tj} − Zj(G(z′, s))tj, −κ) (6)
where Z1(·) is the output logits of the answer start position and Z2(·) is the output logits of the answer end position in the QA system, and t1 and t2 are, respectively, the targeted start position and the targeted end position. For the position targeted attack mentioned in Section 3.1, we expect the model output to be a span in the paragraph from the targeted start position t1 to the targeted end position t2. In contrast, the answer targeted attack requires the model to output the predefined answer spans at the targeted positions and keep them unmodified during the optimization steps by setting gates on the targeted answer span: yj = g1 · yj + g2 · xj (j = t1, t1 + 1, ..., t2), where yj refers to the tree-decoded adversarial tokens. We set g1 = 1 and g2 = 0 in the position targeted attack, and g1 = 0 and g2 = 1 in the answer targeted attack." }, { "heading": "3.3 ADVCODEC(WORD)", "text": "Not only can we apply perturbations to the root node of our tree-based autoencoder to generate adversarial sentences, but we can also perturb nodes at different hierarchical levels of the tree to generate adversarial words. The most general case is when the perturbation is exerted directly on the leaf nodes of the tree autoencoder, i.e., the word-level perturbation.
AdvCodec(Word) shares exactly the same architecture and optimization steps mentioned above to attack the targeted models. The distinction between AdvCodec(Word) and AdvCodec(Sent) is the context vector z. Formally, for the word-level attack, the context vector z is the concatenation of the leaf-node embeddings zi (one per word): z = [z1, z2, ..., zn]. Different from AdvCodec(Sent), where the perturbation is added to the whole sentence, we can control where perturbations are added by assigning each node a mask, as follows:
z′i = zi + mask · z∗i (7)
When we expect some token zi to be adversarially changed, we simply assign mask = 1, thus adding the perturbation to that token (see the sketch below).
As the perturbation can be controlled at the level of individual words, we propose a new attack scenario, the scatter attack, which scatters some initial tokens over the paragraph, adds perturbations only to those tokens, and finds the best adversarial tokens via the same optimization procedure mentioned above. Moreover, concatenative adversarial examples (e.g., those generated by AdvCodec(Sent)) can also be crafted by AdvCodec(Word), because concatenative adversaries are simply special cases of the scatter attack." },
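As a companion to Equations (6) and (7), the following sketch shows the two-position QA objective and the masked word-level perturbation; the tensor shapes and names are illustrative assumptions.

```python
import torch

def cw_qa_loss(start_logits, end_logits, t1, t2, kappa=0.0):
    # Eq. (6): sum of C&W terms for the targeted start (t1) and end (t2)
    # answer positions; start_logits/end_logits are 1-D score vectors.
    loss = 0.0
    for logits, t in ((start_logits, t1), (end_logits, t2)):
        others = logits.clone()
        others[t] = float('-inf')       # exclude the targeted position
        loss = loss + torch.clamp(others.max() - logits[t], min=-kappa)
    return loss

def perturb_leaves(z, delta, mask):
    # Eq. (7): z'_i = z_i + mask_i * z*_i. z, delta: (seq_len, hidden);
    # mask: (seq_len, 1) with 1 at tokens we allow to change (e.g. the
    # scattered tokens of the scatter attack) and 0 everywhere else.
    return z + mask * delta
```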
{ "heading": "4 EXPERIMENTAL RESULTS", "text": "In this section we present the experimental evaluation results for AdvCodec. In particular, we target two popular NLP tasks, sentiment classification and QA. For both, we perform whitebox and transferability-based blackbox attacks. In addition to the model accuracy (untargeted attack evaluation), we also report the targeted attack success rate for AdvCodec. We show that the proposed AdvCodec outperforms other state-of-the-art baseline methods on different models." }, { "heading": "4.1 SENTIMENT ANALYSIS", "text": "Task and Dataset. In this task, a sentiment analysis model takes user reviews of restaurants and stores as input and is expected to predict the number of stars (from 1 to 5) that the user assigned. We choose the Yelp dataset (Challenge) for the sentiment analysis task. It consists of 2.7M Yelp reviews, from which we follow the process of Lin et al. (2017) to randomly select 500K review-star pairs as the training set, 2000 as the development set, and 2000 as the test set.
Model. BERT (Devlin et al., 2019) is a Transformer-based (Vaswani et al., 2017) model, which is pretrained on a large corpus in an unsupervised manner and has proven effective for downstream NLP tasks. The Self-Attentive Model (SAM) (Lin et al., 2017) is a state-of-the-art text classification model that uses a self-attentive mechanism. More detailed model settings are listed in the appendix.
Baseline. Seq2sick (Cheng et al., 2018) is a whitebox projected gradient method for attacking seq2seq models. Here, we perform the Seq2sick attack on sentiment classification models by changing its loss function, a setting that was not evaluated in the original paper. TextFooler (Jin et al., 2019) is a simple yet strong blackbox attack method for generating adversarial text. Following the same setting, Seq2Sick and TextFooler are only allowed to edit the appended sentence or tokens.
Adversarial Evaluation. We perform the baseline attacks and our AdvCodec attack in the scatter attack scenario and the concat attack scenario under whitebox settings. Our targeted goal for sentiment classification is the opposite sentiment. Specifically, we set the targeted attack goal to 5-star for reviews originally below 3-star and to 1-star for reviews above. We compare our results with the strong word-level attacker Seq2sick, as shown in Table 2. We can see that our AdvCodec(Word) outperforms the baselines and achieves a nearly 100% attack success rate on the BERT model. We also observe that the targeted success rate for AdvCodec(Sent) is lower than that of the word-level baseline. We assume the reason is that AdvCodec(Sent) is constrained by the dependency tree during the decoding phase, which increases the difficulty of finding sentences that are both grammatically correct and adversarially successful. On the contrary, the Seq2Sick baseline can edit any words without semantic or syntactic constraints. Moreover, our human evaluation below confirms that AdvCodec(Sent) has better language quality.
Scatter Attack v.s. Concat Attack. In addition, we find that the scatter attack success rate is slightly lower than that of the concat attack. We think there are two reasons for this phenomenon. Firstly, the average number of tokens added in the scatter attack is 10, while the average number of tokens added in the concat attack is 19. Therefore, the concat attack has the freedom to manipulate more words than the scatter attack, resulting in higher attack accuracy. Secondly, inserting adversarial tokens into different positions of the passage also affects the success rate, as shown in Appendix A.5.
Blackbox Attack. We perform transferability-based blackbox attacks. We compare our blackbox attack success rate with the blackbox baseline TextFooler and blackbox Seq2Sick based on transferability.
Table 3 demonstrates that our AdvCodec(Word) model still has the best blackbox targeted and untargeted success rates among all the baseline models." }, { "heading": "4.2 QUESTION ANSWERING (QA)", "text": "Task and Dataset. For this task, we choose the SQuAD dataset (Rajpurkar et al., 2016). The SQuAD dataset is a reading comprehension dataset consisting of 107,785 questions posed by crowd workers on a set of Wikipedia articles, where the answer to each question must be a segment of text from the corresponding reading passage. To compare our method with other adversarial evaluation works (Jia & Liang, 2017) on the QA task, we evaluate our adversarial attacks on the same test set as Jia & Liang (2017), which consists of 1000 randomly sampled examples from the SQuAD development set. We use the official script of the SQuAD dataset (Rajpurkar et al., 2016) to measure both adversarial exact match rates and F1 scores.
Model. We adapt the BERT model to run on SQuAD v1.1 with the same strategy as in Devlin et al. (2019), and we reproduce the result on the development set. BiDAF (Seo et al., 2016) is a multi-stage hierarchical process that represents the context at different levels of granularity and uses a bidirectional attention flow mechanism to obtain a query-aware context representation.
Baseline. Universal Adversarial Triggers (Wallace et al., 2019) are input-agnostic sequences of tokens that trigger a model to produce a specific prediction when concatenated to any input from a dataset. Here, we compare the targeted attack ability of AdvCodec with it. AddSent (Jia & Liang, 2017) appends a manually constructed, legitimate distracting sentence to the given text so as to introduce fake information; it can only perform untargeted attacks.
Adversarial Evaluation. We perform the whitebox attack with different attack methods on our testing models. As shown in Table 4, AdvCodec(Word) achieves the best whitebox attack results on both BERT and BiDAF. It is worth noting that although BERT performs better than BiDAF, the performance drop for BERT (∆F1BERT = 55.4) is larger than the performance drop for BiDAF (∆F1BiDAF = 53.0), which again shows that BERT is insecure under adversarial evaluation. We also find that the position targeted attack is slightly stronger than the answer targeted attack. We assume this is because the answer targeted attack has a fixed targeted answer and limited freedom to alter the appended sentence, whereas the position targeted attack has more freedom to alter the fake answer within the targeted position spans. We also tried the scatter attack on QA, though its targeted performance is not good. It turns out QA systems rely highly on the relationship between the question and contextual clues, which is hard to break when setting an arbitrary token as the target answer. As discussed in Appendix A.3, the untargeted scatter attack works well and outperforms the baseline methods.
Table 5: Targeted whitebox attack results on QA (columns correspond to the compared attack methods). BERT: target EM 32.1, 43.4, 1.4; target F1 32.4, 46.5, 2.1. BiDAF: target EM 53.3, 71.2, 21.2; target F1 56.8, 75.6, 22.6.
We then test the targeted results of the whitebox attack methods on the QA models. The results are shown in Table 5, which shows that AdvCodec(Word) has the best targeted attack ability on QA, and all our attack methods outperform the baseline (Universal Triggers) in terms of targeted results.
Blackbox Attack. We also transfer the adversarial text generated by the whitebox attacks to perform blackbox attacks. Table 6 shows the results of the blackbox attack on the testing models.
All our proposed methods outperform the baseline method (AddSent) when transferring the adversaries among models with the same architecture." }, { "heading": "5 HUMAN EVALUATION", "text": "We conduct a thorough human subject evaluation to assess the human response to different types of generated adversarial text. The main conclusion is that even though these adversarial examples are effective at attacking machine learning models, they are much less noticeable by humans." }, { "heading": "5.1 COMPARISON OF ADVERSARIAL TEXT QUALITY", "text": "To understand what humans think of our adversarial data quality, we present the adversarial text generated by AdvCodec(Sent) and AdvCodec(Word) based on the same initial seed. Human participants are asked to choose which text they think has better language quality.
In this experiment, we prepare 600 adversarial text pairs from the same paragraphs and initial seeds. We hand these pairs out to 28 Amazon Mechanical Turk workers. Each worker is required to annotate at least 20 pairs and at most 140 pairs to ensure that the task has been well understood. We assign each pair to at least 5 unique workers and take the majority vote over the responses. The human evaluation results are shown in Table 7, from which we see that the overall vote ratio for AdvCodec(Sent) is 66%, meaning that AdvCodec(Sent) has better language quality than AdvCodec(Word) from a human perspective. This is due to the fact that AdvCodec(Sent) more fully harnesses the tree-based autoencoder structure than AdvCodec(Word). It is no surprise that better language quality comes at the expense of a lower adversarial success rate: as Table 2 shows, the adversarial targeted success rate of AdvCodec(Sent) on SAM is 20% lower than that of AdvCodec(Word), which confirms the trade-off between language quality and adversarial success rate." }, { "heading": "5.2 HUMAN PERFORMANCE ON ADVERSARIAL TEXT", "text": "Table 8: Human performance on Sentiment Analysis
Method Majority Acc
Origin 0.95
AdvCodec(Word) 0.82
AdvCodec(Sent) 0.82
Table 9: Human performance on QA
Method Majority F1
Origin 90.987
AdvCodec(Word) 82.897
AdvCodec(Sent) 81.784
To ensure that our generated adversarial text is compatible with the original paragraph, we ask human participants to perform the sentiment classification and question answering tasks on both the original and the adversarial datasets. The adversarial dataset for sentiment classification consists of AdvCodec(Sent) concatenative adversarial examples and AdvCodec(Word) scatter attack examples. The adversarial dataset for QA consists of concatenative adversarial examples generated by both AdvCodec(Sent) and AdvCodec(Word). More specifically, we prepare 100 benign and adversarial data pairs each for QA and sentiment classification, and hand them out to 505 Amazon Mechanical Turk workers. Each worker is requested to answer at least 5 and at most 15 questions for the QA task, and to judge the sentiment of at least 10 and at most 20 paragraphs for the sentiment classification task. We also perform a majority vote over the workers' answers for the same question. The human evaluation results are displayed in Tables 8 and 9, from which we see that most of our concatenated adversarial text is compatible with the paragraph. While we can spot a drop from the benign to the adversarial datasets, an error analysis in QA shows that the error examples are noisy and not necessarily caused by our adversarial text.
For adversarial data in the sentiment classification task, we notice that the generated tokens or appended sentences have the opposite sentiment from the benign text. However, our evaluation results show that human readers can naturally ignore abnormal tokens and make correct judgements according to the context." }, { "heading": "6 DISCUSSION AND FUTURE WORKS", "text": "Besides the conclusions pointed out in the Introduction, we also summarize some interesting findings: (1) While AdvCodec(Word) achieves the best attack success rate across multiple tasks, we observe a trade-off between the freedom of manipulation and the attack capability. For instance, AdvCodec(Sent) has dependency-tree constraints and is more natural for human readers, but is less effective at attacking models than AdvCodec(Word). Similarly, the answer targeted attack in QA has fewer words to manipulate and change than the position targeted attack, and therefore has slightly weaker attack performance. (2) The scatter attack is as effective as the concat attack in the sentiment classification task but less successful in QA, because QA systems make decisions based heavily on the contextual correlation between the question and the paragraph, which makes it difficult to set an arbitrary token as our targeted answer. (3) Transferring adversarial text from models with better performance to weaker ones is more successful. For example, transferring adversarial examples from BERT-QA to BiDAF achieves a much better attack success rate than the reverse. (4) We also notice that adversarial examples transfer better among models with similar architectures than among models with different architectures. (5) BERT models pay more attention to both ends of the paragraph and tend to overlook the content in the middle, as shown in the ablation study in Appendix A.5: adding adversarial sentences in the middle of the paragraph is less effective than adding them at the front or the end. To defend against these adversaries, we discuss the following possible methods, which we will explore in depth in future work: (1) Adversarial training is a practical method for defending against adversarial examples. However, its drawback is that we usually cannot know in advance what the threat model is, which makes adversarial training less effective against unseen attacks. (2) Interval Bound Propagation (IBP) (Dvijotham et al., 2018) has been proposed as a new technique to theoretically consider the worst-case perturbation. Recent works (Jia et al., 2019; Huang et al., 2019) have applied IBP in the NLP domain to certify the robustness of models. (3) Language models, including GPT2 (Radford et al., 2019), may also function as anomaly detectors to probe inconsistent and unnatural adversarial sentences." }, { "heading": "A ADVCODEC SETTINGS", "text": "" }, { "heading": "A.1 AUTOENCODER SELECTION", "text": "Seq2seq autoencoder. We also tried the traditional sequential architecture (seq2seq) as a different autoencoder in the AdvCodec pipeline. For the seq2seq encoder-decoder, we use a bi-directional LSTM as the encoder (Hochreiter & Schmidhuber, 1997) and a two-layer LSTM plus a soft attention mechanism over the encoded states as the decoder (Bahdanau et al., 2015).
During the attack, the LSTM cell sequentially takes the embedding of each word xi as input and outputs the encoded state hi. The context vector z here refers to the last step's output hn of the encoder LSTM cell.
The perturbation z∗ is added only to the context vector hn, without influencing the previous encoded states hi (i < n).
As an ablation study, we compare its whitebox attack capability with that of our AdvCodec against BiDAF on the QA task. As Table 10 shows, the seq2seq-based AdvCodec cannot achieve a good attack success rate. Moreover, because the seq2seq model does not take grammatical constraints into consideration, the quality of the generated adversarial text cannot be ensured.
Tree autoencoder. Throughout the experiments, we used the Stanford TreeLSTM as the tree encoder and our proposed tree decoder, together forming the tree autoencoder. We trained the tree autoencoder on the Yelp dataset, which contains 500K reviews. The model is expected to read a sentence, map the sentence into a latent space, and reconstruct the sentence from the embedding along with the dependency tree structure in an unsupervised manner. The model uses 300-dimensional vectors as hidden tree-node embeddings and is trained for 30 epochs with an adaptive learning rate and weight decay. After training, the average reconstruction loss on the test set is 0.63." }, { "heading": "A.2 ATTACK SETTINGS", "text": "We used the Carlini & Wagner (2016) attack as the optimization procedure to search for the optimal z∗ that can attack the targeted model. We update z∗ iteratively via gradient descent over the objective functions (5) and (6) for the respective tasks. We use Adam (Kingma & Ba, 2014) as the optimizer, set the learning rate to 0.6, and set the number of optimization steps to 100. We follow the Carlini & Wagner (2016) method to find suitable parameters for the objective function (the weight constant c and the confidence score κ) by binary search; a sketch of this search is given below.
We include our attack algorithm as pseudo-code in Algorithm 1.
Algorithm 1 AdvCodec adversarial example generation
1: procedure ADVCODEC(x, s) ▷ x: initial seed, s: corresponding dependency tree
2: z := E(x, s) ▷ E: encoder of AdvCodec, z: context vector
3: z∗ := 0 ▷ z∗: perturbation on the context vector
4: z′ := z + z∗ ▷ z′: perturbed context vector
5: y := G(z′, s) ▷ G: decoder of AdvCodec, y: adversarial sentence
6: f(z′) := the objective function for attacking the targeted model
7: while y does not achieve the targeted attack do
8: update z∗ by gradient descent over the objective function f(z′), then recompute z′ and y
9: end while
10: return y
11: end procedure" },
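A minimal sketch of that binary search over the weight constant c; run_attack is a hypothetical wrapper around Algorithm 1 that reports whether the decoded text fools the victim model for a given c.

```python
def search_weight_c(run_attack, c_lo=1e-3, c_hi=1e2, iters=10):
    # run_attack(c) -> (success, adversarial_text): one full run of
    # Algorithm 1 with the given weight constant c (assumed interface).
    best = None
    for _ in range(iters):
        c = (c_lo * c_hi) ** 0.5      # geometric midpoint of the range
        success, adv_text = run_attack(c)
        if success:
            best, c_hi = adv_text, c  # succeeded: try a smaller c next
        else:
            c_lo = c                  # failed: give the attack term more weight
    return best
```

Using a geometric rather than arithmetic midpoint is a design choice in this sketch, since useful values of c typically span several orders of magnitude.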
{ "heading": "A.3 UNTARGETED SCATTER ATTACK ON QA", "text": "We tried the targeted scatter attack on QA; however, the targeted attack success rate is not satisfactory. It turns out QA systems rely highly on the relationship between the question and contextual clues, which is hard to break when setting an arbitrary token as the target answer. This is also why we use some preliminary approaches to create a similar fake context when initializing the appended QA sentence.
We also performed the untargeted scatter attack on QA. The results are shown in Table 11. We insert 30 random tokens (but no more than one third of the total words of the paragraph) over the paragraph, then optimize to find adversarial tokens that cause the model to output wrong answers in an untargeted manner. We can see that the untargeted scatter attack achieves a higher untargeted attack success rate than Jia & Liang (2017)." }, { "heading": "A.4 HEURISTIC EXPERIMENTS ON CHOOSING THE INITIAL SEED FOR QA", "text": "We conduct the following heuristic experiments on how to choose a good initialization sentence to attack QA models more effectively. Based on these experiments, we confirm that it is important to choose a sentence that is semantically close to the context or the question as the initial seed when attacking a QA model, so that we can reduce the number of iteration steps and find the adversary that fools the model more effectively. Here we describe three ways to choose the initial sentence, and we show the efficacy of these methods given the same maximum number of optimization steps.
Random initial sentence. Our first trial is to use a random sentence (other than the answer sentence), generate a fake answer similar to the real answer, and append it to the end of the paragraph as the initial seed.
Question-based initial sentence. We also try to use the question words to craft an initial sentence, which in theory should gain more attention when the model is matching characteristic similarity between the context and the question. To convert a question sentence into a meaningful declarative statement, we use the following steps.
In step 1, we use state-of-the-art semantic role labeling (SRL) tools (He et al., 2017) to parse the question into verbs and arguments. A set of rules is defined to remove the arguments that contain interrogative words, unimportant adjectives, and so on. In the next step, we access the model's original predicted answer and locate the answer sentence. We again run SRL parsing and find the argument to which the answer belongs. The whole answer argument is extracted, but the answer tokens are substituted with the nearest words in the GloVe word vectors (Pennington et al., 2014) that are also used in the QA model. In this way, we craft a fake answer that shares the answer's context, solving the compatibility issue from the starting point. Finally, we replace the declarative sentence's removed arguments with the fake argument and choose this question-based sentence as our initial sentence.
Answer-based initial sentence. We also consider directly using the model's originally predicted answer sentence with some substitutions as the initial sentence. Crafting a fake answer sentence is much easier than crafting one from the question words. Similar to step 2 of creating the question-based initial sentence, we request the model's original predicted answer and find the answer sentence. The answer span in the answer sentence is directly substituted with the nearest words in the GloVe word vector space to preliminarily avoid the compatibility problem.
Experimental Results. We tried the above initial-sentence selection methods with AdvCodec(Word) and performed the position targeted attack on BERT-QA given the same maximum number of optimization steps. The experimental results are shown in Table 12. From the table, we find that different initialization methods greatly affect the attack success rates. Therefore, the initial-sentence selection methods are indeed important for reducing the number of iteration steps and quickly converging to an optimal z∗ that can attack the model." }, { "heading": "A.5 ABLATION STUDY ON MODEL ATTENTION", "text": "To further explore how the location of adversarial sentences affects the attack success rate, we conduct ablation experiments by varying the position of the appended adversarial sentence. We generate the adversarial sentences from the whitebox BERT classification and QA models. Then we inject those adversaries into different positions of the original paragraph and test on another blackbox BERT with the same architecture but different parameters. The results are shown in Tables 13 and 14.
We observe that in most cases appending the adversarial sentence at the beginning of the paragraph achieves the best attack performance, and the performance of appending the adversarial sentence at the end of the paragraph is usually slightly weaker than appending it at the front. This observation suggests that the BERT model might pay more attention to both ends of the paragraph and tend to overlook the content in the middle.
Table 14: Blackbox attack success rate after inserting the whitebox-generated adversarial sentence at different positions for BERT-QA.
Method Back Mid Front
Adv(Word) EM 32.3 39.1 31.9
Adv(Word) F1 36.4 43.4 36.3
Adv(Sent) EM 47.0 51.3 42.4
Adv(Sent) F1 52.0 56.7 47.0" }, { "heading": "B MODEL SETTINGS & HUMAN EVALUATION", "text": "" }, { "heading": "B.1 SENTIMENT CLASSIFICATION MODEL", "text": "BERT. We use the 12-layer BERT-base model1 with 768 hidden units, 12 self-attention heads, and 110M parameters. We fine-tune the BERT model on our 500K-review training set for text classification with a batch size of 32, a maximum sequence length of 512, and a learning rate of 2e-5 for 3 epochs. For text longer than 512 tokens, we keep only the first 512 tokens.
Self-Attentive Model (SAM). We choose the structured self-attentive sentence embedding model (Lin et al., 2017) as the testing model, as it not only achieves state-of-the-art results on the sentiment analysis task among the baseline models but also provides an approach to quantitatively measure model attention, which helps us conduct and analyze our adversarial attacks. The SAM with 10 attention hops internally uses a 300-dimensional BiLSTM and a 512-unit fully connected layer before the output layer. We trained SAM on our 500K-review training set for 29 epochs with the stochastic gradient descent optimizer and an initial learning rate of 0.1.
1https://github.com/huggingface/pytorch-pretrained-BERT" }, { "heading": "B.2 SENTIMENT CLASSIFICATION ATTACK BASELINE", "text": "Seq2sick (Cheng et al., 2018) is a whitebox projected gradient method combined with group lasso and gradient regularization to craft adversarial examples that fool seq2seq models. Here, we define the loss function as Ltarget = max_{k∈Y} {z(k)} − z(t) to perform attacks on sentiment classification models, which were not evaluated in the original paper. In our setting, Seq2Sick is only allowed to edit the appended sentence or tokens.
TextFooler (Jin et al., 2019) is a simple but strong blackbox attack method for generating adversarial text. Here, TextFooler is also only allowed to edit the appended sentence." }, { "heading": "B.3 QA MODEL", "text": "BiDAF. The Bi-Directional Attention Flow (BiDAF) network (Seo et al., 2016) is a multi-stage hierarchical process that represents the context at different levels of granularity and uses a bidirectional attention flow mechanism to obtain a query-aware context representation. We train BiDAF without the character embedding layer under the same settings as in Seo et al. (2016) as our testing model." }, { "heading": "B.4 HUMAN ERROR ANALYSIS IN ADVERSARIAL DATASET", "text": "We compare the human accuracy on both benign and adversarial texts for both tasks (QA and classification) in Section 5.2. We observe that human performance drops slightly on adversarial texts; in particular, it drops around 10% for both the QA and classification tasks based on AdvCodec, as shown in Tables 8 and 9.
We believe this performance drop is tolerable, given that the state-of-the-art generic QA attack algorithm caused around a 14% drop in human performance (Jia & Liang, 2017).
We also analyze the human error cases. In QA, we find that most wrong human answers do not point to our generated fake answer, which confirms that those errors are not necessarily caused by our concatenated adversarial sentence. A further quantitative analysis shows that aggregating human results can induce sampling noise. Since we use a majority vote to aggregate the human answers, when different answers happen to receive the same number of votes, we randomly choose one as the final result. If we instead always choose the answer closest to the ground truth in tied cases, the majority-vote F1 score increases from 82.897 to 89.167, which indicates that such randomness, rather than the adversarial manipulation, contributes significantly to the noisy results. Also, we find that after we append the adversarial sentence, the average length of the adversarial paragraph is around 12 tokens more than the average length of the original one. We assume the increased paragraph length also has an impact on human performance." }, { "heading": "C ADVERSARIAL EXAMPLES", "text": "" }, { "heading": "C.1 ADVERSARIAL EXAMPLES FOR QA", "text": "" }, { "heading": "C.1.1 ADVERSARIAL EXAMPLES GENERATED BY ADVCODEC(SENT)", "text": "" }, { "heading": "C.1.2 ADVERSARIAL EXAMPLES GENERATED BY ADVCODEC(WORD)", "text": "" }, { "heading": "C.2 ADVERSARIAL EXAMPLES FOR CLASSIFICATION", "text": "" }, { "heading": "C.2.1 ADVERSARIAL EXAMPLES GENERATED BY ADVCODEC(SENT)", "text": "" }, { "heading": "C.2.2 ADVERSARIAL EXAMPLES GENERATED BY ADVCODEC(WORD)", "text": "Again, for both the scatter attack and the concat attack, the word-level manipulation does not take global (sentence-level) grammatical constraints into consideration; it is therefore expected to exhibit more “free” manipulation than AdvCodec(Sent) and to achieve a higher attack success rate at the expense of grammatical correctness." } ]
2019
null
SP:b5433e6f4dc436a4a15554124a790aa794d5dc0d
[ "The paper proposes adding two mechanisms to the BERT architecture for NLU. The first is based on integrating information from all layers of the encoder via a method called Squeeze and Excitation. The second uses Gaussian blurring to encourage information sharing among neighboring words. The proposed method improves modestly on BERT on the GLUE suite of problems. It also substantially improves on BERT with respect to a class of examples that are designed to confound models that learn superficial heuristics based on word occurrence.", "This paper proposes a novel BERT based neural architecture, SESAME-BERT, which consists of “Squeeze and Excitation” method and Gaussian blurring. “Squeeze and Excitation” method extracts features from BERT by calculating a weighted sum of layers in BERT to feed the feature vectors to a downstream classifier. To capture the local context of a word, they apply Gaussian blurring on output layers of the self-attention layer in BERT. The authors show their model’s performance on GLUE and HANS dataset." ]
Fine-tuning with pre-trained models has achieved exceptional results for many language tasks. In this study, we focused on one such self-attention network model, namely BERT, which has performed well in terms of stacking layers across diverse language-understanding benchmarks. However, in many downstream tasks, information between layers is ignored by BERT for fine-tuning. In addition, although self-attention networks are well-known for their ability to capture global dependencies, room for improvement remains in terms of emphasizing the importance of local contexts. In light of these advantages and disadvantages, this paper proposes SesameBERT, a generalized fine-tuning method that (1) enables the extraction of global information among all layers through Squeeze and Excitation and (2) enriches local information by capturing neighboring contexts via Gaussian blurring. Furthermore, we demonstrated the effectiveness of our approach on the HANS dataset, which is used to determine whether models have adopted shallow heuristics instead of learning underlying generalizations. The experiments revealed that SesameBERT outperformed BERT with respect to the GLUE benchmark and the HANS evaluation set.
[]
[ { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": null, "year": 2015 }, { "authors": [ "Zhaopeng Tu" ], "title": "Modeling localness for self-attention", "venue": null, "year": 2018 }, { "authors": [ "V. Ivan Sanchez Carmona", "Jeff Mitchell", "Sebastian Riedel" ], "title": "Behavior analysis of nli models: Uncovering the influence of three factors on robustness", "venue": null, "year": 2018 }, { "authors": [ "Ishita Dasgupta", "Demi Guo", "Andreas Stuhlmuller", "Samuel J. Gershman", "Noah D. Goodman" ], "title": "Evaluating compositionality in sentence embeddings", "venue": null, "year": 2018 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "venue": null, "year": 2019 }, { "authors": [ "Linyuan Gong", "Di He", "Zhuohan Li", "Tao Qin", "Liwei Wang", "Tieyan Liu" ], "title": "Efficient training of bert by progressively stacking", "venue": null, "year": 2019 }, { "authors": [ "Jian Li", "Baosong Yang", "Zi-Yi Dou", "Xing Wang", "Michael R. Lyu", "Zhaopeng Tu" ], "title": "Information aggregation for multi-head attention with routing-by-agreement", "venue": null, "year": 2019 }, { "authors": [ "R. Thomas McCoy", "Tal Linzen" ], "title": "Non-entailed subsequences as a challenge for natural language inference", "venue": "SCiL,", "year": 2019 }, { "authors": [ "R. Thomas McCoy", "Ellie Pavlick", "Tal Linzen" ], "title": "Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference", "venue": null, "year": 2019 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": null, "year": 2013 }, { "authors": [ "Alexander H. Miller", "Adam Fisch", "Jesse Dodge", "Amir-Hossein Karimi", "Antoine Bordes", "Jason Weston" ], "title": "Key-value memory networks for directly reading", "venue": null, "year": 2016 }, { "authors": [ "Aakanksha Naik", "Abhilasha Ravichander", "Norman Sadeh", "Carolyn Rose", "Graham Neubig" ], "title": "Stress test evaluation for natural language inference", "venue": null, "year": 2018 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher D. Manning" ], "title": "Glove: Global vectors for word representation", "venue": null, "year": 2014 }, { "authors": [ "Matthew E. Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": null, "year": 2018 }, { "authors": [ "Alec Radford", "Karthik Narasimhan", "Tim Salimans", "Ilya Sutskever" ], "title": "Improving language understanding by generative pre-training", "venue": null, "year": 2018 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": null, "year": 2017 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R. 
Bowman" ], "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "venue": null, "year": 2019 }, { "authors": [ "Jianyu Wang", "Zhishuai Zhang", "Cihang Xie", "Yuyin Zhou", "Vittal Premachandran", "Jun Zhu", "Lingxi Xie", "Alan Yuille" ], "title": "Visual concepts and compositional voting", "venue": "Annals of Mathematical Sciences and Applications,", "year": 2017 }, { "authors": [ "Baosong Yang", "Longyue Wang", "Derek Wong", "Lidia S. Chao", "Zhaopeng Tu" ], "title": "Convolutional self-attention networks", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "In recent years, unsupervised pretrained models have dominated the field of natural language processing (NLP). The construction of a framework for such a model involves two steps: pretraining and fine-tuning. During pretraining, an encoder neural network model is trained using large-scale unlabeled data to learn word embeddings; parameters are then fine-tuned with labeled data related to downstream tasks.\nTraditionally, word embeddings are vector representations learned from large quantities of unstructured textual data such as those from Wikipedia corpora (Mikolov et al., 2013). Each word is represented by an independent vector, even though many words are morphologically similar. To solve this problem, techniques for contextualized word representation (Peters et al., 2018; Devlin et al., 2019) have been developed; some have proven to be more effective than conventional word-embedding techniques, which extract only local semantic information of individual words. By contrast, pretrained contextual representations learn sentence-level information from sentence encoders and can generate multiple word embeddings for a word. Pretraining methods related to contextualized word representation, such as BERT (Devlin et al., 2019), OpenAI GPT (Radford et al., 2018), and ELMo (Peters et al., 2018), have attracted considerable attention in the field of NLP and have achieved high accuracy in GLUE tasks such as single-sentence, similarity and paraphrasing, and inference tasks (Wang et al., 2019). Among the aforementioned pretraining methods, BERT, a state-of-the-art network, is the leading method that applies the architecture of the Transformer encoder, which outperforms other models with respect to the GLUE benchmark. BERT’s performance suggests that self-attention is highly effective in extracting the latent meanings of sentence embeddings.\nThis study aimed to improve contextualized word embeddings, which constitute the output of encoder layers to be fed into a classifier. We used the original method of the pretraining stage in the BERT model. During the fine-tuning process, we introduced a new architecture known as Squeeze and Excitation alongside Gaussian blurring with symmetrically SAME padding (”SESAME” hereafter). First, although the developer of the BERT model initially presented several options for its use, whether the selective layer approaches involved information contained in all layers was unclear. In a previous study, by investigating relationships between layers, we observed that the Squeeze and Excitation method (Hu et al., 2018) is key for focusing on information between layer weights. This\nmethod enables the network to perform feature recalibration and improves the quality of representations by selectively emphasizing informative features and suppressing redundant ones. Second, the self-attention mechanism enables a word to analyze other words in an input sequence; this process can lead to more accurate encoding. The main benefit of the self-attention mechanism method is its high ability to capture global dependencies. Therefore, this paper proposes the strategy, namely Gaussian blurring, to focus on local contexts. We created a Gaussian matrix and performed convolution alongside a fixed window size for sentence embedding. Convolution helps a word to focus on not only its own importance but also its relationships with neighboring words. 
Through such focus, each word in a sentence can simultaneously maintain global and local dependencies.
We conducted experiments with our proposed method to determine whether the trained model could outperform the BERT model. We observed that SesameBERT yielded marked improvements across most GLUE tasks. In addition, we adopted a new evaluation set called HANS (McCoy et al., 2019), which was designed to diagnose the use of fallible structural heuristics, namely the lexical overlap heuristic, the subsequence heuristic, and the constituent heuristic. Models that apply these heuristics are guaranteed to fail on the HANS dataset. For example, although BERT scores highly on the given test set, it performs poorly on the HANS dataset; BERT may label an example correctly not based on reasoning regarding the meanings of sentences but rather by assuming that the premise entails any hypothesis whose words all appear in the premise (Dasgupta et al., 2018). By contrast, SesameBERT performs well on the HANS dataset; this implies that the model does not merely rely on heuristics. In summary, our final model proved to be competitive on multiple downstream tasks." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 UNSUPERVISED PRETRAINING IN NLP", "text": "Most related studies have used pretrained word vectors (Mikolov et al., 2013; Pennington et al., 2014) as the primary components of NLP architectures. This is problematic because word vectors capture semantics only from a word's surrounding text; therefore, a word has the same embedding in different contexts, even though its meaning may differ.
Pretrained contextualized word representations overcome the shortcomings of word vectors by capturing the meanings of words with respect to context. ELMo (Peters et al., 2018) can extract context-sensitive representations from a language model by using hidden states in stacked LSTMs. Generative pretraining (Radford et al., 2018) uses the ”Transformer encoder” rather than LSTMs to acquire textual representations for NLP downstream tasks; however, one limitation of this model is that it is trained to predict future contexts in a left-to-right, unidirectional manner. BERT (Devlin et al., 2019) involves a masked language modeling task and achieves high performance on multiple natural language-understanding tasks. In the BERT architecture, however, because the outputs of different layers encode a wide variety of information, the most appropriate pooling strategy depends on the case. Therefore, layer selection is a challenge in learning how to apply the aforementioned models." }, { "heading": "2.2 SQUEEZE AND EXCITATION", "text": "The Squeeze and Excitation method was introduced by Hu et al. (2018), who aimed to enhance the quality of representations produced by a network. Convolutional neural networks traditionally use convolutional filters to extract informative features from images. Such extraction is achieved by fusing the spatial and channel-wise information of the image in question. However, the channels of such networks' convolutional features have no interdependencies with one another. The network weighs each of its channels equally during the creation of output feature maps. Through Squeeze and Excitation, a network can take advantage of feature recalibration and use global information to emphasize informative features and suppress less important ones.
}, { "heading": "2.3 LOCALNESS MODELING", "text": "The self-attention network relies on an attention mechanism to capture global dependencies without considering their distances by calculating all the positions in an input sequence. Our Gaussianblurring method focuses on learning local contexts while maintaining a high ability to capture longrange dependencies. Localness modeling was considered a learnable form of Gaussian bias (Yang\net al., 2019) in which a central position and dynamic window are predicted alongside intermediate representations in a neural network. However, instead of using Gaussian bias to mask the logit similarity of a word, we performed Gaussian bias in the layer after the embedding layer to demonstrate that performing element-wise operations in this layer can improve the model performance." }, { "heading": "2.4 DIAGNOSING SYNTACTIC HEURISTICS", "text": "A recent study (McCoy et al., 2019) investigated whether neural network architectures are prone to adopting shallow heuristics to achieve success in training examples rather than learning the underlying generalizations that need to be captured. For example, in computer vision, neural networks trained to recognize objects are misled by contextual heuristics in cases of monkey recognition (Wang et al., 2017). For example, in the field of natural language inference (NLI), a model may predict a label that contradicts the input because the word ”not”, which often appears in examples of contradiction in standard NLI training sets, is present (Naik et al., 2018; Carmona et al., 2018). In the present study, we aimed to make SesameBERT robust with respect to all training sets. Consequently, our experiments used HANS datasets to diagnose some fallible structural heuristics presented in this paper (McCoy et al., 2019)." }, { "heading": "3 METHODS", "text": "We focused on BERT, which is the encoder architecture of a multilayer Transformer (Vaswani et al., 2017), featuring some improvements. The encoder consists of L encoder layers, each containing two sublayers, namely a multihead self-attention layer and a feed-forward network. The multihead mechanism runs through a scaled dot product attention function, which can be formulated by querying a dictionary entry with key value pairs (Miller et al., 2016). The self-attention input consists of a query Q ∈ Rl×d, a key K ∈ Rl×d, and a value V ∈ Rl×d, where l is the length of the input sentence, and d is the dimension of embedding for query, key and value. For subsequent layers, Q, K, V comes from the output of the previous layer. The scaled dot product attention (Vaswani et al., 2017) is defined as follows:\nAttention(Q,K,V ) = softmax( QKT√\nd ) · V (1)\nThe output represents the multiplication of the attention weights A and the vector v, where A = softmax(QK T\n√ d ) ∈ Rl×l. The attention weights Ai,j enabled us to better understand about the importance of the i-th key-value pairs with respect to the j-th query in generating the output (Bahdanau et al., 2015). During fine-tuning, We used the output encoder layer from the pretrained BERT model to create contextualized word embeddings and feed these embeddings into the model. Although several methods have been developed for extracting contextualized embeddings from various layers, we believed that these methods had substantial room for improvement. Therefore, we used Squeeze and Excitation to solve the aforementioned problem." 
}, { "heading": "3.1 SQUEEZE AND EXCITATION", "text": "In this study, we proposed the application of Squeeze and Excitation (Hu et al., 2018); its application to the output of the encoder layer was straightforward once we realized that the number of channels was equivalent to the number of layers. Therefore, we intended to use the term channels and layers interchangeably.\nFirst, we defined U:,:,k as the output of the k-th encoder layer, for all 1 ≤ k ≤ n. We wanted to acquire global information from between the layers before feeding the input into the classifier; therefore, we concatenated all the output from each encoder layer to form the feature maps U ∈ Rl×d×n. In the squeeze step, by using global average pooling on the kth layer, we were able to squeeze the global spatial information into a layer descriptor. In other words, we set the kth layer’s output of the squeeze function as Z:,:,k.\nZ:,:,k = fsq(Uk) = 1\nl × d l∑ i=1 d∑ j=1 Ui,j,k (2)\nIn the excitation step, we aimed to fully capture layer-wise dependencies. This method uses the layer-wise output of the squeeze operation fsq to modulate interdependencies of all layers. Excitation is a gating mechanism with a sigmoid activation function that contains two fully connected layers. Let W1 and W2 be the weights of the first and second fully connected layers, respectively, and let r be the bottleneck in the layer excitation that encodes the layer-wise dependencies; therefore, W1 ∈ Rn× n r , and W2 ∈ R n r×n. The excitation function fex:\ns = fex(z) = σ(ReLU(z,W1),W2) (3) where z is the vector squeezed from tensor Z.\nFinally, we rescaled the output Z:,:,k by multiplying it by sk. The rescaled output is deonted as ũk. The scaling function fscale is defined as follows:\nũk = fscale(sk,U:,:,k) (4) We concatenated all rescaled outputs from all encoder layers to form our rescaled feature maps ũ. The architecture is shown in Figure 1. We then extracted layers from the rescaled feature maps, or calculated a weighted average layer ũavg .\nũavg = ∑n k=1 fscale(sk,U:,:,k)∑n\nk=1 sk (5)" }, { "heading": "3.2 GAUSSIAN BLURRING", "text": "Given an input sequenceX = {x1, x2, ..., xl} ∈ Rl×d, the model transformed it into queries Q, keys K, and values V , where Q,K, and V ∈ Rl×d. Multihead attention enabled the model to jointly attend to information from different representation subspaces at different positions. Thus, the three types of representations are split into h subspaces of size dh to attend to different information. For example, Q = (Q1,Q2, ...,Qh) with Qi ∈ Rl× dh for all 1 ≤ i ≤ h. In each subspace h, the element ohi in the output sequence O h = (oh1 , o h 2 , ..., o h l ) is computed as follows:\nohi = Attention(q h i ,K h)V h (6)\nwhere ohi ∈ R d h .\nTo capture the local dependency related to each word, we first used a predefined fixed window size k to create a Gaussian blur g, where g ∈ Rk:\ng(x;σ, k) = exp( −(x− bk2 c) 2\n2σ2 ) (7)\nwhere σ refers to the standard deviation. Several Gaussian-blurring strategies are feasible for applying convolutional operations to attention outputs." }, { "heading": "3.2.1 GAUSSIAN BLURRING ON ATTENTION OUTPUTS", "text": "The first strategy focuses on each attention output Oh. 
" }, { "heading": "3.2 GAUSSIAN BLURRING", "text": "Given an input sequence $X = \{x_1, x_2, ..., x_l\} \in \mathbb{R}^{l \times d}$, the model transforms it into queries $Q$, keys $K$, and values $V$, where $Q, K, V \in \mathbb{R}^{l \times d}$. Multihead attention enables the model to jointly attend to information from different representation subspaces at different positions. Thus, the three types of representations are split into $h$ subspaces of size $\frac{d}{h}$ to attend to different information. For example, $Q = (Q_1, Q_2, ..., Q_h)$ with $Q_i \in \mathbb{R}^{l \times \frac{d}{h}}$ for all $1 \le i \le h$. In each subspace $h$, the element $o_i^h$ in the output sequence $O^h = (o_1^h, o_2^h, ..., o_l^h)$ is computed as follows:

$o_i^h = \mathrm{Attention}(q_i^h, K^h) V^h \quad (6)$

where $o_i^h \in \mathbb{R}^{\frac{d}{h}}$.

To capture the local dependency related to each word, we first used a predefined fixed window size $k$ to create a Gaussian blur $g$, where $g \in \mathbb{R}^k$:

$g(x; \sigma, k) = \exp\left(\frac{-\left(x - \lfloor \frac{k}{2} \rfloor\right)^2}{2\sigma^2}\right) \quad (7)$

where $\sigma$ refers to the standard deviation. Several Gaussian-blurring strategies are feasible for applying convolutional operations to attention outputs." }, { "heading": "3.2.1 GAUSSIAN BLURRING ON ATTENTION OUTPUTS", "text": "The first strategy focuses on each attention output $O^h$. We restrict $\hat{O}^h_{i,j,:}$ to a local scope with a fixed size $k$ centered at position $i$ and dimension $j$, where $1 \le j \le d$, and $k$ can be any odd number between 1 and $l$, expressed as follows:

$\hat{O}^h_{i,j,:} = \left[O^h_{i - \lfloor \frac{k}{2} \rfloor, j}, \; ..., \; O^h_{i,j}, \; ..., \; O^h_{i + \lfloor \frac{k}{2} \rfloor, j}\right] \quad (8)$

We then enhance the localness of $\hat{O}^h_{i,j,:}$ through a parameter-free 1D convolution operation with $g$:

$\tilde{O}^h_{i,j} = \hat{O}^h_{i,j,:} \cdot g \quad (9)$

The final attention output is $\tilde{O}^h$, which is the dot product between the Gaussian kernel and the corresponding input array elements at every position of $\hat{O}^h_{i,j,:}$:

$\tilde{O}^h = O^h * g \quad (10)$

where $*$ denotes the convolution operation, as illustrated in Figure 2.

More specifically, $\tilde{O}^h_{i,j}$, the entry of $\tilde{O}^h$ in the $i$-th row and $j$-th column, equals $\mathrm{blur}(O^h_{i,j})$:

$\tilde{O}^h_{i,j} = \mathrm{blur}(O^h_{i,j}) = \sum_{x \in [-k, k]} g(x; \sigma, k) \, O_{i+x,j} = \sum_{x \in [-k, k]} g(x; \sigma, k) \sum_{l} A_{i+x,l} V_{l,j} \quad (11)$" }, { "heading": "3.2.2 GAUSSIAN BLURRING ON VALUES", "text": "Another option focuses on the values $V$. We applied the aforementioned method again but restricted $V^h$ to a local scope. The final attention output $\tilde{O}^h$ is denoted as follows:

$\tilde{O}^h = \mathrm{Attention}(Q^h, K^h)(V^h * g) \quad (12)$

The difference between the two strategies is that performing Gaussian blurring on the attention outputs places greater emphasis on the interaction of cross-query vectors, whereas performing it on the values focuses on cross-value vectors. Finally, the outputs of the $h$ attention heads are concatenated to form the final output representation $\tilde{O}$:

$\tilde{O} = (\tilde{O}^1, \tilde{O}^2, ..., \tilde{O}^h) \quad (13)$

where $\tilde{O} \in \mathbb{R}^{l \times d}$. The multihead mechanism enables each head to capture distinct linguistic input properties (Li et al., 2019). Furthermore, because our model is based on BERT, which builds an encoder framework with a stack of 12 layers, we were able to apply locality modeling to all layers through Squeeze and Excitation. Therefore, we expected that the global information and local properties captured by all layers could be exploited.
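A minimal NumPy sketch of both blurring strategies follows; the zero padding at the sequence boundaries and the toy shapes are our assumptions for illustration, since the text does not specify the boundary handling:

```python
import numpy as np

def gaussian_kernel(k, sigma):
    """Eq. (7): unnormalized Gaussian over window positions x = 0, ..., k-1."""
    x = np.arange(k)
    return np.exp(-((x - k // 2) ** 2) / (2.0 * sigma ** 2))

def blur_along_sequence(O, k, sigma):
    """Eqs. (8)-(10): parameter-free 1D convolution of each feature column with g."""
    g = gaussian_kernel(k, sigma)
    pad = k // 2
    O_pad = np.pad(O, ((pad, pad), (0, 0)))          # zero-pad the sequence dimension
    out = np.empty_like(O)
    for i in range(O.shape[0]):                      # slide the local scope over positions
        out[i] = g @ O_pad[i:i + k]                  # dot product of kernel and local scope
    return out

l, d, k, sigma = 6, 8, 3, 0.5
rng = np.random.default_rng(0)
O = rng.standard_normal((l, d))                      # one head's attention output O^h
O_blur = blur_along_sequence(O, k, sigma)            # strategy of Section 3.2.1
V = rng.standard_normal((l, d))
V_blur = blur_along_sequence(V, k, sigma)            # strategy of Section 3.2.2 blurs V instead
print(O_blur.shape, V_blur.shape)                    # (6, 8) (6, 8)
```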
" }, { "heading": "4 EXPERIMENTS", "text": "We evaluated the proposed SesameBERT model by conducting multiple classification tasks. For comparison with the results of a previous study on BERT (Devlin et al., 2019), we reimplemented the BERT model in TensorFlow in our experiments.¹ In addition, we set most of the parameters to be identical to those in the original BERT model, namely, batch size: 16, 32; learning rate: 5e-5, 3e-5, 2e-5; and number of epochs: 3, 4. All of the results in this paper on the nine GLUE datasets can be replicated in no more than 12 hours with a single graphics processing unit. We trained all of the models in the same computation environment with an NVIDIA Tesla V100 graphics processing unit." }, { "heading": "4.1 GLUE DATASETS", "text": "The GLUE benchmark is a collection of nine natural language understanding tasks, including question answering, sentiment analysis, identification of textual similarities, and recognition of textual entailment (Wang et al., 2019). The GLUE datasets were employed because they are a set of tools used to evaluate the performance of models across a diverse set of existing NLU tasks. The datasets and metrics used for the experiments in this study are detailed in Appendix A." }, { "heading": "4.2 HANS DATASET", "text": "We used a new evaluation set, namely the HANS dataset, to diagnose the fallible structural heuristics presented in a previous study (McCoy et al., 2019) based on syntactic properties. More specifically, models might apply accurate labels not based on reasoning regarding the meanings of words but rather by assuming that the premise entails any hypothesis whose words all appear in the premise (Dasgupta et al., 2018; Naik et al., 2018). Furthermore, an instance that contradicts the lexical overlap heuristic in MNLI is likely too rare to prevent a model from learning the heuristic. Models may learn to assume that a label is contradictory whenever a negation word is contained in the premise but not the hypothesis (McCoy & Linzen, 2019). Therefore, whether a model scored well on a given test set because it relied on heuristics can be observed. For example, BERT performed well on MNLI tasks but poorly on the HANS dataset; this finding suggests that the BERT model employs the aforementioned heuristics.

The main difference between the MNLI and HANS datasets is their numbers of labels. The MNLI dataset has three labels, namely Entailment, Neutral, and Contradiction. In the HANS dataset, instances labeled as Contradiction or Neutral are translated into Non-entailment. Therefore, this dataset has only two labels: Entailment and Non-entailment. The HANS dataset targets three heuristics, namely Lexical overlap, Subsequence, and Constituent, with more details given in Appendix B. This dataset not only serves as a tool for measuring progress in this field but also enables the visualization of interpretable shortcomings in models trained using MNLI." }, { "heading": "4.3 RESULTS", "text": "" }, { "heading": "4.3.1 GLUE DATASETS RESULTS", "text": "This subsection provides the experimental results of the baseline model and the models trained using our proposed method. We performed Gaussian blurring on attention outputs in this experiment. In addition, we employed a batch size of 32, a learning rate of 3e-5, and 3 epochs over the data for all GLUE tasks. We fine-tuned the SesameBERT model on the 9 downstream tasks in the datasets. For each task, we performed fine-tuning with Gaussian blur kernel sigmas of 1e-2, 1e-1, 3e-1, and 5e-1 and selected the value with the most favorable performance on the dev set. Because the GLUE datasets do not distribute labels for test sets, we uploaded our predictions to the GLUE server for evaluation. The results are presented in Table 1; the GLUE benchmark results are provided for reference. In most tasks, our proposed method outperformed the original BERT-Base model (Devlin et al., 2019). For example, in the RTE and AX datasets, SesameBERT yielded improvements of 1.2% and 1.6%, respectively. We also conducted experiments on the GLUE datasets to test the effects of Gaussian blurring alongside BERT on the value layer and context layer. Table 2 shows the accuracy on the dev set. The performance of Gaussian blurring with respect to self-attention layers varied among cases.

¹Our code will be released upon acceptance.

Gong et al. (2019) demonstrated that different layers vary in terms of their abilities to distinguish and capture neighboring positions and global dependencies between words. We evaluated the weights learned from all layers; a heavier weight indicates greater importance. The results are shown in Appendix C. Because the lower layers represent word embeddings that are deficient in terms of context (Yang et al., 2018), the self-attention model in the lower layers may need to encode representations with global context and may struggle to learn localness. Table 3 shows the accuracy obtained when extracting the attention output from each layer.
The results indicate that the lower layers had lower accuracy.

We performed three ablation studies. First, we examined the performance of our method without blurring; we observed that Squeeze and Excitation helped the higher layers. This trend suggests that higher layers benefit more from Squeeze and Excitation than do lower layers. Second, we analyzed the effect of Gaussian blurring on the context layer. The results reveal that the method with blurring achieved higher accuracy in lower layers. We assume that capturing short-range dependencies among neighboring words in lower layers is an effective strategy. Even if self-attention models capture long-range dependencies beyond phrase boundaries in higher layers, modeling localness remains helpful. Finally, we observed the direct effects of SesameBERT. Although our proposed architecture performed poorly in lower layers, it outperformed the other methods in higher layers. This finding indicates that in higher layers, using Squeeze and Excitation alongside Gaussian blurring helps self-attention models capture global information from all layers." }, { "heading": "4.3.2 HANS DATASET RESULTS", "text": "We trained both BERT and SesameBERT on the MNLI-m dataset to evaluate their classification accuracy. Similar to the results of another study (Devlin et al., 2019), BERT achieved 84.6% accuracy, which is higher than that of SesameBERT, as shown in Table 1. On the HANS dataset, we explored the effects of the two models on each type of heuristic. The results are presented in Figure 3; we first examined the heuristics for which the label was Entailment. Both models performed well; they assigned the correct labels almost 100% of the time, as we had expected them to do after adopting the heuristics targeted by HANS.

Next, we evaluated the heuristics labeled as Non-entailment. BERT performed poorly in all three cases, meaning that BERT assigned labels based on heuristics instead of applying the correct rules of inference. By contrast, our proposed method performed almost three times as well as BERT in the case of \"Lexical overlap\".

This paper argues that capturing local contexts for self-attention networks with Gaussian blurring can prevent models from easily adopting heuristics. Although our models performed poorly in the cases of \"Subsequence\" and \"Constituent\", both of these heuristics may be hierarchical cases of the lexical overlap heuristic, meaning that performance on this hierarchy would not necessarily match the performance of our models (McCoy et al., 2019)." }, { "heading": "5 CONCLUSION", "text": "This paper proposes a fine-tuning approach named SesameBERT, based on the pretrained model BERT, to improve the performance of self-attention networks. Specifically, we aimed to find high-quality attention output layers and then extract information from all layers through Squeeze and Excitation. Additionally, we adopted Gaussian blurring to help capture local contexts. Experiments using the GLUE datasets revealed that SesameBERT outperformed the BERT baseline model. The results also revealed the weight distributions of different layers and the effects of applying different Gaussian-blurring approaches when training the model. Finally, we used the HANS dataset to determine whether our models were learning what we wanted them to learn rather than exploiting shallow heuristics. We highlighted SesameBERT's handling of the lexical overlap heuristic as an advantage over the BERT model.
SesameBERT could be further applied to prevent models from easily adopting shallow heuristics." }, { "heading": "A DESCRIPTIONS OF GLUE DATASETS", "text": "" }, { "heading": "B DESCRIPTION OF HANS DATASET", "text": "" }, { "heading": "C LAYER WEIGHTS CALCULATED BY SQUEEZE AND EXCITATION", "text": "" } ]
2019
null
SP:bfdade05a90180a4b30d9593598b9d7c2a9e0533
[ "The search/design space of neural network architectures is vast, so researchers tend to use simple heuristics to guide their designs; alternatively neural architecture search methods may minimise heuristics in order to remove bias within the search. The authors propose applying a few simple heuristics in the form of \"templates\" for the number of convolutional filters across the different layers within an architecture. Apart from reversing the filter distribution, which starts a network with a large amount of filters and then reduces them (given how the filter number is generally increased with depth and reduction in spatial resolution), the authors also propose using the same number of filters per layer (\"uniform\"), as well as \"quadratic\" and \"negative quadratic\" distributions. There does not seem to be any particular motivation for these patterns, but this is at least better than poor justifications.", "This paper presents a simple methodological study on the effect of the distribution of convolutional filters on the accuracy of deep convolutional networks on the CIFAR 10 and CIFAR 100 data sets. There are five different kind of distributions studied: constant number of filters, monotonically increasing and decreasing number of filters and convex/concave with a local extremum at the layer in the middle. For these distributions, the total number of filters is varied to study the trade-off between running-time vs. accuracy, memory vs. accuracy and parameter count vs. accuracy." ]
Automatic neural network discovery methods face an enormous challenge caused by the size of the search space. A common practice is to split this space at different levels and to explore only a fraction of it. On one hand, neural architecture search methods look at how to combine a subset of layers to create an architecture while keeping a predefined number of filters in each layer. On the other hand, pruning techniques take a well-known architecture and look for the appropriate number of filters per layer. In both cases, the exploration is made iteratively, training models several times during the search. Inspired by the constraints and advantages of these two approaches, we propose a straightforward and fast option to find models with improved characteristics. We apply a small set of templates, which have been heuristically and experimentally evaluated, to make a one-shot redistribution of the number of filters in an already existing neural network. When compared to the initial base models, we found that the resulting architectures, when trained from scratch, surpass the original accuracy even after being reduced to fit the original amount of resources. Specifically, we show an accuracy improvement of up to 5.5% for some network-task pairs, a reduction of up to 45% in parameters, and a 60% reduction in memory footprint.
[]
[ { "authors": [ "Joseph Lin Chu", "Adam Krzyżak" ], "title": "Analysis of feature maps selection in supervised learning using convolutional neural networks", "venue": "In Canadian Conference on Artificial Intelligence,", "year": 2014 }, { "authors": [ "Thomas Elsken", "Jan Hendrik Metzen", "Frank Hutter" ], "title": "Neural architecture search: A survey", "venue": "arXiv preprint arXiv:1808.05377,", "year": 2018 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "arXiv preprint arXiv:1803.03635,", "year": 2018 }, { "authors": [ "Kunihiko Fukushima" ], "title": "Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position", "venue": "Biological cybernetics,", "year": 1980 }, { "authors": [ "Ariel Gordon", "Elad Eban", "Ofir Nachum", "Bo Chen", "Hao Wu", "Tien-Ju Yang", "Edward Choi" ], "title": "Morphnet: Fast & simple resource-constrained structure learning of deep networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Yang He", "Ping Liu", "Ziwei Wang", "Zhilan Hu", "Yi Yang" ], "title": "Filter pruning via geometric median for deep convolutional neural networks acceleration", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Andrew G Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications", "venue": "arXiv preprint arXiv:1704.04861,", "year": 2017 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Angel Fernando Kuri-Morales" ], "title": "The best neural network architecture", "venue": "In Mexican International Conference on Artificial Intelligence,", "year": 2014 }, { "authors": [ "Guillaume Leclerc", "Manasi Vartak", "Raul Castro Fernandez", "Tim Kraska", "Samuel Madden" ], "title": "Smallify: Learning network size while training", "venue": "arXiv preprint arXiv:1806.03723,", "year": 2018 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Chenxi Liu", "Barret Zoph", "Maxim Neumann", "Jonathon Shlens", "Wei Hua", "Li-Jia Li", "Li Fei-Fei", "Alan Yuille", "Jonathan Huang", "Kevin Murphy" ], "title": "Progressive neural architecture search", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "Darts: Differentiable architecture search", "venue": "arXiv preprint arXiv:1806.09055,", "year": 2018 }, { "authors": [ "Zhuang Liu", "Mingjie Sun", "Tinghui Zhou", "Gao Huang", "Trevor Darrell" ], "title": "Rethinking the value of network pruning", "venue": "arXiv preprint arXiv:1810.05270,", "year": 2018 }, { "authors": [ "Nicolas Pinto", "David 
Doukhan", "James J DiCarlo", "David D Cox" ], "title": "A high-throughput screening approach to discovering good forms of biologically inspired visual representation", "venue": "PLoS computational biology,", "year": 2009 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International Journal of Computer Vision,", "year": 2015 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Mingxing Tan", "Bo Chen", "Ruoming Pang", "Vijay Vasudevan", "Mark Sandler", "Andrew Howard", "Quoc V Le" ], "title": "Mnasnet: Platform-aware neural architecture search for mobile", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Convolutional neural networks are built by stacking layers of neurons following the principle explained by Fukushima’s Neocognitron model (Fukushima, 1980). The Neocognitron design made neural networks invariant to shift in feature locations by arranging cells locally connected in a hierarchical architecture.\nInstead of connecting every neuron of the previous layer to all the neurons in the next layer, convolutional networks connections are only made to a small region. Given that different regions share the same weights, the layer can be implemented as a convolution operation using the set of shared weights known as kernel. The complete input is processed by shifting the kernel at uniform steps, normally overlapping parts of the input. To improve shift invariance, convolutional networks needs to rely less in the exact position of a feature and a simple solution is to have a lower resolution performed by averaging the values of neighbouring points in the image in a operation known as spatial subsampling.\nAn important consideration to create a convolutional network model is the number of filters, required at every layer. The Neocognitron implementation for example, keeps a fixed number of filters for each layer in the model. A very common practice has been to use a by-pyramidal architecture. The number of filters across the different layers is usually increased as the size of the feature maps decrease. This pattern was first proposed by LeCun et al. (1998) with the introduction of LeNet and can be observed in a diverse set of models such as VGG, ResNet and MobileNet (See Figure 5 in Appendix). Even models obtained from automatic model discovery, like NASNet, follow this principle inasmuch as neural network methods are mainly formulated to search for layers and connections.\nIt can be found in (LeCun et al., 1998) that the reason behind this progressive increase in the number of kernels is to compensated a possible loss of the representation caused by the spatial resolution reduction. In recent models, what seems to be the real reason is a practical issue (Chu & Krzyżak, 2014), to improve performance by keeping a constant number of operations in each layer.\nThe Pyramidal distribution of filters has perpetuated over two areas of model discovery. The methods in Automatic Neural Architecture Search (Liu et al., 2018a; Tan et al., 2019) explore the models built by combining a predefined set of layers, commonly with a pyramidal distribution of filters. On the other side, Network Pruning aims to reduce a model computational resources demands by selecting and removing weights that match some rule, commonly the closest to zero values, but starting from models that present this pyramidal distribution.\nTo the best of our knowledge, it remains unknown if this pyramidal distribution of filters is also beneficial to different aspects of model performances other than the number of operations. How the distribution of resources in a deep network model affect accuracy, memory footprint, inference time and model compression level are also of high interest in a multi-performing space that networks have to operate in.\nThis paper explores the topic by comparing models against versions of themselves with the same general structures but with the distribution of filters across the layers changed. 
We present a straightforward and fast-to-implement method for model discovery that takes on the goal of structured pruning methods, that is, finding the lowest number of filters for a model while maintaining accuracy. Our exploration technique tests only a small set of diverse filter distributions, which we call templates, and then reduces the model proportionally to match some resource budget. Our experiments show that by using our proposed templates, the resulting models keep accuracy comparable to the original models on classification tasks while presenting reductions in the number of parameters and/or memory footprint.

The contributions of this paper are the following: 1) it provides evidence that the pyramidal distribution of filters in convolutional network models is usually optimised for a distributed GPU operation across layers, and that simple changes to that distribution lead to improvements in metrics such as the number of parameters or memory footprint; 2) it highlights that most recent models, which have had more detailed tuning of the filter distribution, present resiliency in accuracy to changes in the filter distribution, a phenomenon that requires further research and explanation; 3) it shows that redistributing filters in a model and then applying a width multiplier operation can be seen as a pruning technique which produces smaller models than just applying the width multiplier to the original models; 4) it gives classical models a repositioning of their merit when measuring practical implementation resources including and beyond the number of operations; and 5) it offers a practical and fast alternative approach for model search that can work in addition to conventional architecture search and network pruning.

The rest of the paper is structured as follows: Section 2 explores the most recent methods to reduce the size of neural network architectures. Section 3 describes the set of templates for filter distribution and how to apply this change to a convolutional network model. Section 4 compares dissimilar allocations of filters and their effect on model performance. Finally, Section 5 briefly explains the findings derived from the experiments." }, { "heading": "2 RELATED WORK", "text": "The process of designing a neural network is a task that has largely been based on experience and experimentation, consuming a lot of time and computational resources. Of note are reference models such as VGG, ResNet, Inception and similar that have been developed entirely or with significant use of heuristics. With the increase in the use of neural networks, and particularly convolutional networks for computer vision problems, a mechanism to automatically find the best architecture has become a requirement in the field of deep learning. Although some works addressing automatic architecture generation were published several years ago (Pinto et al., 2009; Kuri-Morales, 2014), they did not provide results competitive with many hand-crafted architectures. However, current works are starting to lead the state of the art (Elsken et al., 2018).

But even with automatic methods, one key feature that has constantly been adopted is the selection of the number of filters in each layer of the final model. The filters are set in such a way as to have an increasing number as the layers go deeper.
Pruning methods have done some work in this field, but under the belief that the weights obtained at the end of the training process are important to the pruning method.

One common characteristic of many model discovery methods is that the search process is very time consuming, normally on the order of thousands of GPU hours. In this sense, a remarkable improvement is presented by Liu et al. (2018b). That method uses a relaxation condition to transform the selection of layers in the architecture into a continuous space using a softmax function. The relaxation allows performing a simultaneous search of weights and architecture using gradient descent. The method converges after one day of GPU time.

On the side of pruning methods, the search also involves training models for several iterations to select the correct weights to remove (Frankle & Carbin, 2018; He et al., 2019), or at least increasing the computation during training when doing joint training and search (Leclerc et al., 2018). Recently, Liu et al. (2018c) suggested that the accuracy obtained by pruning techniques can be matched by training from scratch.

Our work relates to (Gordon et al., 2018) in the sense that their method is not restricted to reducing filters but can also increase them to see if the increment is beneficial. Our approach differs, however, because it does not require training the model other than in the final stage, after making some predefined changes to the number of filters using a redistribution template." }, { "heading": "3 FILTER DISTRIBUTION TEMPLATES", "text": "Recent pruning methods, such as Gordon et al. (2018) and Leclerc et al. (2018), have shown different filter distribution patterns emerging when reducing models like VGG, patterns that defy the notion of the pyramidal design as the best distribution for a model. This is a motivational insight into what other distributions can and should be considered when designing models. On one side, the combinatorial space of distributions makes this a challenging exploration; on the other, however, it importantly highlights the need to pursue such exploration if gains in accuracy and overall performance can be made. In this work, rather than attempting to find the optimal filter distribution with expensive automatic pruning or growing techniques, we propose to first adjust the filters of a convolutional network model via a small number of pre-defined templates. These templates, such as those depicted in Figure 1, are inspired by existing models that have already been found to perform well and are thus candidates that could be beneficial for model performance beyond the number of operations. Performance criteria such as accuracy, memory footprint, inference time and model compression level are arguably as important as the number of operations required.

In particular, we adopt as one template a distribution with a fixed number of filters, as in the original Neocognitron design, but also other templates inspired by the patterns found in (Gordon et al., 2018), where at least three behaviours are present in different blocks of the resulting ResNet101 model: 1) filters increase in deeper layers, 2) filters agglomerate in the centre, and 3) filters are reduced in the centre of the block. Leclerc et al. (2018) also show a filter pattern with more filters in the centre of a VGG model.
Based on these observations, we define the templates we use in this work.

We define a convolutional neural network base model as a set of layers $l = 1, ..., D+1$, each of them containing a number of filters $f_l$, with a total number of filters $F = \sum_{l=1}^{D} f_l$. We want to test whether the common heuristic of distributing $F$ such that $f_{l+1} = 2 f_l$ each time the feature map is halved is advantageous to the model over other distributions of $f_l$ when evaluating performance, memory footprint and inference time.

It should be noted that the final layer $D+1$ keeps the number of filters determined by the task under evaluation; therefore, it is not taken into account in the equations. Another important consideration is that, in architectures composed of modules or blocks (ResNet and Inception), it is easier to change the number of filters in the module as a whole than to change the filters in each particular layer inside the module and then ensure that concatenations and additions from the previous layers match the correct number of filters. We have therefore adopted the convention of treating blocks as single layers when setting the values of $f_l$. As a result, a final ResNet or Inception module marked with $f_l$ filters is set to $f_l$ filters in each layer inside the module.

Uniform Template. The most immediate distribution to evaluate is, as in the original Neocognitron, a uniform distribution of filters. Computing the number of filters for a uniform distribution is straightforward: adding up the filters in each layer of the base model and dividing them by the number of layers gives the number of filters to be set in each of them. Formally, we compute the new number of filters in each layer as $f'_l = F/D \;\; \forall l \in \{1, ..., D\}$. In this way, changing the distribution for a VGG19 model built exclusively with sixteen convolutional layers, one final unchangeable fully connected layer and a total number of filters $F = 5504$ produces a model with $f'_l = 5504/16 = 344$ filters in each layer.

Reverse Template. Another straightforward transformation of the filter distribution adopted in this paper is reversing the number of filters in every layer. Our final model with this template is defined by the filters $f'_l = f_{D-l+1}$.

Quadratic Template. The third distribution we evaluated is characterised by a quadratic equation $f'_l = a l^2 + b l + c$ and, consequently, has a parabolic shape with the vertex at the middle layer. We set this layer to the minimal number of filters in the base model, $f_{min} = \min_{l \in \{1,...,D\}} f_l$, so the number of filters satisfies $f'_{D/2} = f_{min}$. Also, the maximum value is reached at both the initial and final convolutional layers, thus $f'_1 = f'_D$.

To compute the new number of filters in each layer we solve the system of linear equations given by i) the restriction on the total number of filters, $\sum_{l=1}^{D} f'_l = \sum_{l=1}^{D} (a l^2 + b l + c) = F$, which can be reduced to $\left(\frac{D^3}{3} + \frac{D^2}{2} + \frac{D}{6}\right) a + \left(\frac{D^2}{2} + \frac{D}{2}\right) b + D c = F$; ii) the equation produced by the value at the vertex, $f'_{D/2} = \frac{D^2}{4} a + \frac{D}{2} b + c = f_{min}$; and iii) the equality of the maximum values, which reduces to $(D^2 - 1) a + (D - 1) b = 0$.

Negative Quadratic Template. The final template is also a parabola, but with the vertex at a maximum, that is, a negative quadratic curve. The equation is the same quadratic equation as in the previous template, but the restrictions change. Instead of defining a value at the vertex, $f'_l$ at the initial and final convolutional layers is set to the minimal number of filters in the base model, $f'_l = f_{min}$ for $l \in \{1, D\}$. The number of filters in each layer is computed again from a system of equations specified by i) the restriction on the total number of filters, as in the quadratic template, and the two points already known at the first and last convolutional layers, defined by ii) $a + b + c = f_{min}$ and iii) $D^2 a + D b + c = f_{min}$.
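As a concrete illustration, a minimal NumPy sketch of these template computations follows. It is our reconstruction from the equations above; in particular, the function name and the need to round the resulting real-valued filter counts to integers are assumptions of ours rather than details stated in the text:

```python
import numpy as np

def template_filters(f, template):
    """f: list of per-layer filter counts of the base model (excluding the task layer)."""
    f = np.asarray(f, dtype=float)
    D, F, f_min = len(f), f.sum(), f.min()
    sum_l, sum_l2 = D * (D + 1) / 2, D * (D + 1) * (2 * D + 1) / 6
    l = np.arange(1, D + 1)
    if template == "uniform":
        return np.full(D, F / D)
    if template == "reverse":
        return f[::-1]
    if template == "quadratic":       # minimum at l = D/2, equal endpoints
        A = np.array([[sum_l2, sum_l, D],
                      [D * D / 4.0, D / 2.0, 1.0],
                      [D * D - 1.0, D - 1.0, 0.0]])
        rhs = np.array([F, f_min, 0.0])
    else:                             # "negative_quadratic": endpoints at f_min
        A = np.array([[sum_l2, sum_l, D],
                      [1.0, 1.0, 1.0],
                      [D * D, D, 1.0]])
        rhs = np.array([F, f_min, f_min])
    a, b, c = np.linalg.solve(A, rhs)
    return a * l ** 2 + b * l + c

vgg19_filters = [64, 64, 128, 128] + [256] * 4 + [512] * 8   # 16 conv layers, F = 5504
print(template_filters(vgg19_filters, "uniform")[:3])        # [344. 344. 344.]
```

Rounding the outputs to integers recovers, for instance, the 344 filters per layer of the uniform VGG19 example above.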
Once the model has been readjusted with the new number of filters per layer, we use a width multiplier to test different levels of model compression, making evaluations comparable given that the change in the distribution of kernels modifies the number of parameters, memory consumption and speed. The width multiplier simply reduces or increases the new filter counts $f'_l$ proportionally in every layer." }, { "heading": "DATASETS", "text": "We trained on two datasets traditionally used for convolutional network evaluation: CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009). Both datasets contain a training set of 50,000 images and a test set of 10,000 images with a resolution of 32x32 and three colour channels. They were published for classification tasks with ten and one hundred classes, respectively. Some models were also tested on the Tiny-ImageNet dataset, which is a reduced version of the original ImageNet dataset with only 200 classes and images with a resolution of 64x64 pixels." }, { "heading": "CONVOLUTIONAL NETWORK MODELS", "text": "The state-of-the-art networks evaluated represent some of the highest-performing CNNs on the ImageNet challenge in previous years (Russakovsky et al., 2015). They have primarily been tested on classification tasks and have also demonstrated a strong ability to generalise to images outside the ImageNet dataset. Therefore, they are expected to perform well on the CIFAR datasets.

The VGG network architecture (Simonyan & Zisserman, 2014) is recognised for its simplicity. It is composed of sequential convolutional layers followed by max-pooling reduction layers. The final classification is managed by a fully connected layer and a softmax classifier. The main disadvantage of these networks is the size of their parameters. In this paper we use the version of the model with just one fully connected layer in the final classification section.

ResNet (He et al., 2016) succeeds at the problem of training very deep CNNs by reformulating the assumption that the network blocks are modelling a function closer to an identity mapping than to a zero mapping; therefore, it should be easier to learn differences with reference to an identity rather than to a zero mapping. This assumption is carried out by adding identity references (shortcut connections) at the end of the building blocks.

The Inception/GoogLeNet architecture (Szegedy et al., 2015) makes use of the Inception module, conceived as a multi-level feature extractor that allows the simultaneous extraction of features of several sizes within the same module of the network.

The MobileNet network (Howard et al., 2017) is built on depthwise separable convolutions, except for the first layer, which is a full convolution. All layers are followed by batch normalisation and a ReLU nonlinearity, with the exception of the final fully connected layer, which consists of a softmax layer for classification." }, { "heading": "4 MODELS COMPARISON UNDER SIZE, MEMORY FOOTPRINT AND SPEED", "text": "In this section we first investigate the effects of applying different templates to the global distribution of kernels in well-known convolutional neural network models (VGG, ResNet, Inception and MobileNet).
We compare models on the basis of size, memory and speed on two popular datasets for classification tasks.

In all experiments, models are fed with images using the common augmentation techniques of padding, random cropping and horizontal flipping. Our experiments were run on an NVIDIA Titan X Pascal 12GB GPU, adjusting the batch size to 16 samples so that the largest model scaling fits in GPU memory." }, { "heading": "TEMPLATE EFFECT OVER THE BASELINE MODELS", "text": "We conducted a first experiment to test our proposed templates on the selected architectures. All convolutional models, with and without templates, were trained for 160 epochs under the same conditions: stochastic gradient descent (SGD) with a scheduled learning rate starting at 0.1 for the first 80 epochs, 0.01 for the next 40 epochs and finally 0.001 for the remaining epochs.

The results are presented in Table 1. It is shown that for VGG, ResNet and MobileNet, the model accuracy improves on CIFAR-10 when the templates are applied. The Inception architecture presents the highest accuracy of all base models on both datasets, and templates are only able to change its accuracy by less than 1.5%. This is surprising given the drastic modifications that the model undergoes after the change of filter distribution. Models that share a sequential classical architecture, such as VGG and MobileNet, show a better improvement when using templates. We additionally performed an experiment with some of the models on the Tiny-ImageNet dataset (see Table 2). While there is a very modest reduction in parameters with VGG, a very remarkable accuracy improvement is produced in MobileNet.

When evaluated under other metrics (Table 3), models are affected differently by each template. The Reverse-Base, Uniform and Quadratic templates show some reductions in the number of parameters, while the Negative Quadratic template reduces memory usage. Inference time is affected negatively by the templates. This is an expected result, as the original models are designed to perform well on the GPU. The Inception model shows an improvement in speed, with a 14% reduction in inference time with respect to the base model. It is important to notice that a reduced number of parameters does not imply low memory consumption, nor a small inference time. Some of the causes are the difference in feature map resolution for filters in different layers, the need to keep early feature maps in memory for later layers, and the restrictions on improving parallelisation in the computational graph of the model." }, { "heading": "TEMPLATE EFFECT WITH SIMILAR RESOURCES", "text": "It can be argued that models obtained with templates make use of more resources such as memory or number of operations on the GPU (reflected in the lower inference speed). So, we formulated a second experiment that makes proportional changes to the models after applying the templates. We apply not only reductions to the models but also increments, in order to observe whether the actual total number of filters is adequate for the task the model is performing or whether the model accuracy could improve by adding more filters. Thus, we create curves for each template by applying a uniform scaling using a width multiplier with values of 1.6, 1.3, 1.0, 0.8, 0.5, 0.25, 0.1 and 0.05. These curves allow comparison under the same amount of resources as well as comparison of the resources used to produce the same accuracy. The experiment also shows the level of reduction that our models can tolerate without a significant loss in accuracy.
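A minimal sketch of this width-multiplier sweep, under our assumption that scaled filter counts are rounded to the nearest integer with a floor of one filter per layer, could look as follows:

```python
def apply_width_multiplier(filters, m):
    """Uniformly scale every layer's (template-redistributed) filter count by m."""
    return [max(1, round(m * f)) for f in filters]

multipliers = [1.6, 1.3, 1.0, 0.8, 0.5, 0.25, 0.1, 0.05]
base = [344] * 16                          # uniform-template VGG19 from Section 3
curves = {m: apply_width_multiplier(base, m) for m in multipliers}
print(curves[0.25][0])                     # 86 filters per layer at multiplier 0.25
```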
We add dashed lines to every plot as a reference for the model with the original distribution and no reductions, which corresponds to the point where the vertical and horizontal dashed lines cross. In general, any vertical line in the plot compares accuracy between models with the same amount of resources (parameters, memory or speed). On the other side, any horizontal line compares the resources taken by each model, under each template, to produce similar accuracy.

Evaluating model performance using accuracy and parameters is by far the default approach. We show model performance under these metrics in Figure 2. The VGG and MobileNet models improve in accuracy with almost any template on CIFAR-10 and CIFAR-100. Under reductions, their original accuracy can be reached with less than 25% of the original parameters in the two models. ResNet shows less improvement when compared with networks with similar resources, yet templates can reduce the model further before accuracy drops. Inception behaviour at the same resources remains similar no matter the template used. In general, for this test, the uniform template seems to achieve the best parameter efficiency for all the models.

We are convinced that for practical implementations, comparing parameters is not a good option. Table 3 has shown that models with a small number of parameters are not necessarily associated with a small memory footprint or higher speed; more results are presented in Figure 3 in the Appendix. We observe again that VGG and MobileNet accuracy is enhanced by templates. Memory consumption can be reduced by more than 50% in both models while producing the same accuracy. Under this metric, ResNet and Inception improve slightly on CIFAR-10 with the Negative-Quadratic template, but they perform worse with the rest of the templates. We attribute the lower memory efficiency to the fact that in all the templates except Negative-Quadratic, the number of filters is increased in the initial layers. At these layers, the feature maps produced by each filter are bigger and are therefore more memory costly.

One final comparison, also important for practical issues, is inference time. Our experiments show the pattern of improvement for VGG and MobileNet and a degradation of inference time when adopting templates in ResNet and Inception (see Figure 4 in the Appendix). In particular, Inception shows an improvement with the Negative-Quadratic template on CIFAR-10.

Looking only at the inference time results, applying templates can seem unpromising. However, we can take a different perspective: by sacrificing inference speed, it is possible to obtain models with better accuracy. This could be an unwanted trade-off, but it is one frequently made, as is clearly visible in the inference times of the different original models. For example, using ResNet improves accuracy compared to that obtained by VGG on the two datasets tested, but at the cost of increased inference time. On the contrary, seeking a speed enhancement, MobileNet sacrifices accuracy. In this sense, our templates are still competitive when compared to searching for a totally different model in order to improve accuracy.
We challenged this design by evaluating several architectures with a varied set of distributions on the CIFAR and Tiny-ImageNet datasets. Our results suggest that this pyramidal distribution of filters is not necessarily the best option for obtaining the highest accuracy or even the highest parameter efficiency.

The method presented allows a model architect to apply a set of templates for redistributing the number of filters originally assigned to each layer in existing convolutional network models before training them from scratch. This redesign and the following proportional reduction can be achieved without any previous training process to select particular weights. In essence, the application of filter redistribution templates offers an alternative and/or additional approach to iteration-intensive architecture search or model pruning.

Our experiments show that models with the same number of filters but a different distribution produced by our templates improve accuracy by up to 5.5% for some model-task pairs. Moreover, after being pruned uniformly, they can reach the same accuracy as the original models using fewer resources, with up to 45% fewer parameters and a memory footprint up to 60% smaller.

Results also reveal an interesting behaviour in the evaluated models: a strong resilience to changes in filter distribution. The variation in accuracy for all models after applying templates is less than 5%, even though the modifications to the distributions, and therefore to the original design, are considerable. This finding strengthens our belief that it is not worth exploring the whole space of filter distributions to find the best solution at the cost of training models for a large number of iterations, as this solution will possibly not be too distant from the baseline performance. However, it is possible to explore just a few distinct and easy-to-implement distributions, such as those represented by our templates, that produce benefits depending on the resource to be optimised.

Our work overall offers an additional tool to model designers, both automated and manual, and we hope it motivates further work on iteration-less methods and helps gather data to build an understanding of the design process for model-task pairs." }, { "heading": "ACKNOWLEDGMENTS", "text": "Removed for blind review" }, { "heading": "APPENDIX", "text": "" }, { "heading": "5.1 EXPERIMENTS COMPARING MODELS' MEMORY FOOTPRINT AND SPEED", "text": "" }, { "heading": "5.2 FILTER DISTRIBUTION FOR EXISTING CONVOLUTIONAL DEEP NETWORK MODELS", "text": "During the design of a convolutional deep network architecture, the number of filters required at every layer is selected. The Neocognitron model keeps the same number of filters across all layers of the model. A widely used distribution of filters forms a bi-pyramidal architecture: the number of filters across the different layers is usually increased as the size of the feature maps decreases. We present in Figure 5 the distributions for the VGG, ResNet, Inception and MobileNet models, which were tested in our experiments." } ]
2019
null
SP:47bcfdf057ce7e9fc379ea2e58b3c5e00b2b61a1
[ "This work presents the idea of deformable kernels (DKs). As opposed to rigid kernels in standard convolutional networks, DKs allow each of their grid locations to be moved around in a larger kernel field. The offset by which a DK grid cell is moved is computed conditioned on the input to the network. To motivate the idea of DKs, the authors give some background on convolution, receptive and effective receptive fields (ERFs). The authors argue that since ERFs are spatially porous and irregularly distributed, one way to model them is to convolve square grids of input with DKs, which are composed of samples drawn from larger kernels. The authors define the concept of global and local DKs. They further contrast DKs with spatial sampling (deformable convolutions) and argue that although conceptually similar, both approaches are complementary to each other and can be used in combination in practice. Numerical experiments show competitive performance of DKs on image classification and object detection tasks. In the end empirical analysis is performed to analyze the characteristics of DKs.", "This paper introduce a simple algorithm called deformable kernels. It learns to generate a collection of coordinate offset Δk for each of the convolutional kernel element. Then during convolution, the kernel is treated as a 2D regular grid and sampled (interpolated) according to the generated coordinate offset before applying to the inputs. An auxiliary shallow network is learned to generate those coordinate offsets based in inputs. This method is very similar to the existing \"deformable convolution\" algorithm, though this operate on the kernels instead. Numerical experiments on image classification and object detection tasks show that the method performs better or comparably to strong baselines. It boost the performance even more when combined with existing methods." ]
Convolutional networks are not aware of an object’s geometric variations, which leads to inefficient utilization of model and data capacity. To overcome this issue, recent works on deformation modeling seek to spatially reconfigure the data towards a common arrangement such that semantic recognition suffers less from deformation. This is typically done by augmenting static operators with learned free-form sampling grids in the image space, dynamically tuned to the data and task for adapting the receptive field. Yet adapting the receptive field does not quite reach the actual goal – what really matters to the network is the effective receptive field (ERF), which reflects how much each pixel contributes. It is thus natural to design other approaches to adapt the ERF directly during runtime. In this work, we instantiate one possible solution as Deformable Kernels (DKs), a family of novel and generic convolutional operators for handling object deformations by directly adapting the ERF while leaving the receptive field untouched. At the heart of our method is the ability to resample the original kernel space towards recovering the deformation of objects. This approach is justified with theoretical insights that the ERF is strictly determined by data sampling locations and kernel values. We implement DKs as generic drop-in replacements of rigid kernels and conduct a series of empirical studies whose results conform with our theories. Over several tasks and standard base models, our approach compares favorably against prior works that adapt during runtime. In addition, further experiments suggest a working mechanism orthogonal and complementary to previous works.
[ { "affiliations": [], "name": "OBJECT DEFORMATION" }, { "affiliations": [], "name": "Hang Gao" }, { "affiliations": [], "name": "Xizhou Zhu" }, { "affiliations": [], "name": "Steve Lin" }, { "affiliations": [], "name": "Jifeng Dai" } ]
[ { "authors": [ "Joan Bruna", "Stéphane Mallat" ], "title": "Invariant scattering convolution networks", "venue": null, "year": 2013 }, { "authors": [ "Taco Cohen", "Max Welling" ], "title": "Group equivariant convolutional networks", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Jifeng Dai", "Haozhi Qi", "Yuwen Xiong", "Yi Li", "Guodong Zhang", "Han Hu", "Yichen Wei" ], "title": "Deformable convolutional networks", "venue": null, "year": 2017 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In CVPR,", "year": 2009 }, { "authors": [ "Carlos Esteves", "Christine Allen-Blanchette", "Xiaowei Zhou", "Kostas Daniilidis" ], "title": "Polar transformer networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "James J Gibson" ], "title": "The perception of the visual world", "venue": null, "year": 1950 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch sgd: Training imagenet in 1 hour", "venue": "arXiv preprint arXiv:1706.02677,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Andrew G Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications", "venue": null, "year": 2017 }, { "authors": [ "Drew A Hudson", "Christopher D Manning" ], "title": "Learning by abstraction: The neural state machine", "venue": "arXiv preprint arXiv:1907.03950,", "year": 2019 }, { "authors": [ "Max Jaderberg", "Karen Simonyan", "Andrew Zisserman" ], "title": "Spatial transformer networks", "venue": "In NeurIPS,", "year": 2015 }, { "authors": [ "Xu Jia", "Bert De Brabandere", "Tinne Tuytelaars", "Luc V Gool" ], "title": "Dynamic filter networks", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Angjoo Kanazawa", "Abhishek Sharma", "David W. 
Jacobs" ], "title": "Locally scale-invariant convolutional neural networks", "venue": "In NeurIPS Workshop,", "year": 2016 }, { "authors": [ "Svetlana Lazebnik", "Cordelia Schmid", "Jean Ponce" ], "title": "Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories", "venue": "In CVPR,", "year": 2006 }, { "authors": [ "Xiang Li", "Wenhai Wang", "Xiaolin Hu", "Jian Yang" ], "title": "Selective kernel networks", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft coco: Common objects in context", "venue": "In ECCV,", "year": 2014 }, { "authors": [ "Tsung-Yi Lin", "Piotr Dollár", "Ross Girshick", "Kaiming He", "Bharath Hariharan", "Serge Belongie" ], "title": "Feature pyramid networks for object detection", "venue": null, "year": 2017 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Sgdr: Stochastic gradient descent with warm restarts", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "David G Lowe" ], "title": "Object recognition from local scale-invariant features", "venue": "In ICCV,", "year": 1999 }, { "authors": [ "Wenjie Luo", "Yujia Li", "Raquel Urtasun", "Richard Zemel" ], "title": "Understanding the effective receptive field in deep convolutional neural networks", "venue": "NeurIPS,", "year": 2016 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "venue": "NeurIPS,", "year": 2015 }, { "authors": [ "Ignacio Rocco", "Relja Arandjelovic", "Josef Sivic" ], "title": "Convolutional neural network architecture for geometric matching", "venue": null, "year": 2017 }, { "authors": [ "Mark Sandler", "Andrew Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": null, "year": 2018 }, { "authors": [ "Evan Shelhamer", "Dequan Wang", "Trevor Darrell" ], "title": "Blurring the line between structure and learning to optimize and adapt receptive fields", "venue": null, "year": 1904 }, { "authors": [ "Laurent Sifre", "Stéphane Mallat" ], "title": "Rotation, scaling and deformation invariant scattering for texture discrimination", "venue": "In CVPR,", "year": 2013 }, { "authors": [ "Hugues Thomas", "Charles R Qi", "Jean-Emmanuel Deschaud", "Beatriz Marcotegui", "François Goulette", "Leonidas J Guibas" ], "title": "Kpconv: Flexible and deformable convolution for point clouds", "venue": null, "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "Dequan Wang", "Evan Shelhamer", "Bruno Olshausen", "Trevor Darrell" ], "title": "Dynamic scale inference by entropy optimization", "venue": "arXiv preprint arXiv:1908.03182,", "year": 2019 }, { "authors": [ "Xiaolong Wang", "Ross Girshick", "Abhinav Gupta", "Kaiming He" ], "title": "Non-local neural networks", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Daniel E Worrall", "Stephan J Garbin", "Daniyar Turmukhambetov", "Gabriel J Brostow" ], "title": "Harmonic networks: Deep 
translation and rotation equivariance", "venue": null, "year": 2017 }, { "authors": [ "Saining Xie", "Ross Girshick", "Piotr Dollár", "Zhuowen Tu", "Kaiming He" ], "title": "Aggregated residual transformations for deep neural networks", "venue": null, "year": 2017 }, { "authors": [ "Yuwen Xiong", "Mengye Ren", "Renjie Liao", "Kelvin Wong", "Raquel Urtasun" ], "title": "Deformable filter convolution for point cloud reasoning", "venue": null, "year": 1907 }, { "authors": [ "Brandon Yang", "Gabriel Bender", "Quoc V Le", "Jiquan Ngiam" ], "title": "Soft conditional computation", "venue": "arXiv preprint arXiv:1904.04971,", "year": 2019 }, { "authors": [ "Richard Zhang" ], "title": "Making convolutional networks shift-invariant again", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Xizhou Zhu", "Han Hu", "Stephen Lin", "Jifeng Dai" ], "title": "Deformable convnets v2: More deformable, better results", "venue": null, "year": 2019 }, { "authors": [ "Goyal" ], "title": "2017), training is performed by SGD for 90 epochs with momentum 0.9 and batch size 256. We set our learning rate of 10−1 so that it linearly warms up from zero within first 5 epochs. A cosine training schedule is applied over the training epochs. We use scale and aspect ratio augmentation with color perturbation as standard data augmentations. We evaluate the performance of trained models on the ImageNet", "venue": "Image Classification:", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "The rich diversity of object appearance in images arises from variations in object semantics and deformation. Semantics describe the high-level abstraction of what we perceive, and deformation defines the geometric transformation tied to specific data (Gibson, 1950). Humans are remarkably adept at making abstractions of the world (Hudson & Manning, 2019); we see in raw visual signals, abstract semantics away from deformation, and form concepts.\nInterestingly, modern convolutional networks follow an analogous process by making abstractions through local connectivity and weight sharing (Zhang, 2019). However, such a mechanism is an inefficient one, as the emergent representations encode semantics and deformation together, instead of as disjoint notions. Though a convolution responds accordingly to each input, how it responds is primarily programmed by its rigid kernels, as in Figure 1(a, b). In effect, this consumes large model capacity and data modes (Shelhamer et al., 2019).\nWe argue that the awareness of deformations emerges from adaptivity – the ability to adapt at runtime (Kanazawa et al., 2016; Jia et al., 2016; Li et al., 2019). Modeling of geometric transformations has been a constant pursuit for vision researchers over decades (Lowe et al., 1999; Lazebnik et al., 2006; Jaderberg et al., 2015; Dai et al., 2017). A basic idea is to spatially recompose data towards a common mode such that semantic recognition suffers less from deformation. A recent work that ∗Equal contributions. Work is done when Hang and Xizhou are interns at Microsoft Research Asia.\nis representative of this direction is Deformable Convolution (Dai et al., 2017; Zhu et al., 2019). As shown in Figure 1(c), it augments the convolutions with free-form sampling grids in the data space. It is previously justified as adapting receptive field, or what we phrase as the “theoretical receptive field”, that defines which input pixels can contribute to the final output. However, theoretical receptive field does not measure how much impact an input pixel actually has. On the other hand, Luo et al. (2016) propose to measure the effective receptive field (ERF), i.e. the partial derivative of the output with respect to the input data, to quantify the exact contribution of each raw pixel to the convolution. Since adapting the theoretical receptive field is not the goal but a means to adapt the ERF, why not directly tune the ERF to specific data and tasks at runtime?\nToward this end, we introduce Deformable Kernels (DKs), a family of novel and generic convolutional operators for deformation modeling. We aim to augment rigid kernels with the expressiveness to directly interact with the ERF of the computation during inference. Illustrated in Figure 1(d), DKs learn free-form offsets on kernel coordinates to deform the original kernel space towards specific data modality, rather than recomposing data. This can directly adapt ERF while leaving receptive field untouched. The design of DKs that is agnostic to data coordinates naturally leads to two variants – the global DK and the local DK, which behave differently in practice as we later investigate. We justify our approach with theoretical results which show that ERF is strictly determined by data sampling locations and kernel values. Used as a generic drop-in replacement of rigid kernels, DKs achieve empirical results coherent with our developed theory. 
Concretely, we evaluate our operator with standard base models on image classification and object detection. DKs perform favorably against prior works that adapt at runtime. With both quantitative and qualitative analysis, we further show that DKs can work orthogonally and complementarily with previous techniques.
∗Equal contributions. This work was done while Hang and Xizhou were interns at Microsoft Research Asia." }, { "heading": "2 RELATED WORKS", "text": "We distinguish our work within the context of deformation modeling as our goal, and dynamic inference as our means.
Deformation Modeling: We refer to deformation modeling as learning geometric transformations in 2D image space without regard to 3D. One angle of attack on deformation modeling is to craft certain geometric invariances into networks. However, this usually requires designs specific to certain kinds of deformation, such as shift, rotation, reflection and scaling (Sifre & Mallat, 2013; Bruna & Mallat, 2013; Kanazawa et al., 2016; Cohen & Welling, 2016; Worrall et al., 2017; Esteves et al., 2018). Another line of work on this topic learns to recompose data by either semi-parameterized or completely free-form sampling in image space: Spatial Transformers (Jaderberg et al., 2015) learn 2D affine transformations, Deep Geometric Matchers (Rocco et al., 2017) learn thin-plate spline transformations, and Deformable Convolutions (Dai et al., 2017; Zhu et al., 2019) learn free-form transformations.
We interpret sampling in the data space as an effective approach to adapting effective receptive fields (ERFs) by directly changing the receptive field. At a high level, our Deformable Kernels (DKs) share intuitions with this line of work on learning geometric transformations, yet are instantiated by learning to sample in kernel space, which directly adapts the ERF while leaving the theoretical receptive field untouched. Kernel space sampling is also studied in Deformable Filters (Xiong et al., 2019) and KPConv (Thomas et al., 2019), but in their contexts, sampling grids are computed from input point clouds rather than learned from data corpora.
Dynamic Inference: Dynamic inference adapts the model or individual operators to the observed data. The computation of our approach differs from self-attention (Vaswani et al., 2017; Wang et al., 2018), in which linear or convolution modules are augmented with subsequent queries that extract from the same input. We consider our closest related works in terms of implementation to be those approaches that adapt convolutional kernels at runtime. These include, but are not limited to, Dynamic Filters (Jia et al., 2016), Selective Kernels (Li et al., 2019) and Conditional Convolutions (Yang et al., 2019). All of these approaches can learn and infer customized kernel spaces with respect to the data, but are either inefficient or loosely formulated. Dynamic Filters generate new filters from scratch, while Conditional Convolutions extend this idea to linear combinations of a set of synthesized filters. Selective Kernels are, on the other hand, comparably lightweight, but aggregating activations from kernels of different sizes is not as compact as directly sampling the original kernel space. Another line of work contemporary to ours (Shelhamer et al., 2019; Wang et al., 2019) composes free-form filters with structured Gaussian filters, which essentially transforms kernel spaces based on the data. Our DKs also differ from these works in their emphasis on directly adapting the ERF rather than the theoretical receptive field. 
As mentioned previously, the true goal should be to adapt the ERF, and to our knowledge, our work is the first to study dynamic inference of ERFs." }, { "heading": "3 APPROACH", "text": "We start by covering preliminaries on convolutions, including the definition of the effective receptive field (ERF). We then formulate a theoretical framework for analyzing ERFs, and thus motivate our Deformable Kernels (DKs). We finally elaborate different DK variants within such a framework. Our analysis suggests compatibility between DKs and the prior work." }, { "heading": "3.1 A DIVE INTO CONVOLUTIONS", "text": "2D Convolution: Let us first consider an input image I ∈ R^{D×D}. By convolving it with a kernel W ∈ R^{K×K} of stride 1, we obtain an output image O whose pixel value at each coordinate j can be expressed as
O_j = \sum_{k \in K} I_{j+k} W_k, (1)
by enumerating discrete kernel positions k within the support K = [−K/2, K/2]^2 ∩ Z^2. This defines a rigid grid for sampling data and kernels.
Theoretical Receptive Field: The same kernel W can be stacked repeatedly to form a linear convolutional network with n layers. The theoretical receptive field can then be imagined as the “accumulative coverage” of kernels at each given output unit on the input image, obtained by deconvolving back through the network. It characterizes the set of input pixels that could fire percepts onto the corresponding output pixel. The size of a theoretical receptive field scales linearly with respect to the network depth n and the kernel size K (He et al., 2016).
Effective Receptive Field: Intuitively, not all pixels within a theoretical receptive field contribute equally. The influence of different fields varies from region to region, thanks to the central emphasis of stacked convolutions and also to the non-linearity induced by activations. The notion of the effective receptive field (ERF) (Luo et al., 2016) is thus introduced to measure the impact of each input pixel on the output at given locations. It is defined as a partial derivative field of the output with respect to the input data. With numerical approximations in linear convolutional networks, the ERF was previously identified as a Gaussian-like soft attention map over input images whose size grows fractionally with respect to the network depth n and linearly with the kernel size K. Empirical results validate this idea in more complex and realistic cases where networks exploit non-linearities, striding, padding, skip connections, and subsampling." }, { "heading": "3.2 ANALYSIS ON EFFECTIVE RECEPTIVE FIELDS", "text": "We aim to revisit and complement the previous analysis of ERFs by Luo et al. (2016). While the previous analysis concentrates on studying the expectation of an ERF, i.e., when the network depth n approaches infinity or all kernels are randomly distributed without learning, our analysis focuses on how we can perturb the computation such that the change in the ERF is predictable, given an input and a set of kernel spaces.
We start our analysis by considering a linear convolutional network, without any unit activations, as defined in Section 3.1. For consistency, superscripts are introduced to the image I and kernel W, and subscripts to kernel positions k, to denote the index s ∈ [1, n] of each layer. Formally, given an input image I^{(0)} and a set of K×K kernels {W^{(s)}}_{s=1}^{n} of stride 1, we can roll out the final output O ≡ I^{(n)} by unfolding Equation 1 as
I^{(n)}_j = \sum_{k_n \in K} I^{(n-1)}_{j+k_n} W^{(n)}_{k_n} = \sum_{(k_{n-1}, k_n) \in K^2} I^{(n-2)}_{j+k_n+k_{n-1}} W^{(n)}_{k_n} W^{(n-1)}_{k_{n-1}} = \cdots = \sum_{(k_1, k_2, \ldots, k_n) \in K^n} \Big( I^{(0)}_{j + \sum_{s=1}^{n} k_s} \cdot \prod_{s=1}^{n} W^{(s)}_{k_s} \Big). (2)
By definition (see footnote 1), the effective receptive field value R^{(n)}(i; j) ≡ ∂I^{(n)}_j / ∂I^{(0)}_i of output coordinate j with respect to input coordinate i can be computed as
R^{(n)}(i; j) = \sum_{(k_1, k_2, \ldots, k_n) \in K^n} \Big( 1\big[ j + \sum_{s=1}^{n} k_s = i \big] \cdot \prod_{s=1}^{n} W^{(s)}_{k_s} \Big), (3)
where 1[·] denotes the indicator function. This result indicates that the ERF is related only to the data sampling location j, the kernel sampling locations k, and the kernel matrices {W^{(s)}}.
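To make this definition concrete, the following is a minimal PyTorch sketch of measuring an ERF empirically as the partial derivative field of a single output pixel, in the spirit of Luo et al. (2016). It is our illustration rather than code from the paper, and the depth, kernel size, and input resolution are arbitrary placeholders.

```python
import torch
import torch.nn as nn

# A linear convolutional network (no activations), as in the analysis above:
# n = 5 layers of 3x3 kernels with stride 1.
net = nn.Sequential(*[nn.Conv2d(1, 1, 3, padding=1, bias=False) for _ in range(5)])

x = torch.randn(1, 1, 64, 64, requires_grad=True)
out = net(x)

# ERF at the central output coordinate: the partial derivative of that single
# output value with respect to every input pixel.
out[0, 0, 32, 32].backward()
erf = x.grad[0, 0].abs()  # a Gaussian-like map concentrated around the center
```

The same measurement applies to a trained non-linear network, though the resulting map then becomes data-dependent, as derived next.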
If we replace the m-th kernel W^{(m)} with a 1×1 kernel of a single parameter W^{(m)}_{k̃_m} sampled from it, the value of the ERF becomes
R^{(n)}(i; j, k̃_m) = \sum_{(k_1, \ldots, k_{m-1}, k_{m+1}, \ldots, k_n) \in K^{n-1}} \Big( 1\big[ j + \sum_{s \in S} k_s = i \big] \cdot \prod_{s \in S} W^{(s)}_{k_s} \cdot W^{(m)}_{k̃_m} \Big), (4)
where S = [1, n] \ {m}. Since a K×K kernel can be deemed a composition of K^2 1×1 kernels distributed on a square grid, Equation 3 can thus be reformulated as
R^{(n)}(i; j) = \sum_{k_m \in K} R^{(n)}(i; j + k_m, k_m). (5)
For the case of complex non-linearities, we here consider post-ReLU activations (see footnote 2) applied to Equation 1:
O_j = max\big( \sum_{k \in K} I_{j+k} W_k, 0 \big). (6)
Footnote 1: The original definition of the ERF in Luo et al. (2016) focuses on the central coordinate of the output, i.e., j = (0, 0), to partially avoid the effects of zero padding. In this work, we keep j in favor of generality while explicitly assuming an input size D → ∞.
Footnote 2: Our analysis currently only considers ReLU networks, for their nice properties and prevalent popularity.
We can follow a similar analysis and derive the corresponding ERF as
R′^{(n)}(i; j, k̃_m) = \sum_{(k_1, \ldots, k_{m-1}, k_{m+1}, \ldots, k_n) \in K^{n-1}} \Big( C^{(n)}(i; j, k_1, \ldots, k_n, k̃_m) \cdot \prod_{s \in S} W^{(s)}_{k_s} \cdot W^{(m)}_{k̃_m} \Big),
where C^{(n)}(i; j, k_1, \ldots, k_n, k̃_m) = 1\big[ j + \sum_{s \in S} k_s = i \big] \cdot \prod_{s \in S} 1\big[ I^{(s-1)}_j W^{(s)}_{k_s} > 0 \big] \cdot 1\big[ I^{(m-1)}_j W^{(m)}_{k̃_m} > 0 \big].
Here we can see that the ERF becomes data-dependent due to the coefficient C, which is tied to input coordinates, kernel sampling locations, and the input data I^{(0)}. A more detailed analysis of this coefficient is beyond the scope of this paper. However, it should be noted that this coefficient only “gates” the contribution of the input pixels to the output. So in practice, the ERF is “porous” – there are inactive (or gated) pixel units irregularly distributed around the ones that fire. This phenomenon also appeared in previous studies (such as in Luo et al. (2016), Figure 1). The maximal size of an ERF is still controlled by the data sampling locations and kernel values, as in the linear case of Equation 5.
A nice property of Equations 4 and 5 is that all computations are linear, making them compatible with any linear sampling operator for querying kernel values at fractional coordinates. In other words, sampling kernels in effect samples the ERF on the data in the linear case, and this roughly generalizes to non-linear cases as well. This finding motivates our design of Deformable Kernels (DKs) in Section 3.3." }, { "heading": "3.3 DEFORMABLE KERNELS", "text": "In the context of Equation 1, we resample the kernel W with a group of learned kernel offsets, denoted {Δk}, that correspond to each discrete kernel position k. This defines our DK as
O_j = \sum_{k \in K} I_{j+k} W_{k+Δk}, (7)
and the value of its ERF as
R^{(n)}_{DK}(i; j) = \sum_{k_m \in K} R^{(n)}(i; j + k_m, k_m + Δk_m). (8)
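To illustrate Equation 7, the following is a minimal sketch of resampling a depthwise kernel at the offset positions k + Δk via bilinear interpolation, anticipating the bilinear sampler described next (and detailed in Appendix A). This is our illustration, not the paper's CUDA implementation; the 4×4 scope, the (x, y) offset layout, and the helper name are assumptions.

```python
import torch
import torch.nn.functional as F

def resample_kernel(W_scope, offsets, K=3):
    """Bilinearly sample a depthwise kernel scope at positions base + offsets.

    W_scope: (C, Kp, Kp) trainable kernel scope, e.g. Kp = 4.
    offsets: (K*K, 2) learned kernel offsets {Delta k}, in (x, y) order.
    Returns: (C, K, K) kernel to be used as in Equation 7.
    """
    C, _, Kp = W_scope.shape
    # Regular K x K base grid spread over the Kp x Kp scope.
    lin = torch.linspace(0, Kp - 1, K)
    base = torch.stack(torch.meshgrid(lin, lin, indexing="xy"), -1).reshape(-1, 2)
    pos = (base + offsets).clamp(0, Kp - 1)        # keep k + Delta k inside the scope
    grid = (2 * pos / (Kp - 1) - 1).view(1, 1, K * K, 2)  # normalize to [-1, 1]
    sampled = F.grid_sample(W_scope.unsqueeze(0), grid, align_corners=True)
    return sampled.view(C, K, K)

# The resampled kernel is then applied as an ordinary depthwise convolution:
# out = F.conv2d(x, resample_kernel(W_scope, offsets).unsqueeze(1),
#                padding=1, groups=W_scope.shape[0])
```

Since bilinear sampling is differentiable in both the kernel values and the sampling positions, automatic differentiation supplies the offset gradients (cf. Equation 13 in Appendix A) without a hand-written backward pass.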
Note that this operation leads to sub-pixel sampling in the kernel space. In practice, we use bilinear sampling to interpolate within the discrete kernel grid.
Intuitively, the size (resolution) of the original kernel space can affect sampling performance. Concretely, suppose we want to sample a 3×3 kernel. DKs do not place any constraint on the size of the original kernel space, which we call the “scope size” of DKs. That is, we can use a W of any size K′ even though the number of sampling locations is fixed at K^2. We can thus exploit large kernels – the largest ones reach 9×9 in our experiments – with nearly no overhead in computation, since bilinear interpolations are extremely lightweight compared to the cost of convolutions. This can also increase the number of learnable parameters, which in practice might become intractable if not handled properly. In our implementation, we exploit depthwise convolutions (Howard et al., 2017) such that increasing the scope size induces a negligible amount of extra parameters.
As previously discussed, sampling the kernel space in effect translates into sampling the ERF. Regarding the locality and spatial granularity of our learned offsets, DK naturally delivers two variants – the global DK and the local DK, as illustrated in Figure 2. In both operators, we learn a kernel offset generator G that maps an input patch into a set of kernel offsets that are later applied to rigid kernels.
In practice, we implement G_global as a stack of one global average pooling layer, which reduces feature maps into a vector, and one fully-connected layer without non-linearities, which projects the reduced vector into an offset vector of 2K^2 dimensions. Then, we apply these offsets to all convolutions for the input image following Equation 7. For local DKs, we implement G_local as an extra convolution that has the same configuration as the target kernel, except that it only has 2K^2 output channels. This produces kernel sampling offsets {Δk} that are additionally indexed by output locations j. It should be noted that similar designs were also discussed in Jia et al. (2016), in which filters are generated from scratch given either an image or individual patches, rather than by resampling.
Intuitively, we expect the global DK to adapt the kernel space between different images but not within a single input. The local DK can further adapt to specific image patches: for smaller objects, it is better to have sharper kernels and thus a denser ERF; for larger objects, flatter kernels can be more beneficial for accumulating a wider ERF. At a high level, local DKs can preserve better locality and have greater freedom to adapt kernel spaces compared to their global counterpart. We later compare these operators in our experiments. A sketch of both generators follows.
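Here is a minimal sketch of the two generators as just described: one offset vector per image for the global DK, and per-location offsets for the local DK. Module names are ours, and details such as initialization are illustrative assumptions (in practice one would likely initialize so that the initial offsets are zero).

```python
import torch.nn as nn

K = 3  # the target kernel has K*K sampling locations

class GlobalOffsetGen(nn.Module):
    """G_global: global average pooling, then a linear projection to 2*K*K offsets."""
    def __init__(self, in_channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # reduce feature maps to a vector
        self.fc = nn.Linear(in_channels, 2 * K * K)  # no non-linearity

    def forward(self, x):
        return self.fc(self.pool(x).flatten(1))      # (N, 2*K*K), shared over locations

def local_offset_gen(in_channels):
    """G_local: same configuration as the target 3x3 kernel, but with 2*K*K
    output channels, so offsets are additionally indexed by output location j."""
    return nn.Conv2d(in_channels, 2 * K * K, kernel_size=3, padding=1)
```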
" }, { "heading": "3.4 LINK WITH DEFORMABLE CONVOLUTIONS", "text": "The core idea of DKs is to learn adaptive offsets to sample the kernel space for modeling deformation, which makes them similar to Deformable Convolutions (Dai et al., 2017; Zhu et al., 2019) at both the conceptual and implementation levels. Here, we distinguish DKs from Deformable Convolutions and show how they can be unified.
Deformable Convolutions can be reformulated in a general form as
O_j = \sum_{k \in K} I_{j+k+Δj} W_k, (9)
where they aim to learn a group of data offsets {Δj} with respect to discrete data positions j. For consistency of analysis, the value of the effective receptive field becomes
R^{(n)}_{DC}(i; j) = \sum_{k_m \in K} R^{(n)}(i; j + k_m + Δj_m, k_m). (10)
This approach essentially recomposes the input image towards common modes such that semantic recognition suffers less from deformation. Moreover, according to our previous analysis in Equation 5, sampling data is another way of sampling the ERF. This, to a certain extent, also explains why Deformable Convolutions are well suited for learning deformation-agnostic representations.
Further, we can learn both data and kernel offsets in one convolutional operator. Conceptually, this can be done by merging Equation 7 with Equation 9, which leads to
O_j = \sum_{k \in K} I_{j+k+Δj} W_{k+Δk}, R^{(n)}_{DC+DK}(i; j) = \sum_{k_m \in K} R^{(n)}(i; j + k_m + Δj_m, k_m + Δk_m). (11)
We also investigate this operator in our experiments. Although the two techniques may be viewed as serving a similar purpose, we find the collaboration between Deformable Kernels and Deformable Convolutions to be powerful in practice, suggesting strong compatibility." }, { "heading": "4 EXPERIMENTS", "text": "We evaluate our Deformable Kernels (DKs) on image classification using ILSVRC and object detection using the COCO benchmark. Necessary details are provided to reproduce our results, together with descriptions of base models and strong baselines for all experiments and ablations. For task-specific considerations, we refer to each corresponding section.
Implementation Details: We implement our operators in PyTorch and CUDA. We exploit depthwise convolutions when designing our operator for better computational efficiency (see footnote 3). We initialize kernel grids to be uniformly distributed within the scope size. For the kernel offset generator, we set its learning rate to be a fraction of that of the main network, which we cross-validate for each base model. We also find it important to clip sampling locations inside the original kernel space, such that k + Δk ∈ K in Equation 7.
Base Models: We choose our base models to be ResNet-50 (He et al., 2016) and MobileNet-V2 (Sandler et al., 2018), following the standard practice for most vision applications. As mentioned, we exploit depthwise convolutions and thus make changes to the ResNet model. Concretely, we define our ResNet-50-DW base model by replacing all 3×3 convolutions with their depthwise counterparts while doubling the dimension of intermediate channels in all residual blocks. We find it to be a reasonable base model compared to the original ResNet-50, with comparable performance on both tasks. During training, we set the weight decay to 4 × 10^{-5} rather than the common 10^{-4} for both models, since depthwise models usually underfit rather than overfit (Xie et al., 2017; Howard et al., 2017; Hu et al., 2018). We set the learning rate multiplier of DK operators to 10^{-2} for ResNet-50-DW and 10^{-1} for MobileNet-V2 in all of our experiments.
Strong Baselines: We develop our comparison against two previous works: Conditional Convolutions (Yang et al., 2019) for dynamic inference, and Deformable Convolutions (Dai et al., 2017; Zhu et al., 2019) for deformation modeling. We choose Conditional Convolutions due to their similar computational form – sampling can be deemed an elementwise “expert voting” mechanism. For fair comparisons, we reimplement and reproduce their results. We also combine our operator with these previous approaches to show both quantitative evidence and qualitative insight that our working mechanisms are compatible." }, { "heading": "4.1 IMAGE CLASSIFICATION", "text": "We first train our networks on the ImageNet 2012 training set (Deng et al., 2009). 
We adopt a common experiment protocol for fair comparisons, as in Goyal et al. (2017) and Loshchilov & Hutter (2017). For more details, please refer to our supplement.
We first ablate the scope size of kernels for our DKs and study how it affects model performance using ResNet-50-DW. As shown in Table 1, our DKs are sensitive to the choice of scope size. We show that when only applied to the 3×3 convolutions inside residual bottlenecks, local DKs induce a +0.7 performance gain within the original scope. By further enlarging the scope size, performance increases yet quickly plateaus at scope 4×4, yielding the largest gain of +1.4 in top-1 accuracy. Our speculation is that, although increasing the scope size theoretically means better interpolation, it also makes the optimization space exponentially larger for each convolutional layer. And since the number of entries being updated is fixed, this also leads to relatively sparse gradient flow. We therefore set the default scope size of our DKs to 4×4.
We next ablate our designs by comparing the global DK with the local DK, as shown in the table. Both operators help, while the local variant consistently performs better than its global counterpart, bringing a +0.5 gap on both base models. We also study the effect of using more DKs in the models – the 1×1 convolutions are replaced by global DKs (see footnote 4) with scope 2×2. Note that the 1×1 convolutions are not depthwise, and therefore this operation induces nearly 4 times the parameters. We include these results only for ablation and show that adding more DKs still helps – especially for MobileNet-V2, since it is under-parameterized. This finding holds for previous models (Yang et al., 2019) as well.
Footnote 3: This makes enlarging the kernel scope size tractable and prevents extensive resource competition in CUDA kernels when applying local DKs.
Footnote 4: The current implementation of local DKs cannot support a large number of output channels.
We further compare and combine DKs with Conditional Convolutions and Deformable Convolutions. Results are recorded in Table 2. We can see that DKs perform comparably on ResNet-50-DW and compare favorably on MobileNet-V2 – improving +0.9 over Deformable Convolutions and achieving comparable results with less than a quarter of the parameters of Conditional Convolutions. Remarkably, we also show that even larger performance gains are within reach if the operators are combined. We see consistent boosts in top-1 accuracy compared to the strong baselines: +1.3/+1.0 on ResNet-50-DW, and +1.2/+1.2 on MobileNet-V2. These gaps are bigger than those from our own ablation, suggesting that the working mechanisms of the operators are orthogonal and compatible." }, { "heading": "4.2 OBJECT DETECTION", "text": "We examine DKs on the COCO benchmark (Lin et al., 2014). For all experiments, we use Faster R-CNN (Ren et al., 2015) with FPN (Lin et al., 2017) as the base detector, plugging in the backbones we previously trained on ImageNet. For MobileNet-V2, we use the last feature maps of each resolution for FPN aggregation. Following the standard protocol, training and evaluation are performed on the 120k images in the train-val split and the 20k images in the test-dev split, respectively. For evaluation, we measure the standard mean average precision (mAP) together with its breakdown scores for small, medium and large objects.
Table 3 and Table 4 follow the same style of analysis as in image classification. 
While the ResNet baseline achieves 36.6 mAP, indicating a strong baseline detector, applying local DKs brings a +1.2 mAP improvement when replacing 3×3 rigid kernels alone and a +1.8 mAP improvement when replacing both 1×1 and 3×3 rigid kernels. This trend is magnified on MobileNet-V2 models, where we see improvements of +1.6 mAP and +2.4 mAP, respectively. The results also confirm the effectiveness of local DKs over global DKs, which is again in line with our expectation that local DKs can model locality better.
For the comparisons with strong baselines, an interesting phenomenon worth noting is that although DKs perform better than Deformable Convolutions on image classification, they fall noticeably short on object detection measured by mAP. We speculate that even though both techniques can adapt the ERF in theory (as justified in Section 3.2), directly shifting sampling locations on data is easier to optimize. Yet after combining DKs with the previous approaches, we can consistently boost performance for all methods – +0.7/+1.2 for Deformable Convolutions on each base model, and +1.7/+1.1 for Conditional Convolutions. These findings align with the results from image classification. We next investigate what DKs learn and why they are compatible with previous methods in general." }, { "heading": "4.3 WHAT DO DEFORMABLE KERNELS LEARN?", "text": "Awareness of Object Scale: Since deformation is hard to quantify, we use object scale as a rough proxy to understand what DKs learn. In Figure 3, we show the t-SNE (Maaten & Hinton, 2008) of the model dynamics learned by the last convolutional layers in MobileNet-V2 using Conditional Convolutions and our DKs. We validate the finding claimed by Yang et al. (2019) that the experts of Conditional Convolutions correlate better with object semantics than with object scales (in reference to Figure 6 of their paper). In contrast, our DKs learn kernel sampling offsets that strongly correlate with scale rather than semantics. This sheds light on why the two operators are complementary in our previous experiments.
Adaptation of Effective Receptive Fields: To verify our claim that DKs indeed adapt ERFs in practice, we show ERF visualizations for a set of images that display different degrees of deformation. We compare the results of rigid kernels, Deformable Convolutions, our DKs, and the combination of the two operators. For all examples, note that the theoretical receptive field covers every pixel in the image but the ERFs contain only a central portion of it. Deformable Convolutions and DKs perform similarly in terms of adapting ERFs, but Deformable Convolutions tend to spread out and have sparse responses while DKs tend to concentrate and densely activate within an object region. Combining both operators yields more consistent ERFs that exploit both of their merits." }, { "heading": "5 CONCLUSION", "text": "In this paper, we introduced Deformable Kernels (DKs) to adapt the effective receptive fields (ERFs) of convolutional networks for modeling object deformation. We proposed to sample kernel values from the original kernel space. This in effect samples the ERF in linear networks and also roughly generalizes to non-linear cases. We instantiated two variants of DKs and validated our designs, showing connections to previous works. We found consistent improvements over prior methods and compatibility with them, as illustrated in our visualizations." 
}, { "heading": "A COMPUTATION FLOW OF DEFORMABLE KERNELS", "text": "We now cover more details on implementing DKs by elaborating the computation flow of their forward and backward passes. We will focus on the local DK given its superior performance in practice. The extension to global DK implementation is straight-forward.\nA.1 FORWARD PASS\nIn Section 3.3, we introduce a kernel offset generator G and a bilinear sampler B. Figure 5 illustrates an example of the forward pass.\nConcretely, given a kernel W and a learned group of kernel offsets {∆k} on top of a regular 2D grid {k}, we can resample a new kernel W ′ by a bilinear operator B as\nW ′ ≡Wk+∆k = ∑ k′∈K B(k + ∆k,k′)Wk′ , (12)\nwhere B(k + ∆k,k′) = max(0, 1− |kx + ∆kx − k′x|) ·max(0, 1− |ky + ∆ky − k′y|). Given this resampled kernel, DK convolves it with the input image just as in normal convolutions using rigid kernels, characterized by Equation 1.\nA.2 BACKWARD PASS\nThe backward pass of local DK consists of three types of gradients: (1) the gradient to the data of the previous layer, (2) the gradient to the full scope kernel of the current layer and (3) the additional gradient to the kernel offset generator of the current layer. The first two types of gradients share same forms of the computation comparing to the normal convolutions. We now cover the computation for the third flow of gradient that directs where to sample kernel values.\nIn the context of Equation 7, the partial derivative of a output item Oj w.r.t. x component of a given kernel offset ∆kx (similar for its y component ∆ky) can be computed as\n∂Oj ∂∆kx = ∑ k Ij+k (∑ k′ Wk′ ∂B(k + ∆k,k′) ∂∆kx ) , (13)\nwhere ∂B(k + ∆k,k′)\n∂∆kx = max(0, 1− |ky + ∆ky − k′y|) · 0 |kx + ∆kx − k′x| ≥ 1 1 kx + ∆kx < k ′ x\n−1 kx + ∆kx ≥ k′x ." }, { "heading": "B NETWORK ARCHITECTURES", "text": "Table 5 shows the comparison between the original ResNet-50 (He et al., 2016) and our modified ResNet-50-DW. The motivation of introducing depthwise convolutions to ResNet is to accelerate the computation of local DKs based on our current implementations. The ResNet-50-DW model has similar model capacity/complexity and performance (see Table 1) compared to its non-depthwise counterpart, making it an ideal base architecture for our experiments.\nOn the other hand, in all of our experiments, MobileNet-V2 (Sandler et al., 2018) base model is left untouched." }, { "heading": "C ADDITIONAL COMPARISON OF EFFECTIVE RECEPTIVE FIELDS", "text": "We here show additional comparison of ERFs when objects have different kinds of deformations in Figure 6. Comparing to baseline, our method can adapt ERFs to be more persistent to object’s semantic rather than its geometric configuration." }, { "heading": "D ADDITIONAL EXPERIMENT DETAILS", "text": "Image Classification: Similar to Goyal et al. (2017); Loshchilov & Hutter (2017), training is performed by SGD for 90 epochs with momentum 0.9 and batch size 256. We set our learning rate of 10−1 so that it linearly warms up from zero within first 5 epochs. A cosine training schedule is applied over the training epochs. We use scale and aspect ratio augmentation with color perturbation as standard data augmentations. We evaluate the performance of trained models on the ImageNet 2012 validation set. The images are resized so that the shorter side is of 256 pixels. We then centrally crop 224× 224 windows from the images as input to measure recognition accuracy." } ]
2020
Deformable Kernels: Adapting Effective Receptive Fields for Object Deformation
SP:1442b188a5bac1931c3023a5529fa366f22bd8e6
[ "The authors propose using k-winner take all (k-WTA) activation functions to prevent white box adversarial attacks. A k-WTA activation functions outputs the k highest activations in a layer while setting all other activations to zero. The reasoning given by the authors is that k-WTA activation functions have many discontinuities with respect to the input space. This makes it more difficult for attacks to use gradient information. The authors note that networks with k-WTA activation functions are still easy to train because, for a given input, the sub-network that is activated becomes more stable as training progresses. Therefore, it is not as discontinuous in the parameter space.", "This paper addresses the important question of improving the robustness of deep neural networks against adversarial attacks. The authors propose a surprisingly simple measure to improve adversarial robustness, namely replacing typical activation functions such as ReLU with a k-winners-take-all (k-WTA) functions, whereby the k largest values of the input vector are copied, and all other elements of the output vector are set to zero. Since the size of input and output maps varies drastically within networks, the authors instead use a sparsity ratio \\gamma that calculates k as a function of input size. k-WTA networks can be trained without special treatment, but for low \\gamma values the authors propose a training schedule, whereby \\gamma is slowly reduced, then re-training takes place, until the desired value of \\gamma is reached. The presented effect is backed up by extensive theoretical investigations that relate the increased robustness to the dense introduction of discontinuities, which makes gradient-based adversarial attacks harder. A small change in an input signal can change the identity of the \"winning\" inputs, and thus in a sub-sequent matrix multiplication make use of other rows or columns, thus allowing arbitrarily large effects due to small input variations. Empirical evaluations in CIFAR and SVHN for a variety of attacks and defense mechanisms demonstrate the desired effects, and illustrate the loss landscapes due to using k-WTA." ]
We propose a simple change to existing neural network structures for better defending against gradient-based adversarial attacks. Instead of using popular activation functions (such as ReLU), we advocate the use of k-Winners-Take-All (k-WTA) activation, a C0 discontinuous function that purposely invalidates the neural network model’s gradient at densely distributed input data points. The proposed k-WTA activation can be readily used in nearly all existing networks and training methods with no significant overhead. Our proposal is theoretically rationalized. We analyze why the discontinuities in k-WTA networks can largely prevent gradient-based search of adversarial examples and why they at the same time remain innocuous to the network training. This understanding is also empirically backed. We test k-WTA activation on various network structures optimized by a training method, be it adversarial training or not. In all cases, the robustness of k-WTA networks outperforms that of traditional networks under white-box attacks.
[ { "affiliations": [], "name": "Chang Xiao" }, { "affiliations": [], "name": "Peilin Zhong" }, { "affiliations": [], "name": "Changxi Zheng" } ]
[ { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Zhao Song" ], "title": "On the convergence rate of training recurrent neural networks", "venue": "In NeurIPS. https://arxiv.org/pdf/1810.12065,", "year": 2019 }, { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Zhao Song" ], "title": "A convergence theory for deep learning via overparameterization", "venue": "In ICML. https://arxiv.org/pdf/1811.03962,", "year": 2019 }, { "authors": [ "Anish Athalye", "Logan Engstrom", "Andrew Ilyas", "Kevin Kwok" ], "title": "Synthesizing robust adversarial examples", "venue": "arXiv preprint arXiv:1707.07397,", "year": 2017 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Marco Barreno", "Blaine Nelson", "Russell Sears", "Anthony D Joseph", "J Doug Tygar" ], "title": "Can machine learning be secure", "venue": "In Proceedings of the 2006 ACM Symposium on Information, computer and communications security,", "year": 2006 }, { "authors": [ "Marco Barreno", "Blaine Nelson", "Anthony D Joseph", "J Doug Tygar" ], "title": "The security of machine learning", "venue": "Machine Learning,", "year": 2010 }, { "authors": [ "Wieland Brendel", "Jonas Rauber", "Matthias Bethge" ], "title": "Decision-based adversarial attacks: Reliable attacks against black-box machine learning models", "venue": "arXiv preprint arXiv:1712.04248,", "year": 2017 }, { "authors": [ "Jacob Buckman", "Aurko Roy", "Colin Raffel", "Ian Goodfellow" ], "title": "Thermometer encoding: One hot way to resist adversarial examples. 2018", "venue": "URL https://openreview.net/pdf?id= S18Su--CW", "year": 2018 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Jianbo Chen", "Michael I Jordan" ], "title": "Boundary attack++: Query-efficient decision-based adversarial attack", "venue": "arXiv preprint arXiv:1904.02144,", "year": 2019 }, { "authors": [ "Jeremy M Cohen", "Elan Rosenfeld", "J Zico Kolter" ], "title": "Certified adversarial robustness via randomized smoothing", "venue": "arXiv preprint arXiv:1902.02918,", "year": 2019 }, { "authors": [ "Thomas H Cormen", "Charles E Leiserson", "Ronald L Rivest", "Clifford Stein" ], "title": "Introduction to algorithms", "venue": "MIT press,", "year": 2009 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Guneet S. Dhillon", "Kamyar Azizzadenesheli", "Jeremy D. Bernstein", "Jean Kossaifi", "Aran Khanna", "Zachary C. 
Lipton", "Animashree Anandkumar" ], "title": "Stochastic activation pruning for robust adversarial defense", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Yinpeng Dong", "Fangzhou Liao", "Tianyu Pang", "Hang Su", "Jun Zhu", "Xiaolin Hu", "Jianguo Li" ], "title": "Boosting adversarial attacks with momentum", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Rodney J Douglas", "Kevan AC Martin" ], "title": "Neuronal circuits of the neocortex", "venue": "Annu. Rev. Neurosci.,", "year": 2004 }, { "authors": [ "Rodney J Douglas", "Kevan AC Martin", "David Whitteridge" ], "title": "A canonical microcircuit for neocortex", "venue": "Neural computation,", "year": 1989 }, { "authors": [ "Simon S Du", "Xiyu Zhai", "Barnabas Poczos", "Aarti Singh" ], "title": "Gradient descent provably optimizes over-parameterized neural networks", "venue": "arXiv preprint arXiv:1810.02054,", "year": 2018 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Stephen Grossberg" ], "title": "Contour enhancement, short term memory, and constancies in reverberating neural networks", "venue": "In Studies of mind and brain,", "year": 1982 }, { "authors": [ "Chuan Guo", "Mayank Rana", "Moustapha Cisse", "Laurens van der Maaten" ], "title": "Countering adversarial images using input transformations", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Ruitong Huang", "Bing Xu", "Dale Schuurmans", "Csaba Szepesvári" ], "title": "Learning with a strong", "venue": "adversary. http://arxiv.org/abs/1511.03034,", "year": 2015 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial machine learning at scale", "venue": "arXiv preprint arXiv:1611.01236,", "year": 2016 }, { "authors": [ "Yuanzhi Li", "Yingyu Liang" ], "title": "Learning overparameterized neural networks via stochastic gradient descent on structured data", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ji Lin", "Chuang Gan", "Song Han" ], "title": "Defensive quantization: When efficiency meets robustness", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Xingjun Ma", "Bo Li", "Yisen Wang", "Sarah M. Erfani", "Sudanthi Wijewickrema", "Grant Schoenebeck", "Michael E. 
Houle", "Dawn Song", "James Bailey" ], "title": "Characterizing adversarial subspaces using local intrinsic dimensionality", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Wolfgang Maass" ], "title": "On the computational power of winner-take-all", "venue": "Neural computation,", "year": 2000 }, { "authors": [ "Wolfgang Maass" ], "title": "Neural computation with winner-take-all as the only nonlinear operation", "venue": "In Advances in neural information processing systems,", "year": 2000 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "E Majani", "Ruth Erlanson", "Yaser S Abu-Mostafa" ], "title": "On the k-winners-take-all network", "venue": "In Advances in neural information processing systems,", "year": 1989 }, { "authors": [ "Michael McCloskey", "Neal J Cohen" ], "title": "Catastrophic interference in connectionist networks: The sequential learning problem", "venue": "In Psychology of learning and motivation,", "year": 1989 }, { "authors": [ "Guido F Montufar", "Razvan Pascanu", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "On the number of linear regions of deep neural networks", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Pascal Frossard" ], "title": "Deepfool: a simple and accurate method to fool deep neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Nina Narodytska", "Shiva Prasad Kasiviswanathan" ], "title": "Simple black-box adversarial perturbations for deep networks", "venue": "arXiv preprint arXiv:1612.06299,", "year": 2016 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Ian Goodfellow", "Somesh Jha", "Z Berkay Celik", "Ananthram Swami" ], "title": "Practical black-box attacks against machine learning", "venue": "In Proceedings of the 2017 ACM on Asia conference on computer and communications security,", "year": 2017 }, { "authors": [ "Maithra Raghu", "Ben Poole", "Jon Kleinberg", "Surya Ganguli", "Jascha Sohl Dickstein" ], "title": "On the expressive power of deep neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Jonas Rauber", "Wieland Brendel", "Matthias Bethge" ], "title": "Foolbox: A python toolbox to benchmark the robustness of machine learning models", "venue": "arXiv preprint arXiv:1707.04131,", "year": 2017 }, { "authors": [ "Maximilian Riesenhuber", "Tomaso Poggio" ], "title": "Hierarchical models of object recognition in cortex", "venue": "Nature neuroscience,", "year": 1999 }, { "authors": [ "Andrew Slavin Ross", "Finale Doshi-Velez" ], "title": "Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients", "venue": "CoRR, abs/1711.09404,", "year": 2017 }, { "authors": [ "Pouya Samangouei", "Maya Kabkab", "Rama Chellappa" ], "title": "Defense-GAN: Protecting classifiers against adversarial attacks using generative models", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Ali Shafahi", "W. 
Ronny Huang", "Christoph Studer", "Soheil Feizi", "Tom Goldstein" ], "title": "Are adversarial examples inevitable", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ali Shafahi", "Mahyar Najibi", "Amin Ghiasi", "Zheng Xu", "John Dickerson", "Christoph Studer", "Larry S Davis", "Gavin Taylor", "Tom Goldstein" ], "title": "Adversarial training for free", "venue": "arXiv preprint arXiv:1904.12843,", "year": 2019 }, { "authors": [ "Mahmood Sharif", "Sruti Bhagavatula", "Lujo Bauer", "Michael K. Reiter" ], "title": "Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition", "venue": "In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, CCS", "year": 2016 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Yang Song", "Taesup Kim", "Sebastian Nowozin", "Stefano Ermon", "Nate Kushman" ], "title": "Pixeldefend: Leveraging generative models to understand and defend against adversarial examples", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Zhao Song", "Xin Yang" ], "title": "Quadratic suffices for over-parametrization via matrix chernoff bound", "venue": "arXiv preprint arXiv:1906.03593,", "year": 2019 }, { "authors": [ "Rupesh K Srivastava", "Jonathan Masci", "Sohrob Kazerounian", "Faustino Gomez", "Jürgen Schmidhuber" ], "title": "Compete to compute", "venue": "In Advances in Neural Information Processing Systems", "year": 2013 }, { "authors": [ "Rupesh Kumar Srivastava", "Jonathan Masci", "Faustino Gomez", "Jürgen Schmidhuber" ], "title": "Understanding locally competitive networks", "venue": "arXiv preprint arXiv:1410.1165,", "year": 2014 }, { "authors": [ "Dong Su", "Huan Zhang", "Hongge Chen", "Jinfeng Yi", "Pin-Yu Chen", "Yupeng Gao" ], "title": "Is robustness the cost of accuracy? 
– a comprehensive study on the robustness of 18 deep image classification models", "venue": "Computer Vision – ECCV", "year": 2018 }, { "authors": [ "Dong Su", "Huan Zhang", "Hongge Chen", "Jinfeng Yi", "Pin-Yu Chen", "Yupeng Gao" ], "title": "Is robustness the cost of accuracy?–a comprehensive study on the robustness of 18 deep image classification models", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Jiawei Su", "Danilo Vasconcellos Vargas", "Kouichi Sakurai" ], "title": "One pixel attack for fooling deep neural networks", "venue": "IEEE Transactions on Evolutionary Computation,", "year": 2019 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Simen Thys", "Wiebe Van Ranst", "Toon Goedemé" ], "title": "Fooling automated surveillance cameras: adversarial patches to attack person detection", "venue": null, "year": 2019 }, { "authors": [ "Florian Tramèr", "Alexey Kurakin", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick McDaniel" ], "title": "Ensemble adversarial training: Attacks and defenses", "venue": "arXiv preprint arXiv:1705.07204,", "year": 2017 }, { "authors": [ "Dimitris Tsipras", "Shibani Santurkar", "Logan Engstrom", "Alexander Turner", "Aleksander Madry" ], "title": "Robustness may be at odds with accuracy", "venue": null, "year": 2018 }, { "authors": [ "Tsui-Wei Weng", "Huan Zhang", "Hongge Chen", "Zhao Song", "Cho-Jui Hsieh", "Duane Boning", "Inderjit S Dhillon", "Luca Daniel" ], "title": "Towards fast computation of certified robustness for relu", "venue": null, "year": 2018 }, { "authors": [ "Przemyslaw Wojtaszczyk" ], "title": "Banach spaces for analysts, volume 25", "venue": null, "year": 1996 }, { "authors": [ "Cihang Xie", "Jianyu Wang", "Zhishuai Zhang", "Zhou Ren", "Alan Yuille" ], "title": "Mitigating adversarial effects through randomization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Cihang Xie", "Yuxin Wu", "Laurens van der Maaten", "Alan Yuille", "Kaiming He" ], "title": "Feature denoising for improving adversarial robustness", "venue": "arXiv preprint arXiv:1812.03411,", "year": 2018 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric P Xing", "Laurent El Ghaoui", "Michael I Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": null, "year": 2019 }, { "authors": [ "Stephan Zheng", "Yang Song", "Thomas Leung", "Ian Goodfellow" ], "title": "Improving the robustness of deep neural networks via stability training", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "In the tremendous success of deep learning techniques, there is a grain of salt. It has become wellknown that deep neural networks can be easily fooled by adversarial examples (Szegedy et al., 2014). Those deliberately crafted input samples can mislead the networks to produce an output drastically different from what we expect. In many important applications, from face recognition authorization to autonomous cars, this vulnerability gives rise to serious security concerns (Barreno et al., 2010; 2006; Sharif et al., 2016; Thys et al., 2019). Attacking the network is straightforward. Provided a labeled data item (x, y), the attacker finds a perturbation x′ perceptually similar to x but misleading enough to cause the network to output a label different from y. By far, the most effective way of finding such a perturbation (or adversarial example) is by exploiting the gradient information of the network with respect to its input: the gradient indicates how to perturb x to trigger the maximal change of y. The defense, however, is challenging. Recent studies showed that adversarial examples always exist if one tends to pursue a high classification accuracy—adversarial robustness seems at odds with the accuracy (Tsipras et al., 2018; Shafahi et al., 2019a; Su et al., 2018a; Weng et al., 2018; Zhang et al., 2019). This intrinsic difficulty of eliminating adversarial examples suggests an alternative path: can we design a network whose adversarial examples are evasive rather than eliminated? Indeed, along with this thought is a series of works using obfuscated gradients as a defense mechanism (Athalye et al., 2018). Those methods hide the network’s gradient information by artificially discretizing the input (Buckman et al., 2018; Lin et al., 2019) or introducing certain randomness to the input (Xie et al., 2018a; Guo et al., 2018) or the network structure (Dhillon et al., 2018; Cohen et al., 2019) (see more discussion in Sec. 1.1). Yet, the hidden gradient in those methods can still be approximated, and as recently pointed out by Athalye et al. (2018), those methods remain vulnerable.\nTechnical contribution I) Rather than obfuscating the network’s gradient, we make the gradient undefined. This is achieved by a simple change to the standard neural network structure: we advocate the use of aC0 discontinuous activation function, namely the k-Winners-Take-All (k-WTA) activation, to replace the popular activation functions such as rectified linear units (ReLU). This is the only change we propose to a deep neural network. All other components (such as BatchNorm, convolution, and pooling) as well as the training methods remain unaltered. With no significant overhead, k-WTA activation can be readily used in nearly all existing networks and training methods.\nk-WTA activation takes as input the entire output of a layer, retains its k largest values and deactivates all others to zero. As we will show in this paper, even an infinitesimal perturbation to the input may cause a complete change to the network neurons’ activation pattern, thereby resulting in a large jump in the network’s output. This means that, mathematically, if we use f(x;w) to denote a k-WTA network taking an input x and parameterized by weights w, then the gradient∇xf(x;w) at certain x is undefined—f(x;w) is C0 discontinuous.\nTechnical contribution II) More intriguing than the mere replacement of the activation function is why k-WTA helps improve the adversarial robustness. 
We offer our theoretical reasoning about its behavior from two perspectives. On the one hand, we show that the discontinuities of f(x;w) are densely distributed in the space of x. They are dense enough that a tiny perturbation from any x almost always comes across some discontinuities, where the gradients are undefined and thus the attacker’s search for adversarial examples becomes blinded (see Figure 1). On the other hand, a paradox seemingly exists. The discontinuities in the activation function also render f(x;w) discontinuous with respect to the network weights w (at certain w values). But training the network relies on the presumption that the gradient with respect to the weights is almost always available. Interestingly, we show that, under k-WTA activation, the discontinuities of f(x;w) are rather sparse in the space of w, intuitively because the dimension of w (in parameter space) is much larger than the dimension of x (in data space). Thus, the network can be trained successfully.
Summary of results. We conducted extensive experiments on multiple datasets under different network architectures, including ResNet (He et al., 2016), DenseNet (Huang et al., 2017), and Wide ResNet (Zagoruyko & Komodakis, 2016), optimized by regular training as well as various adversarial training methods (Madry et al., 2017; Zhang et al., 2019; Shafahi et al., 2019b). In all these setups, we compare the robustness performance of the proposed k-WTA activation with the commonly used ReLU activation under several white-box attacks, including PGD (Kurakin et al., 2016), Deepfool (Moosavi-Dezfooli et al., 2016), C&W (Carlini & Wagner, 2017), MIM (Dong et al., 2018), and others. In all tests, k-WTA networks outperform ReLU networks. The use of k-WTA activation is motivated by defending against gradient-based adversarial attacks. Our experiments suggest that the robustness improvement gained by simply switching to k-WTA activation is universal, not tied to specific network architectures or training methods. To promote reproducible research, we will release our implementation of k-WTA networks, along with our experiment code, configuration files and pre-trained models (footnote 1: https://github.com/a554b554/kWTA-Activation)." }, { "heading": "1.1 RELATED WORK: OBFUSCATED GRADIENTS AS A DEFENSE MECHANISM", "text": "Before delving into k-WTA details, we review prior adversarial defense methods that share the same philosophy as our method, and highlight our advantages. For a review of other attack and defense methods, we refer to Appendix A.
Methods aiming to conceal the gradient information from the attacker have been termed obfuscated gradients (Athalye et al., 2018) or gradient masking (Papernot et al., 2017; Tramèr et al., 2017) techniques. One type of such methods exploits randomness, either randomly transforming the input before feeding it to the network (Xie et al., 2018a; Guo et al., 2018) or introducing stochastic layers in the network (Dhillon et al., 2018). However, the gradient information in these methods can be estimated by taking the average over multiple trials (Athalye et al., 2018; 2017). As a result, these methods are vulnerable.
Another type of obfuscated gradient method relies on so-called shattered gradients (Athalye et al., 2018), which aim to make the network gradients nonexistent or incorrect for the attacker, by purposely discretizing the input (Buckman et al., 2018; Ma et al., 2018) or artificially raising numerical instability in gradient evaluation (Song et al., 2018; Samangouei et al., 2018). Unfortunately, these methods are also vulnerable. As shown by Athalye et al. (2018), they can be compromised by backward pass
Unfortunately, these methods are also vulnerable. As shown by Athalye et al. (2018), they can be compromised by backward pass\n1https://github.com/a554b554/kWTA-Activation\ndifferentiable approximation (BPDA). Suppose fi(x) is a non-differentiable component of a network expressed by f = f1 ◦ f2 ◦ · · · ◦ fn. The gradient ∇xf can be estimated as long as one can find a smooth delegate g that approximates well fi (i.e., g(x) ≈ fi(x)). In stark contrast to all those methods, a slight change of the k-WTA activation pattern in an earlier layer of a network can cause a radical reorganization of activation patterns in later layers (as shown in Sec. 3). Thereby, k-WTA activation not just obfuscates the network’s gradients but destroys them at certain input samples, introducing discontinuities densely distributed in the input data space. We are not aware of any possible smooth approximation of a k-WTA network to launch BPDA attacks.\n2 k-WINNERS-TAKE-ALL ACTIVATION The debut of the Winner-Takes-All (WTA) activation on the stage of neural networks dates back to 1980s, when Grossberg (1982) introduced shunting short-term memory equations in on-center off-surround networks and showed the ability to identify the largest of N real numbers. Later, Majani et al. (1989) generalized the WTA network to identify the K largest of N real numbers, and they termed the network as the K-Winners-Take-All (KWTA) network. These early WTA-type activation functions output only boolean values, mainly motivated by the properties of biological neural circuits. In particular, Maass (2000a;b) has proved that any boolean function can be computed by a single KWTA unit. Yet, the boolean nature of these activation functions differs starkly from the modern activation functions, including the one we use.\n2.1 DEEP NEURAL NETWORKS ACTIVATED BY k-WINNERS-TAKE-ALL\nWe propose to use k-Winners-Take-All (k-WTA) activation, a natural generalization of the boolean KWTA2 (Majani et al., 1989). k-WTA retains the k largest values of an N × 1 input vector and sets all others to be zero before feeding the vector to the next network layer, namely,\nφk(y)j = { yj , yj ∈ {k largest elements of y}, 0, Otherwise. (1)\nHere φk : RN → RN is the k-WTA function (parameterized by an integer k), y ∈ RN is the input to the activation, and φk(y)j denote the j-the element of the output φk(y) (see the rightmost subfigure of Figure 2). Note that if y has multiple elements that are equally k-th largest, we break the tie by retaining the element with smaller indices until the k slots are taken. When using k-WTA activation, we need to choose k. Yet it makes no sense to fix k throughout all layers of the neural network, because these layers often have different output dimensions; a small k to one layer’s dimension can be relatively large to the other. Instead of specifying k, we introduce\n2In this paper, we use k-WTA to refer our activation function, while using KWTA to refer the original boolean version by Majani et al. (1989).\na parameter γ ∈ (0, 1) called sparsity ratio. If a layer has an output dimension N , then its k-WTA activation has k = bγ ·Nc. Even though the sparsity ratio can be set differently for different layers, in practice we found no clear gain from introducing such a variation. Therefore, we use a fixed γ—the only additional hyperparameter needed for the neural network. In convolutional neural networks (CNN), the output of a layer is a C ×H ×W tensor. C denotes the number of output channels; H and W indicate the feature resolution. 
While there are multiple choices for applying k-WTA to this tensor (for example, one can apply k-WTA individually to each channel), empirically we found that the most effective (and conceptually the simplest) way is to treat the tensor as one long C · H · W × 1 vector input to the k-WTA activation. Using k-WTA in this way is also backed by our theoretical understanding (see Sec. 3). The runtime cost of computing a k-WTA activation is asymptotically O(N), because finding the k largest values in a list is asymptotically equivalent to finding the k-th largest value, which has O(N) complexity (Cormen et al., 2009). This cost is comparable to ReLU's O(N) cost on an N-length vector. Thus, replacing ReLU with k-WTA introduces no significant overhead.

Remark: other WTA-type activations. Relevant to k-WTA is the local Winner-Take-All (LWTA) activation (Srivastava et al., 2013; 2014), which divides each layer's output values into local groups and applies WTA to each group individually. LWTA is similar to max-pooling (Riesenhuber & Poggio, 1999) in dividing the layer output and choosing group maximums. But unlike ReLU and max-pooling, which are C0 continuous, LWTA and our k-WTA are both discontinuous with respect to the input. The differences among ReLU, max-pooling, LWTA, and k-WTA are illustrated in Figure 2. LWTA is motivated by preventing catastrophic forgetting (McCloskey & Cohen, 1989), whereas our use of k-WTA is for defending against adversarial threats. Both are discontinuous, but it remains unclear what LWTA's discontinuity properties are and how its discontinuities affect network training. Our theoretical analysis (Sec. 3), in contrast, sheds some light on these fundamental questions about k-WTA, rationalizing its ability to improve adversarial robustness. Indeed, our experiments confirm that k-WTA outperforms LWTA in terms of robustness (see Appendix D.3). WTA-type activation, although it originated decades ago and has been widely studied in computational neuroscience (Douglas et al., 1989; Douglas & Martin, 2004), remains elusive in modern neural networks. Perhaps this is because it has not demonstrated a considerable improvement to the network's standard test accuracy, though it can offer an accuracy comparable to ReLU (Srivastava et al., 2013). Our analysis, our proposed use of k-WTA, and the improvement it enables on adversarial defense may suggest a renaissance of studying WTA.

2.2 TRAINING k-WTA NETWORKS

k-WTA networks require no special treatment in training. Any optimization algorithm (such as stochastic gradient descent) for training ReLU networks can be directly used to train k-WTA networks. Our experiments have found that when the sparsity ratio γ is relatively small (≤ 0.2), the network training converges slowly. This is not a surprise. A smaller γ activates fewer neurons, effectively reducing the layer width and in turn the network size, and the stripped "subnetwork" is much less expressive (Srivastava et al., 2013). Since different training examples activate different subnetworks, collectively they make the training harder. Nevertheless, we prefer a smaller γ. As we will discuss in the next section, a smaller γ usually leads to better robustness against finding adversarial examples. Therefore, to ease the training (when γ is small), we propose an iterative fine-tuning approach, sketched below. Suppose the target sparsity ratio is γ1. We first train the network with a larger sparsity ratio γ0 using the standard training process. Then, we iteratively fine-tune the network: in each iteration, we reduce the sparsity ratio by a small δ and train the network for two epochs, and the iteration repeats until γ0 is reduced to γ1.
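The schedule just described takes only a few lines. The following sketch assumes the model uses the KWTA module from the earlier snippet and that a caller-supplied routine train_two_epochs performs the standard SGD fine-tuning; the default δ = 0.005 matches the value reported in Appendix D.1.

```python
def incremental_finetune(model, train_two_epochs, gamma0=0.2, gamma1=0.1, delta=0.005):
    """Anneal the sparsity ratio from gamma0 down to gamma1, fine-tuning
    for two epochs after each small reduction (Sec. 2.2)."""
    gamma = gamma0
    while gamma > gamma1:
        gamma = max(gamma - delta, gamma1)
        for module in model.modules():       # update every k-WTA layer in place
            if isinstance(module, KWTA):
                module.gamma = gamma
        train_two_epochs(model)              # caller-supplied standard SGD pass
    return model
```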
This incremental process introduces little training overhead, because the cost of each fine-tuning is negligible in comparison to training from scratch toward γ0. We also note that this process is optional: in practice we use it only when γ < 0.2. We show more experiments on the efficacy of the incremental training in Appendix D.2.

3 UNDERSTANDING k-WTA DISCONTINUITY

We now present our theoretical understanding of k-WTA's discontinuity behavior in the context of deep neural networks, revealing some implications for the network's adversarial robustness.

Activation pattern. To understand k-WTA's discontinuity, consider one layer outputting values x, passed through a k-WTA activation, and followed by the next layer whose linear weight matrix is W (see adjacent figure). Then, the value fed into the next activation can be expressed as Wφk(x), where φk(·) is the k-WTA function defined in (1). Suppose the vector x has length l. We define k-WTA's activation pattern under the input x as

A(x) := {i ∈ [l] | xi is one of the k largest values in x} ⊆ [l]. (2)

Here (and throughout this paper), we use [l] to denote the integer set {1, 2, ..., l}.

Discontinuity. The activation pattern A(x) is a key notion for analyzing k-WTA's discontinuity behavior. Even an infinitesimal perturbation of x may change A(x): some element i is removed from A(x) while another element j is added in. Corresponding to this change, in the evaluation of Wφk(x), the contribution of W's column vector Wi vanishes while another column Wj suddenly takes effect. It is this abrupt change that renders the result of Wφk(x) C0 discontinuous. Such a discontinuity jump can be arbitrarily large, because the column vectors Wi and Wj can differ arbitrarily. Once W is determined, the discontinuity jump depends on the values of xi and xj. As explained in Appendix B, when the discontinuity occurs, xi and xj have about the same value, which depends on the choice of the sparsity ratio γ (recall Sec. 2.1): the smaller the γ, the larger the jump. This relationship suggests that a smaller γ will make the search for adversarial examples harder. Indeed, this is confirmed through our experiments (see Appendix D.6).

Piecewise linearity. Now, consider an n-layer k-WTA network, which can be expressed as f(x) = W(1) · φk(W(2) · φk(· · ·φk(W(n)x + b(n)) · · · ) + b(2)) + b(1), where W(i) and b(i) are the i-th layer's weight matrix and bias vector, respectively. If the activation patterns of all layers are fixed, then f(x) is a linear function. When the activation pattern changes, f(x) switches from one linear function to another. Over the entire space of x, f(x) is piecewise linear. The specific activation patterns of all layers define a specific linear piece of the function, or a linear region (following the notion introduced by Montufar et al. (2014)). Conventional ReLU (or hard tanh) networks also represent piecewise linear functions, and their linear regions are joined together at their boundaries, whereas in k-WTA networks the linear regions are disconnected (see Figure 1).

Linear region density. Next, we gain some insight into the distribution of those linear regions. This is of interest because if the linear regions are densely distributed, a small ∆x perturbation from any data point x will likely cross the boundary of the linear region where x is located. Whenever such a boundary crossing occurs, the gradient becomes undefined (see Figure 3-a).
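These jumps are easy to observe numerically. The sketch below (our own illustration, not taken from the paper) traces Wφk(x(t)) along a straight line in input space and records the norm of the change between consecutive samples; spikes in the returned sequence mark the C0 discontinuities just described.

```python
import torch

def scan_discontinuity(l=256, gamma=0.1, steps=2000, seed=0):
    """Trace W @ phi_k(x(t)) along the line x(t) = (1-t)*x0 + t*x1;
    spikes in the returned differences mark C0 discontinuities."""
    torch.manual_seed(seed)
    k = int(gamma * l)
    W = torch.randn(l, l)
    x0, x1 = torch.randn(l), torch.randn(l)
    prev, jumps = None, []
    for t in torch.linspace(0, 1, steps):
        x = (1 - t) * x0 + t * x1
        top = torch.topk(x, k)
        phi = torch.zeros_like(x).scatter(0, top.indices, top.values)  # phi_k(x)
        out = W @ phi
        if prev is not None:
            jumps.append((out - prev).norm().item())
        prev = out
    return jumps
```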
For the purpose of analysis, consider an input x passing through a layer followed by a k-WTA activation (see adjacent figure). The output from the activation is φk(Wx + b). We would like to understand, when x is changed into x′, how likely the activation pattern of φk is to change. First, notice that if x′ and x satisfy x′ = c · x with some c > 0, the activation pattern remains unchanged. Therefore, we introduce a notation d(x, x′) that measures the "perpendicular" distance between x and x′, one that satisfies x′ = c · (x + d(x, x′)x⊥) for some scalar c, where x⊥ is a unit vector perpendicular to x and on the plane spanned by x and x′. With this notion, and if the elements in W are initialized by sampling from N(0, 1/l) and b is initialized as zero, we find the following property:

Theorem 1 (Dense discontinuities). Given any input x ∈ Rm and some β, and ∀x′ ∈ Rm such that d2(x, x′)/‖x‖22 ≥ β, if the condition

l ≥ Ω((m/(γβ)) · log(m/(γβ)))

is satisfied, then with probability at least 1 − 2−m, we have A(Wx + b) ≠ A(Wx′ + b).

Here l is the width of the layer, and γ is again the sparsity ratio in k-WTA. This theorem informs us that the larger the layer width l is, the smaller the β (and thus the smaller the perpendicular perturbation distance d(x, x′)) needed to trigger a change of the activation pattern; that is, as the layer width increases, the piecewise linear regions become finer (see Appendix C for the proof and more discussion). This property also echoes a similar trend in ReLU networks, as pointed out by Raghu et al. (2017).

Why is the k-WTA network trainable? While k-WTA networks are highly discontinuous, as revealed by Theorem 1 and our experiments (Figure 3-a), in practice we experience no difficulty in training these networks. Our next theorem sheds some light on the reason behind this training success.

Theorem 2. Consider N data points x1, x2, · · · , xN ∈ Rm. Suppose ∀i ≠ j, xi/‖xi‖2 ≠ xj/‖xj‖2. If l is sufficiently large, then with high probability, we have ∀i ≠ j, A(Wxi + b) ∩ A(Wxj + b) = ∅.

This theorem is stated more formally as Theorem 10 in Appendix C, together with a proof. Intuitively speaking, it states that if the network is sufficiently wide, then for any i ≠ j, the activation pattern of input xi is almost separated from that of xj. Thus, the weights for predicting xi's and xj's labels can be optimized almost independently, without changing their individual activation patterns. In practice, the activation patterns of xi and xj are not fully separated but weakly correlated. During the optimization, the activation pattern of a data point xi may change, but the chance is relatively low; a similar behavior has also been found in ReLU networks (Li & Liang, 2018; Du et al., 2018; Allen-Zhu et al., 2019a;b; Song & Yang, 2019). Further, notice that the training loss takes a summation over all training data points. This means a weight update changes only a small set of activation patterns (since the chance of having a pattern changed is low); the discontinuous change in the loss value, after taking the summation, will be negligible (see Figure 3-c). Thus, the discontinuities in k-WTA are not harmful to network training.
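Theorem 1 can also be probed empirically. The following sketch (again our own, with all parameter values chosen purely for illustration) estimates how often a perpendicular perturbation with d2(x, x′)/‖x‖22 = β changes the activation pattern as the layer width l grows.

```python
import torch

def activation_pattern(W, x, k):
    """A(Wx): indices of the k largest entries of Wx."""
    return set(torch.topk(W @ x, k).indices.tolist())

def pattern_change_rate(m=64, l=4096, gamma=0.1, beta=1e-4, trials=500):
    """Fraction of trials in which a perpendicular perturbation with
    d^2(x, x') / ||x||^2 = beta changes the activation pattern."""
    k = int(gamma * l)
    changed = 0
    for _ in range(trials):
        W = torch.randn(l, m) / l ** 0.5            # entries ~ N(0, 1/l)
        x = torch.randn(m)
        d = torch.randn(m)
        d = d - (d @ x) / (x @ x) * x               # project out the x component
        xp = x + beta ** 0.5 * x.norm() * d / d.norm()
        changed += activation_pattern(W, x, k) != activation_pattern(W, xp, k)
    return changed / trials
```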
" }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "We evaluate the robustness of k-WTA networks under adversarial attacks. Our evaluation considers multiple training methods on different network architectures (see details below). When reporting statistics, we use Arob to indicate the worst-case robust accuracy on test data under all adversarial attacks we evaluated, and Astd to indicate the accuracy on the clean test data. We use k-WTA-γ to denote k-WTA activation with sparsity ratio γ." }, { "heading": "4.1 ROBUSTNESS UNDER WHITE-BOX ATTACKS", "text": "The rationale behind k-WTA activation is to destroy network gradients, the very information needed in white-box attacks. We therefore evaluate k-WTA networks under multiple recently proposed white-box attack methods, including Projected Gradient Descent (PGD) (Madry et al., 2017), Deepfool (Moosavi-Dezfooli et al., 2016), the C&W attack (Carlini & Wagner, 2017), and the Momentum Iterative Method (MIM) (Dong et al., 2018). Since k-WTA activation can be used with almost any training method, be it adversarial training or not, we also consider multiple training methods, including natural (non-adversarial) training, adversarial training (AT) (Madry et al., 2017), TRADES (Zhang et al., 2019), and free adversarial training (FAT) (Shafahi et al., 2019b). In addition, we evaluate the robustness under transfer-based black-box (BB) attacks (Papernot et al., 2017). The black-box threat model requires no knowledge about network architecture and parameters. Thus, we use a pre-trained VGG19 network (Simonyan & Zisserman, 2014) as the source model to generate adversarial examples using PGD. As demonstrated by Su et al. (2018b), VGG networks have the strongest transferability among different architectures.

In each setup, we compare the robust accuracy of k-WTA networks with that of standard ReLU networks on three datasets: CIFAR-10, SVHN, and MNIST. Results on the former two are reported in Table 1, while the latter is reported in Appendix D.4. We use ResNet-18 for CIFAR-10 and SVHN. The perturbation range is 0.031 (CIFAR-10) and 0.047 (SVHN) for pixels ranging in [0, 1]. More detailed training and attack settings are reported in Appendix D.1. The main takeaway from these experiments (in Table 1) is that k-WTA is able to universally improve the white-box robustness, regardless of the training method. The k-WTA robustness under black-box attacks is not always significantly better than that of ReLU networks. But black-box attacks, due to the lack of network information, are generally much harder than white-box attacks. In this sense, white-box attacks make the networks more vulnerable, and k-WTA is able to improve a network's worst-case robustness. This improvement is not tied to any specific training method and is achieved with no significant overhead, just by a simple replacement of ReLU with k-WTA. Athalye et al. (2018) showed that gradient-based defenses may render the network more vulnerable under black-box attacks than under gradient-based white-box attacks. However, we have not observed this behavior in k-WTA networks. Even under the strongest black-box attack, i.e., generating adversarial examples from an independently trained copy of the target network, gradient-based attacks are still stronger (with a higher success rate) than black-box attacks (see Appendix D.3). Additional experiments include: 1) tests under transfer attacks across two independently trained k-WTA networks and across k-WTA and ReLU networks, 2) evaluation of k-WTA performance on different network architectures, and 3) comparison of k-WTA performance with LWTA (recall Sec. 2.1) performance. See Appendix D.3 for details.
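For reference, the PGD evaluation used throughout this section can be sketched in plain PyTorch as follows. This is a generic l∞ PGD loop matching the settings stated above and in Appendix D.1 (random start, 40 iterations, step size 0.003, perturbation range 0.031), not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.031, step=0.003, iters=40):
    """Untargeted l-infinity PGD with random start (Eq. 4 in Appendix A)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step * grad.sign()
        # project back into the eps-ball around x and the valid pixel range
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def robust_accuracy(model, loader):
    model.eval()
    correct = total = 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)
        with torch.no_grad():
            correct += (model(x_adv).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total
```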
4.2 VARYING SPARSITY RATIO γ AND MODEL ARCHITECTURE.

We further evaluate our method on various network architectures with different sparsity ratios γ. Figure 4 shows the standard test accuracies and robust accuracies against PGD attacks as γ decreases. To test different network architectures, we apply k-WTA to ResNet18, DenseNet121, and Wide ResNet (WRN-22-12). In each case, starting from γ = 0.2, we decrease γ using incremental fine-tuning. We then evaluate the robust accuracy on the CIFAR-10 dataset, using 20-iteration PGD attacks with a perturbation range of 0.031 for pixels ranging in [0, 1]. We find that when γ is larger than ∼0.1, reducing γ has little effect on the standard accuracy but increases the robust accuracy. When γ is smaller than ∼0.1, reducing γ drastically lowers both the standard and robust accuracies. The peaks in the Arob curves (Figure 4-right) are consistent with our theoretical understanding: Theorem 1 suggests that when l is fixed, a smaller γ tends to sparsify the linear region boundaries, exposing more gradients to the attacker. Meanwhile, as also discussed in Sec. 3, a smaller γ leads to a larger discontinuity jump and thus tends to improve the robustness." }, { "heading": "4.3 LOSS LANDSCAPE IN GRADIENT-BASED ATTACKS", "text": "We now empirically unveil why k-WTA is able to improve the network's robustness (in addition to our theoretical analysis in Sec. 3). Here we visualize the attacker's loss landscape in gradient-based attacks in order to reveal the landscape change caused by k-WTA. Similar to the analysis in Tramèr et al. (2017), we plot the attack loss of a model with respect to its input on points x′ = x + ε1g1 + ε2g2, where x is a test sample from the CIFAR-10 test set, g1 is the direction of the loss gradient with respect to the input, g2 is another random direction, and ε1 and ε2 each sweep the range [−0.04, 0.04] with 50 samples. This results in a 3D landscape plot with 2500 data points (Figure 5). As shown in Figure 5, k-WTA models (with γ = 0.1) have a highly non-convex and non-smooth loss landscape. Thus, the estimated gradient is hardly useful for adversarial searches. This explains why k-WTA models can effectively resist gradient-based attacks. In contrast, ReLU models have a much smoother loss surface, from which adversarial examples can be easily found using gradient descent. Inspecting the range of loss values in Figure 5, we find that adversarial training tends to compress the loss landscape's dynamic range in both the gradient direction and the other random direction, making the dynamic range smaller than that of the models without adversarial training. This phenomenon has been observed in ReLU networks (Madry et al., 2017; Tramèr et al., 2017). Interestingly, k-WTA models manifest a similar behavior (Figure 5-a,b). Moreover, we find that in k-WTA models a larger γ leads to a smoother loss surface than a smaller γ (see Appendix D.6 for more details)." }, { "heading": "5 CONCLUSION", "text": "This paper proposes to replace widely used activation functions with the k-WTA activation to improve a neural network's robustness against adversarial attacks. This is the only change we advocate. The underlying idea is to embrace the discontinuities introduced by k-WTA functions to make the search for adversarial examples more challenging. Our method comes almost for free, is harmless to network training, and is readily useful in the current paradigm of neural networks.

Acknowledgments.
This work was supported in part by the National Science Foundation (CAREER-1453101, 1816041, 1910839, 1703925, 1421161, 1714818, 1617955, 1740833), the Simons Foundation (#491119 to Alexandr Andoni), a Google Research Award, a Google PhD Fellowship, a Snap Research Fellowship, a Columbia SEAS CKGSB Fellowship, and SoftBank Group." }, { "heading": "Supplementary Document", "text": "Enhancing Adversarial Defense by k-Winners-Take-All" }, { "heading": "A OTHER RELATED WORK", "text": "In this section, we briefly review the key ideas of attacking neural network models and existing defense methods based on adversarial training.

Attack methods. Recent years have seen adversarial attacks studied extensively. The proposed attack methods fall under two general categories, white-box and black-box attacks. The white-box threat model assumes that the attacker fully knows the model's structure and parameters. This means that the attacker can exploit the model's gradient (with respect to the input) to find adversarial examples. A baseline of such attacks is the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2014), which constructs the adversarial example x′ of a given labeled datum (x, y) using a gradient-based rule:

x′ = x + ε · sign(∇xL(f(x), y)), (3)

where f(x) denotes the neural network model's output, L(·) is the loss function given f(x) and the input label y, and ε is the perturbation range for the allowed adversarial example. Extending FGSM, Projected Gradient Descent (PGD) (Kurakin et al., 2016) utilizes the local first-order gradient of the network in a multi-step fashion and is considered the "strongest" first-order adversary (Madry et al., 2017). In each step of PGD, the adversarial example is updated by an FGSM-style rule, namely,

x′n+1 = Π∆(x′n + ε · sign(∇xL(f(x′n), y))), (4)

where x′n is the adversarial example after n steps and Π∆(·) projects it back into an allowed perturbation range ∆ (such as an ε-ball around x under a certain distance measure). Other attacks include Deepfool (Moosavi-Dezfooli et al., 2016), C&W (Carlini & Wagner, 2017), and the momentum-based attack (Dong et al., 2018). These methods all use first-order gradient information to construct adversarial samples.

The black-box threat model is a strict subset of the white-box threat model. It assumes that the attacker has no information about the model's architecture or parameters. Some black-box attack models allow the attacker to query the victim neural network to gather (or reverse-engineer) information. By far the most successful black-box attack is the transfer attack (Papernot et al., 2017; Tramèr et al., 2017). The idea is to first construct adversarial examples on an adversarially trained network and then attack the black-box network model using these samples. There also exist some gradient-free black-box attack methods, such as the boundary attack (Brendel et al., 2017; Chen & Jordan, 2019), the one-pixel attack (Su et al., 2019), and the local search attack (Narodytska & Kasiviswanathan, 2016). Those methods rely on repeatedly evaluating the model and are not as effective as gradient-based white-box attacks.
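Continuing the sketch style used earlier, a transfer attack of the kind just described can be evaluated as follows, reusing the hypothetical pgd_attack helper from the Sec. 4.1 sketch; only torch is assumed.

```python
import torch

def transfer_attack_accuracy(source_model, target_model, loader):
    """Black-box transfer attack: craft adversarial examples on a source
    model, then measure the (unseen) target model's accuracy on them."""
    source_model.eval()
    target_model.eval()
    correct = total = 0
    for x, y in loader:
        x_adv = pgd_attack(source_model, x, y)   # white-box access to the source only
        with torch.no_grad():
            correct += (target_model(x_adv).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total
```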
Adversarial training. Adversarial training (Goodfellow et al., 2014; Madry et al., 2017; Kurakin et al., 2016; Huang et al., 2015) is by far the most successful method against adversarial attacks. It trains the network model with adversarial images generated at training time. Madry et al. (2017) showed that adversarial training in essence solves the following min-max optimization problem:

min_f E{ max_{x′∈∆} L(f(x′), y) }, (5)

where ∆ is the set of allowed perturbations of training samples, and y denotes the true label of each training sample. Recent works that achieve state-of-the-art adversarial robustness rely on adversarial training (Zhang et al., 2019; Xie et al., 2018b). However, adversarial training is notoriously slow because it requires finding adversarial samples on the fly at each training epoch. Its prohibitive cost makes adversarial training difficult to scale to large datasets such as ImageNet (Deng et al., 2009) unless enormous computation resources are available. Recently, Shafahi et al. (2019b) revised the adversarial training algorithm so that it has a training time similar to regular training, while keeping the standard and robust accuracy comparable to standard adversarial training.

Regularization. Another type of defense is based on regularizing the neural network, and many works of this type are combined with adversarial training. For example, feature denoising (Xie et al., 2018b) adds several denoising blocks to the network structure and trains the network with adversarial training. Zhang et al. (2019) explicitly added a regularization term to balance the trade-off between standard accuracy and robustness, obtaining state-of-the-art robust accuracy on CIFAR. Some other regularization-based methods require no adversarial training. For example, Ross & Doshi-Velez (2017) proposed to regularize the gradient of the model with respect to its input; Zheng et al. (2016) generated adversarial samples by adding random Gaussian noise to input data. However, these methods are shown to be brittle under stronger iterative gradient-based attacks such as PGD (Zhang et al., 2019). In contrast, as demonstrated in our experiments, our method is able to greatly improve robustness under PGD and other attacks even without adversarial training.

B DISCONTINUITY JUMP OF Wφk(x)

Consider a gradual and smooth change of the vector x. For ease of illustration, let us assume all the values in x are distinct. Because every element in x changes smoothly, when the activation pattern A(x) changes, the k-th largest and the (k+1)-th largest values in x must swap: the previously k-th largest value is removed from the activation pattern, while the previously (k+1)-th largest value is added in. Let i and j denote the indices of these two values; that is, xi is previously the k-th largest and xj is previously the (k+1)-th largest. When this swap happens, xi and xj must be infinitesimally close to each other, and we use x∗ to indicate their common value. This swap affects the computation of Wφk(x). Before the swap, xi is in the activation pattern but xj is not, so Wi takes effect but Wj does not. After the swap, Wj takes effect while Wi is suppressed. Therefore, the discontinuity jump due to this swap is (Wj − Wi)x∗. When W is determined, the magnitude of the jump depends on x∗. Recall that x∗ is the k-th largest value in x when the swap happens. Thus, it depends on k and in turn on the sparsity ratio γ: the smaller the γ, the smaller the k effectively used (for a fixed vector length). As a result, the k-th largest value becomes larger; when k = 1, the largest value of x is used as x∗." }, { "heading": "C THEORETICAL PROOFS", "text": "In this section, we will prove Theorem 1 and Theorem 2.
The formal versions of the two theorems are Theorem 9 and Theorem 10, respectively.

Notation. We use [n] to denote the set {1, 2, · · · , n}. We use 1(E) to denote an indicator variable: if the event E happens, the value of 1(E) is 1; otherwise the value of 1(E) is 0. For a weight matrix W, we use Wi to denote the i-th row of W. For a bias vector b, we use bi to denote the i-th entry of b. In this section, we show some behaviors of the k-WTA activation function. Recall that an n-layer neural network f(x) with k-WTA activation can be written as

f(x) = W(1) · φk(W(2) · φk(· · ·φk(W(n)x + b(n)) · · · ) + b(2)) + b(1),

where W(i) is the weight matrix and b(i) is the bias vector of the i-th layer, and φk(·) is the k-WTA activation function, i.e., for an arbitrary vector y, φk(y) is defined as

φk(y)j = yj if yj is one of the top-k largest values, and 0 otherwise.

For simplicity of notation, if k is clear from context, we write φ(y) for short. Notice that if there is a tie in the above definition, we assume the entry with the smaller index has the larger value. For a vector y ∈ Rl, we define the activation pattern A(y) ⊆ [l] as

A(y) = {i ∈ [l] | yi is one of the top-k largest values}.

Notice that if the activation pattern A(y) is different from A(y′), then W · φ(y) and W · φ(y′) will be in different linear regions. Actually, W · φ(y) may even represent a discontinuous function. In the next subsection, we will show that when the network is wider, the function may be more discontinuous with respect to the input." }, { "heading": "C.1 DISCONTINUITY WITH RESPECT TO THE INPUT", "text": "We only consider the activation pattern of the output of one layer. We consider the behavior of the network after the initialization of the weight matrix and the bias vector. At initialization, the entries of the weight matrix W are i.i.d. random Gaussian variables, and the bias vector is zero. We can show that if the weight matrix is sufficiently wide, then for any vector x, with high probability, for all vectors x′ whose "perpendicular" distance from x is larger than a small threshold, the activation patterns of Wx and Wx′ are different. Notice that scaling W does not change the activation pattern of Wx for any x; we can thus assume that each entry of W is a random variable with standard Gaussian distribution N(0, 1). Before we prove Theorem 9, let us prove several useful lemmas. The following several lemmas do not depend on the randomness of the weight matrix. Lemma 1 (Inputs with the same activation pattern form a convex set). Given an arbitrary weight matrix W ∈ Rl×m and an arbitrary bias vector b ∈ Rl, for any x ∈ Rm, the set of all vectors x′ ∈ Rm satisfying A(Wx′ + b) = A(Wx + b) is convex, i.e., the set {x′ ∈ Rm | A(Wx + b) = A(Wx′ + b)} is convex.

Proof. If A(Wx′ + b) = A(Wx + b), then x′ should satisfy: ∀i ∈ A(Wx + b), j ∈ [l] \ A(Wx + b), Wix′ + bi ≥ (or >) Wjx′ + bj. Notice that the inequality Wix′ + bi ≥ (or >) Wjx′ + bj defines a halfspace (Wi − Wj)x′ + (bi − bj) ≥ (or >) 0. Thus, the set {x′ ∈ Rm | A(Wx + b) = A(Wx′ + b)} is convex since it is an intersection of halfspaces.

Lemma 2 (Different patterns of input points with small angle imply different patterns of input points with large angle). Let α ∈ (0, 1).
Given an arbitrary weight matrix W ∈ Rl×m, a bias vector b = 0, and a vector x ∈ Rm with ‖x‖2 = 1, if every vector x′ ∈ Rm with ‖x′‖2 = 1 and 〈x, x′〉 = α satisfies A(Wx + b) 6= A(Wx′ + b), then for any x′′ ∈ Rm with ‖x′′‖2 = 1 and 〈x, x′′〉 < α, it satisfies A(Wx+ b) 6= A(Wx′′ + b).\nProof. We draw a line between x and x′′. There must be a point x∗ ∈ Rm on the line and 〈x, x′〉 = α, where x′ = x∗/‖x∗‖2. Since b = 0, we haveA(Wx∗+b) = A(Wx′+b) 6= A(Wx+b). Since x∗ is on the line between x and x′′, we haveA(Wx′′+b) 6= A(Wx+b) by convexity (see Lemma 1).\nLemma 3 (A sufficient condition for different patterns). Consider two vectors y ∈ Rl and y′ ∈ Rl. If ∃i ∈ A(y), j ∈ [l] \\ A(y) such that y′i < y′j , then A(y) 6= A(y′).\nProof. Suppose A(y) = A(y′). We have i ∈ A(y′). It means that y′i is one of the top-k largest values among all entries of y′. Thus y′j is also one of the top-k largest values, and j should be in A(y′) which leads to a contradiction.\nIn the remaining parts, we will assume that each entry of the weight matrix W ∈ Rl×m is a standard random Gaussian variable. Lemma 4 (Upper bound of the entires of W ). Consider a matrix W ∈ Rl×m where each entry is a random variable with standard Gaussian distribution N(0, 1). With probability at least 0.99, ∀i ∈ [l], ‖Wi‖2 ≤ 10 √ ml.\nProof. Consider a fixed i ∈ [l]. We have E[‖Wi‖22] = m. By Markov’s inequality, we have Pr[‖Wi‖22 > 100ml] ≤ 0.01/l. By taking union bound over all i ∈ [l], with probability at least 0.99, we have ∀i ∈ [l], ‖Wi‖2 ≤ 10 √ ml.\nLemma 5 (Two vectors may have different activation patterns with a good probability). Consider a matrix W ∈ Rl×m where each entry is a random variable with standard Gaussian distribution N(0, 1). Let γ ∈ (0, 0.48) be the sparsity ratio of the activation, i.e., γ = k/l. For any two vectors x, x′ ∈ Rm with ‖x‖2 = ‖x′‖2 = 1 and 〈x, x′〉 = α for some arbitrary α ∈ (0.5, 1), with probability at least 1− 2−Θ((1/α2−1)γl), A(Wx) 6= A(Wx′) and ∃i ∈ A(Wx), j ∈ [l] \\ A(Wx) such that\nWix ′ < Wjx ′ − √\n1− α2 24α · √ 2π.\nProof. Consider arbitrary two vectors x, x′ ∈ Rm with ‖x‖2 = ‖x′‖2 = 1 and 〈x, x′〉 = α. We can find an orthogonal matrix Q ∈ Rm×m such that x̃ := Qx = (1, 0, 0, · · · , 0)> ∈ Rm and x̃′ := Qx′ = (α, √ 1− α2, 0, 0, · · · , 0)> ∈ Rm. Let W̃ = WQ>. Then we have W̃ x̃ = Wx and W̃ x̃′ = Wx′. Thus, we only need to analyze the activation patterns of W̃ x̃ and W̃ x̃′. Since Q> is an orthogonal matrix and each entry of W is an i.i.d. random variable with standard Gaussian distribution N(0, 1), W̃ = WQ> is also a random matrix where each entry is an i.i.d. random variable with standard Gaussian distribution N(0, 1). Let the entries in the first column of W̃ be X1, X2, · · · , Xl and let the entries in the second column of W̃ be Y1, Y2, · · · , Yl. Then we have\nWx = W̃ x̃ = X1X2· · · Xl , Wx′ = W̃ x̃′ = αX1 + √ 1− α2Y1 αX2 + √ 1− α2Y2 · · · αXl + √ 1− α2Yl . (6) We set ε = √ 1− α2/(96α) and define R′1 < R1 < R2 < R′2 as follows:\nPr X∼N(0,1)\n[X ≥ R′2] = (1− 2ε)γ, (7)\nPr X∼N(0,1)\n[X ≥ R2] = (1− ε)γ, (8)\nPr X∼N(0,1)\n[X ≥ R1] = (1 + ε)γ, (9)\nPr X∼N(0,1)\n[X ≥ R′1] = (1 + 2ε)γ. (10)\nSince γ < 0.48 and ε ≤ 0.02, we have (1 + 2ε)γ < 0.5. It implies 0 < R′1 < R1 < R2 < R′2. Claim 3.\nR′2 −R′1 ≤ 8ε √ 2π.\nProof. By Equation (7) and Equation (10),\nPr X∼N(0,1)\n[R′1 ≤ X ≤ R′2] = 4εγ.\nDue to the density function of standard Gaussian distribution, we have\n1√ 2π ∫ R′2 R′1 e−t 2/2dt = Pr X∼N(0,1) [R′1 ≤ X ≤ R′2] = 4εγ.\nSince R′2 ≥ R′1 ≥ 0, we have ∀t ∈ [R′1, R′2], e−t 2/2 ≥ e−R′22 /2. 
Thus,\n1√ 2π · e−R ′2 2 /2(R′2 −R′1) = 1√ 2π · e−R ′2 2 /2 ∫ R′2 R′1 1dt ≤ 1√ 2π ∫ R′2 R′1 e−t 2/2dt = 4εγ.\nBy the tail bound of Gaussian distribution, we have\nPr X∼N(0,1)\n[X ≥ R′2] ≤ e−R ′2 2 /2.\nBy combining with Equation (7), we have\n(1− 2ε)γ · 1√ 2π (R′2 −R′1)\n= Pr X∼N(0,1) [X ≥ R′2] · 1√ 2π (R′2 −R′1)\n≤ e−R ′2 2 /2 · 1√\n2π (R′2 −R′1)\n≤ 4εγ, which implies\nR′2 −R′1 ≤ 4ε 1− 2ε √ 2π ≤ 8ε √ 2π,\nwhere the last inequality follows from 1− 2ε ≥ 0.5.\nClaim 4.\nPr X1,X2,··· ,Xl [ l∑ i=1 1(Xi ≥ R2) ≥ (1− ε/2)γl ] ≤ e−ε 2γl/24 (11)\nPr X1,X2,··· ,Xl [ l∑ i=1 1(Xi ≥ R1) ≤ (1 + ε/2)γl ] ≤ e−ε 2γl/18 (12)\nPr X1,X2,··· ,Xl [ l∑ i=1 1(R′2 ≥ Xi ≥ R2) ≤ εγl/2 ] ≤ e−εγl/8 (13)\nPr X1,X2,··· ,Xl [ l∑ i=1 1(R1 ≥ Xi ≥ R′1) ≤ εγl/2 ] ≤ e−εγl/8 (14)\nProof. For i ∈ [l], we have E[1(Xi ≥ R2)] = Pr[Xi ≥ R2] = (1 − ε)γ by Equation (8). By Chernoff bound, we have\nPr [ l∑ i=1 1(Xi ≥ R2) ≥ (1 + ε/2) · (1− ε)γl ] ≤ e−(ε/2) 2(1−2ε)γl/3.\nSince ε ≤ 0.02,\nPr [ l∑ i=1 1(Xi ≥ R2) ≥ (1− ε/2)γl ] ≤ e−ε 2γl/24.\nWe have E[1(Xi ≥ R1)] = Pr[Xi ≥ R1] = (1 + ε)γ by Equation (9). By Chernoff bound, we have\nPr [ l∑ i=1 1(Xi ≥ R1) ≤ (1− ε/3) · (1 + ε)γl ] ≤ e−(ε/3) 2(1+ε)γl/2.\nThus,\nPr [ l∑ i=1 1(Xi ≥ Ri) ≤ (1 + ε/2)γl ] ≤ e−ε 2γl/18\nWe have E [1(R′2 ≥ Xi ≥ R2)] = Pr[R′2 ≥ Xi ≥ R2] = εγ by Equation (7) and Equation (8). By Chernoff bound, we have\nPr [ l∑ i=1 1(R′2 ≥ Xi ≥ R2) ≤ 1/2 · εγl ] ≤ e−εγl/8\nSimilarly, we have E[1(R1 ≥ Xi ≥ R′1)] = Pr[R1 ≥ Xi ≥ R′1] = εγ by Equation (9) and Equation (10). By chernoff bound, we have\nPr X1,X2,··· ,Xl [ l∑ i=1 1(R1 ≥ Xi ≥ R′1) ≤ 1/2 · εγl ] ≤ e−εγl/8\nEquation (11) says that, with high probability, ∀i ∈ [l] with Xi ≥ R2, it has i ∈ A(Wx). Equation (12) says that, with high probability, ∀i ∈ [l] with Xi ≤ R1, it has i 6∈ A(Wx). Equation (14) (Equation (13)) says that, with high probability, there are many i ∈ [l] such that Wix ∈ [R′1, R1] (Wix ∈ [R2, R′2]). Let E = E1 ∧ E2 ∧ E3 ∧ E4, where\n• E1: ∑l i=1 1(Xi ≥ R2) ≤ (1− ε/2)γl,\n• E2: ∑l i=1 1(Xi ≥ R1) ≥ (1 + ε/2)γl,\n• E3: ∑l i=1 1(R1 ≥ Xi ≥ R′1) ≥ εγl/2,\n• E4: ∑l i=1 1(R ′ 2 ≥ Xi ≥ R2) ≥ εγl/2.\nAccording to Equation (11), Equation (12), Equation (13) and Equation (14), the probability that E happens is at least\n1− 4e−ε 2γl/24 (15)\nby union bound over Ē1, Ē2, Ē3, Ē4. Claim 5. Condition on E , the probability that ∃i ∈ [l] with Xi ∈ [R2, R′2] such that Yi < −α/ √ 1− α2 · 16ε √ 2π is at least\n1− (\n16ε · α√ 1− α2 + 1 2\n)εγl/2 .\nProof. For a fixed i ∈ [l],\nPr [ Yi ≥ −α/ √ 1− α2 · 16ε √ 2π ] = ∫ 0 −α/ √ 1−α2·16ε √ 2π 1√ 2π e−t 2/2dt+ 1 2\n≤ 1√ 2π · α/\n√ 1− α2 · 16ε √ 2π + 1\n2\n= 16ε · α√ 1− α2 + 1 2 .\nThus, according to event E4, we have\nPr [ ∀i with Xi ∈ [R2, R′2], Yi ≥ −α/ √ 1− α2 · 16ε √ 2π | E ] ≤ (\n16ε · α√ 1− α2 + 1 2\n)εγl/2 .\nClaim 6. Condition on E , the probability that ∃i ∈ [l] with Xi ∈ [R′1, R1] such that Yi ≥ 0 is at least 1− (1/2)εγl/2 .\nProof. For a fixed i ∈ [l], Pr[Yi ≤ 0] = 1/2. Thus, according to event E3, we have\nPr [∀i with Xi ∈ [R′1, R1], Yi ≤ 0 | E ] ≤ (1/2)εγl/2.\nCondition on that E happens. Because of E1, if Xi ≥ R2, Xi must be one of the top-k largest values. Due to Equation (6), we have Xi = Wix. Thus, if Xi ≥ R2, i ∈ A(Wx). 
By Claim 5, with probability at least\n1− (\n16ε · α√ 1− α2 + 1 2\n)εγl/2 , (16)\nthere is i ∈ A(Wx) such that\nWix ′ = αXi + √ 1− α2Yi\n≤ αXi + √ 1− α2 · ( − α√\n1− α2 · 16ε √ 2π ) = α(Xi − 16ε √ 2π) ≤ α(R′2 − 16ε √ 2π), (17)\nwhere the first step follows from Equation (6), the second step follows from Yi ≤ −α/ √\n1− α2 · 16ε √\n2π, and the last step follows from Xi ∈ [R2, R′2].\nBecause of E2 if Xj ≤ R1, Xj should not be one of the top-k largest values. Due to Equation (6), we have Xj = Wjx. Thus, if Xj ≤ R1, j 6∈ A(Wx). By Claim C.1, with probability at least\n1− (1/2)εγl/2 , (18) there is j 6∈ A(Wx) such that\nWjx ′ = αXj + √ 1− α2Yj ≥ αXj ≥ αR′1, (19)\nwhere the first step follows from Equation (6), the second step follows from Yj ≥ 0, and the last step follows from Xj ∈ [R′1, R1]. By Equation (19) and Equation (17), ∃i ∈ A(Wx), j ∈ [l] \\ A(Wx),\nWix ′ ≤ α(R′2 − 16ε √ 2π) ≤ α(R′1 − 8ε √ 2π) ≤Wjx′ − 8αε √ 2π\n≤Wjx′ − 4ε √ 2π = Wjx ′ − √\n1− α2 24α · √ 2π,\nwhere the second step follows from Claim 3, the forth step follows from α ≥ 0.5, and the last step follows from ε = √ 1− α2/(96α). By Lemma 3, we can conclude A(Wx) 6= A(Wx′). By Equation (15), Equation (16), Equation (18), and union bound, the overall probability is at least\n1− ( 4e−ε 2γl/24 + ( 16ε · α√\n1− α2 +\n1\n2\n)εγl/2 + ( 1\n2\n)εγl/2)\n≥ 1− ( 4e−ε 2γl/24 + ( 2\n3\n)εγl/2 + ( 1\n2\n)εγl/2)\n≥ 1− 6 · ( 2\n3 )ε2γl/24 ≥ 1− 2−Θ(( 1 α2 −1)γl),\nwhere the first and the last step follows from ε = √ 1− α2/(96α)\nNext, we will use a tool called ε-net. Definition 7 (ε-Net). For a given set S, if there is a set N ⊆ S such that ∀x ∈ S there exists a vector y ∈ N such that ‖x− y‖2 ≤ ε, then N is an ε-net of S. There is a standard upper bound of the size of an ε-net of a unit norm ball. Lemma 6 (Wojtaszczyk (1996) II.E, 10). Given a matrix U ∈ Rm×d, let S = {Uy | ‖Uy‖2 = 1}. For ε ∈ (0, 1), there is an ε-net N of S with |N | ≤ (1 + 1/ε)d. Now we can extend above lemma to the following. Lemma 7 (ε-Net for the set of points with a certain angle). Given a vector x ∈ Rm with ‖x‖2 = 1 and a parameter α ∈ (−1, 1), let S = {x′ ∈ Rm | ‖x′‖2 = 1, 〈x, x′〉 = α}. For ε ∈ (0, 1), there is an ε-net N of S with |N | ≤ (1 + 1/ε)m−1.\nProof. Let U ∈ Rm×(m−1) have orthonormal columns and Ux = 0. Then S can be represented as S = {α · x+ √\n1− α2 · Uy | y ∈ Rm−1, ‖Uy‖2 = 1}. Let S ′ = {Uy | y ∈ Rm−1, ‖Uy‖2 = 1}. According to Lemma 6, there is an ε-net N ′ of S ′ with size |N ′| ≤ (1 + 1/ε)m−1. We construct N as following:\nN = {α · x+ √\n1− α2 · z | z ∈ N ′}. It is obvious that |N | = |N ′| ≤ (1 + 1/ε)m−1. Next, we will show that N is indeed an ε-net of S. Let x′ be an arbitrary vector from S . Let x′ = α · x+ √ 1− α2 · z for some z ∈ S ′. There is a vector (α · x+ √ 1− α2 · z′) ∈ N such that z′ ∈ N ′ and ‖z − z′‖2 ≤ ε. Thus, we have\n‖x′ − (α · x+ √ 1− α2 · z′)‖2 = √ 1− α2‖z − z′‖2 ≤ ε.\nTheorem 8 (Rotating a vector a little bit may change the activation pattern). Consider a weight matrix W ∈ Rl×m where each entry is an i.i.d. sample drawn from the Gaussian distribution N(0, 1/l). Let γ ∈ (0, 0.48) be the sparsity ratio of the activation function, i.e., γ = k/l. With probability at least 0.99, it has ∀i ∈ [l], ‖Wi‖2 ≤ 10 √ m. Condition on that ∀i ∈ [l], ‖Wi‖2 ≤ 10 √ m happens, then, for any x ∈ Rm and α ∈ (0.5, 1), if\nl ≥ C · ( m+ log(1/δ)\nγ · 1 1− α2\n) · log ( m+ log(1/δ)\nγ · 1 1− α2 ) for a sufficiently large constantC, with probability at least 1−δ ·2−m, ∀x′ ∈ Rm with 〈x,x\n′〉 ‖x‖2‖x′‖2 ≤ α,\nA(Wx) 6= A(Wx′).\nProof. 
Notice that the scale of W does not affect the activation pattern of Wx for any x ∈ Rm. Thus, we assume that each entry of W is a standard Gaussian random variable in the remaining proof, and we will instead condition on ∀i ∈ [l], ‖Wi‖2 ≤ 10 √ ml. The scale of x or x′ will not affect 〈x,x ′〉 ‖x‖2‖x′‖2 . It will not affect the activation pattern either. Thus, we assume ‖x‖2 = ‖x′‖2 = 1. By Lemma 4, with probability at least 0.99, we have ∀i ∈ [l], ‖Wi‖2 ≤ 10 √ ml.\nLet\nS = {y ∈ Rm | ‖y‖2 = 1, 〈x, y〉 = α}.\nSet\nε = √ 2π(1− α2) 720α √ ml .\nBy Lemma 7, there is an ε-net N of S such that\n|N | ≤ ( 1 + 720α √ ml√\n2π(1− α2)\n)m .\nBy Lemma 5, for any y ∈ N , with probability at least\n1− 2−Θ((1/α 2−1)γl),\n∃i ∈ A(Wx), j ∈ [l] \\ A(Wx) such that\nWiy < Wjy − √\n1− α2 24α · √ 2π.\nBy taking union bound over all y ∈ N , with probability at least\n1− |N | · 2−Θ((1/α 2−1)γl)\n≥ 1− ( 1 + 720α √ ml√\n2π(1− α2)\n)m 2−Θ(( 1 α2 −1)γl)\n≥ 1− ( 1000 · √ ml√\n1− α2\n)m 2−Θ(( 1 α2 −1)γl)\n≥ 1− ( 1000 · √ ml√\n1− α2\n)m 2 −C′·( 1 α2 −1)γ·m+log(1/δ)γ · α2 1−α2 ·log ( ml 1−α2 ) // C ′ is a sufficiently large constant\n= 1− ( 1000 · √ ml√\n1− α2\n)m 2 −C′ ·(m+log(1/δ))·log ( ml 1−α2 )\n≥ 1− δ · 2−m,\nthe following event E ′ happens: ∀y ∈ N ,∃i ∈ A(Wx), j ∈ [l] \\ A(Wx) such that\nWiy < Wjy − √\n1− α2 24α · √ 2π.\nIn the remaining of the proof, we will condition on the event E ′. Consider y′ ∈ S. Since N is an ε-net of S, we can always find a y ∈ N such that\n‖y − y′‖2 ≤ ε = √\n2π(1− α2) 720α √ ml .\nSince event E ′ happens, we can find i ∈ A(Wx) and j ∈ [l] \\ A(Wx) such that\nWiy < Wjy − √\n1− α2 24α · √ 2π.\nThen, we have Wiy ′ = Wiy +Wi(y ′ − y)\n< Wjy − √\n1− α2 24α · √ 2π + ‖Wi‖2‖y′ − y‖2\n≤Wjy − √\n1− α2 24α · √ 2π + 10 √ ml · √ 2π(1− α2) 720α √ ml\n= Wjy − √\n1− α2 36α · √ 2π\n= Wjy ′ +Wj(y − y′)− √ 1− α2 36α · √ 2π ≤Wjy′ + ‖Wj‖2‖y − y′‖2 − √\n1− α2 36α · √ 2π\n≤Wjy′ + 10 √ ml · √ 2π(1− α2) 720α √ ml − √ 1− α2 36α · √ 2π ≤Wjy′ − √\n1− α2 72α · √ 2π,\nwhere the second step follows from Wiy < Wjy − √ 1− α2/(24α) · √\n2π and Wi(y′ − y) ≤ ‖Wi‖2‖y′ − y‖2, the third step follows from ‖Wi‖2 ≤ 10 √ ml and ‖y′ − y‖2 ≤√\n2π(1− α2)/(720α √ ml), the sixth step follows from Wj(y − y′) ≤ ‖Wj‖2‖y − y′‖2, and the\nseventh step follows from ‖Wi‖2 ≤ 10 √ ml and ‖y′ − y‖2 ≤ √ 2π(1− α2)/(720α √ ml). By Lemma 3, we know that A(Wx) 6= A(Wy′). Thus, ∀y′ ∈ Rm with ‖y′‖2 = 1 and 〈x, y′〉 = α, we have A(Wx) 6= A(Wy′) conditioned on E ′. By Lemma 2, we can conclude that ∀x′ ∈ Rm with ‖x′‖2 = 1 and 〈x, x′〉 ≤ α, we have A(Wx) 6= A(Wx′) conditioned on E ′.\nTheorem 9 (A formal version of Theorem 1). Consider a weight matrix W ∈ Rl×m where each entry is an i.i.d. sample drawn from the Gaussian distribution N(0, 1/l). Let γ ∈ (0, 0.48) be the sparsity ratio of the activation function, i.e., γ = k/l. With probability at least 0.99, it has ∀i ∈ [l], ‖Wi‖2 ≤ 10 √ m. Condition on that ∀i ∈ [l], ‖Wi‖2 ≤ 10 √ m happens, then, for any x ∈ Rm, if\nl ≥ C · ( m+ log(1/δ)\nγ · 1 β\n) · log ( m+ log(1/δ)\nγ · 1 β ) for some β ∈ (0, 1) and a sufficiently large constant C, with probability at least 1 − δ · 2−m, ∀x′ ∈ Rm with ‖∆x‖22/‖x‖22 ≥ β, A(Wx) 6= A(Wx′), where x′ = c · (x+ ∆x) for some scaler c, and ∆x is perpendicular to x.\nProof. If 〈x, x′〉 ≤ 0, then the statement follows from Theorem 8 directly. In the following, we consider the case 〈x, x′〉 > 0. 
If ‖∆x‖2/‖x‖22 ≥ β,\n〈x, x′〉2\n‖x‖22‖x′‖22\n= c2‖x‖42\n‖x‖22(c2(‖x‖22 + ‖∆x‖22)) = ‖x‖22 ‖x‖22 + ‖∆x‖22\n≤ ‖x‖ 2 2 ‖x‖22 + β‖x‖22 ≤ 1 1 + β .\nThus, we have the bounds:\n1\n1− 〈x,x ′〉2\n‖x‖22‖x′‖22\n≤ 1 β\n+ 1 ≤ O ( 1\nβ\n) .\nBy Theorem 8, we conclude the proof.\nExample 1. Suppose that the training data contains N points x1, x2, · · · , xN ∈ Rm (m ≥ Ω(logN)), where each entry of xi for i ∈ [N ] is an i.i.d. Bernoulli random variable, i.e., each entry is 1 with some probability p ∈ (100 log(N)/m, 0.5) and 0 otherwise. Consider a weight matrix W ∈ Rl×m where each entry is an i.i.d. sample drawn from the Gaussian distribution N(0, 1/l). Let γ ∈ (0, 0.48) be the sparsity ratio of the activation function, i.e., γ = k/l. If l ≥ Ω(m/γ · log(m/γ)), then with probability at least 0.9, ∀i, j ∈ [N ], the activation pattern of Wxi and Wxj are different, i.e., A(Wxi) 6= A(Wxj).\nProof. Firstly, let us bound ‖xi‖2. We have E[‖xi‖22] = E [ ∑m t=1 xi,t] = pm. By Bernstein inequality, we have\nPr [∣∣∣∣∣ m∑ t=1 xi,t − pm ∣∣∣∣∣ > 110pm ] ≤ 2e − (pm/10) 2/2 pm+1 3 · 1 10 pm ≤ 0.01/N.\nThus, by taking union bound over all i ∈ [N ], with probability at least 0.99, ∀i ∈ [N ], √\n0.9pm ≤ ‖xi‖2 ≤ √ 1.1pm.\nNext we consider 〈xi, xj〉. Notice that E[〈xi, xj〉] = E [ ∑m t=1 xi,txj,t] = p 2m. There are two cases.\nCase 1 (p2m > 20 logN ). By Bernstein inequality, we have\nPr [∣∣〈xi, xj〉 − p2m∣∣ > 1 2 p2m ] ≤ 2e − (p 2m/2)2/2 p2m+1 3 1 2 p2m = 2e− 3 28p 2m ≤ 0.01/N2.\nBy taking union bound over all pairs of i, j, with probability at least 0.99, ∀i 6= j, 〈xi, xj〉 ≤ 32p 2m.\nSince ‖xi‖2, ‖xj‖2 ≥ √ 0.9pm, we have\n〈xi, xj〉 ‖xi‖2‖xj‖2 ≤ 3p 2m/2 0.9pm = 5 3 p ≤ 5 6 .\nCase 2 (p2m ≤ 20 logN ). By Bernstein inequality, we have\nPr [∣∣〈xi, xj〉 − p2m∣∣ > 10 logN] ≤ 2e− (10 logN)2/2p2m+13 ·10 logN ≤ 0.01/N2.\nBy taking union bound over all pairs of i, j, with probability at least 0.99, ∀i 6= j, 〈xi, xj〉 ≤ 10 logN , Since ‖xi‖2, ‖xj‖2 ≥ √ 0.9pm ≥ √ 90 logN , we have\n〈xi, xj〉 ‖xi‖2‖xj‖2 ≤ 10 logN 90 logN = 1 9 .\nThus, with probability at least 0.98, we have ∀i 6= j, 〈xi, xj〉/(‖xi‖2‖xj‖2) ≤ 5/6. By Theorem 8, with probability at least 0.99, ∀q ∈ [l], ‖Wq‖2 ≤ 10 √ m. Condition on this event, and since ∀i 6= j we have 〈xi, xj〉/(‖xi‖2‖xj‖2) ≤ 5/6, by Theorem 8 again and union bound over all i ∈ [N ], with probability at least 0.99, ∀i 6= j,A(Wxi) 6= A(Wxj)." }, { "heading": "C.2 DISJOINTNESS OF ACTIVATION PATTERNS OF DIFFERENT INPUT POINTS", "text": "Let X1, X2, · · · , Xm be i.i.d. random variables drawn from the standard Gaussian distribution N(0, 1). Let Z = ∑m i=1X 2 i . We use the notation χ 2 m to denote the distribution of Z. If m is clear in the context, we just use χ2 for short. Lemma 8 (A property of χ2 distribution). Let Z be a random variable with χ2m m (m ≥ 2) distribution. Given arbitrary ε, η ∈ (0, 1), if R is sufficiently large then\nPr[Z ≥ (1 + ε)R]/Pr[(1 + ε)R ≥ Z ≥ R] ≤ η.\nProof. Let R be a sufficiently large number such that:\n• eεR/2 ≥ 4ε .\n• eεR/8 ≥ Rm/2−1. • eεR/4 ≥ 169 · 1 η .\nLet ξ = ε/4. By the density function of χ2 distribution, we have\nPr[R ≤ Z ≤ (1 + ε)R] = 1 2m/2Γ(m/2) ∫ (1+ε)R R tm/2−1e−t/2dt,\nand\nPr[Z ≥ (1 + ε)R] = 1 2m/2Γ(m/2) ∫ ∞ (1+ε)R tm/2−1e−t/2dt,\nwhere Γ(·) is the Gamma function, and for integer m/2, Γ(m/2) = (m/2−1)(m/2−2) · · · ·2 ·1 = (m/2− 1)!. 
By our choice of R, we have\nPr[R ≤ Z ≤ (1 + ε)R] ≥ 1 2m/2Γ(m/2) ∫ (1+ε)R R e−t/2dt\n= 1 2m/2Γ(m/2) · 2 ( e−R/2 − e−(1+ε)R/2 ) ≥ 1\n2m/2Γ(m/2) · 2(1− ξ) · e−R/2,\nwhere the first step follows from ∀t ≥ R, tm/2−1 ≥ 1, and the third step follows from\ne−(1+ε)R/2\ne−R/2 = e−εR/2 ≤ ξ.\nWe also have:\nPr[Z ≥ (1 + ε)R] ≤ 1 2m/2Γ(m/2) ∫ +∞ (1+ε)R e−(1−ξ)t/2dt\n= 1 2m/2Γ(m/2) · 2 1− ξ · e−(1−ξ)(1+ε)R/2\n≤ 1 2m/2Γ(m/2) · 2 1− ξ · e−(1+ε/2)R/2,\nwhere the first step follos from ∀t ≥ R, tm/2−1 ≤ eξt/2, and the third step follows from (1− ξ)(1 + ε) ≥ (1 + ε/2). Thus, we have\nPr[Z ≥ (1 + ε)R] Pr[(1 + ε)R ≥ Z ≥ R] ≤ 1 (1− ξ)2 e−εR/4 ≤ 16 9 e−εR/4 ≤ η.\nLemma 9. Consider x, y, z ∈ Rm. If 〈x,y〉‖x‖2‖y‖2 ≤ α, 〈x,z〉 ‖x‖2‖z‖2 ≥ β for some α, β ≥ 0, then\n〈y,z〉 ‖y‖2‖z‖2 ≤ α +\n√ 1− β2. Furthermore, if β = 2+α+ √ 2−α2\n4 , then 〈y,z〉 ‖y‖2‖z‖2 ≤ (1 − εα)β, where\nεα ∈ (0, 1) only depends on α.\nProof. Without loss of generality, we suppose ‖x‖2 = ‖y‖2 = ‖z‖2 = 1. We can decompose y as ax+ y′ where y′ is perpendicular to x. We can decompose z as b1x+ b2y′/‖y′‖2 + z′ where z′ is perpendicular to both x and y′. Then we have:\n〈y, z〉 = ab1 + b2‖y′‖2 ≤ α+ √ 1− β2,\nwhere the last inequality follows from 0 ≤ b1 ≤ 1, a ≤ α, and b2 ≤ √ 1− b21 ≤ √\n1− β2, 0 ≤ ‖y′‖2 ≤ 1.\nBy solving β ≥ α+ √ 1− β2, we can get β ≥ α+ √\n2−α2 2 . Thus, if we set\nβ = 1 + α+\n√ 2−α2 2\n2 , β should be strictly larger than α+ √ 1− β2, and the gap only depends on α.\nLemma 10. Give x ∈ Rm, let y ∈ Rm be a random vector, where each entry of y is an i.i.d. sample drawn from the standard Gaussian distribution N(0, 1). Given β ∈ (0.5, 1), Pr[〈x, y〉/(‖x‖2‖y‖2) ≥ β] ≥ 1/(1 + 1/ √ 2(1− β))m.\nProof. Without loss of generality, we can assume ‖x‖2 = 1. Let y′ = y/‖y‖2. Since each entry of y is an i.i.d. Gaussian variable, y′ is a random vector drawn uniformly from a unit sphere. Notice that if 〈x, y′〉 ≥ β, then ‖x−y′‖2 ≤ √ 2(1− β). Let C = {z ∈ Rm | ‖z‖2 = 1, ‖z−x‖2 ≤ √ 2(1− β)} be a cap, and let S = {z ∈ Rm | ‖z‖2 = 1} be the unit sphere. Then we have\nPr[〈x, y′〉 ≥ β] = area(C)/area(S).\nAccording to Lemma 6, there is an √ 2(1− β)-net N with |N | ≤ (1 + 1/ √\n2(1− β))m. If we put a cap centered at each point in N , then the whole unit sphere will be covered. Thus, we can conclude\nPr[〈x, y′〉 ≥ β] ≥ 1/(1 + 1/ √ 2(1− β))m.\nTheorem 10 (A formal version of Theorem 2). Consider N data points x1, x2, · · · , xN ∈ Rm and a weight matrix W ∈ Rl×m where each entry of W is an i.i.d. sample drawn from the Gaussian distribution N(0, 1/l). Suppose ∀i 6= j ∈ [N ], 〈xi, xj〉/(‖xi‖2‖xj‖2) ≤ α for some α ∈ (0.5, 1). Fix k ≥ 1 and δ ∈ (0, 1), if l is sufficiently large, then with probability at least 1− δ,\n∀i, j ∈ [N ],A(Wxi) ∩ A(Wxj) = ∅.\nProof. Notice that the scale of W and x1, x2, · · · , xN do not affect either 〈xi, xj〉/(‖xi‖2‖xj‖2) or the activation pattern. Thus, we can assume ‖x1‖2 = ‖x2‖2 = · · · = ‖xN‖2 = 1 and each entry of W is an i.i.d. standard Gaussian random variable. Let β = 2+α+ √ 2−α2\n4 and εα be the same as mentioned in Lemma 9. Set ε and β ′ as\nε = 1 β−1\n2 , β ′ = (1 + ε)β.\nNow, set\nη = δ/100 100k log(N/δ) · (1 + 2/ √ 2(1− β′))m ,\nand let R satisfies\nPr Z∼χ2m [Z ≥ (1 + ε)2R2] = δ/100 l .\nAccording to Lemma 8, if l is sufficiently large, then R is sufficiently large such that\nPr Z∼χ2m [Z ≥ (1 + ε)2R2]/ Pr Z∼χ2m [(1 + ε)2R2 ≥ Z ≥ R2] ≤ η.\nNotice that for t ∈ [l], ‖Wt‖22 is a random variable with χ2m distribution. Thus, Pr[‖Wt‖2 ≥ (1 + ε)R] = δ/100l . 
By taking union bound over all t ∈ [l], with probability at least 1 − δ/100, ∀t ∈ [l], ‖Wt‖2 ≤ (1 + ε)R. In the remaining of the proof, we will condition on that ∀t ∈ [l], ‖Wt‖2 ≤ (1 + ε)R. Consider i, j ∈ [N ], t ∈ [l], if Wtxi > β′R, then we have\nWtxi ‖Wt‖2\n> β′R\n(1 + ε)R ≥ β′/(1 + ε) = β.\nDue to Lemma 9, we have\nWtxj ‖Wt‖2 < (1− εα)β.\nThus,\nWtxj < (1− εα)β‖Wt‖2 ≤ (1− εα)β(1 + ε)R ≤ (1− εα)β′R. (20)\nNotice that for i ∈ [N ], t ∈ [l], we have\nPr[Wtxi > β ′R] ≥ Pr[‖Wt‖2 ≥ R] Pr [ Wtxi ‖Wt‖2 ≥ β′ ]\n≥ δ/100 l · 1 η · 1 (1 + 1/ √ 2(1− β′))m\n≥ 1 l · 100k log(N/δ).\nBy Chernoff bound, with probability at least 1− δ/(100N),\nl∑ t=1 1(Wtxi > β ′R) ≥ k.\nBy taking union bound over i ∈ [N ], with probability at least 1− δ/100, ∀i ∈ [N ],\nl∑ t=1 1(Wtxi > β ′R) ≥ k.\nThis implies that ∀i ∈ [N ], if t ∈ A(Wxi), then Wtxi > β′R. Due to Equation (20), ∀j ∈ [N ], we have Wtxj < β′R which implies that t 6∈ A(Wxj). Thus, with probability at least 1− δ/50 ≥ 1− δ probability, ∀i 6= j, A(Wxi) ∩ A(Wxj) = ∅.\nRemark 1. Consider any x1, x2, · · · , xN ∈ Rm with ‖x1‖2 = ‖x2‖2 = · · · = ‖xN‖2 = 1. If ∀i 6= j ∈ [N ], 〈xi, xj〉 ≤ α for some α ∈ (0.5, 1), then |N | ≤ (1 + 2/ √ 2(1− α))m.\nProof. Since 〈xi, xj〉 ≤ α, ‖xi − xj‖22 = ‖xi‖22 + ‖xj‖22 − 2〈xi, xj〉 ≥ 2− 2α. Let S be the unit sphere, i.e., S = {x ∈ Rm | ‖x‖2 = 1}. Due to Lemma 6, there is a ( √ 2(1− α)/2)-net N of S\nwith size at most |N | ≤ (1 + 2/ √\n2(1− α))m. Consider xi, xj , and y ∈ N . By triangle inequality, if ‖xi − y‖2 < √ 2(1− α)/2, then ‖xj − y‖2 > √ 2(1− α)/2 due to ‖xi − xj‖2 ≥ √ 2(1− α).\nSince N is a net of S , for each xi, we can find a y ∈ N such that ‖xi − y‖2 < √\n2(1− α)/2. Thus, we can conclude N ≤ |N | ≤ (1 + 2/ √ 2(1− α))m.\nTheorem 11. Consider N data points x1, x2, · · · , xN ∈ Rm with their corresponding labels z1, z2, · · · , zN ∈ R and a weight matrixW ∈ Rl×m where each entry ofW is an i.i.d. sample drawn from the Gaussian distribution N(0, 1/l). Suppose ∀i 6= j ∈ [N ], 〈xi, xj〉/(‖xi‖2‖xj‖2) ≤ α for some α ∈ (0.5, 1). Fix k ≥ 1 and δ ∈ (0, 1), if l is sufficiently large, then with probability at least 1− δ, there exists a vector v ∈ Rl such that\n∀i ∈ [N ], 〈v, φk(Wxi)〉 = zi.\nProof. Due to Theorem 10, with probability at least 1 − δ, ∀i 6= j, A(Wxi) ∩ A(Wxj) = ∅. Let t1, t2, · · · , tN ∈ [l] such that ti ∈ A(Wxi). Then ti 6∈ A(Wxj) for j 6= i. For each entry vt, if t = ti for some i ∈ [N ], then set vt = zi/(Wtxi). Then for i ∈ [N ], we have\n〈v, φk(Wxi)〉 = ∑\nt∈A(Wxi)\nvt ·Wtxi = zi/(Wtixi) ·Wtixi = zi." }, { "heading": "D ADDITIONAL EXPERIMENTAL RESULTS", "text": "This section presents details of our experiment settings and additional results for evaluating and empirically understanding the robustness of k-WTA networks." }, { "heading": "D.1 EXPERIMENT SETTINGS", "text": "First, we describe the details of setting up the experiments described in Sec. 4. To compare k-WTA networks with their ReLU counterparts, we replace all ReLU activations in a network with k-WTA activation, while retaining all other modules (such as BatchNorm, Convolution, and pooling). To test on different network architectures, including ResNet18, DenseNet121, and Wide ResNet, we use the standard implementations that are publicly available3. All experiments are conducted using PyTorch framework.\nTraining setups. We follow the same training procedure on CIFAR-10 and SVHN datasets. All the ReLU networks are trained with stochastic gradient descent (SGD) method with momentum=0.9. 
We use a learning rate of 0.1 from the first to the 50-th epoch and 0.01 from the 50-th to the 80-th epoch. To compare with ReLU networks, the k-WTA networks are trained in the same way as the ReLU networks. All networks are trained with a batch size of 256. For k-WTA networks with a sparsity ratio γ = 0.1, when adversarial training is not used, we train them incrementally (recall Sec. 2.2), starting with γ = 0.2 for 50 epochs with SGD (using learning rate 0.1, momentum 0.9) and then decreasing γ by 0.005 every 2 epochs until γ reaches 0.1. When adversarial training is enabled, we use an untargeted PGD attack with 8 iterations to construct adversarial examples. To train networks with TRADES (Zhang et al., 2019), we use the implementation of its original paper4 with the parameter 1/λ = 6, a value that reportedly leads to the best robustness according to the paper. To train networks with the free adversarial training method (Shafahi et al., 2019b), we implement the training algorithm following the original paper. We set the parameter m = 8 as suggested in the paper.
3https://github.com/kuangliu/pytorch-cifar
4https://github.com/yaodongyu/TRADES

Attack setups. All attacks are evaluated under the ℓ∞ metric, with perturbation size 0.031 (CIFAR-10) and 0.047 (SVHN) for pixels ranging in [0, 1]. We use Foolbox (Rauber et al., 2017), a third-party toolbox for evaluating adversarial robustness. We use the following setups for generating adversarial examples in the various attack methods. For the PGD attack, we use 40 iterations with random start and a step size of 0.003. For the C&W attack, we set the binary search steps to 5, the maximum number of iterations to 20, the learning rate to 0.01, and the initial constant to 0.01. For Deepfool, we use 20 steps and 10 sub-samples in its configuration. For the momentum attack, we set the step size to 0.003 and the number of iterations to 20. All other parameters are set to Foolbox's default values." }, { "heading": "D.2 EFFICACY OF INCREMENTAL TRAINING", "text": "We now report additional experiments to demonstrate the efficacy of the incremental fine-tuning method (described in Sec. 2.2). As shown in Figure 6 and described in its caption, models trained with incremental fine-tuning (denoted as w/ FT in the plots' legends) perform better in terms of both standard accuracy (denoted as std in the plots' legends) and robust accuracy (denoted as Rob in the plots) when the k-WTA sparsity γ < 0.2, suggesting that fine-tuning is worthwhile when γ is small." }, { "heading": "D.3 ADDITIONAL RESULTS ON CIFAR-10", "text": "Tests on different network architectures. We evaluate the robustness of k-WTA on different network architectures, including ResNet-18, DenseNet-121, and WideResNet-22-10. The results are reported in Table 2, where, similar to the notation used in Table 1 of the main text, Arob is calculated as the worst-case robustness, i.e., under the most effective attack among PGD, C&W, Deepfool, and MIM. The training and attack settings are the same as in the other experiments described in Sec. D.1. As shown in Table 2, while the standard and robust accuracies, Astd and Arob, vary across network architectures, k-WTA networks consistently improve the worst-case robustness Arob over ReLU networks, no matter what network architecture and training method are used.

Comparison with LWTA. We additionally compare k-WTA to the LWTA activation (Srivastava et al., 2013; 2014). For fair comparisons, we use the same sparsity ratio γ in both k-WTA and LWTA.
As shown in Table 2, on all network architectures and training methods we tested, k-WTA networks consistently have better robustness than LWTA networks (in terms of both Astd and Arob). These results suggest that k-WTA is more suitable than LWTA for defending against adversarial attacks.

Transfer attack. Since a k-WTA network is architecturally similar to its ReLU counterpart (with the only difference being the activation), we evaluate their robustness under (black-box) transfer attacks across k-WTA and ReLU networks. To this end, we build a ReLU and a k-WTA-0.1 network on ResNet-18, and train both networks with natural (non-adversarial) training as well as adversarial training. This gives us four different models, denoted (in Table 3) as ReLU, k-WTA-0.1, ReLU (AT), and k-WTA-0.1 (AT). We then launch transfer attacks across each pair of models. We also consider by far the strongest black-box attack (according to Papernot et al. (2017)): for the same model, for example a k-WTA-0.1 network optimized by adversarial training, we train two independent versions, each with a different random initialization, and apply transfer attacks across the two versions. The results are reported in Table 3, where each row corresponds to a target (attacked) model, and each column corresponds to a source model from which the adversarial examples are generated. On the diagonal of Table 3, each entry corresponds to the robustness under the aforementioned transfer attacks across the two versions of the same model. The results suggest that 1) it is more difficult to transfer-attack k-WTA networks than ReLU networks using adversarial examples from other models, and 2) it is also more difficult to use adversarial examples of a k-WTA network to attack other models. In a sense, the adversarial examples of a k-WTA network tend to be "disjoint" from the adversarial examples of a ReLU network, despite their architectural similarity. Inspecting the diagonal entries of Table 3, we also find that k-WTA networks are more robust than their ReLU counterparts under the strongest black-box attack (Papernot et al., 2017) (i.e., transfer attacks across two different versions of the same model)." }, { "heading": "D.4 MNIST RESULTS", "text": "On the MNIST dataset, we conduct experiments with an adversarial perturbation size of 0.3 for pixels ranging in [0, 1]. We use stochastic gradient descent (SGD) with a learning rate of 0.01 and momentum of 0.9 to train a 3-layer CNN. The training takes 20 epochs for all the methods we evaluate. The robust accuracy is evaluated under PGD attacks that take 20 iterations with random initialization and a step size of 0.03. The results are summarized in Table 4. Again, k-WTA activation consistently improves robustness under all the different training methods. Even with natural (non-adversarial) training, the resulting k-WTA network still has 62.2% robust accuracy, significantly outperforming the ReLU network." }, { "heading": "D.5 ROBUSTNESS WITH RESPECT TO NATURAL PERTURBATIONS", "text": "We also evaluate the robustness of k-WTA networks under (non-adversarial) natural perturbations. We evaluated various types of perturbations, including adding Gaussian noise to the input image (std = 0.05/0.1), random translation (maximum 5 pixels), random rotation (maximum 10 degrees), and color jittering (i.e., randomly changing the brightness, contrast, and saturation of an image, with a maximum perturbation of 0.4), following Hendrycks & Dietterich (2019).
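For concreteness, the perturbation suite just listed could be re-created with standard torchvision transforms as below. The parameters shown are those stated in the text; anything else (e.g., the 32 × 32 CIFAR image size used to convert 5 pixels into a translation fraction) is our assumption.

```python
import torch
from torchvision import transforms

def gaussian_noise(x, std=0.05):                 # std = 0.05 or 0.1 in Table 5
    return (x + std * torch.randn_like(x)).clamp(0, 1)

translate = transforms.RandomAffine(degrees=0, translate=(5 / 32, 5 / 32))  # up to 5 px on 32x32
rotate = transforms.RandomRotation(degrees=10)   # at most 10 degrees
jitter = transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4)
```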
The results are summarized in Table 5, in which all models are ResNet-18. We find that under all the tested perturbations, the accuracy drops in k-WTA networks are no worse than those in ReLU networks. Note that in these tests, all our models are trained with standard data augmentations (e.g., random crop and random flip); they are not specifically trained to withstand the tested perturbations. We also highlight an interesting finding here. Adding Gaussian noise leads to a large accuracy drop (e.g., from 92.9% to 69.7%/27.0% as shown in the table) on the naturally trained ReLU network, but in k-WTA networks (especially k-WTA-0.1), the corresponding accuracy drop is much smaller (e.g., from 89.3% to 80.2%/50.9%). We conjecture that the dense discontinuities in k-WTA networks (recall Figure 5) effectively add noise to the input distribution, thus making the model more robust against input noise." }, { "heading": "D.6 LOSS LANDSCAPE VISUALIZATION", "text": "In addition to the experiments shown in Figure 5 and Sec. 4.3 of the main text, we further visualize the loss landscapes of k-WTA networks when different sparsity ratios γ are used. The plots are shown in Figure 7, produced in the same way as Figure 5 described in Sec. 4.3. As analyzed in Sec. 3, a larger γ tends to smooth the loss surface of the k-WTA network with respect to the input, while a smaller γ renders the loss surface more discontinuous and “spiky”. In addition, adversarial training tends to reduce the range of the loss values (a similar phenomenon in ReLU networks has already been reported by Madry et al. (2017) and Tramèr et al. (2017)), but that does not mean that the loss surface becomes smoother; the loss surface remains spiky." } ]
2020
ENHANCING ADVERSARIAL DEFENSE BY k-WINNERS-TAKE-ALL
SP:7dd326afe8e4e148955d98fb30d561b9e6be5ba9
[ "This paper studies the problem of leveraging past experience to quickly solve new control tasks. The starting point (and perhaps the main contribution) is the observation that some tasks have similar high-level goals, while differing in how those goals are achieved. To that end, the paper introduces an meta-RL algorithm that, given a new task, attempts to solve it by adapting a high-level, goal-setting module, and learn a new, low-level policy to reach each commanded goal. The proposed method might be viewed as a combination of PEARL [Rakelly 19] and HAC [Levy 19]. The proposed method is compared against state-of-the-art hierarchical RL and meta-RL methods on four robotic manipulation tasks. The proposed method outperforms the baselines on each task.", "In this paper, the authors focus on the problem of meta-reinforcement learning (meta-RL). Specifically, the authors consider the setting of meta-RL for goal reaching tasks where each task corresponds to an unknown goal. Existing meta-RL algorithms directly train for a policy that output low level actions, which might be inefficient in this goal-reaching setting. In this paper, the authors combine the hierarchical RL framework of HAC[1] with the probabilistic task context inference method of PEARL[2], and propose the meta-goal generation for hierarchical RL (MGHRL) algorithm. In this algorithm, a two layer hierarchical policy is used where the high level policy generate goals for the low level goal-reaching policy to reach. In order to adapt to an unknown goal, the high level policy is conditioned on the output of a task inference module to generate goals for the unknown ground truth goal. The goal-reaching policy would then use the generated goal to interact with the environment." ]
Meta reinforcement learning (meta-RL) is able to accelerate the acquisition of new tasks by learning from past experience. Current meta-RL methods usually learn to adapt to new tasks by directly optimizing the parameters of policies over primitive actions. However, for complex tasks which require sophisticated control strategies, it would be quite inefficient to directly learn such a meta-policy. Moreover, this problem can become more severe, and such methods may even fail, in sparse reward settings, which are quite common in practice. To this end, we propose a new meta-RL algorithm called meta goal-generation for hierarchical RL (MGHRL) by leveraging the hierarchical actor-critic framework. Instead of directly generating policies over primitive actions for new tasks, MGHRL learns to generate high-level meta strategies over subgoals given past experience and leaves the rest of how to achieve the subgoals as independent RL subtasks. Our empirical results on several challenging simulated robotics environments show that our method enables more efficient and effective meta-learning from past experience and outperforms state-of-the-art meta-RL and hierarchical RL methods in sparse reward settings.
[]
[ { "authors": [ "Marcin Andrychowicz", "Dwight Crow", "Alex Ray", "Jonas Schneider", "Rachel Fong", "Peter Welinder", "Bob McGrew", "Josh Tobin", "Pieter Abbeel", "Wojciech Zaremba" ], "title": "Hindsight experience replay", "venue": "In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Pierre-Luc Bacon", "Jean Harb", "Doina Precup" ], "title": "The option-critic architecture", "venue": "In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Andrew G. Barto", "Sridhar Mahadevan" ], "title": "Recent advances in hierarchical reinforcement learning", "venue": "Discrete Event Dynamic Systems,", "year": 2003 }, { "authors": [ "Yoshua Bengio", "Samy Bengio", "Jocelyn Cloutier" ], "title": "Learning a synaptic learning", "venue": "rule. IJCNN-91Seattle International Joint Conference on Neural Networks, ii:969", "year": 1991 }, { "authors": [ "Peter Dayan", "Geoffrey E. Hinton" ], "title": "Feudal reinforcement learning", "venue": "In Advances in Neural Information Processing Systems 5, [NIPS Conference],", "year": 1992 }, { "authors": [ "Thomas G. Dietterich" ], "title": "Hierarchical reinforcement learning with the MAXQ value function decomposition", "venue": "J. Artif. Intell. Res.,", "year": 2000 }, { "authors": [ "Yan Duan", "John Schulman", "Xi Chen", "Peter L. Bartlett", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Rl$ˆ2$: Fast reinforcement learning via slow reinforcement learning", "venue": "CoRR, abs/1611.02779,", "year": 2016 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Kevin Frans", "Jonathan Ho", "Xi Chen", "Pieter Abbeel", "John Schulman" ], "title": "Meta learning shared hierarchies", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In 2nd International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Sergey Levine", "Chelsea Finn", "Trevor Darrell", "Pieter Abbeel" ], "title": "End-to-end training of deep visuomotor policies", "venue": "J. Mach. Learn. Res.,", "year": 2016 }, { "authors": [ "Andrew Levy", "George Konidaris", "Robert Platt Jr.", "Kate Saenko" ], "title": "Learning multi-level hierarchies with hindsight", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Nikhil Mishra", "Mostafa Rohaninejad", "Xi Chen", "Pieter Abbeel" ], "title": "A simple neural attentive metalearner", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A. Rusu", "Joel Veness", "Marc G. Bellemare", "Alex Graves", "Martin A. 
Riedmiller", "Andreas Fidjeland", "Georg Ostrovski", "Stig Petersen", "Charles Beattie", "Amir Sadik", "Ioannis Antonoglou", "Helen King", "Dharshan Kumaran", "Daan Wierstra", "Shane Legg", "Demis Hassabis" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Ofir Nachum", "Shixiang Gu", "Honglak Lee", "Sergey Levine" ], "title": "Data-efficient hierarchical reinforcement learning", "venue": "In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Ronald Parr", "Stuart J. Russell" ], "title": "Reinforcement learning with hierarchies of machines", "venue": "In Advances in Neural Information Processing Systems 10, [NIPS Conference],", "year": 1997 }, { "authors": [ "Matthias Plappert", "Marcin Andrychowicz", "Alex Ray", "Bob McGrew", "Bowen Baker", "Glenn Powell", "Jonas Schneider", "Josh Tobin", "Maciek Chociej", "Peter Welinder", "Vikash Kumar", "Wojciech Zaremba" ], "title": "Multi-goal reinforcement learning: Challenging robotics environments and request for research", "venue": "CoRR, abs/1802.09464,", "year": 2018 }, { "authors": [ "Kate Rakelly", "Aurick Zhou", "Chelsea Finn", "Sergey Levine", "Deirdre Quillen" ], "title": "Efficient offpolicy meta-reinforcement learning via probabilistic context variables", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Sachin Ravi", "Hugo Larochelle" ], "title": "Optimization as a model for few-shot learning", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Jonas Rothfuss", "Dennis Lee", "Ignasi Clavera", "Tamim Asfour", "Pieter Abbeel" ], "title": "Promp: Proximal meta-policy search", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Adam Santoro", "Sergey Bartunov", "Matthew Botvinick", "Daan Wierstra", "Timothy P. Lillicrap" ], "title": "Meta-learning with memory-augmented neural networks", "venue": "In Proceedings of the 33nd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Juergen Schmidhuber" ], "title": "Evolutionary principles in self-referential learning", "venue": null, "year": 1987 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael I. Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In Proceedings of the 32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Bradly C. Stadie", "Ge Yang", "Rein Houthooft", "Xi Chen", "Yan Duan", "Yuhuai Wu", "Pieter Abbeel", "Ilya Sutskever" ], "title": "Some considerations on learning to explore via meta-reinforcement learning", "venue": "CoRR, abs/1803.01118,", "year": 2018 }, { "authors": [ "Richard S. Sutton", "Doina Precup", "Satinder P. Singh" ], "title": "Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning", "venue": "Artif. Intell.,", "year": 1999 }, { "authors": [ "Sebastian Thrun", "Lorien Y. 
Pratt" ], "title": "Learning to learn", "venue": "In Springer US,", "year": 1998 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Alexander Sasha Vezhnevets", "Simon Osindero", "Tom Schaul", "Nicolas Heess", "Max Jaderberg", "David Silver", "Koray Kavukcuoglu" ], "title": "Feudal networks for hierarchical reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Tim Lillicrap", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Jane X. Wang", "Zeb Kurth-Nelson", "Dhruva Tirumala", "Hubert Soyer", "Joel Z. Leibo", "Rémi Munos", "Charles Blundell", "Dharshan Kumaran", "Matthew Botvinick" ], "title": "Learning to reinforcement learn", "venue": "CoRR, abs/1611.05763,", "year": 2016 }, { "authors": [ "Tianbing Xu", "Qiang Liu", "Liang Zhao", "Jian Peng" ], "title": "Learning to explore via meta-policy gradient", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 } ]
[ { "heading": null, "text": "Meta reinforcement learning (meta-RL) is able to accelerate the acquisition of new tasks by learning from past experience. Current meta-RL methods usually learn to adapt to new tasks by directly optimizing the parameters of policies over primitive actions. However, for complex tasks which requires sophisticated control strategies, it would be quite inefficient to directly learn such a meta-policy. Moreover, this problem can become more severe and even fail in spare reward settings, which is quite common in practice. To this end, we propose a new meta-RL algorithm called meta goal-generation for hierarchical RL (MGHRL) by leveraging hierarchical actor-critic framework. Instead of directly generate policies over primitive actions for new tasks, MGHRL learns to generate high-level meta strategies over subgoals given past experience and leaves the rest of how to achieve subgoals as independent RL subtasks. Our empirical results on several challenging simulated robotics environments show that our method enables more efficient and effective meta-learning from past experience and outperforms state-of-the-art meta-RL and Hierarchical-RL methods in sparse reward settings." }, { "heading": "1 INTRODUCTION", "text": "Deep Reinforcement Learning (DRL) has recently shown a great success on a wide range of tasks, ranging from games (Mnih et al., 2015) to robotics control (Levine et al., 2016; Bengio & LeCun, 2016). However, for more complex problems with larger state and action spaces or sparse reward settings, traditional DRL methods hardly works. Hierarchical reinforcement learning (HRL) in which multiple layers of policies are trained to learn to operate on different levels of temporal abstraction, has long held the promise to learn such difficult tasks (Dayan & Hinton, 1992; Parr & Russell, 1997; Barto & Mahadevan, 2003). By decomposing a complex problem into subproblems, HRL significantly reduces the difficulty of solving specific task. Learning multiple levels of policies in parallel is challenging due to non-stationary state transition functions. Recent HRL approaches (Nachum et al., 2018; Levy et al., 2019) use states as goals directly, allowing simple and fast training of the lower layer.\nHuman intelligence is remarkable for their fast adaptation to many new situations using the knowledge learned from past experience. However, agents trained by conventional DRL methods mentioned above can only learn one separate policy per task, failing to generalize to new tasks without additional large amount of training data. Meta reinforcement learning (meta-RL) addresses such problems by learning how to learn. Given a number of tasks with similar structures, meta-RL methods enable agents learn such structure from previous experience on many tasks. Thus when encountering a new task, agents can quickly adapt to it with only a small amount of experience.\nMost current meta-RL methods leverage experience from previous tasks to adapt to new tasks by directly learn the policy parameters over primitive action space. (Finn et al., 2017; Rakelly et al., 2019). Such approaches suffer from two problems: (i) For complex tasks which requires sophisticated control strategies, it would be quite inefficient to directly learn such policy with one nonlinear function approximator and the adaptation to new tasks is prone to be inaccurate. This problem can become more severe in spare reward settings. 
(ii) When the task distribution is much wider (e.g., riding a bicycle as the meta-train task and riding a motorcycle as the meta-test task), these methods can hardly be effective, since the primitive action execution mechanisms are entirely different although the tasks may share a similar high-level strategy. Moreover, existing meta-RL methods perform badly in sparse reward settings, which are quite common in the real world.\nIn this paper, we aim at tackling the problems mentioned above by proposing an efficient hierarchical meta-RL method that realizes meta learning of high-level goal generation and leaves the learning of the low-level policy to independent RL. Intuitively, this is quite similar to how a human being behaves: we usually transfer the overall understanding of similar tasks rather than remember specific actions. Our meta goal-generation framework is built on top of the architecture of PEARL (Rakelly et al., 2019) and a two-level hierarchy inspired by HAC (Levy et al., 2019). Our evaluation on several simulated robotics environments (Plappert et al., 2018) shows the superiority of MGHRL to state-of-the-art meta-RL and hierarchical RL methods in sparse reward settings.\nGenerally, our contributions are as follows:\n• We propose an algorithm that achieves efficient meta reinforcement learning on challenging robotics environments with sparse reward settings and outperforms other leading methods.\n• Similar to the way humans leverage past experience to learn new complex tasks, our algorithm focuses on meta learning the overall strategy for different tasks, which provides a much simpler and better way for meta-RL compared with directly learning the detailed solution.\nSince we focus on meta goal-generation and leave the low-level policy for independent learning, we believe our algorithm can still accelerate the acquisition of new tasks sampled from much wider task distributions. For example, when learning tasks such as riding a bicycle and riding a motorcycle, the two primitive action execution mechanisms are entirely different, but the two learning processes still share similar high-level structures. Through meta goal-generation learning, we expect our method can still accelerate the acquisition of such tasks. We leave this for future work to explore." }, { "heading": "2 RELATED WORK", "text": "Our algorithm is based on the meta learning framework (Thrun & Pratt, 1998; Schmidhuber, 1987; Bengio et al., 1991), which aims to learn models that can adapt quickly to new tasks. Meta learning algorithms for few-shot supervised learning problems have explored a wide variety of approaches and architectures (Santoro et al., 2016; Vinyals et al., 2016; Ravi & Larochelle, 2017). In the context of reinforcement learning, recurrent (Duan et al., 2016; Wang et al., 2016) and recursive (Mishra et al., 2018) meta-RL methods adapt to new tasks by aggregating experience into a latent representation on which the policy is conditioned. Another set of methods is gradient-based meta reinforcement learning (Finn et al., 2017; Stadie et al., 2018; Rothfuss et al., 2019; Xu et al., 2018). Its objective is to learn an initialization such that after one or a few steps of policy gradients the agent attains full performance on a new task. These methods focus on on-policy meta learning, which is usually sample-inefficient. Our algorithm is closely related to probabilistic embeddings for actor-critic RL (PEARL) (Rakelly et al., 2019), which is an off-policy meta-RL algorithm.
PEARL leverages posterior sampling to decouple the problems of inferring the task and solving it, which greatly enhances meta-learning efficiency. However, when facing complex tasks that require sophisticated control strategies, PEARL cannot effectively learn a proper meta-policy, as we will show in Section 5.\nDiscovering meaningful and effective hierarchical policies is a longstanding research problem in RL (Dayan & Hinton, 1992; Parr & Russell, 1997; Sutton et al., 1999; Bacon et al., 2017; Dietterich, 2000). Schmidhuber (1987) proposed an HRL approach that can support multiple levels. Multi-level hierarchies have the potential to accelerate learning in sparse reward tasks because they can divide a problem into a set of short-horizon subproblems. Nachum et al. (2018) proposed HIRO, a 2-level HRL approach that can learn off-policy and outperforms two other popular HRL techniques used in continuous domains: Option-Critic (Bacon et al., 2017) and FeUdal Networks (Vezhnevets et al., 2017). Our algorithm is built on Hierarchical Actor-Critic (Levy et al., 2019), which is a framework that can learn multiple levels of policies in parallel. Most current HRL works focus on the learning problem in a single task, and few of them consider taking advantage of HRL for multi-task or meta-learning settings. MLSH (Frans et al., 2018) is such a work, which also combines meta-RL with hierarchical RL. It focuses on meta learning the low-level policy and needs to retrain its high-level policy when facing new tasks. In contrast, with the key insight that humans leverage abstracted prior knowledge obtained from past experience, our method focuses on meta learning the high-level overall strategy using past experience and leaves the detailed action execution to independent RL." }, { "heading": "3 BACKGROUND", "text": "" }, { "heading": "3.1 META REINFORCEMENT LEARNING", "text": "In our meta learning scenario, we assume a distribution of tasks p(τ) that we want our model to adapt to. Each task corresponds to a different Markov decision process (MDP), Mi = {S, A, Ti, Ri}, with state space S, action space A, transition distribution Ti, and reward function Ri. We assume that the transitions and reward function vary across tasks. Meta-RL aims to learn a policy that can adapt to maximize the expected reward for novel tasks from p(τ) as efficiently as possible.\nPEARL (Rakelly et al., 2019) is an off-policy meta-reinforcement learning method that drastically improves sample efficiency compared to previous meta-RL algorithms. The meta-training process of PEARL learns a policy that adapts to the task at hand by conditioning on the history of past transitions, which we refer to as context c. Specifically, for the ith transition in task τ, c^τ_i = (si, ai, ri, s′i). PEARL leverages an inference network qφ(z|c), which outputs a probabilistic latent variable z. The parameters of qφ(z|c) are optimized jointly with the parameters of the actor πθ(a|s, z) and critic Qθ(s, a, z), using the reparametrization trick (Kingma & Welling, 2014) to compute gradients for the parameters of qφ(z|c) through sampled z's." }, { "heading": "3.2 HIERARCHICAL ACTOR-CRITIC", "text": "HAC (Levy et al., 2019) aims to accelerate learning by enabling hierarchical agents to jointly learn a multi-level hierarchy of policies in parallel. HAC consists of two components: a particular hierarchical architecture and a method for learning the multiple levels of policies in parallel given sparse rewards.
The hierarchies produced by HAC have a specific architecture consisting of a set of nested, goal-conditioned policies that use the state space as the mechanism for breaking down a task into subtasks. HAC extends the idea of Hindsight Experience Replay (Andrychowicz et al., 2017) by creating two types of hindsight transitions. A hindsight action transition simulates a transition function that uses the optimal low-level policy, while a hindsight goal transition uses the final state achieved as the goal state in each step's transition. They enable agents to learn multiple policies in parallel using only sparse reward functions." }, { "heading": "4 ALGORITHM", "text": "" }, { "heading": "4.1 TWO-LEVEL HIERARCHY", "text": "We set up a hierarchical two-layer RL structure similar to HAC. The high-level network uses policy µh to generate goals for temporally extended periods in terms of desired observations. In our task they correspond to the positional features of the gripper. The low-level policy µl directly controls the agent and produces actions for moving towards the desired goals.\nAs shown in Figure 1 (a), the high-level policy µh observes the state and produces a high-level action (or goal) gt. The low-level policy µl has at most K primitive-action attempts to achieve gt. Here, K, which can be viewed as the maximum horizon of a subgoal action, is a hyperparameter given by the user. As soon as the low-level policy µl runs out of K attempts or gt is achieved, this high-level transition terminates. The high-level policy then uses the agent's current state as the new observation and produces another goal for the low-level policy to achieve.\nWe use an intrinsic reward function in which a reward of 0 is granted only if the goal produced by the high-level policy is achieved, and a reward of -1 otherwise. Note that the environment's return (i.e., whether the agent successfully accomplished the task) will not affect the reward received by the low-level policy. In our evaluation on simulated robotics environments, we use the positional features of the observations as the representation for gt. A goal gt is judged to be achieved only if the distance between gt and the gripper's current position s_{n+1} is less than a threshold l." }, { "heading": "4.2 META GOAL-GENERATION FOR HIERARCHICAL REINFORCEMENT LEARNING", "text": "One primary motivation for our hierarchical meta reinforcement learning strategy is that, when people try to solve new tasks using prior experience, they usually focus on the overall strategy used in previous tasks instead of the primitive action execution mechanism. Most state-of-the-art meta learning methods (Rakelly et al., 2019; Finn et al., 2017) leverage experience from previous tasks to quickly adapt to new tasks and directly learn the policy parameters. However, it can be difficult to meta learn a proper policy that considers both the overall strategy and detailed action execution in some complex tasks. Using only one level of non-linear function approximator may lead the agent to learn an inaccurate meta-policy when both the overall structure and the primitive action execution mechanism are complex. Moreover, in sparse reward settings, which are common in real-world problems, current meta learning algorithms do not perform well enough, since their training methods are based on non-hierarchical RL methods like TRPO (Schulman et al., 2015), SAC (Haarnoja et al., 2018), etc.
These methods suffer from the difficulty of effective exploration and the lack of positive update signals.\nTo address the problems mentioned above, we take advantage of our two-level hierarchical structure and propose a new meta reinforcement learning framework called meta goal-generation for hierarchical RL (MGHRL). Instead of learning to generate a detailed strategy for new tasks, MGHRL learns to generate an overall strategy (goals) given past experience and leaves the detailed method of how to achieve the goals to independent RL. We leverage the PEARL framework (Rakelly et al., 2019) and independently train a high-level meta-policy which is able to quickly adapt to new tasks and generate proper goals. Note that an off-policy RL method is indispensable in our structure when training the high-level policy due to its excellent sample efficiency during meta-training. Good sample efficiency enables fast adaptation by accumulating experience online and allows structured exploration by reasoning about uncertainty over tasks, which is crucial to the hierarchical parallel training framework. We leave the low-level policy to be trained independently with a non-meta RL algorithm using the hindsight experience replay mechanism. In our simulated robotics experiments, the low-level policy aims to move the gripper to the desired goal position, and it can be reused when switching to other tasks. Thus we only need to train a single set of low-level policies which can be shared and reused across different tasks. On the other hand, in other situations where the tasks are from different domains, for example, when we use our experience of learning to ride a bicycle to help us learn how to ride a motorcycle, the primitive action execution mechanisms are entirely different. In this case, we can train the low-level policy independently on new tasks without using past experience. Our main insight is that when dealing with entirely new tasks, the primitive action execution mechanism can be entirely different, but the general strategies of how to accomplish the new tasks and the prior tasks can be similar.\nWith meta learning on the high-level policy, our algorithm still greatly accelerates the acquisition of new tasks.\nOur high-level meta-RL network uses a probabilistic embedding actor-critic framework similar to PEARL. The network consists of two parts. The first part is a context encoder which leverages data from a variety of training tasks to learn to infer the value of z from a recent history of high-level experience in the new task, where z functions as a latent probabilistic context variable. The encoder network parameterized by ϕ takes the context (experience) c^h as input and outputs the posterior qϕ(z|c^h) as a permutation-invariant function (Rakelly et al., 2019) of prior high-level experience. The context c^h consists of experience {s, g, r, s′} collected using the hindsight technique we will introduce in Section 4.3. Then we can sample z from the posterior and compute the policy output and Q value conditioned on it. Through posterior sampling via latent contexts, the high-level network can learn to infer new tasks efficiently using past experience. The second part is built on top of the soft actor-critic algorithm (Haarnoja et al., 2018). As we mentioned before, samples from the posterior belief are passed to the actor µ^h_φ(g|s, z) and critic Qθ(s, g, z) to make predictions for the sampled task.
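As an illustration of this conditioning, the context encoder and sampling step can be sketched as follows, in the spirit of PEARL; the product-of-Gaussians combination, the layer sizes, and all names here are assumptions of the sketch rather than a specification of our implementation:

```python
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Permutation-invariant encoder q(z | c^h): each high-level
    transition (s, g, r, s') is mapped to a Gaussian factor, and the
    factors are combined by a product of Gaussians (a sketch)."""
    def __init__(self, context_dim, latent_dim, hidden=300):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(context_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim))

    def forward(self, context):                # context: (N, context_dim)
        mu, log_var = self.net(context).chunk(2, dim=-1)
        var = log_var.exp().clamp(min=1e-7)
        # product of N Gaussian factors -> posterior mean and variance
        post_var = 1.0 / (1.0 / var).sum(0)
        post_mu = post_var * (mu / var).sum(0)
        return post_mu, post_var

def sample_z(post_mu, post_var):
    # reparameterization trick, so gradients flow into the encoder
    return post_mu + post_var.sqrt() * torch.randn_like(post_mu)

# The actor then conditions on z by treating it as part of its input:
#   goal = actor(torch.cat([state, z], dim=-1))
```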
Note that we treat z as part of the state when we implement this with SAC.\nThe actor and critic are trained to predict optimally given z with batches of transitions drawn uniformly from the entire replay buffer. The context encoder is optimized using gradients from the critic. We summarize our meta-training procedure in Algorithm 1 and Figure 1 (b). Concretely, for each training task drawn from the task distribution, we sample context and generate hindsight transitions for both levels of the hierarchy (lines 4∼13) by executing the current policy. Then we train the high-level and low-level networks with the collected data (lines 16∼22).\nAlgorithm 1 MGHRL Meta-training\nRequire: Batch of training tasks {τi}i=1,...,T from p(τ), maximum horizon K of subgoal actions\n1: Initialize replay buffers B^h_i, B^l_i for each training task\n2: while not done do\n3: for each task τi do\n4: Initialize high-level context c^h_i = {}\n5: for m = 1, ..., M do\n6: Sample z ∼ qϕ(z|c^h_i)\n7: gi ← πh(g|s, z)\n8: for K attempts or until gi achieved do\n9: Gather data using ai ← πl(a|s, g)\n10: Generate hindsight action transitions and hindsight goal transitions and add them to B^l_i\n11: end for\n12: Generate hindsight transitions and subgoal test transitions and add them to B^h_i\n13: Sample high-level context c^h_i = {sj, gj, rj, s′j}j=1,...,N ∼ B^h_i\n14: end for\n15: end for\n16: for each training step do\n17: for each task τi do\n18: Sample high-level context c^h_i ∼ B^h_i and RL batches b^h_i ∼ B^h_i, b^l_i ∼ B^l_i\n19: Sample z ∼ qϕ(z|c^h_i) and calculate L^h_actor(b^h_i, z), L^h_critic(b^h_i, z), L^h_KL\n20: Update the low-level actor and critic networks with b^l_i\n21: end for\n22: Update the high-level networks with Σi L^h_actor, Σi L^h_critic, Σi L^h_KL\n23: end for\n24: end while" }, { "heading": "4.3 PARALLEL TRAINING STRATEGY", "text": "Efficient meta reinforcement learning requires parallel training of the two levels of our networks. To achieve such a parallel training paradigm, two main issues must be addressed in the MGHRL framework. The first issue for meta learning hierarchies is that agents need to act randomly to reach their goals and obtain the sparse reward, which proves to be quite difficult for both levels. We need other strategies to ensure that each level learns effectively in sparse reward settings. The second issue is the non-stationarity problem when we train the high-level and low-level networks in parallel. Whenever the low-level policy πl changes, the high-level transition function is likely to change as well. Old off-policy experience may exhibit different transitions conditioned on the same goals, making the transition invalid for training. The same problem occurs when the low level is exploring with some random noise. Thus, in our algorithm, we rewrite the past experience transitions as hindsight action transitions (Andrychowicz et al., 2017), and supplement both levels of the hierarchy with additional sets of transitions, as was done in HAC.\nA hindsight action transition simulates a transition function that uses the optimal low-level policy, which enables our framework to train both levels in parallel. It substitutes the action component of a high-level transition with the next state achieved at the low level. If the original high-level transition is [st, gt, rt, st+1], the hindsight action transition will be [st, s^g_{t+1}, rt, st+1], where s^g_{t+1} represents the component vector of the next state that matches the goal vector.
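A minimal sketch of this relabeling (and of the hindsight goal transition discussed below) may make it concrete; goal_slice, which selects the goal-relevant state components (e.g., the gripper position), and the helper names are assumptions of this illustration, not the authors' code:

```python
import numpy as np

def hindsight_action_transition(s_t, g_t, r_t, s_next, goal_slice):
    # Replace the proposed subgoal g_t with the state component the low
    # level actually reached, s^g_{t+1}, so the stored "action" is the one
    # an optimal low-level policy would have produced.
    return (s_t, s_next[goal_slice].copy(), r_t, s_next)

def hindsight_goal_transitions(transitions, goal_slice, threshold=0.05):
    # HER-style relabeling: after a subgoal attempt finishes, use the
    # final achieved state as the goal and recompute the sparse reward.
    final_goal = transitions[-1][3][goal_slice]
    relabeled = []
    for (s, _, _, s_next) in transitions:
        reached = np.linalg.norm(s_next[goal_slice] - final_goal) < threshold
        relabeled.append((s, final_goal.copy(),
                          0.0 if reached else -1.0, s_next))
    return relabeled
```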
The new transition we obtain is independent of the changing or exploring low-level policy, since it is always treated as optimal.\nWe utilize hindsight goal transitions and subgoal test transitions to further address the problems mentioned before. Hindsight goal transitions are created for both levels. After at most K attempts are executed, the final state achieved is used as the goal state in each step's transition instead of the original goal state, and the reward is updated to reflect the new goal state. Subgoal test transitions are meant to compensate for the drawbacks brought by hindsight action transitions. Hindsight action transitions prefer the shortest path of goals that has been found but may ignore the range of goals that the low-level policy is able to reach. Thus, a subgoal test transition adds a penalty of −K to the reward if the goal is not achieved after K attempts by the low-level policy, and sets the discount rate to 0 to avoid non-stationarity issues." }, { "heading": "5 EXPERIMENTS", "text": "We evaluated our algorithm on several challenging continuous control robotics tasks (integrated with OpenAI Gym) (Plappert et al., 2018), simulated via the MuJoCo physics simulator (Todorov et al., 2012). Visualizations of these environments are shown in Figure 2. More details on each environment can be found at https://openai.com/blog/ingredients-for-robotics-research/.\nFetch-Reach Fetch has to move the gripper to the desired goal position. This task is very easy to learn and is therefore a suitable benchmark to ensure that a new idea works at all.\nFetch-Push A box is placed on a table in front of the robot and Fetch has to move the box by pushing it until it reaches a desired goal position. The robot fingers are locked to prevent grasping. The learned behavior is usually a mixture of pushing and rolling.\nFetch-Slide A puck is placed on a long slippery table and the target position is outside of the robot's reach, so Fetch has to hit the puck across the table such that it slides and comes to rest on the desired goal.\nFetch-PickandPlace Fetch has to pick up a box from a table using its gripper and move it to a desired goal located on the table." }, { "heading": "5.1 ENVIRONMENTAL SETUP", "text": "In all our experiments, we compare our algorithm to baselines including PEARL with dense reward, PEARL with sparse reward, and HAC with a shared policy. The last one means we train a shared HAC policy jointly across all meta-train tasks sampled from the whole task distribution. Note that Rakelly et al. (2019) have already shown that PEARL greatly outperforms other existing meta-RL methods like MAML (Finn et al., 2017) and ProMP (Rothfuss et al., 2019) in both sample efficiency and final performance. Thus we mainly compare our results with PEARL using its public source code. In addition, for a fair comparison, we modify the HAC source code to use the SAC algorithm, which is considered more powerful than the DDPG used in the original implementation (Haarnoja et al., 2018), to ensure consistency with PEARL and MGHRL.\nWe set the goal space to be the set of all possible positions of the gripper, in which a goal is a 3-d vector. In the environments, the low-level policy of our algorithm aims to move the gripper to the desired goal position. Such a policy does not change at all when switching to other tasks, since the mechanism of moving the gripper stays the same across different tasks. Thus we use a shared policy trained jointly across all tasks for the low level of MGHRL.
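For reference, interacting with one of these tasks under the sparse-reward convention used here can be sketched as below, assuming the original gym robotics interface (dict observations and a 4-tuple step return); the environment id and the sparse_reward helper are illustrative:

```python
import gym
import numpy as np

env = gym.make("FetchReach-v1")
obs = env.reset()  # dict with observation / achieved_goal / desired_goal

def sparse_reward(achieved, desired, threshold=0.05):
    # 0 when within the distance threshold of the goal, -1 otherwise
    return 0.0 if np.linalg.norm(achieved - desired) < threshold else -1.0

for _ in range(50):
    action = env.action_space.sample()   # stand-in for the low-level policy
    obs, _, done, info = env.step(action)
    if sparse_reward(obs["achieved_goal"], obs["desired_goal"]) == 0.0 or done:
        break
```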
In all four scenarios, we set the maximum low-level horizon K to 10 and the distance threshold to 0.05. The high-level context data sampler S^h_c samples uniformly from the most recently collected batch of data, which is recollected every 1000 meta-training steps. Unlike HAC, we use target networks for both levels, which are updated with τ = 0.005. All context encoder, actor and critic neural networks have three hidden layers, with 300 nodes in each layer. The discount factor was set to γ = 0.99. We use a sparse reward function in which a reward of 0 is granted only if the terminal goal given by the environment is achieved, and a reward of -1 otherwise. The dense reward used in our baseline is a value corresponding to the distance between the current position of the box (or the gripper in the Fetch-Reach case) and the desired goal position. In all four scenarios, we do our experiments on 50 train tasks and 10 test tasks, where the difference between tasks is in the terminal goal position we want the box or gripper to reach." }, { "heading": "5.2 RESULTS", "text": "We evaluate the performance of the approaches in terms of the average success rate. As shown in Figure 3, in the Fetch-Reach environment, which is very easy to learn as mentioned before, all tested methods except PEARL with sparse reward reach a final success rate of 100%. In the other three scenarios, MGHRL significantly outperforms the other three methods in such sparse reward settings. Our two-level hierarchy and hindsight transitions significantly decrease the difficulty of meta learning with sparse reward. As we expected, PEARL performs badly in sparse reward settings. The original version of PEARL is based on SAC; such non-hierarchical RL methods have previously been shown to perform badly on challenging tasks with sparse reward settings. Thus it is reasonable that PEARL, which can be viewed as a meta-version of SAC, performs badly as well in sparse reward settings. HAC with a shared policy generally performs better than PEARL in the Fetch-Slide and Fetch-PickandPlace environments. We assume this is because, in our settings, we only change the terminal goal positions to create different tasks, so it is possible that the policy learned from one task will work on another task whose terminal goal position is very close to previously trained ones. But such a method lacks generalization ability and cannot always achieve good performance when tested on varied tasks, as shown in our results.\nWe also compare our method to PEARL with dense reward to demonstrate that MGHRL is able to more efficiently and accurately meta learn from past experience. As shown in Figure 3, our algorithm generally still outperforms PEARL and adapts to new tasks much more quickly. In such environments with sophisticated control strategies, directly using PEARL to meta learn a policy that considers both the overall strategy and detailed execution would decrease prediction accuracy and sample efficiency. Thus it is better to decompose the meta-RL training process and focus on meta goal-generation learning. Moreover, under the dense reward settings of these challenging tasks, the critic of PEARL has to approximate a highly non-linear function that includes the Euclidean distance between positions and the difference between two quaternions for rotations (Plappert et al., 2018). As our method uses a hierarchical structure, learning with the sparse return is much simpler, since the critic only has to differentiate between successful and failed states."
}, { "heading": "6 CONCLUSION", "text": "In this paper, we have presented a hierarchical meta-RL algorithm, MGHRL, which realizes meta goal-generation and leave the low-level policy for independent RL. MGHRL aims to more efficiently and accurately meta learn from past experience by focusing on learning the overall strategy of tasks instead of learning detailed action execution. Our experimental results on a range of simulated robotics environments show the superiority of MGHRL over state-of-the-art meta RL and hierarchical RL methods in challenging and practical sparse reward settings.\nWe believe our work open up many directions in training agents that can quickly adapt to new tasks sampled from much wider distribution efficiently. Currently, we have only conducted experiments on meta learning tasks with relatively narrow task distribution (e.g. different goal positions of the box). As future work, we expect our algorithm can accelerate the acquisition of entirely new tasks (i.e. using fetch-push and fetch-slide as meta train tasks and using fetch-pickandplace as meta test task) by only meta learning overall strategy and leaving the details of primitive action execution mechanism for further separate low-level policy learning. Moreover, we note our results on some tasks are still far from perfect. There is still much work left for future research to improve meta-RL methods’ performance on those tasks." } ]
2019
null
SP:b46afcbaf09319053c4ae21b7ee68a34c78dc28f
[ "The authors propose a new loss function as well as an adjoining visualization for improved performance of hard negative / easy positive mining for deep triplet metric learning. The authors note that under the NCA loss, if one selects an easy positive / hard negative and computes the gradient with respect to this example, this can lead to the negative example also being pulled closer to the anchor which is undesired. Similar phenomena can also be observed for easy positive / semi-hard negative mining as well. Motivated by this, the authors begin by designing a visualization to make this issue with NCA loss more apparent. Then they design what they refer to as an “entanglement factor” to quantify this issue more precisely. Using the desired dynamics of the gradients for the easy positive / hard negative mining and integrate to form what they refer to as the “second order loss.” Using this loss, they compare against the standard NCA loss on several datasets, showing modest performance gains. They also compare against a variety of other deep triplet embedding frameworks and show competitive results.", "This paper uses the triplet scatter plot as a way to describe triplet selection strategies. The authors explain previously observed bad behavior for hard-negative triplet mining showing that it tends to make all points close to each other. The authors propose a simple modification to the desired gradients and derive a loss function that gives those gradients. With this modification, they show that easy positive hard negative (EPHN) gives results that exceed or are competitive with state of the art approaches. The paper is well-written and makes a convincing argument which will be of interest to a broad community." ]
The Triplet Loss approach to Distance Metric Learning is defined by the strategy used to select triplets and the loss function through which those triplets are optimized. During optimization, two especially important cases are easy positive and hard negative mining, which consider the closest example of the same class and of a different class, respectively. We characterize how triplets behave during optimization as a function of these similarities, and highlight that these important cases have technical problems where standard gradient descent behaves poorly, pulling the negative example closer and/or pushing the positive example farther away. We derive an updated loss function that fixes these problems and shows improvements to the state of the art for the CUB, CAR, SOP, and In-Shop Clothes datasets.
[ { "affiliations": [], "name": "EFFECTIVELY OPTI" }, { "affiliations": [], "name": "HARD NEGATIVES" } ]
[ { "authors": [ "Fatih Cakir", "Kun He", "Xide Xia", "Brian Kulis", "Stan Sclaroff" ], "title": "Deep metric learning to rank", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Miguel A Carreira-Perpinan", "Geoffrey E Hinton" ], "title": "On contrastive divergence learning", "venue": "In AISTATS,", "year": 2005 }, { "authors": [ "Sumit Chopra", "Raia Hadsell", "Yann LeCun" ], "title": "Learning a similarity metric discriminatively, with application to face verification", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2005 }, { "authors": [ "Weifeng Ge" ], "title": "Deep metric learning with hierarchical triplet loss", "venue": "In Proc. European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Jacob Goldberger", "Geoffrey E Hinton", "Sam T. Roweis", "Ruslan R Salakhutdinov" ], "title": "Neighbourhood components analysis", "venue": "Advances in Neural Information Processing Systems", "year": 2005 }, { "authors": [ "Ben Harwood", "BG Kumar", "Gustavo Carneiro", "Ian Reid", "Tom Drummond" ], "title": "Smart mining for deep metric learning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Wonsik Kim", "Bhavya Goyal", "Kunal Chawla", "Jungmin Lee", "Keunjoo Kwon" ], "title": "Attention-based ensemble for deep metric learning", "venue": "In Proc. European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Jonathan Krause", "Michael Stark", "Jia Deng", "Li Fei-Fei" ], "title": "3d object representations for fine-grained categorization", "venue": "In 4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13), Sydney, Australia,", "year": 2013 }, { "authors": [ "Yair Movshovitz-Attias", "Alexander Toshev", "Thomas K. Leung", "Sergey Ioffe", "Saurabh Singh" ], "title": "No fuss distance metric learning using proxies", "venue": "In Proc. International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Michael Opitz", "Georg Waltner", "Horst Possegger", "Horst Bischof" ], "title": "Bier - boosting independent embeddings robustly", "venue": "In Proc. International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": "NIPS-W,", "year": 2017 }, { "authors": [ "Filip Radenović", "Giorgos Tolias", "Ondřej Chum" ], "title": "CNN image retrieval learns from bow: Unsupervised fine-tuning with hard examples", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein", "Alexander C. 
Berg", "Li Fei-Fei" ], "title": "ImageNet Large Scale Visual Recognition Challenge", "venue": "International Journal of Computer Vision (IJCV),", "year": 2015 }, { "authors": [ "Florian Schroff", "Dmitry Kalenichenko", "James Philbin" ], "title": "Facenet: A unified embedding for face recognition and clustering", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2015 }, { "authors": [ "Florian Schroff", "Dmitry Kalenichenko", "James Philbin" ], "title": "Facenet: A unified embedding for face recognition and clustering", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2015 }, { "authors": [ "Edgar Simo-Serra", "Eduard Trulls", "Luis Ferraz", "Iasonas Kokkinos", "Pascal Fua", "Francesc MorenoNoguer" ], "title": "Discriminative learning of deep convolutional feature point descriptors", "venue": "In Proc. International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Kihyuk Sohn" ], "title": "Improved deep metric learning with multi-class n-pair loss objective", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Hyun Oh Song", "Yu Xiang", "Stefanie Jegelka", "Silvio Savarese" ], "title": "Deep metric learning via lifted structured feature embedding", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Hao Wang", "Yitong Wang", "Zheng Zhou", "Xing Ji", "Zhifeng Li", "Dihong Gong", "Jingchao Zhou", "Wei Liu" ], "title": "Cosface: Large margin cosine loss for deep face recognition", "venue": "CoRR, abs/1801.09414,", "year": 2018 }, { "authors": [ "Jiang Wang", "Yang Song", "Thomas Leung", "Chuck Rosenberg", "Jingbin Wang", "James Philbin", "Bo Chen", "Ying Wu" ], "title": "Learning fine-grained image similarity with deep ranking", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2014 }, { "authors": [ "P. Welinder", "S. Branson", "T. Mita", "C. Wah", "F. Schroff", "S. Belongie", "P. Perona" ], "title": "Caltech-UCSD Birds 200", "venue": "Technical Report CNS-TR-2010-001, California Institute of Technology,", "year": 2010 }, { "authors": [ "Hong Xuan", "Richard Souvenir", "Robert Pless" ], "title": "Deep randomized ensembles for metric learning", "venue": "In Proc. European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Hong Xuan", "Abby Stylianou", "Robert Pless" ], "title": "Improved embeddings with easy positive triplet mining", "venue": "arXiv preprint arXiv:1904.04370,", "year": 2019 }, { "authors": [ "Yuhui Yuan", "Kuiyuan Yang", "Chao Zhang" ], "title": "Hard-aware deeply cascaded embedding", "venue": "In Proc. International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Shi Qiu Xiaogang Wang Ziwei Liu", "Ping Luo", "Xiaoou Tang" ], "title": "Deepfashion: Powering robust clothes recognition and retrieval with rich annotations", "venue": "In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep metric learning optimizes an embedding function that maps semantically similar images to relatively nearby locations and maps semantically dissimilar images to distant locations. A number of approaches have been proposed for this problem (Schroff et al., 2015a; Sohn, 2016; MovshovitzAttias et al., 2017; Song et al., 2016; Xuan et al., 2018; Kim et al., 2018; Ge, 2018). A common way to learn the mapping is to define a loss function based on triplets of images: an anchor image, a positive image from the same class, and a negative image from a different class. The loss penalizes cases where the anchor is mapped closer to the negative image than it is to the positive image.\nOne common variant of these functions (e.g. (Wang et al., 2018; Sohn, 2016)), uses a Deep Learning framework to map images to a feature vector, and computes similarity between normalized feature vectors based on the dot-product. This approach forces the features to lie on a hypersphere and has advantages of making the feature comparison intuitive and efficient.\nIn this work we explore standard implementations of these loss functions and show there are two problems. First, when the gradient of the loss function does not consider the normalization to a hypersphere, a large part of the gradient is lost when points are re-projected back to the sphere, especially in the easy-positive/hard-negative cases of triplets including nearby points. Second, when optimizing the parameters (the weights) of the network, when points are already mapped close together, it may be difficult to find gradient directions that effectively separate nearby images.\nWe give systematic derivation showing when and where these challenging triplets arise, and diagram the sets of triplets where standard gradient descent makes the loss increase. We find that this explains problems previously reported in the literature such as the difficulty in optimizing hardnegative triplets (Harwood et al., 2017). Furthermore, these problems are mostly introduced because the loss-function of the triplets is based on the differences between the anchor-positive and anchornegative distances, so there is an equivalent effect of encouraging the positive image to be closer or the negative image to be further. We create a new loss function that breaks this symmetry and weights the importance of changing the anchor-positive and anchor-negative distances. Briefly, our main contributions are:\n• A systematic characterization of triplet selection strategies and a visualization that highlights regions of bad gradient behavior.\n• A simple modification to a standard loss function to fix bad gradient behavior. • Improvements to current state of the art results across a range of datasets." }, { "heading": "2 BACKGROUND", "text": "There is a large body of work in distance metric learning and they are leading with two main ideas. One of the idea is to increase the intra-class cluster density and keep the inter-class clusters as far as possible. 
Pairwise loss functions like contrastive divergence (Carreira-Perpinan & Hinton, 2005; Chopra et al., 2005; Radenović et al., 2016) directly optimize for this constraint, and "No fuss metric learning" (Movshovitz-Attias et al., 2017) implicitly optimizes for this constraint by assigning each class to a different location and penalizing the failure of any example to go to its assigned location.\nThe other approach more directly reflects the fact that, for image retrieval applications, it is not necessary for all elements of a class to be clustered, but instead that the distance to elements of the same class should be smaller than the distance to elements of different classes. Directly optimizing for this is based on triplets of images. The triplets are drawn from the training data and include an anchor image, a positive image from the same class, and a negative image from a different class.\nKey questions for these approaches explore how to select the triplets that are used. Choices such as hard or semi-hard triplet mining (Schroff et al., 2015b; Simo-Serra et al., 2015; Wang et al., 2014) focus on triplets with negative examples that are closest (hard negative mining) or nearly as close to the anchor as positive images (semi-hard negative mining) and emphasize creating separations between classes in the embedding space. Recent work such as easy positive triplet mining (Xuan et al., 2019) selects the closest anchor-positive pairs and ensures that at least they are closer than the nearest negatives.\nThe next section introduces a diagram to systematically organize these triplet selection approaches, and to explore where different loss functions fail to improve the triplets." }, { "heading": "3 TRIPLET SCATTER DIAGRAM", "text": "Triplet loss is trained with triplets of images, (xa, xp, xn), where xa is an anchor image, xp is a positive image of the same class as the anchor, and xn is a negative image of a different class. We consider a convolutional neural network f(·) that embeds the images on a unit hypersphere, (f(xa), f(xp), f(xn)). We use (fa, fp, fn) to simplify the representation of the normalized feature vectors. When embedded on a hypersphere, the cosine similarity is a convenient metric to measure the similarity of the anchor-positive pair, Sap = fa^T fp, and the anchor-negative pair, San = fa^T fn, and this similarity is bounded in the range [−1, 1]. The triplet scatter diagram is an approach to characterizing a given set of triplets. Figure 1 represents each triplet as a 2D point (Sap, San), describing how similar the positive and negative images are to the anchor. This diagram is useful because the location on the diagram describes important features of the triplet:\n• Triplets that are already in the correct configuration, where the similarity between anchor and positive is greater than the similarity between anchor and negative images, are below the San = Sap diagonal. Dots representing triplets in the correct configuration are drawn in blue; dots where the negative is closer are drawn in red.\n• Triplets that include an anchor and the most similar of the possible positive examples are the "Easy Positives" and are on the right side of the diagram because Sap tends to be close to 1.
We circle these with a red ring.
• Hard negatives are cases where the anchor is very similar to a negative example, so San is close to 1, depicted as red dots circled with a blue ring.
• One very selective mining strategy is “Easy-Positive, Semi-Hard Negative”, where an anchor is matched with the closest possible positive, and a negative example whose similarity to the anchor is nearly as large. The blue dot circled with a red dashed circle highlights one such example.
• Another selective mining strategy is “Easy-Positive, Hard Negative”, which selects, for an anchor, the most similar positive and negative examples. The red dot surrounded by a blue dashed circle represents one such example.
In the later discussion, we may show a subset area of Ω, Ωs = [0, 1] × [0, 1], because it is rare that the hardest negative or positive pairs have a similarity less than 0.
Figure 1 (right) calls out two specific regions of points that we analyze in the next section: the extremes of hard negatives and easy positives, and the region that only includes positive similarities, Ωs = [0, 1] × [0, 1], which includes nearly all triplets constructed with easy positives and hard negatives." }, { "heading": "4 DIAGRAMMING WHY SOME TRIPLETS ARE HARD TO OPTIMIZE", "text": "The triplet scatter diagram offers the ability to understand when the gradient-based optimization of the network parameters is effective and when it fails. The triplets are used to train a network whose loss function encourages the anchor to be more similar to its positive example (drawn from the same class) than to its negative example (drawn from a different class), encouraging Sap to be greater than San. While there are several possible choices, we consider NCA (Goldberger et al., 2005) as the loss function, and denote this as L1st to differentiate it from an updated loss function introduced later:
L1st(fa, fp, fn) = −log [ exp(Sap) / (exp(Sap) + exp(San)) ]   (1)
All of the following derivation can also be done for the triplet loss formulation used in (Schroff et al., 2015a); this has a very similar form and is derived in the Appendix.
The gradient of the triplet loss L1st(fa, fp, fn) can be decomposed into two parts: first, the gradient with respect to the feature vectors fa, fp, fn:
∆L = (∂L/∂fa)∆fa + (∂L/∂fp)∆fp + (∂L/∂fn)∆fn,   (2)
and subsequently, being clear that these feature vectors respond to changes in the model parameters (the CNN network weights) θ:
∆L = (∂L/∂fa)(∂fa/∂θ)∆θ + (∂L/∂fp)(∂fp/∂θ)∆θ + (∂L/∂fn)(∂fn/∂θ)∆θ.   (3)
The gradient optimization only affects the feature embedding through variations in θ, but we first highlight problems with hypersphere embedding assuming that the optimization could directly affect the embedding locations. To do this we derive the loss gradient with respect to the feature vectors fa, fp, fn and use this gradient to update the feature locations in the direction that should decrease the error:
fp_new = fp − αgp = fp − α(∂L/∂fp) = fp + βfa   (4)
fn_new = fn − αgn = fn − α(∂L/∂fn) = fn − βfa   (5)
fa_new = fa − αga = fa − α(∂L/∂fa) = fa − βfn + βfp   (6)
where β = α · exp(San) / (exp(Sap) + exp(San)) and α is the learning rate.
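For concreteness, the first-order loss takes only a few lines in PyTorch. The sketch below is our own illustration (function and variable names are assumptions, not the paper's released code); automatic differentiation of it with respect to the normalized features recovers the updates in Equations 4–6.

```python
import torch.nn.functional as F

def l1st_loss(f_a, f_p, f_n):
    # f_a, f_p, f_n: (batch, d) feature vectors; project onto the unit hypersphere
    f_a, f_p, f_n = (F.normalize(f, dim=1) for f in (f_a, f_p, f_n))
    s_ap = (f_a * f_p).sum(dim=1)  # anchor-positive similarity Sap
    s_an = (f_a * f_n).sum(dim=1)  # anchor-negative similarity San
    # -log(exp(Sap) / (exp(Sap) + exp(San))) rewritten as softplus(San - Sap),
    # where softplus(x) = log(1 + exp(x)); this form is numerically stable
    return F.softplus(s_an - s_ap).mean()
```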
This gradient update has a clear geometric meaning: the positive point fp is encouraged to move along the direction of the vector fa; the negative point fn is encouraged to move along the opposite direction of the vector fa; the anchor point fa is encouraged to move along the direction of the sum of fp and the negated fn. All of these are weighted by the same weighting factor β. Then we can get the new similarity of the anchor-positive and anchor-negative pairs (the derivation is given in the Appendix):
S_new_ap = (1 + β²)Sap + 2β − βSpn − β²San   (7)
S_new_an = (1 + β²)San − 2β + βSpn − β²Sap   (8)
Because these gradients ga, gp, gn have components that move the features off the sphere, computing the cosine similarity requires that we compute the norms of fa_new, fp_new, and fn_new (the derivations are shown in the Appendix). Given the norms of the updated feature vectors, we can calculate the similarity change after the gradient update:
∆Sap = S_new_ap / (‖fa_new‖‖fp_new‖) − Sap   (9)
∆San = S_new_an / (‖fa_new‖‖fn_new‖) − San   (10)
Because the gradient of the loss function does not consider this normalization, following the negative gradient can actually cause some triplets to push the anchor closer to the negative example or push the anchor away from the positive example, even assuming that you can directly push the anchor, positive, and negative feature vectors in any direction.
The direction in which fa, fp, and fn move depends on the relative position of fa, fp, fn on the hypersphere. We use γ (fully defined in the Appendix) as a term to describe their relative orientation; when fa, fn, and fp are close enough that locally the hypersphere is a plane, γ is the dot-product of the normalized vectors from fa to fp and from fa to fn. Therefore, if fp, fa, fn are co-planar then γ = 1, and if moving from fa to fp is orthogonal to the direction from fa to fn, then γ = 0. Given this description of the relative positions of the anchor, positive, and negative points, Figure 2 shows calculations of the change in similarity between the anchor-positive and anchor-negative pairs for γ = 0.5. There is an area along the right side of the ∆Sap plot highlighting locations where the anchor and positive are pushed farther apart (∆Sap < 0), and along the top of the ∆San plot highlighting locations where the anchor and negative are pulled closer together (∆San > 0). This behavior arises because the gradient is pushing the feature off the hypersphere and therefore, after normalization, the effect is lost.
This discussion so far considers the derivative of the loss as a function of the position of the feature vectors, but the optimization can only control the feature vectors based on the network parameters θ. Changes to θ are likely to affect nearby points in similar ways. For example, if there is a hard-negative example with an easy positive, where the anchor is close to both the positive and the negative image, then changing θ to move the anchor closer to the positive is likely to pull the negative example along with it. We call this effect “entanglement” and propose a simple model to capture its effect on how the gradient update affects the similarities.
We use a scalar p and a factor q = √(SapSan) to quantify this entanglement. As for the factor q, when the anchor, positive, and negative are near one another, both Sap and San will be large and q will increase the entanglement effect; when the positive or negative is far from the anchor, one of Sap and San will be small and q will reduce the entanglement effect.
The total similarity changes with entanglement are modeled as follows:
∆S_total_ap = ∆Sap + p√(SapSan)·∆San   (11)
∆S_total_an = ∆San + p√(SapSan)·∆Sap   (12)
Figure 2 shows (in color) problematic regions where this model of gradient entanglement indicates that the anchor and positive images become less similar (∆S_total_ap < 0), and regions where the anchor and negative images become more similar (∆S_total_an > 0), for different parameters of the entanglement.
While Figure 2 captures problematic regions on the scatter diagram, we can create a more complete description. The bottom-row plots of Figure 2 show the vector field on the scatter diagram, indicating how triplets move under the gradient of their loss function. The figure shows several vector field plots of ∆S_total_ap and ∆S_total_an with γ = 0.5 and p = 0.4, 0.8, and indicates the 3 types of movement direction for dots on the triplet scatter diagram. When ∆Sap > 0 and ∆San > 0, the movement direction points up-right. When ∆Sap < 0 and ∆San < 0, the movement direction points bottom-left. When ∆Sap > 0 and ∆San < 0, the movement direction points bottom-right. In practice, the entanglement strength may vary across situations during the optimization; its exact value will not be discussed in this paper. We only use the entanglement phenomenon to demonstrate problems that arise when optimizing triplets with easy positives and hard negatives.
Problem 1: Hard Negative Mining. For a given anchor image, hard negative mining chooses the negative example that maximizes San. The vector field computed in the case of entanglement shows that most locations with large San (near the top of the plot) have vectors with an upward component, meaning the gradient update for a hard-negative triplet will push the negative even closer to the anchor. The result is that a network cannot effectively separate the negative pairs, tending instead to make all points close to each other. Initializing a fine-grained task (e.g., CARS196) with a generic pre-trained network (e.g., trained on ImageNet) often creates a starting condition where all points are mapped nearby to begin with, leading to an optimization failure where all triplets move towards (1,1) on the triplet scatter diagram and all images are mapped to the same feature location.
Problem 2: Easy Positive Mining. For a given anchor image, easy positive mining chooses the positive example that maximizes Sap. The vector field computed in the case of entanglement shows that most locations with large Sap (near the right of the plot) have vectors with a strong downward component, meaning the gradient update pushes the anchor and negative to be negatively related, which leads to over-training. The general intuition is that a pair of different images should have a similarity near zero; a negative similarity value still means the pair of images is, in a sense, 'related'. In the later phase of the optimization, triplets are therefore not effectively kept close to the ideal (1,0) point." }, { "heading": "5 WEIGHT GRADIENT BY SIMILARITY", "text": "Triplets that have Problem 1 or Problem 2 create gradients that move fa, fp, fn in the wrong directions. 
When the anchor is very close to either the positive or the negative point, the gradient defined by their interaction is largely lost because of the hypersphere normalization, and the remaining effective gradient is dominated by the entanglement.
Specifically, for Problem 1, the anchor and the negative image are close together and not effectively encouraged to move apart, and the pull of the anchor image towards the positive pulls the negative image in the same direction. The fix for this problem is to emphasize the part of the gradient pushing the anchor and negative apart when San is close to 1.
For Problem 2, the anchor and positive image are close together, and there is a more distant negative example. The effect of the distant negative example may push the anchor and positive example further apart. The fix for this problem is to decrease the weight of the gradient related to the distant negative example when Sap is close to 1.
A simple weighting strategy addresses both problems. We weight the gp and gn gradients with scalars wap = 1 − Sap and wan = San, respectively. When the negative pair in a triplet is close but the positive is relatively distant from the anchor, we want to emphasize pushing the negative pair apart, so we weight gp by 1 − Sap. Similarly, when the positive pair in a triplet is close but the negative is distant from the anchor, we want to decrease the effect of the negative example, so we weight gn by San. Our new gradients have the following form:
αgp_w = −βw·wap·fa   (13)
αgn_w = βw·wan·fa   (14)
αga_w = βw·wan·fn − βw·wap·fp   (15)
where βw = α · exp(½S²an) / (exp(Sap − ½S²ap) + exp(½S²an)). These can be integrated to find a loss function with these gradients. We call this a 2nd-order triplet loss because the similarities are squared in the exponent:
L2nd(fa, fp, fn) = −log [ exp(Sap − ½S²ap) / (exp(Sap − ½S²ap) + exp(½S²an)) ],   (16)
The calculation of the terms needed to solve for ∆Sap or ∆San for the new loss function is given in the Appendix, and Figure 2 (right column) shows the vector field for the new loss function. In the area near (1,1), there is no longer the challenge that triplets are always moved towards (1,1), even when including the effects of entanglement. This helps overcome the problem that the optimization pushes all points towards the same place, and we show this effect in the experimental section." }, { "heading": "6 EXPERIMENTAL RESULTS", "text": "We run a set of experiments on the CUB200 (Welinder et al., 2010), CARS196 (Krause et al., 2013), Stanford Online Products (Song et al., 2016), and In-Shop Clothes (Ziwei Liu & Tang, 2016) datasets. All tests are run on the PyTorch platform (Paszke et al., 2017), using ResNet18 and ResNet50 (He et al., 2016) architectures, pre-trained on ILSVRC 2012-CLS data (Russakovsky et al., 2015). Training images are re-sized to 256 by 256 pixels. We adopt a standard data augmentation scheme (random horizontal flip and random crops padded by 10 pixels on each side). For pre-processing, we normalize the images using the channel means and standard deviations. All networks are trained using stochastic gradient descent (SGD) for 40 epochs. We set the initial learning rate to 0.005 for the CAR, SOP, and In-Shop datasets and 0.0025 for the CUB dataset, divided by 10 after the 20th and 30th epochs. The batch size is 128 and each batch of images contains n examples from c classes, randomly selected from the training data.
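The second-order loss is equally short to implement. The sketch below mirrors the first-order sketch given earlier and is again our own illustration with assumed names, not the authors' released code; the softplus rewriting is a numerically stable equivalent of the −log softmax form of Equation 16.

```python
import torch.nn.functional as F  # same import as the first-order sketch

def l2nd_loss(f_a, f_p, f_n):
    f_a, f_p, f_n = (F.normalize(f, dim=1) for f in (f_a, f_p, f_n))
    s_ap = (f_a * f_p).sum(dim=1)
    s_an = (f_a * f_n).sum(dim=1)
    pos = s_ap - 0.5 * s_ap ** 2  # positive-pair exponent of Equation 16
    neg = 0.5 * s_an ** 2         # negative-pair exponent of Equation 16
    # -log(exp(pos) / (exp(pos) + exp(neg))) rewritten as softplus(neg - pos)
    return F.softplus(neg - pos).mean()
```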
Throughout the paper, we refer to the n examples per class as a group.
Comparing the performance of the 1st-order and 2nd-order triplet losses only requires changing one line of code in a PyTorch implementation, substituting the loss from Equation 16 for the loss from Equation 1. We then calculate Recall@K as the measure of retrieval quality. In the CUB, CAR, and SOP datasets, both the query set and gallery set refer to the testing set. During the query process, the top-K retrieved images exclude the query image itself. In the In-Shop dataset, the query set and gallery set are predefined by the original paper." }, { "heading": "6.1 VERIFICATION OF VECTOR FIELD", "text": "We implement two variants from the EPHN paper (Xuan et al., 2019): Hard-Positive with Hard-Negative (HPHN) to mimic Problem 1, and Easy-Positive with Easy-Negative (EPEN) to mimic Problem 2, on the CAR dataset, setting n = 16 so that there are enough positives and negatives for easy or hard selection. Also, within a batch, each feature vector is allowed to appear in only one triplet, to prevent interactions among the triplets. Figure 3 shows two sets of experimental results from training a network for 2 epochs (73 batches per epoch), at the initial phase and a later phase of the optimization. We set the learning rate very low (0.00001) to prevent the model from updating too much in this experiment, so we can better observe the problematic behavior.
We plot which of the 4 possible directions (top-right, top-left, bottom-left, and bottom-right) each triplet dot moves in. These results closely match our proposed model: 1) there is no top-left movement, which is not possible in our model; 2) in the initial phase, most of the movement for L1st is top-right and for L2nd is bottom-right, which fits our proposed fix to the vector field; 3) in the later phase, most of the movement for L1st is bottom-right, and for L2nd it is bottom-right, bottom-left, and top-right; 4) although the learning rate is small, there is still a pattern change in the dots in the early phase, with L1st dots moving top-right and L2nd dots moving bottom-right." }, { "heading": "6.2 COMPARATIVE RESULTS FOR EPHN", "text": "The hard-negative mining optimization problem identifies the specific cause of the challenges reported (for example, by Harwood et al. (2017)) when optimizing with hard negative examples. Also, in the EPHN work (Xuan et al., 2019), when n = 2, there is only one positive example to choose from, so Sap is more likely to be far from 1. The negative is chosen as the closest negative, so San is more likely to be close to 1. This situation is similar to Problem 1. The L1st loss leads to a singularity at the location (1,1).
In addition, we re-implement the test from the EPHN work to plot the Recall@1 accuracy versus n across the CUB, CAR, SOP, and In-Shop datasets. Figure 4 shows a clear gap between L1st and L2nd with EPHN mining. We hypothesize that the small number of examples in most classes in SOP leads to the inconsistent behavior we observe. Moreover, the figure shows that Problem 1 happened on CUB when n = 2 and on CAR when n = 2, 4, while the L2nd loss performs well in the same settings." }, { "heading": "6.3 COMPARISON: EPHN VS EPSHN", "text": "Semi-hard negative mining samples triplets by choosing the most similar negative where Sap > San. It is therefore most affected by Problem 2. We compare the results of easy-positive semi-hard-negative mining (EPSHN) and easy-positive hard-negative mining (EPHN) for both the 1st-order and 2nd-order loss functions on the CUB, CAR, SOP, and In-Shop datasets. 
Figure 4 shows another gap between L2nd with EPHN and with EPSHN on the CUB, CAR, and In-Shop datasets for most choices of the group size n of elements per class in each batch. This result indicates that both Problem 1 and Problem 2 are important for metric learning." }, { "heading": "6.4 COMPARING TO THE STATE OF THE ART", "text": "Finally we compare our results with current state-of-the-art embedding approaches, including more complex triplet loss approaches (Yuan et al., 2017; Ge, 2018; Cakir et al., 2019) and ensemble-based approaches (Opitz et al., 2017; Kim et al., 2018; Xuan et al., 2018). Our embeddings are trained with ResNet50 and an output embedding size of 512. For CUB, CAR, SOP, and In-Shop, the optimal group sizes are 8, 16, 2, and 2, respectively. In Table 1, the 2nd-order Easy Positive Hard Negative approach achieves a new record on the CUB and SOP datasets. On the CAR and In-Shop datasets, our results are comparable to the ensemble methods." }, { "heading": "7 CONCLUSION", "text": "This paper uses the triplet scatter diagram as a way to describe triplet selection strategies. The diagram offers the ability to characterize how triplets change as Deep Metric Learning progresses, and we explore the behavior of the gradient descent optimization for the common case where points are normalized to a hypersphere. We find that important classes of triplets have an effective gradient that forces negative examples closer to the anchor, or positive examples farther from the anchor, and that situations arise which encourage triplets of images that are all similar to become even more similar. This explains previously observed bad behavior for hard-negative triplet mining. We suggest a simple modification to the desired gradients, and derive a loss function that gives those gradients. Experimentally we show that this improves the convergence for hard-negative triplet selection strategies.
With this modification, we no longer observe challenges in optimization with the Easy-Positive Hard-Negative triplet mining strategy, and show that easy-positive hard-negative mining gives results that exceed or are competitive with state-of-the-art approaches that include complicated network architectures and ensembles." }, { "heading": "A APPENDIX: SIMILARITY AFTER GRADIENT UPDATING FOR 1ST ORDER LOSS", "text": "The following derivation shows how to get S_new_ap and S_new_an in Equations 7 and 8.
S_new_ap = fa_newᵀfp_new = (1 + β²)faᵀfp + βfaᵀfa + βfpᵀfp − βfnᵀfp − β²fnᵀfa = (1 + β²)Sap + 2β − βSpn − β²San   (17)
S_new_an = fa_newᵀfn_new = (1 + β²)faᵀfn − βfaᵀfa − βfnᵀfn + βfpᵀfn − β²fpᵀfa = (1 + β²)San − 2β + βSpn − β²Sap   (18)
We construct two planes: Pap spanned by fa and fp, and Pan spanned by fa and fn. On Pap, fp can be decomposed into two components: fp‖ (the component along fa) and fp⊥ (the component perpendicular to fa). On Pan, fn can be decomposed into two components: fn‖ (the component along fa) and fn⊥ (the component perpendicular to fa). Then Spn is:
Spn = fpᵀfn = (fp‖ + fp⊥)ᵀ(fn‖ + fn⊥) = SapSan + γ√(1 − S²ap)√(1 − S²an)   (19)
where γ = fp⊥ᵀfn⊥ / (‖fp⊥‖‖fn⊥‖), which represents the projection factor between Pap and Pan." }, { "heading": "B APPENDIX: NORM OF UPDATED FEATURES FOR 1ST ORDER LOSS", "text": "The following derivation shows how to derive ‖fa_new‖, ‖fp_new‖, and ‖fn_new‖ in Equations 9 and 10. On Pap, gp can be decomposed into the direction along fp and the direction perpendicular to fp. On Pan, gn can be decomposed into the direction along fn and the direction perpendicular to fn. 
Then,
‖fp_new‖² = (1 + βSap)² + β²(1 − S²ap)   (20)
‖fn_new‖² = (1 − βSan)² + β²(1 − S²an)   (21)
On Pap, ga can be decomposed into 3 components: a component in the plane and along fa, a component in the plane and perpendicular to fa, and a component perpendicular to Pap. Then,
‖fa_new‖² = (1 + βSap − βSan)² + (β√(1 − S²ap) − γβ√(1 − S²an))² + (β√(1 − γ²)√(1 − S²an))²   (22)" }, { "heading": "C APPENDIX: SIMILARITY AFTER GRADIENT UPDATING FOR 2ND ORDER LOSS", "text": "The following derivation shows how to get S_new_ap and S_new_an after the gradient update with L2nd:
S_new_ap = (1 + w²apβ²w)Sap + 2wapβw − wanβwSpn − wapwanβ²wSan   (23)
S_new_an = (1 + w²anβ²w)San − 2wanβw + wapβwSpn − wapwanβ²wSap   (24)" }, { "heading": "D APPENDIX: NORM OF UPDATED FEATURES FOR 2ND ORDER LOSS", "text": "The following derivation shows how to derive ‖fa_new‖, ‖fp_new‖, and ‖fn_new‖ after the gradient update with L2nd:
‖fp_new‖² = (1 + wapβwSap)² + w²apβ²w(1 − S²ap)   (25)
‖fn_new‖² = (1 − wanβwSan)² + w²anβ²w(1 − S²an)   (26)
‖fa_new‖² = (1 + wapβwSap − wanβwSan)² + (wapβw√(1 − S²ap) − γwanβw√(1 − S²an))² + (wanβw√(1 − γ²)√(1 − S²an))²   (27)" }, { "heading": "E APPENDIX: GRADIENT UPDATES FOR VANILLA TRIPLET LOSS", "text": "The main paper uses NCA to define the loss. Another common, vanilla triplet-loss function is derived to guarantee a specific margin. This can be expressed in terms of fa, fp, fn as:
L = max(‖fa − fp‖² − ‖fa − fn‖² + α, 0) = max(D, 0)   (28)
gp = ∂L/∂fp = −β(fa − fp) if D > 0, and 0 otherwise   (29)
gn = ∂L/∂fn = β(fa − fn) if D > 0, and 0 otherwise   (30)
ga = ∂L/∂fa = β(fn − fp) if D > 0, and 0 otherwise   (31)
where D = ‖fa − fp‖² − ‖fa − fn‖² + α and β = 2. For simplicity, in the following discussion we assume D > 0 for the vanilla triplet loss. Then we can get fa_new, fp_new, and fn_new and their norms:
fp_new = fp + β(fa − fp) = (1 − β)fp + βfa   (32)
fn_new = fn − β(fa − fn) = (1 + β)fn − βfa   (33)
fa_new = fa − βfn + βfp   (34)
‖fp_new‖² = (1 − β + βSap)² + β²(1 − S²ap)   (35)
‖fn_new‖² = (1 + β − βSan)² + β²(1 − S²an)   (36)
‖fa_new‖² = (1 + βSap − βSan)² + (β√(1 − S²ap) − γβ√(1 − S²an))² + (β√(1 − γ²)√(1 − S²an))²   (37)
The updated similarities S_new_ap and S_new_an are:
S_new_ap = (1 − β + β²)Sap + 2β − β² − β(1 − β)Spn − β²San   (38)
S_new_an = (1 + β + β²)San − 2β − β² + β(1 + β)Spn − β²Sap   (39)
Comparing to Equations 7 and 8, the vanilla triplet behavior is similar to that of triplet-NCA, and we simulate ∆Sap and ∆San with the vanilla triplet loss in Figure 5.
We show results for EPHN with n = 2 and n = 8 on the CAR dataset when the hard-negative mining problem happens during the optimization. In both cases, all points are initially pushed towards the same location (1,1), which matches our predicted movement on the vector field of L1st with entanglement in Figure 2." }, { "heading": "F APPENDIX: NUMERICAL SIMULATION FOR SIMILARITY CHANGE AND THEIR ENTANGLEMENT RESULT FOR 1ST ORDER, VANILLA AND 2ND ORDER LOSS", "text": "Figure 5 shows the numerical simulation of ∆Sap and ∆San with L1st, Lv, and L2nd. These show that the problematic regions are qualitatively similar for different values of p, the parameter in our model of entanglement." }, { "heading": "G APPENDIX: EPHN FOR 1ST AND 2ND ORDER LOSS", "text": "Figure 6 shows the training of the embedding function observed through the triplet scatter diagram, showing for each point the triplet containing the closest positive and the closest negative from the whole dataset after each epoch of training. We observe the dynamic movement towards the singularity at the location (1,1) for EPHN with n = 2. 
The L2nd loss for EPHN never has this problem, because the weighted gradient approach more effectively separates hard negative pairs early on in the optimization.
When n = 8, the easy-positive part of the triplet is easier because each anchor can pick the closest of 7 neighbors from the same class instead of a randomly assigned same-class element, so Sap will be closer to 1. Meanwhile, the negative is likely to be less similar because there are fewer other examples to choose from. Therefore the optimization recovers, because triplets are less likely to have large San and get trapped in the part of the vector field that leads to (1,1)." } ]
2019
null
SP:2fa9b2601acf885062d3c9d158f6518a9213f398
[ "This paper assesses the effects of training an image classifier with different label types: 1-hot coarse-grained labels (10 classes), 1-hot fine grained labels (30 labels which are all subcategories of the 10 coarse-grained categories), word vector representations of the 30 fine-grained labels. They also compare the representations learned from an unsupervised auto-encoder. They assess the different representations through cosine similarity within/between categories and through comparison with human judgments in an odd-one-out task. They find that (i) the auto-encoder representation does not capture the semantic information learned by the supervised representations and (ii) representations learned by the model depend on the label taxonomy, how the targets are represented (1-hot vs. wordvec), and how the model is trained (e.g. fine-grained then coarse grained stages), (iiii) the different representations predict human judgements to differing degrees. the first finding is obvious and I'm not even sure why it needs to be stated -- of course semantics of images are not inherently encoded in the pixels of an image! The second point again, is not surprising . This paper starts to get at some interesting questions but does not follow through. It is also quite confusing to read despite thee simple subject matter. This paper is also missing a related work section! There has been so much word on adding structure to the label space of image classifiers (e.g. models that learn image/text embedding space jointly, models that predict word vectors, graphical model approaches to building in semantic information, etc.) and none of this is discussed. There has also been work on comparing convnet representations to human percepts e.g. https://cocosci.princeton.edu/papers/Peterson_et_al-2018-Cognitive_Science.pdf)and none of this work is discussed! This work needs to be better situated within the context of previous work in this field. Please write a related work section.", "This paper demonstrates the importance of labels at various levels (no label, basic level label, and superordinate level) as well as in combination to determine the importance of semantic information in classification problems. They train an identical CNN architecture either as an autoencoder (no labels), with the basic label, with the subordinate label, with the basic and subordinate labels, and with basic labels which are fine-tuned with one-hot encodings of superordinate labels, as well as with word vectors. Classification accuracy, t-SNE, cosine similarity matrices and predictions on a human behavior task are used to evaluate the differences across labels types. The authors find that superordinate labels are helpful and important for classification problems. " ]
We investigated how the visual representations learned by CNNs are affected by training with different linguistic labels (e.g., basic-level labels only, superordinate-level only, or both at the same time), and how these differently-trained models compare in their ability to predict the behavior of humans tasked with selecting the object that is most different from two others in a triplet. CNNs used identical architectures and inputs, differing only with respect to the labels used to supervise the training. In the absence of labels, we found that models learned very little categorical structure, suggesting that this structure cannot be extracted purely from the visual input. Surprisingly, models trained with superordinate labels (vehicle, tool, etc.) were most predictive of the behavioral similarity judgments. We conclude that the representations used in an odd-one-out task are highly modulated by semantic information, especially at the superordinate level.
[]
[ { "authors": [ "Zeynep Akata", "Florent Perronnin", "Zaid Harchaoui", "Cordelia Schmid" ], "title": "Label-embedding for image classification", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2015 }, { "authors": [ "Yashas Annadani", "Soma Biswas" ], "title": "Preserving semantic relations for zero-shot learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "F Gregory Ashby", "Shawn W Ell" ], "title": "The neurobiology of human category learning", "venue": "Trends in cognitive sciences,", "year": 2001 }, { "authors": [ "F Gregory Ashby", "W Todd Maddox" ], "title": "Human category learning", "venue": "Annu. Rev. Psychol.,", "year": 2005 }, { "authors": [ "Piotr Bojanowski", "Edouard Grave", "Armand Joulin", "Tomas Mikolov" ], "title": "Enriching word vectors with subword information", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2017 }, { "authors": [ "Long Chen", "Hanwang Zhang", "Jun Xiao", "Wei Liu", "Shih-Fu Chang" ], "title": "Zero-shot visual recognition using semantics-preserving adversarial embedding networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "James J DiCarlo", "Davide Zoccolan", "Nicole C Rust" ], "title": "How does the brain solve visual object recognition? Neuron", "venue": null, "year": 2012 }, { "authors": [ "Andrea Frome", "Greg S Corrado", "Jon Shlens", "Samy Bengio", "Jeff Dean", "Marc’Aurelio Ranzato", "Tomas Mikolov" ], "title": "Devise: A deep visual-semantic embedding model", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Paul R Hays" ], "title": "From the jurassic dark: Linguistic relativity as evolutionary necessity. AMSTERDAM STUDIES IN THE THEORY AND HISTORY OF LINGUISTIC SCIENCE SERIES", "venue": null, "year": 2000 }, { "authors": [ "Martin N Hebart", "Adam H Dickter", "Alexis Kidder", "Wan Y Kwok", "Anna Corriveau", "Caitlin Van Wicklin", "Chris I Baker" ], "title": "Things: A database of 1,854 object concepts and more than 26,000 naturalistic object", "venue": "images. 
{ "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Christoph H Lampert", "Hannes Nickisch", "Stefan Harmeling" ], "title": "Attribute-based classification for zero-shot visual object categorization", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2013 }, { "authors": [ "Jie Lei", "Zhenyu Guo", "Yang Wang" ], "title": "Weakly supervised image classification with coarse and fine labels", "venue": "In 2017 14th Conference on Computer and Robot Vision (CRV),", "year": 2017 }, { "authors": [ "Jimmy Lei Ba", "Kevin Swersky", "Sanja Fidler" ], "title": "Predicting deep zero-shot convolutional neural networks using textual descriptions", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2015 }, { "authors": [ "Gary Lupyan", "Molly Lewis" ], "title": "From words-as-mappings to words-as-cues: the role of language in semantic knowledge", "venue": "Language, Cognition and Neuroscience,", "year": 2017 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of Machine Learning Research,", "year": 2008 }, { "authors": [ "Joshua C Peterson", "Paul Soulos", "Aida Nematzadeh", "Thomas L Griffiths" ], "title": "Learning hierarchical visual representations in deep neural networks using hierarchical linguistic labels", "venue": "arXiv preprint arXiv:1805.07647,", "year": 2018 }, { "authors": [ "Nicolas Pinto", "N Majaj", "Youssef Barhomi", "E Solomon", "JJ DiCarlo" ], "title": "Human versus machine: comparing visual object recognition systems on a level playing field", "venue": "Cosyne Abstracts,", "year": 2010 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Flood Sung", "Yongxin Yang", "Li Zhang", "Tao Xiang", "Philip HS Torr", "Timothy M Hospedales" ], "title": "Learning to compare: Relation network for few-shot learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Panqu Wang", "Garrison W Cottrell" ], "title": "Basic level categorization facilitates visual object recognition", "venue": "arXiv preprint arXiv:1511.04103,", "year": 2015 }, { "authors": [ "Charles Y. Zheng", "Francisco Pereira", "Chris I. Baker", "Martin N. Hebart" ], "title": "Revealing interpretable object representations from human behavior", "venue": "In International Conference on Learning Representations,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "A critical distinction between human category learning and machine category learning is that only humans have a language. A language means that human learning is not limited to a one-to-one correspondence between a visual input and a category label. Indeed, the users of a language are known to actively seek out categorical relationships between objects and use these relationships in making perceptual similarity judgments and in controlling behavior (Hays, 2000; Lupyan & Lewis, 2017). A premise of our work is that a language provides a semantic structure to labels, and that this structure contributes to the superior efficiency and flexibility of human vision compared to any artificial systems (Pinto et al., 2010). Of course, the computer vision literature on zero-shot and fewshot learning has also made good progress in leveraging semantic information (e.g., image captions, attribute labels, relational information) to increase the generalizability of a model’s performance (Lampert et al., 2013; Sung et al., 2018; Lei Ba et al., 2015).\nStill, this performance pales in comparison to the human ability for classification, where zero-shot and few-shot learning is the norm, and efficiently-acquired category knowledge is easily generalized to new exemplars (Ashby & Maddox, 2005; Ashby & Ell, 2001). One reason why machine learning lags behind human performance may be because of a failure to fully consider the semantic structure of the ground-truth labels used for training, which can be heavily biased by basic or subordinatelevel categories. This might result in models learning visual feature representations that may not be best for generalization to new, higher-level categories. For example, ImageNet (Deng et al., 2009) contains 120 different dog categories, making the models that are trained using these labels dog experts, creating an interesting but highly atypical semantic structure.\nHere we study how the linguistic structure of labels influences what is learned by models trained on the same visual inputs. Specifically, we manipulated the labels used to supervise the training of CNN models, each having the same architecture and given identical visual inputs. For example, some of these models were trained with basic-level labels only, some with only superordinate-level labels, and some with both. We then compare visual representations learned by these models, and predict human similarity judgement that we collected using an Odd-one-out task where people had to select which of three object images was the most different. With this dataset, and using categorical representations extracted from our trained models, we could predict human similarity decisions with\nup to 74% accuracy, which gives us some understanding of the labels needed to produce human-like representations. Our study also broadly benefits both computer vision and behavioral science (e.g., psychology, neuroscience) by suggesting that the semantic structure of labels and datasets should be carefully constructed if the goal is to build vision models that learn visual features representations having the potential for human-like generalization. 
For behavioral science, this research provides a useful computational framework for understanding the effect of training labels on the human learning of category relationships in the context of thousands of naturalistic images of objects.
2 RELATED WORK" }, { "heading": "2.1 SEMANTIC LABEL EMBEDDING", "text": "Although many computer vision models perform well in image classification, generalization tasks such as zero-shot and few-shot learning remain challenging. Several studies have attempted to address this problem by embedding semantic information into a model's representations using text descriptions (Lei Ba et al., 2015), attribute properties (Lampert et al., 2013; Akata et al., 2015; Chen et al., 2018), and relationships between objects (Sung et al., 2018; Annadani & Biswas, 2018). More related to our work, some studies even directly leveraged the linguistic structure of labels. For example, Lei et al. (2017) and Wang & Cottrell (2015) found that training CNNs with coarse-grained labels (e.g., basic-level categories) improves classification accuracy for finer-grained labels (e.g., subordinate-level labels). Also, Frome et al. (2013) re-trained a CNN to predict the word vectors learned by a word embedding model, instead of using one-hot labels, and found improved zero-shot predictions; the model was able to predict thousands of novel categories that it had never seen, with 18% accuracy. These results suggest that different semantic structures of labels, such as word hierarchy, the order of learning, or semantic similarity between words, affect learned visual representations in CNNs to differing degrees. The current study provides a more systematic investigation of this question." }, { "heading": "2.2 UNDERSTANDING HUMAN VISUAL REPRESENTATION", "text": "The human visual system is unparalleled in its ability to learn feature representations for objects that are robust to large changes in appearance. This tolerance to variability not only enables accurate object recognition, but also facilitates generalization to new exemplars and categories (DiCarlo et al., 2012). Understanding how humans learn these visual representations is, therefore, an enormously important question, but one that is difficult to study because human learning in the real world is affected and confounded by many factors that are difficult to control experimentally. Recently, work has addressed this issue by computationally modeling and simulating human representations. For example, Hebart et al. (2019) studied human visual representations by fitting probabilistic models to human similarity judgements, and found that human visual representations are composed of semantically interpretable units, with each conveying categorical membership, functionality, and perceptual attributes. Peterson et al. (2018), the study most similar to ours, trained CNNs with labels that differed in hierarchy (e.g., subordinate-level vs. basic-level). They found that training on coarser-grained labels (either standalone or coming after finer-grained training) induces a more semantically structured representation, and produces more human-like generalization performance. The current study builds on this earlier work by 1) including CNNs trained with no labels (autoencoder) or very fine-grained labels (word vectors), 2) testing on a large-scale dataset of human similarity judgements, and 3) comparing superordinate vs. basic levels." 
}, { "heading": "3 MODEL TRAINING", "text": "Our goal is to study how linguistic labels change the visual representations learned by CNNs. To do this, we trained equivalently designed CNNs for classification, but each with different linguistic labels as ground-truth. In addition, we trained a Convolutional autoencoder, which encodes the images using the same convolutional structure as the other models but, instead of being supervised to predict the class of the image, the objective of this model is to generate an output image that is the same as the input. This Conv. Autoencoder, therefore, represents a model that was not trained with any linguistic label, in contrast to the other models that were each trained with some type of linguistic labels. The description of each model and the labels used for training are provided below.\n• Conv. Autoencoder: Autoencoder with Convolutional encoder and decoder trained to output the same image as input • Basic labels: CNN model trained with one-hot encoding of basic-level categories, n=30 • Superordinate labels: CNN model trained with one-hot encoding of superordinate-level\ncategories, n=10 • Basic + Superordinate: CNN model trained with two-hot encoding of both basic and\nsuperordinate-level categories, n=40(10+30) • Basic then Superordinate: CNN model trained with one-hot encoding of basic-level cat-\negories first (n=30), and then finetuned with one-hot encoding of superordinate categories (n=10)\n• Superordinate then Basic: CNN model trained with one-hot encoding of superordinatelevel categories first (n=10), and then finetuned with one-hot encoding of basic categories (n=30) NEW\n• Basic FastText vectors: CNN model trained with basic-level word vectors extracted from FastText word embedding model (Bojanowski et al., 2017), dimension=300\n• Superordinate FastText vectors: CNN model trained with superordinate-level word vectors extracted from FastText word embedding model (Bojanowski et al., 2017), dimension=300 NEW\nThe identical CNN architecture was used for each model in our labeling manipulation, except for the output layer and its activation function. This general pipeline is described in Figure 1. Our CNN models consist of five blocks of two Convolutional layers, each followed by Max pooling and Batch normalization layers. For all Convolutional and Max pooling operations, zero padding was used to produce output feature maps having the same size as the input. Rectified linear units (ReLU) were used to obtain an activation function after each convolution. The flattened output of the final Convolutional layer, the ”bottleneck” feature that we later extract and use as a model’s visual representation (dim=1568), was then fed into one fully connected dense layer. For Conv. Autoencoder, the same Convolutional architecture was used for encoding and decoding, with the hidden layer in the model (dim=1568) serving as the bottleneck feature for analysis. The final predicted output, ”label vector” is either one-hot or word embedding according to the model’s target labels. Output activation functions differed depending on what label vector was used: a sigmoid function for Basic + Superordinate CNN, a linear function for the Conv. Autoencoder and FastText vectors CNNs, and a softmax for the rest of CNNs.\nAll models were trained and validated on the images of 30 categories from the IMAGENET 2012 dataset (Deng et al., 2009), and tested on images of the same 30 categories from the THINGS dataset (Hebart et al., 2019). 
These 30 basic-level categories were grouped into 10 higher-level, superordinate categories, which included: ’mammal’, ’bird’, ’insect’, ’fruit’, ’vegetable’, ’vehicle’, ’container’, ’kitchen appliance’, ’musical instrument’, and ’tool’. A list of all 30 categories, with their superordinates, is provided in Supplementary 7.1. All input images were converted from RGB to BGR and each channel was zero-centered with respect to the ImageNet images. Different loss functions were used for training different models: Binary Cross-entropy loss for the Basic + Superordinate CNN, Mean Squared Error loss for the Conv. Autoencoder and FastText vectors CNNs, and Categorical Cross-entropy loss for the rest of the CNNs. All models were trained using Adam optimization (Kingma & Ba, 2014), with a mini-batch size of 64. During training, early stopping was implemented and the model with the lowest validation loss was used for the following analyses." }, { "heading": "4 BEHAVIORAL DATA", "text": "To compare the visual representations learned by our trained models with those of humans, we collected human similarity judgments in an Odd-one-out task, as in Zheng et al. (2019). Participants were shown three images of objects per trial, a triplet, and were asked to choose which object was most different from the other two. Each triplet consisted of three exemplar objects from the 30 categories used for our model training. All exemplar objects came from Zheng et al. (2019), except for ’crate’, ’hammer’, ’harmonica’, and ’screwdriver’, which were replaced with new exemplars to increase image quality and category representativeness. There are 4060 possible triplets that can be generated from all 30 categories, but we collected behavioral data on only a subset of these to reduce the time and cost of data collection. This subset includes 1) the ten triplets having all objects coming from the same superordinate category, e.g., ’orangutan’, ’lion’, ’gazelle’; 2) all 435 triplets where two objects came from the same superordinate category, e.g., ’orangutan’, ’lion’, ’minivan’; and 3) 1375 triplets where all objects came from different categories, e.g., ’orangutan’, ’minivan’, ’lemon’, yielding 1820 unique triplets in total. 51 Amazon Mechanical Turk (AMT) workers participated in this task, each making responses on ∼200 triplets. After removing responses with reaction times below 500ms, we collected 9697 similarity judgments where each triplet was viewed by 5.6 workers, on average (min=4, max=51)." }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 EVALUATING MODEL PERFORMANCE", "text": "Although our goal was not to compete with state-of-the-art vision models in classification, we evaluated classification accuracy to see the effects of different labels on learning, thereby confirming that the visual features learned by our models represented category knowledge. To evaluate classification accuracy, we report top@k, the percentage of accurately classified test images where the true class was among the model's top-K predictions, in Table 1. Average precision and average recall over all categories are also reported in Supplementary 7.3. All metrics were computed on the THINGS test dataset (Hebart et al., 2019). Because the FastText vectors CNN predicts a word vector, not a class, we approximated its classification performance by calculating cosine similarity between predicted and true word vectors and choosing the corresponding class from the top@k similarities. 
Classification results cannot be generated from the Conv. Autoencoder, but we include examples of images generated by this model in Supplementary 7.2 to show that the model worked. As can be seen in Table 1, the top@5 classification accuracy for all trained models was good (all >.82), although there is room for improved classification for the FastText vectors CNN." }, { "heading": "5.2 EXPLORING VISUAL REPRESENTATIONS", "text": "To explore how the different linguistic labeling schemes affected the learned visual representations, we extracted and analyzed the bottleneck features from each model (i.e., the 1568-dimensional output of the last Convolutional layer; see Figure 1). We first measured the representational similarity of all objects in the training dataset (IMAGENET 2012; Deng et al., 2009), both between and within each category. These representational distributions were visualized using t-SNE (Maaten & Hinton, 2008) and are shown in Supplementary 7.5. We also analyzed the similarity between categorical representations by plotting a similarity matrix in Figure 2. To create categorical representations, we simply averaged the obtained bottleneck features over all training images per category, creating, in a sense, a “prototypical” representation for each class.
Clustering Quality
To investigate how dense and well separated the models' category representations are, we computed the ratio of between-category dispersion to within-category dispersion using cosine distance (1 minus the cosine similarity of two feature vectors). Between-category dispersion is the average cosine distance between the centers (means) of different categories. Within-category dispersion is the average cosine distance between every exemplar and the center of its category. Comparing the models in Table 2 revealed that using distributed word vectors as targets, especially Superordinate FastText vectors, produced the highest between-to-within ratio, suggesting the most tightly clustered representations. Interestingly, the Basic + Superordinate CNN model, which was trained with both basic and superordinate labels at the same time, learned more scattered and less distinguishable categorical representations compared to the other label-trained models. Lastly, the Conv. Autoencoder produced the lowest between-to-within ratio, suggesting that even if a model learns visual features that are good enough to generate input-like images, these visual representations may still be poorly discriminable not only at the basic level, but also at the superordinate level. The widely distributed Conv. Autoencoder features in the t-SNE plots of Supplementary 7.5 further support that the visual input alone is not sufficient to produce any clusterable structure or category representations. A similar trend was observed in the other clustering quality measures, as reported in Supplementary 7.4.
Visualization of Categorical Representations
Figure 2 visualizes cosine similarity matrices for the category representations learned by the models, to explore whether the hierarchical semantic structure of the 30 categories is captured (e.g., every basic-level category belongs to one of ten superordinate categories). For a complete comparison, we also analyzed categorical representations extracted from SPoSE (Zheng et al., 2019), FastText (Bojanowski et al., 2017), and a VGG16 early layer (i.e., the output from the first max-pooling layer; Simonyan & Zisserman, 2014). The SPoSE model's category representations were trained on human similarity judgments. 
This serves as an approximation of human perceived similarity, which can be a combination of semantic and visual similarities. While FastText similarity represents the semantic similarity between categories in basic-level terms, VGG16 early layer similarity represents lower-level visual similarity. Whereas little effect of category hierarchy can be seen in the VGG16 early layer or Conv. Autoencoder features, various semantic structures can be observed in the other models (e.g., the emergent bright yellow squares in the figure). Upon closer analysis, these categorical divisions seemed to occur for 1) nature vs. non-nature, 2) edible vs. non-edible, and 3) the superordinate categories. Surprisingly, basic-level structures are still observed in Figure 2f (e.g., fine-grained lines along the diagonal), where the model is trained only on the superordinate-level labels. This suggests that guidance from superordinate labels was often as good as or better than guidance from much finer-grained basic-level labels, which is consistent with the previous finding that training with coarser labels induces more hierarchical structure in visual representations (Peterson et al., 2018)." }, { "heading": "5.3 PREDICTING HUMAN VISUAL BEHAVIOR", "text": "Finally, we evaluated how well the visual representations learned by the models could predict human similarity judgements in an Odd-one-out task (see Section 4). For each triplet, responses were generated from the models by comparing the cosine similarities between the three visual object representations and selecting the one most dissimilar from the other two. Three kinds of visual representations were computed and compared: 1) IMAGENET categorical representations, where features were averaged over ∼1000 images per category from the IMAGENET training dataset (Deng et al., 2009); 2) THINGS categorical representations, where features were averaged over ∼10 images per category from the THINGS dataset (Hebart et al., 2019); and 3) Single Exemplar representations, where only one feature per category was generated, from the 30 exemplar images used in the behavioral data collection. Together with accuracy from SPoSE (Zheng et al., 2019), FastText (Bojanowski et al., 2017), and the VGG16 early layer (Simonyan & Zisserman, 2014), three baseline accuracies are reported below, which constitute upper and lower bounds.
• Null Acc: Accuracy achieved by predicting that every sample is the most frequent class in the dataset (lower bound, 36%).
• Bayes Acc: Accuracy achieved by predicting that every sample is the most frequent class in each unique triplet set (upper bound, 84%).
• SPoSE Acc: Accuracy achieved using the SPoSE model (Zheng et al., 2019), a probabilistic model that is directly trained on human responses on all triplets from 1854 THINGS objects (80%).
As shown in Figure 3, triplet prediction accuracy was highest when models used IMAGENET category representations and lowest when single exemplar representations were used, even when the exemplar image is the one that participants actually saw during the experiment. This shows that when humans make visual similarity ratings, they not only evaluate visual inputs but also use rich and abstract semantic information learned from viewing myriad exemplars. Comparing individual model performance, the highest accuracy (74%) was obtained by the model trained with superordinate labels. 
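For reference, the response rule just described can be written in a few lines (a sketch with assumed names, not a released implementation):

```python
import numpy as np

def odd_one_out(reps):
    # reps: 3 x d array holding the three representation vectors of a triplet
    r = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    sim = r @ r.T                    # pairwise cosine similarities
    support = sim.sum(axis=1) - 1.0  # each item's similarity to the other two
    return int(np.argmin(support))   # the least similar item is the odd one out
```

The rule has no free parameters, so differences in triplet accuracy reflect only the learned representations.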
This performance is particularly impressive, considering 1) how coarsely grained superordinate labels are (dim=10) compared to Basic labels (dim=30), Basic + Superordinate labels (dim=40), or FastText vectors (dim=300), and 2) that this model was not trained on the actual human triplet data, unlike the SPoSE model, whose performance was about 80%.
These results suggest that the representations used by humans in an Odd-one-out task are highly semantic, reflecting category structure, especially at the superordinate level. However, this may be only because the setting of the odd-one-out task caused people to use superordinate label information. For example, when participants are given a triplet like (’orangutan’, ’lion’, and ’lemon’), they are prone to choose ’lemon’ because it is the odd one out at the superordinate level. In fact, when the number of superordinate categories in a triplet is two, as in the example above, 90% of human responses can be predicted just by identifying which object belongs to the odd superordinate category. To investigate how much this task setting affected the results, we broke down the triplet data based on the number of superordinate categories that a triplet spans and reported prediction performance for each split, as shown in Figure 3. Interestingly, the model trained with superordinate labels alone still performed the best (63%) when superordinate-level information was not very helpful, i.e., where all three images in a triplet come from three different superordinate categories, e.g., (’mammal’, ’fruit’, ’vehicle’). Moreover, the superordinate labels CNN (59%) outperformed the basic labels CNN (56%) even when the images were to be compared at the basic level, where all three images in a triplet come from the same superordinate category, e.g., (’lemon’, ’orange’, ’banana’). This implies that humans leverage guidance from coarser superordinate labels in shaping categorical visual representations at both the basic and superordinate levels." }, { "heading": "6 CONCLUSION", "text": "To be able to generalize to unseen exemplars, any vision system has to learn statistical regularities that make members of the same category more similar to one another than members of other categories. But where do these regularities come from? Are they present in the bottom-up (visual) input to the network? Or does learning the regularities require top-down guidance from category labels? If so, what kinds of labels? To investigate this problem, we manipulated the visual representations learned by CNNs by supervising them using different types of labels and then evaluated these models in their ability to predict human similarity judgments. We found that the type of label used during training profoundly affected the visual representations that were learned, suggesting that there is categorical structure that is not present in the visual input and instead requires top-down guidance in the form of category labels. We also found that guidance from superordinate labels was often as good as or better than guidance from much finer-grained basic-level labels. Models trained only on superordinate class labels such as “musical instrument” and “container” were not only more sensitive to these broader classes than models trained on just basic-level labels, but exposure to just superordinate labels allowed the model to learn within-class structure, distinguishing a harmonica from a flute, and a screwdriver from a hammer. 
This finding is consistent with previous work revealing that training with coarser labels induces more semantically structured visual representations (Peterson et al., 2018). More surprisingly, models supervised using superordinate labels (vehicle, tool, etc.) were best at predicting human performance on a triplet odd-one-out task. CNNs trained with superordinate labels not only outperformed other models when the odd one out came from a\ndifferent superordinate category (which is not surprising), but also when all three objects in a triplet came from different superordinate categories (e.g., when choosing between a banana, a bee, and a screwdriver). Our ongoing work into how different types of labels shape visual representations is exploring the effect of labels specific to different languages (e.g., English vs. Mandarin), and how these may translate to differential human and CNN classification performance." }, { "heading": "ACKNOWLEDGMENTS", "text": "Details regarding research support will be added post-review." }, { "heading": "7 SUPPLEMENTARY MATERIAL", "text": "" }, { "heading": "7.1 LIST OF 30 CATEGORIES", "text": "Each superordinate-level category contains three basic-level categories (WordNet IDs in parentheses):\nMammal: Orangutan (n02480495), Gazelle (n02423022), Lion (n02129165)\nInsect: Ant (n02219486), Bee (n02206856), Grasshopper (n02226429)\nBird: Hummingbird (n01833805), Goose (n01855672), Vulture (n01616318)\nVegetable: Artichoke (n07718747), Cucumber (n07718472), Zucchini (n07716358)\nFruit: Orange (n07747607), Lemon (n07749582), Banana (n07753592)\nTool: Hammer (n03481172), Screwdriver (n04154565), Shovel (n04208210)\nVehicle: Minivan (n03770679), Trolley (n04335435), Taxi (n02930766)\nMusical Instrument: Drum (n03249569), Flute (n03372029), Harmonica (n03494278)\nKitchen Appliance: Refrigerator (n04070727), Toaster (n04442312), Coffee pot (n03063689)\nContainer: Bucket (n02909870), Mailbox (n03710193), Crate (n03127925)\n7.2 CONV. AUTOENCODER PREDICTIONS" }, { "heading": "7.3 AVERAGE PRECISION AND AVERAGE RECALL SCORES FOR THE TRAINED MODELS.", "text": "The scores were sample-wise averaged (i.e., averaged over samples) for the Basic + Superordinate CNN, and macro-averaged (i.e., averaged over categories) for the other models.\nModel (Learning Scheme, # classes, output dim.): Average Precision / Average Recall\nBasic labels (One-step, 30, 30): 0.90 / 0.90\nSuperordinate labels (One-step, 10, 10): 0.94 / 0.94\nBasic + Superordinate (One-step, 40, 40): 0.91 / 0.91\nBasic then Superordinate (Two-step, 10, 10): 0.95 / 0.95\nSuperordinate then Basic (Two-step, 30, 30): 0.88 / 0.88\nBasic FastText vectors (One-step, 30, 300): 0.47 / 0.50\nSuperordinate FastText vectors (One-step, 10, 300): 0.72 / 0.75" }, { "heading": "7.4 OTHER CLUSTERING QUALITY MEASURES", "text": "SC: Silhouette Coefficient; CH: Calinski-Harabasz Index; DB: Davies-Bouldin Index; BW: Between-to-within class dispersion in cosine distance. The arrows indicate the direction in which a metric value represents denser and better-separated clusterings.\nModel: by superordinate category (SC↑, CH↑, DB↓, BW↑); by basic category (SC↑, CH↑, DB↓, BW↑)\nConv. Autoencoder: -0.06, 166.08, 12.24, 0.11; -0.09, 70.19, 15.19, 0.15\nBasic labels: -0.01, 427.43, 6.45, 0.64; -0.02, 200.45, 7.35, 0.84\nSuperordinate labels: 0.00, 628.95, 5.25, 0.71; -0.02, 226.09, 11.04, 0.8\nBasic + Superordinate: -0.01, 534.81, 5.79, 0.61; -0.02, 231.97, 7.62, 0.78\nBasic then Superordinate: 0.00, 580.74, 5.61, 0.76; -0.02, 233.15, 8.62, 0.9\nSuperordinate then Basic: -0.01, 525.59, 5.53, 0.75; -0.01, 227.35, 7.47, 0.93\nBasic FastText vectors: -0.01, 1021.60, 5.20, 0.95; -0.04, 423.39, 8.75, 1.14\nSuperordinate FastText vectors: -0.01, 1324.88, 5.24, 1.11; -0.05, 445.75, 14.02, 1.18" }, { "heading": "7.5 T-SNE PLOTS FROM OUR TRAINED MODELS", "text": "(a) Conv. Autoencoder (b) Basic labels\n(c) Superordinate labels (d) Basic + Superordinate\n(a) Basic then Superordinate (b) Superordinate then Basic\n(c) Basic FastText vectors (d) Superordinate FastText vectors" } ]
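As referenced in Section 5.3, the following is a minimal sketch of the triplet prediction rule under the pairwise-cosine reading: the item least similar to the other two is the complement of the most similar pair. The function names are illustrative and not from the original experiments.

    import numpy as np

    def cosine_sim(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def odd_one_out(features):
        # features: three category representation vectors, one per triplet image.
        sims = {(i, j): cosine_sim(features[i], features[j])
                for i in range(3) for j in range(i + 1, 3)}
        i, j = max(sims, key=sims.get)      # most similar pair
        return ({0, 1, 2} - {i, j}).pop()   # remaining index is the model's choice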
2019
null
SP:164da37f418e9f6ce0470a329ed41e35f9ac1260
[ "The aim of this work is to make deep learning classifiers more interpretable by \"projecting\" each input sample into a small collection of prototype examples (with some weighting over those) and then basing the decision on a combination of the latent representations of the chosen prototypes. In this way, the chosen category can be justified as the input being similar to the selected prototypes. Additionally, this approach makes it possible to obtain a confidence score at test time.", "This paper presents a sample-based self-explaining method for image classification. The basic idea is adopt the attention mechanism to learn the relation between the latent representation of the query sample and training samples, and identify the training samples with higher similarity as the prototype. The classification decision is based on the label consistency between the identified prototypes (with the relation score in attention mechanism as the weight of different prototypes in determining the label agreement)" ]
We propose a novel inherently interpretable machine learning method that bases decisions on few relevant examples that we call prototypes. Our method, ProtoAttend, can be integrated into a wide range of neural network architectures, including pre-trained models. It utilizes an attention mechanism that relates the encoded representations to samples in order to determine prototypes. The resulting model outperforms the state of the art on three high-impact problems without sacrificing the accuracy of the original model: (1) it enables high-quality interpretability that outputs the samples most relevant to the decision-making (i.e. a sample-based interpretability method); (2) it achieves state-of-the-art confidence estimation by quantifying the mismatch across prototype labels; and (3) it obtains state-of-the-art performance in distribution mismatch detection. All this can be achieved with minimal additional computational cost at test time and a practically viable computational cost at training time.
[]
[ { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Jacob Bien", "Robert Tibshirani" ], "title": "Prototype selection for interpretable classification", "venue": null, "year": 2012 }, { "authors": [ "Chaofan Chen", "Oscar Li", "Alina Barnett", "Jonathan Su", "Cynthia Rudin" ], "title": "This looks like that: deep learning for interpretable image recognition", "venue": null, "year": 2018 }, { "authors": [ "Alexis Conneau", "Holger Schwenk", "Loı̈c Barrault", "Yann LeCun" ], "title": "Very deep convolutional networks for natural language processing", "venue": null, "year": 2016 }, { "authors": [ "Maurizio Corbetta", "Gordon L. Shulman" ], "title": "Control of goal-directed and stimulus-driven attention in the brain", "venue": "Nature Reviews Neuroscience,", "year": 2002 }, { "authors": [ "Yin Cui", "Feng Zhou", "Yuanqing Lin", "Serge J. Belongie" ], "title": "Fine-grained categorization and dataset bootstrapping using deep metric learning with humans in the loop", "venue": null, "year": 2016 }, { "authors": [ "Terrance DeVries", "Graham W. Taylor" ], "title": "Learning Confidence for Out-of-Distribution", "venue": "Detection in Neural Networks", "year": 2018 }, { "authors": [ "Dumitru Erhan", "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Visualizing higher-layer features of a deep network", "venue": "In Technical report,", "year": 2009 }, { "authors": [ "H A Haenssle", "C Fink", "R Schneiderbauer", "F Toberer", "T Buhl" ], "title": "Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists", "venue": "Annals of Oncology,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Dan Hendrycks", "Kevin Gimpel" ], "title": "A baseline for detecting misclassified and out-of-distribution examples in neural networks", "venue": null, "year": 2016 }, { "authors": [ "Sepp Hochreiter", "Jrgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Elad Hoffer", "Nir Ailon" ], "title": "Deep metric learning using triplet network", "venue": null, "year": 2014 }, { "authors": [ "Anthony F. Jerant", "Jennifer T. Johnson", "Catherine Demastes Sheridan", "Timothy J. Caffrey" ], "title": "Early detection and treatment of skin cancer", "venue": "Am Fam Physician,", "year": 2000 }, { "authors": [ "Heinrich Jiang", "Been Kim", "Maya R. Gupta" ], "title": "To trust or not to trust a classifier", "venue": "In NIPS,", "year": 2018 }, { "authors": [ "Alex Kendall", "Yarin Gal" ], "title": "What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision", "venue": null, "year": 2017 }, { "authors": [ "Alex Kendall", "Yarin Gal" ], "title": "What uncertainties do we need in bayesian deep learning for computer vision", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "B. Kim", "M. Wattenberg", "J. Gilmer", "C. Cai", "J. Wexler", "F. Viegas", "R. 
Sayres" ], "title": "Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)", "venue": null, "year": 2018 }, { "authors": [ "Wonsik Kim", "Bhavya Goyal", "Kunal Chawla", "Jungmin Lee", "Keunjoo Kwon" ], "title": "Attention-based ensemble for deep metric learning", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "Pang Wei Koh", "Percy Liang" ], "title": "Understanding Black-box Predictions via Influence Functions", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In NIPS", "year": 2017 }, { "authors": [ "Oscar Li", "Hao Liu", "Chaofan Chen", "Cynthia Rudin" ], "title": "Deep learning for case-based reasoning through prototypes: A neural network that explains its predictions", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "André F.T. Martins", "Ramón Fernández Astudillo" ], "title": "From softmax to sparsemax: A sparse model of attention and multi-label classification", "venue": "In MLR,", "year": 2016 }, { "authors": [ "G.A. Miller" ], "title": "The magical number seven, plus or minus 2: Some limits on our capacity for processing information", "venue": "Psychological review, 63:81–97,", "year": 1956 }, { "authors": [ "Nicolas Papernot", "Patrick D. McDaniel" ], "title": "Deep k-nearest neighbors: Towards confident, interpretable and robust deep learning", "venue": null, "year": 2018 }, { "authors": [ "Mengye Ren", "Renjie Liao", "Ethan Fetaya", "Richard S. Zemel" ], "title": "Incremental few-shot learning with attention attractor networks", "venue": null, "year": 2018 }, { "authors": [ "Sara Sabour", "Nicholas Frosst", "Geoffrey E. Hinton" ], "title": "Dynamic routing between capsules", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Devendra Singh Sachan", "Petuum" ], "title": "Revisiting lstm networks for semi-supervised text classification via mixed objective function", "venue": "In KDD,", "year": 2018 }, { "authors": [ "Karen Simonyan", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Deep inside convolutional networks: Visualising image classification models and saliency maps", "venue": null, "year": 2013 }, { "authors": [ "Chawin Sitawarin", "David A. Wagner" ], "title": "On the robustness of deep k-nearest neighbors", "venue": null, "year": 2019 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard S. Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Kihyuk Sohn" ], "title": "Improved deep metric learning with multi-class n-pair loss objective", "venue": "In NIPS", "year": 2016 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones" ], "title": "Attention is all you need", "venue": null, "year": 2017 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy P. 
Lillicrap", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In NIPS,", "year": 2016 }, { "authors": [ "Li Wan", "Matthew Zeiler", "Sixin Zhang", "Yann Le Cun", "Rob Fergus" ], "title": "Regularization of neural networks using dropconnect", "venue": "In ICML,", "year": 2013 }, { "authors": [ "Chih-Kuan Yeh", "Joon Sik Kim", "Ian En-Hsu Yen", "Pradeep Ravikumar" ], "title": "Representer point selection for explaining deep neural networks", "venue": null, "year": 2018 }, { "authors": [ "Matthew D. Zeiler", "Rob Fergus" ], "title": "Visualizing and understanding convolutional networks", "venue": null, "year": 2013 }, { "authors": [ "Quanshi Zhang", "Ying Nian Wu", "Song Chun Zhu" ], "title": "Interpretable convolutional neural networks", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks have been pushing the frontiers of artificial intelligence (AI) by yielding excellent performance in numerous tasks, from understanding images (He et al., 2016) to text (Conneau et al., 2016). Yet, high performance is not always a sufficient factor - as some realworld deployment scenarios might necessitate that an ideal AI system is ‘interpretable’, such that it builds trust by explaining rationales behind decisions, allow detection of common failure cases and biases, and refrains from making decisions without sufficient confidence. In their conventional form, deep neural networks are considered as black-box models – they are controlled by complex nonlinear interactions between many parameters that are difficult to understand. There are numerous approaches, (Kim et al., 2018; Erhan et al., 2009; Zeiler & Fergus, 2013; Simonyan et al., 2013), that bring post-hoc explainability of decisions to already-trained models. Yet, these have the fundamental limitation that the models are not designed for interpretability. There are also approaches on the redesign of neural networks towards making them inherently-interpretable, as in this paper. Some notable ones include sequential attention (Bahdanau et al., 2015), capsule networks (Sabour et al., 2017), and interpretable convolutional filters (Zhang et al., 2018).\nWe focus on inherently-interpretable deep neural network modeling with the foundations of prototypical learning. Prototypical learning decomposes decision making into known samples (see Fig. 1), referred here as prototypes. We base our method on the principle that prototypes should constitute a minimal subset of samples with high interpretable value that can serve as a distillation or condensed view of a dataset (Bien & Tibshirani, 2012). Given that the number of objects a human can interpret is limited (Miller, 1956), outputting few prototypes can be an effective approach for humans to understand the AI model behavior. In addition to such interpretability, prototypical learning:\n(1) provides an efficient confidence metric by measuring mismatches in prototype labels, allowing performance to be improved by refraining from making predictions in the absence of sufficient confidence, (2) helps detect deviations in the test distribution by measuring mismatches in prototype labels that represent the support of the training dataset, and (3) enables performance in the high label noise regime to be improved by controlling the number of selected prototypes. Given these motivations, prototypes should be controllable in number, and should be perceptually relevant to the input in explaining the decision making task. Prototype selection in its naive form is computationally expensive and perceptually challenging (Bien & Tibshirani, 2012). We design ProtoAttend to address this problem in an efficient way. Our contributions can be summarized as follows:\n1. We present principles that can be guiding for the design of inherently-interpretable models based on sample-based interpretability. 2. We propose a novel method, ProtoAttend, for selecting input-dependent prototypes based on an attention mechanism between the input and prototype candidates. ProtoAttend is model-agnostic and can even be integrated with pre-trained models. 3. ProtoAttend allows interpreting the contribution of each prototype via the attention outputs. 4. 
For a ‘condensed view’, we demonstrate that sparsity in weights can be efficiently imposed via\nthe choice of the attention normalization and additional regularization. 5. On image, text and tabular data, we demonstrate the four key benefits of ProtoAttend: interpretabil-\nity, confidence control, diagnosis of distribution mismatch, and robustness against label noise. ProtoAttend yields superior quality for sample-based interpretability, better-calibrated confidence scoring, and more sensitive out-of-distribution detection compared to alternative approaches. 6. ProtoAttend enables all these benefits via the same architecture and method, while maintaining comparable overall accuracy." }, { "heading": "2 RELATED WORK", "text": "Prototypical learning: The principles of ProtoAttend are inspired by (Bien & Tibshirani, 2012). They formulate prototype selection as an integer program and solve it using a greedy approach with linear program relaxation. It seems unclear whether such approaches can be efficiently adopted to deep learning. (Chen et al., 2018) and (Li et al., 2018) introduce a prototype layer for interpretability by replacing the conventional inner product with a distance computation for perceptual similarity. In contrast, our method uses an attention mechanism to quantify perceptual similarity and can choose input-dependent prototypes from a large-scale candidate database. (Yeh et al., 2018) decomposes the prediction into a linear combination of activations of training points for interpretability using representer values. The linear decomposition idea also exists in ProtoAttend, but the weights are learned via an attention mechanism and sparsity is encouraged in the decomposition. In (Koh & Liang, 2017), the training points that are the most responsible for a given prediction are identified using influence functions via oracle access to gradients and Hessian-vector products.\nMetric learning: Metric learning aims to find an embedding representation of the data where similar data points are close and dissimilar data pointers are far from each other. ProtoAttend is motivated by efficient learning of such an embedding space which can be used to decompose decisions. Metric learning for deep neural networks is typically based on modifications to the objective function, such as using triplet loss and N-pair loss (Sohn, 2016; Cui et al., 2016; Hoffer & Ailon, 2014). These yield perceptually meaningful embedding spaces yet typically require a large subset of nearest neighbors to avoid degradation in performance (Cui et al., 2016). (Kim et al., 2018) proposes a deep metric learning framework which employs an attention-based ensemble with a divergence loss so that each learner can attend to different parts of the object. Our method has metric learning capabilities like relating similar data points, but also performs well on the ultimate supervised learning task.\nAttention-based few-shot learning: Some of our inspirations are based on recent advances in attention-based few-shot learning. In (Vinyals et al., 2016), an attention mechanism is used to relate an example with candidate examples from a support set using a weighted nearest-neighbor classifier applied within an embedding space. In (Ren et al., 2018), incremental few-shot learning is implemented using an attention attractor network on the encoded and support sets. In (Snell et al., 2017), a non-linear mapping is learned to determine the prototype of a class as the mean of its support set in the embedding space. 
During training, the support set is randomly sampled to mimic the inference task. Overall, the attention mechanism in our method follows related principles but fundamentally differs in that few-shot learning aims for generalization to unseen classes whereas the goal of our method is robust and interpretable learning for seen classes.\nUncertainty and confidence estimation: ProtoAttend takes a novel perspective on the perennial problem of quantifying how much deep neural networks’ predictions can be trusted. Common approaches are based on using the scores from the prediction model, such as the probabilities from the softmax layer of a neural network, yet it has been shown that the raw confidence values are typically poorly calibrated (Guo et al., 2017). Ensemble of models (Lakshminarayanan et al., 2017) is one of the simplest and most efficient approaches, but significantly increases complexity and decreased interpretability. In (Papernot & McDaniel, 2018), the intermediate representations of the network are used to define a distance metric, and a confidence metric is proposed based on the conformity of the neighbors. (Jiang et al., 2018), proposes a confidence metric based on the agreement between the classifier and a modified nearest-neighbor classifier on the test sample. In (DeVries & Taylor, 2018), direct inference of confidence output is considered with a modified loss. Another direction of uncertainty and confidence estimation is Bayesian neural networks that return a distribution over the outputs (Kendall & Gal, 2017b) (Mullachery et al., 2018) (Kendall & Gal, 2017a)." }, { "heading": "3 PROTOATTEND: ATTENTION-BASED PROTOTYPICAL LEARNING", "text": "Consider a training set with samples, T = {xi, yi}. Conventional supervised learning aims to learn a model s(xi;S) that minimizes a predefined loss 1/B · ∑B i=1 L(yi, ŷi = s(xi;S))\n1 at each iteration, where B is the batch size for training. Our goal is to impose that decision making should be based on only a small number of training examples, i.e. prototypes, such that their linear superposition in an embedding space can yield the overall decision and the superposition weights correspond to their importance. Towards this goal, we propose defining a solution to prototypical learning with the following six principles:\ni. vi = f(xi; θ) encodes all relevant information of xi for the final decision. f() considers the global distribution of the samples, i.e. learns from all {xi, yi}. Although all the information in training dataset is embodied in the weights of the encoder2, we construct the learning method in such a way that decision is dominated by the prototypes with high weights.\nii. From the encoded information, we can find a decision function so that the mapping g(vi; η) is close to the ground truth yi, in a consistent way with conventional supervised learning.\niii. Given candidates x(c)j to select the prototypes from, there exists weights pi,j (where pi,j ≥ 0 and ∑D j=1 pi,j = 1), such that the decision g( ∑D j=1 pi,jv (c) j ; η) (where v (c) j = f(x (c) j ; θ))\nis close to the ground truth yi. iv. When the linear combination ∑D j=1 pi,jv (c) j is considered, prototypes with higher weights\npi,j have higher contribution in the decision g( ∑D j=1 pi,jv (c) j ; η).\nv. The weights should be sparse – only a controllable amount of weights pi,j should be nonzero. Ideally, there exists an efficient mechanism for outputting pi,j to control the sparsity without significantly affecting performance. vi. 
The weights pi,j depend on the relation between input and the candidate samples, pi,j = r(xi,x (c) j ;Γ), based on their perceptual relation for decision making. We do not introduce\nany heuristic relatedness metric such as distances in the representation space, but we allow the model to learn the relation function that helps the overall performance.\nLearning involves optimization of the parameters θ,Γ, η of the corresponding functions. If the proposed principles (such as reasoning from the linear combination of embeddings or assigning relevance to the weights) are not imposed during training but only at inference, a high performance cannot be obtained due to the train-test mismatch, as the intermediate representations can be learned in an arbitrary way without any necessities to satisfy them.3 The subsequent section presents ProtoAttend and training procedure to implement it." }, { "heading": "3.1 NETWORK ARCHITECTURE AND TRAINING", "text": "The principles above are conditioned on efficient learning of an encoding function to encode the relevant information for decision making, a relation function to determine the prototype weights, and\n1S represents the trainable parameters for s(;S) and is sometimes not show for notation convenience. 2Training of f() may also involve initializing with pre-trained models or transfer learning. 3For example, commonly-used distance metrics in the representation spaces fail at determining perceptual\nrelevance between samples when the model is trained in a vanilla way (Sitawarin & Wagner, 2019).\na final decision making block to return the output. Conventional supervised learning comprises the encoding and decision blocks. On the other hand, it is challenging to design a learning method with a relation function with a reasonable complexity. To this end, we adapt the idea of attention (Corbetta & Shulman, 2002; Vaswani et al., 2017), where the model focuses on an adaptive small portion of input while making the decision. Different from conventional employment of attention in sequence or visual learning, we propose to use attention at sample level, such that the attention mechanism is used to determine the prototype weights by relating the input and the candidate samples via alignment of their keys and queries. Fig. 2 shows the proposed architecture for training and inference. The three main blocks are described below:\nEncoder: A trainable encoder is employed to transform B input samples (note that B may be 1 at inference) and D samples from the database of prototype candidates (note that D may be as large as the entire training dataset at inference) into keys, queries and values. The encoder is shared and jointly updated for the input samples and prototype candidate database, to learn a common representation space for the values. The encoder architecture can be based on any trainable discriminative feature mapping function, e.g. ResNet (He et al., 2016) for images, with the modification of generating three types of embeddings. For mapping of the last encoder layer to key, query and value embeddings, we simply use a single fully-connected layer with a nonlinearity, separately for each.4 For input samples, V ∈ <B×dout and Q ∈ <B×datt denote the values and queries, and for candidate database samples K(c) ∈ <D×datt and V(c) ∈ <D×dout denote the keys and values. 
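As an illustration of the mapping just described, a minimal PyTorch-style sketch is given below; the module and dimension names are placeholders, not from the original implementation. The same heads are applied to both the input batch and the candidate database through the shared encoder.

    import torch.nn as nn

    class ProtoHeads(nn.Module):
        # Maps the last encoder layer output to key, query and value embeddings,
        # each via a single fully-connected layer with a ReLU nonlinearity.
        def __init__(self, d_enc, d_att=16, d_out=64):
            super().__init__()
            self.key = nn.Sequential(nn.Linear(d_enc, d_att), nn.ReLU())
            self.query = nn.Sequential(nn.Linear(d_enc, d_att), nn.ReLU())
            self.value = nn.Sequential(nn.Linear(d_enc, d_out), nn.ReLU())

        def forward(self, h):  # h: (N, d_enc) encoder outputs
            return self.key(h), self.query(h), self.value(h)

    # Shared encoder f(.; theta) for inputs (B samples) and candidates (D samples):
    #   _, Q, V     = heads(encoder(x_batch))        # queries/values for inputs
    #   K_c, _, V_c = heads(encoder(x_candidates))   # keys/values for candidates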
For keys and queries, we use separate representations because the system is not symmetric: there are many candidate samples, and the model may prefer to learn keys that arrange the representation space such that it is meaningful when their inner products with a single query are considered.\nRelational attention: The relational attention yields the weight between the ith sample and the jth candidate, p_{i,j}, via alignment of the corresponding key and query in dot-product attention form5:\np_{i,j} = n\left( \mathbf{K}^{(c)}_j \mathbf{Q}_i^\top / \sqrt{d_{att}} \right), (1)\nwhere n() is a normalization function satisfying p_{i,j} \geq 0 and \sum_{j=1}^{D} p_{i,j} = 1, for which we consider softmax and sparsemax (Martins & Astudillo, 2016)6. The choice of the normalization function is an efficient mechanism to control the sparsity of the prototype weights, as demonstrated in the experiments. Note that the relational attention mechanism does not introduce any extra trainable parameters.\nDecision making: The final decision block simply consists of a linear mapping from a convex combination of values that results in the output y_i. Consider the convex combination of value embeddings, parameterized by \alpha:\n\hat{y}_i(\alpha) = g\left( (1-\alpha)\mathbf{v}_i + \alpha \sum_{j=1}^{D} p_{i,j}\mathbf{v}^{(c)}_j \right). (2)\n4There are other viable options for the mapping, but we restrict it to a single layer to minimize the additional number of trainable parameters, which becomes negligible in most cases.\n5We use \mathbf{A}_i to denote the ith row of \mathbf{A}.\n6Sparsemax encourages sparsity by computing the Euclidean projection onto the probabilistic simplex.\nFor \alpha = 0, L(y_i, \hat{y}_i(0)) is the conventional supervised learning loss (ignoring the relational attention mechanism), which can only impose principles (i) and (ii), but not principles (iii)-(vi). A high accuracy for \hat{y}_i(0) merely indicates that the value embedding space represents each input sample accurately. For \alpha = 1, L(y_i, \hat{y}_i(1)) encourages principles (i) and (iii)-(iv), but not principles (ii) and (vi).7 A high accuracy for \hat{y}_i(1) indicates that the linear combination of value embeddings accurately maps to the decision. For (vi), we propose that there should be a similar output mapping for the input and the prototypes, which we encourage by requiring high accuracy for both \hat{y}_i(0) and \hat{y}_i(1), either via a loss term that mixes L(y_i, \hat{y}_i(0)) and L(y_i, \hat{y}_i(1)) or via guidance with an intermediate term such as \hat{y}_i(0.5). Lastly, when \alpha \leq 0.5, we obtain the condition that the input sample itself has the largest contribution in the linear combination. Intuitively, the sample itself should be more relevant to the output than other samples, so principles (iii) and (iv) can be encouraged. We propose and compare different training objective functions in Table 1. We observe that the last four are all viable options as the training objective, with similar performance. We choose the last one for the rest of the experiments, as slightly better prototypes are observed qualitatively in some cases (see Sect. 5.2 for further discussion).\nTo control the sparsity of the weights (beyond the choice of the attention operation), we also propose a sparsity regularization term with a coefficient \lambda_{sparse} in the form of an entropy, L_{sparse}(p) = -1/B \sum_{i=1}^{B} \sum_{j=1}^{D} p_{i,j} \log(p_{i,j} + \epsilon), where \epsilon is a small constant for numerical stability. L_{sparse}(p) is minimized when p has only one non-zero value." }, { "heading": "3.2 CONFIDENCE SCORING USING PROTOTYPES", "text": "ProtoAttend provides a linear decomposition (via value embeddings) of the decision into prototypes that have known labels. 
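To make Eqs. (1)-(2) and the label-agreement score formalized in Eq. (3) below concrete, a minimal NumPy sketch follows. Softmax normalization is shown for n(), with sparsemax as the drop-in alternative; g() is taken to be the linear decision layer (W, b), and all names are illustrative.

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def relational_attention(Q, K_c, d_att):
        # Eq. (1): prototype weights from query/key alignment.
        return softmax(Q @ K_c.T / np.sqrt(d_att))   # shape (B, D)

    def prototype_logits(P, V_c, W, b):
        # Eq. (2) with alpha = 1: linear decision g(.) applied to the convex
        # combination of candidate value embeddings.
        return (P @ V_c) @ W + b                     # shape (B, n_classes)

    def confidence(P, y_c, y_hat):
        # Eq. (3): total prototype weight whose label matches the prediction.
        return (P * (y_c[None, :] == y_hat[:, None])).sum(axis=1)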
Ideally, the labels of the prototypes should all be the same as the label of the input.7 When prototypes with high weights belong to the same class, the model shall be more confident and a correct classification result is expected, whereas in cases of disagreement between prototype labels, the model shall be less confident and the likelihood of a wrong prediction is higher. Motivated by separating correct vs. incorrect decisions via its value, we propose a confidence score based on the agreement between the prototypes:\nC_i = \sum_{j=1}^{D} p_{i,j} \cdot \mathbb{I}(y^{(c)}_j = \hat{y}_i), (3)\nwhere \mathbb{I}() is the indicator function. Table 1 shows the significant difference in the average confidence metric between correct vs. incorrect classification cases on the test dataset, as desired. In Fig. 3, the impact of confidence on accuracy is further analyzed with a reliability diagram as in (Papernot & McDaniel, 2018). When test samples are binned according to their confidence, the bins with higher confidence yield much higher accuracy. Only a small number of samples fall in the bins with lower confidence, and those tend to be the incorrect classification cases. In Section 4.4, the efficacy of the confidence score in separating correct vs. incorrect classifications is tested in the confidence-controlled prediction setting, demonstrating how much the prediction accuracy can be improved by refraining from predicting on a small number of low-confidence samples at test time.\n7For example, by simply assigning non-zero weights to another predetermined class, a prototypical learning method could obtain perfect accuracy, but the assignment of the predetermined class would be arbitrary.\nTo further encourage confidence during training, we also consider a regularization term L_{conf}(p) = -1/B \sum_{i=1}^{B} \sum_{j=1}^{D} p_{i,j} \cdot \mathbb{I}(y^{(c)}_j = y_i) with a coefficient \lambda_{conf}. L_{conf} is minimized when all prototypes with p_{i,j} > 0 are from the same ground-truth class as the output y_i.8" }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 SETUP", "text": "We demonstrate the results of ProtoAttend for image, text and tabular data classification problems with different encoder architectures (see Supplementary Material for details). Outputs of the encoders are mapped to queries, keys and values using a fully-connected layer followed by ReLU. For values, layer normalization (Lei Ba et al., 2016) is employed for more stable training. A fully-connected layer is used in the decision making block, yielding logits for determining the estimated class. Softmax cross entropy is used as the loss L(). The Adam optimization algorithm is employed (Kingma & Ba, 2014) with exponential learning rate decay (with parameters optimized on a validation set). For image encoding, unless specified, we use the standard ResNet model (He et al., 2016). For text encoding, we use the very deep convolutional neural network (VDCNN) model (Conneau et al., 2016), inputting sequences of raw characters. For tabular data encoding, we use an LSTM model (Hochreiter & Schmidhuber, 1997), which inputs the feature embeddings at every timestep. See Supplementary Material for implementation details, additional results and discussions." }, { "heading": "4.2 SPARSE EXPLANATIONS OF DECISIONS", "text": "We first demonstrate that our inherently-interpretable model design does not cause significant degradation in performance. Table 2 shows the accuracy and the median number of prototypes required to add up to a particular portion of the decision9 for different prototypical learning cases. 
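These prototype counts follow the counting rule of footnote 9: sort the weights in decreasing order and accumulate until the target portion of the decision is covered. A minimal sketch (illustrative helper, not from the original code):

    import numpy as np

    def prototypes_for_portion(weights, portion):
        # Smallest number of largest-weight prototypes summing to `portion`.
        w = np.sort(np.asarray(weights, dtype=float))[::-1]
        return int(np.searchsorted(np.cumsum(w), portion) + 1)

    # Worked example from footnote 9:
    #   w = [0.1, 0.15, 0.05, 0.25, 0.1, 0.05, 0.28, 0.02]
    #   prototypes_for_portion(w, 0.50) -> 2
    #   prototypes_for_portion(w, 0.90) -> 6 ; prototypes_for_portion(w, 0.95) -> 7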
In all cases, very small accuracy gap is observed with the baseline encoder that is trained in conventional supervised learning way. The attention normalization function and sparsity regularization are efficient mechanisms to control the sparsity – the number of prototypes required is much lower with sparsemax attention compared to softmax attention and can be further reduced with sparsity regularization (see Supplementary Material for details). With a small decrease in performance, the number of prototypes can be reduced to just a handful.10 There is difference between datasets, as intuitively expected from the discrepancy in the degree of similarity between the intra-class samples.\n8Note that the gradients of this regularization term with respect to pi,j is either 0 or 1 and it is often insufficient to train the model itself from scratch. But it is observed to provide further improvements in some cases.\n9E.g. if the prototype weights are [0.1, 0.15, 0.05, 0.25, 0.1, 0.05, 0.28, 0.02], then 2 prototypes are required for 50% of the decision, 6 for 90% and 7 for 95%.\n10We observe that excessively high sparsity (to yield 1-2 prototypes in most cases) may sometimes decrease the quality of prototypes due to overfitting to discriminative features that are less perceptually meaningful.\nFigs. 4, 5 and 6 exemplify prototypes for image, text and tabular data. In general, perceptually-similar samples are chosen as the prototypes with the largest weights. We also compare the relevant samples found by ProtoAttend with the methods of representer point selection (Yeh et al., 2018) and influence functions (Koh & Liang, 2017) (see Supplementary Material for details) on Animals with Attributes dataset. As shown in Fig. 7, our method finds qualitatively more relevant samples. This case also exemplifies the potential of our method for integration into pre-trained models by addition of simple layers for key, query and value generation." }, { "heading": "4.3 ROBUSTNESS TO LABEL NOISE", "text": "As prototypical learning with sparsemax attention aims to extract decision-making information from a small subset of training samples, it can be used to improve performance when the training dataset\ncontains noisy labels (see Table 3). The optimal value11 of λsparse increases with higher noisy label ratios, underlining the increasing importance of sparse learning.\n4.4 CONFIDENCE-CONTROLLED PREDICTION\nBy varying the threshold for the confidence metric, a trade-off can be obtained for what ratio of the test samples that the model makes a prediction for vs. the overall accuracy it obtains on the samples above that threshold.12 Figs. 8(a) and 8(b) demonstrate this trade-off and compare it to alternative methods. The sharper slope of the plots show that our method is superior to dkNN (Papernot & McDaniel, 2018) and trust score (Jiang et al., 2018), the methods based on quantifying the mismatch with nearest-neighbor samples, in terms of finding related samples. Although the baseline accuracy is higher with 4 ensemble networks obtained via deep ensemble (Lakshminarayanan et al., 2017), our method utilizes a single network and the additional accuracy gains by refraining from uncertain predictions is similar to our approach as shown by the similar slopes of the curves.\nOverall, the baseline accuracy can be significantly improved by making less predictions. Compared to the state of the art models, our canonical method with simple and small models shows similar accuracy by making slightly fewer predictions – e.g. 
for MNIST, (Wan et al., 2013) achieves 0.21% error rate, that is obtained by our method refraining from only 0.45% of predictions using ResNet-32 and for DBpedia, (Sachan & Petuum, 2018) achieves 0.91% error, that is obtained by our method refraining from 3% of predictions using 9-layer VDCNN. In general, the smaller the number of prototypes, the smaller the trade-off space. Thus, softmax attention (which normally results in more prototypes) is better suited for confidence-controlled prediction compared to sparsemax (see Supplementary Material for more comparisons)." }, { "heading": "4.5 OUT-OF-DISTRIBUTION SAMPLES", "text": "Well-calibrated confidence scores at inference can be used to detect deviations from the training dataset. As the test distribution deviates from the training distribution, prototype weights tend to mismatch more and yield lower confidence scores. Fig. 9 (a) shows the ratio of samples above a certain confidence level as the test dataset deviates. Rotations deviate the distribution of test images from the training images, and cause significant degradation in confidence scores, as well as the overall\n11For a fair comparison, we re-optimize the learning rate parameters on a separate validation set. 12Note that this trade-off is often more meaningful to consider rather than the metrics based on the actual value of confidence score itself, as methods may differ in how they define the confidence metric, and thus yield very different ranges and distributions for it.\naccuracy. On the other hand, using test image from a different dataset, degrade them even further. Next, Fig. 9 (b) shows quantification of out-of-distribution detection with prototypical learning, using the method from (Hendrycks & Gimpel, 2016). ProtoAttend yields an AUC of 0.838, being on par with the-state of the art approaches (Hendrycks et al.)." }, { "heading": "5 COMPUTATIONAL COST", "text": "ProtoAttend requires only a very small increase in the number of learning parameters (merely two extra small matrices for the fully-connected layers to obtain queries and keys). However, it does require a longer training time and has higher memory requirements to process the candidate database. At inference, keys and values for the candidate database can be computed only once and integrated into the model. Thus, the overhead merely becomes the computation of attention outputs (e.g. for CIFAR-10 model, the attention overhead at inference is less than 0.6 MFLOPs, orders of magnitude lower than the computational complexity of a ResNet model). During training on the other hand, both forward and backward propagation steps for the encoder need to be computed for all candidate samples and the total time is higher (e.g. 4.45 times slower to train until convergence for CIFAR-10 compared to the conventional supervised learning). The size of the candidate database is limited by the memory of the processor, so in practice we sample different candidate databases randomly from the training dataset at each iteration. For faster training, data and model parallelism approaches are straightforward to implement – e.g., different processors can focus on different samples, or they can focus on different parts of the convolution or inner product operations. Further computationally-efficient approaches may involve less frequent updates for candidate queries and values." 
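The inference-time saving described above can be sketched as follows (illustrative names: encode, key_head and value_head stand for the trained encoder and its heads, and are not from the original code). Candidate keys and values are computed once, so each query costs one encoder pass plus a single (1 x D) attention product:

    import numpy as np

    def softmax_1d(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    # One-time precomputation over the candidate database:
    #   K_c = np.stack([key_head(encode(x)) for x in candidates])    # (D, d_att)
    #   V_c = np.stack([value_head(encode(x)) for x in candidates])  # (D, d_out)

    def infer(q, K_c, V_c, d_att, g):
        # q: (d_att,) query of a single test sample (one encoder forward pass).
        p = softmax_1d(q @ K_c.T / np.sqrt(d_att))   # prototype weights, (D,)
        return g(p @ V_c), p                         # prediction and explanation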
}, { "heading": "6 CONCLUSIONS", "text": "We propose an attention-based prototypical learning method, ProtoAttend, and demonstrate its usefulness for a wide range of problems on image, text and tabular data. By adding a relational attention mechanism to an encoder, prototypical learning enables novel capabilities. With sparsemax attention, it can base the learning on a few relevant samples that can be returned at inference for interpretability, and can also improve robustness to label noise. With softmax attention, it enables confidence-controlled prediction that can outperform state of the art results with simple architectures by simply making slightly fewer predictions, as well as enables detecting deviations from the training data. All these capabilities are achieved without sacrificing overall accuracy of the base model." }, { "heading": "A PSEUDO CODE FOR TRAINING", "text": "Algorithm 1 Pseudo-code of ProtoAttend training 1: Inputs: Training dataset T , encoder model h(x; θ), classifier model h(v;φ), normalization\nfunction n, input batch size B, candidate batch size D, attention dimension datt, α values to be used for loss: (0, 0.5, 1), task-specific loss function L, ADAM learning rate r, and exponential decay rate parameters β1 and β2\n2: Initialize Trainable encoder parameters θ and classifier layer parameters φ 3: while until convergence do 4: Sample a mini-batch from the training dataset for the inputs: (xi, yi)Bi=1 ∼ T 5: Sample a mini-batch from the training dataset for the prototypes: (x(c)j , y (c) j ) D j=1 ∼ T 6: for i = 1, ..., B do 7: Obtain queries and values for the input:\nQi,Vi ← h(x; θ)\n8: for j = 1, ..., D do 9: Obtain keys and values for the prototypes:\nK (c) j ,V (c) j ← h(x (c); θ)\n10: for i = 1, ..., B do 11: for j = 1, ..., D do 12: Estimate the relational attention coefficients:\npi,j ← n ( K (c) j Qi T / √ datt ) 13: Obtain the predictions 14: for i = 1, ..., B do\nŷi(α = 0)← g (vi;φ)\nŷi(α = 0.5)← g ( 0.5vi + 0.5 ∑D j=1 pi,jv (c) j ;φ )\nŷi(α = 1)← g (∑D\nj=1 pi,jv\n(c) j ;φ ) 15: Estimate the total loss function\nLbatch ← 1/B · ∑B\ni=1 L (yi, ŷi(0)) + L (yi, ŷi(1)) + L (yi, ŷi(0.5))\n16: Update the encoder model and the classifier layer\nφ← φ− ADAM(∇φLbatch, r, β1, β2)\nθ ← θ − ADAM(∇θLbatch, r, β1,2 )" }, { "heading": "B RELATION TO INFLUENCE FUNCTIONS", "text": "Here we clarify the relationship of our work to influence functions from a theoretical perspective. Influence functions quantify how a model’s predictions would change if we did not have a particular training point. For the purpose of sample-based explainability, (Koh & Liang, 2017) proposes that the relation between an input sample xi and the candidate samples13 x (c) j can be obtained by quantifying\n13All training samples are used as candidate samples in (Koh & Liang, 2017).\nthe influence of upweighting (x(c)j , y (c) j ) on the loss at a query point (xi, yi):\nIi,j = −∇(θ,φ)L(y (c) j ) T (H−1 (θ̂,φ̂) )T∇(θ,φ)L(yi), (4)\nwhere H(θ̂,φ̂) is the Hessian and is positive definite by assumption. Let’s consider the singular value decomposition (H−1\n(θ̂,φ̂) ) = Ξ ·Σ ·ΨT and also define the function k(x, y) = ∇(θ,φ)L(y). Then, Eq.\n4 can be written as:\nIi,j = (ΨT · k(x(c)j , y (c) j )) T · (−Σ ·ΞT ) · k(xi, yi), (5)\nWe can observe that Ii,j is in the form of an inner product between two functions applied on (xi, yi) and (x(c)j , y (c) j ). These two functions are composed of a shared (and potentially complex) function, followed by a linear mapping with non-shared parameters. 
This expression is indeed in a similar form with the argument of the normalization function for attention in Eq. 1, where the queries and keys are obtained by a shared encoder except the last layer. The only notable difference is that ProtoAttend encoder functions merely input xi and x (c) j , not the ground truth labels. Instead of relying on ground truth labels or complex Hessian estimations, ProtoAttend infers the encoded representations for the queries and keys directly in a feedforward way, by learning from the entire training dataset. Note that ProtoAttend does not use a separate encoder for values, and obtains a high performance by sharing the vast majority of the parameters while obtaining the keys, queries and values.\nIn (Koh & Liang, 2017), Influence Functions are also related to nearest neighbor search-based relevant point determination approaches, for sample-based explainability. When Euclidean space is considered for distances, with the assumption that all points have the same norm, the inner product between the representations correspond to their similarity. This scenario is the special case of ProtoAttend when we use the same representation for keys, queries and values, and when we train with only α = 0 loss term although we would use pi,j for similarity determination. As studied in (Koh & Liang, 2017), nearest neighbor-based methods are far less accurate in capturing the effect of model training, compared to Influence Functions. Our empirical results in Figs. 7 and 13 show superior performance of ProtoAttend compared to Influence Functions in finding perceptually more similar samples.\nOverall, unlike Influence Functions, ProtoAttend modifies the model training for the desired goals, that fundamentally yields more degrees of freedom to optimize while achieving superior prototype learning quality effectively." }, { "heading": "C TRAINING DETAILS", "text": "Different candidate databases are sampled randomly from the training dataset at each iteration. Training database size is chosen to fit the model to the memory of a single GPU. D at inference is chosen sufficiently large to obtain high accuracy. Table 4 shows the database size D for the datasets used in the experiments. The size of the prototype candidate database should be sufficiently large such that the model can attend to reasonable prototypes with high coefficients (separately for each input). With appropriate sparsity mechanisms, we normally only end up with a few prototypes with large coefficients. Indeed, most of the coefficients would be zero with sparsemax activation and sparsity regularization.\nC.1 IMAGE DATA\nC.1.1 MNIST DATASET\nWe apply random cropping after padding each side by 2 pixels and per image standardization. The base encoder uses a standard 32 layer ResNet architecture. The number of filters is initially 16 and doubled every 5 blocks. In each block, two 3× 3 convolutional layers are used to transform the input, and the transformed output is added to the input after a 1 × 1 convolution. 4× downsampling is applied by choosing the stride as 2 after 5th and 10th blocks. Each convolution is followed by batch normalization and ReLU nonlinearity. After the last convolution, 7× 7 average pooling is applied. The output is followed by a fully-connected layer of 256 units and ReLU nonlinearity, followed by layer normalization (Lei Ba et al., 2016). Keys and queries are mapped from the output using a fully-connected layer followed by ReLU nonlinearity, where the attention size is datt=16. 
Values are mapped from the output using a fully-connected layer of dout=64 units and ReLU nonlinearity, followed by layer normalization. For the baseline encoder, the initial learning rate is chosen as 0.002 and exponential decay is applied with a rate of 0.9 applied every 6k iterations. The model is trained for 84k iterations. For prototypical learning model with softmax attention, the initial learning rate is chosen as 0.002 and exponential decay is applied with a rate of 0.8 applied every 8k iterations. The model is trained for 228k iterations. For prototypical learning model with sparsemax attention, the initial learning rate is chosen as 0.001 and exponential decay is applied with a rate of 0.93 applied every 6k iterations. The model is trained for 228k iterations. All models use a batch size of 128 and gradient clipping above 20.\nC.1.2 FASHION-MNIST DATASET\nWe apply random cropping after padding each side by 2 pixels, random horizontal flipping, and per image standardization. The base encoder uses a standard 32 layer ResNet architecture, similar to our MNIST experiments. For the baseline encoder, the initial learning rate is chosen as 0.0015 and exponential decay is applied with a rate of 0.9 applied every 10k iterations. The model is trained for 332k iterations. For prototypical learning with softmax attention, the initial learning rate is chosen as 0.0007 and exponential decay is applied with a rate of 0.92 applied every 8k iterations. The model is trained for 450k iterations. For prototypical learning with sparsemax attention, the initial learning rate is chosen as 0.001 and exponential decay is applied with a rate of 0.9 applied every 8k iterations. The model is trained for 392k iterations. For prototypical learning with sparsemax attention and sparsity regularization (with λsparse = 0.0003), the initial learning rate is chosen as 0.001 and exponential decay is applied with a rate of 0.94 applied every 8k iterations. λconf = 0.1 is chosen when confidence regularization is applied. The model is trained for 440k iterations. All models use a batch size of 128 and gradient clipping above 20.\nC.1.3 CIFAR-10 DATASET\nWe apply random cropping after padding each side by 3 pixels, random horizontal flipping, random vertical flipping and per image standardization. The base encoder uses a standard 50 layer ResNet architecture. The number of filters is initially 16 and doubled every 8 blocks. In each block, two 3× 3 convolutional layers are used to transform the input, and the transformed output is added to the input after a 1× 1 convolution. 4× downsampling is applied by choosing the stride as 2 after 8th and 16th blocks. Each convolution is followed by batch normalization and the ReLU nonlinearity. After the last convolution, 8×8 average pooling is applied. The output is followed by a fully-connected layer of 256 units and the ReLU nonlinearity, followed by layer normalization (Lei Ba et al., 2016). Keys and queries are mapped from the output using a fully-connected layer followed by the ReLU nonlinearity, where the attention size is datt=16. Values are mapped from the output using a fully-connected layer of dout=128 units and the ReLU nonlinearity, followed by layer normalization. For the baseline encoder, the initial learning rate is chosen as 0.002 and exponential decay is applied with a rate of 0.95 applied every 10k iterations. The model is trained for 940k iterations. 
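The decay schedules quoted throughout this appendix share one form: the learning rate is multiplied by the decay rate once every fixed number of iterations. A one-line helper makes this explicit (illustrative; it is an assumption that the decay is applied in discrete staircase steps rather than continuously):

    def staircase_lr(lr0, rate, every, step):
        # e.g. lr0=0.002, rate=0.95, every=10000 for the CIFAR-10 baseline above.
        return lr0 * rate ** (step // every)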
For prototypical learning with softmax attention, the initial learning rate is chosen as 0.0035 and exponential decay is applied with a rate of 0.95 applied every 10k iterations. The model is trained for 625k iterations. For prototypical learning with sparsemax attention, the initial learning rate is chosen as 0.0015 and exponential decay is applied with a rate of 0.95 applied every 10k iterations. The model is trained for 905k iterations. For prototypical learning with sparsemax attention and sparsity regularization\n(with λsparse = 0.00008), the initial learning rate is chosen as 0.0015 and exponential decay is applied with a rate of 0.95 applied every 12k iterations. λconf = 0.1 is chosen when confidence regularization is applied. The model is trained for 450k iterations. All models use a batch size of 128 and gradient clipping above 20.\nCIFAR-10 experiments with noisy labels. For CIFAR-10 experiments with noisy labels for the base encoder we only optimize the learning parameters. Noisy labels are sampled uniformly from the set of labels excluding the correct one. The baseline model with noisy label ratio of 0.8 uses an initial learning rate of 0.001, decayed with a rate of 0.92 every 6k iterations, and is trained for 15k iterations. For the dropout approach, dropout with a rate of 0.1 is applied, and the model uses an initial learning rate of 0.002, decayed with a rate of 0.85 every 8k iterations, and is trained for 24k iterations. The baseline model with noisy label ratio of 0.6 uses an initial learning rate of 0.002, decayed with a rate of 0.92 every 6k iterations, and is trained for 12k iterations. For the dropout approach, dropout with a rate of 0.3 is applied, and the model uses an initial learning rate of 0.002, decayed with a rate of 0.92 every 8k iterations, and is trained for 18k iterations. The baseline model with noisy label ratio of 0.4 uses an initial learning rate of 0.002, decayed with a rate of 0.92 every 6k iterations, and is trained for 15k iterations. For the dropout approach, dropout with a rate of 0.5 is applied, and the model uses an initial learning rate of 0.002, decayed with a rate of 0.92 every 6k iterations, and is trained for 18k iterations. For experiments for the prototypical learning model with sparsemax attention, we optimize the learning parameters and λsparse. For the model with noisy label ratio of 0.8, λsparse = 0.0015, initial learning rate is chosen as 0.0006 and exponential decay is applied with a rate of 0.95 applied every 8k iterations. The model is trained for 108k iterations. For the model with noisy label ratio of 0.6, λsparse = 0.0005, initial learning rate is chosen as 0.001 and exponential decay is applied with a rate of 0.9 applied every 8k iterations. The model is trained for 92k iterations. For the model with noisy label ratio of 0.4, λsparse = 0.0003, initial learning rate is chosen as 0.001 and exponential decay is applied with a rate of 0.9 applied every 6k iterations. The model is trained for 122k iterations.\nC.1.4 FRUITS DATASET\nWe apply random cropping after padding each side by 5 pixels, random horizontal flipping, random vertical flipping and per image standardization. In the encoder, first, a downsampling with a convolutional layer is applied with a stride of 2, and using 16 filters, followed by a downsampling with max-pooling with a stride of 2. 
After obtaining the 25× 25 inputs, a standard 32 layer ResNet architecture (similar to MNIST) is used, followed by a fully-connected layer of 128 units and the ReLU nonlinearity, followed by layer normalization (Lei Ba et al., 2016). Keys and queries are mapped from the output using a fully-connected layer followed by the ReLU nonlinearity, where the attention size is datt=16. Values are mapped from the output using a fully-connected layer of dout=64 units and the ReLU nonlinearity, followed by layer normalization. W eight decay with a factor of 0.0001 is applied for the convolutional filters. The model uses a batch size of 128 and gradient clipping above 20.\nC.1.5 ISIC MELANOMA DATASET\nThe ISIC Melanoma dataset is formed from the ISIC Archive (ISIC, 2016) that contains over 13k dermoscopic images collected from leading clinical centers internationally and acquired from a variety of devices within each center. The dataset consists of skin images with labels denoting whether they contain melanoma or are benign. We construct the training and validation dataset using 15122 images (13511 benign and 1611 melanoma cases), and the evaluation dataset using 3203 images (2867 benign and 336 melanoma). While training, benign cases are undersampled in each batch to have 0.6 ratio including candidate database sets at training and inference. All images are resized to 128 × 128 pixels. We apply random cropping after padding each side by 8 pixels, random horizontal flipping, random vertical flipping and per image standardization. In the encoder, first, a downsampling with a convolutional layer is applied with a stride of 2, and using 16 filters, followed by a downsampling with max-pooling with a stride of 2. After obtaining the 32× 32 inputs, the base encoder uses a standard 50 layer ResNet architecture (similar to CIFAR10), followed by a fully-connected layer of 128 units and the ReLU nonlinearity, followed by layer normalization (Lei Ba et al., 2016). Keys and queries are mapped from the output using a fully-connected layer followed by the ReLU nonlinearity, where the attention size is datt=16. Values are mapped from the\noutput using a fully-connected layer of dout=64 units and the ReLU nonlinearity, followed by layer normalization. For the baseline encoder, the initial learning rate is chosen as 0.002 and exponential decay is applied with a rate of 0.9 applied every 3k iterations. The model is trained for 220k iterations. For prototypical learning with softmax attention, the initial learning rate is chosen as 0.0006 and exponential decay is applied with a rate of 0.9 applied every 3k iterations. The model is trained for 147k iterations. For prototypical learning with sparsemax attention, the initial learning rate is chosen as 0.0006 and exponential decay is applied with a rate of 0.9 applied every 4k iterations. The model is trained for 166k iterations. All models use a batch size of 128 and gradient clipping above 20.\nC.1.6 ANIMALS WITH ATTRIBUTES DATASET\nWe train ProtoAttend with sparsemax attention using the features from a pre-trained ResNet-50 as provided in (Yeh et al., 2018). To map the pre-trained features, we simply insert a single fullyconnected layer with 256 units with ReLU nonlinearity and layer normalization, followed by the individual fully-connected layers of keys, queries and values (16, 16 and 64 units respectively with ReLU nonlinearity). Sparsity regularization is applied with λsparse = 0.000001. We train the model for 70k iterations. 
The initial learning rate is chosen as 0.0006 and exponential decay with a rate of 0.8 is applied every 10k iterations. A classification accuracy above 91% is obtained on the test set.\nC.2 TEXT DATA\nC.2.1 DBPEDIA DATASET\nThere are 14 output classes: Company, Educational Institution, Artist, Athlete, Office Holder, Mean Of Transportation, Building, Natural Place, Village, Animal, Plant, Album, Film, Written Work. As the input, 16-dimensional trainable embeddings are mapped from the dictionary of 69 raw characters (Conneau et al., 2016). The maximum length is set to 448; longer inputs are truncated, while shorter inputs are padded. The input embeddings are first transformed with a 1-D convolutional block consisting of 64 filters with a kernel width of 3 and a stride of 2. Then, 8 convolution blocks as in (Conneau et al., 2016) are applied, with 64, 64, 128, 128, 256, 256, 512 and 512 filters, respectively. All use a kernel width of 3, and after every two layers, max pooling is applied with a kernel width of 3 and a stride of 2. All convolutions are followed by batch normalization and the ReLU nonlinearity. Convolutional filters use weight normalization with parameter 0.00001. The last convolution block is followed by k-max pooling with k=8 (Conneau et al., 2016). Finally, we apply two fully-connected layers with 1024 hidden units. In contrast to (Conneau et al., 2016), we also use layer normalization (Lei Ba et al., 2016) after the fully-connected layers, as we observe that this leads to more stable training behavior. Keys and queries are mapped from the output using a fully-connected layer followed by the ReLU nonlinearity, where the attention size is datt=16. Values are mapped from the output using a fully-connected layer of dout=64 units and the ReLU nonlinearity, followed by layer normalization. For the baseline encoder, the initial learning rate is chosen as 0.0008 and exponential decay with a rate of 0.9 is applied every 8k iterations. The model is trained for 212k iterations. For the prototypical learning model with softmax attention, the initial learning rate is chosen as 0.0008 and exponential decay with a rate of 0.9 is applied every 8k iterations. The model is trained for 146k iterations. For the prototypical learning model with sparsemax attention, the initial learning rate is chosen as 0.0005 and exponential decay with a rate of 0.82 is applied every 8k iterations. The model is trained for 270k iterations. All models use a batch size of 128 and gradient clipping above 20. We do not apply any data augmentation.\nC.3 TABULAR DATA\nC.3.1 ADULT CENSUS INCOME\nThere are two output classes: whether or not the annual income is above $50k. Categorical variables such as 'marital-status' are mapped to multi-hot representations. Continuous variables are used after a fixed normalization transformation. For 'age', the transformation first subtracts 50 and then divides by 30. For 'fnlwgt', the transformation first takes the log, then subtracts 9, and then divides by 3. For 'education-num', the transformation first subtracts 6 and then divides by 6. For 'hours-per-week', the transformation first subtracts 50 and then divides by 50. For 'capital-gain' and 'capital-loss', the normalization takes the log, then subtracts 5, and then divides by 5. The concatenated features are then mapped to a 64-dimensional vector using a fully-connected layer, followed by the ReLU nonlinearity. The base encoder uses an LSTM architecture with 4 timesteps.
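A minimal sketch of the fixed normalization transforms just described; the column names follow the standard Adult census schema, and the handling of zero-valued capital-gain/loss is an assumption, since the text does not specify it:

import numpy as np

def normalize_adult(row):
    # Fixed normalization transforms for the continuous Adult census features,
    # exactly as described above; `row` is assumed to be a dict of raw values.
    # Zero capital-gain/loss values are clipped to 1 before the log (assumption).
    return {
        "age": (row["age"] - 50.0) / 30.0,
        "fnlwgt": (np.log(row["fnlwgt"]) - 9.0) / 3.0,
        "education-num": (row["education-num"] - 6.0) / 6.0,
        "hours-per-week": (row["hours-per-week"] - 50.0) / 50.0,
        "capital-gain": (np.log(max(row["capital-gain"], 1.0)) - 5.0) / 5.0,
        "capital-loss": (np.log(max(row["capital-loss"], 1.0)) - 5.0) / 5.0,
    }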
At each timestep, the 64-dimensional inputs are applied after dropout with a rate of 0.5. The output of the last timestep is used after applying dropout with a rate of 0.5. Keys and queries are mapped from this output using a fully-connected layer followed by the ReLU nonlinearity, where the attention size is datt=16. Values are mapped from the output using a fully-connected layer of dout=16 units and the ReLU nonlinearity, followed by layer normalization. For the baseline encoder, the initial learning rate is chosen as 0.002 and exponential decay with a rate of 0.9 is applied every 2k iterations. The model is trained for 4.5k iterations. For the models with attention in the prototypical learning framework, the initial learning rate is chosen as 0.0005 and exponential decay with a rate of 0.92 is applied every 2k iterations. The softmax attention model is trained for 13.5k iterations and the sparsemax attention model is trained for 11.5k iterations. For the model with sparsity regularization, the initial learning rate is 0.003 and exponential decay with a rate of 0.7 is applied every 2k iterations; the model is trained for 7k iterations. All models use a batch size of 128 and gradient clipping above 20. We do not apply any data augmentation." }, { "heading": "D ADDITIONAL PROTOTYPE EXAMPLES", "text": "Fig. 10 exemplifies prototypes for CIFAR-10. For most cases, we observe the similarity of discriminative features between inputs and prototypes. For example, the body figures of birds, the shape of tires, the face patterns of dogs, the body figures of frogs, and the appearance of the background sky for planes are among the features apparent in the examples.\nFig. 11 shows additional prototype examples for the DBPedia dataset. Prototypes have very similar sentence structure, words and concepts, while categorizing the sentences into ontologies.\nFig. 12 shows example prototypes for ISIC Melanoma. In some cases, we observe commonalities between the input and prototypes that distinguish melanoma cases, such as the non-circular geometry or irregularly-notched borders (Jerant et al., 2000). Compared to other datasets, the ISIC Melanoma dataset yields lower interpretable prototype quality on average. We hypothesize this to be due to the perceptual difficulty of the problem as well as insufficient encoder performance, shown by the lower classification accuracy (despite the acceptable AUC).\nFig. 13 shows more comparison examples for the prototypical learning framework with sparsemax attention vs. representer point selection (Yeh et al., 2018) on the Animals with Attributes dataset. For some cases, including chimpanzee, zebra, dalmatian and tiger, ProtoAttend yields perceptually very similar samples. The similarity of the chimpanzee body form and the background, the zebra patterns, the dalmatian pattern on the grass, and the tiger pattern and head pose are prominent. Representer point selection fails to capture such similarity features as effectively. On the other hand, for bat, otter and wolf, the results are somewhat less satisfying. The wing part of the bat, the multiple otters together with the background, and the color and furry head of the wolf seem to be captured, but with less apparent similarity than some other possible samples from the dataset. The representer point selection method also cannot be claimed to be successful in these cases. Lastly, for leopard, ProtoAttend only yields one non-zero prototype (which is indeed statistically rare given the model and sparsity choices).
The pattern of the leopard image seems relevant, but it is also not fully satisfying to observe a single prototype that is not perceptually more similar. All of the test examples in Fig. 13 are classified correctly with our framework, and all of the shown prototypes are also from the correct classes." }, { "heading": "E COMPARISON OF CONFIDENCE-CONTROLLED PREDICTION FOR SOFTMAX VS. SPARSEMAX", "text": "Figs. 14 and 15 show the accuracy vs. ratio of samples for softmax vs. sparsemax attention without confidence regularization. The baseline accuracy (at a 100% prediction ratio) is higher for softmax attention on some datasets and higher for sparsemax on others. On the other hand, the higher number of prototypes yielded by softmax attention results in a wider trade-off range for confidence-controlled prediction.\nAs an impactful case study, we consider the melanoma detection problem with the ISIC dataset (ISIC, 2016). In medical diagnosis, it is strongly desired to maintain a sufficiently high prediction performance, potentially by having medical experts verify the decisions of an AI system in the cases where the AI models are not confident. By refraining from some predictions, as shown in Fig. 16, we demonstrate unprecedentedly high AUC values without using transfer learning or highly-customized models (Haenssle et al., 2018)." }, { "heading": "F HUMAN USER STUDY ON THE USEFULNESS OF PROTOTYPES", "text": "We perform a user study by asking humans how much an extra image helps in explaining the guessed class of the input, after showing what the trained network predicts for that input. We consider the Animals with Attributes dataset (exemplified in Fig. 13). We randomly pick test samples and assess how much showing the top prototype makes a difference. The results in Table 5 show that the top prototype picked by ProtoAttend is rated as considerably more helpful than randomly sampled images.\nTable 5: Human ratings (mean score and 95% confidence interval) on how much an extra image helps guessing the class of the input.\nSampling method | Score (out of 5)\nTop prototype by ProtoAttend | 4.33 ± 0.09\nRandomly sampled from the predicted class | 3.97 ± 0.12\nRandomly sampled from any class | 1.33 ± 0.09" }, { "heading": "G CONTROLLING SPARSITY VIA REGULARIZATION", "text": "[Figure 17 plot: number of iterations (x-axis, up to 600k) vs. median number of prototypes for a 95% decision (y-axis, log scale), with one curve per λsparse ∈ {0, 0.0001, 0.001, 0.003, 0.01, 0.03, 0.1}]\nFigure 17: Number of training iterations vs. median number of prototypes to explain 95% of the decision (in logarithmic scale), for Fashion-MNIST with softmax attention.\nFig. 17 shows the impact of the sparsity regularization coefficient on training. By varying the value of λsparse, the number of prototypes can be efficiently controlled. For high values of the sparsity regularization coefficient, the model gets stuck at a point where it is forced to make decisions from a low number of prototypes before the encoder model is properly learned, and hence typically yields considerably lower performance. We also observe that the sparsity mechanism via sparsemax attention yields better performance than softmax attention with high sparsity regularization." },
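The quantity plotted in Figure 17 can be computed from the attention weights as follows (a sketch, assuming the weights over the candidate database sum to 1):

import numpy as np

def n_prototypes_for_mass(weights, mass=0.95):
    # Number of prototypes needed to explain `mass` of the decision; the
    # median of this quantity over a dataset is the y-axis of Figure 17.
    w = np.sort(np.asarray(weights, dtype=float))[::-1]
    k = int(np.searchsorted(np.cumsum(w), mass)) + 1
    return min(k, len(w))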
{ "heading": "H PROTOTYPE QUALITY", "text": "In general, the following scenarios may yield low prototype quality:\n1. Lack of related samples in the candidate database.\n2. Perceptual difference between humans and encoders in determining discriminative features.\n3. High intra-class variability that makes training difficult.\n4. An imperfect encoder that cannot yield fully accurate representations of the input.\n5. Insufficiency of relational attention to determine weights from queries and keys.\n6. Inefficient decoupling between the encoder & attention blocks and the final decision block.\nThere can be problem-dependent fundamental limitations on (1)-(3), whereas (4)-(6) arise from choices of models and losses and can be further improved. We leave the quantification of prototype quality using information-theoretic metrics or discriminative neural networks to future work." }, { "heading": "I UNDERSTANDING MISCLASSIFICATION CASES", "text": "One of the benefits of prototypical learning is the insight it provides into wrong decisions. Fig. 18 exemplifies prototypes with wrong labels, which give insight into why the model is confused about a particular input (e.g., due to the similarity of the visual patterns). Such insights can be acted on to improve model performance, for example by adding more training samples for the confused classes or by modifying the loss functions." } ]
2019
null
SP:e7d072333891bebe16584ee8276b874cb28fffda
[ "The submission considers the problem of imitation learning when the dynamics of the expert are not known to the agent and the dynamics of the agent may change frequently. It is however assumed that the agent has access to a parameterized simulator that can simulate the expert dynamics. The parameters for the simulator are not known but are assumed to be drawn from a known distribution.", "This paper proposes an algorithm for imitation of expert demonstrations, in situations where the imitator is acting under a different environment (different dynamics, for instance) than the one used to collect expert demonstrations. The algorithm builds on GAIL with the following modifications – the discriminator is made dynamics-invariant by adding a domain-adversarial loss, and the policy is made to condition on a dynamics context. A separate dynamics posterior network is trained (either supervised or unsupervised) to predict this context at test-time. " ]
We present the ADaptive Adversarial Imitation Learning (ADAIL) algorithm for learning adaptive policies that can be transferred between environments of varying dynamics, by imitating a small number of demonstrations collected from a single source domain. This is an important problem in robotic learning because in real-world scenarios: 1) reward functions are hard to obtain, 2) learned policies from one domain are difficult to deploy in another due to varying source-to-target domain statistics, and 3) collecting expert demonstrations in multiple environments where the dynamics are known and controlled is often infeasible. We address these constraints by building upon recent advances in adversarial imitation learning; we condition our policy on a learned dynamics embedding and we employ a domain-adversarial loss to learn a dynamics-invariant discriminator. The effectiveness of our method is demonstrated on simulated control tasks with varying environment dynamics, and the learned adaptive agent outperforms several recent baselines.
[]
[ { "authors": [ "Pieter Abbeel", "Andrew Y Ng" ], "title": "Apprenticeship learning via inverse reinforcement learning", "venue": "In Proceedings of the twenty-first international conference on Machine learning,", "year": 2004 }, { "authors": [ "Chris L Baker", "Joshua B Tenenbaum", "Rebecca R Saxe" ], "title": "Goal inference as inverse planning", "venue": "In Proceedings of the Annual Meeting of the Cognitive Science Society,", "year": 2007 }, { "authors": [ "Konstantinos Bousmalis", "George Trigeorgis", "Nathan Silberman", "Dilip Krishnan", "Dumitru Erhan" ], "title": "Domain separation networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Yevgen Chebotar", "Ankur Handa", "Viktor Makoviychuk", "Miles Macklin", "Jan Issac", "Nathan Ratliff", "Dieter Fox" ], "title": "Closing the sim-to-real loop: Adapting simulation randomization with real world experience", "venue": "arXiv preprint arXiv:1810.05687,", "year": 2018 }, { "authors": [ "Jason V Davis", "Brian Kulis", "Prateek Jain", "Suvrit Sra", "Inderjit S Dhillon" ], "title": "Information-theoretic metric learning", "venue": "In Proceedings of the 24th international conference on Machine learning,", "year": 2007 }, { "authors": [ "Yan Duan", "Marcin Andrychowicz", "Bradly Stadie", "OpenAI Jonathan Ho", "Jonas Schneider", "Ilya Sutskever", "Pieter Abbeel", "Wojciech Zaremba" ], "title": "One-shot imitation learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Chelsea Finn", "Sergey Levine", "Pieter Abbeel" ], "title": "Guided cost learning: Deep inverse optimal control via policy optimization", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine LearningVolume", "year": 2017 }, { "authors": [ "Yaroslav Ganin", "Victor Lempitsky" ], "title": "Unsupervised domain adaptation by backpropagation", "venue": "arXiv preprint arXiv:1409.7495,", "year": 2014 }, { "authors": [ "Jonathan Ho", "Stefano Ermon" ], "title": "Generative adversarial imitation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Yen-Chang Hsu", "Zsolt Kira" ], "title": "Neural network-based clustering using pairwise constraints", "venue": "arXiv preprint arXiv:1511.06321,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Ilya Kostrikov", "Kumar Krishna Agrawal", "Sergey Levine", "Jonathan Tompson" ], "title": "Addressing sample inefficiency and reward bias in inverse reinforcement learning", "venue": null, "year": 2018 }, { "authors": [ "Ajay Mandlekar", "Yuke Zhu", "Animesh Garg", "Li Fei-Fei", "Silvio Savarese" ], "title": "Adversarially robust policy learning: Active construction of physically-plausible perturbations", "venue": "IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),", "year": 2017 }, { "authors": [ "Anusha Nagabandi", "Ignasi Clavera", "Simin Liu", "Ronald S Fearing", "Pieter Abbeel", "Sergey Levine", "Chelsea Finn" ], "title": "Learning to adapt in dynamic, real-world environments through metareinforcement learning", "venue": "arXiv preprint arXiv:1803.11347,", "year": 2018 }, { "authors": [ 
"Andrew Y Ng", "Stuart J Russell" ], "title": "Algorithms for inverse reinforcement learning", "venue": "In Icml,", "year": 2000 }, { "authors": [ "Xue Bin Peng", "Marcin Andrychowicz", "Wojciech Zaremba", "Pieter Abbeel" ], "title": "Sim-to-real transfer of robotic control with dynamics randomization", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2018 }, { "authors": [ "Lerrel Pinto", "James Davidson", "Rahul Sukthankar", "Abhinav Gupta" ], "title": "Robust adversarial reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Dean A Pomerleau" ], "title": "Alvinn: An autonomous land vehicle in a neural network", "venue": "In Advances in neural information processing systems,", "year": 1989 }, { "authors": [ "Aravind Rajeswaran", "Sarvjeet Ghotra", "Balaraman Ravindran", "Sergey Levine" ], "title": "Epopt: Learning robust neural network policies using model ensembles", "venue": "arXiv preprint arXiv:1610.01283,", "year": 2016 }, { "authors": [ "Stéphane Ross", "Geoffrey Gordon", "Drew Bagnell" ], "title": "A reduction of imitation learning and structured prediction to no-regret online learning", "venue": "In Proceedings of the fourteenth international conference on artificial intelligence and statistics,", "year": 2011 }, { "authors": [ "Fereshteh Sadeghi", "Sergey Levine" ], "title": "Cad2rl: Real single-image flight without a single real image", "venue": "arXiv preprint arXiv:1611.04201,", "year": 2016 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Kihyuk Sohn", "Honglak Lee", "Xinchen Yan" ], "title": "Learning structured output representation using deep conditional generative models", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Bradly C Stadie", "Pieter Abbeel", "Ilya Sutskever" ], "title": "Third-person imitation learning", "venue": "arXiv preprint arXiv:1703.01703,", "year": 2017 }, { "authors": [ "Jie Tan", "Tingnan Zhang", "Erwin Coumans", "Atil Iscen", "Yunfei Bai", "Danijar Hafner", "Steven Bohez", "Vincent Vanhoucke" ], "title": "Sim-to-real: Learning agile locomotion for quadruped robots", "venue": "arXiv preprint arXiv:1804.10332,", "year": 2018 }, { "authors": [ "Josh Tobin", "Rachel Fong", "Alex Ray", "Jonas Schneider", "Wojciech Zaremba", "Pieter Abbeel" ], "title": "Domain randomization for transferring deep neural networks from simulation to the real world", "venue": "IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),", "year": 2017 }, { "authors": [ "Wenhao Yu", "Jie Tan", "C Karen Liu", "Greg Turk" ], "title": "Preparing for the unknown: Learning a universal policy with online system identification", "venue": "arXiv preprint arXiv:1702.02453,", "year": 2017 }, { "authors": [ "Brian D Ziebart", "Andrew L Maas", "J Andrew Bagnell", "Anind K Dey" ], "title": "Maximum entropy inverse reinforcement learning", "venue": "In Aaai,", "year": 2008 } ]
[ { "heading": "1 INTRODUCTION", "text": "Humans and animals can learn complex behaviors via imitation. Inspired by these learning mechanisms, Imitation Learning (IL) has long been a popular method for training autonomous agents from human-provided demonstrations. However, human and animal imitation differs markedly from commonly used approaches in machine learning. Firstly, humans and animals tend to imitate the goal of the task rather than the particular motions of the demonstrator (Baker et al., 2007). Secondly, humans and animals can easily handle imitation scenarios where there is a shift in embodiment and dynamics between themselves and a demonstrator. The first feature of human IL can be represented within the framework of Inverse Reinforcement Learning (IRL) (Ng et al., 2000; Abbeel & Ng, 2004; Ziebart et al., 2008), which at a high level casts the problem of imitation as one of matching outcomes rather than actions. Recent work in adversarial imitation learning (Ho & Ermon, 2016; Finn et al., 2016) has accomplished this by using a discriminator to judge whether a given behavior is from an expert or imitator, and then a policy is trained using the discriminator expert likelihood as a reward. While successful in multiple problem domains, this approach makes it difficult to accommodate the second feature of human learning: imitation across shifts in embodiment and dynamics. This is because in the presence of such shifts, the discriminator may either simply use the embodiment or dynamics to infer whether it is evaluating expert behavior, and as a consequence fails to provide a meaningful reward signal.\nIn this paper we are concerned with the problem of learning adaptive policies that can be transferred to environments with varying dynamics, by imitating a small number of expert demonstrations collected from a single source domain. This problem is important in robotic learning because it is better aligned with real world constraints: 1) reward functions are hard to obtain, 2) learned policies from one domain are hard to deploy to different domains due to varying source to target domain statistics, and 3) the target domain dynamics oftentimes changes while executing the learned policy. As such, this work assumes ground truth rewards are not available, and furthermore we assume that expert demonstrations come from only a single domain (i.e. an instance of an environment where dynamics cannot be exactly replicated by the policy at training time). To the best of our knowledge, this is the first work to tackle this challenging problem formulation.\nOur proposed method solves the above problem by building upon the GAIL (Ho & Ermon, 2016; Finn et al., 2016) framework, by firstly conditioning the policy on a learned dynamics embedding (“context variable” in policy search literature (Deisenroth et al., 2013)). We propose two embedding\napproaches on which the policy is conditioned, namely, a direct supervised learning approach and a variational autoencoder (VAE) (Kingma & Welling, 2013) based unsupervised approach. Secondly, to prevent the discriminator from inferring whether it is evaluating the expert behavior or imitator behavior purely through the dynamics, we propose using a Gradient Reversal Layer (GRL) to learn a dynamics-invariant discriminator. 
We demonstrate the effectiveness of the proposed algorithm on benchmark Mujoco simulated control tasks.\nThe main contributions of our work include: 1) we present a general and novel problem formulation that is well aligned with real-world scenarios in comparison to recent literature, 2) we devise a conceptually simple architecture that is capable of learning an adaptive policy from a small number of expert demonstrations (order of 10s) collected from only one source environment, and 3) we design an adversarial loss for addressing the covariate shift issue in discriminator learning." }, { "heading": "2 RELATED WORK", "text": "Historically, two main avenues have been heavily studied for imitation learning: 1) Behavioral Cloning (BC) and 2) Inverse Reinforcement Learning (IRL). Though conceptually simple, BC suffers from compounding errors caused by covariate shift and consequently often requires a large quantity of demonstrations (Pomerleau, 1989), or access to the expert policy (Ross et al., 2011), in order to recover a stable policy.\nRecent advancements in imitation learning (Ho & Ermon, 2016; Finn et al., 2016) have adopted an adversarial formulation that alternates between 1) discriminating the generated policy against the expert demonstrations and 2) a policy improvement step where the policy aims to fool the learned discriminator.\nDynamics randomization (Tobin et al., 2017; Sadeghi & Levine, 2016; Mandlekar et al., 2017; Tan et al., 2018; Pinto et al., 2017; Peng et al., 2018; Chebotar et al., 2018; Rajeswaran et al., 2016) has been one of the prevailing vehicles for addressing varying simulation-to-real-world domain statistics. This avenue of methods typically involves perturbing the environment dynamics (oftentimes adversarially) in simulation in order to learn an adaptive policy that is robust enough to bridge the “Reality Gap”.\nWhile dynamics randomization has been explored for learning robust policies in an RL setting, it has a critical limitation in the imitation learning context: large domain shifts might result in directional differences in dynamics; therefore, the demonstrated actions might no longer be admissible for solving the task in the target domain. Our method (Figure 2) also involves training in a variety of environments with different dynamics. However, we propose conditioning the policy on an explicitly learned dynamics embedding to enable adaptive policies based on online system ID.\nYu et al. (2017) adopted a similar approach towards building adaptive policies. They learn an online system identification model and condition the policy on the predicted model parameters in an RL setting. In comparison to their work, we do not assume access to the ground truth reward signals or the ground truth physics parameters at evaluation time, which makes this work's problem formulation a harder learning problem, but with greater potential for real-world applications. We will compare our method with Yu et al. (2017) in the experimental section.\nThird person imitation learning (Stadie et al., 2017) also employs a GRL (Ganin & Lempitsky, 2014) under a GAIL-like formulation with the goal of learning expert behaviors in a new domain. In comparison, our method also enables learning adaptive policies by employing an online dynamics identification component, so that the policies can be transferred to a class of domains, as opposed to one domain.
In addition, learned policies using our proposed method can handle online dynamics perturbations.\nMeta learning (Finn et al., 2017) has also been applied to address varying source-to-target domain dynamics (Duan et al., 2017; Nagabandi et al., 2018). The idea behind meta learning in the context of robotic learning is to learn a meta policy that is “initialized” for a variety of tasks in simulation, and then to fine-tune the policy in the real-world setting given a specific goal. After the meta-learning phase, the agent requires significantly fewer environment interactions to obtain a policy that solves the task. In comparison to meta learning based approaches, fine-tuning on the test environment is not required in our method, with the caveat being that this is true only within the target domain where the dynamics posterior is effective." }, { "heading": "2.1 BACKGROUND", "text": "In this section we will briefly review GAIL (Ho & Ermon, 2016). Inspired by GANs, the GAIL objective is defined as:\n$\\min_\\theta \\max_\\omega \\; \\mathbb{E}_{\\pi_E}[\\log D_\\omega(s,a)] + \\mathbb{E}_{\\pi_\\theta}[\\log(1 - D_\\omega(s,a))]$ (1)\nwhere πE denotes the expert policy that generated the demonstrations; πθ is the policy to imitate the expert; and D is a discriminator that learns to distinguish between πθ and πE from generated state-action pairs. In comparison to GAN optimization, the GAIL objective is rarely directly differentiable, since differentiation through the environment step is often intractable. Optimization is instead achieved via RL-based policy gradient algorithms, e.g., PPO (Schulman et al., 2017), or off-policy methods, e.g., TD3 (Kostrikov et al., 2018). Without an explicit reward function, GAIL relies on reward signals provided by the learned discriminator, where a common reward formulation is $r_\\omega(s,a) = -\\log(1 - D_\\omega(s,a))$." }, { "heading": "3 ADAPTIVE ADVERSARIAL IMITATION LEARNING (ADAIL)", "text": "" }, { "heading": "3.1 PROBLEM DEFINITION", "text": "Suppose we are given a class E of environments with different dynamics but similar goals, a domain generator g(c) which takes in a code c and generates an environment e_c ∈ E, and a set of expert demonstrations {τ_exp} collected from one source environment e_exp ∈ E. In adaptive imitation learning, one attempts to learn an adaptive policy πθ that can generalize across environments within E. We assume that the ground truth dynamics parameters c, which are used to generate the simulated environments, are given (or manually sampled) during the training phase." }, { "heading": "3.2 ALGORITHM OVERVIEW", "text": "We allow the agent to interact with a class of similar simulated environments with varying dynamics parameters, which we call “adaptive training”. To be able to capture high-level goals from a small set of demonstrations, we adopt an approach similar to GAIL. To provide consistent feedback signals during training across environments with different dynamics, the discriminator should be dynamics-invariant. We enable this desirable feature by learning a dynamics-invariant feature layer for the discriminator by 1) adding another head D_R(c|s, a) to the discriminator to predict the dynamics parameters, and 2) inserting a GRL in-between D_R and the dynamics-invariant feature layer. The discriminator design is illustrated in Figure 1. In addition, to enable adaptive policies, we introduce a dynamics posterior that takes a roll-out trajectory and outputs an embedding, on which the policy is conditioned.
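A minimal sketch of the two-headed discriminator in Figure 1, reusing the grad_reverse helper sketched above; the layer sizes and activations are illustrative assumptions, as only the two heads and the shared dynamics-invariant trunk are specified in the text:

import torch
import torch.nn as nn

class ADAILDiscriminator(nn.Module):
    # Shared dynamics-invariant trunk feeding (i) the usual expert/imitator
    # head D_w(s, a) and (ii) a dynamics regression head D_R(c | s, a)
    # placed behind the gradient reversal layer.
    def __init__(self, obs_dim, act_dim, dyn_dim, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.d_head = nn.Linear(hidden, 1)          # real/fake logit
        self.dyn_head = nn.Linear(hidden, dyn_dim)  # dynamics regression

    def forward(self, s, a, lam=1.0):
        h = self.trunk(torch.cat([s, a], dim=-1))
        return self.d_head(h), self.dyn_head(grad_reverse(h, lam))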
Intuitively, explicit dynamics latent variable learning endows the agent with the ability to identify the system and act differently against changes in dynamics. Note that a policy can learn to infer dynamics implicitly, without the need for an external dynamics embedding. However, we find experimentally that policies conditioned explicitly on the environment parameters outperform those that are not. The overall architecture is illustrated in Figure 2. We call the algorithm Adaptive Adversarial Imitation Learning (ADAIL), with the following objective (note that for brevity, we for now omit the GRL term discussed in Section 3.4):\n$\\min_\\theta \\max_{\\omega,\\phi} \\; \\mathbb{E}_{\\pi_E}[\\log D_\\omega(s,a)] + \\mathbb{E}_{\\pi_\\theta(\\cdot|c)}[\\log(1 - D_\\omega(s,a))] + \\mathbb{E}_{\\tau\\sim\\pi_\\theta(\\cdot|c)}[\\log Q_\\phi(c|\\tau)]$ (2)\nwhere c is a learned latent dynamics representation that is associated with the rollout environment in each gradient step; τ is a roll-out trajectory using πθ(·|c) in the corresponding environment; and Q(c|τ) is a “dynamics posterior” for inferring the dynamics at test time. The last term in the objective, $\\mathbb{E}_{\\tau\\sim\\pi_\\theta(\\cdot|c)}[\\log Q_\\phi(c|\\tau)]$, is a general form of the expected log likelihood of c given τ. Note that the posterior training is on-policy, meaning that the rollouts are collected online using the current policy; thereby the last term of the objective depends on θ. One can employ various supervised and unsupervised methods towards optimizing this term. We will explore a few methods in the following subsections. The algorithm is outlined in Algorithm 1.\nAlgorithm 1 ADAIL\n1: Inputs:\n2: An environment class E.\n3: Initial parameters of policy θ, discriminator ω, and posterior φ.\n4: A set of expert demonstrations {τ_exp} on one of the environments e_exp ∈ E; an environment generator g(c) that takes a code and generates an environment e_c ∈ E; a prior distribution p(c).\n5: for i = 1, 2, ... do\n6: Sample c ∼ p(c) and generate environment e_c ∼ g(c)\n7: Sample trajectories τ_i ∼ π_θ(·|c) in e_c and τ_ei ∼ {τ_exp}\n8: Update the discriminator parameters ω with the gradients: $\\hat{\\mathbb{E}}_{(s,a)\\sim\\tau_i}[\\nabla_\\omega \\log D_\\omega(s,a)] + \\hat{\\mathbb{E}}_{(s,a)\\sim\\tau_{ei}}[\\nabla_\\omega \\log(1 - D_\\omega(s,a))]$\n9: Update the discriminator parameters ω again with the following loss, such that the gradients are reversed when back-propagating through the dynamics-invariant layer: $-\\hat{\\mathbb{E}}_{(s,a)\\sim\\tau_i}[\\log D_R(c|s,a)]$\n10: Update the posterior parameters φ with gradients $\\hat{\\mathbb{E}}_{\\tau_i}[\\nabla_\\phi \\log Q_\\phi(c|\\tau_i)]$\n11: Update the policy π_θ(·|c) using a policy optimization method (PPO) with rewards: $\\hat{\\mathbb{E}}_{(s,a)\\sim\\tau_i}[-\\log(1 - D_\\omega(s,a))]$\n12: Output: Learned policy π_θ and posterior Q_φ." }, { "heading": "3.3 ADAPTIVE TRAINING", "text": "Adaptive training is achieved through 1) allowing the agent to interact with a class of similar simulated environments within class E, and 2) learning a dynamics posterior for predicting the dynamics based on rollouts. The environment class E is defined as a set of parameterized environments with n degrees of freedom, where n is the total number of latent dynamics parameters that we can change. We assume that we have access to an environment generator g(c) that takes in a sample of the dynamics parameters c and generates an environment. Each time an on-policy rollout is initiated, we re-sample the dynamics parameters c based on a predefined prior distribution p(c)." }, { "heading": "3.4 LEARNING A DYNAMICS-INVARIANT DISCRIMINATOR", "text": "GAIL learns from the expert demonstrations by matching an implicit state-action occupancy measure. However, this formulation might be problematic in our training setting, where on-policy rollouts are collected from environments with varying dynamics.
In non-source environments, the discriminator can no longer provide canonical feedback signals. This motivates us to learn a dynamics-invariant feature space, where the behavior-oriented features are preserved but dynamics-identifiable features are removed. We approach this problem by assuming that the behavior-oriented characteristics and dynamics-identifiable characteristics are loosely coupled, and thereby we can learn a dynamics-invariant representation for the discriminator. In particular, we employ a technique called a Gradient Reversal Layer (GRL) (Ganin & Lempitsky, 2014), which is widely used in image domain adaptation (Bousmalis et al., 2016). The dynamics-invariant feature layer is shared with the original discriminator classification head, as illustrated in Figure 1." }, { "heading": "3.5 DIRECT SUPERVISED DYNAMICS LATENT VARIABLE LEARNING", "text": "Perhaps one of the best latent representations of the dynamics is the ground truth physics parameterization (gravity, friction, limb length, etc.). In this section we explore supervised learning for inferring dynamics. A neural network is employed to represent the dynamics posterior, which is learned via supervised learning by regressing to the ground truth physics parameters given a replay buffer of policy rollouts. We update the regression network using a Huber loss to match environment dynamics labels. Details about the Huber loss can be found in Appendix A.1. During training, we condition the learned policy on the ground truth physics parameters. During evaluation, on the other hand, the policy is conditioned on the physics parameters predicted by the posterior.\nWe use (state, action, next state) as the posterior's input, i.e., Q_φ(c|s, a, s′), and a 3-layer fully-connected neural network to output the N-dimensional environment parameters. Note that one can use a recurrent neural network and a longer rollout history for modeling complex dynamics structures; however, we found that this was not necessary for the chosen evaluation environments." }, { "heading": "3.6 VAE-BASED UNSUPERVISED DYNAMICS LATENT VARIABLE LEARNING", "text": "In many cases, the number of varying latent parameters of the environment is high, one might not know the set of latent parameters that will vary in a real-world laboratory setting, or the latent parameters are oftentimes strongly correlated (e.g., gravity and friction) in terms of their effect on environment dynamics. In this case, predicting the exact latent parameterization is hard. The policy is mainly concerned with the net effect of the latent parameters. This motivates us to use an unsupervised tool to extract a latent dynamics embedding. In this section, we explore a VAE-based unsupervised approach, similar to the conditional VAE (Sohn et al., 2015), with an additional contrastive regularization loss, for learning the dynamics without ground truth labels.\nWith the goal of capturing the underlying dynamics, we avoid directly reconstructing the (state, action, next state) tuple, (s, a, s′). Otherwise, the VAE would likely capture the latent structure of the state space. Instead, the decoder is modified to take in the state-action pair, (s, a), and a latent code, c, and output the next state, s′. The decoder thus becomes a forward dynamics predictive model.
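A minimal sketch of this encoder/decoder pair; the hidden sizes and the Gaussian reparameterization details are illustrative assumptions:

import torch
import torch.nn as nn

class DynamicsVAE(nn.Module):
    # Encoder Q(c|s,a,s') infers a latent dynamics code; the decoder
    # P(s'|s,a,c) acts as a forward dynamics model, as in Figure 3.
    def __init__(self, obs_dim, act_dim, code_dim, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Linear(2 * obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * code_dim))   # -> (mu, log_var)
        self.dec = nn.Sequential(
            nn.Linear(obs_dim + act_dim + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, obs_dim))        # -> predicted s'

    def forward(self, s, a, s_next):
        mu, log_var = self.enc(torch.cat([s, a, s_next], -1)).chunk(2, -1)
        c = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterize
        s_next_hat = self.dec(torch.cat([s, a, c], -1))
        return s_next_hat, mu, log_var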
The unsupervised dynamics latent variable learning method is illustrated in Figure 3.\nThe evidence lower bound (ELBO) used is:\n$\\mathrm{ELBO} = \\mathbb{E}_{Q_\\phi(c|s,a,s')}[\\log P_\\psi(s'|s,a,c)] - \\mathrm{KL}(Q_\\phi(c|s,a,s') \\,\\|\\, P(c))$ (3)\nwhere Q(c|s, a, s′) is the dynamics posterior (encoder); P(s′|s, a, c) is a forward dynamics predictive model (decoder); and P(c) is a Gaussian prior over the latent code c. Similar to Davis et al. (2007) and Hsu & Kira (2015), to avoid the encoder learning an identity mapping on s′, we add the following KL-based contrastive regularization to the loss:\n$L_{\\mathrm{contrastive}} = \\mathrm{KL}(Q_\\phi(s_0,a_0,s'_0) \\,\\|\\, Q_\\phi(s_1,a_1,s'_1)) - \\min\\{\\mathrm{KL}(Q_\\phi(s_2,a_2,s'_2) \\,\\|\\, Q_\\phi(s_3,a_3,s'_3)),\\, D_0\\}$\nwhere (s_0, a_0, s′_0) and (s_1, a_1, s′_1) are sampled from the same roll-out trajectory; (s_2, a_2, s′_2) and (s_3, a_3, s′_3) are sampled from different roll-out trajectories; and D_0 is a constant. We use this regularization to introduce additional supervision in order to improve the robustness of the latent posterior.\nThe overall objective for the dynamics learner is\n$\\min_{\\phi,\\psi} -\\mathrm{ELBO} + \\lambda L_{\\mathrm{contrastive}}$ (4)\nwhere λ is a scalar that controls the relative strength of the regularization term. The learned posterior (encoder) infers the latent dynamics, which is used for conditioning the policy. The modified algorithm can be found in the appendix (Algorithm 2)." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 ENVIRONMENTS", "text": "To evaluate the proposed algorithm we consider 4 simulated environments: CartPole, Hopper, HalfCheetah and Ant. The chosen dynamics parameters are specified in Table 3, and an example of one such parameter (HalfCheetah gravity component x) is shown in Figure 4. During training the parameters are sampled uniformly from the chosen range. Source domain parameters are also given in Table 3. For each source domain, we collect 16 expert demonstrations.\nGym CartPole-V0: We vary the force magnitude in the continuous range [−1, 1] in our training setting. Note that the force magnitude can take negative values, which flips the force direction.\n3 Mujoco Environments: Hopper, HalfCheetah, and Ant: With these three environments, we vary 2D dynamics parameters: the gravity x-component and friction." }, { "heading": "4.2 ADAIL ON SIMULATED CONTROL TASKS", "text": "" }, { "heading": "Is the dynamics posterior component effective under large dynamics shifts?", "text": "We first demonstrate the effectiveness of the dynamics posterior under large dynamics shifts on a toy Gym environment, CartPole, by varying the 1D force magnitude. As the direction of the force changes, blindly mimicking the demonstrations collected from the source domain (Fm = 1.0) would not work on target domains with Fm < 0.0. This result is evident when comparing ADAIL to GAIL with dynamics randomization. As shown in Figure 5a, GAIL with dynamics randomization failed to generalize to Fm < 0.0, whereas ADAIL is able to achieve the same performance as for Fm > 0.0. We also include a comparison with ADAIL-rand, where the policy is conditioned on uniformly random values of the dynamics parameters, which completely breaks the performance across the domains.\nHow does the GRL help improve the robustness of performance across domains?\nTo demonstrate the effectiveness of the GRL in the adversarial imitation learning formulation, we perform a comparative study with and without the GRL on GAIL with dynamics randomization in the Hopper environment.
The results are shown in Figure 5b.\nHow does the overall algorithm work in comparison with baseline methods?\nWe demonstrate the overall performance of ADAIL by applying it to three Mujoco control tasks: HalfCheetah, Ant and Hopper. For each of the Mujoco environments, we vary 2 continuous dynamics parameters and compare the performance of ADAIL with a few baseline methods, including 1) the PPO expert which was used to collect demonstrations; 2) the UP-true algorithm of Yu et al. (2017), which is essentially a PPO policy conditioned on ground truth physics parameters; and 3) GAIL with dynamics randomization, which is unmodified GAIL trained on a variety of environments with varying dynamics. The results of this experiment are shown in Figure 6.\nHalfCheetah The experiments show that 1) as expected, the PPO expert (Plot 6a) has limited adaptability to unseen dynamics; 2) UP-true (Plot 6b) achieves similar performance across test environments; note that since UP-true has access to the ground truth reward signals and its policy is conditioned on ground truth dynamics parameters, Plot 6b shows an approximate expected upper bound for our proposed method, as we do not assume access to reward signals during policy training, or to ground truth physics parameters at policy evaluation time; 3) GAIL with dynamics randomization (Plot ??) can generalize to some extent, but fails to achieve the demonstrated performance in the source environment (gravity x = 0.0, friction = 0.5); 4) Plots 9f and 9g show the evaluation of the proposed method ADAIL with the policy conditioned on ground truth and predicted physics parameters, respectively; ADAIL matches the expert performance in the source environment (gravity x = 0.0, friction = 0.5) and generalizes to unseen dynamics. In particular, when the environment dynamics favor the task, the adaptive agent is able to obtain even higher performance (around friction = 1.2, gravity = 2).\nAnt and Hopper. We again show favorable performance on both Ant and Hopper in Figure 6.\nHow does the algorithm generalize to unseen environments?\nTo understand how ADAIL generalizes to environments not sampled at training time, we perform a suite of studies in which the agent is only allowed to interact with a limited set of environments. Figure 7 shows the performance of ADAIL in different settings, where a 5×5 region of environment parameters, including the expert source environment, is “blacked out”. This case is particularly challenging since the policy is not allowed to access the domain from which the expert demonstrations were collected, and so our dynamics-invariant discriminator is essential. For additional held-out experiments see Appendix A.5.\nThe experiments show that 1) without training on the source environment, ADAIL with the ground truth parameters tends to have performance drops in the blackout region but largely is able to generalize (Figure 7a); 2) the posterior's RMSE rises in the blackout region (Figure 7c); and 3) consequently, ADAIL with the predicted dynamics parameters suffers from the posterior error in the blackout region (Figure 7b).\nHow does the unsupervised version of the algorithm perform?\nVAE-ADAIL on HalfCheetah. With the goal of understanding the characteristics of the learned dynamics latent embedding through the unsupervised method and its impact on the overall algorithm, as a proof of concept we apply VAE-ADAIL to the HalfCheetah environment, varying a 1D continuous dynamics parameter, friction. The performance is shown in Figure 8.
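The heatmap evaluations used throughout these experiments can be summarized by the following sketch; warmup_rollout and rollout_return are hypothetical helpers standing in for environment interaction, and the 10-episode averaging follows the Figure 9 caption:

import numpy as np

def evaluate_grid(policy, posterior, env_gen, gravities, frictions, episodes=10):
    # For each (gravity_x, friction) cell, condition the policy on the
    # posterior's dynamics estimate and average the return over rollouts.
    scores = np.zeros((len(gravities), len(frictions)))
    for i, g in enumerate(gravities):
        for j, f in enumerate(frictions):
            env = env_gen([g, f])
            returns = []
            for _ in range(episodes):
                c_hat = posterior.predict(warmup_rollout(policy, env))
                returns.append(rollout_return(policy, env, cond=c_hat))
            scores[i, j] = np.mean(returns)
    return scores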
}, { "heading": "5 CONCLUSION", "text": "In this work we proposed the ADaptive Adversarial Imitation Learning (ADAIL) algorithm for learning adaptive control policies from a limited number of expert demonstrations. We demonstrated the effectiveness of ADAIL on two challenging MuJoCo test suites and compared against recent SoTA. We showed that ADAIL extends the generalization capacities of policies to unseen environments, and we proposed a variant of our algorithm, VAE-ADAIL, that does not require environment dynamics labels at training time. We will release the code to aid in reproduction upon publication." }, { "heading": "A APPENDIX", "text": "A.1 HUBER LOSS FOR DYNAMICS EMBEDDING LOSS\nWe use the following loss function when training the dynamics embedding posterior:\nLδ(c,Qφ(τ)) =\n{ 1 2 (c−Qφ(τ))\n2 for |c−Qφ(τ)| < δ δ|c−Qφ(τ)| − 12δ 2 otherwise (5)\nWhere δ controls the joint position between L2 and L1 penalty in Huber loss.\nLemma 1. Minimizing the above Huber loss is equivalent to maximizing the log likelihood, logP (c|τ), assuming P (c|τ) is distributed as a Gaussian distribution when |c−Qφ(τ)| < δ, and as a Laplace distribution otherwise. See appendix A.2 for the proof.\nA.2 LEMMA 1 PROOF\nProof. For |c−Qφ(τ)| < δ,\nlogP (c|τ) = log 1√ 2πσ1 e − (c−Q(τ))\n2\n2σ21 σ1 is a positive constant (6)\nlogP (c|τ) = log 1√ 2πσ1 − 1 2σ21 (c−Q(τ))2 (7)\n∇ logP (c|τ) =∇(log 1√ 2πσ1 − 1 2σ21 (c−Q(τ))2) (8)\n=− C1∇ 1\n2 (c−Q(τ))2 C1 is a positive constant (9)\n=− C1∇Lδ(c,Qφ(τ)) (10)\nLikewise, we can prove for |c−Qφ(τ)| ≥ δ.\nA.3 VAE-ADAIL ALGORITHM\nAlgorithm 2 VAE-ADAIL 1: Inputs: 2: An environment class E. 3: Initial parameters of policy θ, discriminator ω, and dynamics posterior φ, ψ. 4: A set of expert demonstrations {τexp} on one of the environment eexp ∈ E. 5: for i = 1, 2, .. do 6: Sample environment e ∈ E. 7: Sample trajectories τi ∼ πθ(·|Qφ) in e and τei ∼ {τexp} 8: Update the discriminator parameters ω with the gradients\nÊ(s,a)∼τi [∇w log(Dw(s, a))] + Ê(s,a)∼τei [∇w log(1−Dw(s, a)] 9: Update the posterior parameters φ, ψ with the objective described in Eq (3) & (4)\n10: Update policy πθ(·|c) using policy optimization method (TRPO/PPO) with: Ê(s,a)∼τi [− log(1−Dω(s, a))]\n11: Output: Learned policy πθ , and posterior Qφ.\nA.4 HALFCHEETAH ADAIL PERFORMANCE COMPARISON\nFigure 9: Comparing ADAIL with a few baselines on HalfCheetah. Each plot is a heatmap that demonstrates the performance of an algorithm in environments with different dynamics. Each cell of the plot shows 10 episodes averaged cumulative rewards on a particular 2D range of dynamics.\n(a) PPO Expert (2991.23± 2020.93)\n(b) GAIL (2189.76± 2110.70)\n(c) GAIL-rand (3182.72± 1753.86) (d) State-only GAIL-rand (3301.20± 1350.29)\n(e) UP-true (3441.76± 1248.77)\n(f) ADAIL-true (4419.75± 1493.54)\n(g) ADAIL-pred (4283.20± 1569.31)\n(h) Posterior RMSE (1.03± 0.36)\nA.5 HELD-OUT ENVIRONMENT EXPERIMENT\nA.6 HYPERPARAMETERS\nA.6.1 ADAIL\nWe used fully connected neural networks with 2 hidden layers for all three components of the system. The Network hyperparameters for each of the test environments with 2D dynamics parameters are shown in the following table. For UP-True and GAIL-rand, we use the same set of hyperparameters.\nA.6.2 VAE-ADAIL\nHere we show the network architectures and learning rates for VAE-ADAIL." } ]
2019
null
SP:0c6c9db564f0029c12c1a1e16373970eeeb800d4
[ "This paper proposes a novel use of mixup, which is originally a data augmentation method incorporating two training samples and their corresponding labels. The authors utilize mixup not for training but for inference (MI; Mixup Inference). Experimental results on Cifar 10, and Cifar 100 show that MI can boost the classification performance in combination with interpolated AT (Adversarial Training) and mixup.", "This paper introduces a novel method for an adversarial attack named mixup inference (MI). Most of the work focuses on embedding mixup mechanism in the training phase, but MI uses the mixup in the inference phase. MI method has two main effects for the adversarial attack: one is perturbation shrinkage, and the other one is input transfer because MI can exploit" ]
It has been widely recognized that adversarial examples can be easily crafted to fool deep networks, a weakness that mainly roots from the locally unreasonable behavior near input examples. Applying mixup in training provides an effective mechanism to improve generalization performance and model robustness against adversarial perturbations, by introducing globally linear behavior in-between training examples. However, in previous work, mixup-trained models only passively defend against adversarial attacks at inference by directly classifying the inputs, where the induced global linearity is not well exploited. Namely, because of the locality of the adversarial perturbations, it would be more efficient to actively break the locality via the globality of the model predictions. Inspired by simple geometric intuition, we develop an inference principle, named mixup inference (MI), for mixup-trained models. MI mixes up the input with other random clean samples, which can shrink and transfer the equivalent perturbation if the input is adversarial. Our experiments on CIFAR-10 and CIFAR-100 demonstrate that MI can further improve the adversarial robustness of models trained by mixup and its variants.
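A minimal sketch of the MI idea described in this abstract, under the assumption that class probabilities are averaged over random mixup partners; the mixing ratio and number of samples are illustrative:

import torch

@torch.no_grad()
def mixup_inference(model, x, pool, lam=0.6, n_samples=30):
    # Classify mixups of the input with randomly drawn clean samples from
    # `pool` (a tensor of clean examples) and average the predictions.
    probs = 0.0
    for _ in range(n_samples):
        idx = torch.randint(len(pool), (x.shape[0],))
        x_mix = lam * x + (1.0 - lam) * pool[idx]
        probs = probs + model(x_mix).softmax(dim=-1)
    return probs / n_samples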
[ { "affiliations": [], "name": "Tianyu Pang" }, { "affiliations": [], "name": "Kun Xu" }, { "affiliations": [], "name": "Jun Zhu" } ]
[ { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Christopher Beckham", "Sina Honari", "Alex Lamb", "Vikas Verma", "Farnoosh Ghadiri", "R Devon Hjelm", "Christopher Pal" ], "title": "Adversarial mixup resynthesizers", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "David Berthelot", "Nicholas Carlini", "Ian Goodfellow", "Nicolas Papernot", "Avital Oliver", "Colin Raffel" ], "title": "Mixmatch: A holistic approach to semi-supervised learning", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Adversarial examples are not easily detected: Bypassing ten detection methods", "venue": "In ACM Workshop on Artificial Intelligence and Security (AISec),", "year": 2017 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Audio adversarial examples: Targeted attacks on speech-to-text", "venue": "IEEE Security and Privacy Workshops (SPW),", "year": 2018 }, { "authors": [ "Nicholas Carlini", "Anish Athalye", "Nicolas Papernot", "Wieland Brendel", "Jonas Rauber", "Dimitris Tsipras", "Ian Goodfellow", "Aleksander Madry", "Alexey Kurakin" ], "title": "On evaluating adversarial robustness", "venue": null, "year": 1902 }, { "authors": [ "Hanjun Dai", "Hui Li", "Tian Tian", "Xin Huang", "Lin Wang", "Jun Zhu", "Le Song" ], "title": "Adversarial attack on graph structured data", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Yinpeng Dong", "Fangzhou Liao", "Tianyu Pang", "Hang Su", "Jun Zhu", "Xiaolin Hu", "Jianguo Li" ], "title": "Boosting adversarial attacks with momentum", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Logan Engstrom", "Brandon Tran", "Dimitris Tsipras", "Ludwig Schmidt", "Aleksander Madry" ], "title": "A rotation and a translation suffice: Fooling cnns with simple transformations", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Alhussein Fawzi", "Seyed-Mohsen Moosavi-Dezfooli", "Pascal Frossard" ], "title": "Robustness of classifiers: from adversarial to random noise", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2016 }, { "authors": [ "Alhussein Fawzi", "Hamza Fawzi", "Omar Fawzi" ], "title": "Adversarial vulnerability for any classifier", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Chuan Guo", "Mayank Rana", "Moustapha Cisse", "Laurens Van Der Maaten" ], "title": "Countering adversarial images using input transformations", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Hongyu Guo", "Yongyi Mao", "Richong Zhang" ], "title": "Mixup as locally linear out-of-manifold regularization", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI),", "year": 2019 }, { "authors": [ "Kaiming He", 
"Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2016 }, { "authors": [ "Kurt Hornik", "Maxwell Stinchcombe", "Halbert White" ], "title": "Multilayer feedforward networks are universal approximators", "venue": "Neural networks,", "year": 1989 }, { "authors": [ "Sandy Huang", "Nicolas Papernot", "Ian Goodfellow", "Yan Duan", "Pieter Abbeel" ], "title": "Adversarial attacks on neural network policies", "venue": "arXiv preprint arXiv:1702.02284,", "year": 2017 }, { "authors": [ "Hiroshi Inoue" ], "title": "Data augmentation by pairing samples for images classification", "venue": "arXiv preprint arXiv:1801.02929,", "year": 2018 }, { "authors": [ "Di Jin", "Zhijing Jin", "Tianyi Zhou", "Peter Szolovits" ], "title": "Is bert really robust? natural language attack on text classification and entailment", "venue": null, "year": 1907 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial examples in the physical world", "venue": "In The International Conference on Learning Representations (ICLR) Workshops,", "year": 2017 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio", "Yinpeng Dong", "Fangzhou Liao", "Ming Liang", "Tianyu Pang", "Jun Zhu", "Xiaolin Hu", "Cihang Xie" ], "title": "Adversarial attacks and defences competition", "venue": "arXiv preprint arXiv:1804.00097,", "year": 2018 }, { "authors": [ "Alex Lamb", "Vikas Verma", "Juho Kannala", "Yoshua Bengio" ], "title": "Interpolated adversarial training: Achieving robust neural networks without sacrificing accuracy", "venue": null, "year": 1906 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Anh Nguyen", "Jason Yosinski", "Jeff Clune" ], "title": "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2015 }, { "authors": [ "Tianyu Pang", "Chao Du", "Yinpeng Dong", "Jun Zhu" ], "title": "Towards robust detection of adversarial examples", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Tianyu Pang", "Chao Du", "Jun Zhu" ], "title": "Max-mahalanobis linear discriminant analysis networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Ning Qian" ], "title": "On the momentum term in gradient descent learning algorithms", "venue": "Neural networks,", "year": 1999 }, { "authors": [ "Edward Raff", "Jared Sylvester", "Steven Forsyth", "Mark McLean" ], "title": "Barrage of random transforms for adversarially robust defense", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Ali Shafahi", "Mahyar Najibi", "Amin Ghiasi", "Zheng Xu", "John Dickerson", "Christoph Studer", "Larry S Davis", "Gavin Taylor", "Tom Goldstein" ], "title": "Adversarial training for free", "venue": "In Advances in Neural Information Processing 
Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Takuya Shimada", "Shoichiro Yamaguchi", "Kohei Hayashi", "Sosuke Kobayashi" ], "title": "Data interpolating prediction: Alternative interpretation of mixup", "venue": "arXiv preprint arXiv:1906.08412,", "year": 2019 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Pedro Tabacof", "Eduardo Valle" ], "title": "Exploring the space of adversarial images", "venue": "In 2016 International Joint Conference on Neural Networks (IJCNN),", "year": 2016 }, { "authors": [ "Yuji Tokozume", "Yoshitaka Ushiku", "Tatsuya Harada" ], "title": "Learning from between-class examples for deep sound recognition", "venue": "International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Yuji Tokozume", "Yoshitaka Ushiku", "Tatsuya Harada" ], "title": "Between-class learning for image classification", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Vladimir Vapnik" ], "title": "The nature of statistical learning theory", "venue": "Springer science & business media,", "year": 2013 }, { "authors": [ "Vikas Verma", "Alex Lamb", "Christopher Beckham", "Aaron Courville", "Ioannis Mitliagkis", "Yoshua Bengio" ], "title": "Manifold mixup: Encouraging meaningful on-manifold interpolation as a regularizer", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Vikas Verma", "Alex Lamb", "Juho Kannala", "Yoshua Bengio", "David Lopez-Paz" ], "title": "Interpolation consistency training for semi-supervised learning", "venue": "arXiv preprint arXiv:1903.03825,", "year": 2019 }, { "authors": [ "Cihang Xie", "Jianyu Wang", "Zhishuai Zhang", "Zhou Ren", "Alan Yuille" ], "title": "Mitigating adversarial effects through randomization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric P Xing", "Laurent El Ghaoui", "Michael I Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Lamb" ], "title": "combines AT with mixup. Interpolated AT trains on interpolations of adversarial examples along with interpolations of unperturbed examples (cf. 
Alg", "venue": null, "year": 2019 }, { "authors": [ "Mixup + Xie" ], "title": "Rotation degree range [−40◦", "venue": "Mixup + Guo et al", "year": 2018 }, { "authors": [ "Interpolated AT + Xie" ], "title": "Rotation degree range [−30◦", "venue": "Interpolated AT + Guo et al", "year": 2018 }, { "authors": [ "Following Athalye" ], "title": "2018), we design the adaptive attacks for our MI method", "venue": null, "year": 2018 }, { "authors": [ "Mixup + Xie" ], "title": "Rotation degree range [−20◦", "venue": "Mixup + Guo et al", "year": 2018 }, { "authors": [ "Interpolated AT + Xie" ], "title": "Rotation degree range [−20◦", "venue": "Interpolated AT + Guo et al", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks (DNNs) have achieved state-of-the-art performance on various tasks (Goodfellow et al., 2016). However, counter-intuitive adversarial examples generally exist in different domains, including computer vision (Szegedy et al., 2014), natural language processing (Jin et al., 2019), reinforcement learning (Huang et al., 2017), speech (Carlini & Wagner, 2018) and graph data (Dai et al., 2018). As DNNs are being widely deployed, it is imperative to improve model robustness and defend adversarial attacks, especially in safety-critical cases. Previous work shows that adversarial examples mainly root from the locally unstable behavior of classifiers on the data manifolds (Goodfellow et al., 2015; Fawzi et al., 2016; 2018; Pang et al., 2018b), where a small adversarial perturbation in the input space can lead to an unreasonable shift in the feature space.\nOn the one hand, many previous methods try to solve this problem in the inference phase, by introducing transformations on the input images. These attempts include performing local linear transformation like adding Gaussian noise (Tabacof & Valle, 2016), where the processed inputs are kept nearby the original ones, such that the classifiers can maintain high performance on the clean inputs. However, as shown in Fig. 1(a), the equivalent perturbation, i.e., the crafted adversarial perturbation, is still δ and this strategy is easy to be adaptively evaded since the randomness of x0 w.r.t x0 is local (Athalye et al., 2018). Another category of these attempts is to apply various non-linear transformations, e.g., different operations of image processing (Guo et al., 2018; Xie et al., 2018; Raff et al., 2019). They are usually off-the-shelf for different classifiers, and generally aim to disturb the adversarial perturbations, as shown in Fig. 1(b). Yet these methods are not quite reliable since there is no illustration or guarantee on to what extent they can work.\nOn the other hand, many efforts have been devoted to improving adversarial robustness in the training phase. For examples, the adversarial training (AT) methods (Madry et al., 2018; Zhang et al., 2019; Shafahi et al., 2019) induce locally stable behavior via data augmentation on adversarial examples. However, AT methods are usually computationally expensive, and will often degenerate model\n∗Equal contribution. †Corresponding author.\nperformance on the clean inputs or under general-purpose transformations like rotation (Engstrom et al., 2019). In contrast, the mixup training method (Zhang et al., 2018) introduces globally linear behavior in-between the data manifolds, which can also improve adversarial robustness (Zhang et al., 2018; Verma et al., 2019a). Although this improvement is usually less significant than it resulted by AT methods, mixup-trained models can keep state-of-the-art performance on the clean inputs; meanwhile, the mixup training is computationally more efficient than AT. The interpolated AT method (Lamb et al., 2019) also shows that the mixup mechanism can further benefit the AT methods.\nHowever, most of the previous work only focuses on embedding the mixup mechanism in the training phase, while the induced global linearity of the model predictions is not well exploited in the inference phase. 
Compared to passive defense by directly classifying the inputs (Zhang et al., 2018; Lamb et al., 2019), it would be more effective to actively defend against adversarial attacks by breaking their locality via the globally linear behavior of the mixup-trained models. In this paper, we develop an inference principle for mixup-trained models, named mixup inference (MI). In each execution, MI performs a global linear transformation on the inputs, which mixes the input x with a sampled clean example xs, i.e., x̃ = λx + (1 − λ)xs (detailed in Alg. 1), and feeds x̃ into the classifier as the processed input. There are two basic mechanisms for robustness improvement under the MI operation (detailed in Sec. 3.2.1), which can be illustrated by simple geometric intuition in Fig. 1(c). One is perturbation shrinkage: if the input is adversarial, i.e., x = x0 + δ, the perturbation δ shrinks by a factor of λ (exactly the mixup ratio of MI) after performing MI, which follows from the similarity of triangles. The other is input transfer: after the MI operation, the reduced perturbation λδ acts on a random x̃0. Compared to the spatially or semantically local randomness introduced by Gaussian noise or image processing, x̃0 introduces spatially global and semantically diverse randomness w.r.t. x0. This makes it less effective to perform adaptive attacks against MI (Athalye et al., 2018). Furthermore, the global linearity of the mixup-trained models ensures that the information of x0 remaining in x̃0 is proportional to λ, such that the identity of x0 can be recovered from the statistics of x̃0.

In experiments, we evaluate MI on CIFAR-10 and CIFAR-100 (Krizhevsky & Hinton, 2009) under the oblivious attacks (Carlini & Wagner, 2017) and the adaptive attacks (Athalye et al., 2018). The results demonstrate that our MI method is efficient at defending against adversarial attacks in inference, and is also compatible with other variants of mixup, e.g., the interpolated AT method (Lamb et al., 2019). Note that Shimada et al. (2019) also propose to mix up the input points in the test phase, but they do not consider their method from the perspective of adversarial robustness." }, { "heading": "2 PRELIMINARIES", "text": "In this section, we first introduce the notation used in this paper, and then provide the formulation of mixup in training. We introduce the adversarial attacks and threat models in Appendix A.1." }, { "heading": "2.1 NOTATIONS", "text": "Given an input-label pair (x, y), a classifier F returns the softmax prediction vector F(x) and the predicted label ŷ = arg maxj∈[L] Fj(x), where L is the number of classes and [L] = {1, · · · , L}. The classifier F makes a correct prediction on x if y = ŷ. In the adversarial setting, we augment the data pair (x, y) to a triplet (x, y, z) with an extra binary variable z, i.e.,

z = { 1, if x is adversarial; 0, if x is clean. (1)

The variable z is usually considered hidden in the inference phase, so an input x (either clean or adversarially corrupted) can be generally denoted as x = x0 + δ · 1z=1. Here x0 is a clean sample from the data manifold p(x) with label y0, 1z=1 is the indicator function, and δ is a potential perturbation crafted by adversaries. It is worth noting that the perturbation δ should not change the true label of the input, i.e., y = y0. For ℓp-norm adversarial attacks (Kurakin et al., 2017; Madry et al., 2018), we have ‖δ‖p ≤ ε, where ε is a preset threshold. 
Based on the assumption that adversarial examples are off the data manifolds, we formally have x0 + δ /∈ supp(p(x)) (Pang et al., 2018a)." }, { "heading": "2.2 MIXUP IN TRAINING", "text": "In supervised learning, the most commonly used training mechanism is the empirical risk minimization (ERM) principle (Vapnik, 2013), which minimizes 1n ∑n i=1 L(F (xi), yi) on the training dataset D = {(xi, yi)}ni=1 with the loss function L. While computationally efficient, ERM could lead to memorization of data (Zhang et al., 2017) and weak adversarial robustness (Szegedy et al., 2014).\nAs an alternative, Zhang et al. (2018) introduce the mixup training mechanism, which minimizes 1 m ∑m j=1 L(F (x̃j), ỹj). Here x̃j = λxj0 + (1−λ)xj1; ỹj = λyj0 + (1−λ)yj1, the input-label pairs (xj0, yj0) and (xj1, yj1) are randomly sampled from the training dataset, λ ∼ Beta(α, α) and α is a hyperparameter. Training by mixup will induce globally linear behavior of models in-between data manifolds, which can empirically improve generalization performance and adversarial robustness (Zhang et al., 2018; Tokozume et al., 2018a;b; Verma et al., 2019a;b). Compared to the adversarial training (AT) methods (Goodfellow et al., 2015; Madry et al., 2018), trained by mixup requires much less computation and can keep state-of-the-art performance on the clean inputs." }, { "heading": "3 METHODOLOGY", "text": "Although the mixup mechanism has been widely shown to be effective in different domains (Berthelot et al., 2019; Beckham et al., 2019; Verma et al., 2019a;b), most of the previous work only focuses on embedding the mixup mechanism in the training phase, while in the inference phase the global linearity of the trained model is not well exploited. Compared to passively defending adversarial examples by directly classifying them, it would be more effective to actively utilize the globality of mixup-trained models in the inference phase to break the locality of adversarial perturbations." }, { "heading": "3.1 MIXUP INFERENCE", "text": "The above insight inspires us to propose the mixup inference (MI) method, which is a specialized inference principle for the mixup-trained models. In the following, we apply colored y, ŷ and ys to visually distinguish different notations. Consider an input triplet (x, y, z), where z is unknown in advance. When directly feeding x into the classifier F , we can obtain the predicted label ŷ. In the adversarial setting, we are only interested in the cases where x is correctly classified by F if it is clean, or wrongly classified if it is adversarial (Kurakin et al., 2018). This can be formally denoted as\n1y 6=ŷ = 1z=1. (2)\nThe general mechanism of MI works as follows. Every time we execute MI, we first sample a label ys ∼ ps(y), then we sample xs from ps(x|ys) and mixup it with x as x̃ = λx+ (1− λ)xs. ps(x, y) denotes the sample distribution, which is constrained to be on the data manifold, i.e., supp(ps(x)) ⊂ supp(p(x)). In practice, we execute MI for N times and average the output predictions to obtain FMI(x), as described in Alg. 1. Here we fix the mixup ratio λ in MI as a hyperparameter, while similar properties hold if λ comes from certain distribution." 
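To make Alg. 1 concrete, the following is a minimal PyTorch-style sketch of the MI procedure for a single input. It is not the authors' released implementation; the helper name mixup_inference, the pool_x/pool_y sample pool, and the assumption that the model returns logits are ours.

```python
import torch

@torch.no_grad()
def mixup_inference(model, x, pool_x, pool_y, lam=0.5, n_exec=30, mode="OL"):
    """Sketch of MI (Alg. 1) for a single input x of shape (C, H, W).

    pool_x: (M, C, H, W) clean examples to sample x_s from; pool_y: (M,) labels.
    mode "PL": sample x_s from the predicted label; "OL": from the other labels.
    """
    y_hat = model(x.unsqueeze(0)).argmax(dim=1).item()    # prediction on the raw input
    mask = (pool_y == y_hat) if mode == "PL" else (pool_y != y_hat)
    candidates = pool_x[mask]
    pred = 0.0
    for _ in range(n_exec):
        idx = torch.randint(len(candidates), (1,)).item()  # x_s ~ p_s(x | y_s)
        x_tilde = lam * x + (1.0 - lam) * candidates[idx]  # global linear transform
        pred = pred + model(x_tilde.unsqueeze(0)).softmax(dim=1) / n_exec
    return pred  # averaged prediction F_MI(x)
```

Note that for MI-OL with exactly uniform ys over the other labels, one would first draw ys from [L] \ {ŷ} and then sample xs within that class; the pool-based sampling above matches this only when the pool is class-balanced.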
}, { "heading": "3.2 THEORETICAL ANALYSES", "text": "Theoretically, with unlimited capability and sufficient clean samples, a well mixup-trained model F can be denoted as a linear function H on the convex combinations of clean examples (Hornik et al., 1989; Guo et al., 2019), i.e., ∀xi, xj ∼ p(x) and λ ∈ [0, 1], there is\nH(λxi + (1− λ)xj) = λH(xi) + (1− λ)H(xj). (3)\nAlgorithm 1 Mixup Inference (MI) Input: The mixup-trained classifier F ; the input x. Hyperparameters: The sample distribution ps; the mixup ratio λ; the number of execution N . Initialize FMI(x) = 0; for k = 1 to N do\nSample ys,k ∼ ps(ys), xs,k ∼ ps(xs|ys,k); Mixup x with xs,k as x̃k = λx+ (1− λ)xs,k; Update FMI(x) = FMI(x) + 1N F (x̃k);\nend for Return: The prediction FMI(x) of input x.\nSpecially, we consider the case where the training objective L is the cross-entropy loss, then H(xi) should predict the one-hot vector of label yi, i.e., Hy(xi) = 1y=yi . If the input x = x0 + δ is adversarial, then there should be an extra non-linear part G(δ;x0) of F , since x is off the data manifolds. Thus for any input x, the prediction vector can be compactly denoted as\nF (x) = F (x0 + δ · 1z=1) = H(x0) +G(δ;x0) · 1z=1. (4)\nAccording to Eq. (3) and Eq. (4), the output of x̃ in MI is given by:\nF (x̃) = H(x̃0) +G(λδ; x̃0) · 1z=1 = λH(x0) + (1− λ)H(xs) +G(λδ; x̃0) · 1z=1,\n(5)\nwhere x̃0 = λx0 + (1− λ)xs is a virtual unperturbed counterpart of x̃ as shown in Fig. 1(c). Note that FMI(x) in Alg. 1 is a Monte Carlo approximation of Eps [F (x̃)] as\nFMI(x) = 1\nN N∑ i=1 F (x̃i) ∞−→ Eps [F (x̃)], (6)\nwhere ∞−→ represents the limitation when the execution timesN →∞. Now we separately investigate the y-th and ŷ-th (could be the same one) components of F (x̃) according to Eq. (5), and see how these two components differ from those of F (x). These two components are critical because they decide whether we can correctly classify or detect adversarial examples (Goodfellow et al., 2016). Note that there is Hy(x0) = 1 and Hys(xs) = 1, thus we have the y-th components as\nFy(x) = 1 +Gy(δ;x0) · 1z=1; Fy(x̃) = λ+ (1− λ) · 1y=ys +Gy(λδ; x̃0) · 1z=1.\n(7)\nFurthermore, according to Eq. (2), there is 1y=ŷ = 1z=0. We can represent the ŷ-th components as\nFŷ(x) = 1z=0 +Gŷ(δ;x0) · 1z=1; Fŷ(x̃) = λ · 1z=0 + (1− λ) · 1ŷ=ys +Gŷ(λδ; x̃0) · 1z=1.\n(8)\nFrom the above formulas we can find that, except for the hidden variable z, the sampling label ys is another variable which controls the MI output F (x̃) for each execution. Different distributions of sampling ys result in different versions of MI. Here we consider two easy-to-implement cases:\nMI with predicted label (MI-PL): In this case, the sampling label ys is the same as the predicted label ŷ, i.e., ps(y) = 1y=ŷ is a Dirac distribution on ŷ.\nMI with other labels (MI-OL): In this case, the label ys is uniformly sampled from the labels other than ŷ, i.e., ps(y) = Uŷ(y) is a discrete uniform distribution on the set {y ∈ [L]|y 6= ŷ}. We list the simplified formulas of Eq. (7) and Eq. (8) under different cases in Table 1 for clear representation. With the above formulas, we can evaluate how the model performance changes with and without MI by focusing on the formula of\n∆F (x; ps) = FMI(x)− F (x) ∞−→ Eps [F (x̃)]− F (x). 
(9)\nSpecifically, in the general-purpose setting where we aim to correctly classify adversarial examples (Madry et al., 2018), we claim that the MI method improves the robustness if the prediction\nvalue on the true label y increases while it on the adversarial label ŷ decreases after performing MI when the input is adversarial (z = 1). This can be formally denoted as\n∆Fy(x; ps)|z=1 > 0; ∆Fŷ(x; ps)|z=1 < 0. (10)\nWe refer to this condition in Eq. (10) as robustness improving condition (RIC). Further, in the detection-purpose setting where we want to detect the hidden variable z and filter out adversarial inputs, we can take the gap of the ŷ-th component of predictions before and after the MI operation, i.e., ∆Fŷ(x; ps) as the detection metric (Pang et al., 2018a). To formally measure the detection ability on z, we use the detection gap (DG), denoted as\nDG = ∆Fŷ(x; ps)|z=1 −∆Fŷ(x; ps)|z=0. (11)\nA higher value of DG indicates that ∆Fŷ(x; ps) is better as a detection metric. In the following sections, we specifically analyze the properties of different versions of MI according to Table 1, and we will see that the MI methods can be used and benefit in different defense strategies." }, { "heading": "3.2.1 MIXUP INFERENCE WITH PREDICTED LABEL", "text": "In the MI-PL case, when the input is clean (i.e., z = 0), there is F (x) = F (x̃), which means ideally the MI-PL operation does not influence the predictions on the clean inputs. When the input is adversarial (i.e., z = 1), MI-PL can be applied as a general-purpose defense or a detection-purpose defense, as we separately introduce below:\nGeneral-purpose defense: If MI-PL can improve the general-purpose robustness, it should satisfy RIC in Eq. (10). By simple derivation and the results of Table 1, this means that\nExs∼ps(x|ŷ) [Gk(δ;x0)−Gk(λδ; x̃0)] { > 1− λ, if k = ŷ, < λ− 1, if k = y. (12)\nSince an adversarial perturbation usually suppress the predicted confidence on the true label and promote it on the target label (Goodfellow et al., 2015), there should beGŷ(δ; x̃0) > 0 andGy(δ; x̃0) < 0. Note that the left part of Eq. (12) can be decomposed into\nExs∼ps(x|ŷ) [Gk(δ;x0)−Gk(δ; x̃0)]︸ ︷︷ ︸ input transfer +Exs∼ps(x|ŷ) [Gk(δ; x̃0)−Gk(λδ; x̃0)]︸ ︷︷ ︸ perturbation shrinkage . (13)\nHere Eq. (13) indicates the two basic mechanisms of the MI operations defending adversarial attacks, as shown in Fig. 1(c). The first mechanism is input transfer, i.e., the clean input that the adversarial perturbation acts on transfers from the deterministic x0 to stochastic x̃0. Compared to the Gaussian noise or different image processing methods which introduce spatially or semantically local randomness, the stochastic x̃0 induces spatially global and semantically diverse randomness. This will make it harder to perform an adaptive attack in the white-box setting (Athalye et al., 2018).\nThe second mechanism is perturbation shrinkage, where the original perturbation δ shrinks by a factor λ. This equivalently shrinks the perturbation threshold since ‖λδ‖p = λ‖δ‖p ≤ λ , which means that MI generally imposes a tighter upper bound on the potential attack ability for a crafted perturbation. Besides, empirical results in previous work also show that a smaller perturbation threshold largely weakens the effect of attacks (Kurakin et al., 2018). Therefore, if an adversarial attack defended by these two mechanisms leads to a prediction degradation as in Eq. (12), then applying MI-PL would improve the robustness against this adversarial attack. 
Similar properties also hold for MI-OL, as described in Sec. 3.2.2. In Fig. 2, we empirically demonstrate that most of the existing adversarial attacks, e.g., the PGD attack (Madry et al., 2018), satisfy these properties.

Detection-purpose defense: According to Eq. (11), the formula of DG for MI-PL is

DGMI-PL = Exs∼ps(x|ŷ)[Gŷ(δ;x0) − Gŷ(λδ; x̃0)] − (1 − λ). (14)

By comparing Eq. (12) and Eq. (14), we find that they are consistent with each other, which means that, for a given adversarial attack, if MI-PL can better defend against it in the general-purpose setting, then ideally MI-PL can also better detect the crafted adversarial examples." }, { "heading": "3.2.2 MIXUP INFERENCE WITH OTHER LABELS", "text": "As for MI-OL, when the input is clean (z = 0), the optimal clean prediction degenerates to Fy(x̃) = Fŷ(x̃) = λ, since the sampled xs does not come from the true label y. As compensation, MI-OL can better improve robustness compared to MI-PL when the input is adversarial (z = 1), since the sampled xs also does not come from the adversarial label ŷ in this case.

General-purpose defense: Note that in the MI-OL formulas of Table 1, there is a term 1y=ys. Since we uniformly select ys from the set [L] \ {ŷ}, we have E(1y=ys) = 1/(L−1). According to the RIC, MI-OL can improve robustness against adversarial attacks if

Eys∼Uŷ(y)Exs∼ps(x|ys)[Gk(δ;x0) − Gk(λδ; x̃0)] { > 0, if k = ŷ; < (λ−1)(L−2)/(L−1), if k = y. (15)

Note that the condition in Eq. (15) is strictly looser than that in Eq. (12), which means MI-OL can defend against a broader range of attacks than MI-PL, as verified in Fig. 2.

Detection-purpose defense: According to Eq. (11) and Table 1, the DG for MI-OL is

DGMI-OL = Eys∼Uŷ(y)Exs∼ps(x|ys)[Gŷ(δ;x0) − Gŷ(λδ; x̃0)] − (1 − λ). (16)

It is interesting to note that DGMI-PL = DGMI-OL; thus the two variants of MI have the same theoretical performance in detection-purpose defenses. However, in practice we find that MI-PL performs better than MI-OL in detection, since empirically mixup-trained models cannot induce ideal global linearity (cf. Fig. 2 in Zhang et al. (2018)). Besides, according to Eq. (6), to statistically make sure that the clean inputs will be correctly classified after MI-OL, there should be, ∀k ∈ [L] \ {y},

Eys∼Uŷ(y)Exs∼ps(x|ys)[Fy − Fk] > 0 =⇒ λ > 1/L. (17)" }, { "heading": "4 EXPERIMENTS", "text": "In this section, we provide the experimental results on CIFAR-10 and CIFAR-100 (Krizhevsky & Hinton, 2009) to demonstrate the effectiveness of our MI methods in defending against adversarial attacks. Our code is available at https://github.com/P2333/Mixup-Inference." }, { "heading": "4.1 SETUP", "text": "In training, we use ResNet-50 (He et al., 2016) and apply the momentum SGD optimizer (Qian, 1999) on both CIFAR-10 and CIFAR-100. We run the training for 200 epochs with a batch size of 64. The initial learning rate is 0.01 for ERM, mixup and AT; 0.1 for interpolated AT (Lamb et al., 2019). The learning rate decays by a factor of 0.1 at 100 and 150 epochs. The attack method for AT and interpolated AT is untargeted PGD-10 with ε = 8/255 and step size 2/255 (Madry et al., 2018), and the ratio of clean examples to adversarial ones in each mini-batch is 1:1 (Lamb et al., 2019). The hyperparameter α for mixup and interpolated AT is 1.0 (Zhang et al., 2018). All defenses with randomness are executed 30 times to obtain the averaged predictions (Xie et al., 2018)."
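For reference, the untargeted ℓ∞ PGD-10 attack used above (ε = 8/255, step size 2/255) can be sketched as follows. This is a generic PGD implementation under the assumptions that the model returns logits and inputs lie in [0, 1]; it is not the exact attack code from the paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step=2/255, n_iter=10):
    """Untargeted L_inf PGD (Madry et al., 2018) with a random start."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()                     # ascent on the loss
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                          # keep valid images
    return x_adv.detach()
```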
}, { "heading": "4.2 EMPIRICAL VERIFICATION OF THEORETICAL ANALYSES", "text": "To verify and illustrate our theoretical analyses in Sec. 3, we provide the empirical relationship between the output predictions of MI and the hyperparameter λ in Fig. 2. The notations and formulas annotated in Fig. 2 correspond to those introduced in Sec. 3. We can see that the results follow our theoretical conclusions under the assumption of ideal global linearity. Besides, both MI-PL and MI-OL empirically satisfy RIC in this case, which indicates that they can improve robustness under the untargeted PGD-10 attack on CIFAR-10, as quantitatively demonstrated in the following sections." }, { "heading": "4.3 PERFORMANCE UNDER OBLIVIOUS ATTACKS", "text": "In this subsection, we evaluate the performance of our method under the oblivious-box attacks (Carlini & Wagner, 2017). The oblivious threat model assumes that the adversary is not aware of the existence of the defense mechanism, e.g., MI, and generate adversarial examples based on the unsecured classification model. We separately apply the model trained by mixup and interpolated AT as the classification model. The AUC scores for the detection-purpose defense are given in Fig. 3(a). The results show that applying MI-PL in inference can better detect adversarial attacks, while directly detecting by the returned confidence without MI-PL performs even worse than a random guess.\nWe also compare MI with previous general-purpose defenses applied in the inference phase, e.g., adding Gaussian noise or random rotation (Tabacof & Valle, 2016); performing random padding or resizing after random cropping (Guo et al., 2018; Xie et al., 2018). The performance of our method and baselines on CIFAR-10 and CIFAR-100 are reported in Table 2 and Table 3, respectively. Since for each defense method, there is a trade-off between the accuracy on clean samples and adversarial samples depending on the hyperparameters, e.g., the standard deviation for Gaussian noise, we carefully select the hyperparameters to ensure both our method and baselines keep a similar performance on clean data for fair comparisons. The hyperparameters used in our method and baselines are reported in Table 4 and Table 5. In Fig. 3(b), we further explore this trade-off by grid searching the hyperparameter space for each defense to demonstrate the superiority of our method.\nAs shown in these results, our MI method can significantly improve the robustness for the trained models with induced global linearity, and is compatible with training-phase defenses like the interpolated AT method. As a practical strategy, we also evaluate a variant of MI, called MI-Combined, which applies MI-OL if the input is detected as adversarial by MI-PL with a default detection threshold; otherwise returns the prediction on the original input. We also perform ablation studies of ERM / AT + MI-OL in Table 2, where no global linearity is induced. The results verify that our MI methods indeed exploit the global linearity of the mixup-trained models, rather than simply introduce randomness." }, { "heading": "4.4 PERFORMANCE UNDER WHITE-BOX ADAPTIVE ATTACKS", "text": "Following Athalye et al. (2018), we test our method under the white-box adaptive attacks (detailed in Appendix B.2). Since we mainly adopt the PGD attack framework, which synthesizes adversarial examples iteratively, the adversarial noise will be clipped to make the input image stay within the valid range. 
It results in the fact that with mixup on different training examples, the adversarial perturbation will be clipped differently. To address this issue, we average the generated perturbations over the adaptive samples as the final perturbation. The results of the adversarial accuracy w.r.t the number of adaptive samples are shown in Fig. 4. We can see that even under a strong adaptive attack, equipped with MI can still improve the robustness for the classification models." }, { "heading": "5 CONCLUSION", "text": "In this paper, we propose the MI method, which is specialized for the trained models with globally linear behaviors induced by, e.g., mixup or interpolated AT. As analyzed in Sec. 3, MI can exploit this induced global linearity in the inference phase to shrink and transfer the adversarial perturbation, which breaks the locality of adversarial attacks and alleviate their aggressivity. In experiments, we empirically verify that applying MI can return more reliable predictions under different threat models." }, { "heading": "ACKNOWLEDGEMENTS", "text": "This work was supported by the National Key Research and Development Program of China (No. 2017YFA0700904), NSFC Projects (Nos. 61620106010, U19B2034, U1811461), Beijing NSF Project (No. L172037), Beijing Academy of Artificial Intelligence (BAAI), Tsinghua-Huawei Joint Research Program, a grant from Tsinghua Institute for Guo Qiang, Tiangong Institute for Intelligent Computing, the JP Morgan Faculty Research Program and the NVIDIA NVAIL Program with GPU/DGX Acceleration." }, { "heading": "A MORE BACKGROUNDS", "text": "In this section, we provide more backgrounds which are related to our work in the main text.\nA.1 ADVERSARIAL ATTACKS AND THREAT MODELS\nAdversarial attacks. Although deep learning methods have achieved substantial success in different domains (Goodfellow et al., 2016), human imperceptible adversarial perturbations can be easily crafted to fool high-performance models, e.g., deep neural networks (DNNs) (Nguyen et al., 2015).\nOne of the most commonly studied adversarial attack is the projected gradient descent (PGD) method (Madry et al., 2018). Let r be the number of iteration steps, x0 be the original clean example, then PGD iteratively crafts the adversarial example as\nx∗i = clipx, (x ∗ i−1 + i · sign(∇x∗i−1L(x ∗ i−1, y))), (18)\nwhere clipx, (·) is the clipping function. Here x∗0 is a randomly perturbed image in the neighborhood of x0, i.e., Ů(x0, ), and the finally returned adversarial example is x = x∗r = x0 + δ, following our notations in the main text.\nThreat models. Here we introduce different threat models in the adversarial setting. As suggested in Carlini et al. (2019), a threat model includes a set of assumptions about the adversarys goals, capabilities, and knowledge.\nAdversary’s goals could be simply fooling the classifiers to misclassify, which is referred to as untargeted mode. Alternatively, the goals can be more specific to make the model misclassify certain examples from a source class into a target class, which is referred to as targeted mode. In our experiments, we evaluate under both modes, as shown in Table 2 and Table 3.\nAdversary’s capabilities describe the constraints imposed on the attackers. Adversarial examples require the perturbation δ to be bounded by a small threshold under `p-norm, i.e., ‖δ‖p ≤ . For example, in the PGD attack, we consider under the `∞-norm.\nAdversary’s knowledge describes what knowledge the adversary is assumed to have. 
Typically, there are three settings when evaluating a defense method:\n• Oblivious adversaries are not aware of the existence of the defense D and generate adversarial examples based on the unsecured classification model F (Carlini & Wagner, 2017). • White-box adversaries know the scheme and parameters of D, and can design adaptive\nmethods to attack both the model F and the defense D simultaneously (Athalye et al., 2018). • Black-box adversaries have no access to the parameters of the defense D or the model F\nwith varying degrees of black-box access (Dong et al., 2018).\nIn our experiments, we mainly test under the oblivious setting (Sec. 4.3) and white-box setting (Sec. 4.4), since previous work has already demonstrated that randomness itself is efficient on defending black-box attacks (Guo et al., 2018; Xie et al., 2018).\nA.2 INTERPOLATED ADVERSARIAL TRAINING\nTo date, the most widely applied framework for adversarial training (AT) methods is the saddle point framework introduced in Madry et al. (2018):\nmin θ ρ(θ), where ρ(θ) = E(x,y)∼p[max δ∈S L(x+ δ, y; θ)]. (19)\nHere θ represents the trainable parameters in the classifier F , and S is a set of allowed perturbations. In implementation, the inner maximization problem for each input-label pair (x, y) is approximately solved by, e.g., the PGD method with different random initialization (Madry et al., 2018).\nAs a variant of the AT method, Lamb et al. (2019) propose the interpolated AT method, which combines AT with mixup. Interpolated AT trains on interpolations of adversarial examples along with interpolations of unperturbed examples (cf. Alg. 1 in Lamb et al. (2019)). Previous empirical results demonstrate that interpolated AT can obtain higher accuracy on the clean inputs compared to the AT method without mixup, while keeping the similar performance of robustness." }, { "heading": "B TECHNICAL DETAILS", "text": "We provide more technical details about our method and the implementation of the experiments.\nB.1 MORE DISCUSSION ON THE MI METHOD\nGenerality. According to Sec. 3, except for the mixup-trained models, the MI method is generally compatible with any trained model with induced global linearity. These models could be trained by other methods, e.g., manifold mixup (Verma et al., 2019a; Inoue, 2018; Lamb et al., 2019). Besides, to better defend white-box adaptive attacks, the mixup ratio λ in MI could also be sampled from certain distribution to put in additional randomness.\nEmpirical gap. As demonstrated in Fig. 2, there is a gap between the empirical results and the theoretical formulas in Table 1. This is because that the mixup mechanism mainly acts as a regularization in training, which means the induced global linearity may not satisfy the expected behaviors. To improve the performance of MI, a stronger regularization can be imposed, e.g., training with mixup for more epochs, or applying matched λ both in training and inference.\nB.2 ADAPTIVE ATTACKS FOR MIXUP INFERENCE\nFollowing Athalye et al. (2018), we design the adaptive attacks for our MI method. Specifically, according to Eq. (6), the expected model prediction returned by MI is:\nFMI(x) = Eps [F (λx+ (1− λ)xs)]. (20) Note that generally the λ in MI comes from certain distribution. For simplicity, we fix λ as a hyperparameter in our implementation. Therefore, the gradients of the prediction w.r.t. the input x is:\n∂FMI(x)\n∂x = Eps\n[ ∂F (λx+ (1− λ)xs)\n∂x\n] (21)\n= Eps [ ∂F (u)\n∂u ∣∣∣ u=λx+(1−λ)xs · ∂λx+ (1− λ)xs ∂x ] (22)\n= λEps [ ∂F (u)\n∂u |u=λx+(1−λ)xs\n] . 
(23)

Table 5: The parameter settings for the methods in Table 3. The number of executions for each random method is 30.

Methods | Parameter Settings
Mixup | -
Mixup + Gaussian noise | Noise standard deviation σ = 0.025
Mixup + Random rotation | Rotation degree range [−20◦, 20◦]
Mixup + Xie et al. (2018) | The random crop size is randomly selected from [18, 26]
Mixup + Guo et al. (2018) | The random crop size is randomly selected from [24, 32]
Mixup + MI-OL | λOL = 0.5
Mixup + MI-Combined | λOL = 0.5, λPL = 0.4, detection threshold 0.2
Interpolated AT | -
Interpolated AT + Gaussian noise | Noise standard deviation σ = 0.06
Interpolated AT + Random rotation | Rotation degree range [−20◦, 20◦]
Interpolated AT + Xie et al. (2018) | The random crop size is randomly selected from [22, 30]
Interpolated AT + Guo et al. (2018) | The random crop size is randomly selected from [24, 32]
Interpolated AT + MI-OL | λOL = 0.6

Figure 5: Adversarial examples crafted by adaptive attacks with ε = 16/255 on CIFAR-10, against the defense of Interpolated AT + MI-OL (rows show clean inputs and their adversarial counterparts).

In the implementation of adaptive PGD attacks, we first sample a series of examples {xs,k}_{k=1}^{NA}, where NA is the number of adaptive samples in Fig. 3. Then, according to Eq. (18), the sign of the gradients used in adaptive PGD can be approximated by

sign(∂FMI(x)/∂x) ≈ sign( Σ_{k=1}^{NA} ∂F(u)/∂u |_{u=λx+(1−λ)xs,k} ). (24)

B.3 HYPERPARAMETER SETTINGS

The hyperparameter settings of the experiments shown in Table 2 and Table 3 are provided in Table 4 and Table 5, respectively. Since the original methods in Xie et al. (2018) and Guo et al. (2018) are both designed for models on ImageNet, we adapt them for CIFAR-10 and CIFAR-100. Most of our experiments are conducted on the NVIDIA DGX-1 server with eight Tesla P100 GPUs." } ]
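To illustrate the adaptive attack of Appendix B.2 above (Eq. (24)), the following hypothetical helper averages loss gradients over NA mixup counterparts of the input and returns the gradient sign for use inside a PGD loop. It is a sketch under the same assumptions as the PGD snippet earlier, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def adaptive_grad_sign(model, x, y, pool_x, lam=0.5, n_adaptive=30):
    """Gradient-sign estimate against MI, per Eq. (24).

    Averages loss gradients at u = lam*x + (1-lam)*x_s over n_adaptive samples;
    the constant factor lam does not change the sign.
    """
    grad_sum = torch.zeros_like(x)
    for _ in range(n_adaptive):
        idx = torch.randint(len(pool_x), (x.size(0),))
        u = (lam * x + (1.0 - lam) * pool_x[idx]).requires_grad_(True)
        loss = F.cross_entropy(model(u), y)
        grad_sum = grad_sum + torch.autograd.grad(loss, u)[0]
    return grad_sum.sign()  # used in place of sign(grad) inside the PGD loop
```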
2020
MIXUP INFERENCE: BETTER EXPLOITING MIXUP TO DEFEND ADVERSARIAL ATTACKS
SP:7f3dfc4a045d780299123b42cc712b3d7171e8eb
[ "The paper tries to ask if there is a good neural net architecture that works as effectively as gradient boosting decision trees on tabular data. The authors propose an architecture (NODE) that satisfies this conditions. NODE is an architecture consisting of differentiable oblivious decision trees that can be trained end to end via back propagation. The paper is readable and the experiments are well presented. They make use of an alpha-entmax transformation to obtain a differentiable architecture. The approach seems well motivated in the literature. It is unclear how novel the contribution is. It is unclear if in the experimental section the datasets used are standard for this classes of tasks. Would be good to mention if it is the case. ", "This paper introduces a new method to make ensembles of decision trees differentiable, and trainable with (stochastic) gradient descent. The proposed technique relies on the concept of \"oblivious decision trees\", which are a kind of decision trees that use the same classifier (i.e. a feature and threshold) for all the nodes that have the same depth. This means that for an oblivious decision tree of depth d, only d classifiers are learned. Said otherwise, an oblivious decision tree is a classifier that split the data using d splitting features, giving a decision table of size 2^d. To make oblivious decision trees differentiable, the authors propose to learn linear classifiers using all the features, but add a sparsity inducing operator on the weights of the classifiers (the entmax transformation). Similarly, the step function used to split the data is replaced by a continuous version (here a binary entmax transformation). Finally, the decision function is obtained by taking the outer product of all the scores of the classifiers: [c_1(x), 1-c_1(x)] o [c_2(x), 1-c_2(x)] ... This \"choice\" operator transforms the d dimensional vectors of the classifier scores to a 2^d dimensional vector. Another interpretation of the proposed \"differentiable oblivious decision trees\" is a two layer neural network, with sparsity on the weights of the first layer," ]
Nowadays, deep neural networks (DNNs) have become the main instrument for machine learning tasks within a wide range of domains, including vision, NLP, and speech. Meanwhile, in the important case of heterogeneous tabular data, the advantage of DNNs over shallow counterparts remains questionable. In particular, there is not sufficient evidence that deep learning machinery allows constructing methods that outperform gradient boosting decision trees (GBDT), which are often the top choice for tabular problems. In this paper, we introduce Neural Oblivious Decision Ensembles (NODE), a new deep learning architecture, designed to work with any tabular data. In a nutshell, the proposed NODE architecture generalizes ensembles of oblivious decision trees, but benefits from both end-to-end gradient-based optimization and the power of multi-layer hierarchical representation learning. With an extensive experimental comparison to the leading GBDT packages on a large number of tabular datasets, we demonstrate the advantage of the proposed NODE architecture, which outperforms the competitors on most of the tasks. We open-source the PyTorch implementation of NODE and believe that it will become a universal framework for machine learning on tabular data.
[ { "affiliations": [], "name": "TABULAR DATA" }, { "affiliations": [], "name": "Sergei Popov" }, { "affiliations": [], "name": "Stanislav Morozov" } ]
[ { "authors": [ "Iñigo Barandiaran" ], "title": "The random subspace method for constructing decision forests", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 1998 }, { "authors": [ "Leo Breiman" ], "title": "doi: 10.1023/A: 1010933404324", "venue": "Random forests. Machine Learning,", "year": 2001 }, { "authors": [ "Tianqi Chen", "Carlos Guestrin" ], "title": "Xgboost: A scalable tree boosting system", "venue": "In Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "Ji Feng", "Yang Yu", "Zhi-Hua Zhou" ], "title": "Multi-layered gradient boosting decision trees", "venue": "In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Jerome H Friedman" ], "title": "Greedy function approximation: a gradient boosting machine", "venue": "Annals of statistics,", "year": 2001 }, { "authors": [ "Vasyl Harasymiv" ], "title": "Lessons from 2 million machine learning models on kaggle", "venue": null, "year": 2015 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Pavel Izmailov", "Dmitrii Podoprikhin", "Timur Garipov", "Dmitry Vetrov", "Andrew Gordon Wilson" ], "title": "Averaging weights leads to wider optima and better generalization", "venue": "arXiv preprint arXiv:1803.05407,", "year": 2018 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with gumbel-softmax", "venue": "CoRR, abs/1611.01144,", "year": 2016 }, { "authors": [ "Guolin Ke", "Qi Meng", "Thomas Finley", "Taifeng Wang", "Wei Chen", "Weidong Ma", "Qiwei Ye", "TieYan Liu" ], "title": "Lightgbm: A highly efficient gradient boosting decision tree", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Guolin Ke", "Jia Zhang", "Zhenhui Xu", "Jiang Bian", "Tie-Yan Liu" ], "title": "Tabnn: A universal neural network solution for tabular data", "venue": null, "year": 2018 }, { "authors": [ "Ron Kohavi" ], "title": "Bottom-up induction of oblivious read-once decision graphs: strengths and limitations", "venue": "In AAAI,", "year": 1994 }, { "authors": [ "Peter Kontschieder", "Madalina Fiterau", "Antonio Criminisi", "Samuel Rota Bulo" ], "title": "Deep neural decision forests", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Nathan Lay", "Adam P Harrison", "Sharon Schreiber", "Gitesh Dawer", "Adrian Barbu" ], "title": "Random hinge forest for differentiable learning", "venue": "arXiv preprint arXiv:1802.03882,", "year": 2018 }, { "authors": [ "Tianyi Lin", "Zhiyue Hu", "Xin Guo" ], "title": "Sparsemax and relaxed wasserstein for topic sparsity", "venue": "In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining,", "year": 2019 }, { "authors": [ "Yin Lou", "Mikhail Obukhov" ], "title": "Bdt: Gradient boosted decision tables for high accuracy and scoring efficiency", "venue": "In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2017 }, { "authors": [ "Jerry Ma", "Denis Yarats" ], "title": "Quasi-hyperbolic momentum and adam for deep learning", "venue": "arXiv preprint 
arXiv:1810.06801,", "year": 2018 }, { "authors": [ "Andre Martins", "Ramon Astudillo" ], "title": "From softmax to sparsemax: A sparse model of attention and multi-label classification", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Kevin Miller", "Chris Hettinger", "Jeffrey Humpherys", "Tyler Jarvis", "David Kartchner" ], "title": "Forward thinking: building deep random forests", "venue": "arXiv preprint arXiv:1705.07366,", "year": 2017 }, { "authors": [ "Dmytro Mishkin", "Jiri Matas" ], "title": "All you need is a good init", "venue": "In 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Vinod Nair", "Geoffrey E Hinton" ], "title": "Rectified linear units improve restricted boltzmann machines", "venue": "In Proceedings of the 27th international conference on machine learning (ICML-10),", "year": 2010 }, { "authors": [ "Vlad Niculae", "Mathieu Blondel" ], "title": "A regularized framework for sparse and structured neural attention", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Vlad Niculae", "André FT Martins", "Mathieu Blondel", "Claire Cardie" ], "title": "Sparsemap: Differentiable sparse structured inference", "venue": "arXiv preprint arXiv:1802.04223,", "year": 2018 }, { "authors": [ "Ben Peters", "Vlad Niculae", "André F.T. Martins" ], "title": "Sparse sequence-to-sequence models", "venue": "In ACL,", "year": 2019 }, { "authors": [ "Liudmila Prokhorenkova", "Gleb Gusev", "Aleksandr Vorobev", "Anna Veronika Dorogush", "Andrey Gulin" ], "title": "Catboost: unbiased boosting with categorical features", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "David E Rumelhart", "Geoffrey E Hinton", "Ronald J Williams" ], "title": "Learning internal representations by error propagation", "venue": "Technical report, California Univ San Diego La Jolla Inst for Cognitive Science,", "year": 1985 }, { "authors": [ "Ira Shavitt", "Eran Segal" ], "title": "Regularization learning networks: Deep learning for tabular datasets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ronald J. Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine Learning,", "year": 1992 }, { "authors": [ "Yongxin Yang", "Irene Garcia Morillo", "Timothy M Hospedales" ], "title": "Deep neural decision trees", "venue": "arXiv preprint arXiv:1806.06988,", "year": 2018 }, { "authors": [ "Zhi-Hua Zhou", "Ji Feng" ], "title": "Deep forest: Towards an alternative to deep neural networks", "venue": "In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "The recent rise of deep neural networks (DNN) resulted in a substantial breakthrough for a large number of machine learning tasks in computer vision, natural language processing, speech recognition, reinforcement learning (Goodfellow et al., 2016). Both gradient-based optimization via backpropagation (Rumelhart et al., 1985) and hierarchical representation learning appear to be crucial in increasing the performance of machine learning for these problems by a large margin.\nWhile the superiority of deep architectures in these domains is undoubtful, machine learning for tabular data still did not fully benefit from the DNN power. Namely, the state-of-the-art performance in problems with tabular heterogeneous data is often achieved by “shallow” models, such as gradient boosted decision trees (GBDT) (Friedman, 2001; Chen & Guestrin, 2016; Ke et al., 2017; Prokhorenkova et al., 2018). While the importance of deep learning on tabular data is recognized by the ML community, and many works address this problem (Zhou & Feng, 2017; Yang et al., 2018; Miller et al., 2017; Lay et al., 2018; Feng et al., 2018; Ke et al., 2018), the proposed DNN approaches do not consistently outperform the state-of-the-art shallow models by a notable margin. In particular, to the best of our knowledge, there is still no universal DNN approach that was shown to systematically outperform the leading GBDT packages (e.g., XGBoost (Chen & Guestrin, 2016)). As additional evidence, a large number of Kaggle ML competitions with tabular data are still won by the shallow GBDT methods (Harasymiv, 2015). Overall, at the moment, there is no dominant deep learning solution for tabular data problems, and we aim to reduce this gap by our paper.\nWe introduce Neural Oblivious Decision Ensembles (NODE), a new DNN architecture, designed to work with tabular problems. The NODE architecture is partially inspired by the recent CatBoost package (Prokhorenkova et al., 2018), which was shown to provide state-of-the-art performance on a large number of tabular datasets. In a nutshell, CatBoost performs gradient boosting on oblivious decision trees (decision tables) (Kohavi, 1994; Lou & Obukhov, 2017), which makes inference very efficient, and the method is quite resistant to overfitting. In its essence, the proposed NODE architecture generalizes CatBoost, making the splitting feature choice and decision tree routing differentiable. As a result, the NODE architecture is fully differentiable and could be incorporated in any computational graph of existing DL packages, such as TensorFlow or PyTorch. Furthermore, NODE allows constructing multi-layer architectures, which resembles ”deep” GBDT that is trained end-to-end, which was never proposed before. Besides the usage of oblivious decision tables, another important design choice is the recent entmax transformation (Peters et al., 2019), which effectively performs a ”soft” splitting feature choice in decision trees inside the NODE architecture. As discussed in the following sections, these design choices are critical to obtain state-of-the-art performance. In a large number of experiments, we compare the proposed approach with the leading GBDT implementations with tuned hyperparameters and demonstrate that NODE outperforms competitors consistently on most of the datasets.\nOverall, the main contributions of our paper can be summarized as follows:\n1. We introduce a new DNN architecture for machine learning on tabular data. 
To the best of our knowledge, our method is the first successful example of deep architectures that substantially outperforms leading GBDT packages on tabular data.\n2. Via an extensive experimental evaluation on a large number of datasets, we show that the proposed NODE architecture outperforms existing GBDT implementations. 3. The PyTorch implementation of NODE is available online1.\nThe rest of the paper is organized as follows. In Section 2 we review prior work relevant to our method. The proposed Neural Oblivious Decision Ensembles architecture is described in Section 3 and experimentally evaluated in Section 4. Section 5 concludes the paper." }, { "heading": "2 RELATED WORK", "text": "In this section, we briefly review the main ideas from prior work that are relevant to our method.\nThe state-of-the-art for tabular data. Ensembles of decision trees, such as GBDT (Friedman, 2001) or random forests (Barandiaran, 1998), are currently the top choice for tabular data problems. Currently, there are several leading GBDT packages, such as XGBoost (Chen & Guestrin, 2016), LightGBM (Ke et al., 2017), CatBoost (Prokhorenkova et al., 2018), which are widely used by both academicians and ML practitioners. While these implementations vary in details, on most of the tasks their performances do not differ much (Prokhorenkova et al., 2018; Anghel et al.). The most important distinction of CatBoost is that it uses oblivious decision trees (ODTs) as weak learners. As ODTs are also an important ingredient of our NODE architecture, we discuss them below.\nOblivious Decision Trees. An oblivious decision tree is a regular tree of depth d that is constrained to use the same splitting feature and splitting threshold in all internal nodes of the same depth. This constraint essentially allows representing an ODT as a table with 2d entries, corresponding to all possible combinations of d splits (Lou & Obukhov, 2017). Of course, due to the constraints above, ODTs are significantly weaker learners compared to unconstrained decision trees. However, when used in an ensemble, such trees are less prone to overfitting, which was shown to synergize well with gradient boosting (Prokhorenkova et al., 2018). Furthermore, the inference in ODTs is very efficient: one can compute d independent binary splits in parallel and return the appropriate table entry. In contrast, non-oblivious decision trees require evaluating d splits sequentially.\nDifferentiable trees. The significant drawback of tree-based approaches is that they usually do not allow end-to-end optimization and employ greedy, local optimization procedures for tree construction. Thus, they cannot be used as a component for pipelines, trained in an end-to-end fashion. To address this issue, several works (Kontschieder et al., 2015; Yang et al., 2018; Lay et al., 2018)\n1https://github.com/Qwicen/node\npropose to ”soften” decision functions in the internal tree nodes to make the overall tree function and tree routing differentiable. In our work, we advocate the usage of the recent entmax transformation (Peters et al., 2019) to ”soften” decision trees. We confirm its advantages over the previously proposed approaches in the experimental section.\nEntmax. The key building block of our model is the entmax transformation (Peters et al., 2019), which maps a vector of real-valued scores to a discrete probability distribution. 
This transformation generalizes the traditional softmax and its sparsity-enforcing alternative sparsemax (Martins & Astudillo, 2016), which has already received significant attention in a wide range of applications: probabilistic inference, topic modeling, neural attention (Niculae & Blondel, 2017; Niculae et al., 2018; Lin et al., 2019). The entmax is capable to produce sparse probability distributions, where the majority of probabilities are exactly equal to 0. In this work, we argue that entmax is also an appropriate inductive bias in our model, which allows differentiable split decision construction in the internal tree nodes. Intuitively, entmax can learn splitting decisions based on a small subset of data features (up to one, as in classical decision trees), avoiding undesired influence from others. As an additional advantage, using entmax for feature selection allows for computationally efficient inference using the sparse pre-computed choice vectors as described below in Section 3.\nMulti-layer non-differentiable architectures. Another line of work (Miller et al., 2017; Zhou & Feng, 2017; Feng et al., 2018) promotes the construction of multi-layer architectures from nondifferentiable blocks, such as random forests or GBDT ensembles. For instance, (Zhou & Feng, 2017; Miller et al., 2017) propose to use stacking of several random forests, which are trained separately. In recent work, (Feng et al., 2018) introduces the multi-layer GBDTs and proposes a training procedure that does not require each layer component to be differentiable. While these works report marginal improvements over shallow counterparts, they lack the capability for end-to-end training, which could result in inferior performance. In contrast, we argue that end-to-end training is crucial and confirm this claim in the experimental section.\nSpecific DNN for tabular data. While a number of prior works propose architectures designed for tabular data (Ke et al., 2018; Shavitt & Segal, 2018), they mostly do not compare with the properly tuned GBDT implementations, which are the most appropriate baselines. The recent preprint (Ke et al., 2018) reports the marginal improvement over GBDT with default parameters, but in our experiments, the baseline performance is much higher. To the best of our knowledge, our approach is the first to consistently outperform the tuned GBDTs over a large number of datasets." }, { "heading": "3 NEURAL OBLIVIOUS DECISION ENSEMBLES", "text": "We introduce the Neural Oblivious Decision Ensemble (NODE) architecture with a layer-wise structure similar to existing deep learning models. In a nutshell, our architecture consists of differentiable oblivious decision trees (ODT) that are trained end-to-end by backpropagation. We describe our implementation of the differentiable NODE layer in Section 3.1, the full model architecture in Section 3.2, and the training and inference procedures in section 3.3." }, { "heading": "3.1 DIFFERENTIABLE OBLIVIOUS DECISION TREES", "text": "The core building block of our model is a Neural Oblivious Decision Ensemble (NODE) layer. The layer is composed ofm differentiable oblivious decision trees (ODTs) of equal depth d. As an input, allm trees get a common vector x ∈ Rn, containing n numeric features. Below we describe a design of a single differentiable ODT.\nIn its essence, an ODT is a decision table that splits the data along d splitting features and compares each feature to a learned threshold. 
Then, the tree returns one of the 2^d possible responses, corresponding to the comparison results. Therefore, each ODT is completely determined by its splitting features f ∈ R^d, splitting thresholds b ∈ R^d and a d-dimensional tensor of responses R ∈ R^{2×2×···×2} (d times). In this notation, the tree output is defined as:

h(x) = R[1(f1(x) − b1), . . . , 1(fd(x) − bd)], (1)

where 1(·) denotes the Heaviside function.

To make the tree output (1) differentiable, we replace the splitting feature choice fi and the comparison operator 1(fi(x) − bi) by their continuous counterparts. There are several existing approaches that can be used for modelling differentiable choice functions in decision trees (Yang et al., 2018), for instance, REINFORCE (Williams, 1992) or Gumbel-softmax (Jang et al., 2016). However, these approaches typically require long training times, which can be an issue in practice.

Instead, we propose to use the α-entmax function (Peters et al., 2019), as it is able to learn sparse choices, depending only on a few features, via standard gradient descent. This function is a generalization of softmax in its variational form: softmax(x) = argmax_{p∈∆} [〈p, x〉 + H(p)], where H(p) is the Shannon entropy. We can define α-entmax by replacing H(p) with the Tsallis α-entropy. (If one is unfamiliar with this definition, we highly recommend reading Peters et al. (2019).)

The choice function is hence replaced by a weighted sum of features, with weights computed as α-entmax (α = 1.5) over the learnable feature selection matrix F ∈ R^{d×n}:

f̂i(x) = Σ_{j=1}^{n} xj · entmaxα(Fi)j. (2)

Similarly, we relax the Heaviside function 1(fi(x) − bi) as a two-class entmax, which we denote as σα(x) = entmaxα([x, 0]). As different features can have different characteristic scales, we use the scaled version ci(x) = σα((fi(x) − bi)/τi), where bi and τi are learnable parameters for thresholds and scales, respectively. Based on the ci(x) values, we define a "choice" tensor C ∈ R^{2×2×···×2} (d times) of the same size as the response tensor R by computing the outer product of all ci:

C(x) = [c1(x), 1 − c1(x)] ⊗ [c2(x), 1 − c2(x)] ⊗ · · · ⊗ [cd(x), 1 − cd(x)]. (3)

The final prediction is then computed as a weighted linear combination of the response tensor entries R with weights from the entries of the choice tensor C:

ĥ(x) = Σ_{i1,...,id ∈ {0,1}} R_{i1,...,id} · C_{i1,...,id}(x). (4)

Note that this relaxation equals the classic non-differentiable ODT h(x) of Eq. (1) iff both the feature selection and threshold functions reach a one-hot state, i.e., entmax always returns non-zero weight for a single feature and each ci always returns exactly zero or one.

Finally, the output of the NODE layer is composed as a concatenation of the outputs of m individual trees [ĥ1(x), . . . , ĥm(x)].

Multidimensional tree outputs. In the description above, we assumed that tree outputs are one-dimensional, ĥ(x) ∈ R. For classification problems, where NODE predicts probabilities of each class, we use multidimensional tree outputs ĥ(x) ∈ R^{|C|}, where |C| is the number of classes." }, { "heading": "3.2 GOING DEEPER WITH THE NODE ARCHITECTURE", "text": "The NODE layer, described above, can be trained alone or within a complex structure, such as fully-connected layers that can be organized into multi-layer architectures (a minimal code sketch of a single differentiable tree follows below). 
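As referenced above, the following is a minimal PyTorch sketch of a single differentiable ODT implementing Eqs. (2)-(4). It is not the released NODE code: it assumes the open-source entmax package, which provides entmax15 for α = 1.5, and the module and parameter names are ours.

```python
import torch
import torch.nn as nn
from entmax import entmax15  # alpha = 1.5 entmax (Peters et al., 2019)

class DifferentiableODT(nn.Module):
    """Sketch of one differentiable oblivious tree, Eqs. (2)-(4)."""

    def __init__(self, n_features, depth, out_dim=1):
        super().__init__()
        self.depth = depth
        self.F = nn.Parameter(torch.rand(depth, n_features))   # feature selection logits
        self.b = nn.Parameter(torch.zeros(depth))               # splitting thresholds
        self.tau = nn.Parameter(torch.ones(depth))               # per-split scales
        self.R = nn.Parameter(torch.randn(*([2] * depth), out_dim))  # response tensor

    def forward(self, x):                         # x: (batch, n_features)
        w = entmax15(self.F, dim=-1)              # sparse feature weights, Eq. (2)
        f = x @ w.t()                             # (batch, depth) soft feature choices
        logits = (f - self.b) / self.tau
        c = entmax15(torch.stack([logits, torch.zeros_like(logits)], dim=-1), dim=-1)
        # c[:, i, :] = [c_i(x), 1 - c_i(x)]; take their outer product, Eq. (3)
        C = c[:, 0, :]
        for i in range(1, self.depth):
            C = C.unsqueeze(-1) * c[:, i, :].view(-1, *([1] * i), 2)
        # weighted sum of responses over the 2**depth leaves, Eq. (4)
        return C.reshape(x.size(0), -1) @ self.R.reshape(-1, self.R.size(-1))
```

In the one-hot limit of entmax (one selected feature per split and ci ∈ {0, 1}), this forward pass reduces exactly to the hard table lookup of Eq. (1); a NODE layer concatenates m such trees.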
" }, { "heading": "3.2 GOING DEEPER WITH THE NODE ARCHITECTURE", "text": "The NODE layer, described above, can be trained alone or as part of a complex structure, like the fully-connected layers that are organized into multi-layer architectures. In this work, we introduce a new architecture, following the popular DenseNet (Huang et al., 2017) model, and train it end-to-end via backpropagation.\nSimilar to DenseNet, our architecture is a sequence of k NODE layers (see Section 3.1), where each layer uses a concatenation of all previous layers as its input. The input layer 0 of this architecture corresponds to the input features x, accessible by all successor layers. Due to such a design, our architecture is capable of learning both shallow and deep decision rules. A single tree in the i-th layer can rely on chains of up to i − 1 layer outputs as features, allowing it to capture complex dependencies. The resulting prediction is a simple average of all decision trees from all layers.\nNote that in the multi-layer architecture described above, tree outputs ĥ(x) from early layers are used as inputs for subsequent layers. Therefore, we do not restrict the dimensionality of ĥ(x) to be equal to the number of classes, and allow it to have an arbitrary dimensionality l, which corresponds to a (d + 1)-dimensional response tensor R ∈ R^{2×2×···×2×l}. When averaging the predictions from all layers, only the first |C| coordinates of ĥ(x) are used for classification problems, and only the first one for regression problems. Overall, l is an additional hyperparameter with typical values in [1, 3]." }, { "heading": "3.3 TRAINING", "text": "Here we summarize the details of our training protocol.\nData preprocessing. First, we transform each data feature to follow a normal distribution via a quantile transform3. In experiments, we observed that this step was important for stable training and faster convergence.\nInitialization. Before training, we perform data-aware initialization (Mishkin & Matas, 2016) to obtain good initial parameter values. In particular, we initialize the feature selection matrix uniformly, F_ij ∼ U(0, 1), while the thresholds b are initialized with random feature values f_i(x) observed in the first data batch. The scales τ_i are initialized in such a way that all the samples in the first batch belong to the linear region of σ_α, and hence receive nonzero gradients. Finally, the response tensor entries are initialized with the standard normal distribution, R[i_1, …, i_d] ∼ N(0, 1).\n3sklearn.preprocessing.QuantileTransformer\nTraining. As with existing DNN architectures, NODE is trained end-to-end via mini-batch SGD. We jointly optimize all model parameters: F, b, R. In this work, we experimented with traditional objective functions (cross-entropy for classification and mean squared error for regression), but any differentiable objective can be used as well. As an optimization method, we use the recent Quasi-Hyperbolic Adam with the parameters recommended in the original paper (Ma & Yarats, 2018). We also average the model parameters over c = 5 consecutive checkpoints (Izmailov et al., 2018) and pick the optimal stopping point on the hold-out validation dataset.\nInference. During training, a significant fraction of time is spent computing the entmax function and multiplying by the choice tensor. Once the model is trained, one can pre-compute the entmax feature selectors and store them as sparse vectors (e.g., in coordinate (COO) format), making inference more efficient, as sketched below.
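A minimal sketch of this inference-time optimization, again assuming the entmax package; precompute_sparse_selectors and select_features are hypothetical helper names rather than the authors' code.

```python
import torch
from entmax import entmax15

def precompute_sparse_selectors(F):
    """F: [d, n] trained feature-selection logits of one tree."""
    with torch.no_grad():
        dense = entmax15(F, dim=-1)   # rows are mostly exact zeros after training
        return dense.to_sparse()      # coordinate (COO) sparse format

def select_features(sparse_selectors, x):
    """x: [batch, n] -> [batch, d] selected feature values, Eq. (2) without re-running entmax."""
    return torch.sparse.mm(sparse_selectors, x.t()).t()
```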
}, { "heading": "4.1 COMPARISON TO THE STATE-OF-THE-ART.", "text": "As our main experiments, we compare the proposed NODE architecture with two state-of-the-art GBDT implementations on a large number of datasets. In all the experiments we set α parameter in the entmax transformation to 1.5. All other details of the comparison protocol are described below.\nDatasets. We perform most of the experiments on six open-source tabular datasets from different domains: Epsilon, YearPrediction, Higgs, Microsoft, Yahoo, Click. The detailed description of the datasets is available in appendix. All the datasets provide train/test splits, and we used 20% samples from the train set as a validation set to tune the hyperparameters. For each dataset, we fix the train/val/test splits for a fair comparison. For the classification datasets (Epsilon, Higgs, Click), we minimize cross-entropy loss and report the classification error. For the regression and ranking datasets (YearPrediction, Microsoft, Yahoo), we minimize and report mean squared error (which corresponds to the pointwise approach to learning-to-rank).\nMethods. We compare the proposed NODE architecture to the following baselines:\n• Catboost. The recent GBDT implementation (Prokhorenkova et al., 2018) that uses oblivious decision trees as weak learners. We use the open-source implementation, provided by the authors.\n• XGBoost. The most popular GBDT implementation widely used in machine learning competitions (Chen & Guestrin, 2016). We use the open-source implementation, provided by the authors.\n• FCNN. Deep neural network, consisting of several fully-connected layers with ReLU nonlinearity layers (Nair & Hinton, 2010).\nRegimes. We perform comparison in two following regimes that are the most important in practice:\n• Default hyperparameters. In this regime, we compare the methods as easy-to-tune toolkits that could be used by a non-professional audience. Namely, here we do not tune hyperparameters and use the default ones provided by the GBDT packages. The only tunable parameter here is a number of trees (up to 2048) in CatBoost/XGBoost, which is set based on the validation set. We do not compare with FCNN in this regime, as it typically requires much tuning, and we did not find the set of parameters, appropriate for all datasets. The default architecture in our model contains only a single layer with 2048 decision trees of depth six. Both of these hyperparameters were inherited from the CatBoost package settings for oblivious decision trees. With these parameters, the NODE architecture is shallow, but it still benefits from end-to-end training via back-propagation.\n• Tuned hyperparameters. In this regime, we tune the hyperparameters for both NODE and the competitors on the validation subsets. The optimal configuration for NODE contains between two and eight NODE layers, while the total number of trees across all the layers does not exceed 2048. The details of hyperparameter optimization are provided in appendix.\nThe results of the comparison are summarized in Table 1 and Table 2. For all methods, we report mean performance and standard deviations computed over ten runs with different random seeds. Several key observations are highlighted below:\n1. With default hyperparameters, the proposed NODE architecture consistently outperforms both CatBoost and XGBoost on all datasets. The results advocate the usage of NODE as a handy tool for machine learning on tabular problems.\n2. 
2. With tuned hyperparameters, NODE also outperforms the competitors on most of the tasks. The two exceptions are the Yahoo and Microsoft datasets, where tuned XGBoost provides the highest performance. Given the large advantage of XGBoost over CatBoost on Yahoo, we speculate that the use of oblivious decision trees is an inappropriate inductive bias for this dataset. This implies that NODE should be extended to non-oblivious trees, which we leave for future work.\n3. In the regime with tuned hyperparameters, FCNN outperforms GBDT on some datasets, while GBDT is superior on others. Meanwhile, the proposed NODE architecture appears to be a universal instrument, providing the highest performance on most of the tasks.\nFor completeness, we also aimed to compare with previously proposed architectures for deep learning on tabular data. Unfortunately, many works did not publish their source code. We were only able to perform a partial comparison with mGBDT (Feng et al., 2018) and DeepForest (Zhou & Feng, 2017), whose source code is available. For both baselines, we use the implementations provided by the authors and tune the parameters on the validation set. Note that the DeepForest implementation is available only for classification problems. Moreover, both implementations do not scale well, and on many datasets we obtained an Out-Of-Memory error (OOM). On the datasets in our experiments, it turns out that properly tuned GBDTs outperform both (Feng et al., 2018) and (Zhou & Feng, 2017)." }, { "heading": "4.2 ABLATIVE ANALYSIS", "text": "In this section, we analyze the key architecture components that define our model.\nChoice functions. Constructing differentiable decision trees requires a function that selects items from a set. Such a function is required for both splitting feature selection and decision tree routing. We experimented with four possible options, each having different implications:\n• Softmax learns dense decision rules, where all items have nonzero weights;\n• Gumbel-Softmax (Jang et al., 2016) learns to stochastically sample a single element from a set;\n• Sparsemax (Martins & Astudillo, 2016) learns sparse decision rules, where only a few items have nonzero weights;\n• Entmax (Peters et al., 2019) generalizes both sparsemax and softmax; it is able to learn sparse decision rules, but is smoother than sparsemax, making it more appropriate for gradient-based optimization. In this comparison, the α parameter was set to 1.5.\nWe experimentally compare the four options above with both shallow and deep architectures in Table 3. We use the same choice function for both feature selection and tree routing across all experiments. For Gumbel-Softmax, we replaced the stochastic choice with a hard argmax one-hot vector during inference. The results clearly show that entmax with α = 1.5 outperforms the competitors across all experiments. First, Table 3 demonstrates that sparsemax and softmax are not universal choice functions. For instance, on the YearPrediction dataset sparsemax outperforms softmax, while on the Epsilon dataset softmax is superior. In turn, entmax provides strong empirical performance across all datasets. Another observation is that Gumbel-Softmax is unable to learn deep architectures with both constant and annealed temperature schedules. This behavior is probably caused by the stochasticity of Gumbel-Softmax: the responses of the earlier layers are too noisy to produce useful features for the later layers.\nFeature importance. In this series of experiments, we analyze the internal representations learned by the NODE architecture. We begin by estimating the feature importances from different layers of a multi-layer ensemble via permutation feature importance, initially introduced in Breiman (2001): for 10,000 objects from the Higgs dataset, we randomly shuffle the values of each feature (original or learned by some NODE layer) and compute the increase in the classification error. A minimal version of this procedure is sketched below.
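The sketch assumes a fitted predict_fn returning class labels; a single shuffle per feature is shown, though averaging over several permutations is also common.

```python
import numpy as np

def permutation_importance(predict_fn, X, y, seed=0):
    """Permutation feature importance (Breiman, 2001): increase in error after
    destroying one feature at a time by shuffling its values across objects."""
    rng = np.random.default_rng(seed)
    base_error = np.mean(predict_fn(X) != y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        X_perm = X.copy()
        X_perm[:, j] = X_perm[rng.permutation(X.shape[0]), j]  # shuffle feature j
        importances[j] = np.mean(predict_fn(X_perm) != y) - base_error
    return importances
```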
Then, for each layer, we split the feature importance values into seven equal bins and calculate the total feature importance of each bin, shown in Figure 3 (top-left). We discovered that the features from the first layer are used the most, with feature importances decreasing with depth. This figure shows that deep layers are able to produce important features, even though the earlier layers have an advantage because of the DenseNet-style architecture. Next, we estimated the mean absolute contribution of individual trees to the final response, reported in Figure 3 (bottom-left). One can see the reverse trend: deeper trees tend to contribute more to the final response. Figure 3 (right) clearly shows that feature importances and contributions to the final response are anticorrelated, which implies that the main role of the earlier layers is to produce informative features, while the later layers mostly use them for accurate prediction.\nTraining/inference runtime. Finally, we compare the NODE runtime to the timings of the state-of-the-art GBDT implementations. In Table 4, we report the training and inference time for one million objects from the YearPrediction dataset. In this experiment, we evaluate ensembles of 1024 trees of depth six with all other parameters set to their default values. Our GPU setup has a single 1080Ti GPU and 2 CPU cores. In turn, our CPU setup has a 28-core Xeon E5-2660 v4 processor (which costs almost twice as much as the GPU). We use CatBoost v0.15 and XGBoost v0.90 as baselines, while NODE inference runs on PyTorch v1.1.0. Overall, NODE inference time is on par with the heavily optimized GBDT libraries despite being implemented in pure PyTorch (i.e., no custom kernels)." }, { "heading": "5 CONCLUSION", "text": "In this paper, we introduce a new DNN architecture for deep learning on heterogeneous tabular data. The architecture consists of differentiable deep GBDTs trained end-to-end via backpropagation. In extensive experiments, we demonstrate the advantages of our architecture over existing competitors with both default and tuned hyperparameters. A promising research direction is incorporating the NODE layer into complex pipelines trained via backpropagation. For instance, in multi-modal problems, the NODE layer could be employed as a way to incorporate tabular data, just as CNNs are currently used for images and RNNs for sequences." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 DESCRIPTION OF THE DATASETS", "text": "In our experiments, we used six tabular datasets, described in Table 5. (1) Epsilon is a high-dimensional dataset from the PASCAL Large Scale Learning Challenge 2008. The problem is binary classification. (2) YearPrediction is a subset of the Million Song Dataset. It is a regression dataset, and the task is to predict the release year of a song from its audio features. It contains tracks from 1922 to 2011. (3) Higgs is a dataset from the UCI ML Repository. The problem is to predict whether a given event produces Higgs bosons or not. (4) Microsoft is a learning-to-rank dataset. It consists of 136-dimensional feature vectors extracted from query-url pairs.
Each pair has relevance judgment labels, which take values from 0 (irrelevant) to 4 (perfectly relevant). (5) Yahoo is a very similar ranking dataset with query-url pairs labeled from 0 to 4. We treat both ranking problems as regression (which corresponds to the pointwise approach to learning-to-rank). (6) Click is a subset of data from the 2012 KDD Cup. For the subset construction, we randomly sample 500,000 objects of the positive class and 500,000 objects of the negative class. The categorical features were converted to numerical ones via the Leave-One-Out encoder from the category_encoders package, which is compatible with the scikit-learn library.\nA.2 OPTIMIZATION OF HYPERPARAMETERS\nIn order to tune the hyperparameters, we performed a random stratified split of the full training data into a train set (80%) and a validation set (20%) for the Epsilon, YearPrediction, Higgs, Microsoft, and Click datasets. For Yahoo, we use the train/val/test split provided by the dataset authors. We use the Hyperopt4 library to optimize the CatBoost, XGBoost, and FCNN hyperparameters. For each method, we perform 50 steps of the Tree-structured Parzen Estimator (TPE) optimization algorithm. As the final configuration, we choose the set of hyperparameters corresponding to the smallest loss on the validation set.\nA.2.1 CATBOOST AND XGBOOST\nOn each iteration of Hyperopt, the number of trees was set based on the validation set, with the maximal tree count set to 2048. Below is the list of hyperparameters and their search spaces for CatBoost.\n• learning rate: Log-Uniform distribution [e−5, 1]\n• random strength: Discrete uniform distribution [1, 20]\n• one hot max size: Discrete uniform distribution [0, 25]\n• l2 leaf reg: Log-Uniform distribution [1, 10]\n• bagging temperature: Uniform distribution [0, 1]\n• leaf estimation iterations: Discrete uniform distribution [1, 10]\n4https://github.com/hyperopt/hyperopt\nXGBoost tuned parameters and their search spaces:\n• eta: Log-Uniform distribution [e−7, 1]\n• max depth: Discrete uniform distribution [2, 10]\n• subsample: Uniform distribution [0.5, 1]\n• colsample bytree: Uniform distribution [0.5, 1]\n• colsample bylevel: Uniform distribution [0.5, 1]\n• min child weight: Log-Uniform distribution [e−16, e5]\n• alpha: Uniform choice {0, Log-Uniform distribution [e−16, e2]}\n• lambda: Uniform choice {0, Log-Uniform distribution [e−16, e2]}\n• gamma: Uniform choice {0, Log-Uniform distribution [e−16, e2]}\nA.2.2 FCNN\nFully-connected neural networks were tuned using the Hyperas11 library, which is a Keras wrapper for Hyperopt. We consider FCNNs constructed from the following blocks: Dense-ReLU-Dropout. The number of units in each layer is independent of the others, and the dropout value is shared across the whole network. The networks are trained with the Adam optimizer, with averaging of the model parameters over c = 5 consecutive checkpoints (Izmailov et al., 2018) and early stopping on validation. The batch size is fixed to 1024 for all datasets. Below is the list of tuned hyperparameters; a Hyperopt-style encoding of this search space is sketched after the list.\n• Architecture: either Sequential (each layer receives the previous layer's activations) or DenseNet (each layer receives the activations of all previous layers)\n• Number of layers: Discrete uniform distribution [2, 7]\n• Number of units: Discrete uniform distribution over the set {128, 256, 512, 1024}\n• Learning rate: Uniform distribution [1e−4, 1e−2]\n• Dropout: Uniform distribution [0, 0.5]
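For illustration, the FCNN search space above can be encoded for Hyperopt's TPE optimizer as follows; the dictionary keys are hypothetical names, not the exact ones used in our code.

```python
from hyperopt import hp

# The FCNN search space from the list above, in Hyperopt syntax.
fcnn_space = {
    'architecture': hp.choice('architecture', ['sequential', 'densenet']),
    'num_layers': hp.quniform('num_layers', 2, 7, 1),       # returns floats; cast to int
    'num_units': hp.choice('num_units', [128, 256, 512, 1024]),
    'learning_rate': hp.uniform('learning_rate', 1e-4, 1e-2),
    'dropout': hp.uniform('dropout', 0.0, 0.5),
}
# Tuning then runs 50 TPE steps, e.g.:
# best = hyperopt.fmin(objective, fcnn_space, algo=hyperopt.tpe.suggest, max_evals=50)
```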
A.2.3 NODE\nNeural Oblivious Decision Ensembles were tuned by grid search over the following hyperparameter values. In the multi-layer NODE, we use the same architecture for all layers, i.e., the same number of trees of the same depth. Here, total tree count denotes the total number of trees across all layers. For each dataset, we use the maximal batch size that fits in the GPU memory. We always use a learning rate of 10−3.\n• num layers: {2, 4, 8}\n• total tree count: {1024, 2048}\n• tree depth: {6, 8}\n• tree output dim: {2, 3}\n5https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html\n6https://archive.ics.uci.edu/ml/datasets/yearpredictionmsd\n7https://archive.ics.uci.edu/ml/datasets/HIGGS\n8https://www.microsoft.com/en-us/research/project/mslr/\n9https://webscope.sandbox.yahoo.com/catalog.php?datatype=c\n10http://www.kdd.org/kdd-cup/view/kdd-cup-2012-track-2\n11https://github.com/maxpumperla/hyperas" } ]
2020
Neural Oblivious Decision Ensembles for Deep Learning on Tabular Data
SP:1d0977845884e1768b8a853e0c13fa71619f8164
[ "This work provides a novel model-based reinforcement learning algorithm for continuous domains (Mujoco) dubbed POPLIN. The presented algorithm is similar in vein to the state-of-the-art PETS algorithm, a planning algorithm that uses state-unconditioned action proposal distributions to identify good action sequences with CEM in the planning routine. The important difference compared to PETS is the incorporation of a parametric state-conditioned policy (trained on real data) in the planning routine to obtain better action-sequences (CEM is used to learn the \"offset\" from the parametric policy). The paper presents two different algorithmic ablations where CEM either operates in action space or parameter space (POPLIN-A and POPLIN-P respectively), in combination with different objectives to learn the parametric policy. The method is evaluated on 12 continuous benchmarks and compared against state-of-the-art model-based and model-free algorithms, indicating dominance of the newly proposed method.", "This paper presents POPLIN, a novel model-based reinforcement learning algorithm, which trains a policy network to improve model-prediction control. The paper studies extensively how to utilize the policy, by planning in action space or planning in parameter space and how to train the policy, by behavioral cloning, by GAN or by averaging the results of CEM. The experiments show that the proposed algorithm can perform very well in MuJoCo tasks. " ]
Model-based reinforcement learning (MBRL) with model-predictive control or online planning has shown great potential for locomotion control tasks in terms of both sample efficiency and asymptotic performance. Despite these successes, the existing planning methods search over candidate sequences randomly generated in the action space, which is inefficient in complex high-dimensional environments. In this paper, we propose a novel MBRL algorithm, model-based policy planning (POPLIN), that combines policy networks with online planning. More specifically, we formulate action planning at each time-step as an optimization problem using neural networks. We experiment with both optimization w.r.t. the action sequences initialized from the policy network, and online optimization directly w.r.t. the parameters of the policy network. We show that in the MuJoCo benchmarking environments, POPLIN is about 3x more sample efficient than previous state-of-the-art algorithms such as PETS, TD3 and SAC. To explain the effectiveness of our algorithm, we show that the optimization surface in parameter space is smoother than in action space. Furthermore, we found that the distilled policy network can be effectively applied without the expensive model-predictive control during test time in some environments such as Cheetah. Code is released here1.
[ { "affiliations": [], "name": "Tingwu Wang" }, { "affiliations": [], "name": "Jimmy Ba" } ]
[ { "authors": [ "Zdravko I Botev", "Dirk P Kroese", "Reuven Y Rubinstein", "Pierre L’Ecuyer" ], "title": "The cross-entropy method for optimization", "venue": "In Handbook of statistics,", "year": 2013 }, { "authors": [ "Jacob Buckman", "Danijar Hafner", "George Tucker", "Eugene Brevdo", "Honglak Lee" ], "title": "Sampleefficient reinforcement learning with stochastic ensemble value expansion", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yevgen Chebotar", "Karol Hausman", "Marvin Zhang", "Gaurav Sukhatme", "Stefan Schaal", "Sergey Levine" ], "title": "Combining model-based and model-free updates for trajectory-centric reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Kurtland Chua", "Roberto Calandra", "Rowan McAllister", "Sergey Levine" ], "title": "Deep reinforcement learning in a handful of trials using probabilistic dynamics models", "venue": "arXiv preprint arXiv:1805.12114,", "year": 2018 }, { "authors": [ "Pieter-Tjerk De Boer", "Dirk P Kroese", "Shie Mannor", "Reuven Y Rubinstein" ], "title": "A tutorial on the cross-entropy method", "venue": "Annals of operations research,", "year": 2005 }, { "authors": [ "Marc Deisenroth", "Carl E Rasmussen" ], "title": "Pilco: A model-based and data-efficient approach to policy search", "venue": "In Proceedings of the 28th International Conference on machine learning", "year": 2011 }, { "authors": [ "Vladimir Feinberg", "Alvin Wan", "Ion Stoica", "Michael I Jordan", "Joseph E Gonzalez", "Sergey Levine" ], "title": "Model-based value estimation for efficient model-free reinforcement learning", "venue": "arXiv preprint arXiv:1803.00101,", "year": 2018 }, { "authors": [ "Scott Fujimoto", "Herke van Hoof", "David Meger" ], "title": "Addressing function approximation error in actor-critic methods", "venue": "arXiv preprint arXiv:1802.09477,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Shixiang Gu", "Timothy Lillicrap", "Ilya Sutskever", "Sergey Levine" ], "title": "Continuous deep q-learning with model-based acceleration", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "David Ha", "Jürgen Schmidhuber" ], "title": "Recurrent world models facilitate policy evolution", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "arXiv preprint arXiv:1801.01290,", "year": 2018 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Ian Fischer", "Ruben Villegas", "David Ha", "Honglak Lee", "James Davidson" ], "title": "Learning latent dynamics for planning from pixels", "venue": "arXiv preprint arXiv:1811.04551,", "year": 2018 }, { "authors": [ "Nicolas Heess", "Gregory Wayne", "David Silver", "Timothy Lillicrap", "Tom Erez", "Yuval Tassa" ], "title": "Learning continuous control policies by stochastic value gradients", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Nicolas Heess", "Srinivasan Sriram", "Jay Lemmon", "Josh 
Merel", "Greg Wayne", "Yuval Tassa", "Tom Erez", "Ziyu Wang", "Ali Eslami", "Martin Riedmiller" ], "title": "Emergence of locomotion behaviours in rich environments", "venue": "arXiv preprint arXiv:1707.02286,", "year": 2017 }, { "authors": [ "Michael Janner", "Justin Fu", "Marvin Zhang", "Sergey Levine" ], "title": "When to trust your model: Model-based policy optimization", "venue": "arXiv preprint arXiv:1906.08253,", "year": 2019 }, { "authors": [ "Lukasz Kaiser", "Mohammad Babaeizadeh", "Piotr Milos", "Blazej Osinski", "Roy H Campbell", "Konrad Czechowski", "Dumitru Erhan", "Chelsea Finn", "Piotr Kozakowski", "Sergey Levine" ], "title": "Model-based reinforcement learning for atari", "venue": null, "year": 1903 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Thanard Kurutach", "Ignasi Clavera", "Yan Duan", "Aviv Tamar", "Pieter Abbeel" ], "title": "Model-ensemble trust-region policy optimization", "venue": "arXiv preprint arXiv:1802.10592,", "year": 2018 }, { "authors": [ "Sergey Levine", "Pieter Abbeel" ], "title": "Learning neural network policies with guided policy search under unknown dynamics", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Sergey Levine", "Vladlen Koltun" ], "title": "Guided policy search", "venue": "In International Conference on Machine Learning, pp", "year": 2013 }, { "authors": [ "Sergey Levine", "Chelsea Finn", "Trevor Darrell", "Pieter Abbeel" ], "title": "End-to-end training of deep visuomotor policies", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Hao Li", "Zheng Xu", "Gavin Taylor", "Christoph Studer", "Tom Goldstein" ], "title": "Visualizing the loss landscape of neural nets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "arXiv preprint arXiv:1509.02971,", "year": 2015 }, { "authors": [ "Yuping Luo", "Huazhe Xu", "Yuanzhi Li", "Yuandong Tian", "Trevor Darrell", "Tengyu Ma" ], "title": "Algorithmic framework for model-based deep reinforcement learning with theoretical guarantees", "venue": null, "year": 2019 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Alex Graves", "Ioannis Antonoglou", "Daan Wierstra", "Martin Riedmiller" ], "title": "Playing atari with deep reinforcement learning", "venue": "arXiv preprint arXiv:1312.5602,", "year": 2013 }, { "authors": [ "Anusha Nagabandi", "Gregory Kahn", "Ronald S Fearing", "Sergey Levine" ], "title": "Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning", "venue": "arXiv preprint arXiv:1708.02596,", "year": 2017 }, { "authors": [ "Quynh Nguyen", "Matthias Hein" ], "title": "The loss surface of deep and wide neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Arthur George Richards" ], "title": "Robust constrained model predictive control", "venue": "PhD thesis, Massachusetts Institute of Technology,", "year": 2005 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", 
"venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In Proceedings of the 32nd International Conference on Machine Learning", "year": 2015 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "David Silver", "Aja Huang", "Chris J Maddison", "Arthur Guez", "Laurent Sifre", "George Van Den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot" ], "title": "Mastering the game of go with deep neural networks and tree", "venue": "search. Nature,", "year": 2016 }, { "authors": [ "David Silver", "Julian Schrittwieser", "Karen Simonyan", "Ioannis Antonoglou", "Aja Huang", "Arthur Guez", "Thomas Hubert", "Lucas Baker", "Matthew Lai", "Adrian Bolton" ], "title": "Mastering the game of go without human knowledge", "venue": null, "year": 2017 }, { "authors": [ "Daniel Soudry", "Elad Hoffer" ], "title": "Exponentially vanishing sub-optimal local minima in multilayer neural networks", "venue": "arXiv preprint arXiv:1702.05777,", "year": 2017 }, { "authors": [ "Richard S Sutton" ], "title": "Integrated architectures for learning, planning, and reacting based on approximating dynamic programming", "venue": "In Machine Learning Proceedings", "year": 1990 }, { "authors": [ "Richard S Sutton" ], "title": "Dyna, an integrated architecture for learning, planning, and reacting", "venue": "ACM SIGART Bulletin,", "year": 1991 }, { "authors": [ "Yuval Tassa", "Tom Erez", "Emanuel Todorov" ], "title": "Synthesis and stabilization of complex behaviors through online trajectory optimization", "venue": "In Intelligent Robots and Systems (IROS),", "year": 2012 }, { "authors": [ "Emanuel Todorov", "Weiwei Li" ], "title": "A generalized iterative lqg method for locally-optimal feedback control of constrained nonlinear stochastic systems", "venue": "In Proceedings of the 2005, American Control Conference,", "year": 2005 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In Intelligent Robots and Systems (IROS),", "year": 2012 }, { "authors": [ "Marvin Zhang", "Sharad Vikram", "Laura Smith", "Pieter Abbeel", "Matthew J Johnson", "Sergey Levine" ], "title": "Solar: Deep structured latent representations for model-based reinforcement learning", "venue": "arXiv preprint arXiv:1808.09105,", "year": 2018 }, { "authors": [ "OpenAI Gym Brockman" ], "title": "more environments based on the standard bench-marking environments", "venue": null, "year": 2016 }, { "authors": [ "Hopper Swimmer" ], "title": "Cheetah-v0 Walker2d Swimmer-v0 POPLIN-P", "venue": null, "year": 2055 }, { "authors": [ "MuJoCoTodorov" ], "title": "table, we record the performance at 200,000 time-step. Cheetah Ant Hopper Swimmer Cheetah-v0 Walker2d Swimmer-v0 POPLIN-P", "venue": null, "year": 2012 }, { "authors": [ "Uni", "Sep" ], "title": "Initial Distribution Sigma", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "A model-based reinforcement learning (MBRL) agent learns its internal model of the world, i.e. the dynamics, from repeated interactions with the environment. With the learnt dynamics, a MBRL agent can for example perform online planning, interact with imaginary data, or optimize the controller through dynamics, which provides significantly better sample efficiency (Deisenroth & Rasmussen, 2011; Sutton, 1990; Levine & Abbeel, 2014; Levine & Koltun, 2013). However, MBRL algorithms generally do not scale well with the increasing complexity of the reinforcement learning (RL) tasks in practice. And modelling errors in dynamics that accumulate with time-steps greatly limit the applications of MBRL algorithms. As a result, many latest progresses in RL has been made with model-free reinforcement learning (MFRL) algorithms that are capable of solving complex tasks at the cost of large number of samples (Schulman et al., 2017; Heess et al., 2017; Schulman et al., 2015; Mnih et al., 2013; Lillicrap et al., 2015; Haarnoja et al., 2018).\nWith the success of deep learning, a few recent works have proposed to learn neural network-based dynamics models for MBRL. Among them, random shooting algorithms (RS), which uses modelpredictive control (MPC), is shown to have good robustness and scalability (Richards, 2005). In shooting algorithms, the agent randomly generates action sequences, use the dynamics to predict the future states, and choose the first action from the sequence with the best expected reward. However, RS usually has worse asymptotic performance than model-free controllers (Nagabandi et al., 2017), and the authors of the the PETS algorithm (Chua et al., 2018) suggest that the performance of RS is directly affected by the quality of the learnt dynamics. They propose a probabilistic ensemble to capture model uncertainty, which enables PETS algorithm to achieve both better sample efficiency and better asymptotic performance than state-of-the-art model-free controllers in environments such as Cheetah. However, PETS is not as effective on environments with higher dimensionality.\n1https://github.com/WilsonWangTHU/POPLIN.\n-200.000\n-200.000\n-200.000\n-190.000\n-190.000\n-190.000 -190.000\n-190.000\n-190.0 00\n-190.000 -190.000\n-19 0.0\n00\n-180 .000\n-18 0.0\n00 -18 0.0\n00\n-180.000\n-180.000\n-18 0.0\n00\n-17 0.00\n0\n-170.000 -170.000\n-170.000\n-160.000\n(a1) Reward Surface (a2) PETS Iter 1 (a3) PETS Iter 2 (a4) PETS Iter 3\n-525.000\n-525.00 0\n-500.000\n-500.000\n-500.000\n-500.000\n-475.000\n-475.000\n-47 5.0\n00\n-475.000\n-475.000\n-450.000\n-450.000\n-425.000\n-425.000\n-425.000\n-400.000\n(b1) Reward Surface (b2) POPLIN Iter 1 (b3) POPLIN Iter 2 (b4) POPLIN Iter 3\nFigure 1: We transform each planned candidate action trajectory with PCA into a 2D blue scatter. The top and bottom figures are respectively the visualization of PETS (Chua et al., 2018) and our algorithm. The red area has higher reward. From left to right, we show how candidate trajectories are updated, across different planning iterations within one time-step. As we can see, while both reward surface is not smooth with respect to action trajectory. POPLIN, using policy networks, has much better search efficiency, while PETS is stuck around its initialization. The details are in section 5.3.\nIn this paper, we explore MBRL algorithms from a different perspective, where we treat the planning at each time-step as an optimization problem. 
Random search in the action space, as done in state-of-the-art MBRL algorithms such as PETS, is insufficient for more complex environments. On the one hand, we are inspired by the success of AlphaGo (Silver et al., 2016; 2017), where a policy network is used to generate proposals for the Monte-Carlo tree search. On the other hand, we are inspired by recent research into understanding deep neural networks (Nguyen & Hein, 2017; Li et al., 2018; Soudry & Hoffer, 2017): as frequently observed in practice, deep neural networks are much less likely to get stuck in sub-optimal points. In Figure 1, we apply principal component analysis (PCA) to the action sequences generated in each planning iteration within one time-step. The reward surface in the action space is not smooth and is prone to local minima. We argue that optimization in the policy network's parameter space will be more efficient. Furthermore, we note that state-of-the-art MBRL algorithms with MPC cannot be applied in real time. We therefore experiment with different policy network distillation schemes for fast control without MPC. To sum up, the contribution of this paper is three-fold:\n• We apply policy networks to generate proposals for MPC in high-dimensional locomotion control problems with unknown dynamics.\n• We formulate planning as optimization with neural networks, and propose policy planning in parameter space, which obtains state-of-the-art performance on the current benchmarking environments, being about 3x more sample efficient than the previous state-of-the-art algorithms, such as PETS (Chua et al., 2018), TD3 (Fujimoto et al., 2018) and SAC (Haarnoja et al., 2018).\n• We also explore policy network distillation from the planned trajectories. We found that the distilled policy network alone achieves high performance on environments like Cheetah without the expensive online planning." }, { "heading": "2 RELATED WORK", "text": "Model-based reinforcement learning (MBRL) has long been studied. The Dyna algorithm (Sutton, 1990; 1991) alternates between sampling in the real environment and optimizing the controller on the learned model of the environment. Other pioneering work includes PILCO (Deisenroth & Rasmussen, 2011), where the authors model the dynamics using a Gaussian process and directly optimize the surrogate expected reward. Effective as it is at solving simple environments, PILCO heavily suffers from the curse of dimensionality. In (Levine & Abbeel, 2014; Levine & Koltun, 2013; Levine et al., 2016; Chebotar et al., 2017; Zhang et al., 2018), the authors propose guided policy search (GPS). GPS uses iLQG (Li & Todorov, 2004; Todorov & Li, 2005; Tassa et al., 2012) as the local controller, and distills the knowledge into a policy neural network. In SVG (Heess et al., 2015), the authors use stochastic value gradients so that the stochastic policy network can be optimized by back-propagation with off-policy data. Recently, with the progress of model-free algorithms such as TRPO and PPO (Schulman et al., 2015; 2017), Kurutach et al. (2018) and Luo et al. (2019) propose modern variants of Dyna, where TRPO (Schulman et al., 2015) is used to optimize the policy network using data generated by the learnt dynamics. Concurrent to this work, Janner et al. (2019) further use SAC (Haarnoja et al., 2018) to train the policy network, and get state-of-the-art performance on many tasks. At the same time, random shooting methods proposed by Nagabandi et al. (2017) and Chua et al.
(2018) have shown their robustness and effectiveness on benchmarking environments. The PETS algorithm (Chua et al., 2018) is considered by many to be the state-of-the-art shooting algorithm; we discuss it in detail in Section 3. Learned dynamics models are also used to obtain better value estimates that speed up training (Gu et al., 2016; Feinberg et al., 2018; Buckman et al., 2018). Latent dynamics models using VAEs (Kingma & Welling, 2013) are commonly used to solve problems with image input (Ha & Schmidhuber, 2018a;b; Hafner et al., 2018; Kaiser et al., 2019)." }, { "heading": "3 BACKGROUND", "text": "" }, { "heading": "3.1 REINFORCEMENT LEARNING", "text": "In reinforcement learning, the problem of solving the given task is formulated as an infinite-horizon discounted Markov decision process. For the agent, we denote the action space and state space respectively as A and S. We also denote the reward function and transition function as r(s_t, a_t) and f(s_{t+1}|s_t, a_t), where s_t ∈ S and a_t ∈ A are the state and action at time-step t. The reward function is assumed known to the agent in this work. The agent maximizes its expected total reward J(π) = E_π[Σ_{t=0}^∞ r(s_t, a_t)] with respect to the agent's controller π." }, { "heading": "3.2 RANDOM SHOOTING ALGORITHM AND PETS", "text": "Our proposed algorithm is based on the random shooting algorithm (Richards, 2005). In random shooting algorithms (Nagabandi et al., 2017; Chua et al., 2018), a dataset D = {(s_t, a_t, s_{t+1})} is collected from previously generated real trajectories. The agent learns an ensemble of neural networks denoted as f_φ(s_{t+1}|s_t, a_t), with the parameters of the neural networks denoted as φ. In planning, the agent randomly generates a population of K candidate action sequences. Each action sequence, denoted as a = {a_0, ..., a_τ}, contains the control signals at every time-step within the planning horizon τ. The action sequence with the best expected reward under the current dynamics network f_φ(s_{t+1}|s_t, a_t) is chosen. RS, as a model-predictive control algorithm, only executes the first action signal and re-plans at every time-step. In PETS (Chua et al., 2018), the authors further use the cross-entropy method (CEM) (De Boer et al., 2005; Botev et al., 2013) to re-sample sequences near the best sequences from the last CEM iteration." }, { "heading": "4 MODEL-BASED POLICY PLANNING", "text": "Algorithm 1 General POPLIN Framework\n1: while training iterations not finished do\n2: for the i-th time-step of the agent do\n3: CEM planning as in Sections 4.1 and 4.2\n4: Execute the first action from CEM\n5: end for\n6: Dynamics update and policy distillation\n7: end while\nIn this section, we describe two variants of POPLIN: model-based policy planning in action space (POPLIN-A) and model-based policy planning in parameter space (POPLIN-P). Following the notation in Section 3.2, we define the expected planning reward function at time-step i as follows:\nR(s_i, a_i) = E[Σ_{t=i}^{i+τ} r(s_t, a_t)], where s_{t+1} ∼ f_φ(s_{t+1}|s_t, a_t).   (1)\nThe action sequence a_i = {a_i, a_{i+1}, ..., a_{i+τ}} is generated by the policy search module, as described later in Sections 4.1 and 4.2. The expectation over the predicted trajectories {s_i, s_{i+1}, ..., s_{i+τ}} is estimated by creating P particles from the current state. The dynamics model f_φ^{k,t}(s_{t+1}|s_t, a_t) used by the k-th particle at time-step t is sampled from a deterministic or probabilistic ensemble of models. For ease of exposition, throughout the paper we denote this dynamics as a fixed deterministic model, i.e. f_φ^{k,t} ≡ f_φ. In practice, the dynamics uses probabilistic ensemble models, which requires some trivial modifications to the math; we refer readers to PETS (Chua et al., 2018) for details. A minimal sketch of the CEM planning loop used throughout is given below.
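The following NumPy sketch shows the generic CEM planning loop shared by PETS and both POPLIN variants. rollout_reward stands for an evaluation of Eq. (1) under the learned dynamics; the population size, elite count, smoothing rate and diagonal covariance are illustrative simplifications, not the exact settings of our implementation.

```python
import numpy as np

def cem_plan(rollout_reward, dim, iters=5, pop=500, elites=50, alpha=0.1, sigma0=0.1):
    """Cross-entropy method over a flat `dim`-dimensional search variable
    (action noise for POPLIN-A, parameter noise for POPLIN-P)."""
    mu, sigma = np.zeros(dim), sigma0 * np.ones(dim)   # N(mu_0=0, sigma_0^2 I)
    for _ in range(iters):
        candidates = mu + sigma * np.random.randn(pop, dim)
        rewards = np.array([rollout_reward(c) for c in candidates])
        elite = candidates[np.argsort(rewards)[-elites:]]  # top xi+1 candidates
        # Smoothed update towards the elite distribution (Eq. 2 / Eq. 6).
        mu = (1 - alpha) * mu + alpha * elite.mean(axis=0)
        sigma = (1 - alpha) * sigma + alpha * elite.std(axis=0)
    return mu  # optimized noise sequence after the final iteration
```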
" }, { "heading": "4.1 MODEL-BASED POLICY PLANNING IN ACTION SPACE", "text": "In model-based policy planning in action space (POPLIN-A), we use a policy network to generate a good initial action distribution. We denote the policy network as π(s_t). Once the policy network proposes sequences of actions on the expected trajectories, we add Gaussian noise to the candidate actions and use CEM to fine-tune the mean and standard deviation of the noise distribution.\nSimilar to the definition of a_i = {a_i, a_{i+1}, ..., a_{i+τ}}, we denote the noise sequence starting at time-step i with horizon τ as δ_i = {δ_i, δ_{i+1}, ..., δ_{i+τ}}. We initialize the noise distribution as a Gaussian with mean µ_0 = 0 and covariance Σ_0 = σ_0^2 I, where σ_0^2 is the initial noise variance. In each CEM iteration, we first sort out the sequences with the top ξ + 1 expected planning rewards, whose noise sequences are denoted as {δ_i^0, δ_i^1, ..., δ_i^ξ}. Then we estimate the noise distribution of these elite candidates, i.e.,\nΣ′ ← Cov({δ_i^0, δ_i^1, ..., δ_i^ξ}), µ′ ← Mean({δ_i^0, δ_i^1, ..., δ_i^ξ}).   (2)\nThe elite distribution (µ′, Σ′) in the CEM algorithm is used to update the candidate noise distribution as µ = (1 − α)µ + αµ′, Σ = (1 − α)Σ + αΣ′. At every time-step, several CEM iterations are performed by candidate re-sampling and noise distribution updating. We provide detailed algorithm boxes in Appendix A.1. We consider the following two schemes for adding action noise.\nPOPLIN-A-Init: In this planning scheme, we use the policy network only to propose the initialization of the action sequences. When planning at time-step i with observed state s_i, we first obtain the initial reference action sequence, denoted as â_i = {â_i, â_{i+1}, ..., â_{i+τ}}, by running an initial forward pass with the policy network. At each planning time-step t, where i ≤ t ≤ i + τ, we have â_t = π(ŝ_t), where ŝ_t = f_φ(ŝ_{t−1}, â_{t−1}) and ŝ_i = s_i. The expected reward given the search noise δ_i is then (a sketch of this scheme is given at the end of this subsection):\nR(s_i, δ_i) = E[Σ_{t=i}^{i+τ} r(s_t, â_t + δ_t)], where s_{t+1} = f_φ(s_{t+1}|s_t, â_t + δ_t).   (3)\nPOPLIN-A-Replan: POPLIN-A-Replan is a more aggressive planning scheme, which always re-plans the actions according to the changed trajectory under the current noise distribution. If we had a perfect dynamics network and a perfect policy network, we would expect re-planning to achieve faster convergence to the optimal action distribution, but it increases the risk of divergent behaviors. In this case, the expected reward of each trajectory is\nR(s_i, δ_i) = E[Σ_{t=i}^{i+τ} r(s_t, π(s_t) + δ_t)], where s_{t+1} = f_φ(s_{t+1}|s_t, π(s_t) + δ_t).   (4)
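A minimal sketch of POPLIN-A-Init (Eq. 3), reusing the cem_plan helper sketched in Section 4 above; policy, dynamics and reward are assumed callables for the learned deterministic model, and this is schematic rather than the authors' implementation.

```python
import numpy as np

def poplin_a_init_plan(s0, policy, dynamics, reward, horizon, act_dim, cem_plan):
    # Reference actions from one initial forward pass through policy and model.
    ref_actions, s = [], s0
    for _ in range(horizon):
        a = policy(s)
        ref_actions.append(a)
        s = dynamics(s, a)

    def rollout_reward(noise_flat):
        """Eq. (3): evaluate one candidate noise sequence around the references."""
        noise = noise_flat.reshape(horizon, act_dim)
        s, total = s0, 0.0
        for t in range(horizon):
            a = ref_actions[t] + noise[t]
            total += reward(s, a)
            s = dynamics(s, a)
        return total

    return cem_plan(rollout_reward, dim=horizon * act_dim)
```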
" }, { "heading": "4.2 MODEL-BASED POLICY PLANNING IN PARAMETER SPACE", "text": "While planning in the action space is a natural extension of the original PETS algorithm, we found that it provides little performance improvement in complex environments. One potential reason is that POPLIN-A still performs the CEM search in the space of action sequences, where the conditions of convergence for CEM are usually not met. Suppose, for instance, that a robot arm needs to go either left or right to get past an obstacle in the middle. For CEM planning in the action space, the theoretical distribution mean is to always go straight ahead, which fails to model this bi-modal action distribution. Indeed, planning in the action space is a non-convex optimization whose surface has lots of holes and peaks. Recently, much research progress has been made in understanding why deep neural networks are much less likely to get stuck in sub-optimal points (Nguyen & Hein, 2017; Li et al., 2018; Soudry & Hoffer, 2017), and we believe that planning in the parameter space is essentially optimizing through a deeper neural network. Therefore, we propose model-based policy planning in parameter space (POPLIN-P).\nInstead of adding noise in the action space, POPLIN-P adds noise in the parameter space of the policy network. We denote the parameter vector of the policy network as θ, and the parameter noise sequence starting from time-step i as ω_i = {ω_i, ω_{i+1}, ..., ω_{i+τ}}. The expected reward function is now\nR(s_i, ω_i) = E[Σ_{t=i}^{i+τ} r(s_t, π_{θ+ω_t}(s_t))], where s_{t+1} = f_φ(s_{t+1}|s_t, π_{θ+ω_t}(s_t)).   (5)\nSimilarly, we update the CEM distribution towards the following elite distribution:\nΣ′ ← Cov({ω_i^0, ω_i^1, ..., ω_i^ξ}), µ′ ← Mean({ω_i^0, ω_i^1, ..., ω_i^ξ}).   (6)\nWe can force the policy network noise within the sequence to be consistent, i.e. ω_i = ω_{i+1} = ... = ω_{i+τ}, which we name POPLIN-P-Uni. This reduces the size of the flattened noise vector from (τ + 1)|θ| to |θ|, and leads to more consistent policy behaviors. The noise can also be separate for each time-step, which we name POPLIN-P-Sep. We benchmark both schemes in Section 5.4; a sketch of the rollout they share is given at the end of this subsection.\nEquivalence to a re-parameterized stochastic policy: A stochastic policy network encourages exploration and increases the robustness against the impact of compounded model errors. POPLIN-P, which injects exogenous noise into the parameter space, can be regarded as a re-parameterized stochastic policy network, which naturally combines a stochastic policy with planning.
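A minimal sketch of the rollout objective that CEM optimizes in POPLIN-P (Eq. 5). The call policy(s, params=theta + omega) denotes a hypothetical functional interface that runs the policy network with perturbed weights; the flattening conventions are illustrative.

```python
import numpy as np

def make_poplin_p_objective(s0, policy, theta, dynamics, reward, horizon,
                            separate_noise=True):
    """Returns the rollout-reward function that cem_plan optimizes (Eq. 5)."""
    def rollout_reward(omega_flat):
        # POPLIN-P-Sep: one noise vector per time-step; POPLIN-P-Uni: shared noise.
        omega_seq = (omega_flat.reshape(horizon, -1) if separate_noise
                     else np.tile(omega_flat, (horizon, 1)))
        s, total = s0, 0.0
        for t in range(horizon):
            a = policy(s, params=theta + omega_seq[t])  # perturbed forward pass
            total += reward(s, a)
            s = dynamics(s, a)
        return total
    return rollout_reward
```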
" }, { "heading": "4.3 MODEL-PREDICTIVE CONTROL AND POLICY CONTROL", "text": "MBRL with online re-planning, i.e. model-predictive control (MPC), is effective but time-consuming. Many previous attempts have tried to distill the planned trajectories into a policy network (Levine & Abbeel, 2014; Levine & Koltun, 2013; Chebotar et al., 2017; Zhang et al., 2018) and to control only with the policy network. In this paper, we define two settings of using POPLIN: MPC control and policy control. In MPC control, the agent uses the policy network during online planning and only executes the first action. In policy control, the agent directly executes the signal produced by the policy network given the current observation, just as policy networks are used in MFRL algorithms. We show the performance of POPLIN in both settings in this paper." }, { "heading": "4.4 POLICY DISTILLATION SCHEMES", "text": "The agent iterates between interacting with the environment and distilling the knowledge from the planned trajectories into a policy network. We consider several policy distillation schemes here, and discuss their effectiveness in the experimental section.\nBehavior cloning (BC): BC can be applied to both POPLIN-A and POPLIN-P by minimizing the squared L2 loss in Equation 7, where D is the collection of observations and planned actions from the real environment. When applying BC to POPLIN-P, we fix the parameter noise of the network to zero:\nmin_θ E_{s,a∈D} ||π_θ(s) − a||^2.   (7)\nGenerative adversarial network training (GAN) (Goodfellow et al., 2014): GAN training can be applied to POPLIN-P. We rely on the following fact: during MPC control, the agent only needs the distribution of its action sequences to cover the best action sequence. Therefore, instead of point-to-point supervised training such as BC, we can train the policy network using a GAN:\nmin_{π_θ} max_ψ E_{s,a∈D} log(D_ψ(s, a)) + E_{s∈D, z∼N(0,σ_0 I)} log(1 − D_ψ(s, π_{θ+z}(s))),   (8)\nwhere a discriminator D parameterized by ψ is used, and we sample the random noise z from the initial CEM distribution N(0, σ_0 I).\nSetting parameter average (AVG): AVG is also applicable to POPLIN-P. During interaction with the real environment, we also record the optimized parameter noise into the dataset, i.e. D = {(s, ω)}. Here we sacrifice the effectiveness of policy control and only use the policy network as a good search initialization. The new parameter is updated as θ = θ + 1/|D| Σ_{ω∈D} ω." }, { "heading": "5 EXPERIMENTS", "text": "In Section 5.1, we compare POPLIN with existing algorithms. We also show the policy control performance of POPLIN with different training methods in Section 5.2. In Section 5.3, we provide explanations and analysis of the effectiveness of our proposed algorithms by exploring and visualizing the planner's reward optimization surface. In Section 5.4, we study the sensitivity of our algorithms with respect to hyper-parameters, and show the performance of different algorithm variants." }, { "heading": "5.1 MUJOCO BENCHMARKING PERFORMANCE", "text": "In this section, we compare POPLIN with existing reinforcement learning algorithms including PETS (Chua et al., 2018), GPS (Levine et al., 2016), RS (Richards, 2005), MBMF (Nagabandi et al., 2017), TD3 (Fujimoto et al., 2018), METRPO (Kurutach et al., 2018), PPO (Schulman et al., 2017; Heess et al., 2017), TRPO (Schulman et al., 2015) and SAC (Haarnoja et al., 2018), which covers the most recent progress of both model-free and model-based algorithms. We examine the algorithms on 12 environments, a wide collection of environments from OpenAI Gym (Brockman et al., 2016) and from PETS (Chua et al., 2018), summarized in Appendix A.2. Due to the page limit, and to better visualize the results, we put the complete figures and tables in Appendix A.3; in Figure 2 and Table 1, we show the performance of our algorithms and of the best performing baselines. The hyper-parameter search is summarized in Appendix A.3.1.\nAs shown in Table 1, POPLIN achieves state-of-the-art performance in almost all environments, solving most of them within 200,000 or 50,000 time-steps, instead of the 1 million time-steps commonly used in MFRL algorithms. POPLIN-A (POPLIN-A-BC-Replan) has the best performance in simpler environments such as Pendulum, Cart-pole and Swimmer. But in complex environments such as Ant, Cheetah or Hopper, POPLIN-A does not show an obvious performance gain over PETS. POPLIN-P (POPLIN-P-Sep-AVG), on the other hand, has consistent and stable performance across different environments. POPLIN-P is significantly better than all other algorithms in complex environments such as Ant and Cheetah. However, like other model-based algorithms, POPLIN cannot solve environments such as Walker and Humanoid: the performance of POPLIN plateaus
To be more specific, we show the performance with the Cheetah, Pendulum, Pusher and Reacher3D, as shown in Figure 3, and we refer readers to appendix A.4 for the full results.\nWe note that policy control is not always successful, and in environments such as Ant and Walker2D, the performance is almost random. In simple environments such as Pusher and Reacher3D, POPLIN-A has the best MPC performance, but has worse policy control performance compared with POPLIN-PBC and POPLIN-P-GAN. At the same time, both POPLIN-P-BC and POPLIN-P-GAN are able to efficiently distill the knowledge from planned trajectory. Which one of POPLIN-P-BC and POPLIN-PGAN is better depends on the environment tested, and they can be used interchangeably. This indicates that POPLIN-A, which uses a deterministic policy network, is more prone to distillation collapse than POPLIN-P, which can be interpreted as using a stochastic policy network with reparameterization trick. POPLIN-P-Avg, which only use policy network as optimization initialization has good MPC performance, but sacrifices the policy control performance. In general, the performance of policy control lags behind MPC control." }, { "heading": "5.3 SEARCH EFFECTIVENESS AND REWARD SURFACE", "text": "In this section, we explore the reasons for the effectiveness of POPLIN. In Figure 4, we show the performance of PETS, POPLIN-A and POPLIN-P with different population sizes. As we can see, PETS and POPLIN-A, which are the two algorithms that add search noise in the action space, cannot increase their performance by having bigger population size. However, POPLIN-P is able to efficiently increase performance with bigger population size. We then visualize the candidates in their reward or optimization surface in Figure 1. We use PCA (principal component analysis) to transform the action sequences into 2D features. As we can see, the reward surface is not smooth, with lots of local-minima and local-maxima islands. The CEM distribution of PETS algorithm is almost fixed across iterations on this surface, even if there are potentially higher reward regions. POPLIN is able\nto efficiently search through the jagged reward surface, from the low-reward center to the high reward left-down corner. To further understand why POPLIN is much better at searching through the reward surface, we then plot the figures in the solution space in Figure 5. More specifically, we now perform PCA on the policy parameters for POPLIN-P. As we can see in Figure 5 (c), the reward surface in parameter space is much smoother than the reward surface in action space, which are shown in Figure 5 (a), (b). POPLIN-P can efficiently search through the smoother reward surface in parameter space.\n2 0 2 x\n3 2 1 0 1 2 3 y PETS\n2 0 2 x\n3 2 1 0 1 2 3 y POPLIN-P 0Layer 0.00 0.04 0.08 0.12 0.16 0.20\n0.00\n0.04\n0.08\n0.12\n0.16\n0.20\n2 0 2 x\n3 2 1 0 1 2 3 y POPLIN-A 0.000 0.012 0.024 0.036 0.048 0.060\n2 0 2 x\n3 2 1 0 1 2 3 y POPLIN-P 0.00 0.04 0.08 0.12 0.16 0.20\nFigure 6: Projected action distribution.\nIn Figure 6, we also visualize the actions distribution in one episode taken by PETS, POPLIN-A and POPLINP using policy networks of different number of hidden layers. We again use PCA to project the actions into 2D feature space. As we can see, POPLIN-P shows a clear pattern of being more multi-modal with the use of deeper the network." 
}, { "heading": "5.4 ABLATION STUDY", "text": "In this section, we study how sensitive our algorithms are with respect to some of the crucial hyperparameters, for example, the initial variance of the CEM noise distribution. We also show the performance of different algorithm variants. The full ablation study and performance against different random seeds are included in appendix A.5. In Figure 7 (a), we show the performance of POPLIN-A using different training schemes. We try both training with only the real data samples, which we denote as \"Real\", and training also with imaginary data the agent plans into the future, which we denote as \"Hallucination\". In practice, POPLIN-A-Init performs better than POPLIN-A-Replan, which suggests that there can be divergent or overconfident update in POPLIN-A-Replan. And training with or without imaginary does not have big impact on the performance. In Figure7 (b) and (c), we also compare the performance of POPLIN-P-Uni with POPLIN-P-Sep, where we show that POPLIN-P-Sep has much better performance than POPLIN-P-Uni, indicating the search is not efficient enough in the constrained parameter space. For POPLIN-P-Avg, with bigger initial variance of the noise distribution, the agent gets better at planning. However, increasing initial noise variance does not increase the performance of PETS algorithm, as shown in 7 (b), (d). It is worth mentioning that POPLIN-P-GAN is highly sensitive to the entropy penalty we add to the discriminator, with the 3 curves in Figure7 (c) using entropy penalty of 0.003, 0.001 and 0.0001 respectively," }, { "heading": "6 CONCLUSIONS", "text": "In this paper, we explore efficient ways to combine policy networks with model-based planning. We propose POPLIN, which obtains state-of-the-art performance on the MuJoCo benchmarking environments. We study different distillation schemes to provide fast controllers during testing. More importantly, we formulate online planning as optimization using deep neural networks. We believe POPLIN will scale to more complex environments in the future." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 ALGORITHM DIAGRAMS", "text": "To better illustrate the algorithm variants of our proposed methods, we summarize them in Algorithm 2, 3, 4.\nAlgorithm 2 POPLIN-A-Init 1: Initialize policy network parameters θ, dynamics network parameters φ, data-set D 2: while Training iterations not Finished do 3: for ith time-step of the agent do . Sampling Data 4: Initialize reference action sequence {âi, âi+1, ..., âi+τ}. . Using Equation 3 5: Initialize action-sequence noise distribution. µ = µ0, Σ = σ20I 6: for jth CEM Update do . CEM Planning 7: Sample action noise sequences {δi} from N (µ,Σ). 8: for Every candidate δi do . Trajectory Predicting 9: for t = i to i+ τ , st+1 = fφ(st+1|st, at = ât + δt) 10: Evaluate expected reward of this candidate. 11: end for 12: Fit distribution of the elite candidates as µ′,Σ′. 13: Update noise distribution µ = (1− α)µ+ αµ′, Σ = (1− α)Σ + αΣ′ 14: end for 15: Execute the first action from the optimal candidate action sequence. 16: end for 17: Update φ using data-set D . Dynamics Update 18: Update θ using data-set D . Policy Distillation 19: end while\nAlgorithm 3 POPLIN-A-Replan 1: Initialize policy network parameters θ, dynamics network parameters φ, data-set D 2: while Training iterations not Finished do 3: for ith time-step of the agent do . Sampling Data 4: Initialize action-sequence noise distribution. µ = µ0, Σ = σ20I 5: for jth CEM Update do . 
CEM Planning 6: Sample action noise sequences {δi} from N (µ,Σ). 7: for Every candidate δi do . Trajectory Predicting 8: for t = i to i+τ , st+1 = fφ(st+1|st, at = πθ(st) + δt) 9: Evaluate expected reward of this candidate. 10: end for 11: Fit distribution of the elite candidates as µ′, Σ′. 12: Update noise distribution µ = (1−α)µ + αµ′, Σ = (1−α)Σ + αΣ′ 13: end for 14: Execute the first action from the optimal candidate action sequence. 15: end for 16: Update φ using data-set D . Dynamics Update 17: Update θ using data-set D . Policy Distillation 18: end while" }, { "heading": "A.2 BENCH-MARKING ENVIRONMENTS", "text": "In the original PETS paper Chua et al. (2018), the authors only experiment with 4 environments, namely Reacher3D, Pusher, Cartpole and Cheetah. In this paper, we experiment with 9 more environments based on the standard bench-marking environments from OpenAI Gym Brockman et al. (2016). More specifically, we experiment with InvertedPendulum, Acrobot, Pendulum, Ant, Hopper, Swimmer, and Walker2d. We also note that the Cheetah environment in PETS Chua et al. (2018) is different from the standard HalfCheetah-v1 in OpenAI Gym. Therefore we experiment with both versions in our paper, where the Cheetah from PETS is named \"Cheetah\", and the HalfCheetah from OpenAI Gym is named \"Cheetah-v0\".\nAlgorithm 4 POPLIN-P 1: Initialize policy network parameters θ, dynamics network parameters φ, data-set D 2: while Training iterations not Finished do 3: for ith time-step of the agent do . Sampling Data 4: Initialize parameter-sequence noise distribution. µ = µ0, Σ = σ0^2 I 5: for jth CEM Update do . CEM Planning 6: Sample parameter noise sequences {ωi} from N (µ,Σ). 7: for Every candidate ωi do . Trajectory Predicting 8: for t = i to i+τ , st+1 = fφ(st+1|st, at = πθ+ωt(st)) 9: Evaluate expected reward of this candidate. 10: end for 11: Fit distribution of the elite candidates as µ′, Σ′. 12: Update noise distribution µ = (1−α)µ + αµ′, Σ = (1−α)Σ + αΣ′ 13: end for 14: Execute the first action from the optimal candidate action sequence. 15: end for 16: Update φ using data-set D . Dynamics Update 17: Update θ using data-set D . Policy Distillation 18: end while\nEmpirically, Cheetah is much easier to solve than Cheetah-v0, as shown in Table 2 and Table 4. We also include two swimmer environments, which we name Swimmer and Swimmer-v0, and which we explain in Section A.2.1." }, { "heading": "A.2.1 FIXING THE SWIMMER ENVIRONMENTS", "text": "We also notice that after an update in the Gym environments, the swimmer became unsolvable for almost all algorithms. The reward threshold for solving is around 340 for the original swimmer, but almost all algorithms, including the results shown in many published papers Schulman et al. (2017), will be stuck at the 130-reward local-minimum. We note that this is due to the fact that the velocity sensor is on the neck of the swimmer, making the swimmer extremely prone to this performance local-minimum. We provide a fixed swimmer, which we name Swimmer, by moving the sensor from the neck to the head. We believe this modification is necessary to test the effectiveness of the algorithms." }, { "heading": "A.3 FULL RESULTS OF BENCH-MARKING PERFORMANCE", "text": "In this section, we show the figures of all the environments in Figure 8. We also include the final performance in Tables 2 and 4. As we can see, POPLIN consistently has the best performance in almost all the environments. 
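As a companion to the pseudocode in Algorithms 2-4 above, the following is a minimal numpy sketch of the shared CEM inner loop: sample noise candidates, score them with model-based rollouts, refit the sampling distribution to the elites, and soft-update (µ, Σ) with rate α. The `evaluate_candidate` function (a learned-dynamics rollout returning expected reward) and the elite fraction are hypothetical stand-ins, not the authors' implementation.

```python
# Minimal sketch of the CEM inner loop shared by Algorithms 2-4.
import numpy as np

def cem_plan(evaluate_candidate, dim, iters=5, pop_size=500,
             elite_frac=0.1, alpha=0.1, init_sigma=0.1):
    mu, sigma = np.zeros(dim), init_sigma * np.ones(dim)
    n_elite = max(1, int(pop_size * elite_frac))
    for _ in range(iters):
        # Sample candidate noise sequences from N(mu, diag(sigma^2)).
        noise = mu + sigma * np.random.randn(pop_size, dim)
        rewards = np.array([evaluate_candidate(w) for w in noise])
        elites = noise[np.argsort(rewards)[-n_elite:]]  # top candidates
        # Soft-update the distribution toward the elite statistics.
        mu = (1 - alpha) * mu + alpha * elites.mean(axis=0)
        sigma = (1 - alpha) * sigma + alpha * elites.std(axis=0)
    # Optimal noise: action noise for POPLIN-A, parameter noise for POPLIN-P.
    return mu
```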
We also include the time-steps we use on each environment for all the algorithms in Tables 2 and 4." }, { "heading": "A.3.1 HYPER-PARAMETERS", "text": "In this section, we introduce the hyper-parameters we search over during the experiments. One thing to notice is that, for all of the experiments on PETS and POPLIN, we use the model type PE (probabilistic ensembles) and the propagation method E (expectation). While other combinations of model type and propagation method might result in better performance, they are usually prohibitively computationally expensive. For example, the combination PE-DS requires a training time of about 68 hours for one random seed, for PETS to train with 200 iterations, which is 200,000 time-steps. As a matter of fact, PE-E is actually one of the best combinations in many environments. Since POPLIN is based on PETS, we believe this is a fair comparison for all the algorithms.\nWe show the hyper-parameter search we perform for PETS in the paper in Table 5. For the hyper-parameters specific to POPLIN, we summarize them in Tables 6 and 7." }, { "heading": "A.4 FULL RESULTS OF POLICY CONTROL", "text": "Due to the space limit, we are not able to put all of the results of policy control in the main article. More specifically, we add the figure for the original Cheetah-v0 compared to the figures shown in the main article, as can be seen in Figure 9 (b). Again, we note that POPLIN-P-BC and POPLIN-P-GAN are comparable to each other, as mentioned in the main article. POPLIN-P-BC and POPLIN-P-GAN are the better algorithms in Cheetah and Cheetah-v0, respectively, which are essentially the same environment with different observation functions." }, { "heading": "A.5 ABLATION STUDY FOR DIFFERENT VARIANTS OF POPLIN", "text": "In this section, we show the results of different variants of our algorithm. In Figure 11, the performances of different random seeds are visualized, where we show that POPLIN has similar randomness in performance to PETS. Additionally, we visualize POPLIN-P-BC in Figure 10 (b), whose best distribution variance for policy planning is 0.01, while the best setting for testing is 0.03." }, { "heading": "A.6 POPULATION SIZE", "text": "In Figure 12, we include more detailed figures of the performance of different algorithms with different population sizes. One interesting finding is that even with fixed parameters of zeros, POPLIN-P can still perform a very efficient search. This indicates that the efficiency in optimization of POPLIN-P, especially of POPLIN-P-Avg, is the key reason for successful planning. However, this scheme naturally sacrifices the policy distillation and thus cannot be applied without planning." }, { "heading": "A.7 THE REWARD SURFACE OF DIFFERENT ALGORITHMS", "text": "In this section, we provide a more detailed description of the reward surface with respect to the solution space (action space for PETS and POPLIN-A, and parameter space for POPLIN-P) in Figures 13, 14, 15, 16, and 17. As we can see, variants of POPLIN-A are better at searching, but the reward surface is still not smooth. POPLIN-A-Replan is more efficient in searching than POPLIN-A-Init, but the errors in dynamics limit its performance. We also include the results for POPLIN-P using a 1-layer neural network in solution space in Figure 16 (g), (h). The results indicate that the deeper the network, the better the search efficiency.\nWe also provide a more detailed version of Figure 1 in Figure 18. We respectively show the surface for PETS and POPLIN-P using 1 and 0 hidden layers. 
Their planned trajectories across different CEM updates are visualized in Figures 19, 20, and 21. Originally, in Figure 1, we use the trajectories in iterations 1, 3, and 5 for better illustration. In the appendix, we also provide the data for all iterations. Again, the color indicates the expected cost (the negative of the expected reward). From left to right, we show the updated trajectories in each iteration with blue scatter points." } ]
2020
EXPLORING MODEL-BASED PLANNING WITH POLICY NETWORKS
SP:28c833ad9939bcc4e355254536b610da50731d76
[ "This paper is concerned with network multi-agent RL (N-MARL), where agents need to update their policy based on messages obtained only from neighboring nodes. This is done under sensible restrictions on the state transition distribution, which can be claimed to hold true in realistic networked settings. The authors argue that introducing a spatial discount factor (along a temporal one), where neighboring nodes have a small distance, stabilizes learning. Also, they provide a way of learning a networked communication protocol. Experiments are done on somewhat realistic simulations of traffic.", "The authors use decentralized MARL for networked system control. Each agent might control a traffic light (exp 1) or a car in traffic (exp 2). Some features of their approach are a spatial Markov assumption (only neighborhood matters), a spatial discount factor, and NeurComm: a general message passing scheme between agent policies. The authors compare their method with CommNet (averages messages before broadcast), DIAL (small-scale direct communication), etc." ]
This paper considers multi-agent reinforcement learning (MARL) in networked system control. Specifically, each agent learns a decentralized control policy based on local observations and messages from connected neighbors. We formulate such a networked MARL (NMARL) problem as a spatiotemporal Markov decision process and introduce a spatial discount factor to stabilize the training of each local agent. Further, we propose a new differentiable communication protocol, called NeurComm, to reduce information loss and non-stationarity in NMARL. Based on experiments in realistic NMARL scenarios of adaptive traffic signal control and cooperative adaptive cruise control, an appropriate spatial discount factor effectively enhances the learning curves of non-communicative MARL algorithms, while NeurComm outperforms existing communication protocols in both learning efficiency and control performance.
[ { "affiliations": [], "name": "Tianshu Chu" }, { "affiliations": [], "name": "Sandeep Chinchali" } ]
[ { "authors": [ "Masako Bando", "Katsuya Hasebe", "Akihiro Nakayama", "Akihiro Shibata", "Yuki Sugiyama" ], "title": "Dynamical model of traffic congestion and numerical simulation", "venue": "Physical review E,", "year": 1995 }, { "authors": [ "Tianshu Chu", "Jie Wang", "Lara Codecà", "Zhaojian Li" ], "title": "Multi-agent deep reinforcement learning for large-scale traffic signal control", "venue": "IEEE Transactions on Intelligent Transportation Systems,", "year": 2019 }, { "authors": [ "Jakob Foerster", "Ioannis Alexandros Assael", "Nando de Freitas", "Shimon Whiteson" ], "title": "Learning to communicate with deep multi-agent reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Jakob Foerster", "Nantas Nardelli", "Gregory Farquhar", "Philip Torr", "Pushmeet Kohli", "Shimon Whiteson" ], "title": "Stabilising experience replay for deep multi-agent reinforcement learning", "venue": "arXiv preprint arXiv:1702.08887,", "year": 2017 }, { "authors": [ "Jakob N Foerster", "Gregory Farquhar", "Triantafyllos Afouras", "Nantas Nardelli", "Shimon Whiteson" ], "title": "Counterfactual multi-agent policy gradients", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Jayesh K Gupta", "Maxim Egorov", "Mykel Kochenderfer" ], "title": "Cooperative multi-agent control using deep reinforcement learning", "venue": "In International Conference on Autonomous Agents and Multiagent Systems,", "year": 2017 }, { "authors": [ "Jiechuan Jiang", "Zongqing Lu" ], "title": "Learning attentional communication for multi-agent cooperation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "I Ge Jin", "Gábor Orosz" ], "title": "Dynamics of connected vehicle systems with delayed acceleration feedback", "venue": "Transportation Research Part C: Emerging Technologies,", "year": 2014 }, { "authors": [ "Xiangyu Kong", "Bo Xin", "Fangchen Liu", "Yizhou Wang" ], "title": "Revisiting the master-slave architecture in multi-agent deep reinforcement learning", "venue": "arXiv preprint arXiv:1712.07305,", "year": 2017 }, { "authors": [ "Daniel Krajzewicz", "Jakob Erdmann", "Michael Behrisch", "Laura Bieker" ], "title": "Recent development and applications of SUMO - Simulation of Urban MObility", "venue": "International Journal On Advances in Systems and Measurements,", "year": 2012 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "arXiv preprint arXiv:1509.02971,", "year": 2015 }, { "authors": [ "Ryan Lowe", "Yi Wu", "Aviv Tamar", "Jean Harb", "OpenAI Pieter Abbeel", "Igor Mordatch" ], "title": "Multi-agent actor-critic for mixed cooperative-competitive environments", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement 
learning", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Shayegan Omidshafiei", "Jason Pazis", "Christopher Amato", "Jonathan P How", "John Vian" ], "title": "Deep decentralized multi-task multi-agent rl under partial observability", "venue": "arXiv preprint arXiv:1703.06182,", "year": 2017 }, { "authors": [ "Peng Peng", "Ying Wen", "Yaodong Yang", "Quan Yuan", "Zhenkun Tang", "Haitao Long", "Jun Wang" ], "title": "Multiagent bidirectionally-coordinated nets: Emergence of human-level coordination in learning to play starcraft combat games", "venue": "arXiv preprint arXiv:1703.10069,", "year": 2017 }, { "authors": [ "Junjie Qin", "Yinlam Chow", "Jiyan Yang", "Ram Rajagopal" ], "title": "Distributed online modified greedy algorithm for networked storage operation under uncertainty", "venue": "IEEE Transactions on Smart Grid,", "year": 2016 }, { "authors": [ "Chao Qu", "Shie Mannor", "Huan Xu", "Yuan Qi", "Le Song", "Junwu Xiong" ], "title": "Value propagation for decentralized networked deep multi-agent reinforcement learning", "venue": null, "year": 1901 }, { "authors": [ "Amanpreet Singh", "Tushar Jain", "Sainbayar Sukhbaatar" ], "title": "Learning when to communicate at scale in multiagent cooperative and competitive tasks", "venue": "arXiv preprint arXiv:1812.09755,", "year": 2018 }, { "authors": [ "Sainbayar Sukhbaatar", "Rob Fergus" ], "title": "Learning multiagent communication with backpropagation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: an introduction", "venue": "Neural Networks, IEEE Transactions on,", "year": 1998 }, { "authors": [ "Yong Xu", "Renquan Lu", "Peng Shi", "Hongyi Li", "Shengli Xie" ], "title": "Finite-time distributed state estimation over sensor networks with round-robin protocol and fading channels", "venue": "IEEE transactions on cybernetics,", "year": 2018 }, { "authors": [ "Yaodong Yang", "Rui Luo", "Minne Li", "Ming Zhou", "Weinan Zhang", "Jun Wang" ], "title": "Mean field multi-agent reinforcement learning", "venue": "arXiv preprint arXiv:1802.05438,", "year": 2018 }, { "authors": [ "Kaiqing Zhang", "Zhuoran Yang", "Han Liu", "Tong Zhang", "Tamer Başar" ], "title": "Fully decentralized multiagent reinforcement learning with networked agents", "venue": "arXiv preprint arXiv:1802.08757,", "year": 2018 } ]
[ { "heading": null, "text": "This paper considers multi-agent reinforcement learning (MARL) in networked system control. Specifically, each agent learns a decentralized control policy based on local observations and messages from connected neighbors. We formulate such a networked MARL (NMARL) problem as a spatiotemporal Markov decision process and introduce a spatial discount factor to stabilize the training of each local agent. Further, we propose a new differentiable communication protocol, called NeurComm, to reduce information loss and non-stationarity in NMARL. Based on experiments in realistic NMARL scenarios of adaptive traffic signal control and cooperative adaptive cruise control, an appropriate spatial discount factor effectively enhances the learning curves of non-communicative MARL algorithms, while NeurComm outperforms existing communication protocols in both learning efficiency and control performance." }, { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL), formulated as a Markov decision process (MDP), is a promising data-driven approach for learning adaptive control policies (Sutton & Barto, 1998). Recent advances in deep neural networks (DNNs) further enhance its learning capacity on complex tasks. Successful algorithms include deep Q-network (DQN) (Mnih et al., 2015), deep deterministic policy gradient (DDPG) (Lillicrap et al., 2015), and advantage actor critic (A2C) (Mnih et al., 2016). However, RL is not scalable in many real-world control problems. This scalability issue is addressed in multi-agent RL (MARL), where each agent learns its individual policy from only local observations. However, MARL introduces new challenges in model training and execution, due to non-stationarity and partial observability in a decentralized MDP from the viewpoint of each agent. To address these challenges, various learning methods and communication protocols are proposed to stabilize training and improve observability.\nThis paper considers networked MARL (NMARL) in the context of networked system control (NSC), where agents are connected via a communication network for a cooperative control objective. Each agent performs decentralized control based on its local observations and messages from connected neighbors. NSC is extensively studied and widely applied. Examples include connected vehicle control (Jin & Orosz, 2014), traffic signal control (Chu et al., 2019), distributed sensing (Xu et al., 2018), and networked storage operation (Qin et al., 2016). We expect an increasing trend of NMARL based controllers in the near future, after the development of advanced communication technologies such as 5G and Internet-of-Things.\nRecent works studied decentralized NMARL under assumptions of global observations and local rewards (Zhang et al., 2018; Qu et al., 2019), which are reasonable in multi-agent gaming but not suitable in NSC. First, the control infrastructures are distributed in a wide region, so collecting global observations in execution increases communication delay and failure rate, and hurts the robustness. Second, online learning is not common due to safety and efficiency concerns. Rather, each model is trained offline and tested extensively before field deployment. In online execution, the model only runs forward propagation, and its performance is constantly monitored for triggering re-training. 
To reflect these practical constraints in NSC, we assume 1) each agent is connected to a limited number of neighbors and communication is restricted to its neighborhood, and 2) training is offline and global information is available in rollout training minibatches, despite a decentralized training process.\nThe contributions of this paper are three-fold. First, we formulate NMARL under the aforementioned NSC assumptions as a decentralized spatiotemporal MDP, and introduce a spatial discount factor to stabilize training, especially for non-communicative algorithms. Second, we propose a new neural communication protocol, called NeurComm, to adaptively share information on both system states and agent behaviors. Third, we design and simulate realistic NMARL environments to evaluate and compare our approaches against recent MARL baselines.¹" }, { "heading": "2 RELATED WORK", "text": "MARL works can be classified into four groups based on their communication methods. The first group is non-communicative and focuses on stabilizing training with advanced value estimation methods. In MADDPG, each action-value is estimated by a centralized critic based on global observations and actions (or inferred actions) (Lowe et al., 2017). COMA extends the same idea to A2C and estimates each advantage using a centralized critic and a counterfactual baseline (Foerster et al., 2018). In Dec-HDRQN (Omidshafiei et al., 2017) and PS-TRPO (Gupta et al., 2017), the centralized critic takes local observations, but the parameters are shared globally. In the NMARL work of Zhang et al. (2018), the critic is fully decentralized, but each critic takes global observations and performs consensus updates. In this paper, we empirically confirm that a spatial discount factor helps stabilize the training of non-communicative algorithms under neighborhood observation.\nThe second group considers heuristic communication protocols or direct information sharing. Foerster et al. (2017) shows performance gains with directly shared low-dimensional policy fingerprints from other agents. Similarly, mean field MARL takes the average of neighbor policies for informed action-value estimation (Yang et al., 2018). The major disadvantage of this group is that, unlike NeurComm, the communication is not explicitly designed for performance optimization, which may cause inefficient and redundant communications in execution.\nThe third group proposes learnable communication protocols. In DIAL, the message is generated together with action-value estimation by each DQN agent, then it is encoded and summed with other input signals at the receiver side (Foerster et al., 2016). CommNet is a more general communication protocol, but it calculates the mean of all messages instead of encoding them (Sukhbaatar et al., 2016). Both works, especially CommNet, incur an information loss due to aggregation on input signals. Another collection of works focuses on communications in strategy games. In BiCNet (Peng et al., 2017), a bi-directional RNN is used to enable flat communication among agents, while in MasterSlave (Kong et al., 2017), two-way message passing is utilized in a hierarchical RNN architecture of master and slave agents. In contrast to existing protocols, NeurComm 1) encodes and concatenates signals, instead of aggregating them, to minimize information loss, and 2) includes policy fingerprints in communication to reduce non-stationarity.\nThe fourth group focuses on communication attentions to selectively send messages. 
ATOC (Jiang & Lu, 2018) learns a soft attention which allocates a communication probability to each other agent, while IC3Net (Singh et al., 2018) learns a hard binary attention which decides communicating or not. These works are especially useful when each agent has to prioritize the communication targets. NMARL is less likely the case since the communication range is restricted to small neighborhoods." }, { "heading": "3 SPATIOTEMPORAL RL", "text": "This section formulates the NMARL problem as a decentralized spatiotemporal MDP, and introduces the spatial discount factor to reduce its learning difficulty. To simplify the notation, we assume the true system state is observable, and use “state” and “observation” interchangeably. This does not affect the validity of proposed methods in practice. To save space, all proofs are deferred to A.\n1Code link: https://github.com/cts198859/deeprl_network." }, { "heading": "3.1 NETWORKED MARL", "text": "The networked system is represented by a graph G(V, E) where i ∈ V is each agent and ij ∈ E is each communication link. The corresponding MDP is characterized as (G, {Si,Ai}i∈V , p, r) where Si and Ai are the local state space and action space of agent i. Let S := ×i∈VSi and A := ×i∈VAi be the global state space and action space, MDP transitions follow a stationary probability distribution p : S × A × S → [0, 1], and global step rewards be denoted by r : S × A → R. In a multi-agent MDP, each agent i follows a decentralized policy πi : Si × Ai → [0, 1] to chose its own action ai,t ∼ πi(·|si,t) at time t. The MDP objective is to maximize E[Rπ0 ], where Rπt = ∑T τ=t γ\nτ−trτ is the long-term global return with discount factor γ. Here the expectation is taken over the global policy π : S×A → [0, 1], the initial distribution st ∼ ρ, and the transition sτ+1 ∼ p(·|sτ , aτ ), regarding the step reward rτ = r(sτ , aτ ), ∀τ < T , and the terminal reward rT = rT (sT ) 2. The same system can be formulated as a centralized MDP. Defining V π(s) = E[Rπt |st = s] as the state-value function and Qπ(s, a) = E[Rπt |st = s, at = a] as the action-value function, we have E[Rπ0 ] = ∑ s∈S ρ(s)V\nπ(s), V π(s) = ∑ a∈A π(a|s)Qπ(s, a), and the advantage function Aπ(s, a) = Qπ(s, a)− V π(s).\nMARL provides a scalable solution for controlling networked systems, but it introduces partial observability and non-stationarity in decentralized MDP of each agent, leading to inefficient and unstable learning performance. To see this, note si,t ∈ Si ⊆ S does not provide sufficient information for πi. Even assuming si,t = st, the transition pi(si,t+1|si,t, ai,t) = ∑ a−i,t∈A−i π−i(a−i,t|st) · p(st+1|st, ai,t, a−i,t) is non-stationary if the behavior policies of other agents π−i := {πj}j∈V\\{i} are evolving over time. In this paper, we enforce practical constraints and only allow local observations and neighborhood communications, which makes MARL even more challenging. Definition 3.1 (Networked Multi-agent MDP with Neighborhood Communication). In a networked cooperative multi-agent MDP (G, {Si,Ai}i∈V , {Mij}ij∈E , p, {ri}i∈V) with the message spaceM, the global reward is defined as r = 1|V| ∑ i∈V ri. All local rewards are shared globally, whereas the communication is limited to neighborhoods, that is, each agent i observes s̃i,t := si,t ∪mNii,t. Here Ni := {j ∈ V|ji ∈ E}, mNii,t := {mji,t}j∈Ni , and each message mji,t ∈Mji is derived from all the available information at that neighbor." }, { "heading": "3.2 SPATIOTEMPORAL RL", "text": "Definition 3.2 (Spatiotemporal MDP). 
We assume local transitions are independent of other agents given the neighboring agents, that is,\n$$p_i(s_{i,t+1} \mid s_{\mathcal{V}_i,t}, a_{i,t}) = \sum_{a_{\mathcal{N}_i,t} \in \mathcal{A}_{\mathcal{N}_i}} \prod_{j \in \mathcal{N}_i} \pi_j(a_{j,t} \mid \tilde{s}_{j,t}) \cdot p(s_{i,t+1} \mid s_{\mathcal{V}_i,t}, a_{i,t}, a_{\mathcal{N}_i,t}), \qquad (1)$$\nwhere $\mathcal{V}_i := \mathcal{N}_i \cup \{i\}$ is the closed neighborhood, and p is abused to denote any stationary transition. Then from the viewpoint of each agent i, Definition 3.1 is equivalent to a decentralized spatiotemporal MDP, characterized as $(\mathcal{S}_i, \mathcal{A}_i, \{\mathcal{M}_{ji}\}_{j \in \mathcal{N}_i}, p_i, \{r_i\}_{i \in \mathcal{V}})$, by optimizing the discounted return\n$$R^\pi_{i,t} = \sum_{\tau=t}^{T} \gamma^{\tau-t} \sum_{j \in \mathcal{V}} \alpha^{d_{ij}} r_{j,\tau}, \qquad (2)$$\nwhere 0 ≤ α ≤ 1 is the spatial discount factor, and $d_{ij}$ is the distance between agents i and j.\nThe major assumption in Definition 3.2 is that the Markovian property holds both temporally and spatially, so that the next local state depends on the neighborhood states and policies only. This assumption is valid in most networked control systems such as traffic and wireless networks, as well as the power grid, where the impact of each agent is spread over the entire system via controlled flows, or chained local transitions. Note in NSC, each agent is connected to a limited number of neighbors (the degree of G is low). So the spatiotemporal MDP is decentralized during model execution, and it naturally extends the properties of an MDP. To reduce the learning difficulty of the spatiotemporal MDP, a spatiotemporally discounted return is introduced in Eq. (2) to scale down reward signals further away (which are more difficult to fit using local information). When α → 0, each agent performs local greedy control; when α → 1, each agent performs global coordination and $R^\pi_{i,t} = R^\pi_t$, ∀i ∈ V. Further, we have $Q^\pi_i(s, a) = Q^\pi_i(s, a_{\mathcal{V}_i}) = \mathbb{E}[R^\pi_{i,t} \mid s_t = s, a_{\mathcal{V}_i,t} = a_{\mathcal{V}_i}]$, and $V^\pi_i(s, a_{-i}) = V^\pi_i(s, a_{\mathcal{N}_i}) = \sum_{a_i \in \mathcal{A}_i} \pi_i(a_i \mid \tilde{s}_i) Q^\pi_i(s, a_{\mathcal{V}_i})$, since the immediate local reward of each agent is only affected by controls within its closed neighborhood.\n²In an infinite MDP, $r_T(s) = \mathbb{E}\big[\sum_{t=T}^{\infty} \gamma^{t-T} r_t \mid s_T = s\big]$.\nNow we assume each agent is A2C, with parametric models $\pi_{\theta_i}(\tilde{s}_i)$ and $V_{\omega_i}(\tilde{s}_i, a_{\mathcal{N}_i})$ for fitting the optimal policy $\pi^*_i$ and the value function $V^{\pi_i}$. Note if $\tilde{s}_i$ is able to provide global information through cascaded neighborhood communications, both $\pi_{\theta_i}$ and $V_{\omega_i}$ are able to fit the return $R^\pi_{i,t}$. Also, global and future information, such as $R^\pi_{i,\tau}$ and $a_{\mathcal{N}_i,\tau}$, are always available from each rollout minibatch in offline training. In contrast, only local information $\tilde{s}_{i,t}$ is allowed in online execution of policy $\pi_{\theta_i}$. Proposition 3.1 (Spatiotemporal RL with A2C). Let $\{\pi_{\theta_i}\}_{i \in \mathcal{V}}$ and $\{V_{\omega_i}\}_{i \in \mathcal{V}}$ be the decentralized actor-critics, and $\{(s_{i,\tau}, m_{\mathcal{N}_i i,\tau}, a_{i,\tau}, r_{i,\tau})\}_{i \in \mathcal{V}, \tau \in B}$ be the on-policy minibatch from spatiotemporal MDPs under stationary policies $\{\pi_{\theta_i}\}_{i \in \mathcal{V}}$. Then each actor and critic are updated by the losses\n$$L(\theta_i) = \frac{1}{|B|} \sum_{\tau \in B} \Big( -\log \pi_{\theta_i}(a_{i,\tau} \mid \tilde{s}_{i,\tau}) \hat{A}^\pi_{i,\tau} + \beta \sum_{a_i \in \mathcal{A}_i} \pi_{\theta_i}(a_i \mid \tilde{s}_{i,\tau}) \log \pi_{\theta_i}(a_i \mid \tilde{s}_{i,\tau}) \Big), \qquad (3)$$\n$$L(\omega_i) = \frac{1}{|B|} \sum_{\tau \in B} \Big( \hat{R}^\pi_{i,\tau} - V_{\omega_i}(\tilde{s}_{i,\tau}, a_{\mathcal{N}_i,\tau}) \Big)^2, \qquad (4)$$\nwhere $\hat{A}^\pi_{i,\tau} = \hat{R}^\pi_{i,\tau} - v_{i,\tau}$ is the estimated advantage, $\hat{R}^\pi_{i,\tau} = \sum_{\tau'=\tau}^{\tau_B - 1} \gamma^{\tau'-\tau} \big( \sum_{j \in \mathcal{V}} \alpha^{d_{ij}} r_{j,\tau'} \big) + \gamma^{\tau_B - \tau} v_{i,\tau_B}$ is the sampled action-value, $v_{i,\tau} = V_{\omega_i^-}(\tilde{s}_{i,\tau}, a_{\mathcal{N}_i,\tau})$ is the estimated state-value, and β is the coefficient of the entropy loss." }, { "heading": "4 SPATIOTEMPORAL RL WITH NEURAL COMMUNICATION", "text": "For efficient and adaptive information sharing, we propose a new communication protocol called NeurComm. To simplify the notation, we assume all messages sent from agent i are identical, i.e., $m_{ij} = m_i$, ∀j ∈ Ni. 
Then\n$$h_{i,t} = g_{\nu_i}\big(h_{i,t-1},\, e_{\lambda^s_i}(s_{\mathcal{V}_i,t}),\, e_{\lambda^p_i}(\pi_{\mathcal{N}_i,t-1}),\, e_{\lambda^h_i}(h_{\mathcal{N}_i,t-1})\big), \qquad (5)$$\nwhere $h_{i,t}$ is the hidden state (or the belief) of each agent and $e_{\lambda_i}$ and $g_{\nu_i}$ are differentiable message encoding and extracting functions.³ To avoid dilution of state and policy information (the former is for improving observability while the latter is for reducing non-stationarity), state and policy are explicitly included in the message besides the agent belief, i.e., $m_{i,t} = s_{i,t} \cup \pi_{i,t-1} \cup h_{i,t-1}$, or $\tilde{s}_{i,t} := s_{\mathcal{V}_i,t} \cup \pi_{\mathcal{N}_i,t-1} \cup h_{\mathcal{N}_i,t-1}$ as in Eq. (5). Note the communication phase is prior-decision, so only $h_{i,t-1}$ and $\pi_{i,t-1}$ are available. This protocol can be easily extended for multi-pass communication: $h^{(k)}_{i,t} = g_{\nu^{(k)}_i}\big(h^{(k-1)}_{i,t}, e_{\lambda^s_i}(s_{\mathcal{V}_i,t}), e_{\lambda^p_i}(\pi_{\mathcal{N}_i,t-1}), e_{\lambda^h_i}(h^{(k-1)}_{\mathcal{N}_i,t})\big)$, where $h^{(0)}_{i,t} = h_{i,t-1}$, and k denotes each of the communication passes. The communication attentions can be integrated either at the sender as $\mu_{i,t}(m_{i,t})$, or at the receiver as $\mu_{i,t}(m_{\mathcal{N}_i,t})$. Replacing the input ($\tilde{s}_{i,t}$) of Eq. (3)(4) with the belief ($h_{i,t}$), the actor and critic become $\pi_{\theta_i}(\cdot|h_{i,t})$ and $V_{\omega_i}(h_{i,t}, a_{\mathcal{N}_i,t})$, and the frozen estimations are $\pi_{i,t}$ and $v_{i,t}$, respectively.\n³Additional cell state needs to be maintained if an LSTM is used.\nProposition 4.1 (Neighborhood Neural Communication). In spatiotemporal RL with neighborhood NeurComm, each agent utilizes the delayed global information to learn its belief, and it learns the message to optimize the control performance of all other agents.\nNeurComm-enabled MARL can be represented using a single meta-DNN since all agents are connected by differentiable communication links, and $\tilde{s}_i$ are the intermediate outputs after the communication layers. Fig. 1a illustrates the forward propagations inside each individual agent and Fig. 1b shows the broader multi-step spatiotemporal propagations. Note the gradient propagation of this meta-DNN is decentralized based on each local loss signal. As time advances, the involved parameters in each propagation expand spatially in the meta-DNN, due to the cascaded neighborhood communication. To see this mathematically, $\pi_{\theta_i,t}(\cdot|h_{i,t}) = \pi_{\tilde{\theta}_{i,t}}(\cdot|s_{\mathcal{V}_i,t}, \pi_{\mathcal{N}_i,t-1})$, with $\tilde{\theta}_{i,t} = \{\lambda_i, \nu_i, \theta_i\}$; while $\pi_{\theta_i,t+1}(\cdot|h_{i,t+1}) = \pi_{\tilde{\theta}_{i,t+1}}(\cdot|s_{\mathcal{V}_i,t+1}, \pi_{\mathcal{N}_i,t}, \{s_{\mathcal{N}_j,t}, \pi_{\mathcal{N}_j,t-1}\}_{j \in \mathcal{N}_i})$, with $\tilde{\theta}_{i,t+1} = \{\lambda_j, \nu_j\}_{j \in \mathcal{N}_i} \cup \{\lambda_i, \nu_i, \theta_i\}$. In other words, $\{\lambda_i, \nu_i\}$ will be updated for improving the actors $\pi_{\theta_j}$, ∀j ∈ V, as soon as they are included in $\tilde{\theta}_j$; meanwhile, $r_i$ will be included in $R^\pi_j$. In contrast, the policy is fully decentralized in execution, as $g_{\nu_i}$ depends on $\tilde{s}_i$ only.\nNeurComm is general enough and has connections to other communication protocols. CommNet performs a more lossy aggregation since the received messages are averaged before encoding, and all encoded inputs are summed up (Sukhbaatar et al., 2016). In DIAL, each DQN agent encodes the received messages instead of averaging them, but it still sums all encoded inputs (Foerster et al., 2016). Also, both CommNet and DIAL do not have policy fingerprints included in their messages." }, { "heading": "5 NUMERICAL EXPERIMENTS", "text": "" }, { "heading": "5.1 ENVIRONMENT SETUP", "text": "There are several benchmark MARL environments such as cooperative navigation and predator-prey, but few of them represent NSC. Here we design two NSC environments: adaptive traffic signal control (ATSC) and cooperative adaptive cruise control (CACC). Both ATSC and CACC are extensively studied in intelligent transportation systems, and they hold the assumptions of a spatiotemporal MDP."
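Before turning to the environments, a minimal PyTorch sketch of a single NeurComm pass (Eq. (5)) may help: neighborhood states, neighbor policy fingerprints, and neighbor beliefs are separately encoded, concatenated (rather than summed), and fed to an LSTM belief update. Module names, sizes, and the way neighbor inputs are flattened are illustrative assumptions, not the authors' exact implementation (see Appendix C.1 for that).

```python
# Illustrative single-agent NeurComm update: encode, concatenate, extract.
import torch
import torch.nn as nn

class NeurCommAgent(nn.Module):
    def __init__(self, s_dim, pi_dim, h_dim=64):
        super().__init__()
        self.enc_s = nn.Linear(s_dim, h_dim)     # e_{lambda^s}: neighborhood states
        self.enc_p = nn.Linear(pi_dim, h_dim)    # e_{lambda^p}: neighbor policies
        self.enc_h = nn.Linear(h_dim, h_dim)     # e_{lambda^h}: neighbor beliefs
        self.lstm = nn.LSTMCell(3 * h_dim, h_dim)  # g_{nu}: belief extraction

    def forward(self, s_nbhd, pi_nbrs, h_nbrs, hc):
        # Concatenation keeps the three signals separate (less lossy than sum).
        x = torch.cat([torch.relu(self.enc_s(s_nbhd)),
                       torch.relu(self.enc_p(pi_nbrs)),
                       torch.relu(self.enc_h(h_nbrs))], dim=-1)
        h, c = self.lstm(x, hc)  # new belief h_{i,t} and LSTM cell state
        return h, c
```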
}, { "heading": "5.1.1 ADAPTIVE TRAFFIC SIGNAL CONTROL", "text": "The objective of ATSC is to adaptively adjust signal phases to minimize traffic congestion based on real-time road-traffic measurements. Here we implement two ATSC scenarios: a 5×5 synthetic traffic grid and a real-world 28-intersection traffic network from Monaco city, using standard microscopic traffic simulator SUMO (Krajzewicz et al., 2012).\nGeneral settings. For both scenarios, each episode simulates the peak-hour traffic, and a 5s control interval is applied to prevent traffic light from too frequent switches, based on RL control latency and driver response delay. Thus, one MDP step corresponds to 5s simulation and the horizon is 720 steps. Further, a 2s yellow time is inserted before switching to red light for safety purposes. In ATSC, the real-time traffic flow, that is, the total number of approaching vehicles along each incoming lane, is measured by near-intersection induction-loop detectors (ILDs) (shown as the blue areas of example intersections in Fig. 2). The cost of each agent is the sum of queue lengths along all incoming lanes.\nScenario settings. Fig. 2a illustrates the traffic grid formed by two-lane arterial streets with speed limit 20m/s and one-lane avenues with speed limit 11m/s. We simulate the peak-hour traffic dynamics through four collections of time-variant traffic flows, with both loading and recovering phases. At beginning, three major flows F1 are generated with origin-destination (O-D) pairs x10-x4, x11-x5, and x12-x6, meanwhile three minor flows f1 are generated with O-D pairs x1-x7, x2-x8, and x3-x9.\nAfter 15 minutes, F1 and f1 start to decay, while their opposite flows F2 and f2 start to dominate, as shown in Fig. 2b. Note the flows define the high-level demand only, the particular route of each vehicle is randomly generated. The grid is homogeneous and all agents have the same action space, which is a set of five pre-defined signal phases. Fig. 2c illustrates the Monaco traffic network, with controlled intersections in blue. NMARL in this scenario is more challenging since the network is heterogeneous with a variety of observation and action spaces. Four traffic flow collections are generated to simulate the peak-hour traffic, and each flow is a multiple of a “unit” flow of 325veh/hr, with randomly sampled O-D pairs inside rectangle areas in Fig. 2c. F1 and F2 are simulated during the first 40min, as [1, 2, 4, 4, 4, 4, 2, 1] unit flows with 5min intervals; F3 and F4 are generated in the same way, but with a delay of 15min. See code for more details." }, { "heading": "5.1.2 COOPERATIVE ADAPTIVE CRUISE CONTROL", "text": "The objective of CACC is to adaptively coordinate a platoon of vehicles to minimize the car-following headway and speed perturbations based on real-time vehicle-to-vehicle communication. Here we implement two CACC scenarios: “Catch-up” and “Slow-down”, with physical vehicle dynamics.\nGeneral settings. For both CACC tasks, we simulate a string of 8 vehicles for 60s, with a 0.1s control interval. Each vehicle observes and shares its headway h, velocity v, and acceleration a to neighbors within two steps. The safety constraints are: h ≥ 1m, v ≤ 30m/s, |a| ≤ 2.5m/s2. Safe RL is relevant here, but itself is a big topic and out of the scope of this paper. 
So we adopt a simple heuristic optimal velocity model (OVM) (Bando et al., 1995) to perform longitudinal vehicle control under the above constraints, whose behavior is affected by the hyper-parameters: headway gain α◦, relative velocity gain β◦, stop headway hst = 5m and full-speed headway hgo = 35m. Usually (α◦, β◦) represent the human driver behavior; here we train NMARL to recommend appropriate (α◦, β◦) for each OVM controller, selected from four levels {(0, 0), (0.5, 0), (0, 0.5), (0.5, 0.5)}. Assuming the target headway and velocity profile are h* = 20m and v*_t, respectively, the cost of each agent is $(h_{i,t} - h^*)^2 + (v_{i,t} - v^*_t)^2 + 0.1 u_{i,t}^2$. Whenever a collision happens ($h_{i,t}$ < 1m), a large penalty of 1000 is assigned to each agent and the state becomes absorbing. An additional cost $5(2h_{st} - h_{i,t})^2_+$ is provided in training for potential collisions.\nScenario settings. Since exploring a collision-free CACC strategy is itself challenging for on-policy RL, we consider simple scenarios. In the Catch-up scenario, $v_{i,0} = v^*_t = 15$m/s and $h_{i,0} = h^*$, ∀i ≠ 1, whereas $h_{1,0} = a \cdot h^*$, with a ∈ U[3, 4]. In the Slow-down scenario, $v_{i,0} = v^*_0 = b \cdot 15$m/s, b ∈ U[1.5, 2.5], and $h_{i,0} = h^*$, ∀i, whereas v*_t linearly decreases to 15m/s during the first 30s and then stays constant." }, { "heading": "5.2 ALGORITHM SETUP", "text": "For a fair comparison, all MARL approaches are applied to A2C agents with the learning methods in Eq. (3)(4), and only neighborhood observation and communication are allowed. IA2C performs independent learning, which is an A2C implementation of MADDPG (Lowe et al., 2017), as the critic takes neighboring actions (see Eq. (4)). ConseNet (Zhang et al., 2018) has the additional consensus update to overwrite the parameters of each critic as the mean of those of all critics inside the closed neighborhood. FPrint (Foerster et al., 2017) includes neighbor policies. DIAL (Foerster et al., 2016) and CommNet (Sukhbaatar et al., 2016) are described in Section 4. IA2C, ConseNet, and FPrint are non-communicative policies since they utilize only neighborhood information. In contrast, DIAL, CommNet, and NeurComm are communicative policies. Note communicative policies require more messages to be transferred and so higher communication bandwidth. In particular, the local message sizes are O(|si| + |πi| + |hi|) for DIAL and NeurComm, O(|si| + |hi|) for CommNet, O(|si| + |πi|) for FPrint, and O(|si|) for IA2C and ConseNet. The implementation details are in Appendix C.1. All algorithms use the same DNN hidden layers: one fully-connected layer for the message encoding eλ, and one LSTM layer for the message extracting gν. All hidden layers have 64 units. The encoding layer implicitly learns normalization across different input signal types. We train each model over 1M steps, with γ = 0.99, actor learning rate 5 × 10⁻⁴, and critic learning rate 2.5 × 10⁻⁴. Also, each training episode has a different seed for generalization purposes. In ATSC, β = 0.01, |B| = 120, while in CACC, β = 0.05, |B| = 60, to encourage the exploration of collision-free policies. Each training run takes about 30 hours on a 32GB-memory, Intel Xeon CPU machine." }, { "heading": "5.3 ABLATION STUDY", "text": "We perform an ablation study in the proposed scenarios, which are sorted as ATSC Monaco > ATSC Grid > CACC Slow-down > CACC Catch-up by task difficulty. ATSC is more challenging than CACC due to larger scale (≥ 25 vs 8 agents), more complex dynamics (stochastic traffic flow vs deterministic vehicle dynamics), and longer control interval (5s vs 0.1s). 
ATSC Monaco > ATSC Grid due to its more heterogeneous network, while CACC Slow-down > CACC Catch-up due to the more frequently changing leading-vehicle profile. To visualize the learning performance, we plot the learning curve, that is, the average episode return ($\bar{R} = \frac{1}{T} \sum_{t=0}^{T-1} \sum_{i \in \mathcal{V}} r_{i,t}$) vs the training step. For better visualization, all learning curves are smoothed using a moving average with a window size of 100 episodes.\nFirst, we investigate the impact of the spatial discount factor, by comparing the learning curves among α ∈ {0.8, 0.9, 1} for IA2C and CommNet. Fig. 3 reveals a few interesting facts. First, α*_CommNet is always higher than α*_IA2C. Indeed, α*_CommNet = 1 in almost all scenarios (except for ATSC Monaco). This is because communicative policies perform delayed global information sharing, whereas non-communicative policies utilize neighborhood information only, causing difficulty in fitting the global return. Second, the learning performance becomes much more sensitive to α when the task is more difficult. Specifically, all α values lead to similar learning curves in CACC Catch-up, whereas appropriate α values help IA2C converge to much better policies more steadily in the other scenarios. Third, α* is high enough: α*_IA2C = 0.9 except for CACC Slow-down, where α*_IA2C = 0.8. This is because the discounted problem must be similar enough to the original problem in execution.\nNext, we investigate the impact of NeurComm under α = 1. We start with a baseline which is similar to existing differentiable protocols, i.e., h_{i,t} = LSTM(h_{i,t−1}, relu(s_{Vi,t}) + relu(m_{Ni,t})). We then evaluate two intermediate protocols, “Concat Only” and “FPrint Only”, in which encoded inputs are concatenated and neighbor policies are included, respectively. Finally, we evaluate their combination, NeurComm. As shown in Fig. 3, all protocols have similar learning curves in the easy CACC Catch-up scenario. Otherwise, both “Concat” and “FPrint” are able to enhance the baseline learning curves in certain scenarios, and their effects are additive in NeurComm." }, { "heading": "5.4 TRAINING RESULTS", "text": "Fig. 4 compares the learning curves of all MARL algorithms, after tuning α* ∈ {0.6, 0.8, 0.9, 0.95, 1}. As expected, α* for non-communicative policies is lower than that for communicative policies.\nTab. 1 summarizes α* of the controllers across the different NMARL scenarios. For challenging scenarios like ATSC Monaco, a lower α is preferred by almost all policies (except NeurComm). This demonstrates that α is an effective way to enhance MARL performance in general, especially for challenging tasks like ATSC Monaco. From another viewpoint, α serves as an informative indicator of problem difficulty and algorithm coordination level. Based on Fig. 4, NeurComm is at least competitive in the CACC scenarios, and it clearly outperforms the other policies on both sample efficiency and learning stability in the more challenging ATSC scenarios. Note in CACC a big penalty is assigned whenever a collision happens, so the standard deviation of episode returns is high." }, { "heading": "5.5 EXECUTION RESULTS", "text": "We freeze and evaluate the trained MARL policies in another 50 episodes, and summarize the results in Tab. 2. In CACC scenarios, the α-enhanced FPrint policy achieves the best execution performance. Note NeurComm still outperforms the other communicative algorithms, so this result implies that delayed information sharing may not be helpful in easy but real-time and safety-critical CACC tasks. In contrast, NeurComm achieves the best execution performance for ATSC tasks. 
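Returning to the spatial discount ablation above, the following is a minimal sketch of how the α-discounted step reward of Eq. (2) can be computed for every agent from the communication graph; the 5-agent chain graph and the use of networkx are illustrative assumptions, not the authors' code.

```python
# Sketch of the spatially discounted step reward in Eq. (2): each agent i
# scales agent j's reward by alpha**d_ij, with d_ij the graph distance.
import networkx as nx
import numpy as np

def spatial_discount_matrix(graph, alpha):
    n = graph.number_of_nodes()
    D = np.zeros((n, n))
    for i, dists in nx.all_pairs_shortest_path_length(graph):
        for j, d in dists.items():
            D[i, j] = alpha ** d  # D[i, j] = alpha ** d_ij
    return D

G = nx.path_graph(5)                      # agents 0-1-2-3-4 in a chain
D = spatial_discount_matrix(G, alpha=0.9)
r = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # only agent 0 gets a reward
r_tilde = D @ r                           # discounted reward seen by each agent
print(r_tilde)  # decays with distance from agent 0
```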
We also evaluate the execution performance of ATSC and CACC using domain-specific metrics in Tab. 3 and Tab. 4, respectively. The results are consistent with the reward-defined ones in Tab. 2.\nFurther, we investigate the performance of the top policies in the ATSC scenarios. For each ATSC scenario, we select the top two non-communicative and communicative policies and visualize their impact on network traffic by plotting the time series of the network-averaged queue length and intersection delay in Fig. 5. Note the line and shade show the mean and standard deviation of each metric across execution runs, respectively. Based on Fig. 5a, NeurComm achieves the most sustainable traffic control in ATSC Grid, so that the congested grid starts recovering immediately after the loading phase ends at 3000s. During the same unloading phase, CommNet prevents the queues from further increasing while the non-communicative policies fail to do so. Also, FPrint is less robust than IA2C as it introduces a sudden congestion jump at 1000s. Similarly, NeurComm achieves the lowest saturation rate in ATSC Monaco (Fig. 5b).\nIntersection delay is another key metric in ATSC. Based on Fig. 5c, communicative policies are able to reduce the intersection delay as well in ATSC Grid, though it is not explicitly included in the objective and so is not optimized by non-communicative policies. In contrast, communicative policies show a fast increase in intersection delay in ATSC Monaco. This implies that communicative algorithms are able to capture the spatiotemporal traffic pattern in homogeneous networks, whereas they still have the risk of overfitting on queue reduction in realistic and heterogeneous networks. For example, they block the short source edges on purpose to reduce on-road vehicles by paying a small cost in queue length.\nFinally, we investigate the robustness (string stability) of the top policies in the CACC scenarios. In particular, we plot the time series of headway and velocity for the first and the last vehicles in the platoon. The profile of the first vehicle indicates how adaptively the controller pursues h* and v*, while that of the last vehicle indicates how stable the controlled platoon is. Based on Tab. 1 and Tab. 4, the top communicative and non-communicative controllers are NeurComm and FPrint.\nFig. 6 shows the corresponding headway and velocity profiles for the selected controllers. Interestingly, the MARL controllers are able to achieve the steady state v* and h* for the first vehicle of the platoon, whereas they still have difficulty eliminating the perturbation propagating through the platoon. This may be because of the heuristic low-level controller as well as the delayed information sharing." }, { "heading": "6 CONCLUSIONS", "text": "We have formulated the spatiotemporal MDP for decentralized NSC under neighborhood communication. Further, we have introduced the spatial discount factor to enhance non-communicative MARL algorithms, and proposed a neural communication protocol, NeurComm, to design adaptive and efficient communicative MARL algorithms. We hope this paper prompts a rethinking of how to develop scalable and robust MARL controllers for NSC, by following practical engineering assumptions and combining appropriate learning and communication methods rather than reusing existing MARL algorithms. One future direction is improving the recurrent units to naturally control spatiotemporal information flows within the meta-DNN in a decentralized way."
}, { "heading": "ACKNOWLEDGMENTS", "text": "We would like to thank Marco Pavone and Alexander Anemogiannis for valuable discussions and insightful comments." }, { "heading": "APPENDIX", "text": "" }, { "heading": "A PROOFS", "text": "" }, { "heading": "A.1 PROOF OF PROPOSITION 3.1", "text": "Proof. The proof follows the learning method in A2C Mnih et al. (2016), which shows that\nL(θ) = 1 |B| ∑ τ∈B ( − log πθ(aτ |sτ )Âπτ + β ∑ a∈A πθ(a|sτ ) log πθ(a|sτ ) ) , (6)\nL(ω) = 1 |B| ∑ τ∈B ( R̂πτ − Vω(sτ ) )2 , (7)\nwhere Âπτ = R̂ π τ − vτ , R̂πτ = ∑τB−1 τ ′=τ γ τ ′−τrτ ′ + γ τB−τvτB , and vτ = Vω−(sτ ), based on on-policy minibatch from a MDP {(sτ , aτ , rτ )}τ∈B. Now we consider spatiotemporal MDP, which has transition in Eq. (1), optimizes return in Eq. (2), and collects experience (si,t,mNii,t, ai,t, r̃i,t), where r̃i,t = ∑ j∈V α dijrj,t. In Theorem 3.1 of Zhang\net al. (2018), the decentralized actor and critic are π̃θi(s) and Ṽωi(s, a−i), for fitting π ∗ i (·|s) and∑\nai∈Ai πi(ai|s)Q πi(s, a) under global observations, respectively. Now assuming the observations and communications are restricted to each neighborhood as in Definition 3.1, then the actor and critic become πθi(s̃i) ≈ π̃θi(s) and Vωi(s̃i, aNi) ≈ Ṽωi(s, a−i), with the best observability. Hence, replacing πθ(a|s), Vω(s), r by πθi(ai|s̃i), Vωi(s̃i, aNi), and r̃i, respectively, we establish Eq. (3)(4) from Eq. (6)(7), which concludes the proof.\nNote partial observability and non-stationarity are present in πθi(ai|s̃i) and Vωi(s̃i, aNi). Fortunately, communication improves the observability. Based on Definition 3.1, any information that agent j knows at time t can be included in mji,t. We assume sj,t ∪ {mkj,t−1}k∈Nj ⊂ mji,t. Then\ns̃i,t ⊃ si,t ∪ {sj,t ∪ {mkj,t−1}k∈Nj}j∈Ni ⊃ {sj,t}j∈Vi ∪ {sj,t−1 ∪ {mkj,t−2}k∈Nj}j∈V|dij=2 ⊃ {sj,t}j∈Vi ∪ {sj,t−1}j∈V|dij=2 ∪ {sj,t−2 ∪ {mkj,t−3}k∈Nj}j∈V|dij=3 ⊃ . . . ⊃ si,t ∪ { sj,t+1−dij } j∈V\\{i} .\nThus, s̃i,t includes the delayed global observations. On the other hand, Eq. (1)(2) mitigate the non-stationarity. To see this mathematically,\nEπi,p[r̃i,t|st, at] =Eπi,pi [ri,t|sVi,t, aNi,t] + α ∑ j∈Ni Eπi,pj [rj,t|sVj ,t, aVj\\{i},t]\n+ dmax∑ d=2 αd ∑ j∈{V|dij=d} Epj [rj,t|sVj ,t, aVj ,t] , where the further away reward signals are discounted more. Note if communication is allowed, each agent will have delayed global observations, and the non-stationarity mainly comes from limited information of future actions." }, { "heading": "A.2 PROOF OF PROPOSITION 4.1", "text": "This proposition contains two statements regarding neural communication based global information sharing in forward and backward propagations. We establish each of them separately. Lemma A.1 (Spatial Information Propagation). In NeurComm, the delayed global information is utilized to estimate each hidden state, that is,\nhi,t ⊃ si,0:t ∪ { sj,0:t+1−dij , πj,0:t−dij } j∈V\\{i} , (8)\nwhere x ⊃ y if information y is utilized to estimate x, and x0:t := {x0, x1, . . . , xt}.\nProof. Based on the definition of NeurComm protocol (Eq. (5)), mi,t ⊃ hi,t−1, and hi,t ⊃ hi,t−1 ∪ sVi,t ∪ πNi,t−1 ∪mNi,t. Hence,\nhi,t ⊃ si,t ∪ {sj,t, πj,t−1}j∈Ni ∪ {hj,t−1}j∈Vi ⊃ si,t ∪ {sj,t, πj,t−1}j∈Ni ∪ { sj,t−1 ∪ {sk,t−1, πk,t−2}k∈Nj ∪ {hk,t−2}k∈Vj } j∈Vi\n= si,t−1:t ∪ {sj,t−1:t, πj,t−2:t−1}j∈Ni ∪ {sj,t−1, πj,t−2}j∈{V|dij=2} ∪ {hj,t−2}j∈{V|dij≤2}\n⊃ . . . ⊃ si,0:t ∪ {sj,0:t, πj,t−2:t−1}j∈Ni ∪ {sj,0:t−1, πj,0:t−2}j∈{V|dij=2}\n∪ . . . ∪ {sj,0:t+1−dmax , πj,0:t−dmax}j∈{V|dij=dmax}, which concludes the proof.\nLemma A.2 (Spatial Gradient Propagation). 
In NeurComm, each message is learned to optimize the performance of the other agents, that is, {νi, λi} receive almost all gradients from L(θj), L(ωj), ∀j ∈ {V | j ≠ i}.\nProof. If we rewrite the required information for a given hidden state $h_{i,t}$ using intermediate messages instead of inputs, the result of Lemma A.1 becomes\n$$h_{i,t} \supset \{m_{j,t}\}_{j \in \mathcal{N}_i} \supset \{h_{j,t-1}\}_{j \in \mathcal{N}_i} \supset \{m_{j,t-1}\}_{j \in \{\mathcal{V} | d_{ij}=2\}} \supset \dots \supset \{m_{j,t+1-d}\}_{j \in \{\mathcal{V} | d_{ij}=d\}} \supset \dots$$\nHence, $m_{i,\tau}$ is included in the meta-DNN of agent j at time $\tau + d_{ij} - 1$. In other words, {νi, λi} receive gradients from L(θj), L(ωj), ∀j ∈ {V | j ≠ i}, except for the first $d_{ij} - 1$ experience samples. Assuming $d_{\max} \ll |B|$, {νi, λi} receive almost all gradients from the loss signals of all other agents, which concludes the proof." }, { "heading": "B ALGORITHMS", "text": "Algo. 1 presents the algorithm of model training in a synchronous way, following the descriptions in Sections 3 and 4. Four iterations are performed at each step: the first iteration (lines 3-5) updates and sends messages; the second iteration (lines 6-10) updates the hidden state, policy, and action; the third iteration (lines 11-14) updates the value estimation and executes the action; the fourth iteration (lines 22-26) performs gradient updates on the actor, critic, and neural communication. On the other hand, Algo. 2 presents the algorithm of decentralized model execution in an asynchronous way. It runs as a job that repeatedly measures traffic, sends its message, receives messages, and performs control.\nAlgorithm 1: Multi-agent A2C with NeurComm (Training) Parameter: α, β, γ, T, |B|, ηω, ηθ. Result: {λi, νi, ωi, θi}i∈V.\n1 initialize s0, π−1, h−1, t ← 0, k ← 0, B ← ∅; 2 repeat 3 for i ∈ V do 4 send mi,t = fλi(hi,t−1); 5 end 6 for i ∈ V do 7 observe s̃i,t = sVi,t ∪ πNi,t−1 ∪ mNi,t; 8 update hi,t ← gνi(hi,t−1, s̃i,t), πi,t ← πθi(·|hi,t); 9 update ai,t ∼ πi,t; 10 end 11 for i ∈ V do 12 update vi,t ← Vωi(hi,t, aNi,t); 13 execute ai,t; 14 end 15 simulate {si,t+1, ri,t}i∈V; 16 update B ← B ∪ {(si,t, πi,t−1, ai,t, ri,t, vi,t)}i∈V; 17 update t ← t + 1, k ← k + 1; 18 if t = T then 19 initialize s0, π−1, h−1, t ← 0; 20 end 21 if k = |B| then 22 for i ∈ V do 23 update R̂πi,τ, Âπi,τ, ∀τ ∈ B, based on Proposition 3.1; 24 update {λj, νj}j∈V ∪ {ωi}, based on ηω∇L(ωi); 25 update {λj, νj}j∈V ∪ {θi}, based on ηθ∇L(θi); 26 end 27 initialize B ← ∅, k ← 0; 28 end 29 until Stop condition is reached;\nAlgorithm 2: Multi-agent A2C with NeurComm (Execution)" }, { "heading": "Parameter :{λi, νi, ωi, θi}i∈V , ∆tcomm, ∆tcontrol.", "text": "1 for i ∈ V do 2 initialize hi ← 0, πi ← 0, {sj, πj, mj}j∈Ni ← 0; 3 repeat 4 observe si; 5 update mi ← fλi(hi); 6 send si, πi, mi; 7 for j ∈ Ni do 8 receive and update sj, πj, mj within ∆tcomm; 9 end 10 update s̃i ← sVi ∪ πNi ∪ mNi; 11 update hi ← gνi(hi, s̃i), πi ← πθi(·|hi); 12 execute ai ∼ πi; 13 sleep ∆tcontrol; 14 until Stop condition is reached; 15 end" }, { "heading": "C EXPERIMENT DETAILS", "text": "" }, { "heading": "C.1 ALGORITHM SETUP", "text": "Detailed algorithm implementations are listed below, in terms of Eq. (5). IA2C: hi,t = LSTM(hi,t−1, relu(sVi,t)). ConseNet: same as IA2C but with the consensus critic update. FPrint: hi,t = LSTM(hi,t−1, concat(relu(sVi,t), relu(πNi,t−1))). NeurComm: hi,t = LSTM(hi,t−1, concat(relu(sVi,t), relu(πNi,t−1), relu(hNi,t−1))). DIAL: hi,t = LSTM(hi,t−1, relu(sVi,t) + relu(relu(hi,t−1)) + onehot(ai,t−1)). CommNet: hi,t = LSTM(hi,t−1, tanh(sVi,t) + linear(mean(hNi,t−1))). 
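To make the protocol differences listed above concrete, the following is a small PyTorch sketch contrasting how CommNet and NeurComm compose the LSTM input: CommNet averages neighbor beliefs and sums the encoded signals, while NeurComm concatenates the separately encoded signals. The encoder and dimensions are illustrative placeholders rather than the authors' code.

```python
# Sketch contrasting the LSTM input composition of CommNet vs NeurComm.
import torch
import torch.nn as nn

h_dim = 64
enc = nn.Linear(h_dim, h_dim)  # stand-in encoder; one per signal in practice

def commnet_input(s_enc, h_nbrs):
    # Average neighbor beliefs, then sum with the encoded state (lossy).
    return torch.tanh(s_enc) + enc(torch.stack(h_nbrs).mean(dim=0))

def neurcomm_input(s_enc, pi_enc, h_enc):
    # Concatenation keeps the signals separate (less information loss).
    return torch.cat([torch.relu(s_enc), torch.relu(pi_enc),
                      torch.relu(h_enc)], dim=-1)

s_enc = torch.randn(1, h_dim)
h_nbrs = [torch.randn(1, h_dim) for _ in range(3)]
print(commnet_input(s_enc, h_nbrs).shape)         # (1, 64): aggregated input
print(neurcomm_input(s_enc, s_enc, s_enc).shape)  # (1, 192): concatenated input
```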
For ConseNet, we only do the consensus update on the LSTM layer, since the input and output layer sizes may not be fixed across agents. Also, the actor and critic are πi,t = softmax(hi,t) and vi,t = linear(concat(hi,t, onehot(aNi,t)))." }, { "heading": "C.2 EXPERIMENTS IN ATSC ENVIRONMENT", "text": "" }, { "heading": "C.2.1 ACTION SPACE", "text": "Fig. 7 illustrates the action space of five phases for each intersection in the ATSC Grid scenario. The ATSC Monaco scenario has complex and heterogeneous action spaces; please see the code for more details. To summarize, there are 11 two-phase intersections, 3 three-phase intersections, 10 four-phase intersections, 1 five-phase intersection, and 3 six-phase intersections." }, { "heading": "C.2.2 SUMMARY OF EXECUTION PERFORMANCE", "text": "Table 3 summarizes the key metrics in ATSC. The spatial average is taken at each second, and then the temporal average is calculated for all metrics (except for trip delay, which is directly aggregated over all trips). NeurComm outperforms all baselines on minimizing queue length and intersection delay. Interestingly, even though IA2C is good at optimizing the given objective of queue length, it performs poorly on optimizing intersection and trip delays.\nC.2.3 VISUALIZATION OF EXECUTION PERFORMANCE\nFig. 8 and Fig. 9 show screenshots of the traffic distributions in the grid at different simulation steps for each MARL controller. The visualization is based on one execution episode with random seed 2000. Clearly, communicative MARL controllers have better performance on reducing the intersection delay. NeurComm and CommNet have the best overall performance." }, { "heading": "C.3 EXPERIMENTS IN CACC ENVIRONMENTS", "text": "" }, { "heading": "C.3.1 SUMMARY OF EXECUTION PERFORMANCE", "text": "Table 4 summarizes the key metrics in CACC. The best headway and velocity averages are the ones closest to h* = 20m and v* = 15m/s. Note the averages are only computed from safe execution episodes, and we use another metric, “collision number”, to count the number of episodes where a collision happens within the horizon. Ideally, “collision-free” is the top priority. However, safe RL is not the focus of this paper, so the trained MARL controllers cannot achieve this goal in the CACC experiments." } ]
2020
MULTI-AGENT REINFORCEMENT LEARNING FOR NETWORKED SYSTEM CONTROL
SP:bd79f443ec2da0a34a77be823acfc81ba45d8a18
[ "The paper focuses on using intrinsic motivation to improve the exploration process of reinforcement learning agents in tasks with sparse-reward and that require multi-agent to achieve. The authors proposed to encourage the agents toward the actions which changed the world in the ways that \"would not be achieved if the agents were acting alone\". The experiments are done with dual-arm manipulation.", "The paper proposes a novel algorithm for encouraging synergistic behavior in multi-agent setups with an intrinsic reward that promotes the agents to work together to achieve states that they cannot achieve individually without cooperation. The paper focuses on a two-agent environment where an approximate forward dynamics model is learnt for each agent, and can be composed sequentially to predict the next environment state given each agent’s action. However, this prediction will be inaccurate if the agent’s affected the environment state in such a way that individual dynamics model cannot predict i.e. synergistic behavior was produced. This prediction error is used as extrinsic reward by the proposed approach, while also having a variant where the true next state is replaced by another approximation of a joint forward model which allows for differentiability of actions with respect to the intrinsic reward. Empirical analysis shows that this intrinsic reward promotes synergetic behavior on two-agent robotic manipulation tasks and achieves better performance that baselines and ablations." ]
We study the role of intrinsic motivation as an exploration bias for reinforcement learning in sparse-reward synergistic tasks, which are tasks where multiple agents must work together to achieve a goal they could not individually. Our key idea is that a good guiding principle for intrinsic motivation in synergistic tasks is to take actions which affect the world in ways that would not be achieved if the agents were acting on their own. Thus, we propose to incentivize agents to take (joint) actions whose effects cannot be predicted via a composition of the predicted effect for each individual agent. We study two instantiations of this idea, one based on the true states encountered, and another based on a dynamics model trained concurrently with the policy. While the former is simpler, the latter has the benefit of being analytically differentiable with respect to the action taken. We validate our approach in robotic bimanual manipulation and multi-agent locomotion tasks with sparse rewards; we find that our approach yields more efficient learning than both 1) training with only the sparse reward and 2) using the typical surprise-based formulation of intrinsic motivation, which does not bias toward synergistic behavior. Videos are available on the project webpage: https://sites.google.com/view/iclr2020-synergistic.
[ { "affiliations": [], "name": "Rohan Chitnis" }, { "affiliations": [], "name": "Shubham Tulsiani" }, { "affiliations": [], "name": "Saurabh Gupta" }, { "affiliations": [], "name": "Abhinav Gupta" } ]
[ { "authors": [ "Andrew G Barto" ], "title": "Intrinsic motivation and reinforcement learning", "venue": null, "year": 2013 }, { "authors": [ "Marc Bellemare", "Sriram Srinivasan", "Georg Ostrovski", "Tom Schaul", "David Saxton", "Remi Munos" ], "title": "Unifying count-based exploration and intrinsic motivation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Yuri Burda", "Harri Edwards", "Deepak Pathak", "Amos Storkey", "Trevor Darrell", "Alexei A. Efros" ], "title": "Large-scale study of curiosity-driven learning", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "David Carmel", "Shaul Markovitch" ], "title": "Exploration strategies for model-based learning in multi-agent systems: Exploration strategies", "venue": "Autonomous Agents and Multi-agent systems,", "year": 1999 }, { "authors": [ "Rohan Chitnis", "Shubham Tulsiani", "Saurabh Gupta", "Abhinav Gupta" ], "title": "Efficient bimanual manipulation using learned task schemas", "venue": "arXiv preprint arXiv:1909.13874,", "year": 2019 }, { "authors": [ "Ignasi Clavera", "Anusha Nagabandi", "Ronald S Fearing", "Pieter Abbeel", "Sergey Levine", "Chelsea Finn" ], "title": "Learning to adapt: Meta-learning for model-based control", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Benjamin Eysenbach", "Abhishek Gupta", "Julian Ibarz", "Sergey Levine" ], "title": "Diversity is all you need: Learning skills without a reward function", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Linxi Fan", "Yuke Zhu", "Jiren Zhu", "Zihua Liu", "Orien Zeng", "Anchit Gupta", "Joan Creus-Costa", "Silvio Savarese", "Li Fei-Fei" ], "title": "SURREAL: Open-source reinforcement learning framework and robot manipulation benchmark", "venue": "In Conference on Robot Learning,", "year": 2018 }, { "authors": [ "Jakob Foerster", "Richard Y Chen", "Maruan Al-Shedivat", "Shimon Whiteson", "Pieter Abbeel", "Igor Mordatch" ], "title": "Learning with opponent-learning awareness", "venue": "In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, pp. 122–130. International Foundation for Autonomous Agents and Multiagent Systems,", "year": 2018 }, { "authors": [ "Elena Gribovskaya", "Aude Billard" ], "title": "Combining dynamical systems control and programming by demonstration for teaching discrete bimanual coordination tasks to a humanoid robot", "venue": "In 2008 3rd ACM/IEEE International Conference on Human-Robot Interaction (HRI),", "year": 2008 }, { "authors": [ "Shixiang Gu", "Ethan Holly", "Timothy Lillicrap", "Sergey Levine" ], "title": "Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates", "venue": "IEEE international conference on robotics and automation (ICRA),", "year": 2017 }, { "authors": [ "Nick Haber", "Damian Mrowca", "Li Fei-Fei", "Daniel LK Yamins" ], "title": "Emergence of structured behaviors from curiosity-based intrinsic motivation", "venue": "arXiv preprint arXiv:1802.07461,", "year": 2018 }, { "authors": [ "Ashley Hill", "Antonin Raffin", "Maximilian Ernestus", "Adam Gleave", "Rene Traore", "Prafulla Dhariwal", "Christopher Hesse", "Oleg Klimov", "Alex Nichol", "Matthias Plappert", "Alec Radford", "John Schulman", "Szymon Sidor", "Yuhuai Wu" ], "title": "Stable baselines. 
https://github.com/hill-a/stable-baselines, 2018", "venue": null, "year": 2018 }, { "authors": [ "Ping Hsu" ], "title": "Coordinated control of multiple manipulator systems", "venue": "IEEE Transactions on Robotics and Automation,", "year": 1993 }, { "authors": [ "Sandy H Huang", "Martina Zambelli", "Jackie Kay", "Murilo F Martins", "Yuval Tassa", "Patrick M Pilarski", "Raia Hadsell" ], "title": "Learning gentle object manipulation with curiosity-driven deep reinforcement learning", "venue": null, "year": 1903 }, { "authors": [ "Natasha Jaques", "Angeliki Lazaridou", "Edward Hughes", "Caglar Gulcehre", "Pedro A Ortega", "DJ Strouse", "Joel Z Leibo", "Nando de Freitas" ], "title": "Intrinsic social motivation via causal influence in multi-agent RL", "venue": null, "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "Oliver Kroemer", "Christian Daniel", "Gerhard Neumann", "Herke Van Hoof", "Jan Peters" ], "title": "Towards learning hierarchical skills for multi-phase manipulation tasks", "venue": "In 2015 IEEE International Conference on Robotics and Automation (ICRA),", "year": 2015 }, { "authors": [ "Sergey Levine", "Chelsea Finn", "Trevor Darrell", "Pieter Abbeel" ], "title": "End-to-end training of deep visuomotor policies", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Siqi Liu", "Guy Lever", "Josh Merel", "Saran Tunyasuvunakool", "Nicolas Heess", "Thore Graepel" ], "title": "Emergent coordination through competition", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Ofir Nachum", "Michael Ahn", "Hugo Ponte", "Shixiang Gu", "Vikash Kumar" ], "title": "Multi-agent manipulation via locomotion using hierarchical sim2real", "venue": null, "year": 1908 }, { "authors": [ "Anusha Nagabandi", "Gregory Kahn", "Ronald S Fearing", "Sergey Levine" ], "title": "Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2018 }, { "authors": [ "Pierre-Yves Oudeyer", "Frédéric Kaplan", "Verena V Hafner" ], "title": "Intrinsic motivation systems for autonomous mental development", "venue": "IEEE transactions on evolutionary computation,", "year": 2007 }, { "authors": [ "Deepak Pathak", "Pulkit Agrawal", "Alexei A Efros", "Trevor Darrell" ], "title": "Curiosity-driven exploration by self-supervised prediction", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2017 }, { "authors": [ "Deepak Pathak", "Dhiraj Gandhi", "Abhinav Gupta" ], "title": "Self-supervised exploration via disagreement", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Martin L Puterman" ], "title": 
"Markov Decision Processes: Discrete Stochastic Dynamic Programming", "venue": null, "year": 1994 }, { "authors": [ "Marc H Raibert", "John J Craig" ], "title": "Hybrid position/force control of manipulators", "venue": "Journal of Dynamic Systems, Measurement, and Control,", "year": 1981 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "A possibility for implementing curiosity and boredom in model-building neural controllers", "venue": "In Proc. of the international conference on simulation of adaptive behavior: From animals to animats,", "year": 1991 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Christian Smith", "Yiannis Karayiannidis", "Lazaros Nalpantidis", "Xavi Gratal", "Peng Qi", "Dimos V Dimarogonas", "Danica Kragic" ], "title": "Dual arm manipulationa survey", "venue": "Robotics and Autonomous systems,", "year": 2012 }, { "authors": [ "Siddharth Srivastava", "Eugene Fang", "Lorenzo Riano", "Rohan Chitnis", "Stuart Russell", "Pieter Abbeel" ], "title": "Combined task and motion planning through an extensible planner-independent interface layer", "venue": "IEEE international conference on robotics and automation (ICRA),", "year": 2014 }, { "authors": [ "Bradly C Stadie", "Sergey Levine", "Pieter Abbeel" ], "title": "Incentivizing exploration in reinforcement learning with deep predictive models", "venue": "arXiv preprint arXiv:1507.00814,", "year": 2015 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "MuJoCo: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Jason Wolfe", "Bhaskara Marthi", "Stuart Russell" ], "title": "Combined task and motion planning for mobile manipulation", "venue": "In Twentieth International Conference on Automated Planning and Scheduling,", "year": 2010 }, { "authors": [ "Ning Xi", "Tzyh-Jong Tarn", "Antal K Bejczy" ], "title": "Intelligent planning and control for multirobot coordination: An event-based approach", "venue": "IEEE transactions on robotics and automation,", "year": 1996 }, { "authors": [ "R Zollner", "Tamim Asfour", "Rüdiger Dillmann" ], "title": "Programming by demonstration: Dual-arm manipulation tasks for humanoid robots", "venue": "IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)(IEEE Cat. No. 04CH37566),", "year": 2004 } ]
[ { "heading": "1 INTRODUCTION", "text": "Consider a multi-agent environment such as a team of robots working together to play soccer. It is critical for a joint policy within such an environment to produce synergistic behavior, allowing multiple agents to work together to achieve a goal which they could not achieve individually. How should agents learn such synergistic behavior efficiently? A naive strategy would be to learn policies jointly and hope that synergistic behavior emerges. However, learning policies from sparse, binary rewards is very challenging – exploration is a huge bottleneck when positive reinforcement is infrequent and rare. In sparse-reward multi-agent environments where synergistic behavior is critical, exploration is an even bigger issue due to the much larger action space.\nA common approach for handling the exploration bottleneck in reinforcement learning is to shape the reward using intrinsic motivation, as was first proposed by Schmidhuber (1991). This has been shown to yield improved performance across a variety of domains, such as robotic control tasks (Oudeyer et al., 2007) and Atari games (Bellemare et al., 2016; Pathak et al., 2017). Typically, intrinsic motivation is formulated as the agent’s prediction error regarding some aspects of the world; shaping the reward with such an error term incentivizes the agent to take actions that “surprise it,” and is intuitively a useful heuristic for exploration. But is this a good strategy for encouraging synergistic behavior in multi-agent settings? Although synergistic behavior may be difficult to predict, it could be equally difficult to predict the effects of certain single-agent behaviors; this formulation of intrinsic motivation as “surprise” does not specifically favor the emergence of synergy.\nIn this paper, we study an alternative strategy for employing intrinsic motivation to encourage synergistic behavior in multi-agent tasks. Our method is based on the simple insight that synergistic behavior leads to effects which would not be achieved if the individual agents were acting alone. So,\n∗Work done during an internship at Facebook AI Research.\nwe propose to reward agents for joint actions that lead to different results compared to if those same actions were done by the agents individually, in a sequential composition. For instance, consider the task of twisting open a water bottle, which requires two hands (agents): one to hold the base in place, and another to twist the cap. Only holding the base in place would not effect any change in the bottle’s pose, while twisting the cap without holding the bottle in place would cause the entire bottle to twist, rather than just the cap. Here, holding with one hand and subsequently twisting with the other would not open the bottle, but holding and twisting concurrently would.\nBased on this intuition, we propose a formulation for intrinsic motivation that leverages the difference between the true effect of an action and the composition of individual-agent predicted effects. We then present a second formulation that instead uses the discrepancy of predictions between a joint and a compositional prediction model. While the latter formulation requires training a forward model alongside learning the control strategy, it has the benefit of being analytically differentiable with respect to the action taken. 
We later show that this can be leveraged within the policy gradient framework, in order to obtain improved sample complexity over using the policy gradient as-is.\nAs our experimental point of focus, we study six simulated robotic tasks: four bimanual manipulation (bottle opening, ball pickup, corkscrew rotating, and bar pickup) and two multi-agent locomotion (ant push and soccer). All tasks have sparse rewards: 1 if the goal is achieved and 0 otherwise. These tasks were chosen both because they require synergistic behavior, and because they represent challenging control problems for modern state-of-the-art deep reinforcement learning algorithms (Levine et al., 2016; Lillicrap et al., 2016; Gu et al., 2017; Mnih et al., 2016; Nagabandi et al., 2018). Across all tasks, we find that shaping the reward via our formulation of intrinsic motivation yields more efficient learning than both 1) training with only the sparse reward signal and 2) shaping the reward via the more standard single-agent formulation of intrinsic motivation as “surprise,” which does not explicitly encourage synergistic behavior. We view this work as a step toward general-purpose synergistic multi-agent reinforcement learning." }, { "heading": "2 RELATED WORK", "text": "Prediction error as intrinsic motivation. The idea of motivating an agent to reach areas of the state space which yield high model prediction error was first proposed by Schmidhuber (1991). Generally, this reward obeys the form ‖f(x)− f̂(x)‖, i.e. the difference between the predicted and actual value of some function computed on the current state, the taken action, etc. (Barto, 2013; Oudeyer et al., 2007; Bellemare et al., 2016); intrinsic motivation can even be used on its own when no extrinsic reward is provided (Pathak et al., 2017; 2019; Burda et al., 2019; Haber et al., 2018). A separate line of work studies how agents can synthesize a library of skills via intrinsic motivation in the absence of extrinsic rewards (Eysenbach et al., 2019). Recent work has also studied the use of surprise-based reward to solve gentle manipulation tasks, with the novel idea of rewarding the agent for errors in its own predictions of the reward function (Huang et al., 2019). In this paper, we will propose formulations of intrinsic motivation that are geared toward multi-agent synergistic tasks.\nExploration in multi-agent reinforcement learning. The problem of efficient exploration in multi-agent settings has received significant attention over the years. Lookahead-based exploration (Carmel & Markovitch, 1999) is a classic strategy; it rewards an agent for exploration that reduces its uncertainty about the models of other agents in the environment. More recently, social motivation has been proposed as a general principle for guiding exploration (Jaques et al., 2019): agents should prefer actions that most strongly influence the policies of other agents. LOLA (Foerster et al., 2018), though not quite an exploration strategy, follows a similar paradigm: an agent should reason about the impact of its actions on how other agents learn. Our work approaches the problem from a different angle that incentivizes synergy: we reward agents for taking actions to affect the world in ways that would not be achieved if the agents were acting alone.\nBimanual manipulation. The field of bimanual, or dual-arm, robotic manipulation has a rich history (Smith et al., 2012) as an interesting problem across several areas, including hardware design, model-based control, and reinforcement learning. 
Model-based control strategies for this task often draw on hybrid force-position control theory (Raibert et al., 1981), and rely on analytical models of the environment dynamics, usually along with assumptions on how the dynamics can be approximately decomposed into terms corresponding to the two arms (Hsu, 1993; Xi et al., 1996). On the other hand, learning-based strategies for this task often leverage human demonstrations to circumvent the challenge of exploration (Zollner et al., 2004; Gribovskaya & Billard, 2008; Kroemer et al., 2015). In this work, we describe an exploration strategy based on intrinsic motivation." }, { "heading": "3 APPROACH", "text": "Our goal is to enable learning for synergistic tasks in settings with sparse extrinsic rewards. A central hurdle in such scenarios is the exploration bottleneck: there is a large space of possible action sequences that the agents must explore in order to see rewards. In the absence of intermediate extrinsic rewards to guide this exploration, one can instead rely on intrinsic rewards that bias the exploratory behavior toward “interesting” actions, a notion which we will formalize.\nTo accomplish any synergistic task, the agents must work together to affect the environment in ways that would not occur if they were working individually. In Section 3.1, we present a formulation for intrinsic motivation that operationalizes this insight and allows guiding the exploration toward synergistic behavior, consequently learning the desired tasks more efficiently. In Section 3.2, we present a second formulation that is (partially) differentiable, making learning even more efficient by allowing us to compute analytical gradients with respect to the action taken. Finally, in Section 3.3 we show how our formulations can be used to efficiently learn task policies.\nProblem Setup. Each of the tasks we consider can be formulated as a two-agent finite-horizon MDP (Puterman, 1994).1 We denote the environment as E, and the agents as A and B. We assume a state s ∈ S can be partitioned as s := 〈sA, sB, senv〉, where sA ∈ SA, sB ∈ SB, and senv ∈ Senv. Here, sA and sB denote the proprioceptive states of the agents, such as joint configurations of robot arms, and senv captures the remaining aspects of the environment, such as object poses. An action a ∈ A is a tuple a := 〈aA, aB〉, where aA ∈ AA and aB ∈ AB, consisting of each agent’s actions. We focus on settings where the reward function of this MDP is binary and sparse, yielding reward rextrinsic(s) = 1 only when s achieves some desired goal configuration. Learning in such a setup corresponds to acquiring a (parameterized) policy πθ that maximizes the expected proportion of times that a goal configuration is achieved by following πθ.\nUnfortunately, exploration guided only by a sparse reward is challenging; we propose to additionally bias it via an intrinsic reward function. Let s̄ ∼ E(s, a) be a next state resulting from executing action a in state s. We wish to formulate an intrinsic reward function rintrinsic(s, a, s̄) that encourages synergistic actions and can thereby enable more efficient learning.\n1Our problem setup and proposed approach can be extended to settings with more than two agents. Details, with accompanying experimental results, are provided in Section 4.5." }, { "heading": "3.1 COMPOSITIONAL PREDICTION ERROR AS AN INTRINSIC REWARD", "text": "We want to encourage actions that affect the environment in ways that would not occur if the agents were acting individually. To formalize this notion, we note that a “synergistic” action is one where 
the agents acting together is crucial to the outcome; so, we should expect a different outcome if the corresponding actions were executed sequentially, with each individual agent acting one at a time.\nOur key insight is that we can leverage this difference between the true outcome of an action and the expected outcome with individual agents acting sequentially as a reward signal. We can capture the latter via a composition of forward prediction models for the effects of actions by individual agents acting separately. Concretely, let fA : Senv × SA × AA → Senv (resp. fB) be a single-agent prediction model that regresses to the next environment state resulting from A (resp. B) taking an action in isolation.2 We define our first formulation of intrinsic reward, rintrinsic1(s, a, s̄), by measuring the prediction error of s̄env using a composition of these single-agent prediction models:\n\nf composed(s, a) = fB(fA(senv, sA, aA), sB, aB),\n\nrintrinsic1(s, a, s̄) = ‖s̄env − f composed(s, a)‖.\n\nFor synergistic actions a, the prediction f composed(s, a) will likely be quite different from s̄env.\n\nIn practice, we pretrain fA and fB using data of random interactions in instantiations of the environment E with only a single active agent. This implies that the agents have already developed an understanding of the effects of acting alone before being placed in multi-agent environments that require synergistic behavior. Note that while random interactions sufficed to learn useful prediction models fA and fB in our experiments, this is not essential to the formulation, and one could leverage alternative single-agent exploration strategies to collect interaction samples instead." }, { "heading": "3.2 PREDICTION DISPARITY AS A DIFFERENTIABLE INTRINSIC REWARD", "text": "The reward rintrinsic1(s, a, s̄) presented above encourages actions that have a synergistic effect. However, note that this “measurement of synergy” for action a in state s requires explicitly observing the outcome s̄ of executing a in the environment. In contrast, when humans reason about synergistic tasks such as twisting open a bottle cap while holding the bottle base, we judge whether actions will have a synergistic effect without needing to execute them to make this judgement. Not only is the non-dependence of the intrinsic reward on s̄ scientifically interesting, but it is also practically desirable. Specifically, the term f composed(s, a) is analytically differentiable with respect to a (assuming that one uses differentiable regressors fA and fB, such as neural networks), but s̄env is not, since s̄ depends on a via the black-box environment. If we can reformulate the intrinsic reward to be analytically differentiable with respect to a, we can leverage this for more sample-efficient learning.\nTo this end, we observe that our formulation rewards actions where the expected outcome under the compositional prediction differs from the outcome when the agents act together. While we used the observed state s̄ as the indication of “outcome when the agents act together,” we could instead use a predicted outcome here. We therefore additionally train a joint prediction model f joint : S × A → Senv that, given the states and actions of both agents, and the environment state, predicts the next environment state. 
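Before formalizing this second reward, a minimal sketch of the ingredients so far may help: the per-agent regressors, their sequential composition, and rintrinsic1. The sketch below is our own illustrative PyTorch rendering under assumed state/action dimensionalities; the module and variable names are not the authors' released code:

import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    # Regresses the next environment state from (s_env, s_agent, a_agent).
    def __init__(self, env_dim, agent_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(env_dim + agent_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, env_dim))

    def forward(self, s_env, s_agent, a_agent):
        return self.net(torch.cat([s_env, s_agent, a_agent], dim=-1))

def f_composed(f_A, f_B, s_env, s_A, a_A, s_B, a_B):
    # Predicted effect of A acting alone, then B acting on the result.
    return f_B(f_A(s_env, s_A, a_A), s_B, a_B)

def r_intrinsic_1(s_env_next, f_A, f_B, s_env, s_A, a_A, s_B, a_B):
    # Compositional prediction error: large when the true joint effect
    # differs from the sequential single-agent prediction.
    pred = f_composed(f_A, f_B, s_env, s_A, a_A, s_B, a_B)
    return torch.norm(s_env_next - pred, dim=-1)

Extending to more agents (Section 4.5) amounts to folding additional per-agent models into f_composed.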
We then define our second formulation of intrinsic reward, rintrinsic2(s, a, ·), using the disparity between the predictions of the joint and compositional models:\n\nrintrinsic2(s, a, ·) = ‖f joint(s, a) − f composed(s, a)‖.\n\nNote that there is no dependence on s̄. At first, this formulation may seem less efficient than rintrinsic1, since f joint can at best only match s̄env, and requires being trained on data. However, we note that this formulation makes the intrinsic reward analytically differentiable with respect to the action a executed; we can leverage this within the learning algorithm to obtain more informative gradient updates, as we discuss further in the next section.\nRelation to Curiosity. Typical approaches to intrinsic motivation (Stadie et al., 2015; Pathak et al., 2017), which reward an agent for “doing what surprises it,” take on the form rintrinsicnon-synergistic(s, a, s̄) = ‖f joint(s, a) − s̄env‖. These curiosity-based methods will encourage the system to keep finding new behavior that surprises it, and thus can be seen as a technique for curiosity-driven skill discovery. In contrast, we are focused on synergistic multi-agent tasks with an extrinsic (albeit sparse) reward, so our methods for intrinsic motivation are not intended to encourage a diversity of learned behaviors, but rather to bias exploration to enable sample-efficient learning for a given task.\n2As the true environment dynamics are stochastic, it can be useful to consider probabilistic regressors f. However, recent successful applications of model-based reinforcement learning (Nagabandi et al., 2018; Clavera et al., 2019) have used deterministic regressors, modeling just the maximum likelihood transitions." }, { "heading": "3.3 LEARNING SPARSE-REWARD SYNERGISTIC TASKS", "text": "We simultaneously learn the joint prediction model f joint and the task policy πθ. We train πθ via reinforcement learning to maximize the expected total shaped reward rfull = rintrinsici + λ · rextrinsic (for i ∈ {1, 2}) across an episode. Concurrently, we make dual-purpose use of the transition samples {(s, a, s̄)} collected during the interactions with the environment to train f joint, by minimizing the loss ‖f joint(s, a) − s̄env‖. This simultaneous training of f joint and πθ, as was also done by Stadie et al. (2015), obviates the need for collecting additional samples to pretrain f joint and ensures that the joint prediction model is trained using the “interesting” synergistic actions being explored. Full pseudocode is provided in Appendix A.\nOur second intrinsic reward formulation allows us to leverage differentiability with respect to the action taken to make learning via policy gradient methods more efficient. Recall that any policy gradient algorithm (Schulman et al., 2017; 2015; Williams, 1992) performs gradient ascent with respect to policy parameters θ on the expected reward over trajectories: J(θ) := Eτ[rfull(τ)]. Expanding, we have J(θ) = Eτ[∑_{t=0}^{T} rfull(st, at, ·)] = Eτ[∑_{t=0}^{T} rintrinsic2(st, at, ·) + λ · rextrinsic(st)], where T is the horizon. We show in Appendix B that the gradient can be written as:\n∇θJ(θ) = ∑_{t=0}^{T} Eτt[rfull(st, at, ·) ∇θ log pθ(τ̄t)] + Eτ̄t[∇θ Eat∼πθ(st)[rintrinsic2(st, at, ·)]]. (1)\nHere, τt := 〈s0, a0, ..., st, at〉 denotes a trajectory up to time t, and τ̄t := 〈s0, a0, ..., st〉 denotes the same but excluding at. 
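As a hypothetical illustration of this differentiability (to be made precise below), the inner gradient in the second term reduces to ordinary backpropagation through the reward. The sketch reuses the ForwardModel regressors from the previous sketch, with the joint model instantiated to condition on both agents' states and actions; all names are our own illustrative assumptions, not the authors' released code:

import torch

def r_intrinsic_2(f_joint, f_A, f_B, s_env, s_A, a_A, s_B, a_B):
    # Disparity between the joint prediction and the sequential composition;
    # note that no environment transition (next state) is needed.
    pred_joint = f_joint(s_env, torch.cat([s_A, s_B], dim=-1),
                         torch.cat([a_A, a_B], dim=-1))
    pred_comp = f_B(f_A(s_env, s_A, a_A), s_B, a_B)
    return torch.norm(pred_joint - pred_comp, dim=-1)

# Reparameterized action samples a = mu + std * eps with eps ~ N(0, I), where
# mu_A, std_A, mu_B, std_B are assumed to be differentiable policy outputs:
a_A = mu_A + std_A * torch.randn_like(std_A)
a_B = mu_B + std_B * torch.randn_like(std_B)
reward = r_intrinsic_2(f_joint, f_A, f_B, s_env, s_A, a_A, s_B, a_B).sum()
reward.backward()  # gradients w.r.t. mu/std flow through the reward

Averaging such gradients over sampled states gives a Monte Carlo estimate of the second term of Equation 1.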
Given a state st, and assuming a differentiable way of sampling at ∼ πθ(st), such as using the reparameterization trick (Kingma & Welling, 2014), we can analytically compute the inner gradient in the second term since rintrinsic2 (st, at, ·) is differentiable with respect to at (again, assuming the regressors fA, fB , and f joint are differentiable). In Equation 1, the first term is similar to what typical policy gradient algorithms compute, with the difference being the use of pθ(τ̄t) instead of pθ(τt); the intuition is that we should not consider the effects of at here since it gets accounted for by the second term. In practice, however, we opt to treat the policy gradient algorithm as a black box, and simply add (estimates of) the gradients given by the second term to the gradients yielded by the black-box algorithm. While this leads to double-counting certain gradients (those of the expected reward at each timestep with respect to the action at that timestep), our preliminary experiments found this to minimally affect training, and make the implementation more convenient as one can leverage an off-the-shelf optimizer like PPO (Schulman et al., 2017)." }, { "heading": "4 EXPERIMENTS", "text": "We consider both bimanual manipulation tasks and multi-agent locomotion tasks, all of which require synergistic behavior, as our testbed. We establish the utility of our proposed formulations by comparing to baselines that do not use any intrinsic rewards, or use alternative intrinsic reward formulations. We also consider ablations of our method that help us understand the different intrinsic reward formulations, and the impact of partial differentiability. In Section 4.5, we show that our approach, with minor adaptations, continues to be useful in domains with more than two agents." }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "We consider four bimanual manipulation tasks: bottle opening, ball pickup, corkscrew rotating, and bar pickup. These environments are suggested as bimanual manipulation tasks by Chitnis et al. (2019). Furthermore, we consider two multi-agent locomotion tasks: ant push (inspired by the domain considered by Nachum et al. (2019)) and soccer (adapted from the implementation provided alongside Liu et al. (2019)). All tasks involve sparse rewards, and require effective use of both agents to be solved. We simulate all tasks in MuJoCo (Todorov et al., 2012). Now, we describe the tasks, state representations, and action spaces.\nEnvironments. The four manipulation tasks are set up with 2 Sawyer arms at opposite ends of a table, and an object placed on the table surface. Two of these tasks are visualized in Figure 2, alongside the two multi-agent locomotion tasks.\n• Bottle Opening: The goal is to rotate a cuboidal bottle cap, relative to a cuboidal bottle base, by 90◦. The bottle is modeled as two cuboids on top of one another, connected via a hinge joint, such that in the absence of opposing torques, both cuboids rotate together. We vary the location and size of the bottle across episodes. • Ball Pickup: The goal is to lift a slippery ball by 25cm. The ball slips out when a single arm tries to lift it. We vary the location and coefficient of friction of the ball across episodes. • Corkscrew Rotating: The goal is to rotate a corkscrew relative to its base by 180◦. The corkscrew is modeled as a handle attached to a base via a hinge joint, such that in the absence of opposing torques, both rotate together. We vary the location and size of the corkscrew across episodes. 
• Bar Pickup: The goal is to lift a long heavy bar by 25 cm. The bar is too heavy to be lifted by a single arm. We vary the location and density of the bar across episodes. • Ant Push: Two ants and a large block are placed in an environment. The goal is for the ants to move the block to a particular region. To control the block precisely, the ants need to push it together, as they will often topple over when trying to push the block by themselves. • Soccer: Two soccer-playing agents and a soccer ball are placed in an environment. The goal is for the ball to be kicked into a particular region, after having been in the possession of each agent for any amount of time. Therefore, the agents must both contribute to the movement of the ball.\nSee Section 4.5 for results on three-agent versions of the Ant Push and Soccer environments.\nState Representation. The internal state of each agent consists of proprioceptive features: joint positions, joint velocities, and (for manipulation tasks) the end effector pose. The environment state consists of the current timestep, geometry information for the object, and the object pose. We use a simple Euclidean metric over the state space. All forward models predict the change in the object’s world frame pose, via an additive offset for the 3D position and a Hamilton product for the orientation quaternion. The orientation is not tracked in the soccer task.\nAction Space. To facilitate learning within these environments, we provide the system with a discrete library of generic skills, each parameterized by some (learned) continuous parameters. Therefore, our stochastic policy πθ maps a state to 1) a distribution over skills for agent A to use, 2) a distribution over skills for agent B to use, 3) means and variances of independent Gaussian distributions for every continuous parameter of skills for A, and 4) means and variances of independent Gaussian distributions for every continuous parameter of skills for B. These skills can either be hand-designed (Wolfe et al., 2010; Srivastava et al., 2014) or learned from demonstration (Kroemer et al., 2015); as this is not the focus of our paper, we opt to simply hand-design them. While executing a skill, if the agents are about to collide with each other, we attempt to bring them back to the states they were in before execution. For manipulation tasks, if we cannot find an inverse kinematics solution for achieving a skill, it is not executed, though it still consumes a timestep. In either of these cases, the reward is 0. See Appendix C for more details on these environments." }, { "heading": "4.2 IMPLEMENTATION DETAILS", "text": "Network Architecture. All forward models and the policy are 4-layer fully connected neural networks with 64-unit hidden layers, ReLU activations, and a multi-headed output to capture both the actor and the critic. Bimanual manipulation tasks are built on the Surreal Robotics Suite (Fan et al., 2018). For all tasks, training is parallelized across 50 workers.\nTraining Details. Our proposed synergistic intrinsic rewards rely on forward models fA, fB, and f joint. We pretrain the single-agent model fA (resp. fB) on 10^5 samples of experience with a random policy of only agent A (resp. B) acting. Note that this pretraining does not use any extrinsic reward, and therefore the number of steps under the extrinsic reward is comparable across all the approaches. The joint model f joint and policy πθ start from scratch, and are optimized concurrently. 
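For concreteness, a hypothetical PyTorch rendering of the multi-headed policy network described above follows; the head names and sizes are our assumptions for illustration, not the exact released architecture:

import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    # Four fully connected layers along any input-to-output path (64-unit
    # hidden layers, ReLU), with a multi-headed output: per-agent skill
    # logits, Gaussian parameters for continuous skill arguments, and a
    # scalar value estimate.
    def __init__(self, obs_dim, n_skills_A, n_skills_B, n_params, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.skill_A = nn.Linear(hidden, n_skills_A)
        self.skill_B = nn.Linear(hidden, n_skills_B)
        self.param_mean = nn.Linear(hidden, n_params)
        self.param_logstd = nn.Linear(hidden, n_params)
        self.value = nn.Linear(hidden, 1)

    def forward(self, obs):
        h = self.trunk(obs)
        return (self.skill_A(h), self.skill_B(h),
                self.param_mean(h), self.param_logstd(h), self.value(h))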
We set the trade-off coefficient λ = 10 (see Appendix D). We use the stable baselines (Hill et al., 2018) implementation of PPO (Schulman et al., 2017) as our policy gradient algorithm. We use clipping parameter 0.2, entropy loss coefficient 0.01, value loss function coefficient 0.5, gradient clip threshold 0.5, number of steps 10, number of minibatches per update 4, number of optimization epochs per update 4, and Adam (Kingma & Ba, 2015) with learning rate 0.001." }, { "heading": "4.3 BASELINES", "text": "• Random policy: We randomly choose a skill and parameterization for each agent, at every step. This baseline serves as a sanity check to ensure that our use of skills does not trivialize the tasks. • Separate-agent surprise: This baseline simultaneously executes two independent single-agent curiosity policies that are pretrained to maximize the “surprise” rewards ‖fA(s, a) − s̄env‖ and ‖fB(s, a) − s̄env‖ respectively. • Extrinsic reward only: This baseline uses only extrinsic sparse rewards rextrinsic, without shaping. • Non-synergistic surprise: We learn a joint two-agent policy to optimize for the extrinsic reward and the joint surprise: rfull = rintrinsicnon-synergistic + λ · rextrinsic. This encourages curiosity-driven skill discovery but does not explicitly encourage synergistic multi-agent behavior." }, { "heading": "4.4 RESULTS AND DISCUSSION", "text": "Figure 3 shows task success rates as a function of the number of interaction samples for the different methods on each environment. We plot average success rate over 5 random seeds using solid lines, and shade standard deviations. Now, we summarize our three key takeaways.\n1) Synergistic intrinsic rewards boost sample efficiency. The tasks we consider are hard and our use of parameterized skills does not trivialize the tasks. Furthermore, these tasks require coordination among the two agents, and so Separate-agent surprise policies do not perform well. Given enough training samples, Extrinsic reward only policies start to perform decently well. However, our use of synergistic intrinsic rewards to shape the extrinsic rewards from the environment accelerates learning, solving the task consistently with up to 5× fewer samples in some cases. 2) Synergistic intrinsic rewards perform better than non-synergistic intrinsic rewards. Policies that use our synergistic intrinsic rewards also work better than the Non-synergistic surprise baseline. This is primarily because the baseline policies learn to exploit the joint model rather than to behave synergistically. This also explains why Non-synergistic surprise used together with extrinsic reward hurts task performance (green vs. red curve in Figure 3). Past experiments with such surprise models have largely been limited to games, where progress is correlated with continued exploration (Burda et al., 2019); solving robotic tasks often involves more than just surprise-driven exploration. Figure 4 (top) gives additional results showing that our method’s competitive advantage over this baseline persists even if we allow the baseline additional interactions to pretrain the joint prediction model f joint without using any extrinsic reward (similar to our method’s pretraining for f composed).\n3) Analytical gradients boost sample efficiency. 
In going from rintrinsic1 (compositional prediction error) to rintrinsic2 (prediction disparity), we changed two things: 1) the reward function and 2) how it is optimized (we used Equation 1 to leverage the partial differentiability of rintrinsic2). We conduct an ablation to disentangle the impact of these two changes. Figure 4 (bottom) presents learning curves for using rintrinsic2 without analytical gradients, situated in comparison to the previously shown results. When we factor out the difference due to optimization and compare rintrinsic1 and rintrinsic2 as different intrinsic reward formulations, rintrinsic1 performs better than rintrinsic2 (purple vs. yellow curve). This is expected because rintrinsic2 requires training an extra model f joint concurrently with the policy, which at best could match the true s̄env. Leveraging the analytical gradients, though, affords rintrinsic2 more sample-efficient optimization (brown vs. purple curve), making it a better overall choice.\nWe have also tried using our formulation of intrinsic motivation without extrinsic reward (λ = 0); qualitatively, the agents learn to act synergistically, but in ways that do not solve the “task,” which is sensible since the task is unknown to the agents. See the project webpage for videos of these results. Furthermore, in Appendix D we provide a plot of policy performance versus various settings of λ." }, { "heading": "4.5 EXTENSION: MORE THAN TWO AGENTS", "text": "It is possible to extend our formulation and proposed approach to more than two agents. Without loss of generality, suppose there are three agents A, B, and C. The only major change is in the way that we should compute the compositional prediction: instead of f composed(s, a) = fB(fA(senv, sA, aA), sB, aB), we use f composed(s, a) = fC(fB(fA(senv, sA, aA), sB, aB), sC, aC). One issue is that as the number of agents increases, the ordering of the application of single-agent forward models within f composed becomes increasingly important. To address this, we also tried evaluating f composed as an average across the predictions given by all six possible orderings of application, but we did not find this to make much difference in the results. We leave a thorough treatment of this important question to future work.\nWe tested this approach on three-agent versions of the ant push and soccer environments, and found that it continues to provide a useful bias. See Figure 5. In three-agent ant push, we give harder goal regions for the ants to push the block to than in two-agent ant push; these regions were chosen by hand so that all three ants are required to coordinate to solve these tasks, rather than just two as before. In three-agent soccer, all three agents must have possessed the ball before the goal is scored." }, { "heading": "5 CONCLUSION", "text": "In this work, we presented a formulation of intrinsic motivation that encourages synergistic behavior, and allows efficiently learning sparse-reward tasks such as bimanual manipulation and multi-agent locomotion. We observed significant benefits compared to non-synergistic forms of intrinsic motivation. Our formulation relied on encouraging actions whose effects would not be achieved by individual agents acting in isolation. It would be beneficial to extend this notion further, and explicitly encourage action sequences, not just individual actions, whose effects would not be achieved by individual agents. 
Furthermore, while our intrinsic reward encouraged synergistic behavior in the single policy being learned, it would be interesting to extend it to learn a diverse set of policies, and thereby discover a broad set of synergistic skills over the course of training. Finally, it would be good to extend the domains to involve more complicated object types, such as asymmetric or deformable ones; especially for deformable objects, engineering better state representations is crucial." }, { "heading": "ACKNOWLEDGMENTS", "text": "Rohan is supported by an NSF Graduate Research Fellowship. Any opinions, findings, and conclusions expressed in this material are the authors’ and need not reflect the views of our sponsors." }, { "heading": "A PSEUDOCODE", "text": "Here is the full pseudocode of our training algorithm described in Section 3.3:\nAlgorithm TRAIN-SYNERGISTIC-POLICY(πθ, M, n, α)\n1 Input: πθ, an initial policy.\n2 Input: M, an MDP for a synergistic task.\n3 Input: n, the number of episodes of data with which to train single-agent models.\n4 Input: α, a step size.\n5 for i = 1, 2, ..., n do\n6 Append episode of experience in M with only agent A acting to data buffer DA.\n7 Append episode of experience in M with only agent B acting to data buffer DB.\n8 Fit forward models fA, fB to predict next states in DA, DB. // Pretrained & fixed.\n9 Djoint ← ∅ // Data for joint model, only needed if using rintrinsic2.\n10 while πθ has not converged do\n11 D ← batch of experience tuples (st, at, rextrinsict, st+1) from running πθ in M.\n12 if using rintrinsic2 then\n13 Append D to Djoint and fit forward model f joint to predict next states in Djoint.\n14 for (st, at, rextrinsict, st+1) ∈ D do\n15 Replace rextrinsict with rfull(st, at, st+1). // Shape reward, see Section 3.3.\n16 ∇θJ(θ) ← POLICYGRADIENT(πθ, D)\n17 if using rintrinsic2 then\n18 Update ∇θJ(θ) with analytical gradients per Equation 1.\n19 θ ← θ + α∇θJ(θ) // Or Adam (Kingma & Ba, 2015)." }, { "heading": "B DERIVATION OF EQUATION 1", "text": "When using rintrinsic2, the objective to be optimized can be written as:\nJ(θ) ≡ Eτ[rfull(τ)] = Eτ[∑_{t=0}^{T} rfull(st, at, ·)] = Eτ[∑_{t=0}^{T} rintrinsic2(st, at, ·) + λ · rextrinsic(st)].\nWe will write ∇θJ(θ) in a particular way. Let τ̄t = 〈s0, a0, s1, a1, ..., st〉 be a random variable denoting trajectories up to timestep t, but excluding at. We have:\n∇θJ(θ) = ∇θEτ[rfull(τ)] = ∑_{t=0}^{T} ∇θEτ̄t[Eat∼πθ(st)[rfull(st, at, ·)]],\nwhere we have used the fact that trajectories up to timestep t have no dependence on the future st+1, at+1, ..., sT, and we have split up the expectation. Now, observe that the inner expectation, Eat∼πθ(st)[rfull(st, at, ·)], is dependent on θ since the at are sampled from the policy πθ; intuitively, this expression represents the expected reward of st with respect to the stochasticity in the current policy. To make this dependence explicit, let us define rfullθ(st) := Eat∼πθ(st)[rfull(st, at, ·)]. Then:\n∇θJ(θ) = ∑_{t=0}^{T} ∇θEτ̄t[rfullθ(st)]\n= ∑_{t=0}^{T} ∫_{τ̄t} ∇θ[pθ(τ̄t) rfullθ(st)] dτ̄t\n= ∑_{t=0}^{T} ∫_{τ̄t} pθ(τ̄t) rfullθ(st) ∇θ log pθ(τ̄t) + pθ(τ̄t) ∇θrfullθ(st) dτ̄t\n= ∑_{t=0}^{T} Eτ̄t[rfullθ(st) ∇θ log pθ(τ̄t)] + Eτ̄t[∇θrfullθ(st)],\nwhere in the second line, we used both the product rule and the REINFORCE trick (Williams, 1992).\nNow, let τt = 〈s0, a0, s1, a1, ..., st, at〉 denote trajectories up to timestep t, including at (unlike τ̄t). 
Putting back Eat∼πθ(st)[rfull(st, at, ·)] in place of rfullθ(st) gives Equation 1:\n∇θJ(θ) = ∑_{t=0}^{T} Eτ̄t[Eat∼πθ(st)[rfull(st, at, ·)] ∇θ log pθ(τ̄t)] + Eτ̄t[∇θEat∼πθ(st)[rfull(st, at, ·)]]\n= ∑_{t=0}^{T} Eτt[rfull(st, at, ·) ∇θ log pθ(τ̄t)] + Eτ̄t[∇θEat∼πθ(st)[rintrinsic2(st, at, ·)]].\nIn the second line, we have used the facts that τ̄t and the extrinsic sparse reward do not depend on at. Note that we can estimate the term Eτ̄t[∇θEat∼πθ(st)[rintrinsic2(st, at, ·)]] empirically using a batch of trajectory data τ1, ..., τn, for any timestep t." }, { "heading": "C ADDITIONAL ENVIRONMENT DETAILS", "text": "C.1 MANIPULATION TASKS\nWe provide additional details about the action space of each manipulation environment.\nThe following table describes the parameterization of each skill in the library, as well as which environments are allowed to utilize each skill:\nSkill Environments Continuous Parameters\ntop grasp bar, ball, bottle end effector position, end effector z-orientation\nside grasp bottle, corkscrew end effector position, approach angle\ngo-to pose ball, corkscrew end effector position, end effector orientation\nlift bar, ball vertical distance to lift end effector\ntwist bottle none (wrist joint rotates at current end effector pose)\nrotate corkscrew rotation axis, rotation radius\nno-op all none\nThe following table describes the search space of each continuous parameter. Since the object pose is known in simulation, we are able to leverage it in designing these search spaces:\nContinuous Parameter Environments Relevant Skills Search Space\nend effector position (unitless) bar top grasp [-1, 1] interpolated position along bar\nend effector position (meters) ball, bottle, corkscrew grasps, go-to pose [-0.1, 0.1] x/y/z offset from object center\nend effector z-orientation bar, ball, bottle top grasp [0, 2π]\napproach angle bottle, corkscrew side grasp [−π/2, π/2]\nend effector orientation ball, corkscrew go-to pose [0, 2π] r/p/y Euler angles converted to quat\ndistance to lift (meters) bar, bottle lift [0, 0.5]\nrotation axis corkscrew rotate [-0.1, 0.1] x/y offset from object center; vertical\nrotation radius (meters) corkscrew rotate [0, 0.2]\nNote that our inverse kinematics feasibility checks allow the system to learn to rule out end effector poses which are impossible to reach, since these cause no change in the state other than consuming a timestep, and generate 0 reward.\nC.2 LOCOMOTION TASKS\nWe provide additional details about the action space of the locomotion environments. For both the ant push and soccer tasks, we follow Nachum et al. (2019) and pre-train four skills: moving up, down, left, and right on the plane. Each skill has one continuous parameter specifying an amount to move. So, at each timestep, the policy must select both which direction to move and how much to move in that direction. All training hyperparameters are unchanged from the manipulation tasks.\nC.3 POLICY ARCHITECTURE\nFigure 6 shows a diagram of our policy architecture.\nD IMPACT OF COEFFICIENT λ\nWe conducted an experiment to study the impact of the trade-off coefficient λ on the performance of the learned policy. When λ = 0, no extrinsic reward is used, so the agents learn to act synergistically, but in ways that do not solve the “task,” which is sensible since the task is unknown to them. Our experiments reported in the main text used λ = 10. See Figure 7 for the results of this experiment." } ]
2020
INTRINSIC MOTIVATION FOR ENCOURAGING SYNERGISTIC BEHAVIOR
SP:71d504ec722cacab616fca85dd2937b93e71caaf
[ "The paper introduces the R-Transformer architecture which adds a local RNN layer before each attention layer in Transformer. The authors claim state-of-the-art performance but only test on tiny tasks where Transformer models have not been heavily optimized and omit the main problem with RNNs - namely their speed. It is an interesting paper still and the locality is a nice way to remedy the speed problem, but the paper lacks a true study and ablations on this main limitation. In summary: the main new idea of the paper is to make RNNs local in Transformer (trying to add RNN layers has been explored before). This idea could be a good tradeoff between full RNN (slow) and no RNN (lack of context), but the following is missing: (1) ablations on speed vs results by locality window, (2) experiments on more widely reported and larger data-sets and models, at least including some language modeling task (wiki or lm1b) and some translation task (like en-de). Without these results, we cannot recommend to accept this paper.", "This paper proposes a new architecture, R-Transformer, that blends the Transformer networks and the recurrent networks, so as to better capture both the long- and short-term features. By injecting a local RNN layer at every level of the network, the authors hoped to enhance the Transformer's ability to model locality structure. To demonstrate the modeling power of R-Trasnformer, the paper evaluates the effectiveness of R-Transformer on 4 different sequence tasks (seqMNIST, polyphonic music, character- and word-level PTB)." ]
Recurrent Neural Networks have long been the dominant choice for sequence modeling. However, they severely suffer from two issues: they are ineffective at capturing very long-term dependencies, and their sequential computation procedure cannot be parallelized. Therefore, many non-recurrent sequence models built on convolution and attention operations have been proposed recently. Notably, models with multi-head attention such as the Transformer have demonstrated extreme effectiveness in capturing long-term dependencies in a variety of sequence modeling tasks. Despite their success, however, these models lack the necessary components to model local structures in sequences and heavily rely on position embeddings, which have limited effects and require a considerable amount of design effort. In this paper, we propose the R-Transformer, which enjoys the advantages of both RNNs and the multi-head attention mechanism while avoiding their respective drawbacks. The proposed model can effectively capture both local structures and global long-term dependencies in sequences without any use of position embeddings. We evaluate R-Transformer through extensive experiments with data from a wide range of domains, and the empirical results show that R-Transformer outperforms the state-of-the-art methods by a large margin in most of the tasks. We have made the code and data publicly available 1.
[]
[ { "authors": [ "Rami Al-Rfou", "Dokook Choe", "Noah Constant", "Mandy Guo", "Llion Jones" ], "title": "Character-level language modeling with deeper self-attention", "venue": "arXiv preprint arXiv:1808.04444,", "year": 2018 }, { "authors": [ "Shaojie Bai", "J Zico Kolter", "Vladlen Koltun" ], "title": "An empirical evaluation of generic convolutional and recurrent networks for sequence modeling", "venue": "arXiv preprint arXiv:1803.01271,", "year": 2018 }, { "authors": [ "Nicolas Boulanger-Lewandowski", "Yoshua Bengio", "Pascal Vincent" ], "title": "Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription", "venue": "arXiv preprint arXiv:1206.6392,", "year": 2012 }, { "authors": [ "Shiyu Chang", "Yang Zhang", "Wei Han", "Mo Yu", "Xiaoxiao Guo", "Wei Tan", "Xiaodong Cui", "Michael Witbrock", "Mark A Hasegawa-Johnson", "Thomas S Huang" ], "title": "Dilated recurrent neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ciprian Chelba", "Frederick Jelinek" ], "title": "Structured language modeling", "venue": "Computer Speech & Language,", "year": 2000 }, { "authors": [ "Stanley F Chen", "Joshua Goodman" ], "title": "An empirical study of smoothing techniques for language modeling", "venue": "Computer Speech & Language,", "year": 1999 }, { "authors": [ "Kyunghyun Cho", "Bart Van Merriënboer", "Caglar Gulcehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", "venue": "arXiv preprint arXiv:1406.1078,", "year": 2014 }, { "authors": [ "Junyoung Chung", "Caglar Gulcehre", "KyungHyun Cho", "Yoshua Bengio" ], "title": "Empirical evaluation of gated recurrent neural networks on sequence modeling", "venue": "arXiv preprint arXiv:1412.3555,", "year": 2014 }, { "authors": [ "Zihang Dai", "Zhilin Yang", "Yiming Yang", "William W Cohen", "Jaime Carbonell", "Quoc V Le", "Ruslan Salakhutdinov" ], "title": "Transformer-xl: Attentive language models beyond a fixed-length context", "venue": null, "year": 1901 }, { "authors": [ "Yann N Dauphin", "Angela Fan", "Michael Auli", "David Grangier" ], "title": "Language modeling with gated convolutional networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. 
Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Jonas Gehring", "Michael Auli", "David Grangier", "Yann N Dauphin" ], "title": "A convolutional encoder model for neural machine translation", "venue": "arXiv preprint arXiv:1611.02344,", "year": 2016 }, { "authors": [ "Jonas Gehring", "Michael Auli", "David Grangier", "Denis Yarats", "Yann N Dauphin" ], "title": "Convolutional sequence to sequence learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Alex Graves", "Navdeep Jaitly" ], "title": "Towards end-to-end speech recognition with recurrent neural networks", "venue": "In International conference on machine learning,", "year": 2014 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Balázs Hidasi", "Alexandros Karatzoglou", "Linas Baltrunas", "Domonkos Tikk" ], "title": "Session-based recommendations with recurrent neural networks", "venue": "arXiv preprint arXiv:1511.06939,", "year": 2015 }, { "authors": [ "Geoffrey Hinton", "Li Deng", "Dong Yu", "George Dahl", "Abdel-rahman Mohamed", "Navdeep Jaitly", "Andrew Senior", "Vincent Vanhoucke", "Patrick Nguyen", "Brian Kingsbury" ], "title": "Deep neural networks for acoustic modeling in speech recognition", "venue": "IEEE Signal processing magazine,", "year": 2012 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Yoon Kim", "Yacine Jernite", "David Sontag", "Alexander M Rush" ], "title": "Character-aware neural language models", "venue": "In Thirtieth AAAI Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ "David Krueger", "Tegan Maharaj", "János Kramár", "Mohammad Pezeshki", "Nicolas Ballas", "Nan Rosemary Ke", "Anirudh Goyal", "Yoshua Bengio", "Aaron Courville", "Chris Pal" ], "title": "Zoneout: Regularizing rnns by randomly preserving hidden activations", "venue": "arXiv preprint arXiv:1606.01305,", "year": 2016 }, { "authors": [ "Quoc V Le", "Navdeep Jaitly", "Geoffrey E Hinton" ], "title": "A simple way to initialize recurrent networks of rectified linear units", "venue": "arXiv preprint arXiv:1504.00941,", "year": 2015 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Mitchell Marcus", "Beatrice Santorini", "Mary Ann Marcinkiewicz" ], "title": "Building a large annotated corpus of english: The penn treebank", "venue": null, "year": 1993 }, { "authors": [ "Tomáš Mikolov", "Martin Karafiát", "Lukáš Burget", "Jan Černockỳ", "Sanjeev Khudanpur" ], "title": "Recurrent neural network based language model", "venue": "In Eleventh annual conference of the international speech communication association,", "year": 2010 }, { "authors": [ "Razvan Pascanu", "Tomas Mikolov", "Yoshua Bengio" ], "title": "On the difficulty of training recurrent neural networks", "venue": "In International conference on machine learning,", "year": 2013 }, { "authors": [ "Daniel Quang", "Xiaohui Xie" ], "title": "Danq: a hybrid convolutional and recurrent deep 
neural network for quantifying the function of dna sequences", "venue": "Nucleic acids research,", "year": 2016 }, { "authors": [ "Martin Sundermeyer", "Ralf Schlüter", "Hermann Ney" ], "title": "Lstm neural networks for language modeling", "venue": "In Thirteenth annual conference of the international speech communication association,", "year": 2012 }, { "authors": [ "Ke Tran", "Arianna Bisazza", "Christof Monz" ], "title": "Recurrent memory networks for language modeling", "venue": "arXiv preprint arXiv:1601.01272,", "year": 2016 }, { "authors": [ "Aäron Van Den Oord", "Sander Dieleman", "Heiga Zen", "Karen Simonyan", "Oriol Vinyals", "Alex Graves", "Nal Kalchbrenner", "Andrew W Senior", "Koray Kavukcuoglu" ], "title": "Wavenet: A generative model for raw audio", "venue": null, "year": 2016 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Scott Wisdom", "Thomas Powers", "John Hershey", "Jonathan Le Roux", "Les Atlas" ], "title": "Full-capacity unitary recurrent neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Saizheng Zhang", "Yuhuai Wu", "Tong Che", "Zhouhan Lin", "Roland Memisevic", "Ruslan R Salakhutdinov", "Yoshua Bengio" ], "title": "Architectural complexity measures of recurrent neural networks", "venue": "In Advances in neural information processing systems,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recurrent Neural Networks (RNNs) especially its variants such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) have achieved great success in a wide range of sequence learning tasks including language modeling, speech recognition, recommendation, etc (Mikolov et al., 2010; Sundermeyer et al., 2012; Graves & Jaitly, 2014; Hinton et al., 2012; Hidasi et al., 2015). Despite their success, however, the recurrent structure is often troubled by two notorious issues. First, it easily suffers from gradient vanishing and exploding problems, which largely limits their ability to learn very long-term dependencies (Pascanu et al., 2013). Second, the sequential nature of both forward and backward passes makes it extremely difficult, if not impossible, to parallelize the computation, which dramatically increases the time complexity in both training and testing procedure. Therefore, many recently developed sequence learning models have completely jettisoned the recurrent structure and only rely on convolution operation or attention mechanism that are easy to parallelize and allow the information flow at an arbitrary length. Two representative models that have drawn great attention are Temporal Convolution Networks(TCN) (Bai et al., 2018) and Transformer (Vaswani et al., 2017). In a variety of sequence learning tasks, they have demonstrated comparable or even better performance than that of RNNs (Gehring et al., 2017; Bai et al., 2018; Devlin et al., 2018).\nThe remarkable performance achieved by such models largely comes from their ability to capture long-term dependencies in sequences. In particular, the multi-head attention mechanism in Transformer allows every position to be directly connected to any other positions in a sequence. Thus, the information can flow across positions without any intermediate loss. Nevertheless, there are two issues that can harm the effectiveness of multi-head attention mechanism for sequence learning. The first comes from the loss of sequential information of positions as it treats every position identically. To mitigate this problem, Transformer introduces position embeddings, whose effects,\n1https://www.dropbox.com/sh/u35qgqnmjpywcqn/AAAITcId7DRPOD9KRooQW7i2a?dl=0\nhowever, have been shown to be limited (Dehghani et al., 2018; Al-Rfou et al., 2018). In addition, it requires considerable amount of efforts to design more effective position embeddings or different ways to incorporate them in the learning process (Dai et al., 2019). Second, while multi-head attention mechanism is able to learn the global dependencies, we argue that it ignores the local structures that are inherently important in sequences such as natural languages. Even with the help of position embeddings, the signals at local positions can still be very weak as the number of other positions is significantly more.\nTo address the aforementioned limitations of the standard Transformer, in this paper, we propose a novel sequence learning model, termed as R-Transformer. It is a multi-layer architecture built on RNNs and the standard Transformer, and enjoys the advantages of both worlds while naturally avoids their respective drawbacks. More specifically, before computing global dependencies of positions with the multi-head attention mechanism, we firstly refine the representation of each position such that the sequential and local information within its neighborhood can be compressed in the representation. 
To do this, we introduce a local recurrent neural network, referred to as LocalRNN, to process signals within a local window ending at a given position. The LocalRNN operates on the local windows of all positions identically and independently and produces a latent representation for each of them. In this way, the locality in the sequence is explicitly captured. Moreover, as the local window slides along the sequence one position at a time, the global sequential information is also incorporated. More importantly, because the LocalRNN is applied only to local windows, the aforementioned two drawbacks of RNNs are naturally mitigated. We evaluate the effectiveness of R-Transformer on a variety of sequence learning tasks from different domains, and the empirical results demonstrate that R-Transformer achieves much stronger performance than both TCN and the standard Transformer as well as other state-of-the-art sequence models.
The rest of the paper is organized as follows: Section 2 discusses the sequence modeling problem we aim to solve; the proposed R-Transformer model is presented in Section 3. In Section 4, we describe the experimental details and discuss the results. Related work is briefly reviewed in Section 5. Section 6 concludes this work." }, { "heading": "2 SEQUENCE MODELING PROBLEM", "text": "Before introducing the proposed R-Transformer model, we formally describe the sequence modeling problem. Given a sequence of length N: x_1, x_2, · · · , x_N, we aim to learn a function f : X^N → Y that maps the input sequence into a label space Y. Formally,
y = f(x_1, x_2, · · · , x_N)    (1)
where y ∈ Y is the label of the input sequence. Depending on the definition of the label y, many tasks can be formulated as the sequence modeling problem defined above. For example, in the language modeling task, x_t is the character/word in a textual sentence and y is the character/word at the next position (Mikolov et al., 2010); in session-based recommendation, x_t is a user-item interaction in a session and y is the future item that the user will interact with (Hidasi et al., 2015); when x_t is a nucleotide in a DNA sequence and y is its function, this problem becomes a DNA function prediction task (Quang & Xie, 2016). Note that, in this paper, we do not consider sequence-to-sequence learning problems. However, the proposed model can be easily extended to solve them, and we leave this as future work." }, { "heading": "3 THE R-TRANSFORMER MODEL", "text": "The proposed R-Transformer consists of a stack of identical layers. Each layer has 3 components that are organized hierarchically; the architecture of the layer structure is shown in Figure 1. As shown in the figure, the lower level is a local recurrent neural network designed to model local structures in a sequence; the middle level is multi-head attention, which is able to capture global long-term dependencies; and the upper level is a position-wise feedforward network, which conducts a non-linear feature transformation. Next, we describe each level in detail." }, { "heading": "3.1 LOCALRNN: MODELING LOCAL STRUCTURES", "text": "Sequential data such as natural language inherently exhibits strong local structures. Thus, it is desirable and necessary to design components to model such locality. In this subsection, we propose to take advantage of RNNs to achieve this. 
Unlike previous works where RNNs are applied to the whole sequence, we instead reorganize the original long sequence into many short sequences which contain only local information and are processed by a shared RNN independently and identically. In particular, we construct a local window of size M for each target position such that the local window includes the M consecutive positions ending at the target position. Thus, the positions in each local window form a local short sequence, from which the shared RNN will learn a latent representation. In this way, the local structure of each region of the sequence is explicitly incorporated into the learned latent representations. We refer to the shared RNN as the LocalRNN. Compared to the original RNN operation, the LocalRNN focuses only on local short-term dependencies, without considering any long-term dependencies. Figure 2 shows the difference between the original RNN and LocalRNN operations. Concretely, given the positions x_{t−M+1}, x_{t−M+2}, · · · , x_t of a local short sequence of length M, the LocalRNN processes them sequentially and outputs M hidden states, the last of which is used as the representation of the local short sequence:
h_t = LocalRNN(x_{t−M+1}, x_{t−M+2}, · · · , x_t)    (2)
where the LocalRNN can use any RNN cell such as the vanilla RNN cell, LSTM, or GRU. To enable the model to process the sequence in an auto-regressive manner and to ensure that no future information is available when processing one position, we pad the input sequence with (M − 1) positions before the start of the sequence. Thus, from the sequence perspective, the LocalRNN takes an input sequence and outputs a sequence of hidden representations that incorporate information from local regions:
h_1, h_2, · · · , h_N = LocalRNN(x_1, x_2, · · · , x_N)    (3)
The LocalRNN is analogous to 1-D Convolutional Neural Networks, where each local window is processed by convolution operations. However, the convolution operation completely ignores the sequential information of positions within the local window. Although position embeddings have been proposed to mitigate this problem, a major deficiency of this approach is that the effectiveness of the position embedding can be limited, and it requires a considerable amount of extra effort (Gehring et al., 2017). In contrast, the LocalRNN is able to fully capture the sequential information within each window. In addition, the one-by-one sliding operation also naturally incorporates the global sequential information.
Discussion: RNNs have long been a dominant choice for sequence modeling, but they severely suffer from two problems: their limited ability to capture long-term dependencies, and their time complexity, which is linear in the sequence length. In the LocalRNN, however, these problems are naturally mitigated, because the LocalRNN is applied only to a short sequence within a local window of fixed size, where no long-term dependency needs to be captured. Furthermore, the computation procedures for processing the short sequences are independent of each other. Therefore, parallel implementation (e.g., using GPUs) is straightforward and can greatly improve computation efficiency.
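To make the windowed computation concrete, the sketch below is one plausible PyTorch implementation of Eqs. (2)-(3); the module structure, the use of unfold to build the causally padded windows, and the choice of a GRU cell are our own illustrative assumptions rather than the authors' released code.

import torch
import torch.nn as nn

class LocalRNN(nn.Module):
    """Sketch of Eqs. (2)-(3): a shared RNN applied to M-sized causal windows."""
    def __init__(self, d_model: int, window: int, cell: str = "GRU"):
        super().__init__()
        self.window = window
        rnn_cls = {"RNN": nn.RNN, "GRU": nn.GRU, "LSTM": nn.LSTM}[cell]
        self.rnn = rnn_cls(d_model, d_model, batch_first=True)

    def forward(self, x):  # x: (batch, N, d_model)
        B, N, D = x.shape
        M = self.window
        # Pad (M - 1) positions before the start of the sequence so that the
        # window ending at position t never sees future positions.
        pad = x.new_zeros(B, M - 1, D)
        xp = torch.cat([pad, x], dim=1)                 # (B, N + M - 1, D)
        windows = xp.unfold(1, M, 1)                    # (B, N, D, M)
        windows = windows.permute(0, 1, 3, 2).reshape(B * N, M, D)
        # The shared RNN processes all B*N short sequences independently and
        # identically, so the computation parallelizes over windows.
        out, _ = self.rnn(windows)                      # (B*N, M, D)
        return out[:, -1, :].reshape(B, N, D)           # last hidden state = h_t

local_rnn = LocalRNN(d_model=32, window=7)
h = local_rnn(torch.randn(4, 50, 32))
print(h.shape)  # torch.Size([4, 50, 32])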
" }, { "heading": "3.2 CAPTURING THE GLOBAL LONG-TERM DEPENDENCIES WITH MULTI-HEAD ATTENTION", "text": "The RNNs at the lower level introduced in the previous subsection refine the representation of each position so that it incorporates its local information. In this subsection, we build a sub-layer on top of the LocalRNN to capture the global long-term dependencies. We term it the pooling sub-layer because it functions similarly to the pooling operation in CNNs. Recent works have shown that the multi-head attention mechanism is extremely effective at learning long-term dependencies, as it allows a direct connection between every pair of positions. More specifically, in the multi-head attention mechanism, each position attends to all positions in the past and obtains a set of attention scores that are used to refine its representation. Mathematically, given the current representations h_1, h_2, · · · , h_t, the refined new representation u_t is calculated as:
u_t = MultiHeadAttention(h_1, h_2, · · · , h_t) = Concatenation(head_1(h_t), head_2(h_t), · · · , head_k(h_t)) W^o    (4)
where head_i(h_t) is the result of the i-th attention pooling and W^o is a linear projection matrix. Considering both efficiency and effectiveness, the scaled dot product is used as the attention function (Vaswani et al., 2017). Specifically, head_i(h_t) is the weighted sum of all value vectors, and the weights are calculated by applying the attention function to all of the query-key pairs:
{α_1, α_2, · · · , α_n} = Softmax({⟨q, k_1⟩/√d_k, ⟨q, k_2⟩/√d_k, · · · , ⟨q, k_n⟩/√d_k})    (5)
head_i(h_t) = ∑_{j=1}^{n} α_j v_j
where q, k_i, and v_i are the query, key, and value vectors and d_k is the dimension of k_i. Moreover, q, k_i, and v_i are obtained by projecting the input vectors into the query, key and value spaces, respectively (Vaswani et al., 2017). They are formally defined as:
q = W^q h_t, k_i = W^k h_i, v_i = W^v h_i    (6)
where W^q, W^k and W^v are the projection matrices, and each attention pooling head_i has its own projection matrices. As shown in Eq. (5), each head_i is obtained by letting h_t attend to all the “past” positions; thus any long-term dependency between h_t and h_i can be captured. In addition, different heads focus on different aspects of dependencies. After obtaining the refined representation of each position from the multi-head attention mechanism, we add a position-wise fully connected feed-forward network sub-layer, which is applied to each position independently and identically. This feedforward network transforms the features non-linearly and is defined as follows:
FeedForward(u_t) = max(0, u_t W_1 + b_1) W_2 + b_2    (7)
Following Vaswani et al. (2017), we add a residual (He et al., 2016) and layer normalization (Ba et al., 2016) connection between all the sub-layers." }, { "heading": "3.3 OVERALL ARCHITECTURE OF R-TRANSFORMER", "text": "With all the aforementioned model components, we can now give a formal description of the overall architecture of an N-layer R-Transformer. For the i-th layer (i ∈ {1, 2, · · · , N}):
h^i_1, h^i_2, · · · , h^i_T = LocalRNN(x^i_1, x^i_2, · · · , x^i_T)    (8)
ĥ^i_1, ĥ^i_2, · · · , ĥ^i_T = LayerNorm(h^i_1 + x^i_1, h^i_2 + x^i_2, · · · , h^i_T + x^i_T)
u^i_1, u^i_2, · · · , u^i_T = MultiHeadAttention(ĥ^i_1, ĥ^i_2, · · · , ĥ^i_T)
û^i_1, û^i_2, · · · , û^i_T = LayerNorm(u^i_1 + ĥ^i_1, u^i_2 + ĥ^i_2, · · · , u^i_T + ĥ^i_T)
m^i_1, m^i_2, · · · , m^i_T = FeedForward(û^i_1, û^i_2, · · · , û^i_T)
x^{i+1}_1, x^{i+1}_2, · · · , x^{i+1}_T = LayerNorm(m^i_1 + û^i_1, m^i_2 + û^i_2, · · · , m^i_T + û^i_T)
where T is the length of the input sequence and x^i_t is the input of layer i at time step t.
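To illustrate Eq. (8), the following sketch stacks the three sub-layers with residual connections and layer normalization. It reuses the LocalRNN sketch from Section 3.1, relies on PyTorch's built-in multi-head attention with a causal mask so that each position attends only to the past, and all hyperparameter values are illustrative assumptions.

import torch
import torch.nn as nn

class RTransformerLayer(nn.Module):
    """One plausible reading of Eq. (8): LocalRNN, then multi-head attention,
    then a position-wise FFN, each wrapped in residual + LayerNorm."""
    def __init__(self, d_model: int, window: int, n_heads: int, d_ff: int):
        super().__init__()
        self.local_rnn = LocalRNN(d_model, window)  # sketch from Section 3.1
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)

    def forward(self, x):  # x: (batch, T, d_model)
        T = x.size(1)
        h = self.norm1(self.local_rnn(x) + x)       # locality sub-layer
        # Boolean causal mask: True marks disallowed (future) positions.
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool, device=x.device), 1)
        u, _ = self.attn(h, h, h, attn_mask=mask)   # attention pooling sub-layer
        u = self.norm2(u + h)
        return self.norm3(self.ffn(u) + u)          # position-wise FFN sub-layer

layer = RTransformerLayer(d_model=32, window=7, n_heads=4, d_ff=64)
y = layer(torch.randn(4, 50, 32))
print(y.shape)  # torch.Size([4, 50, 32])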
Comparing with TCN: R-Transformer is partly motivated by the hierarchical structure of TCN (Bai et al., 2018); thus, we make a detailed comparison here. In TCN, the locality in sequences is captured by convolution filters. However, the sequential information within each receptive field is ignored by the convolution operations. In contrast, the LocalRNN structure in R-Transformer fully incorporates it thanks to the sequential nature of RNNs. For modeling global long-term dependencies, TCN relies on dilated convolutions that operate on non-consecutive positions. Although this operation leads to larger receptive fields in lower-level layers, it misses a considerable amount of information from a large portion of positions in each layer. On the other hand, the multi-head attention pooling in R-Transformer considers every past position and takes much more information into consideration than TCN.
Comparing with Transformer: The proposed R-Transformer and the standard Transformer enjoy similar long-term memorization capacities thanks to the multi-head attention mechanism (Vaswani et al., 2017). Nevertheless, two important features distinguish R-Transformer from the standard Transformer. First, R-Transformer explicitly and effectively captures the locality in sequences with the novel LocalRNN structure, while the standard Transformer models it only vaguely with multi-head attention that operates on all positions. Second, R-Transformer does not rely on any position embeddings, as Transformer does. In fact, the benefits of simple position embeddings are very limited (Al-Rfou et al., 2018), and it requires a considerable amount of effort to design effective position embeddings as well as proper ways to incorporate them (Dai et al., 2019). In the next section, we empirically demonstrate the advantages of R-Transformer over both TCN and the standard Transformer." }, { "heading": "4 EXPERIMENT", "text": "Since R-Transformer is a general sequence learning framework, we evaluate it with sequential data from various domains including images, audio and natural language. We mainly compare it with canonical recurrent architectures (vanilla RNN, GRU, LSTM) and two of the most popular generic sequence models that do not have any recurrent structure, namely TCN and Transformer. However, since the majority of existing efforts to enhance Transformer target natural language, in the natural language evaluation we also include one recent advanced Transformer, i.e., Transformer-XL. For all tasks, Transformer and R-Transformer were implemented with PyTorch, and the results for canonical recurrent architectures and TCN were directly copied from Bai et al. (2018), as we follow the same experimental settings. In addition, to make the comparison fair, we use the same set of hyperparameters (i.e., hidden size, number of layers, number of heads) for R-Transformer and Transformer. Moreover, unless specified otherwise, all models are trained with the same optimizer, and the learning rate is chosen from the same set of values according to validation performance. The learning rate is annealed such that it is reduced when validation performance reaches a plateau." }, { "heading": "4.1 PIXEL-BY-PIXEL MNIST: SEQUENCE CLASSIFICATION", "text": "This task is designed to test a model’s ability to memorize long-term dependencies. It was first proposed by Le et al. (2015) and has been used by many previous works (Wisdom et al., 2016; Chang et al., 2017; Zhang et al., 2016; Krueger et al., 2016). Following previous settings, we rescale each 28 × 28 image in the MNIST dataset (LeCun et al., 1998) into a 784 × 1 sequence, which is classified into ten categories (each image corresponds to one of the digits from 0 to 9) by the sequence models. Since the rescaling can place pixels that are adjacent in the original images far apart from each other, the sequence models must learn very long-term dependencies to understand the content of each sequence.
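As a small illustration of this preprocessing (assuming the standard torchvision MNIST loader, which the paper does not specify), each image is flattened into a length-784 sequence of one-dimensional inputs:

import torch
from torchvision import datasets, transforms

# Each 28x28 image becomes a 784-step sequence of scalar "pixels", so pixels
# in adjacent rows of the image end up 28 steps apart in the sequence.
to_sequence = transforms.Compose([
    transforms.ToTensor(),                             # (1, 28, 28) in [0, 1]
    transforms.Lambda(lambda img: img.view(784, 1)),   # (784, 1) pixel sequence
])

train_set = datasets.MNIST(root="data", train=True, download=True,
                           transform=to_sequence)
x, y = train_set[0]
print(x.shape, y)  # torch.Size([784, 1]) 5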
The dataset is split into training and testing sets the same as the defaults in PyTorch (version 1.0.0)2. The model hyperparameters and classification accuracy are reported in Table 1. From the table, it can be observed that, first, RNN-based methods generally perform worse than the others. This is because the input sequences exhibit very long-term dependencies and it is extremely difficult for RNNs to memorize them. On the other hand, methods that build direct connections among positions, i.e., Transformer and TCN, achieve much better results. It is also interesting to see that TCN is slightly better than Transformer; we argue that this is because the standard Transformer cannot model the locality very well. However, our proposed R-Transformer, which leverages the LocalRNN to incorporate local information, achieves better performance than TCN.
2https://pytorch.org" }, { "heading": "4.2 NOTTINGHAM: POLYPHONIC MUSIC MODELING", "text": "Next, we evaluate R-Transformer on the task of polyphonic music modeling with the Nottingham dataset (Boulanger-Lewandowski et al., 2012). This dataset collects British and American folk tunes and has been commonly used in previous works to investigate a model’s ability for polyphonic music modeling (Boulanger-Lewandowski et al., 2012; Chung et al., 2014; Bai et al., 2018). Following the same setting as Bai et al. (2018), we split the data into training, validation, and testing sets which contain 694, 173 and 170 tunes, respectively. The learning rate is chosen from {5e−4, 5e−5, 5e−6}, and dropout with probability 0.1 is used to avoid overfitting. Moreover, gradient clipping is used during the training process. We choose negative log-likelihood (NLL) as the evaluation metric; a lower value indicates better performance. The experimental results are shown in Table 2. Both LSTM and TCN outperform Transformer in this task. We suspect this is because these music tunes exhibit strong local structures. While Transformer is equipped with the multi-head attention mechanism, which is effective at capturing long-term dependencies, it fails to capture local structures in sequences that could provide strong signals. On the other hand, R-Transformer, enhanced by the LocalRNN, achieves much better results than Transformer. In addition, it also outperforms TCN by a large margin. This is expected because TCN tends to ignore the sequential information in the local structure, which can play an important role, as suggested by Gehring et al. (2017)." }, { "heading": "4.3 PENN TREEBANK: LANGUAGE MODELING", "text": "In this subsection, we further evaluate R-Transformer’s ability on both character-level and word-level language modeling tasks. The dataset we use is the Penn Treebank (PTB) (Marcus et al., 1993), which contains 1 million words and has been extensively used by previous works to investigate sequence models (Chen & Goodman, 1999; Chelba & Jelinek, 2000; Kim et al., 2016; Tran et al., 2016). For the character-level language modeling task, the model is required to predict the next character given a context. Following the experimental settings in Bai et al. (2018), 
we split the dataset into training, validation and testing sets that contain 5059K, 396K and 446K characters, respectively. For Transformer and R-Transformer, the learning rate is chosen from {1, 2, 3} and the dropout rate is 0.15. Gradient clipping is also used during the training process. Bits-per-character (bpc) is used to measure prediction performance.
For word-level language modeling, the models are required to predict the next word given the contextual words. Similarly, we follow previous works and split PTB into training, validation, and testing sets with 888K, 70K and 79K words, respectively. The vocabulary size of PTB is 10K. As with character-level language modeling, the learning rate is chosen from {1, 2, 3} for Transformer and R-Transformer, and the dropout rate is 0.35. In this task, we also add Transformer-XL (Dai et al., 2019) as a baseline, which has been particularly designed for language modeling tasks and has achieved state-of-the-art performance. Note that, to make the comparison fair, we apply the same model configuration, i.e., number of layers, to Transformer-XL. All other settings such as the optimizer are the same as its original ones. The learning rate is chosen from {0.01, 0.001, 0.0001}, and its best validation performance is achieved with 0.001. Note that, except for dropout, no other regularization tricks such as variational dropout and weight dropout are applied. The prediction performance is evaluated with perplexity; a lower value denotes better performance.
The experimental results of the character-level and word-level language modeling tasks are shown in Table 3 and Table 4, respectively. Several observations can be made from Table 3. First, Transformer performs only slightly better than RNNs and much worse than the other models. The reason for this observation is similar to the polyphonic music modeling case: language exhibits strong local structures, and the standard Transformer cannot fully capture them. Second, TCN achieves better results than all of the RNNs, which is attributed to its ability to capture both local structures and long-term dependencies in language. Notably, for both local structures and long-term dependencies, R-Transformer has more powerful components than TCN, i.e., the LocalRNN and multi-head attention. Therefore, it is not surprising that R-Transformer achieves significantly better results. Table 4 presents the results for word-level language modeling. Similar trends are observed, with the only exception that LSTM achieves the best results among all the methods. In addition, the result of Transformer-XL is only slightly better than that of R-Transformer. Considering the fact that Transformer-XL is specifically designed for language modeling and employs the recurrent connection of segments (Dai et al., 2019), this result suggests the limited contribution of engineered positional embeddings." }, { "heading": "4.4 DISCUSSIONS AND EVALUATION LIMITATIONS", "text": "In summary, the experimental results have shown that the standard Transformer can achieve better results than RNNs when sequences exhibit very long-term dependencies (i.e., sequential MNIST), while its performance can drop dramatically when strong locality exists in sequences (i.e., polyphonic music and language). Meanwhile, TCN is a very strong sequence model that can effectively learn both local structures and long-term dependencies and has very stable performance across different tasks. 
More importantly, the proposed R-Transformer, which combines a lower-level LocalRNN and higher-level multi-head attention, consistently outperforms both TCN and Transformer by a large margin in most of the tasks. The experiments are conducted on various sequence learning tasks with datasets from different domains, and all experimental settings are fair to all baselines. Thus, the observations from the experiments are reliable under the current experimental settings. However, due to computational limitations, we currently restrict our evaluation to moderate model and dataset sizes. More evaluations on big models and large datasets would make the results more convincing, and we leave this as future work." }, { "heading": "5 RELATED WORK", "text": "Recurrent Neural Networks, including their variants such as LSTM (Hochreiter & Schmidhuber, 1997) and GRU (Cho et al., 2014), have long been the default choice for generic sequence modeling. An RNN sequentially processes each position in a sequence and maintains an internal hidden state that compresses information about the positions seen so far. While this design is appealing and has been successfully applied in various tasks, several problems caused by the recursive structure, including low computational efficiency and exploding or vanishing gradients, make RNNs ineffective when learning long sequences. Therefore, in recent years, much effort has been made to develop models without recursive structures; they can be roughly divided into two categories depending on whether they rely on convolution operations.
The first category includes models mainly built on convolution operations. For example, Van Den Oord et al. (2016) designed the autoregressive WaveNet, which is based on causal filters and dilated convolutions, to capture both global and local information in raw audio. Gehring et al. successfully replaced traditional RNN-based encoders and decoders with convolutional ones and outperformed LSTM setups in neural machine translation tasks (Gehring et al., 2016; 2017). Moreover, researchers have introduced gating mechanisms into convolutional structures to model sequential dependencies in language (Dauphin et al., 2017). Most recently, a generic architecture for sequence modeling, termed Temporal Convolutional Networks (TCN), which combines components from previous works, was proposed by Bai et al. (2018), who systematically compared TCN with canonical recurrent networks in a wide range of tasks and showed that TCN achieves better performance in most cases. Our R-Transformer is motivated by works in this group in the sense that it first models local information and then focuses on global information.
The most popular works in the second category are those based on the multi-head attention mechanism, which was first proposed by Vaswani et al. (2017), where Transformer achieved impressive performance in machine translation. It has since been frequently used in other sequence learning models (Devlin et al., 2018; Dehghani et al., 2018; Dai et al., 2019). The success of multi-head attention largely comes from its ability to learn long-term dependencies through direct connections between any pair of positions. However, it relies heavily on position embeddings, which have limited effects and require a fair amount of effort to design effectively. 
In addition, our empirical results show that local information can easily be ignored by multi-head attention, even in the presence of position embeddings. Unlike previously proposed Transformer-like models, the R-Transformer in this work leverages the strength of RNNs and is able to model local structures effectively without the need for any position embeddings." }, { "heading": "6 CONCLUSION", "text": "In this paper, we propose a novel generic sequence model that enjoys the advantages of both RNNs and multi-head attention while mitigating their disadvantages. Specifically, it consists of a LocalRNN that learns local structures without suffering from any of the weaknesses of RNNs, and a multi-head attention pooling that effectively captures long-term dependencies without any help from position embeddings. In addition, the model can be easily implemented with full parallelization over the positions in a sequence. The empirical results on sequence modeling tasks from a wide range of domains demonstrate the remarkable advantages of R-Transformer over state-of-the-art non-recurrent sequence models, such as TCN and the standard Transformer, as well as canonical recurrent architectures." } ]
2019
R-TRANSFORMER: RECURRENT NEURAL NETWORK ENHANCED TRANSFORMER
SP:e020226557ee78b133893665ef4f30c9cb81ff9f
[ "The paper proposes a neural-network-based estimation of mutual information, following the earlier line of work in [A]. The main focus has been to develop an estimator that can reliably work with small dataset sizes. They first reduce the sample complexity of estimating mutual information by decoupling the network learning problem and the estimation problem by creating a training and validation set and then using the validation set for estimating mutual information. Of course, there is still the problem of learning the network with smaller sized data. For this, they propose the strategy of creating multiple tasks from the same dataset, where the dataset is run through transformations that do not affect mutual information.", "This manuscript studies mutual-information estimation, in particular variational lower bounds, and focuses on reducing their sample complexity. The first contribution is based on adapting the MINE energy-based MI estimator family to out-of-sample testing. MINE involves fitting a very flexible parametric form of the distribution, such as a neural network, to the data to derive a mutual information lower bound. The present work separates the data fitting from the mutual information evaluation to decrease sample complexity, the argument being that the function class is no longer a limiting factor to sample complexity of the mutual information estimation. The second contribution uses meta learning to decrease the sample complexity required to fit the neural network, creating a family of tasks derived from the data with data transformation that do not modify the mutual information. The approaches are demonstrated on synthetic data as well as fMRI data, to detect significant inter-subject dependencies in time-series of neural responses." ]
Measuring Mutual Information (MI) between high-dimensional, continuous random variables from observed samples has wide theoretical and practical applications. Recent works have developed accurate MI estimators through provably low-bias approximations and tight variational lower bounds assuming an abundant supply of samples, but they require an unrealistic number of samples to guarantee statistical significance of the estimation. In this work, we focus on improving data efficiency and propose a Data-Efficient MINE Estimator (DEMINE) that can provide a tight lower confidence interval of MI under limited data, by adding cross-validation to the MINE lower bound (Belghazi et al., 2018). Hyperparameter search is employed, and a novel meta-learning approach with task augmentation is developed to increase robustness to hyperparameters, reduce overfitting and improve accuracy. With improved data-efficiency, our DEMINE estimator enables statistical testing of dependency at practical dataset sizes. We demonstrate the effectiveness of DEMINE on synthetic benchmarks and real-world fMRI data, with an application to inter-subject correlation analysis.
[]
[ { "authors": [ "David Barber Felix Agakov" ], "title": "The IM algorithm: a variational approach to information maximization", "venue": "Advances in Neural Information Processing Systems,", "year": 2004 }, { "authors": [ "Ibrahim Ahmad", "Pi-Erh Lin" ], "title": "A nonparametric estimation of the entropy for absolutely continuous distributions (corresp.)", "venue": "IEEE Transactions on Information Theory,", "year": 1976 }, { "authors": [ "Brian B Avants", "Charles L Epstein", "Murray Grossman", "James C Gee" ], "title": "Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain", "venue": "Medical Image Analysis,", "year": 2008 }, { "authors": [ "Francis R Bach", "Michael I Jordan" ], "title": "Kernel independent component analysis", "venue": "Journal of machine learning research,", "year": 2002 }, { "authors": [ "Yashar Behzadi", "Khaled Restom", "Joy Liau", "Thomas T Liu" ], "title": "A component based noise correction method (CompCor) for BOLD and perfusion based fMRI", "venue": null, "year": 2007 }, { "authors": [ "Mohamed Ishmael Belghazi", "Aristide Baratin", "Sai Rajeshwar", "Sherjil Ozair", "Yoshua Bengio", "Devon Hjelm", "Aaron Courville" ], "title": "Mutual information neural estimation", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "James Bergstra", "Daniel Yamins", "David Daniel Cox" ], "title": "Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision", "venue": null, "year": 2013 }, { "authors": [ "Thomas B Berrett", "Richard J Samworth" ], "title": "Nonparametric independence testing via mutual information", "venue": null, "year": 2019 }, { "authors": [ "Robert W Cox" ], "title": "AFNI: software for analysis and visualization of functional magnetic resonance neuroimages", "venue": "Computers and Biomedical research,", "year": 1996 }, { "authors": [ "Oscar Esteban", "Christopher Markiewicz", "Ross W Blair", "Craig Moodie", "Ayse Ilkay Isik", "Asier Erramuzpe Aliaga", "James Kent", "Mathias Goncalves", "Elizabeth DuPre", "Madeleine Snyder", "Hiroyuki Oya", "Satrajit Ghosh", "Jessey Wright", "Joke Durnez", "Russell Poldrack", "Krzysztof Jacek Gorgolewski" ], "title": "FMRIPrep: a robust preprocessing pipeline for functional MRI. 
bioRxiv, 2018", "venue": null, "year": 2018 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Chelsea Finn", "Tianhe Yu", "Tianhao Zhang", "Pieter Abbeel", "Sergey Levine" ], "title": "One-shot visual imitation learning via meta-learning", "venue": "In Conference on Robot Learning,", "year": 2017 }, { "authors": [ "Chelsea Finn", "Kelvin Xu", "Sergey Levine" ], "title": "Probabilistic model-agnostic meta-learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Vladimir S Fonov", "Alan C Evans", "Robert C McKinstry", "CR Almli", "DL Collins" ], "title": "Unbiased nonlinear average age-appropriate brain templates from birth to adulthood", "venue": null, "year": 2009 }, { "authors": [ "Neil Gaiman" ], "title": "The man who forgot ray bradbury", "venue": "https://soundcloud.com/neilgaiman/ the-man-who-forgot-ray-bradbury,", "year": 2018 }, { "authors": [ "Weihao Gao", "Sreeram Kannan", "Sewoong Oh", "Pramod Viswanath" ], "title": "Estimating mutual information for discrete-continuous mixtures", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Weihao Gao", "Sewoong Oh", "Pramod Viswanath" ], "title": "Demystifying fixed k-nearest neighbor information estimators", "venue": "IEEE Transactions on Information Theory,", "year": 2018 }, { "authors": [ "Matthew F Glasser", "Timothy S Coalson", "Emma C Robinson", "Carl D Hacker", "John Harwell", "Essa Yacoub", "Kamil Ugurbil", "Jesper Andersson", "Christian F Beckmann", "Mark Jenkinson" ], "title": "A multi-modal parcellation of human cerebral cortex", "venue": null, "year": 2016 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS10). 
Society for Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "Krzysztof Gorgolewski", "Christopher Burns", "Cindee Madison", "Dav Clark", "Yaroslav Halchenko", "Michael Waskom", "Satrajit Ghosh" ], "title": "Nipype: a flexible, lightweight and extensible neuroimaging data processing framework in python", "venue": "Frontiers in Neuroinformatics,", "year": 2011 }, { "authors": [ "Krzysztof J Gorgolewski", "Tibor Auer", "Vince D Calhoun", "R Cameron Craddock", "Samir Das", "Eugene P Duff", "Guillaume Flandin", "Satrajit S Ghosh", "Tristan Glatard", "Yaroslav O Halchenko" ], "title": "The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments", "venue": "Scientific Data,", "year": 2016 }, { "authors": [ "Arthur Gretton", "Olivier Bousquet", "Alex Smola", "Bernhard Schölkopf" ], "title": "Measuring statistical dependence with hilbert-schmidt norms", "venue": "In International conference on algorithmic learning theory,", "year": 2005 }, { "authors": [ "Arthur Gretton", "Alexander J Smola", "Olivier Bousquet", "Ralf Herbrich", "Andrei Belitski", "Mark Augath", "Yusuke Murayama", "Jon Pauls", "Bernhard Schölkopf", "Nikos K Logothetis" ], "title": "Kernel constrained covariance for dependence measurement", "venue": "In AISTATS,", "year": 2005 }, { "authors": [ "Douglas N Greve", "Bruce Fischl" ], "title": "Accurate and robust brain image alignment using boundarybased registration", "venue": null, "year": 2009 }, { "authors": [ "J Swaroop Guntupalli", "Michael Hanke", "Yaroslav O Halchenko", "Andrew C Connolly", "Peter J Ramadge", "James V Haxby" ], "title": "A model of representational spaces in human cortex", "venue": "Cerebral Cortex,", "year": 2016 }, { "authors": [ "Uri Hasson", "Yuval Nir", "Ifat Levy", "Galit Fuhrmann", "Rafael Malach" ], "title": "Intersubject synchronization of cortical activity during natural vision", "venue": null, "year": 2004 }, { "authors": [ "Uri Hasson", "Asif A Ghazanfar", "Bruno Galantucci", "Simon Garrod", "Christian Keysers" ], "title": "Brain-tobrain coupling: a mechanism for creating and sharing a social world", "venue": "Trends in cognitive sciences,", "year": 2012 }, { "authors": [ "James V Haxby", "J Swaroop Guntupalli", "Andrew C Connolly", "Yaroslav O Halchenko", "Bryan R Conroy", "M Ida Gobbini", "Michael Hanke", "Peter J Ramadge" ], "title": "A common, high-dimensional model of the representational space in human ventral temporal cortex", "venue": null, "year": 2011 }, { "authors": [ "Caroline M Holmes", "Ilya Nemenman" ], "title": "Estimation of mutual information for real-valued data with error bars and controlled bias", "venue": null, "year": 1903 }, { "authors": [ "Mark Jenkinson", "Peter Bannister", "Michael Brady", "Stephen Smith" ], "title": "Improved optimization for the robust and accurate linear registration and motion correction of brain", "venue": "images. NeuroImage,", "year": 2002 }, { "authors": [ "Taesup Kim", "Jaesik Yoon", "Ousmane Dia", "Sungwoong Kim", "Yoshua Bengio", "Sungjin Ahn" ], "title": "Bayesian model-agnostic meta-learning", "venue": "arXiv preprint arXiv:1806.03836,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "A. Kraskov", "H. Stogbauer", "P. 
Grassberger" ], "title": "Estimating mutual information", "venue": "Physical review E,", "year": 2004 }, { "authors": [ "Dougal Maclaurin", "David Duvenaud", "Ryan Adams" ], "title": "Gradient-based hyperparameter optimization through reversible learning", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "David McAllester", "Karl Statos" ], "title": "Formal limitations on the measurement of mutual information", "venue": "arXiv preprint arXiv:1811.04251,", "year": 2018 }, { "authors": [ "Samuel A Nastase", "Valeria Gazzola", "Uri Hasson", "Christian Keysers" ], "title": "Measuring shared responses across subjects using intersubject correlation", "venue": "Social Cognitive and Affective Neuroscience,", "year": 2019 }, { "authors": [ "Jim O’Grady" ], "title": "Running from the Bronx", "venue": "https://soundcloud.com/ the-story-collider/jim-ogrady-running-from-the,", "year": 2018 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Hieu Pham", "Melody Guan", "Barret Zoph", "Quoc Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameter sharing", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Ben Poole", "Sherjil Ozair", "Aaron van den Oord", "Alexander A. Alemi", "George Tucker" ], "title": "On variational lower bounds of mutual information", "venue": "In Bayesian Deep Learning Workshop,", "year": 2018 }, { "authors": [ "Jonathan D Power", "Anish Mitra", "Timothy O Laumann", "Abraham Z Snyder", "Bradley L Schlaggar", "Steven E Petersen" ], "title": "Methods to detect, characterize, and remove motion artifact in resting state", "venue": "fMRI. 
NeuroImage,", "year": 2014 }, { "authors": [ "Tim Salimans", "Jonathan Ho", "Xi Chen", "Szymon Sidor", "Ilya Sutskever" ], "title": "Evolution strategies as a scalable alternative to reinforcement learning", "venue": "arXiv preprint arXiv:1703.03864,", "year": 2017 }, { "authors": [ "Marleen B Schippers", "Alard Roebroeck", "Remco Renken", "Luca Nanetti", "Christian Keysers" ], "title": "Mapping the information flow from one brain to another during gestural communication", "venue": "Proceedings of the National Academy of Sciences,", "year": 2010 }, { "authors": [ "Frank Sehnke", "Christian Osendorfer", "Thomas Rückstieß", "Alex Graves", "Jan Peters", "Jürgen Schmidhuber" ], "title": "Parameter-exploring policy gradients", "venue": "Neural Networks,", "year": 2010 }, { "authors": [ "Lauren J Silbert", "Christopher J Honey", "Erez Simony", "David Poeppel", "Uri Hasson" ], "title": "Coupled neural systems underlie the production and comprehension of naturalistic narrative speech", "venue": "Proceedings of the National Academy of Sciences,", "year": 2014 }, { "authors": [ "Erez Simony", "Christopher J Honey", "Janice Chen", "Olga Lositsky", "Yaara Yeshurun", "Ami Wiesel", "Uri Hasson" ], "title": "Dynamic reconfiguration of the default mode network during narrative comprehension", "venue": "Nature Communications,", "year": 2016 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Greg J Stephens", "Lauren J Silbert", "Uri Hasson" ], "title": "Speaker–listener neural coupling underlies successful communication", "venue": "Proceedings of the National Academy of Sciences,", "year": 2010 }, { "authors": [ "Jeffrey Mark Treiber", "Nathan S White", "Tyler Christian Steed", "Hauke Bartsch", "Dominic Holland", "Nikdokht Farid", "Carrie R McDonald", "Bob S Carter", "Anders Martin Dale", "Clark C Chen" ], "title": "Characterization and correction of geometric distortions in 814 diffusion weighted images", "venue": "PLOS ONE,", "year": 2016 }, { "authors": [ "N.J. Tustison", "B.B. Avants", "P.A. Cook", "Y. Zheng", "A. Egan", "P.A. Yushkevich", "J.C. 
Gee" ], "title": "N4itk: improved n3 bias correction", "venue": "IEEE Transactions on Medical Imaging,", "year": 2010 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Sijia Wang", "Daniel J Peterson", "J Christopher Gatenby", "Wenbin Li", "Thomas J Grabowski", "Tara M Madhyastha" ], "title": "Evaluation of field map and nonlinear registration methods for correction of susceptibility artifacts in diffusion mri", "venue": "Frontiers in Neuroinformatics,", "year": 2017 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "arXiv preprint arXiv:1611.03530,", "year": 2016 }, { "authors": [ "Qinyi Zhang", "Sarah Filippi", "Arthur Gretton", "Dino Sejdinovic" ], "title": "Large-scale kernel methods for independence testing", "venue": "Statistics and Computing,", "year": 2018 }, { "authors": [ "Yongyue Zhang", "Michael Brady", "Stephen Smith" ], "title": "Segmentation of brain MR images through a hidden markov random field model and the expectation-maximization algorithm", "venue": "IEEE Transactions on Medical Imaging,", "year": 2001 }, { "authors": [ "MI Output" ], "title": "Tθ(X,Z) 1: θ ← Xavier Initialization (Glorot & Bengio, 2010) 2: for i = 1 : NO do 3: Sample a batch of (xi, zi)B ∼ (x, z)train", "venue": null, "year": 2010 }, { "authors": [ "L θ" ], "title": "Update θ using Adam (Kingma & Ba, 2014) with η 7: end for 8: MI = I(X,Z", "venue": null, "year": 2014 }, { "authors": [ "Esteban" ], "title": "2018), a Nipype library8 (Gorgolewski et al., 2011) based tool", "venue": null, "year": 2011 }, { "authors": [ "proximation Gretton" ], "title": "https://github.com/amber0309/HSIC and a block HSIC", "venue": null, "year": 2005 } ]
[ { "heading": "1 INTRODUCTION", "text": "Mutual Information (MI) is an important, theoretically grounded measure of similarity between random variables. MI captures general, non-linear, statistical dependencies between random variables. MI estimators that estimate MI from samples are important tools widely used in not only subjects such as physics and neuroscience, but also machine learning ranging from feature selection and representation learning to explaining decisions and analyzing generalization of neural networks.\nExisting studies on MI estimation between general random variables focus on deriving asymptotic lower bounds and approximations to MI under infinite data, and techniques for reducing estimator bias such as bias correction, improved signal modeling with neural networks and tighter lower bounds. Widely used approaches include the k-NN-based KSG estimator (Kraskov et al., 2004) and the variational lower-bound-based Mutual Information Neural Estimator (MINE) family (Belghazi et al., 2018; Poole et al., 2018).\nDespite the empirical and asymptotic bias improvements, MI estimation has not seen wide adoption. The challenges are two-fold. First, the analysis of dependencies among variables - let alone any MI analyses for scientific studies - requires not only an MI estimate, but also confidence intervals (Holmes & Nemenman, 2019) around the estimate to quantify uncertainty and statistical significance. Existing MI estimators, however, do not provide confidence intervals. As low probability events may still carry a significant amount of information, the MI estimates could vary greatly given additional observations (Poole et al., 2018). Towards providing upper and lower bounds of true MI under limited number of observations, existing MI lower bound techniques assume infinite data and would need further relaxations when a limited number of observations are provided. Closest to our work, Belghazi et al. (2018) studied the lower bound of the MINE estimator under limited data, but it involves bounds on generalization error of the signal model and would not yield useful confidence intervals for realistic datasets. Second, practical MI estimators should be insensitive to the choice of hyperparameters. An estimator should return a single MI estimate with its confidence interval irrespective of the type of the data and the number of observations. For learning-based approaches, this means that the model design and optimization hyperparameters need to not only be determined automatically but also taken into account when computing the confidence interval.\nTowards addressing these challenges, our estimator, DEMINE, introduces a predictive MI lower bound for limited samples that enables statistical dependency testing under practical dataset sizes. Our estimator builds on top of the MINE estimator family, but performs cross-validation to remove the need to bound generalization error. This yields a much tighter lower bound agnostic to hyperparameter search. We automatically selected hyperparameters through hyperparameter search, and a new cross-validation meta-learning approach is developed, based upon few-shot meta-learning, to automatically decide initialization of model parameters. Meta-overfitting is strongly controlled through task augmentation, a new task generation approach for meta-learning. 
With these improvements, we show that DEMINE enables practical statistical testing of dependency not only on synthetic datasets but also in real-world functional Magnetic Resonance Imaging (fMRI) data analysis, capturing nonlinear and higher-order brain-to-brain coupling.
Our contributions are summarized as follows: 1) a data-efficient Mutual Information Neural Estimator (DEMINE) for statistical dependency testing; 2) a new formulation of meta-learning using Task Augmentation (Meta-DEMINE); 3) application to real-life, data-scarce applications (fMRI)." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 MI ESTIMATION", "text": "A widely used approach for estimating MI from samples relies on k-NN estimates, notably the KSG estimator (Kraskov et al., 2004). Gao et al. (2017) provided a comprehensive review and studied the consistency and asymptotic confidence bound of the KSG estimator (Gao et al., 2018). MI estimation can also be achieved by estimating the individual entropy terms through kernel density estimation (Ahmad & Lin, 1976) or cross-entropy (McAllester & Statos, 2018). Despite their good performance on random variables with few dimensions, MI estimation on high-dimensional random variables remains challenging for the commonly used Gaussian kernels. Fundamentally, estimating MI requires accurately modeling the random variables, where high-capacity neural networks have shown excellent performance on complex high-dimensional signals such as text, image and audio.
Recent works on MI estimation have focused on developing tight asymptotic variational MI lower bounds where neural networks are used for signal modeling. The IM algorithm (Agakov, 2004) introduces a variational MI lower bound, where a neural network q(z|x) is learned as a variational approximation to the conditional distribution P(Z|X). The IM algorithm requires the entropy H(Z) and E_XZ log q(z|x) to be tractable, which applies to latent codes of Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) as well as to categorical variables. Belghazi et al. (2018) introduce the MI lower bounds MINE and MINE-f, which allow the modeling of general random variables and show improved accuracy for high-dimensional random variables, with application to improving generative models. Poole et al. (2018) introduce a spectrum of energy-based MI estimators based on the MINE and MINE-f lower bounds, and a new TCPC estimator inspired by Contrastive Predictive Coding (Oord et al., 2018) for the case when multiple samples from P(Z|X) can be drawn.
Our work introduces cross-validation to the MINE-f estimator. We derive the lower bound of MINE-f under a limited number of samples, and introduce meta-learning and hyperparameter search to enable practical statistical dependency testing." }, { "heading": "2.2 GENERAL STATISTICAL DEPENDENCY TESTING", "text": "Existing works on general statistical dependency testing (Bach & Jordan, 2002; Gretton et al., 2005a; Berrett & Samworth, 2019) have developed non-parametric independence criteria based on correlation and mutual information estimators, equivalent to testing I(X;Z) = 0, followed by detailed bias and variance analyses. Our approach to independence testing suggests a different direction by harnessing the generalization power of neural networks and may improve test performance on complex signals. The p-values provided by our test do not involve approximated distributions and hold for a small number of examples and an arbitrary number of signal dimensions. 
As different statistical dependency testing approaches have explicit or implicit assumptions and biases that make them suitable in different situations, a fair comparison across approaches is a challenging task. Instead, we focus on a self-contained presentation of our dependency test, and provide preliminary comparisons with the widely studied Hilbert-Schmidt independence criterion (HSIC) (Gretton et al., 2005a) in the appendix." }, { "heading": "2.3 META LEARNING", "text": "Meta-learning, or “learning to learn”, seeks to improve the generalization capability of neural networks by searching for better hyperparameters (Maclaurin et al., 2015), network architectures (Pham et al., 2018), initializations (Finn et al., 2017a; 2018; Kim et al., 2018) and distance metrics (Vinyals et al., 2016; Snell et al., 2017). Meta-learning approaches have shown significant performance improvements in applications such as automatic neural architecture search (Pham et al., 2018), few-shot image recognition (Finn et al., 2017a) and imitation learning (Finn et al., 2017b).
In particular, our estimator benefits from the Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017a) framework, which is designed to improve few-shot learning performance. A network initialization is learned to maximize its performance when fine-tuned on few-shot learning tasks. Applications include few-shot image classification and navigation.
We leverage the model-agnostic nature of MAML for MI estimation between generic random variables and adopt MAML for maximizing MI lower bounds. To construct a collection of diverse tasks for MAML learning from limited samples, inspired by MI’s invariance to invertible transformations, we propose a task-augmentation protocol that automatically constructs tasks by sampling random transformations to transform the samples. Results show reduced overfitting and improved generalization." }, { "heading": "3 BACKGROUND", "text": "In this section, we provide the background necessary to understand our approach1. We define X and Z to be two random variables, P(X,Z) the joint distribution, and P(X) and P(Z) the marginal distributions over X and Z, respectively. Our goal is to estimate the MI, I(X;Z), given independent and identically distributed (i.i.d.) sample pairs (x_i, z_i), i = 1, 2, . . . , n from P(X,Z). Let F = {T_θ(x,z)}_{θ∈Θ} be a class of scalar functions, where θ is the set of model parameters. Let q(x|z) = p(x) e^{T_θ(x,z)} / E_{(x,z)∼P_XZ} e^{T_θ(x,z)}. Results from previous works (Belghazi et al., 2018; Poole et al., 2018) show that the following energy-based family of lower bounds of MI holds for any θ:
I(X;Z) ≥ E_{(x,z)∼P_XZ} log [q(x|z)/p(x)] = E_{(x,z)∼P_XZ} T_θ(x,z) − E_{x∼P_X} log E_{z∼P_Z} e^{T_θ(x,z)} , I_EB1
        ≥ E_{(x,z)∼P_XZ} T_θ(x,z) − log E_{x∼P_X, z∼P_Z} e^{T_θ(x,z)} , I_MINE
        ≥ E_{(x,z)∼P_XZ} T_θ(x,z) − E_{x∼P_X, z∼P_Z} e^{T_θ(x,z)} + 1 , I_MINE-f, I_EB    (1)
where E denotes the expectation over the given distribution. Based on I_MINE, the MINE estimator Î(X,Z)_n is defined as in Eq. 2. Estimators for I_EB1, I_MINE-f and I_EB can be defined similarly.
Î(X;Z)_n = sup_{θ∈Θ} (1/n) ∑_{i=1}^{n} T_θ(x_i, z_i) − log (1/n²) ∑_{i=1}^{n} ∑_{j=1}^{n} e^{T_θ(x_i, z_j)}.    (2)
With infinite samples to approximate the expectations, Eq. 2 converges to the lower bound Î(X,Z)_∞ = sup_{θ∈Θ} I_MINE. Note that the number of samples n needs to be substantially larger than the number of model parameters d = |θ| to guarantee that T_θ(X,Z) does not overfit to the samples (x_i, z_i), i = 1, 2, . . . , n and overestimate the MI. 
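To ground Eqs. (1)-(2), the sketch below trains a small statistics network to maximize the I_MINE-f bound on a toy problem; the architecture, the shuffled-marginal approximation of E_{P_X}E_{P_Z}, and the optimizer settings are our own illustrative assumptions, not the configuration used in this paper.

import torch
import torch.nn as nn

class StatisticsNetwork(nn.Module):
    """T_theta(x, z): a scalar-valued network on concatenated (x, z) pairs."""
    def __init__(self, dx: int, dz: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dx + dz, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=-1)).squeeze(-1)

def mine_f_lower_bound(T, x, z):
    """I_MINE-f: E_{P_XZ}[T] - E_{P_X x P_Z}[e^T] + 1, with marginal pairs
    approximated by shuffling z within the batch."""
    joint = T(x, z).mean()
    marginal = torch.exp(T(x, z[torch.randperm(z.size(0))])).mean()
    return joint - marginal + 1.0

# Toy example where Z is a noisy copy of X, so I(X;Z) > 0.
torch.manual_seed(0)
x = torch.randn(512, 1)
z = x + 0.5 * torch.randn(512, 1)
T = StatisticsNetwork(1, 1)
opt = torch.optim.Adam(T.parameters(), lr=1e-3)
for step in range(500):
    loss = -mine_f_lower_bound(T, x, z)   # gradient ascent on the bound
    opt.zero_grad(); loss.backward(); opt.step()
print(float(mine_f_lower_bound(T, x, z)))  # in-sample estimate of the bound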
Formally, the sample complexity of MINE is defined as the minimum number of samples n needed to achieve Eq. 3:
\Pr(|\widehat{I(X,Z)}_n - \widehat{I(X,Z)}_\infty| \le \epsilon) \ge 1 - \delta.   (3)
Specifically, MINE proves that under the following assumptions: 1) T_\theta(X,Z) is L-Lipschitz; 2) T_\theta(X,Z) \in [-M, M]; 3) \theta_i \in [-K, K] for all i \in 1, ..., d, the sample complexity of MINE is given by Eq. 4:
n \ge \frac{2M^2 \left( d \log(16KL\sqrt{d}/\epsilon) + 2dM + \log(2/\delta) \right)}{\epsilon^2}.   (4)
For example, a neural network with dimension d = 10,000, M = 1, K = 0.1 and L = 1, achieving a confidence interval of \epsilon = 0.1 with 95% confidence (\delta = 0.05), would require n \ge 18,756,256 samples. This is achievable for synthetic examples generated by GANs like those studied in Belghazi et al. (2018). For real data, however, the cost of data acquisition for reaching statistically significant estimation can be prohibitively expensive. Our approach instead uses the MI lower bounds specified in Eq. 1 from a prediction perspective, inspired by cross-validation. Our estimator, DEMINE, improves sample complexity by disentangling data for lower bound estimation from data for learning a generalizable T_\theta(X,Z). DEMINE enables high-confidence MI estimation on small datasets.
^1 We follow the same notations as Belghazi et al. (2018). We encourage the review of Belghazi et al. (2018); Poole et al. (2018) for a detailed understanding of I_{MINE}, I_{EB1}, and I_{EB}." }, { "heading": "4 APPROACH", "text": "Section 4.1 specifies DEMINE for predictive MI estimation and derives the confidence interval; Section 4.2 formulates Meta-DEMINE, explains task augmentation, and defines the optimization algorithms." }, { "heading": "4.1 PREDICTIVE MUTUAL INFORMATION ESTIMATION", "text": "In DEMINE, we interpret the estimation of the MINE-f lower bound^2 in Eq. 1 as a learning problem. The goal is, given a limited number of samples, to infer the optimal network T_{\theta^*}(X,Z) with parameters \theta^* defined as follows:
\theta^* = \arg\max_{\theta\in\Theta} \mathbb{E}_{P_{XZ}} T_\theta(X,Z) - \mathbb{E}_{P_X}\mathbb{E}_{P_Z} e^{T_\theta(X,Z)} + 1.
Specifically, samples from P(X,Z) are subdivided into a training set {(x_i, z_i)^{train}, i = 1, ..., m} and a validation set {(x_i, z_i)^{val}, i = 1, ..., n}. The training set is used for learning a network \tilde\theta as an approximation to \theta^*, whereas the validation set is used for computing the DEMINE estimate \widehat{I(X,Z)}_{n,\tilde\theta} defined as in Eq. 5:
\widehat{I(X,Z)}_{n,\tilde\theta} = \frac{1}{n} \sum_{i=1}^{n} T_{\tilde\theta}(x_i^{val}, z_i^{val}) - \frac{1}{n^2} \sum_{i=1}^{n} \sum_{j=1}^{n} e^{T_{\tilde\theta}(x_i^{val}, z_j^{val})} + 1.   (5)
We propose an approach to learn \tilde\theta, DEMINE. DEMINE learns \tilde\theta by maximizing the MI lower bound on the training set as follows:
\tilde\theta = \arg\min_{\theta\in\Theta} L(\{(x,z)\}^{train}, \theta), where
L(\{(x,z)\}^{B}, \theta) = -\frac{1}{|B|} \sum_{i=1}^{|B|} T_\theta(x_i^B, z_i^B) + \frac{1}{|B|^2} \sum_{i=1}^{|B|} \sum_{j=1}^{|B|} e^{T_\theta(x_i^B, z_j^B)} - 1.   (6)
The DEMINE algorithm is shown in Algorithm 2 in the appendix.
Sample complexity analysis. Because \tilde\theta is learned independently of the validation samples {(x_i, z_i)^{val}, i = 1, ..., n}, the sample complexity of the DEMINE estimator does not involve the model class F, and the sample complexity is greatly reduced compared to MINE-f. DEMINE estimates \widehat{I(X,Z)}_{\infty,\tilde\theta} when an infinite number of samples is provided, defined as:
\widehat{I(X,Z)}_{\infty,\tilde\theta} = \mathbb{E}_{P_{XZ}} T_{\tilde\theta}(X,Z) - \mathbb{E}_{P_X}\mathbb{E}_{P_Z} e^{T_{\tilde\theta}(X,Z)} + 1 \le \sup_{\theta\in\Theta} \mathbb{E}_{P_{XZ}} T_\theta(X,Z) - \mathbb{E}_{P_X}\mathbb{E}_{P_Z} e^{T_\theta(X,Z)} + 1 \le I(X;Z).   (7)
We now derive the sample complexity of DEMINE, defined as the number of samples n required for \widehat{I(X,Z)}_{n,\tilde\theta} to be a good approximation to \widehat{I(X,Z)}_{\infty,\tilde\theta}, in Theorem 1.
Theorem 1. For T_{\tilde\theta}(X,Z) bounded by [L, U], given any accuracy \epsilon and confidence \delta, we have:
\Pr(|\widehat{I(X,Z)}_{n,\tilde\theta} - \widehat{I(X,Z)}_{\infty,\tilde\theta}| \le \epsilon) \ge 1 - \delta
when the number of validation samples n satisfies n \ge n^*, s.t.
f(n^*) \equiv \min_{0 \le \xi \le \epsilon} \left[ 2e^{-\frac{2\xi^2 n^*}{(U-L)^2}} + 4e^{-\frac{(\epsilon-\xi)^2 n^*}{2(e^U - e^L)^2}} \right] = \delta.   (8)
^2 The MINE lower bound can also be interpreted in the predictive way, but it results in a higher sample complexity than the MINE-f lower bound. We choose MINE-f in favor of a lower sample complexity over bound tightness.
Proof. Since T_{\tilde\theta}(X,Z) is bounded by [L, U], applying the Hoeffding inequality to the first half of Eq. 5 yields:
\Pr\left( \left| \frac{1}{n} \sum_{i=1}^{n} T_{\tilde\theta}(x_i, z_i) - \mathbb{E}_{P_{XZ}} T_{\tilde\theta}(X,Z) \right| \ge \xi \right) \le 2e^{-\frac{2\xi^2 n}{(U-L)^2}}.
As e^{T_{\tilde\theta}(X,Z)} is bounded by [e^L, e^U], applying the Hoeffding inequality twice to the second half of Eq. 5:
\Pr\left( \left| \mathbb{E}_{P_X}\mathbb{E}_{P_Z} e^{T_{\tilde\theta}(X,Z)} - \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{P_Z} e^{T_{\tilde\theta}(x_i, z)} \right| \ge \zeta \right) \le 2e^{-\frac{2\zeta^2 n}{(e^U - e^L)^2}},
\Pr\left( \left| \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{P_Z} e^{T_{\tilde\theta}(x_i, z)} - \frac{1}{n} \sum_{j=1}^{n} \frac{1}{n} \sum_{i=1}^{n} e^{T_{\tilde\theta}(x_i, z_j)} \right| \ge \zeta \right) \le 2e^{-\frac{2\zeta^2 n}{(e^U - e^L)^2}}.
Combining the above bounds results in:
\Pr(|\widehat{I(X,Z)}_{n,\tilde\theta} - \widehat{I(X,Z)}_{\infty,\tilde\theta}| \le \xi + 2\zeta) \ge 1 - 2e^{-\frac{2\xi^2 n}{(U-L)^2}} - 4e^{-\frac{2\zeta^2 n}{(e^U - e^L)^2}}.
By solving for \xi to minimize n according to Eq. 8 (taking \zeta = (\epsilon - \xi)/2), we have:
\Pr(|\widehat{I(X,Z)}_{n,\tilde\theta} - \widehat{I(X,Z)}_{\infty,\tilde\theta}| \le \epsilon) \ge 1 - \delta.
Theorem 1 also implies the following MI lower confidence interval under a limited number of samples:
\Pr(I(X;Z) \ge \widehat{I(X,Z)}_{n,\tilde\theta} - \epsilon) \ge 1 - \delta.
Compared to MINE, as per the example shown in Section 3, for M = 1 (i.e., L = -1 and U = 1), \delta = 0.05, \epsilon = 0.1, our estimator requires n = 10,742 i.i.d. validation samples to estimate a lower bound, compared to MINE requiring n = 18,756,256, which makes MI-based dependency analysis feasible for domains where data collection is prohibitively expensive, e.g., fMRI scans. In practice, sample complexity can be further reduced by optimizing the hyperparameters U and L. Note that unlike Eq. 3, Theorem 1 bounds the closeness of the DEMINE estimate \widehat{I(X,Z)}_{n,\tilde\theta} not towards the MI lower bound \sup_{\theta\in\Theta} I_{MINE-f}, but towards the MI lower bound \widehat{I(X,Z)}_{\infty,\tilde\theta}. Therefore, the sample complexity of DEMINE in Eq. 8 supports a fair comparison with the sample complexity of MINE in Eq. 4. MINE's higher sample complexity stems from the need to bound the generalization error of T_\theta(X,Z) on unseen {(x, z)}. Existing generalization bounds are known to be overly loose, as over-parameterized neural networks have been shown to generalize well in classification and regression tasks (Zhang et al., 2016). By using a learning-based formulation, DEMINE not only avoids the need to bound generalization error, but also allows further generalization improvements by learning \tilde\theta through meta-learning.
In the following section, we present a meta-learning formulation, Meta-DEMINE, that learns \tilde\theta for generalization given the same model class and training samples." }, { "heading": "4.2 META-LEARNING", "text": "Given training data {(x_i, z_i)^{train}, i = 1, ..., m}, Meta-DEMINE first generates MI estimation tasks, each consisting of a meta-training split A and a meta-validation split B, through a novel task augmentation process. A parameter initialization \theta_{init} is then learned to maximize MI estimation performance on the generated tasks using initialization \theta_{init}, as shown in Eq. 9:
\theta_{init} = \arg\min_{\theta^{(0)}\in\Theta} \mathbb{E}_{(A,B)\in\mathcal{T}} L((x,z)_B, \theta^{(t)}), with \theta^{(t)} \equiv \mathrm{MetaTrain}((x,z)_A, \theta^{(0)}).   (9)
Here \theta^{(t)} = \mathrm{MetaTrain}((x,z)_A, \theta^{(0)}) is the meta-training process of starting from an initialization \theta^{(0)} and applying Stochastic Gradient Descent (SGD)^3 over t steps to learn \theta, where in every meta-training iteration we have:
\theta^{(t)} \leftarrow \theta^{(t-1)} - \gamma \nabla L((x,z)_A, \theta^{(t-1)}).
^3 In practice, the Adam optimizer (Kingma & Ba, 2014) is used for faster optimization. The Adam optimizer uses first- and second-order moments of the gradient to speed up optimization.
SGD is illustrated here for simplicity.
Finally, \tilde\theta is learned using the entire training set {(x_i, z_i)^{train}, i = 1, ..., m} with \theta_{init} as initialization: \tilde\theta = \mathrm{MetaTrain}((x,z)^{train}, \theta_{init}).
Task Augmentation: Meta-DEMINE adapts MAML (Finn et al., 2017a) for MI lower bound maximization. MAML has been shown to improve generalization performance in N-class K-shot image classification. MI estimation, however, does not come with predefined classes and tasks. A naive approach to produce tasks would be through cross-validation: partitioning training data into meta-training and meta-validation splits. However, merely using cross-validation tasks is prone to overfitting: a \theta_{init} which memorizes all training samples would, as a result, have memorized all meta-validation splits. Instead, Meta-DEMINE generates tasks by augmenting the cross-validation tasks through task augmentation. Training samples are first split into meta-training and meta-validation splits, and then transformed using the same random invertible transformation to increase task diversity. Meta-DEMINE generates invertible transformations by sequentially composing the following functions:
Mirror: m(x) = (2n - 1)x, n \sim \mathrm{Bernoulli}(1/2);
Permute: P(x) = \Pi x, where \Pi is a random permutation of the d dimensions;
Offset: O(x) = x + \epsilon, \epsilon \sim U(-0.1, 0.1);
Gamma: G(x) = \mathrm{sign}(x)|x|^\gamma, \gamma \sim U(0.5, 2).
Since the MI between two random variables is invariant to invertible transformations on each variable, MetaTrain(·, ·) is expected to arrive at the same MI lower bound estimate regardless of the transformation applied. At the same time, memorization is greatly suppressed, as the same pair (x, z) can have a different \log \frac{p(x,z)}{p(x)p(z)} under different transformations. More sophisticated invertible transformations (affine, piece-wise linear) can also be added. Task augmentation is an approach orthogonal to data augmentation. Using image classification as an example, data augmentation generates variations of an image (translated or rotated versions), assuming that they are valid examples of the class. Task augmentation, on the other hand, does not make such an assumption. Task augmentation requires the initial parameters \theta_{init} to be capable of recognizing the same class in a world where all images are translated and/or rotated, with the assumption that the optimal initialization should easily adapt to both the upright world and the translated and/or rotated world.
Optimization: Solving for \theta_{init} using the meta-learning formulation in Eq. 9 poses a challenging optimization problem. The commonly used approach is backpropagation through time (BPTT), which computes second-order gradients and directly backpropagates gradients from \mathrm{MetaTrain}((x,z)_A, \theta^{(0)}) to \theta_{init}. BPTT is very effective for a small number of optimization steps, but is vulnerable to exploding gradients and is memory intensive. In addition to BPTT, we find that stochastic finite-difference algorithms such as Evolution Strategies (ES) (Salimans et al., 2017) and Parameter-Exploring Policy Gradients (PEPG) (Sehnke et al., 2010) can sometimes improve optimization robustness. In practice, we switch between BPTT and PEPG depending on the number of meta-training iterations. The Meta-DEMINE algorithm is specified in Algorithm 1." }, { "heading": "5 EVALUATION ON SYNTHETIC DATASETS", "text": "Dataset. We evaluate our approaches DEMINE and Meta-DEMINE against baselines and state-of-the-art approaches on 3 synthetic datasets: 1D Gaussian, 20D Gaussian and sine wave. For the 1D and 20D Gaussian datasets, following Belghazi et al.
(2018), we define two k-dimensional multivariate Gaussian random variables X and Z which have component-wise correlation corr(X_i, Z_j) = \delta_{ij}\rho, where \rho \in (-1, 1) and \delta_{ij} is Kronecker's delta. The mutual information I(X;Z) has a closed-form solution: I(X;Z) = -\frac{k}{2} \ln(1 - \rho^2). For the sine wave dataset, we define two random variables X and Z, where X \sim U(-1, 1), Z = \sin(aX + \frac{\pi}{2}) + 0.05\epsilon, and \epsilon \sim \mathcal{N}(0, 1). Estimating mutual information accurately given few pairs of (X,Z) requires the ability to extrapolate the sine wave given few examples. Ground truth MI for the sine wave dataset is approximated by running the KSG estimator (Kraskov et al., 2004) on 1,000,000 samples.
Implementation. We compare our estimators, DEMINE and Meta-DEMINE, against the KSG estimator (Kraskov et al., 2004), denoted MI-KSG, and MINE-f (Belghazi et al., 2018). For both DEMINE and Meta-DEMINE, we study a variance reduction mode, referred to as -vr, where hyperparameters are selected by optimizing the 95% confident estimation mean (\mu - 2\sigma_\mu), and a statistical significance mode, referred to as -sig, where hyperparameters are selected by optimizing the 95% confident MI lower bound (\mu - \epsilon). Samples (x, z) are split 50%-50% into (x, z)^{train} and (x, z)^{val}. We use a separable network architecture T_\theta(x,z) = M(\tanh(w \cos\langle f(x), g(z) \rangle + b) - t). f and g are MLP encoders that embed signals x and z into vector embeddings. Hyperparameters t \in [-1, 1] and M control the upper and lower bounds T_\theta(x,z) \in [-M(1+t), M(1-t)]. Parameters w and b are learnable. MLP design and optimization hyperparameters are selected using Bayesian hyperparameter optimization (Bergstra et al., 2013), described below.

Algorithm 1 Meta-DEMINE
Input Data: {(x, z)^{train}, (x, z)^{val}}
Parameters: batch size B, meta-learning iterations N_M, task augmentation iterations N_T, optimization iterations N_O, ratio r, learning rate \eta, meta learning rate \eta_{meta}
Output: MI, T_{\theta_{init}}(X,Z), T_\theta(X,Z)
1: for i = 1 : N_M do
2:   for j = 1 : N_T do
3:     A = r \times train, B = train - A
4:     Split (x, z)^{train} into (x, z)_A and (x, z)_B
5:     Sample transformation R_x for x, R_x(\cdot) = m(P(O(G(\cdot))))
6:     Sample transformation R_z for z, R_z(\cdot) = m(P(O(G(\cdot))))
7:     \theta_{meta}^{(0)} \leftarrow \theta_{init}
8:     for k = 1 : N_O do
9:       Sample a batch of (x, z)_B \sim (x, z)_A
10:      Compute L((R_x(x), R_z(z))_B, \theta_{meta}^{(k)})
11:      Compute \nabla_{\theta_{meta}^{(k)}} L, the gradient for \theta_{meta}
12:      Update \theta_{meta} using Adam (Kingma & Ba, 2014) with \eta
13:    end for
14:    Compute L_{meta}((R_x(x), R_z(z))_B, \theta_{meta}^{(N_O)})
15:    Compute \nabla_{\theta_0} L_{meta}, the gradient to \theta_{init}, using BPTT
16:  end for
17:  Update \theta_{init} using Adam (Kingma & Ba, 2014) with \eta_{meta}
18: end for
19: \theta^{(0)} \leftarrow \theta_{init}
20: for i = 1 : N_O do
21:  Sample a batch of (x, z)_B \sim (x, z)^{train}
22:  Compute L((x, z)_B, \theta^{(i)})
23:  Compute the gradient \nabla_\theta L
24:  Update \theta using Adam with \eta
25: end for
26: Compute MI = L((x, z)^{val}, \theta^{(N_O)})
27: return MI, \theta_{init}, \theta^{(N_O)}

Hyperparameter search on DEMINE-vr and DEMINE-sig was conducted using the hyperopt package^4. Seven hyperparameters were involved in the search: 1) number of encoder layers [1, 5]; 2) encoder hidden size [8, 256]; 3) learning rate \eta [10^{-4}, 3 \times 10^{-1}] in log scale; 4) number of optimization iterations N_O [5, 200] (sine wave: [5, 5000]) in log scale; 5) batch size B [256, 1024]; 6) M [10^{-3}, 5] in log scale; 7) t [-1, 1]. The mean \mu and sample standard deviation \sigma of the MI estimate are computed over 3-fold cross-validation on (x, z)^{train}. DEMINE-vr maximizes the two-sigma low \mu - 2\sigma_\mu, where \sigma_\mu = \frac{1}{\sqrt{3}}\sigma due to 3-fold cross-validation. DEMINE-sig maximizes the statistical significance \mu - \epsilon, where \epsilon is the two-sided 95% confidence interval of MI.
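For concreteness, the separable critic described above can be realized as in the following PyTorch sketch; the encoder layer sizes are illustrative placeholders, and only the bounded form T(x, z) = M(tanh(w cos<f(x), g(z)> + b) - t) is taken from the text.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SeparableCritic(nn.Module):
    # Bounded separable critic: T(x, z) = M * (tanh(w * cos<f(x), g(z)> + b) - t).
    # Since tanh lies in (-1, 1), T is bounded in (-M(1+t), M(1-t)).
    def __init__(self, x_dim, z_dim, hidden=64, emb=32, M=1.0, t=0.0):
        super().__init__()
        # f and g are small MLP encoders; depth/width here are placeholders
        self.f = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, emb))
        self.g = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, emb))
        self.w = nn.Parameter(torch.ones(1))   # learnable scale
        self.b = nn.Parameter(torch.zeros(1))  # learnable bias
        self.M, self.t = M, t

    def forward(self, x, z):
        cos = F.cosine_similarity(self.f(x), self.g(z), dim=-1)
        return self.M * (torch.tanh(self.w * cos + self.b) - self.t)

The explicit bound on T is what makes the Hoeffding argument of Theorem 1 applicable, which is why M and t are treated as hyperparameters rather than learned.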
Meta-DEMINE-vr and Meta-DEMINE-sig subsequently reuse the hyperparameters of DEMINE-vr and DEMINE-sig, respectively.
Meta-learning hyperparameters are chosen as follows: outer loop N_M = 3,000 iterations, task augmentation N_T = 1 iteration, r = 0.8, \eta_{meta} = \eta/3, with task augmentation mode m(P(O(\cdot))). N_O was capped at 30 iterations for the 1D and 20D Gaussian datasets due to the memory limit. For the sine wave datasets with large N_O, we used PEPG (Sehnke et al., 2010) rather than BPTT.
^4 Hyperopt package: https://github.com/hyperopt/hyperopt.
For MI-KSG, we use the off-the-shelf implementation by Gao et al. (2017) with the default number of nearest neighbors k = 3. MI-KSG does not provide any confidence interval. For MINE-f, we use the same network architecture as DEMINE-vr. We implement both the original formulation, which optimizes T_\theta on (x, z) till convergence (10k iterations), and our own implementation MINE-f-ES with early stopping, where optimization is stopped after the same number of iterations as DEMINE-vr to control overfitting.
Results. Figure 1(a) shows MI estimation performance on 20D Gaussian datasets with varying \rho \in {0, 0.1, 0.2, 0.3, 0.4, 0.5} using N = 300 samples. Results are averaged over 5 runs to compare estimator bias, variance and confidence. Note that Meta-DEMINE-sig detects the highest p < 0.05 confidence MI, outperforming DEMINE-sig, which is a close second. Both detect p < 0.05 statistically significant dependency starting at \rho = 0.3, whereas the estimates of all other approaches are low confidence. This shows that, contrary to common belief, estimating the variational lower bounds with high confidence can be challenging under limited data. MINE-f estimates MI > 3.0 and MINE-f-ES estimates positive MI when \rho = 0, both due to overfitting, despite MINE-f-ES having the lowest empirical bias. DEMINE variants have relatively high empirical bias but low variance due to tight upper and lower bound control, which provides a different angle to understand the bias-variance trade-off in MI estimation (Poole et al., 2018).
Figure 1(b,c,d) shows MI estimation performance on the 1D Gaussian, 20D Gaussian and sine wave datasets with fixed \rho = 0.8, \rho = 0.3 and a = 8\pi respectively, with a varying number of samples N \in {30, 100, 300, 1000, 3000}. More samples asymptotically improve the empirical bias across all estimators. As opposed to the 1D Gaussian datasets, which are well solved by N = 300 samples, the higher-dimensional 20D Gaussian and higher-complexity sine wave datasets are much more challenging and are not solved using N = 3000 samples with a signal-agnostic MLP architecture. DEMINE-sig and Meta-DEMINE-sig detect p < 0.05 statistically significant dependency not only on the 1D and 20D Gaussian datasets, where x and z have non-zero correlation, but also on the sine wave datasets, where the correlation between x and z is 0. This means that DEMINE-sig and Meta-DEMINE-sig can be used for nonlinear dependency testing to complement linear correlation testing.
We study the effect of cross-validation, meta-learning and task augmentation on 20D Gaussian with \rho = 0.3 and N = 300. Figure 2 plots the performance of Meta-DEMINE-vr over N_M = 3000 meta iterations under combinations of task augmentation modes and number of adaptation iterations N_O \in {0, 20}. Overall, task augmentation modes which involve axis flipping m(\cdot) and permutation P(\cdot) are the most successful. With N_O = 20 steps of adaptation, task augmentation modes P(\cdot), m(P(\cdot)) and m(P(O(\cdot))) prevent overfitting and improve performance. A sketch of the sampled transformations is given below.
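Under our reading of Sec. 4.2, the sampled invertible transformations can be written as in the following sketch; whether the offset \epsilon is scalar or per-dimension is not specified in the text, so the per-dimension choice here is an assumption.

import numpy as np

def sample_augmentation(d, rng):
    # Sample one invertible transformation R(.) = m(P(O(G(.)))).
    sign = rng.choice([-1.0, 1.0])            # Mirror m(x) = (2n - 1) x
    perm = rng.permutation(d)                 # Permute P(x): random dimension permutation
    offset = rng.uniform(-0.1, 0.1, size=d)   # Offset O(x) = x + eps (per-dim assumed)
    gamma = rng.uniform(0.5, 2.0)             # Gamma G(x) = sign(x) |x|^gamma

    def R(x):
        x = np.sign(x) * np.abs(x) ** gamma   # G
        x = x + offset                        # O
        x = x[..., perm]                      # P
        return sign * x                       # m
    return R

Because each component is invertible, the MI of the transformed pair equals that of the original pair, which is what licenses treating each sampled R as a fresh task.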
The performance improvements of task augmentation are not simply from changes in batch size, learning rate or number of optimization iterations, because meta-learning without task augmentation for both N_O = 0 and 20 could not outperform the baseline. Meta-learning without task augmentation, and with task augmentation but using only O(\cdot) or G(\cdot), results in overfitting. Task augmentation with m(\cdot) or m(P(O(G(\cdot)))) prevents overfitting, but does not provide performance benefits, possibly because their complexity is insufficient or excessive for 20 adaptation steps. Furthermore, task augmentation with no adaptation (N_O = 0) falls back to data augmentation, where samples from transformed distributions are directly used to learn T_\theta(x,z). Data augmentation with O(\cdot) outperforms no augmentation, but is unable to outperform the baseline and suffers from overfitting. This shows that task augmentation provides improvements orthogonal to data augmentation." }, { "heading": "6 APPLICATION: FMRI INTER-SUBJECT CORRELATION (ISC) ANALYSIS", "text": "Humans use language to effectively transmit brain representations among conspecifics. For example, after witnessing an event in the world, a speaker may use verbal communication to evoke neural representations reflecting that event in a listener's brain (Hasson et al., 2012). The efficacy of this transmission, in terms of listener comprehension, is predicted by speaker-listener neural synchrony and synchrony among listeners (Stephens et al., 2010). To date, most work has measured brain-to-brain synchrony by locating statistically significant inter-subject correlation (ISC), quantified as the Pearson product-moment correlation coefficient between response time series for corresponding voxels or regions of interest (ROIs) across individuals (Hasson et al., 2004; Schippers et al., 2010; Silbert et al., 2014; Nastase et al., 2019). Using DEMINE and Meta-DEMINE for statistical dependency testing, we can extend ISC analysis to capture nonlinear and higher-order interactions in continuous fMRI responses. Specifically, given synchronized fMRI response frames in two brain regions X and Z across K subjects, we treat X_i, Z_i, i = 1, ..., K, as random variables. We model the conditional mutual information I(X_i; Z_j | i \neq j) as the MI form of pair-wise ISC analysis. By definition, I(X_i; Z_j | i \neq j) first computes the MI between activations X_i and Z_j from subjects i and j respectively, and then averages across pairs of subjects i \neq j. It can be lower bounded using Eq. 7 by learning a T_\theta(x,z) shared across all subject pairs; a sketch of this pairwise pooling is given after the tables below.

Table 1: Number of HCP-MMP1 regions with significant correlation (r) and MI (DEMINE, Meta-DEMINE) during listening.
No. shared       | r  | DEMINE-sig | Meta-DEMINE-sig
r                | 37 | 24         | 23
DEMINE-sig       | 24 | 28         | 26
Meta-DEMINE-sig  | 23 | 26         | 29

Table 2: Segment classification accuracy for NeuralMI versus Pearson's correlation in 1-vs-rest*.
                    |        ISC Mask            |        dDMN Mask
Accuracy (%)        | P    F    Br   Bk   MI     | P    F    Br   Bk   MI
Chance              | 3.7  1.8  2.6  1.9  N/A    | 3.7  1.8  2.6  1.9  N/A
Pearson's r 1vR     | 35.0 20.4 25.8 31.5 N/A    | 14.8 6.4  11.8 9.9  N/A
DEMINE-vr 1vR       | 42.8 28.0 32.8 35.9 0.637  | 16.5 7.9  11.6 12.0 0.035
Meta-DEMINE-vr 1vR  | 47.2 32.5 39.9 41.0 0.752  | 13.7 7.9  8.2  8.9  0.031
Abbreviations: P: Pieman; F: Forgot; Br: Bronx; Bk: Black; MI: Mutual Information.
*Note that all the results are averaged over the subjects.
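A minimal sketch of how the pair-wise conditional-MI form of ISC can be pooled across subjects with one shared critic follows; the time-shuffling used as a stand-in for independent sampling from the product of marginals, and all helper names, are our assumptions rather than the paper's exact procedure.

import itertools
import torch

def pairwise_isc_bound(T, responses_x, responses_z):
    # MI form of pair-wise ISC: average a MINE-f style bound over all
    # ordered subject pairs (i, j), i != j, with a single shared critic T.
    # responses_x[i], responses_z[i]: (time, dim) tensors for subject i in
    # regions X and Z; T is assumed to broadcast over the time axis.
    bounds = []
    K = len(responses_x)
    for i, j in itertools.permutations(range(K), 2):
        x, z = responses_x[i], responses_z[j]
        joint = T(x, z).mean()
        # shuffle time to approximate samples from the product of marginals
        z_shuf = z[torch.randperm(z.shape[0])]
        marg = torch.exp(T(x, z_shuf)).mean()
        bounds.append(joint - marg + 1.0)
    return torch.stack(bounds).mean()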
Dataset. We study MI-based and correlation-based ISC on an fMRI story comprehension dataset by Nastase et al. (2019) with 40 participants listening to four spoken stories. The average story duration is 11 minutes. An fMRI frame with full brain coverage is captured at a repetition time of 1 TR = 1.5 seconds with 2.5 mm isotropic spatial resolution. We restricted our analysis to subsets of voxels defined using independent data from previous studies: functionally-defined masks of high ISC voxels (ISC; 3,800 voxels) and dorsal Default-Mode Network voxels (dDMN; 3,940 voxels) from Simony et al. (2016), as well as 180 HCP-MMP1 multimodal cortex parcels from Glasser et al. (2016). All masks were defined in MNI space.
Implementation. We compare MI-based ISC using DEMINE and Meta-DEMINE with correlation-based ISC using Pearson's correlation. The DEMINE and Meta-DEMINE setup follows Section 5. The fMRI data were partitioned by subject into a train set of 20 subjects and a validation set of 20 different subjects. A residual 1D CNN is used instead of an MLP as the encoder for studying temporal dependency. For Pearson's correlation, high-dimensional signals are reshaped to 1D for correlation analysis. The effective sample size for confidence interval calculation is the number of unique non-overlapping fMRI samples.
Results. We first examine, for the fine-grained HCP-MMP1 brain regions, which have p < 0.05 statistically significant MI and Pearson's correlation. Table 1 shows the result. Overall, more regions have statistically significant correlation than dependency. This is expected because correlation requires less data to detect. But Meta-DEMINE is able to find 6 brain regions that have statistically significant dependency but lack significant correlation. This shows that MI analysis can be used to complement correlation-based ISC analysis.
By considering temporal ISC over time, fMRI signals can be modeled with improved accuracy. In Table 2 we apply DEMINE and Meta-DEMINE with L = 10 TRs (15 s) sliding windows as random variables to study the amount of information that can be extracted from the ISC and dDMN masks. We use between-subject time-segment classification (BSC) for evaluation (Haxby et al., 2011; Guntupalli et al., 2016). Each fMRI scan is divided into K non-overlapping L = 10 TRs time segments. The BSC task is one-versus-rest retrieval: retrieve the corresponding time segment z of an individual given a group of time segments x excluding that individual, measured by top-1 accuracy. For the retrieval score, T_\theta(X,Z) is used for DEMINE and Meta-DEMINE, and \rho(X,Z) is used for Pearson's correlation as a simple baseline. With a CNN as encoder, DEMINE and Meta-DEMINE model the signal better and achieve higher accuracy. Also, Meta-DEMINE is able to extract 0.75 nats of MI from the ISC mask over 10 TRs (15 s), which could potentially be improved by more samples." }, { "heading": "7 CONCLUSION", "text": "We illustrated that a predictive view of the MI lower bounds coupled with meta-learning results in data-efficient variational MI estimators, DEMINE and Meta-DEMINE, that are capable of performing statistical tests of dependency. We also showed that our proposed task augmentation reduces overfitting and improves generalization in meta-learning. We successfully applied MI estimation to real-world, data-scarce fMRI datasets. Our results suggest a greater avenue of using neural networks and meta-learning to improve MI analysis, and of applying neural network-based information theory tools to enhance the analysis of information processing in the brain.
Model-agnostic, high-confidence MI lower bound estimation approaches, including MINE, DEMINE and Meta-DEMINE, are limited to estimating small MI lower bounds up to O(log N), as pointed out in McAllester & Statos (2018), where N is the number of samples. In real fMRI datasets, however, strong dependency is rare, and existing MI estimation tools are limited more by their ability to accurately characterize the dependency. Nevertheless, when quantitatively measuring strong dependency, alternatives to MI such as cross-entropy (McAllester & Statos, 2018), model-based quantities, correlation, or CCA may be measured with high confidence." }, { "heading": "A APPENDIX", "text": "A.1 THE DEMINE ALGORITHM

Algorithm 2 DEMINE
Input Data: {(x, z)^{train}, (x, z)^{val}}
Parameters: batch size B, iterations N_O, learning rate \eta
Output: MI, T_\theta(X,Z)
1: \theta^{(0)} \leftarrow Xavier initialization (Glorot & Bengio, 2010)
2: for i = 1 : N_O do
3:  Sample a batch of (x_i, z_i)_B \sim (x, z)^{train}
4:  Compute L((x_i, z_i)_B, \theta^{(i-1)})
5:  Compute \nabla_{\theta^{(i)}} L, the gradient for \theta
6:  Update \theta^{(i)} using Adam (Kingma & Ba, 2014) with \eta
7: end for
8: MI = \widehat{I(X,Z)}_{n,\theta^{(N_O)}}
9: return MI, \theta^{(N_O)}

A.2 ADDITIONAL DETAILS OF THE FMRI DATASET
The dataset we used contains 40 participants (mean age = 23.3 years, standard deviation = 8.9, range: 18-53; 27 female) recruited to listen to four spoken stories^{5,6}. The stories were renditions of "Pie Man" and "Running from the Bronx" by Jim O'Grady (O'Grady, 2018b;a), "The Man Who Forgot Ray Bradbury" by Neil Gaiman (Gaiman, 2018), and "I Knew You Were Black" by Carol Daniel (Daniel, 2018); story durations were 7, 9, 14, and 13 minutes, respectively. After scanning, participants completed a questionnaire comprising 25-30 questions per story intended to measure narrative comprehension. The questionnaires included multiple choice, True/False, and fill-in-the-blank questions, as well as four additional subjective ratings per story. Functional and structural images were acquired using a 3T Siemens Prisma with a 64-channel head coil. Briefly, functional images were acquired in an interleaved fashion using gradient-echo echo-planar imaging with a multiband acceleration factor of 3 (TR/TE = 1500/31 ms, where TE stands for "echo time"; resolution = 2.5 mm isotropic voxels; full brain coverage).
All fMRI data were formatted according to the Brain Imaging Data Structure (BIDS) standard (Gorgolewski et al., 2016) and preprocessed using the fMRIPrep library (Esteban et al., 2018). Functional data were corrected for slice timing, head motion, and susceptibility distortion, and normalized to MNI space using nonlinear registration. Nuisance variables comprising head motion parameters, framewise displacement, linear and quadratic trends, sine/cosine bases for high-pass filtering (0.007 Hz), and six principal component time series from cerebrospinal fluid (CSF) and white matter (WM) were regressed out of the signal using the Analysis of Functional NeuroImages (AFNI) software suite (Cox, 1996).
The fMRI data comprise X \in \mathbb{R}^{V_i \times T} for each subject, where V_i represents the flattened and masked voxel space and T represents the number of samples (in TRs) during auditory stimulus presentation.
Additional Details on Dataset Collection. Functional and structural images were acquired using a 3T Siemens Magnetom Prisma with a 64-channel head coil.
Functional, blood-oxygenation-level-dependent (BOLD) images were acquired in an interleaved fashion using gradient-echo echo-planar imaging with pre-scan normalization, fat suppression, a multiband acceleration factor of 3, and no in-plane acceleration: TR/TE = 1500/31 ms, flip angle = 67 degrees, bandwidth = 2480 Hz per pixel, resolution = 2.5 mm isotropic voxels, matrix size = 96 x 96, field of view (FoV) = 240 x 240 mm, 48 axial slices with roughly full brain coverage and no gap, anterior-posterior phase encoding. At the beginning of each scanning session, a T1-weighted structural scan (where T1 stands for "longitudinal relaxation time") was acquired using a high-resolution single-shot Magnetization-Prepared 180 degrees radio-frequency pulses and RApid Gradient-Echo (MPRAGE) sequence with an in-plane acceleration factor of 2 using GeneRalized Autocalibrating Partial Parallel Acquisition (GRAPPA): TR/TE/TI = 2530/3.3/1100 ms, where TI stands for inversion time, flip angle = 7 degrees, resolution = 1.0 x 1.0 x 1.0 mm voxels, matrix size = 256 x 256, FoV = 256 x 256 x 176 mm, 176 sagittal slices, ascending acquisition, anterior-posterior phase encoding, no fat suppression, 5 min 53 s total acquisition time. At the end of each scanning session a T2-weighted (where T2 stands for "transverse relaxation time") structural scan was acquired using the same acquisition parameters and geometry as the T1-weighted structural image: TR/TE = 3200/428 ms, 4 minutes 40 seconds total acquisition time. A field map was acquired at the beginning of each scanning session, but was not used in subsequent analyses.
^5 Two of the stories were told by a professional storyteller undergoing an fMRI scan; however, fMRI data for the speaker were not analyzed for the present work due to the head motion induced by speech production.
^6 The study was conducted in compliance with the Institutional Review Board of the University
Additional Details on Dataset Preprocessing. Preprocessing was performed using the fMRIPrep library^7 (Esteban et al., 2018), a Nipype^8 (Gorgolewski et al., 2011) based tool. T1-weighted images were corrected for intensity non-uniformity using the N4 bias field correction algorithm (Tustison et al., 2010) and skull-stripped using Advanced Normalization Tools (ANTs) (Avants et al., 2008). Nonlinear spatial normalization to the International Consortium for Brain Mapping (ICBM) 152 Nonlinear Asymmetrical template version 2009c (Fonov et al., 2009) was performed using ANTs. Brain tissue segmentation (cerebrospinal fluid, white matter, and gray matter) was performed using the FSL library's^9 FAST tool (Zhang et al., 2001). Functional images were slice timing corrected using AFNI's 3dTshift (Cox, 1996) and corrected for head motion using the FSL library's MCFLIRT tool (Jenkinson et al., 2002). "Fieldmap-less" distortion correction was performed by co-registering each subject's functional image to that subject's intensity-inverted T1-weighted image (Wang et al., 2017), constrained with an average field map template (Treiber et al., 2016). This was followed by co-registration to the corresponding T1-weighted image using FreeSurfer's^10 boundary-based registration (Greve & Fischl, 2009) with 9 degrees of freedom. Motion correcting transformations, the field distortion correcting warp, the BOLD-to-T1 transformation, and the T1-to-template (MNI) warp were concatenated and applied in a single step with Lanczos interpolation using ANTs.
Physiological noise regressors were extracted by applying "a Component Based Noise Correction Method", aCompCor (Behzadi et al., 2007). Six principal component time series were calculated within the intersection of the subcortical mask and the union of the CSF and WM masks calculated in T1w (T1-weighted) space, after their projection to the native space of each functional run. Framewise displacement (Power et al., 2014) was calculated for each functional run. Functional images were downsampled to 3 mm resolution. Nuisance variables comprising six head motion parameters (and their derivatives), framewise displacement, linear and quadratic trends, sine/cosine bases for high-pass filtering (0.007 Hz cutoff), and six principal component time series from an anatomically-defined mask of cerebrospinal fluid and white matter were regressed out of the signal using AFNI's 3dTproject (Cox, 1996). Functional response time series were z-scored for each voxel.
^7 https://github.com/poldracklab/fmriprep
^8 https://github.com/nipy/nipype
^9 https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FSL
^10 https://surfer.nmr.mgh.harvard.edu/fswiki/FreeSurferWiki
A.3 COMPARISON BETWEEN HSIC AND DEMINE
We first review the Hilbert-Schmidt independence criterion (HSIC), a widely-studied correlation-based independence criterion, discuss its connections with the MINE family of mutual information lower bound methods, and then study DEMINE and a spectral HSIC implementation on the synthetic datasets.
The HSIC approach (Gretton et al., 2005b;a) is based on a necessary and sufficient condition of independence: two random variables X and Z are independent if and only if for all bounded or positive functions f, g: \mathbb{R}^d \to \mathbb{R}, \mathbb{E}_{XZ}[f(X)g(Z)] - \mathbb{E}_X[f(X)]\mathbb{E}_Z[g(Z)] = 0, or equivalently \mathbb{E}_{XZ}[(f(X) - \mathbb{E}_X f(X))(g(Z) - \mathbb{E}_Z g(Z))] = 0. A proof can be constructed by showing equivalence to the definition of independence, P(X,Z) = P(X)P(Z).
To construct an independence test, existing approaches (Gretton et al., 2005b;a) use Reproducing Kernel Hilbert Spaces (RKHS) for f and g, a function space that not only covers all functions between [0, 1], but also allows computationally efficient estimation or bounding of COCO(X,Z) = \sup_{f,g} \mathbb{E}_{XZ} f(X)g(Z) - \mathbb{E}_X f(X) \mathbb{E}_Z g(Z) given samples, and a test of COCO(X,Z) = 0. Confidence intervals are derived through McDiarmid's inequality, or by using closed-form distributions to approximate the test statistic to a certain order of moments, computing the confidence interval from the closed-form distribution.
The COCO(X,Z) used by HSIC estimators bears great resemblance to the MINE family of mutual information estimators. In fact, it can be shown that
COCO(X,Z) = \sup_{f,g} \mathbb{E}_{(x,z)\sim P_{XZ}} f(x)g(z) - \mathbb{E}_{x\sim P_X} f(x)\, \mathbb{E}_{z\sim P_Z} g(z)
 = \sup_{f,g} \mathbb{E}_{(x,z)\sim P_{XZ}} f(x)g(z) - \mathbb{E}_{x\sim P_X, z\sim P_Z} \log e^{f(x)g(z)}
 \ge \sup_{f,g} \mathbb{E}_{(x,z)\sim P_{XZ}} f(x)g(z) - \mathbb{E}_{x\sim P_X} \log \mathbb{E}_{z\sim P_Z} e^{f(x)g(z)}  (\approx I_{EB1})
 \ge \sup_{f,g} \mathbb{E}_{(x,z)\sim P_{XZ}} f(x)g(z) - \log \mathbb{E}_{x\sim P_X, z\sim P_Z} e^{f(x)g(z)}  (\approx I_{MINE})
 \ge \sup_{f,g} \mathbb{E}_{(x,z)\sim P_{XZ}} f(x)g(z) - \mathbb{E}_{x\sim P_X, z\sim P_Z} e^{f(x)g(z)} + 1  (\approx I_{MINE-f}, I_{EB})   (10)
This means that within a family of decomposable functions where T_\theta(X,Z) = f(X)g(Z), COCO(X,Z) is an upper bound to the MINE estimates. In addition, the equivalence of COCO(X,Z) = 0 and I(X,Z) = 0 seems to suggest a form of mutual information bound. On the other hand, MINE allows the use of non-decomposable T_\theta(X,Z). Existing results on MINE (Poole et al., 2018) seem to suggest that a non-decomposable T_\theta(X,Z) gives superior empirical mutual information estimation performance over a decomposable T_\theta(X,Z).
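To make the relation in Eq. 10 concrete, the following NumPy sketch evaluates both objectives for one fixed decomposable critic T(x, z) = f(x)g(z); since e^y >= y + 1, the COCO value is never below the MINE-f value, matching the chain above. The function names are illustrative.

import numpy as np

def coco_vs_minef(f, g, xs, zs):
    # For a fixed decomposable critic T(x, z) = f(x) g(z), compare the
    # COCO-style objective with the MINE-f style bound it dominates (Eq. 10).
    # f, g map (n, d) arrays to (n,) scores, assumed bounded in practice.
    fx, gz = f(xs), g(zs)
    joint = np.mean(fx * gz)                     # E_{P_XZ} f(x) g(z)
    coco = joint - np.mean(fx) * np.mean(gz)     # COCO objective
    cross = np.exp(fx[:, None] * gz[None, :])    # e^{f(x_i) g(z_j)}, all pairs
    mine_f = joint - np.mean(cross) + 1.0        # MINE-f style bound
    return coco, mine_f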
The necessity of non-decomposable T_\theta(X,Z) designs, and mutual information lower bounds under decomposable designs of T_\theta(X,Z), may be subjects of further research.
Similar to the MINE estimators, HSIC-based estimators tend to have loose confidence intervals due to the need to bound the generalization error of the kernels f and g on unseen data points. We expect a cross-validation-based approach like DEMINE to also improve the performance of HSIC-based estimators.
Comparison between DEMINE and HSIC on synthetic benchmarks. We compare Canonical Correlation Analysis (CCA), DEMINE, DEMINE-meta and HSIC for independence testing on our 4 synthetic Gaussian and sine wave benchmarks presented in Section 5. Results for a single random seed are reported for a compact presentation, but we have run experiments using multiple random seeds and find the result of a single random seed representative enough.
For CCA, we compute the p-value using the \chi^2 test. For HSIC, we report the p-value using a publicly available implementation of a spectral HSIC test (Zhang et al., 2018)^{11}. The default kernel is used. Hyperparameters are set to the recommended setting when available. For DEMINE and DEMINE-meta, the setup is identical to Section 5. A two-sided 95% confidence interval is reported, but only the lower side is shown.
Experiment results are compiled in Table 3. Statistically significant dependence detections with p < 0.05 are bolded. Results show that spectral HSIC requires less data to test dependency for the simple Gaussian datasets. But on the more challenging sine wave dataset, DEMINE-sig and DEMINE-meta-sig perform better. Overall, we find DEMINE more complementary to linear correlations for dependency testing on complex signals. Note that Gaussian kernels are used for spectral HSIC. More complex kernels have the potential to improve results." }, { "heading": "B SANITY CHECK ON STATISTICAL DEPENDENCY TESTING", "text": "We performed a sanity check of our approach, as well as of several statistical dependency testing implementations that we compare against. We ran the different statistical dependency testing implementations on our 1D Gaussian \rho = 0.0, N = 30 samples dataset, where X and Z are independent. A large number of runs with different random seeds was performed. The false positive rate of p < 0.05 statistical significance was recorded to validate whether different implementations actually follow such false positive rates. Correct implementations should have a false positive rate lower than or equal to 0.05. Results are summarized in Table 4. Statistically significant deviations (under the Hoeffding inequality) are marked in bold font. The number of runs for DEMINE is relatively low, but no false positives were found. The low false positive rate of DEMINE might be due partly to the conservative estimation provided by the Hoeffding inequality, and partly to the generalization gap between train and test splits.
^11 https://github.com/oxmlcs/kerpy. We also experimented with classic HSIC with gamma approximation (Gretton et al., 2005a), https://github.com/amber0309/HSIC, and a block HSIC implementation (Zhang et al., 2018) from https://github.com/oxmlcs/kerpy, but find that they both report significantly more than 5% false positives for independent 1D and 20D Gaussians (\rho = 0) at N = {30, 100} across 100,000-1,000,000 random seeds, indicating errors in confidence interval calculations.
^12 This is a false positive case for CCA, because for this sine wave data the ground truth correlation is 0." } ]
2019
null
SP:eb9803ef7698cade762d39290f842a7b3bf897d0
[ "This paper proposes a variational hyper recurrent neural network which is a combination of the variational RNN and the hypernetwork. The hypernetwork is an RNN whose output modifies the parameters of the variational RNN dynamically at runtime. Overall, this seems like an extension of the idea of using a hypernetwork with the VRNN (rather than the RNN as done in Ha. et. al). The model is trained via the FIVO objective. The model and learning algorithm are compared to the variational RNN and tested on a variety of synthetic settings where the VHRNN outperforms the VRNN in held-out likelihood. The performance gains are investigated on synthetic datasets where the paper notes that the VHRNN is often quicker to adapt variations that happen within seqences (for example, the paper considers a dataset where multiple patterns are stitched together into a sequence and study the changes in the KL divergence and reconstruction at switch points). On four real-world sequential datasets, the paper finds that the model outperforms the VRNN across many configurations and with a fewer number of parameters.", "In this paper the authors propose an architecture based on variational autoencoders and hyper-networks. The basic idea is that the weights of the underlying RNN/autoencoder are not fixed, but are coming from another RNN/feed-forward network which captures the underlying dynamics and adjusts the weights accordingly. The experimental results show the benefit of the model compared to a similar method without hypernets." ]
In this work, we propose a novel probabilistic sequence model that excels at capturing high variability in time series data, both across sequences and within an individual sequence. Our method uses temporal latent variables to capture information about the underlying data pattern and dynamically decodes the latent information into modifications of weights of the base decoder and recurrent model. The efficacy of the proposed method is demonstrated on a range of synthetic and real-world sequential data that exhibit large scale variations, regime shifts, and complex dynamics.
[]
[ { "authors": [ "Dzmitry Bahdanau", "Shikhar Murty", "Michael Noukhovitch", "Thien Huu Nguyen", "Harm de Vries", "Aaron Courville" ], "title": "Systematic generalization: What is required and can it be learned", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Tim Bollerslev" ], "title": "Generalized autoregressive conditional heteroskedasticity", "venue": "Journal of econometrics,", "year": 1986 }, { "authors": [ "Nicolas Boulanger-Lewandowski", "Yoshua Bengio", "Pascal Vincent" ], "title": "Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription", "venue": "arXiv preprint arXiv:1206.6392,", "year": 2012 }, { "authors": [ "Samuel R Bowman", "Luke Vilnis", "Oriol Vinyals", "Andrew M Dai", "Rafal Jozefowicz", "Samy Bengio" ], "title": "Generating sentences from a continuous space", "venue": "arXiv preprint arXiv:1511.06349,", "year": 2015 }, { "authors": [ "Yuri Burda", "Roger Grosse", "Ruslan Salakhutdinov" ], "title": "Importance weighted autoencoders", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Kyunghyun Cho", "Bart van Merrienboer", "Caglar Gulcehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using rnn encoder–decoder for statistical machine translation", "venue": "In EMNLP,", "year": 2014 }, { "authors": [ "Junyoung Chung", "Kyle Kastner", "Laurent Dinh", "Kratarth Goel", "Aaron C Courville", "Yoshua Bengio" ], "title": "A recurrent latent variable model for sequential data", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Robert F Engle" ], "title": "Autoregressive conditional heteroscedasticity with estimates of the variance of united kingdom inflation", "venue": "Econometrica: Journal of the Econometric Society,", "year": 1982 }, { "authors": [ "Emily Fox", "Erik B Sudderth", "Michael I Jordan", "Alan S Willsky" ], "title": "Nonparametric bayesian learning of switching linear dynamical systems", "venue": "In Advances in Neural Information Processing Systems,", "year": 2009 }, { "authors": [ "Marco Fraccaro", "Søren Kaae Sønderby", "Ulrich Paquet", "Ole Winther" ], "title": "Sequential neural models with stochastic layers", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Zoubin Ghahramani", "Geoffrey E Hinton" ], "title": "Switching state-space models", "venue": "Technical report, Citeseer,", "year": 1996 }, { "authors": [ "Alex Graves", "Abdel-rahman Mohamed", "Geoffrey Hinton" ], "title": "Speech recognition with deep recurrent neural networks", "venue": "IEEE international conference on acoustics, speech and signal processing,", "year": 2013 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Ramon Huerta", "Thiago Mosqueiro", "Jordi Fonollosa", "Nikolai F Rulkov", "Irene Rodriguez-Lujan" ], "title": "Online decorrelation of humidity and temperature in chemical sensors for continuous monitoring", "venue": "Chemometrics and Intelligent Laboratory Systems,", "year": 2016 }, { "authors": [ "Rafal Jozefowicz", "Oriol Vinyals", "Mike Schuster", "Noam Shazeer", "Yonghui Wu" ], "title": "Exploring the limits of language modeling", "venue": "arXiv preprint arXiv:1602.02410,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Max 
Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Ryan Kiros", "Yukun Zhu", "Ruslan R Salakhutdinov", "Richard Zemel", "Raquel Urtasun", "Antonio Torralba", "Sanja Fidler" ], "title": "Skip-thought vectors", "venue": "In NIPS,", "year": 2015 }, { "authors": [ "Rui Luo", "Weinan Zhang", "Xiaojun Xu", "Jun Wang" ], "title": "A neural stochastic volatility model", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Chris J Maddison", "John Lawson", "George Tucker", "Nicolas Heess", "Mohammad Norouzi", "Andriy Mnih", "Arnaud Doucet", "Yee Teh" ], "title": "Filtering variational objectives", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Tomas Mikolov", "Martin Karafiát", "Lukas Burget", "Jan Cernockỳ", "Sanjeev Khudanpur" ], "title": "Recurrent neural network based language model", "venue": "In Interspeech,", "year": 2010 }, { "authors": [ "Kevin P Murphy" ], "title": "Switching kalman filters", "venue": null, "year": 1998 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Chao-Yuan Wu", "Amr Ahmed", "Alex Beutel", "Alexander J Smola", "How Jing" ], "title": "Recurrent recommender networks", "venue": "In WSDM,", "year": 2017 } ]
[ { "heading": null, "text": "In this work, we propose a novel probabilistic sequence model that excels at capturing high variability in time series data, both across sequences and within an individual sequence. Our method uses temporal latent variables to capture information about the underlying data pattern and dynamically decodes the latent information into modifications of weights of the base decoder and recurrent model. The efficacy of the proposed method is demonstrated on a range of synthetic and real-world sequential data that exhibit large scale variations, regime shifts, and complex dynamics." }, { "heading": "1 INTRODUCTION", "text": "Recurrent neural networks (RNNs) are the natural architecture for sequential data as they can handle variable-length input and output sequences. Initially invented for natural language processing, long short-term memory (LSTM; Hochreiter & Schmidhuber 1997), gated recurrent unit (GRU; Cho et al. 2014) as well as the later attention-augmented versions (Vaswani et al., 2017) have found wide-spread successes from language modeling (Mikolov et al., 2010; Kiros et al., 2015; Jozefowicz et al., 2016) and machine translation (Bahdanau et al., 2014) to speech recognition (Graves et al., 2013) and recommendation systems (Wu et al., 2017). However, RNNs use deterministic hidden states to process input sequences and model the system dynamics using a set of time-invariant weights, and they do not necessarily have the right inductive bias for time series data outside the originally intended domains.\nMany natural systems have complex feedback mechanisms and numerous exogenous sources of variabilities. Observations from such systems would contain large variations both across sequences in a dataset as well as within any single sequence; the dynamics could be switching regimes drastically, and the noise process could also be heteroskedastic. To capture all these intricate patterns in RNN with deterministic hidden states and a fixed set of weights requires learning about the patterns, the subtle deviations from the patterns, the conditions under which regime transitions occur which is not always predictable. Outside of the deep learning literature, many time series models have been proposed to capture specific types of high variabilities. For instance, switching linear dynamical models (Ackerson & Fu, 1970; Ghahramani & Hinton, 1996; Murphy, 1998; Fox et al., 2009) aim to model complex dynamical systems with a set of simpler linear patterns. Conditional volatility models (Engle, 1982; Bollerslev, 1986) are introduced to model time series with heteroscedastic noise process whose noise level itself is a part of the dynamics. However, these models usually encode specific inductive biases in a hard way, and cannot learn different behaviors and interpolate among the learned behaviors as deep neural nets.\nIn this work, we propose a new class of neural recurrent latent variable model, called the variational hyper RNN (VHRNN), which can perform dynamic regime identification and re-identification dynamically at inference time. Our model captures complex time series without encoding a large number of patterns in static weights, but instead only encodes base dynamics that can be selected and adapted based on run time observations. Thus it can easily learn to express a rich set of behaviors including but not limited to the ones mentioned above. 
Our model can dynamically identify the underlying pattern and express uncertainty due to observation noise, lack of information, or model misspecification. As such, VHRNN can model complex patterns with fewer parameters; and when given many parameters, it generalizes better than previous methods.
The VHRNN is built upon the previous variational RNN (VRNN) models (Chung et al., 2015) and hypernetworks (Ha et al., 2016). The VRNN models introduce stochastic latent variables at every time step, which are inferred using a variational recognition model. The overall model is trained by maximizing the evidence lower bound (ELBO). In VRNN, the latent variables capture the information in the stochastic hidden states and are then fed as input to the RNN and decoding model to produce reconstructed observations. In our work, by contrast, the latent variables are decoded to produce the RNN transition weights and observation projection weights in the style of hypernetworks (Ha et al., 2016), i.e., dynamically generating the scaling and bias vectors to adjust the base weights of the RNN. We demonstrate that the proposed VHRNN model is better at capturing different types of variability on several synthetic as well as real-world time series datasets." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "Variational Autoencoder The variational autoencoder (VAE) is one of the most popular unsupervised approaches to learning a compact representation from data (Kingma & Welling, 2013). It uses a variational distribution q(z|x) to approximate the intractable posterior distribution of the latent variable z. With the use of this variational approximation, it maximizes the evidence lower bound (ELBO) of the marginal log-likelihood of the data,
L(x) = \mathbb{E}_{q(z|x)}[\log p(x|z)] - D_{KL}(q(z|x) \| p(z)) \le \log p(x),
where p(z) is a prior distribution of z and D_{KL} denotes the Kullback-Leibler (KL) divergence. The approximate posterior q(z|x) is usually formulated as a Gaussian with a diagonal covariance matrix.
Variational RNN for Sequential Data Variational autoencoders have demonstrated impressive performance on non-sequential data like images. Many following works (Bowman et al., 2015; Chung et al., 2015; Fraccaro et al., 2016; Luo et al., 2018) extend the domain of VAE models to sequential data. Among them, the variational RNN (VRNN; Chung et al. 2015) further incorporates a latent variable at each time step into the model. A prior distribution conditioned on the contextual information and a variational posterior are proposed at each time step to optimize a step-wise variational lower bound. Sampled latent variables from the variational posterior are decoded into the observation at the current time step. The VHRNN model makes use of the same factorization of sequential data and joint distribution of latent variables as in VRNN. However, in the VHRNN model, the latent variables also parameterize the weights for decoding and transition in the RNN cell across time steps, giving the model more flexibility to deal with variations within and across sequences.
Importance Weighted Autoencoder and Filtering Variational Objective A parallel stream of work to improve latent variable models with variational inference studies tighter bounds on the data's log-probability than ELBO. The Importance Weighted Autoencoder (IWAE; Burda et al. 2016) estimates a different variational bound of the log-likelihood, which is provably tighter than ELBO. The Filtering Variational Objective (FIVO; Maddison et al.
2017) exploits the temporal structure of sequential data and uses particle filtering to estimate the data log-likelihood. FIVO still computes a step-wise IWAE bound based on the sampled particles at each time step, but it shows better sampling efficiency and tightness than IWAE. We use FIVO as the objective to train and evaluate our models.
HyperNetworks Our model is motivated by HyperNetworks (Ha et al., 2016), which use one network to generate the parameters of another. The dynamic version of HyperNetworks can be applied to sequence data, but due to the lack of latent variables it can only capture uncertainty in the output variables. For discrete sequence data such as text, categorical output variables can model multi-modal outputs very well; but on continuous time series with the typical Gaussian output variables, the model is much less capable of dealing with stochasticity. Furthermore, it does not allow straightforward interpretation of the model behaviour using the time series of KL divergence as we do in Sec. 4. With the augmentation of latent variables, VHRNN is much more capable of modelling uncertainty. It is worth noting that Bayesian HyperNetworks (Krueger et al., 2017) also have a latent variable in the context of HyperNetworks. However, the goal of Bayesian HyperNetworks is an improved version of Bayesian neural nets to capture model uncertainty. The work of Krueger et al. (2017) has no recurrent structure and cannot be applied to sequential data. Furthermore, the use of normalizing flows dramatically limits the flexibility of the decoder architecture design, unlike in VHRNN. Dezfouli et al. (2019) recently proposed to learn a disentangled low-dimensional latent space such that samples from the latent space are used to generate the parameters of an RNN model. Different latent variables could account for the behaviours of different subjects. In spite of the similarity in combining RNNs with HyperNetworks, the motivations and architectures are fundamentally different. The work of Dezfouli et al. (2019) intends to learn a low-dimensional interpretable latent space that represents the differences between subjects in a decision-making process, while our model tries to better handle the variance of dynamics both within and across sequences in general time-series data modeling. The work of Dezfouli et al. (2019) generates a set of parameters that is shared across time steps for each latent variable. In contrast, our model samples a latent variable and dynamically generates non-shared weights at each time step, which we believe is essential for handling variance of dynamics within sequences." }, { "heading": "3 MODEL FORMULATION", "text": "Variational Hyper RNN A recurrent neural network (RNN) can be characterized by h_t = g_\theta(x_t, h_{t-1}), where x_t and h_t are the observation and hidden state of the RNN at time step t, and \theta denotes the fixed weights of the RNN model. The hidden state h_t is often used to generate the output for other learning tasks, e.g., predicting the observation at the next time step. We augment the RNN with a latent random variable z_t, which is also used to output the non-shared parameters of the RNN at time step t:
h_t = g_{\theta(z_t, h_{t-1})}(x_t, z_t, h_{t-1}),   (1)
where \theta(z_t, h_{t-1}) is a hypernetwork that generates the parameters of the RNN at time step t. The latent variable z_t can also be used to determine the parameters of the generative model p(x_t | z_{\le t}, x_{<t}):
x_t | z_{\le t}, x_{<t} \sim \mathcal{N}(\mu_t^{dec}, \Sigma_t^{dec}), where (\mu_t^{dec}, \Sigma_t^{dec}) = \phi^{dec}_{\omega(z_t, h_{t-1})}(z_t, h_{t-1}).   (2)
We hypothesize that the previous observations and latent variables, characterized by h_{t-1}, define a prior distribution p(z_t | x_{<t}, z_{<t}) over the latent variable z_t:
z_t | x_{<t}, z_{<t} \sim \mathcal{N}(\mu_t^{prior}, \Sigma_t^{prior}), where (\mu_t^{prior}, \Sigma_t^{prior}) = \phi^{prior}(h_{t-1}).   (3)
Eqs. 2 and 3 result in the following generation process for sequential data:
p(x_{\le T}, z_{\le T}) = \prod_{t=1}^{T} p(z_t | x_{<t}, z_{<t})\, p(x_t | x_{<t}, z_{\le t}).   (4)
The true posterior distributions of z_t conditioned on observations x_{\le t} and latent variables z_{<t} are intractable, posing a challenge in both sampling and learning. Therefore, we introduce an approximate posterior q(z_t | x_{\le t}, z_{<t}) such that
z_t | x_{\le t}, z_{<t} \sim \mathcal{N}(\mu_t^{enc}, \Sigma_t^{enc}), where (\mu_t^{enc}, \Sigma_t^{enc}) = \phi^{enc}(x_t, h_{t-1}).   (5)
This approximate posterior distribution enables the model to be trained by maximizing a variational lower bound, e.g., ELBO (Kingma & Welling, 2013), IWAE (Burda et al., 2016) or FIVO (Maddison et al., 2017). We refer to the main components of our model, including g, \phi^{dec}, \phi^{enc}, \phi^{prior}, as primary networks, and refer to the components responsible for generating parameters, \theta and \omega, as hyper networks in the following sections.
Implementation Following the practice of VAE, we parametrize the covariance matrices \Sigma_t^{prior}, \Sigma_t^{dec} and \Sigma_t^{enc} as diagonal matrices. Note that \Sigma_t^{prior} in our model is no longer an identity matrix as in a vanilla VAE; it is the output of \phi^{prior} and depends on the hidden state h_{t-1} at the previous time step.
The recurrence model g in Eq. 1 is implemented as an RNN cell, which takes as input x_t and z_t at each time step t and updates the hidden state h_{t-1}. The parameters of g are generated by the hyper network \theta(z_t, h_{t-1}), as illustrated in Figure 1b. \theta is also implemented using an RNN to capture the history of data dynamics, with z_t and h_{t-1} as input at each time step t. However, it is computationally costly to generate all the parameters of g using \theta(z_t, h_{t-1}). Following the practice of previous works (Ha et al., 2016; Krueger et al., 2017), the hyper network \theta maps z_t and h_{t-1} to bias and scaling vectors. The scaling vectors modify the parameters of g by scaling each row of the weight matrices, routing information in the input and hidden state vectors through different channels. To better illustrate this mechanism, we exemplify the recurrence model g using an RNN cell with LSTM-style update rules and gates. Let * \in {i, f, g, o} denote one of the four LSTM-style gates in g. W_* and U_* denote the input and recurrent weights of each gate in the LSTM cell, respectively. The hyper network \theta(z_t, h_{t-1}) outputs d_{i*} and d_{h*}, the scaling vectors for the input weights W_* and recurrent weights U_* of the recurrent model g in Eq. 1. The overall implementation of g in Eq. 1 can be described as follows:
i_t = \sigma(d_{ii}(z_t, h_{t-1}) \circ (W_i y_t) + d_{hi}(z_t, h_{t-1}) \circ (U_i h_{t-1})),
f_t = \sigma(d_{if}(z_t, h_{t-1}) \circ (W_f y_t) + d_{hf}(z_t, h_{t-1}) \circ (U_f h_{t-1})),
g_t = \tanh(d_{ig}(z_t, h_{t-1}) \circ (W_g y_t) + d_{hg}(z_t, h_{t-1}) \circ (U_g h_{t-1})),
o_t = \sigma(d_{io}(z_t, h_{t-1}) \circ (W_o y_t) + d_{ho}(z_t, h_{t-1}) \circ (U_o h_{t-1})),
c_t = f_t \circ c_{t-1} + i_t \circ g_t,
h_t = o_t \circ \tanh(c_t),
where \circ denotes the Hadamard product and y_t is a fusion (e.g., concatenation) of the observation x_t and latent variable z_t. For simplicity of notation, bias terms are omitted from the above equations. This implementation of the recurrence model in VHRNN is further illustrated in the diagram in Fig. 6 in the appendix.
Another hyper network \omega(z_t, h_{t-1}) generates the parameters of the generative model in Eq. 2. It is implemented as a multilayer perceptron (MLP). Similar to \theta(z_t, h_{t-1}), its outputs are the bias and scaling vectors that modify the parameters of the decoder \phi^{dec}_{\omega(z_t, h_{t-1})}. A sketch of the hyper-modulated recurrence is given below.
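The following PyTorch cell is a minimal sketch of the recurrence just described, applying per-step scaling vectors from a hyper network to the four LSTM-style gates; using a single linear hyper layer (instead of the paper's hyper RNN) and folding the bias into the scaled pre-activations are simplifications of ours, and the layer sizes are illustrative.

import torch
import torch.nn as nn

class VHRNNCell(nn.Module):
    # LSTM-style cell whose input/recurrent pre-activations are scaled
    # per step by vectors d_{i*}, d_{h*} emitted from (z_t, h_{t-1}).
    def __init__(self, x_dim, z_dim, h_dim):
        super().__init__()
        y_dim = x_dim + z_dim                      # y_t = concat(x_t, z_t)
        self.W = nn.Linear(y_dim, 4 * h_dim)       # base input weights (i, f, g, o)
        self.U = nn.Linear(h_dim, 4 * h_dim, bias=False)  # base recurrent weights
        # hyper net: maps (z_t, h_{t-1}) to 8 scaling vectors
        # (4 gates x {input path, recurrent path}); an MLP stands in
        # for the paper's hyper RNN to keep the sketch short
        self.hyper = nn.Linear(z_dim + h_dim, 8 * h_dim)

    def forward(self, x_t, z_t, state):
        h, c = state
        y = torch.cat([x_t, z_t], dim=-1)
        d = self.hyper(torch.cat([z_t, h], dim=-1))
        d_in, d_rec = d.chunk(2, dim=-1)           # scaling for W- and U-paths
        gates = d_in * self.W(y) + d_rec * self.U(h)
        i, f, g, o = gates.chunk(4, dim=-1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

A full VHRNN step would additionally sample z_t from the prior (Eq. 3) or the approximate posterior (Eq. 5) before calling this cell.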
Another hyper network $\omega(z_t, h_{t-1})$ generates the parameters of the generative model in Eq. 2. It is implemented as a multilayer perceptron (MLP). Similar to $\theta(z_t, h_{t-1})$, its outputs are the bias and scaling vectors that modify the parameters of the decoder $\phi^{dec}_{\omega(z_t, h_{t-1})}$." }, { "heading": "4 SYSTEMATIC GENERALIZATION ANALYSIS OF VHRNN", "text": "In terms of the general functional form in Eq. 1, the recurrence of VRNN and VHRNN both depend on $z_t$ and $h_{t-1}$, so a sufficiently large VRNN could capture the same behaviour as VHRNN in theory. However, VHRNN's structure better encodes the inductive bias that the underlying dynamics could change: they could slightly deviate from the typical behaviour in a regime, or there could be a drastic switch to a new regime. With finite training data and finite parameters, this inductive bias can lead to qualitatively different learned behaviour, which we demonstrate and analyze now.

In the spirit of Bahdanau et al. (2019), we perform a systematic generalization study of VHRNN in comparison to the VRNN baseline. We train the models on one synthetic dataset with each sequence generated by fixed linear dynamics and corrupted by a heteroskedastic noise process. We demonstrate that VHRNN can disentangle the two contributions of variation and learn the different base patterns of the complex dynamics while doing so with fewer parameters. Furthermore, VHRNN can generalize to a wide range of unseen dynamics, despite the much simpler training set.

The synthetic dataset is generated by the following recurrence equation:
$$x_t = W x_{t-1} + \sigma_t \epsilon_t, \quad (6)$$
where $\epsilon_t \in \mathbb{R}^2$ is two-dimensional standard Gaussian noise and $x_0$ is randomly initialized from a uniform distribution over $[-1, 1]^2$. For each sequence, $W \in \mathbb{R}^{2\times 2}$ is sampled from 10 predefined random matrices $\{W_i\}_{i=1}^{10}$ with equal probability; $\sigma_t$ is the standard deviation of the additive noise at time $t$ and takes values in $\{0.25, 1, 4\}$. The noise level shifts twice within a sequence; i.e., there are exactly two $t$'s such that $\sigma_t \ne \sigma_{t-1}$. We generate 800 sequences for training, 100 sequences for validation, and 100 sequences for test using the same sets of predefined matrices. The models are trained and evaluated using FIVO as the objective. The results on the test set are almost the same as those on the training set for both VRNN and VHRNN. We also find that VHRNN shows better performance than VRNN with fewer parameters, as shown in Tab. 1, column Test. The size of the hidden state in RNN cells is set to be the same as the latent size for both types of models.
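For reproducibility, the following is a minimal NumPy sketch of this data generation process. The sequence length and the distribution of the switch times are our own assumptions, since the paper only specifies that the noise level changes exactly twice per sequence.

```python
import numpy as np

def make_sequences(n_seq=800, T=100, n_dyn=10, sigmas=(0.25, 1.0, 4.0), seed=0):
    """Sequences following x_t = W x_{t-1} + sigma_t * eps_t (Eq. 6)."""
    rng = np.random.default_rng(seed)
    Ws = rng.normal(size=(n_dyn, 2, 2))          # predefined dynamics {W_i}
    data = np.zeros((n_seq, T, 2))
    for s in range(n_seq):
        W = Ws[rng.integers(n_dyn)]
        # Two switch points split [0, T) into three noise regimes (assumption).
        t1, t2 = np.sort(rng.choice(np.arange(1, T), size=2, replace=False))
        levels = [rng.choice(sigmas)]
        for _ in range(2):                       # consecutive levels forced to differ
            levels.append(rng.choice([v for v in sigmas if v != levels[-1]]))
        sigma = np.concatenate([np.full(t1, levels[0]),
                                np.full(t2 - t1, levels[1]),
                                np.full(T - t2, levels[2])])
        x = rng.uniform(-1, 1, size=2)
        for t in range(T):
            x = W @ x + sigma[t] * rng.normal(size=2)
            data[s, t] = x
    return data
```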
We further study the behavior of VRNN and VHRNN under the following systematically varied settings:

• NOISELESS In this setting, sequences are generated using a similar recurrence rule with the same set of predefined weights but without the additive noise at each step. That is, $\sigma_t = 0$ in Eq. 6 for all time steps $t$. Exponential growth of the data can occur when the singular values of the underlying weight matrix are greater than 1.
• SWITCH In this setting, three NOISELESS sequences are concatenated into one, which as a result contains regime shifts. This setting requires the model to identify and re-identify the underlying pattern after observing changes.
• RAND In this setting, the deterministic transition matrix in Eq. 6 is set to the identity matrix (i.e., $W = I$), leading to long sequences of pure random walks with switching magnitudes of noise. The standard deviation of the additive noise randomly switches up to 3 times among $\{0.25, 1, 4\}$ in one sequence.
• LONG In this setting, we generate extra-long NOISELESS sequences with twice the total number of steps using the same set of predefined weights. The data scale can grow well beyond the range of the training data when exponential growth happens.
• ZERO-SHOT In this setting, NOISELESS sequences are generated such that the training data and test data use different sets of weight matrices.
• ADD In this setting, sequences are generated by a different recurrence rule: $x_t = x_{t-1} + b$, where $b$ and $x_0$ are uniformly sampled from $[0, 1]^2$.

Tab. 1 summarizes the experimental results. We can see that the VRNN model, depending on model complexity, either underfits the original data generation pattern (Test) or fails to generalize to more complicated settings. In contrast, the VHRNN model does not suffer from such problems and uniformly outperforms VRNN models under all settings. To qualitatively study the behavior of VHRNN and VRNN, we consider a VRNN with a latent dimension of 8 and a VHRNN with a latent dimension of 4, and make the following observations:

Dynamic Regime Identification and Re-identification Fig. 2 shows a sample sequence under the NOISELESS setting. VRNN has high KL divergence between the prior and the variational posterior most of the time. In contrast, VHRNN has a decreasing trend of KL divergence while still making accurate mean reconstructions as it observes more data. As the KL divergence measures the discrepancy between the prior defined in Eq. 3 and the posterior that has information from the current observation, simultaneously low reconstruction error and low KL divergence mean that the prior distribution would be able to predict with low errors as well, indicating that the correct underlying dynamics model has likely been utilized. This trend even generalizes to settings with sources of variation unseen in the training data, namely ZERO-SHOT and ADD. We speculate that this trend reflects the model's ability to identify the underlying data generation pattern in the sequence. The decreasing trend is especially apparent when a sudden and large change in scale happens. We hypothesize that larger changes in scale can better help our model, VHRNN, identify the underlying data generation process, because our model is trained on sequential data generated with compound noise. The observation further corroborates our conjecture that the KL divergence rises again once the sequence switches from one underlying weight to another, as shown in Fig. 3. It is worth noting that the KL increase happens with some latency after the sequence switches in the SWITCH setting, as the model reacts to the change and tries to reconcile it with the prior belief of the underlying regime in effect.
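For reference, the per-step KL divergence tracked in these analyses, between the diagonal Gaussian prior of Eq. 3 and the approximate posterior of Eq. 5, has the standard closed form; a minimal PyTorch sketch (the function name is ours) follows.

```python
import torch

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, diag(exp(logvar_q))) || N(mu_p, diag(exp(logvar_p))) ).

    Standard closed form for diagonal Gaussians; evaluated per time step to
    compare the posterior (Eq. 5) against the learned prior (Eq. 3).
    """
    return 0.5 * (logvar_p - logvar_q
                  + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                  - 1.0).sum(dim=-1)
```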
Uncertainty Identification Fig. 4 shows that the predicted log-variance of VHRNN can more accurately reflect the change of noise levels under the RAND setting than VRNN. VHRNN can also better handle uncertainty than VRNN in the following two situations. As shown in Fig. 3f, VHRNN can more aggressively adapt its variance prediction based on the scale of the data than VRNN. It keeps its predicted variance at a low level when the data scale is small and increases the value when the scale of the data becomes large. VHRNN makes inaccurate mean predictions relatively far from the target value when the underlying generation dynamics switch in the SWITCH setting. The switch of the weight matrix is another important source of uncertainty. We observe that VHRNN also makes a large log-variance prediction in this situation, even when the scale of the observation is small. Aggressively increasing its uncertainty about the prediction when a switch happens prevents the VHRNN model from paying a high reconstruction cost, as shown by the second spike in Fig. 3f. This increase in variance prediction also happens when exponential growth becomes apparent in the LONG setting and the scale of the observed data goes beyond the range of the training data. Given the large change in data scale, such flexibility to predict a large variance is key for VHRNN to avoid paying a large reconstruction cost.

These two advantages of VHRNN over VRNN not only explain the better performance of VHRNN on the synthetic data but are also critical to RNNs' ability to model real-world data with large variations both across and within sequences. Examples under other settings showing the above properties are deferred to the Appendix." }, { "heading": "5 EXPERIMENTS ON REAL-WORLD DATA", "text": "We experiment with the VHRNN model on several real-world datasets and compare it against the VRNN model. VRNN trained and evaluated using FIVO (Maddison et al., 2017) demonstrates state-of-the-art performance on various sequence modeling tasks. Our experiments demonstrate the superior parameter-performance efficiency and generalization ability of VHRNN over VRNN. All models are trained using FIVO (Maddison et al., 2017), and we report FIVO per step when evaluating models. Two polyphonic music datasets are considered: JSB Chorale and Piano-midi.de (Boulanger-Lewandowski et al., 2012). We also train and test our models on financial time series data and on the HT Sensor dataset (Huerta et al., 2016), which contains sequences of sensor readings when different types of stimuli are applied in an environment during experiments. We also trained and evaluated the HyperLSTM model without latent variables proposed by Ha et al. (2016) on a few of the datasets above. The results and comparisons are deferred to the appendix.

For the VRNN model, we use a single-layer LSTM and set the dimension of the hidden state to be the same as the latent dimension. For the VHRNN model, $\theta$ in Eq. 1 is implemented using a single-layer LSTM to generate weights for the recurrence module in the primary networks. We use an RNN cell with LSTM-style gates and update rules for the recurrence module $g$ in our experiments. The hidden state sizes of both the primary network and the hyper network are the same as the latent dimension. A linear transformation directly maps the hyper hidden state to the scaling and bias vectors in the primary network. More details on the architectures of the encoder, generation, and prior networks are given in the appendix.

Polyphonic Music JSB Chorale and Piano-midi.de are music datasets (Boulanger-Lewandowski et al., 2012) with complex patterns and large variance both within and across sequences. The datasets are split into the standard train, validation, and test sets. More details on data preprocessing and the training and evaluation setup are deferred to the appendix.

We report the FIVO per time step of VHRNNs and VRNNs and their parameter counts in Fig. 5a and Fig. 5b. The results show that VHRNNs have better performance and parameter efficiency. The number of parameters and FIVO per time step of each model are plotted in the figures, and the latent dimension is also annotated. The parameter-performance plots show that the VHRNN model has uniformly better performance than VRNN with a comparable number of parameters.
The best FIVO achieved by VHRNN on the JSB dataset is −6.76 (VHRNN-14), compared to −6.92 for VRNN (VRNN-32), which requires close to one third more parameters. This best VRNN model is even worse than the smallest VHRNN model we have evaluated. It is also observed that VHRNN is less prone to overfitting and has better generalization ability than VRNN as the number of parameters keeps growing. Similar trends can be seen on the Piano-midi.de dataset in Fig. 5b. We also find that the better performance of VHRNN over VRNN generalizes to the scenario where we replace the LSTM with a Gated Recurrent Unit (GRU). Experimental results using the GRU implementation are deferred to the appendix.

Stock Financial time series data, such as daily stock prices, are highly volatile with large noise. The market volatility is affected by many external factors and can change tremendously all of a sudden. To test the models' ability to adapt to different volatility levels and noise patterns, we compare VHRNN and VRNN on stock price data collected in a period when the market went through rapid changes. The data are collected from 445 stocks in the S&P500 index in 2008, when a global financial crisis happened. The dataset contains the opening, closing, highest and lowest prices, and volume on each day. The networks are trained on sequences from the first half of the year and tested on sequences from the second half, during which the market suddenly became significantly more volatile due to the financial crisis.

The evaluation results are shown in Fig. 5c. The plot shows that VHRNN models consistently outperform VRNN models regardless of the latent dimension and number of parameters. The results indicate that VHRNN generalizes better to sequential data in which the underlying data generation pattern suddenly shifts, even if the new dynamics are not seen in the training data.

HT Sensor The comparison is also performed on a dataset with less variation and simpler patterns than the previous datasets. The HT Sensor dataset contains sequences of gas, humidity, and temperature sensor readings in experiments where some stimulus is applied after a period of background activity (Huerta et al., 2016). There are only two types of stimuli in the experiments: banana and wine. In some sequences, no stimulus is applied, and they only contain readings under background noise. Experimental results on the HT Sensor dataset are shown in Fig. 5d.

It is observed that VHRNN has comparable performance to VRNN on the HT Sensor dataset when using a similar number of parameters. For example, VHRNN achieves a FIVO per time step of 14.41 with 16 latent dimensions and 24200 parameters, while VRNN shows slightly worse performance with 28 latent dimensions and approximately 26000 parameters. When the number of parameters goes slightly beyond 34000, the FIVO of VHRNN decays to 12.45, compared to 12.37 for VRNN." }, { "heading": "6 ABLATION STUDY", "text": "We further investigate the effects of the hidden state and latent variable on the performance of the variational hyper RNN in two aspects: the dimension of the latent variable, and the contributions of the hidden state and latent variable as inputs to the hyper networks.

Latent Dimension In the previous experiments on real-world datasets, the latent dimension and hidden state dimension are set to be the same for each model. This causes VHRNN to have significantly more parameters than a VRNN when using the same latent dimension.
To eliminate the effects of the difference in model size, we allow the latent dimension and hidden state dimension to be different. We also reduce the hidden layer size of the hyper network that generates the weights of the decoder. These changes allow us to compare VRNN and VHRNN models with the same latent dimension and a similar number of parameters. The results on the JSB Chorale dataset are presented in Tab. 2, in which we denote the latent dimension by Z dim. We observe that VHRNNs always achieve better FIVO than VRNNs with the same latent dimension. The results show that the superior performance of VHRNN over VRNN does not stem from a smaller latent dimension when using a comparable number of parameters.

Inputs to the Hyper Networks We retrain and evaluate the performance of VHRNN models on the JSB Chorale dataset and the synthetic sequences when feeding the latent variable only, the hidden state only, or both to the hyper networks. The results are shown in Tab. 3. It is observed that, on JSB Chorale, VHRNN has the best performance and generalization ability when it takes the latent variable as its only input. Relying on the primary network's hidden state only, or on the combination of latent variable and hidden state, leads to worse performance. When the dimension of the hidden state is 32, a VHRNN taking only the hidden state as hyper input suffers from over-parameterization and has worse performance than a VRNN with the same hidden state dimension. On the test set of the synthetic data, VHRNN obtains the best performance when it takes both the hidden state and the latent variable as inputs. We surmise that this difference is due to the fact that, for the synthetic data, historical information is critical for determining the underlying recurrent weights and the current noise level. Nevertheless, the ablation study on both datasets shows the importance of the sampled latent variable as an input to the hyper networks. Therefore, both the hidden state and the latent variable are used as inputs to the hyper networks on the other datasets for consistency." }, { "heading": "7 CONCLUSION", "text": "In this paper, we introduce the variational hyper RNN (VHRNN) model, which dynamically generates parameters based on the observations and latent variables. Such flexibility enables VHRNN to better model sequential data with complex patterns and large variations within and across samples than VRNN models that use fixed weights. VHRNN can be trained with existing off-the-shelf variational objectives. Experiments on synthetic datasets with different generation patterns show that VHRNN can better disentangle and identify the underlying dynamics and uncertainty in data than VRNN. We also demonstrate the superior parameter-performance efficiency and generalization ability of VHRNN on real-world datasets with different levels of variability and complexity." }, { "heading": "D REAL-WORLD DATASETS PREPROCESSING DETAILS", "text": "Polyphonic Music Each sample in the polyphonic music datasets, JSB Chorale and Piano-midi.de, is represented as a sequence of 88-dimensional binary vectors. The data are preprocessed by mean-centering along each dimension per dataset.

Stock We randomly select 345 companies and use their daily stock prices and volume in the first half of 2008 to obtain training data. We use another 50 companies' data in the second half of 2008 to generate the validation set, and obtain the test set from the remaining 50 companies during the same period. The sequences are first preprocessed by taking the log ratio of values between consecutive days. Each sequence has a fixed length of 125. The log-ratio sequences are normalized using the mean and standard deviation of the training set along each dimension.
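A minimal NumPy sketch of this preprocessing is given below; the array layout and function name are our own illustration, not the code used for the paper.

```python
import numpy as np

def preprocess_stock(prices, train_mask):
    """Log-ratio transform and z-score normalization for stock sequences.

    prices: array of shape (n_seq, T + 1, n_features), strictly positive values
            (open/close/high/low prices and volume per day).
    train_mask: boolean array of shape (n_seq,) marking training sequences.
    """
    # Log ratio between consecutive days, shape (n_seq, T, n_features).
    log_ratio = np.log(prices[:, 1:, :] / prices[:, :-1, :])
    # Normalize with statistics computed on the training split only.
    mu = log_ratio[train_mask].mean(axis=(0, 1))
    sd = log_ratio[train_mask].std(axis=(0, 1))
    return (log_ratio - mu) / (sd + 1e-8)
```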
HT Sensor The HT Sensor dataset collects readings from 11 sensors under certain stimuli in an experiment. The readings of the sensors are recorded at a rate of once per second. We segment a sequence of 3000 seconds every 1000 seconds in the dataset and downsample the sequence by a rate of 30. Each sequence we obtain has a fixed length of 100. The types of sequences include pure background noise, stimulus before and after background noise, and stimulus between two periods of background noise. The data are normalized to zero mean and unit variance along each dimension. We use 532 sequences for training, 68 sequences for validation, and 74 sequences for testing." }, { "heading": "E TRAINING AND EVALUATION DETAILS ON REAL-WORLD DATASETS.", "text": "For all the real-world data, the models, both VRNN and VHRNN, are trained with a batch size of 4 and a particle size of 4. When evaluating the models, we use a particle size of 128 for the polyphonic music datasets and 1024 for the Stock and HT Sensor datasets.

F VHRNN AND VRNN HIDDEN UNIT VERSUS PERFORMANCE COMPARISON

We also compare VHRNN and VRNN by plotting the models' performance against their number of hidden units. The models considered here are the same as the models presented in Fig. 5: we use a single-layer LSTM model for the RNN part, and the dimension of the LSTM's hidden state is the same as the latent dimension. Note that VHRNN uses two LSTM models, one primary network and one hyper network; therefore, the number of hidden units in a VHRNN model is twice the latent dimension. We can see that VHRNN also dominates the performance of VRNN with a similar or smaller number of hidden units in most settings. Furthermore, the fact that VHRNN almost always outperforms VRNN for all parameter or hidden unit sizes precisely shows the superiority of the new architecture. The results from Fig. 5 and Fig. 10 are consolidated into Tab. 4.

G VHRNN AND VRNN PERFORMANCE-PARAMETER COMPARISON USING GRU ON JSB CHORALE DATASET

Fig. 11 shows the parameter-performance plots of VHRNN and VRNN using the GRU implementation on the JSB Chorale dataset. VHRNN models consistently outperform VRNN models under all settings.

H VHRNN AND HYPERLSTM PERFORMANCE COMPARISON

We compare our VHRNN models using LSTM cells with the HyperLSTM models proposed in HyperNetworks (Ha et al., 2016) on the JSB Chorale and Stock datasets. Compared with VHRNN, HyperLSTM does not have latent variables; therefore, it does not have an encoder or a decoder either. Our implementation of HyperLSTM resembles the recurrence model of VHRNN defined in Equation 6. At each time step, the HyperLSTM model predicts the output distribution by mapping the RNN's hidden state to the parameters of binary distributions for the JSB Chorale dataset and of a Gaussian mixture for the Stock dataset. We consider 3 and 5 as the number of components in the Gaussian mixture distribution. HyperLSTM models are trained with the same batch size and learning rate as VHRNN models.

We show the parameter-performance comparison between VHRNN, VRNN, and HyperLSTM models in Fig. 12. The number of components used by HyperLSTM for the Stock dataset is 5 in the plot. Since HyperLSTM models do not have latent variables, the indicator on top of each point shows the number of hidden units in each model for all three of them.
The number of hidden units for the HyperLSTM model is also twice the dimension of the hidden states, as HyperLSTM has two RNNs, one primary and one hyper. We report FIVO for VHRNN and VRNN models and the exact log likelihood for HyperLSTM models. Even though FIVO is a lower bound of the log likelihood, we can see that the performance of VHRNN completely dominates HyperLSTM regardless of the number of hidden units. In fact, the performance of HyperLSTM is even worse than that of VRNN models, which do not have hyper networks. The results indicate the importance of latent variables for modeling complex time-series data.

We also show the hidden-units-versus-performance comparison between VHRNN and VRNN in Fig. 13. The comparison shows similar results.

Complete experimental results of HyperLSTM models on the two datasets are shown in Tab. 5.

I VHRNN WITHOUT TEMPORAL STRUCTURE IN HYPER NETWORKS

Using an RNN to generate the parameters of another RNN has been studied in HyperNetworks (Ha et al., 2016) and delivers promising performance. It also seems like a natural choice, as the hidden state of the primary RNN can represent the history of the observed data, while the hidden state of the hyper RNN can track the history of the data generation dynamics. However, it is still worthwhile to study other design choices that do not have the recurrence structure in the hyper networks of VHRNN. As an ablation study, we experimented with VHRNN models that replace the RNN with a three-layer feed-forward network as the hyper network θ for the recurrence model g as defined in Equation 6. We keep the other components of VHRNN unchanged and evaluate on JSB Chorale, Stock, and the synthetic dataset. The evaluation results using FIVO are presented in Tab. 6, and systematic generalization study results on the synthetic dataset are shown in Tab. 7. We denote the original VHRNN with the recurrence structure in θ as VHRNN-RNN and the variant without the recurrence structure as VHRNN-MLP.

As we can see, given the same latent dimension, VHRNN-MLP models have more parameters than VHRNN-RNN models. VHRNN-MLP can have slightly better performance than VHRNN-RNN in some cases, but it performs worse than VHRNN-RNN in more settings. The performance of VHRNN-MLP also degrades faster than that of VHRNN-RNN on the JSB Chorale dataset as we increase the latent dimension. Moreover, the systematic generalization study on the synthetic dataset shows that VHRNN-MLP performs worse than VHRNN-RNN in both the test setting and the systematically varied settings." } ]
2019
null
SP:74aafec80535022cbdd83067763fd7bced294ace
[ "This paper proposes to analyze the loss of neural networks in the Fourier domain. Since this is computationally expensive for larger-dimensional datasets, the analysis instead first projects the data onto the principal component of the data, and then using a Gaussian kernel estimation (which has nice properties in the Fourier domain). The analysis finds that DNNs tend to learn low-frequency components before high-frequency ones.", "The paper studies the training process of NNs through the lens of Fourier analysis. The authors argue that during the training process, NNs will first learn low frequencies part of the function first and then the high frequency part. To verify this claim empirically, the author propose two methods: 1. examine the convergence of different frequencies in a pre-selected direction in the frequency space during training; 2. examine the convergence rate of the 2-norm of low v.s. high frequencies during training. Through the experimental results of these two methods, the authors conclude that NNs learn the low frequency components before the high frequency components. The authors also discuss a potential application of this observation to solving high dimensional PDEs: coupling DNNs training (good at learning low frequency components) with the Jacobi method (good at learning high frequency components). Finally, the authors also provide some theoretical intuition (Thm 1., 2.) why low frequency components are learned faster and an explanation why NNs could generalize well on images but perform poorly on tasks like learning parity functions. " ]
We study the training process of Deep Neural Networks (DNNs) from the Fourier analysis perspective. We demonstrate a very universal Frequency Principle (F-Principle), namely that DNNs often fit target functions from low to high frequencies, on high-dimensional benchmark datasets such as MNIST/CIFAR10 and deep neural networks such as VGG16. This F-Principle of DNNs is opposite to the behavior of most conventional iterative numerical schemes (e.g., the Jacobi method), which exhibit faster convergence for higher frequencies on various scientific computing problems. With theories under an idealized setting, we illustrate that this F-Principle results from the smoothness/regularity of the commonly used activation functions. The F-Principle implies an implicit bias that DNNs tend to fit training data by a low-frequency function. This understanding provides an explanation of the good generalization of DNNs on most real datasets and the bad generalization of DNNs on the parity function or a randomized dataset.
[]
[ { "authors": [ "Devansh Arpit", "Stanislaw Jastrzebski", "Nicolas Ballas", "David Krueger", "Emmanuel Bengio", "Maxinder S Kanwal", "Tegan Maharaj", "Asja Fischer", "Aaron Courville", "Yoshua Bengio" ], "title": "A closer look at memorization in deep networks", "venue": "arXiv preprint arXiv:1706.05394,", "year": 2017 }, { "authors": [ "Peter L Bartlett", "Vitaly Maiorov", "Ron Meir" ], "title": "Almost linear VC dimension bounds for piecewise polynomial networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 1999 }, { "authors": [ "Ronald Newbold Bracewell", "Ronald N Bracewell" ], "title": "The Fourier transform and its applications, volume 3", "venue": null, "year": 1986 }, { "authors": [ "Wei Cai", "Zhi-Qin John Xu" ], "title": "Multi-scale deep neural networks for solving high dimensional PDEs", "venue": "arXiv preprint arXiv:1910.11710,", "year": 2019 }, { "authors": [ "Wei Cai", "Xiaoguang Li", "Lizuo Liu" ], "title": "A phase shift deep neural network for high frequency wave equations in inhomogeneous media", "venue": "arXiv preprint,", "year": 2019 }, { "authors": [ "Weinan E", "Bing Yu" ], "title": "The Deep Ritz method: A deep learning-based numerical algorithm for solving variational problems", "venue": "Communications in Mathematics and Statistics,", "year": 2018 }, { "authors": [ "Weinan E", "Jiequn Han", "Arnulf Jentzen" ], "title": "Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations", "venue": "Communications in Mathematics and Statistics,", "year": 2017 }, { "authors": [ "Weinan E", "Chao Ma", "Lei Wu" ], "title": "A priori estimates of the generalization error for two-layer neural networks", "venue": "arXiv preprint arXiv:1810.06397,", "year": 2018 }, { "authors": [ "Lawrence C Evans" ], "title": "Partial differential equations", "venue": null, "year": 2010 }, { "authors": [ "Yuwei Fan", "Lin Lin", "Lexing Ying", "Leonardo Zepeda-Núñez" ], "title": "A multiscale neural network based on hierarchical matrices", "venue": "arXiv preprint arXiv:1807.01883,", "year": 2018 },
{ "authors": [ "Jiequn Han", "Linfeng Zhang", "Roberto Car" ], "title": "Deep potential: A general representation of a many-body potential energy surface", "venue": "arXiv preprint arXiv:1707.01478,", "year": 2017 }, { "authors": [ "Moritz Hardt", "Benjamin Recht", "Yoram Singer" ], "title": "Train faster, generalize better: Stability of stochastic gradient descent", "venue": "arXiv preprint arXiv:1509.01240,", "year": 2015 }, { "authors": [ "Juncai He", "Lin Li", "Jinchao Xu", "Chunyue Zheng" ], "title": "ReLU deep neural networks and linear finite elements", "venue": "arXiv preprint arXiv:1807.03973,", "year": 2018 }, { "authors": [ "Nitish Shirish Keskar", "Dheevatsa Mudigere", "Jorge Nocedal", "Mikhail Smelyanskiy", "Ping Tak Peter Tang" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima", "venue": "arXiv preprint arXiv:1609.04836,", "year": 2016 }, { "authors": [ "Yuehaw Khoo", "Jianfeng Lu", "Lexing Ying" ], "title": "Solving parametric PDE problems with artificial neural networks", "venue": "arXiv preprint arXiv:1707.03351,", "year": 2017 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Alex Krizhevsky", "Vinod Nair", "Geoffrey Hinton" ], "title": "CIFAR-10 (Canadian Institute for Advanced Research)", "venue": "URL http://www.cs.toronto.edu/~kriz/cifar.html,", "year": 2010 }, { "authors": [ "Yann LeCun" ], "title": "The MNIST database of handwritten digits", "venue": "http://yann.lecun.com/exdb/mnist/,", "year": 1998 }, { "authors": [ "Tao Luo", "Zheng Ma", "Zhi-Qin John Xu", "Yaoyu Zhang" ], "title": "Theory of the frequency principle for general deep neural networks", "venue": "arXiv preprint arXiv:1906.09235,", "year": 2019 }, { "authors": [ "Behnam Neyshabur", "Srinadh Bhojanapalli", "David McAllester", "Nati Srebro" ], "title": "Exploring generalization in deep learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Maxwell Nye", "Andrew Saxe" ], "title": "Are efficient deep representations learnable?", "venue": null, "year": 2018 },
{ "authors": [ "Nasim Rahaman", "Devansh Arpit", "Aristide Baratin", "Felix Draxler", "Min Lin", "Fred A Hamprecht", "Yoshua Bengio", "Aaron Courville" ], "title": "On the spectral bias of deep neural networks", "venue": "arXiv preprint arXiv:1806.08734,", "year": 2018 }, { "authors": [ "Shai Shalev-Shwartz", "Ohad Shamir", "Shaked Shammah" ], "title": "Failures of gradient-based deep learning", "venue": "arXiv preprint arXiv:1703.07950,", "year": 2017 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Daniel Soudry", "Elad Hoffer", "Mor Shpigel Nacson", "Suriya Gunasekar", "Nathan Srebro" ], "title": "The implicit bias of gradient descent on separable data", "venue": "Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "Lei Wu", "Zhanxing Zhu", "Weinan E" ], "title": "Towards understanding generalization of deep learning: Perspective of loss landscapes", "venue": "arXiv preprint arXiv:1706.10239,", "year": 2017 }, { "authors": [ "Zhi-Qin J Xu", "Yaoyu Zhang", "Yanyang Xiao" ], "title": "Training behavior of deep neural network in frequency domain", "venue": "arXiv preprint arXiv:1807.01251,", "year": 2018 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "arXiv preprint arXiv:1611.03530,", "year": 2016 }, { "authors": [ "Hui-Ling Zhen", "Xi Lin", "Alan Z Tang", "Zhenhua Li", "Qingfu Zhang", "Sam Kwong" ], "title": "Nonlinear collaborative scheme for deep neural networks", "venue": "arXiv preprint arXiv:1811.01316,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Understanding the training process of Deep Neural Networks (DNNs) is a fundamental problem in the area of deep learning. We find a common behavior of the gradient-based training process of DNNs, that is, a Frequency Principle (F-Principle):

DNNs often fit target functions from low to high frequencies during the training process.

In other words, at the early stage of training, the low frequencies are fitted, and as the number of training steps increases, the high frequencies are fitted. For example, when a DNN is trained to fit $y = \sin(x) + \sin(2x)$, its output would be close to $\sin(x)$ at the early stage, and as training goes on, its output would be close to $\sin(x) + \sin(2x)$. The F-Principle was observed empirically on synthetic low-dimensional data with MSE loss during DNN training (Xu et al., 2018; Rahaman et al., 2018). However, in deep learning, empirical phenomena can vary from one network structure to another and from one dataset to another, and can exhibit significant differences between synthetic data and high-dimensional real data. Therefore, the universality of the F-Principle remains an important problem for further study. Especially for high-dimensional real problems, demonstrating the F-Principle is a great challenge, because the computational cost of high-dimensional Fourier transforms is prohibitive in practice. On the other hand, the mechanism underlying the F-Principle and its implications for the application of DNNs, e.g., the design of DNN-based PDE solvers, as well as their generalization ability, are also important open problems to be addressed.

In this work, we design two methods, i.e., projection and filtering methods, to show that the F-Principle exists in the training process of DNNs for high-dimensional benchmarks, i.e., MNIST (LeCun, 1998) and CIFAR10 (Krizhevsky et al., 2010). The settings we have considered are i) different DNN architectures, e.g., fully-connected networks, convolutional neural networks (CNNs), and VGG16 (Simonyan & Zisserman, 2014); ii) different activation functions, e.g., tanh and rectified linear unit (ReLU); iii) different loss functions, e.g., cross entropy, mean squared error (MSE), and the loss energy functional in variational problems. These results demonstrate the universality of the F-Principle.

To facilitate the designs and applications of DNN-based schemes, we characterize a stark difference between DNNs and conventional numerical schemes on various scientific computing problems, where most of the conventional methods (e.g., the Jacobi method) exhibit the opposite convergence behavior: faster convergence for higher frequencies. This difference implies that DNNs can be adopted to accelerate the convergence of low frequencies for computational problems.

We also intuitively explain, with theories under an idealized setting, how the smoothness/regularity of commonly used activation functions contributes to the F-Principle. Note that this mechanism is rigorously demonstrated for DNNs in general settings in a subsequent work (Luo et al., 2019).
Finally, we discuss how the F-Principle provides an understanding of the good generalization of DNNs on many real datasets (Zhang et al., 2016) and the poor generalization in learning the parity function (Shalev-Shwartz et al., 2017; Nye & Saxe, 2018): the F-Principle, which implies that DNNs prefer low frequencies, is consistent with the property of low-frequency dominance in many real datasets, e.g., MNIST/CIFAR10, but not with the parity function, whose spectrum concentrates on high frequencies. Compared with previous studies, our main contributions are as follows:

1. By designing both the projection and filtering methods, we consistently demonstrate the F-Principle for MNIST/CIFAR10 over various architectures, such as VGG16, and various loss functions.

2. For the application of solving differential equations, we show that (i) conventional numerical schemes learn higher frequencies faster, whereas DNNs learn lower frequencies faster by the F-Principle; (ii) the convergence of low frequencies can be greatly accelerated with DNN-based schemes.

3. We present theories under an idealized setting to illustrate how the smoothness/regularity of the activation function contributes to the F-Principle.

4. We discuss in detail the implication of the F-Principle for the generalization of DNNs, namely that DNNs are implicitly biased towards low-frequency functions, and provide an explanation of the good and poor generalization of DNNs for low- and high-frequency dominant target functions, respectively." }, { "heading": "2 FREQUENCY PRINCIPLE", "text": "The concept of \"frequency\" is central to the understanding of the F-Principle. In this paper, \"frequency\" means response frequency, NOT image (or input) frequency, as explained in the following.

Image (or input) frequency (NOT used in this paper): the frequency of a 2-d function $I: \mathbb{R}^2 \to \mathbb{R}$ representing the intensity of an image over pixels at different locations. This frequency corresponds to the rate of change of intensity across neighbouring pixels. For example, an image of constant intensity possesses only the zero frequency, i.e., the lowest frequency, while a sharp edge contributes to high frequencies of the image.

Response frequency (used in this paper): the frequency of a general input-output mapping $f$. For example, consider a simplified classification problem on partial MNIST data using only the data with labels 0 and 1: $f(x_1, x_2, \cdots, x_{784}): \mathbb{R}^{784} \to \{0, 1\}$ maps the 784-d space of pixel values to a 1-d space, where $x_j$ is the intensity of the $j$-th pixel. Denote the mapping's Fourier transform as $\hat{f}(k_1, k_2, \cdots, k_{784})$. The frequency in the coordinate $k_j$ measures the rate of change of $f(x_1, x_2, \cdots, x_{784})$ with respect to $x_j$, i.e., the intensity of the $j$-th pixel. If $f$ possesses significant high frequencies for large $k_j$, then a small change of $x_j$ in the image might induce a large change of the output (e.g., an adversarial example). For a dataset with multiple classes, we can similarly define frequency for each output dimension. For real data, the response frequency is rigorously defined via the standard nonuniform discrete Fourier transform (NUDFT); see Appendix A.

Frequency Principle: DNNs often fit target functions from low to high (response) frequencies during the training process. An illustration of the F-Principle using a function of 1-d input is in Appendix B. The F-Principle is rigorously characterized through the frequency defined by the Fourier transform (Appendix A, Bracewell & Bracewell (1986)) and the converging speed defined by the relative error.
By using high-dimensional real datasets, we then experimentally demonstrate the F-Principle at the level of both individual frequencies (projection method) and coarse-grained frequencies (filtering method)." }, { "heading": "3 F-PRINCIPLE IN MNIST/CIFAR10 THROUGH PROJECTION METHOD", "text": "Real datasets are very different from the synthetic data used in previous studies. In order to utilize the F-Principle to understand and better use DNNs on real datasets, it is important to verify whether the F-Principle also holds for high-dimensional real datasets.

In the following experiments, we examine the F-Principle on a training dataset $\{(x_i, y_i)\}_{i=0}^{n-1}$, where $n$ is the size of the dataset, $x_i \in \mathbb{R}^d$ is a vector representing the image, and $y_i \in \{0,1\}^{10}$ is the output (a one-hot vector indicating the label for the image classification dataset). $d$ is the dimension of the input ($d = 784$ for MNIST and $d = 32 \times 32 \times 3$ for CIFAR10). Since the high-dimensional discrete Fourier transform (DFT) requires prohibitively high computational cost, in this section we only consider one direction in the Fourier space, through a projection method, for each examination." }, { "heading": "3.1 EXAMINATION METHOD: PROJECTION", "text": "For a dataset $\{(x_i, y_i)\}_{i=0}^{n-1}$, we consider one entry of the 10-d output, denoted by $y_i \in \mathbb{R}$. The high-dimensional discrete nonuniform Fourier transform of $\{(x_i, y_i)\}_{i=0}^{n-1}$ is $\hat{y}_{\boldsymbol{k}} = \frac{1}{n}\sum_{i=0}^{n-1} y_i \exp(-i 2\pi \boldsymbol{k} \cdot x_i)$. The number of all possible $\boldsymbol{k}$ grows exponentially with the dimension $d$. For illustration, in each examination we consider one direction of $\boldsymbol{k}$ in the Fourier space, i.e., $\boldsymbol{k} = k p_1$, where $p_1$ is a chosen and fixed unit vector, hence $|\boldsymbol{k}| = k$. Then we have $\hat{y}_k = \frac{1}{n}\sum_{i=0}^{n-1} y_i \exp(-i 2\pi (p_1 \cdot x_i) k)$, which is essentially the 1-d Fourier transform of $\{(x_{p_1,i}, y_i)\}_{i=0}^{n-1}$, where $x_{p_1,i} = p_1 \cdot x_i$ is the projection of $x_i$ onto the direction $p_1$ (Bracewell & Bracewell, 1986). For each training dataset, $p_1$ is chosen as the first principal component of the input space. To examine the convergence behavior of different frequency components during training, we compute the relative difference between the DNN output and the target function for selected important frequencies $k$ at each recording step, that is, $\Delta_F(k) = |\hat{h}_k - \hat{y}_k| / |\hat{y}_k|$, where $\hat{y}_k$ and $\hat{h}_k$ are the 1-d Fourier transforms of $\{y_i\}_{i=0}^{n-1}$ and the corresponding DNN output $\{h_i\}_{i=0}^{n-1}$, respectively, along $p_1$. Note that each response frequency component $\hat{h}_k$ of the DNN output evolves as training proceeds." }, { "heading": "3.2 MNIST/CIFAR10", "text": "In the following, we show empirically that the F-Principle is exhibited in the selected direction during the training process of DNNs applied to MNIST/CIFAR10 with cross-entropy loss. The network for MNIST is a fully-connected tanh DNN (784-400-200-10); the network for CIFAR10 consists of two ReLU convolutional layers followed by a fully-connected DNN (800-400-400-400-10). All experimental details of this paper can be found in Appendix C. We consider one of the 10-d outputs in each case, using the nonuniform Fourier transform. As shown in Fig. 1(a) and 1(c), low frequencies dominate in both real datasets. During training, the evolution of the relative errors of certain selected frequencies (marked by black squares in Fig. 1(a) and 1(c)) is shown in Fig. 1(b) and 1(d). One can easily observe that DNNs capture low frequencies first and gradually capture higher frequencies. Clearly, this behavior is consistent with the F-Principle. For other components of the output vector and other directions of $p$, similar phenomena are also observed.
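As a concrete illustration of the projection examination in Sec. 3.1, the following NumPy sketch computes $\hat{y}_k$ along the first principal component and the relative error $\Delta_F(k)$. The function names and the frequency grid are our own choices, not the paper's code.

```python
import numpy as np

def projected_spectrum(X, y, freqs):
    """1-d nonuniform Fourier transform of (p1 . x_i, y_i) along the first
    principal component p1, for the frequencies in `freqs`."""
    Xc = X - X.mean(axis=0)
    p1 = np.linalg.svd(Xc, full_matrices=False)[2][0]   # first PC direction
    proj = X @ p1                                       # x_{p1,i} = p1 . x_i
    # y_hat(k) = (1/n) sum_i y_i exp(-i 2 pi k proj_i)
    return np.exp(-2j * np.pi * np.outer(freqs, proj)) @ y / len(y)

def relative_error(y_hat, h_hat):
    """Delta_F(k) = |h_hat(k) - y_hat(k)| / |y_hat(k)| per frequency."""
    return np.abs(h_hat - y_hat) / np.abs(y_hat)

# Usage sketch: X has shape (n, d); y and the DNN output h have shape (n,).
# freqs = np.linspace(0, 0.5, 100); track relative_error over training epochs.
```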
" }, { "heading": "4 F-PRINCIPLE IN MNIST/CIFAR10 THROUGH FILTERING METHOD", "text": "The projection method in the previous section enables us to visualize the F-Principle in one direction for each examination at the level of individual frequency components. However, demonstration by this method alone is insufficient, because it is impossible to verify the F-Principle in all potentially informative directions for high-dimensional data. To complement the projection method, in this section we consider a coarse-grained filtering method that is able to unravel whether, in the radially averaged sense, low frequencies converge faster than high frequencies." }, { "heading": "4.1 EXAMINATION METHOD: FILTERING", "text": "The idea of the filtering method is as follows. We split the frequency domain into two parts, i.e., a low-frequency part with $|k| \le k_0$ and a high-frequency part with $|k| > k_0$, where $|\cdot|$ is the length of a vector. The DNN is trained as usual on the original dataset $\{(x_i, y_i)\}_{i=0}^{n-1}$, such as MNIST or CIFAR10. The DNN output is denoted as $h$. During training, we can examine the convergence of the relative errors of the low- and high-frequency parts using the two measures
$$e_{low} = \left(\frac{\sum_{k} \mathbb{1}_{|k| \le k_0}\, |\hat{y}(k) - \hat{h}(k)|^2}{\sum_{k} \mathbb{1}_{|k| \le k_0}\, |\hat{y}(k)|^2}\right)^{\frac{1}{2}}, \qquad e_{high} = \left(\frac{\sum_{k} \left(1 - \mathbb{1}_{|k| \le k_0}\right) |\hat{y}(k) - \hat{h}(k)|^2}{\sum_{k} \left(1 - \mathbb{1}_{|k| \le k_0}\right) |\hat{y}(k)|^2}\right)^{\frac{1}{2}},$$
respectively, where $\hat{\cdot}$ indicates the Fourier transform and $\mathbb{1}_{|k| \le k_0}$ is the indicator function equal to $1$ for $|k| \le k_0$ and $0$ for $|k| > k_0$. If we consistently observe $e_{low} < e_{high}$ for different $k_0$'s during training, then, in a mean sense, lower frequencies are captured first by the DNN, i.e., the F-Principle.

However, because it is almost impossible to compute the above quantities numerically, due to the high computational cost of high-dimensional Fourier transforms, we instead use the Fourier transform of a Gaussian function, $\hat{G}^{\delta}(k)$, where $\delta$ is the variance of the Gaussian function $G$, to approximate $\mathbb{1}_{|k| \le k_0}$. This is reasonable for the following two reasons. First, the Fourier transform of a Gaussian is still a Gaussian, i.e., $\hat{G}^{\delta}(k)$ decays exponentially as $|k|$ increases; therefore, it can approximate $\mathbb{1}_{|k| \le k_0}$ with a proper $\delta(k_0)$ (referred to as $\delta$ for simplicity). Second, the computation of $e_{low}$ and $e_{high}$ contains the multiplication of Fourier transforms in the frequency domain, which is equivalent to the Fourier transform of a convolution in the spatial domain. We can therefore equivalently perform the examination in the spatial domain, avoiding the almost impossible high-dimensional Fourier transform. The low-frequency part can be derived by
$$y_i^{low,\delta} \triangleq (y * G^{\delta})_i, \quad (1)$$
where $*$ indicates the convolution operator, and the high-frequency part can be derived by
$$y_i^{high,\delta} \triangleq y_i - y_i^{low,\delta}. \quad (2)$$
Then, we can examine
$$e_{low} = \left(\frac{\sum_i |y_i^{low,\delta} - h_i^{low,\delta}|^2}{\sum_i |y_i^{low,\delta}|^2}\right)^{\frac{1}{2}}, \qquad e_{high} = \left(\frac{\sum_i |y_i^{high,\delta} - h_i^{high,\delta}|^2}{\sum_i |y_i^{high,\delta}|^2}\right)^{\frac{1}{2}}, \quad (3)$$
where $h^{low,\delta}$ and $h^{high,\delta}$ are obtained from the DNN output $h$, which evolves as a function of the training epoch, through the same decomposition. If $e_{low} < e_{high}$ for different $\delta$'s during training, the F-Principle holds; otherwise, it is falsified. Next, we introduce the experimental procedure.

Step One: Training. Train the DNN on the original dataset $\{(x_i, y_i)\}_{i=0}^{n-1}$, such as MNIST or CIFAR10, where $x_i$ is an image vector and $y_i$ is a one-hot vector.

Step Two: Filtering. The low-frequency part can be derived by
$$y_i^{low,\delta} = \frac{1}{C_i} \sum_{j=0}^{n-1} y_j\, G^{\delta}(x_i - x_j), \quad (4)$$
where $C_i = \sum_{j=0}^{n-1} G^{\delta}(x_i - x_j)$ is a normalization factor and
$$G^{\delta}(x_i - x_j) = \exp\left(-|x_i - x_j|^2 / (2\delta)\right). \quad (5)$$
The high-frequency part can be derived by $y_i^{high,\delta} \triangleq y_i - y_i^{low,\delta}$. We also compute $h_i^{low,\delta}$ and $h_i^{high,\delta}$ for each DNN output $h_i$.

Step Three: Examination. To quantify the convergence of $h^{low,\delta}$ and $h^{high,\delta}$, we compute the relative errors $e_{low}$ and $e_{high}$ at each training epoch through Eq. (3).
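The following is a minimal NumPy sketch of Steps Two and Three; forming the full $n \times n$ kernel in memory at once is our own simplification and would need batching for large datasets.

```python
import numpy as np

def gaussian_low_high(X, Y, delta):
    """Decompose targets/outputs into low- and high-frequency parts (Eqs. 4-5).

    X: inputs of shape (n, d); Y: targets or DNN outputs of shape (n, c).
    Returns (Y_low, Y_high) with Y_low[i] = sum_j Y[j] G(x_i - x_j) / C_i.
    """
    sq_dist = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # |x_i - x_j|^2
    G = np.exp(-sq_dist / (2.0 * delta))
    Y_low = (G @ Y) / G.sum(axis=1, keepdims=True)
    return Y_low, Y - Y_low

def filtering_errors(X, Y, H, delta):
    """e_low and e_high of Eq. (3) for the DNN output H against the target Y."""
    y_lo, y_hi = gaussian_low_high(X, Y, delta)
    h_lo, h_hi = gaussian_low_high(X, H, delta)
    e_low = np.sqrt(((y_lo - h_lo) ** 2).sum() / (y_lo ** 2).sum())
    e_high = np.sqrt(((y_hi - h_hi) ** 2).sum() / (y_hi ** 2).sum())
    return e_low, e_high
```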
" }, { "heading": "4.2 DNNS WITH VARIOUS SETTINGS", "text": "With the filtering method, we show the F-Principle in the DNN training process on real datasets for commonly used large networks. For MNIST, we use a fully-connected tanh-DNN (no softmax) with MSE loss; for CIFAR10, we use cross-entropy loss and two structures: one is a small ReLU-CNN, i.e., two convolutional layers followed by a fully-connected multi-layer neural network with a softmax; the other is VGG16 (Simonyan & Zisserman, 2014) equipped with a fully-connected layer of width 1024. These three structures are denoted as \"DNN\", \"CNN\", and \"VGG\" in Fig. 2, respectively. All are trained by SGD from scratch. More details are in Appendix C.

We scan a large range of $\delta$ for both datasets. As an example, results of each dataset for several $\delta$'s are shown in Fig. 2. Red color indicates a small relative error. In all cases, the relative error of the low-frequency part, i.e., $e_{low}$, decreases (turns red) much faster than that of the high-frequency part, i.e., $e_{high}$. Therefore, as analyzed above, the low-frequency part converges faster than the high-frequency part. We also remark that, based on the above results with cross-entropy loss, the F-Principle is not limited to the MSE loss, which possesses a natural Fourier-domain interpretation by Parseval's theorem. Note that the above results hold for both SGD and GD." }, { "heading": "5 F-PRINCIPLE IN SOLVING DIFFERENTIAL EQUATION", "text": "Recently, DNN-based approaches have been actively explored for a variety of scientific computing problems, e.g., solving high-dimensional partial differential equations (E et al., 2017; Khoo et al., 2017; He et al., 2018; Fan et al., 2018) and molecular dynamics (MD) simulations (Han et al., 2017). However, the behaviors of DNNs applied to these problems are not well understood. To facilitate the designs and applications of DNN-based schemes, it is important to characterize the difference between DNNs and conventional numerical schemes on various scientific computing problems. In this section, focusing on solving Poisson's equation, which has broad applications in mechanical engineering and theoretical physics (Evans, 2010), we highlight a stark difference between a DNN-based solver and the Jacobi method during training/iteration, which can be explained by the F-Principle.

Consider a 1-d Poisson's equation:
$$-\Delta u(x) = g(x), \quad x \in \Omega \triangleq (-1, 1), \quad (6)$$
$$u(-1) = u(1) = 0. \quad (7)$$
We consider the example with $g(x) = \sin(x) + 4\sin(4x) - 8\sin(8x) + 16\sin(24x)$, which has the analytic solution $u_{ref}(x) = g_0(x) + c_1 x + c_0$, where $g_0(x) = \sin(x) + \sin(4x)/4 - \sin(8x)/8 + \sin(24x)/36$, $c_1 = (g_0(-1) - g_0(1))/2$ and $c_0 = -(g_0(-1) + g_0(1))/2$. 1001 training samples $\{x_i\}_{i=0}^{n}$ are evenly spaced with grid size $\delta x$ in $[0, 1]$. Here, we use the DNN output $h(x; \theta)$ to fit $u_{ref}(x)$ (Fig. 3(a)). A DNN-based scheme is proposed by considering the following empirical loss function (E & Yu, 2018):
$$I_{emp} = \sum_{i=1}^{n-1} \left(\frac{1}{2} |\nabla_x h(x_i)|^2 - g(x_i)\, h(x_i)\right) \delta x + \beta \left(h(x_0)^2 + h(x_n)^2\right). \quad (8)$$
The second term in $I_{emp}(h)$ is a penalty, with constant $\beta$, arising from the Dirichlet boundary condition (7).
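A minimal PyTorch sketch of this variational loss is shown below; the toy network size and the autograd-based gradient computation are our own illustrative choices consistent with Eq. (8), not the exact code behind Fig. 3 (which uses a larger network, see Appendix C).

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))  # toy size

def g(x):
    return (torch.sin(x) + 4 * torch.sin(4 * x)
            - 8 * torch.sin(8 * x) + 16 * torch.sin(24 * x))

def empirical_loss(x, beta=10.0):
    """I_emp of Eq. (8): Ritz energy on interior points plus boundary penalty."""
    dx = x[1] - x[0]
    x = x.clone().requires_grad_(True)
    h = net(x.unsqueeze(-1)).squeeze(-1)
    (dh,) = torch.autograd.grad(h.sum(), x, create_graph=True)  # h'(x_i)
    interior = ((0.5 * dh[1:-1] ** 2 - g(x[1:-1]) * h[1:-1]) * dx).sum()
    boundary = beta * (h[0] ** 2 + h[-1] ** 2)
    return interior + boundary

x_grid = torch.linspace(0.0, 1.0, 1001)
opt = torch.optim.Adam(net.parameters(), lr=5e-4)
loss = empirical_loss(x_grid)   # per step: loss.backward(); opt.step()
```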
After training, the DNN output matches the analytical solution $u_{ref}$ well. Focusing on the convergence of the three peaks (inset of Fig. 3(a)) in the Fourier transform of $u_{ref}$, as shown in Fig. 3(b), low frequencies converge faster than high frequencies, as predicted by the F-Principle. For comparison, we also use the Jacobi method to solve problem (6). High frequencies converge faster in the Jacobi method (details can be found in Appendix D), as shown in Fig. 3(c).

As a demonstration, we further propose that DNNs can be combined with conventional numerical schemes to accelerate the convergence of low frequencies for computational problems. First, we solve the Poisson's equation in Eq. (6) by a DNN with $M$ optimization steps (or epochs), which needs to be chosen carefully, to get a good initial guess, in the sense that this solution has already learned the low-frequency (large-eigenvalue) part. Then, we use the Jacobi method with this new initial data for the further iterations. We use $\|h - u_{ref}\|_{\infty} \triangleq \max_{x\in\Omega} |h(x) - u_{ref}(x)|$ to quantify the learning result. As shown by the green stars in Fig. 3(d), $\|h - u_{ref}\|_{\infty}$ fluctuates after some running time when using the DNN only. Dashed lines indicate the evolution of the Jacobi method with initial data set to the DNN output at the corresponding steps. If $M$ is too small (stopping too early, left dashed line), which is equivalent to only using Jacobi, it would take a long time to converge to a small error, because low frequencies still converge slowly. If $M$ is too big (stopping too late, right dashed line), which is equivalent to using the DNN only, much time would be wasted on the slow convergence of high frequencies. A proper choice of $M$ is indicated by the initial point of the orange dashed line, in which low frequencies are quickly captured by the DNN, followed by fast convergence in high frequencies under the Jacobi method.

This example illustrates a cautionary tale: although DNNs have a clear advantage, using DNNs alone may not be the best option because of their limitation of slow convergence at high frequencies. Taking advantage of both DNNs and conventional methods to design faster schemes could be a promising direction in scientific computing problems." }, { "heading": "6 A PRELIMINARY THEORETICAL UNDERSTANDING", "text": "A subsequent theoretical work (Luo et al., 2019) provides a rigorous mathematical study of the F-Principle at different frequencies for general DNNs (e.g., multiple hidden layers, different activation functions, high-dimensional inputs). The key insight is that the regularity of the DNN converts into a decay rate of the loss function in the frequency domain. For an intuitive understanding of this key insight, we present theories under an idealized setting, which connect the smoothness/regularity of the activation function with different gradient and convergence priorities in the frequency domain.

The activation function we consider is $\sigma(x) = \tanh(x)$, which is smooth in the spatial domain and whose Fourier transform decays exponentially with respect to frequency. For a DNN of one hidden layer with $m$ nodes, 1-d input $x$ and 1-d output,
$$h(x) = \sum_{j=1}^{m} a_j \sigma(w_j x + b_j), \quad a_j, w_j, b_j \in \mathbb{R}.$$
We also use the notation $\theta = \{\theta_{lj}\}$ with $\theta_{1j} = a_j$, $\theta_{2j} = w_j$, and $\theta_{3j} = b_j$, $j = 1, \cdots, m$. The loss at frequency $k$ is $L(k) = \frac{1}{2}\left|\hat{h}(k) - \hat{f}(k)\right|^2$, where $\hat{\cdot}$ denotes the Fourier transform and $f$ is the target function. The total loss function is defined as $L = \int_{-\infty}^{+\infty} L(k)\, dk$. Note that, according to Parseval's theorem, this loss function in the Fourier domain is equal to the commonly used MSE loss.
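As a quick numerical illustration of why the low-frequency gradient dominates for small weights, the sketch below evaluates the per-neuron amplitude $\pi/(|w|\,|\sinh(\pi k/(2w))|)$, which follows from the closed-form transform derived in Appendix E (Eq. 22, phase factors dropped). The specific frequencies and weight ranges are arbitrary illustrative values of ours.

```python
import numpy as np

def neuron_spectrum(w, k):
    """|Fourier transform of tanh(w x + b)| at frequency k (from Eq. 22)."""
    return np.pi / (np.abs(w) * np.abs(np.sinh(np.pi * k / (2.0 * w))))

rng = np.random.default_rng(0)
k1, k2 = 1.0, 2.0                        # two frequencies with |k2| > |k1|
for delta in [0.5, 0.2, 0.1]:            # shrinking weight scale
    # Sample |w| bounded away from 0 to avoid numerical overflow in sinh.
    w = rng.uniform(delta / 2.0, delta, size=1000) * rng.choice([-1, 1], 1000)
    ratio = neuron_spectrum(w, k1) / neuron_spectrum(w, k2)
    # The low-frequency factor dominates roughly like exp(pi (k2-k1) / (2 |w|)).
    print(delta, np.median(ratio))
```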
We have the following theorems (the proofs are in Appendix E). Define $W = (w_1, w_2, \cdots, w_m)^T \in \mathbb{R}^m$.

Theorem 1. Consider a DNN of one hidden layer with activation function $\sigma(x) = \tanh(x)$. For any frequencies $k_1$ and $k_2$ such that $|\hat{f}(k_1)| > 0$, $|\hat{f}(k_2)| > 0$, and $|k_2| > |k_1| > 0$, there exist positive constants $c$ and $C$ such that for sufficiently small $\delta$, we have
$$\frac{\mu\left(\left\{W : \left|\frac{\partial L(k_1)}{\partial \theta_{lj}}\right| > \left|\frac{\partial L(k_2)}{\partial \theta_{lj}}\right| \text{ for all } l, j\right\} \cap B_\delta\right)}{\mu(B_\delta)} \ge 1 - C\exp(-c/\delta),$$
where $B_\delta \subset \mathbb{R}^m$ is a ball with radius $\delta$ centered at the origin and $\mu(\cdot)$ is the Lebesgue measure.

Theorem 1 indicates that for any two non-converged frequencies, with small weights, the lower-frequency gradient exponentially dominates over the higher-frequency one. Due to Parseval's theorem, the MSE loss in the spatial domain is equivalent to the $L^2$ loss in the Fourier domain. To intuitively understand the higher decay rate of a lower-frequency loss function, we consider training in the Fourier domain with a loss function of only two non-zero frequencies.

Theorem 2. Consider a DNN of one hidden layer with activation function $\sigma(x) = \tanh(x)$. Suppose the target function has only two non-zero frequencies $k_1$ and $k_2$, that is, $|\hat{f}(k_1)| > 0$, $|\hat{f}(k_2)| > 0$, $|k_2| > |k_1| > 0$, and $|\hat{f}(k)| = 0$ for $k \ne k_1, k_2$. Consider the loss function $L = L(k_1) + L(k_2)$ with gradient descent training. Denote
$$S = \left\{\frac{\partial L(k_1)}{\partial t} \le 0,\ \frac{\partial L(k_1)}{\partial t} \le \frac{\partial L(k_2)}{\partial t}\right\},$$
that is, the event that $L(k_1)$ decreases faster than $L(k_2)$. There exist positive constants $c$ and $C$ such that for sufficiently small $\delta$, we have
$$\frac{\mu(\{W : S \text{ holds}\} \cap B_\delta)}{\mu(B_\delta)} \ge 1 - C\exp(-c/\delta),$$
where $B_\delta \subset \mathbb{R}^m$ is a ball with radius $\delta$ centered at the origin and $\mu(\cdot)$ is the Lebesgue measure." }, { "heading": "7 DISCUSSIONS", "text": "DNNs often generalize well on real problems (Zhang et al., 2016) but poorly on problems like fitting a parity function (Shalev-Shwartz et al., 2017; Nye & Saxe, 2018), despite excellent training accuracy on all of them. Understanding the difference between these two types of problems, i.e., with good and bad generalization performance of DNNs, is critical. In the following, we show a qualitative difference between these two types of problems through Fourier analysis and use the F-Principle to explain the different generalization performances of DNNs.

For MNIST/CIFAR10, we examine $\hat{y}_{total,k} = \frac{1}{n_{total}}\sum_{i=0}^{n_{total}-1} y_i \exp(-i2\pi k \cdot x_i)$, where $\{(x_i, y_i)\}_{i=0}^{n_{total}-1}$ consists of both the training and test datasets with a certain selected output component, at different directions of $k$ in the Fourier space. We find that $\hat{y}_{total,k}$ concentrates on the low frequencies along the examined directions. For illustration, $\hat{y}_{total,k}$ along the first principal component is shown by green lines in Fig. 4(a, b) for MNIST/CIFAR10, respectively. When only the training dataset is used, $\hat{y}_{train,k}$ overlaps well with $\hat{y}_{total,k}$ at the dominant low frequencies.

For the parity function $f(x) = \prod_{j=1}^{d} x_j$ defined on $\Omega = \{-1, 1\}^d$, its Fourier transform is $\hat{f}(k) = \frac{1}{2^d}\sum_{x\in\Omega} \prod_{j=1}^{d} x_j\, e^{-i2\pi k \cdot x} = (-i)^d \prod_{j=1}^{d} \sin(2\pi k_j)$. Clearly, for $k \in [-\frac{1}{4}, \frac{1}{4}]^d$, the power of the parity function concentrates at $k \in \{-\frac{1}{4}, \frac{1}{4}\}^d$ and vanishes as $k \to 0$, as illustrated in Fig. 4(c) for the direction of $\mathbb{1}_d$. Given a randomly sampled training dataset $S \subset \Omega$ with $s$ points, the nonuniform Fourier transform on $S$ is computed as $\hat{f}_S(k) = \frac{1}{s}\sum_{x\in S} \prod_{j=1}^{d} x_j\, e^{-i2\pi k \cdot x}$. As shown in Fig. 4(c), $\hat{f}(k)$ and $\hat{f}_S(k)$ significantly differ at low frequencies.
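A small NumPy sketch of this aliasing effect, comparing the analytic $\hat{f}(k)$ with $\hat{f}_S(k)$ estimated from a random subset along the direction $\mathbb{1}_d$, is given below; the dimension and sample size are illustrative choices of ours.

```python
import numpy as np

d, s = 10, 500                          # input dimension and training sample size
rng = np.random.default_rng(0)
S = rng.choice([-1.0, 1.0], size=(s, d))     # random training subset of {-1, 1}^d
y = S.prod(axis=1)                           # parity labels f(x) = prod_j x_j

ks = np.linspace(0.0, 0.25, 51)              # frequencies along the direction 1_d
f_hat = (-1j) ** d * np.sin(2 * np.pi * ks) ** d        # analytic, k = k * 1_d
f_hat_S = np.array([np.mean(y * np.exp(-2j * np.pi * k * S.sum(axis=1)))
                    for k in ks])            # empirical NUDFT on the subset

# Near k = 0 the analytic spectrum vanishes, while the empirical estimate
# floats around O(1/sqrt(s)): artificial low frequencies due to aliasing.
print(abs(f_hat[0]), abs(f_hat_S[0]))
```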
Through experiments, the generalization ability of DNNs can be well reflected by this Fourier analysis. For MNIST/CIFAR10, we observed that the Fourier transform of the output of a well-trained DNN on $\{x_i\}_{i=0}^{n_{total}-1}$ faithfully recovers the dominant low frequencies, as illustrated in Fig. 4(a) and 4(b), respectively, indicating the good generalization performance observed in experiments. However, for the parity function, we observed that the Fourier transform of the output of a well-trained DNN on $\{x_i\}_{i\in S}$ significantly deviates from $\hat{f}(k)$ at almost all frequencies, as illustrated in Fig. 4(c), indicating the bad generalization performance observed in experiments.

The F-Principle implies that, among all the functions that can fit the training data, a DNN is implicitly biased during training towards a function with more power at low frequencies. If the target function has significant high-frequency components, insufficient training samples will lead to artificial low frequencies in the training dataset (see the red line in Fig. 4(c)), which is the well-known aliasing effect. Based on the F-Principle, as demonstrated in Fig. 4(c), these artificial low-frequency components will be captured first to explain the training samples, whereas the high-frequency components will be compromised by the DNN. For MNIST/CIFAR10, since the power of high frequencies is much smaller than that of low frequencies, artificial low frequencies caused by aliasing can be neglected. To conclude, the distribution of power in the Fourier domain of the above two types of problems exhibits significant differences, which result in different generalization performances of DNNs according to the F-Principle." }, { "heading": "8 RELATED WORK", "text": "There are different approaches attempting to explain why DNNs often generalize well. For example, generalization error has been related to various complexity measures (Bartlett et al., 1999; Neyshabur et al., 2017; E et al., 2018), local properties (sharpness/flatness) of loss functions at minima (Keskar et al., 2016; Wu et al., 2017), stability of optimization algorithms (Hardt et al., 2015), and implicit bias of the training process (Soudry et al., 2018; Arpit et al., 2017; Xu et al., 2018). On the other hand, several works focus on the failure of DNNs (Shalev-Shwartz et al., 2017; Nye & Saxe, 2018), e.g., fitting the parity function, on which a well-trained DNN possesses no generalization ability. We propose that Fourier analysis can provide insights into both the success and failure of DNNs.

The F-Principle was first discovered in (Xu et al., 2018) and (Rahaman et al., 2018) simultaneously, through simple synthetic data and not very deep networks. In the revised version, Rahaman et al. (2018) examine the F-Principle on the MNIST dataset. However, they add noise to MNIST, which contaminates the labels and damages the structure of the real data. They also only examine a not very deep (6-layer) fully-connected ReLU network with MSE loss, while the cross-entropy loss is widely used. This paper verifies that the F-Principle holds in the training process on MNIST and CIFAR10, for both CNNs and fully-connected networks, for very deep networks (VGG16), and for various loss functions, e.g., MSE loss, cross-entropy loss, and a variational loss function. On the theoretical side, based on the key mechanism found by the theoretical study in this paper, Luo et al. (2019) give a rigorous proof of the F-Principle for general DNNs.
The theoretical study of the gradient of tanh(x) in the Fourier domain was adopted by Rahaman et al. (2018), in which they generalize the analysis to ReLU and show similar results. Theorem 1 is also used to analyze a nonlinear collaborative scheme for deep network training (Zhen et al., 2018). On the application side, based on the study of the F-Principle in this paper, Cai et al. (2019) and Cai & Xu (2019) design DNN-based algorithms to solve high-dimensional and high-frequency problems." }, { "heading": "C EXPERIMENTAL SETTINGS", "text": "In Fig. 5, the parameters of the DNN are initialized by a Gaussian distribution with mean 0 and standard deviation 0.1. We use a tanh-DNN with widths 1-8000-1 with full-batch training. The learning rate is 0.0002. The DNN is trained by the Adam optimizer (Kingma & Ba, 2014) with the MSE loss function.

In Fig. 1, for the MNIST dataset, the training process of a tanh-DNN with widths 784-400-200-10 is shown in Fig. 1(a) and 1(b). For the CIFAR10 dataset, results are shown in Fig. 1(c) and 1(d) for a ReLU-CNN, which consists of one convolution layer of 3 × 3 × 64, a max pooling of 2 × 2, one convolution layer of 3 × 3 × 128, and a max pooling of 2 × 2, followed by a fully-connected DNN with widths 800-400-400-400-10. For both cases, the output layer of the network is equipped with a softmax. The network output is a 10-d vector. The DNNs are trained with cross-entropy loss by the Adam optimizer (Kingma & Ba, 2014). (a, b) are for MNIST with a tanh-DNN. The learning rate is 0.001 with batch size 10000. After training, the training accuracy is 0.951 and the test accuracy is 0.963. The amplitude of the Fourier coefficient with respect to the fourth output component at each frequency is shown in (a), in which the red dots are computed using the training data. Selected frequencies are marked by black squares. (b) shows ∆F(k) at different training epochs for the selected frequencies. (c, d) are for the CIFAR10 dataset. We use a ReLU network consisting of a CNN followed by a fully-connected DNN. The learning rate is 0.003 with batch size 512. (c) and (d) are the results with respect to the ninth output component. After training, the training accuracy is 0.98 and the test accuracy is 0.72.

In Fig. 2, for MNIST, we use a fully-connected tanh-DNN with widths 784-400-200-10 and MSE loss; for CIFAR10, we use cross-entropy loss and a ReLU-CNN, which consists of one convolution layer of 3 × 3 × 32, a max pooling of 2 × 2, one convolution layer of 3 × 3 × 64, and a max pooling of 2 × 2, followed by a fully-connected DNN with widths 400-10, where the output layer of the network is equipped with a softmax. The learning rates for MNIST and CIFAR10 are 0.015 and 0.003, respectively. The networks are trained by the Adam optimizer (Kingma & Ba, 2014) with batch size 10000. For VGG16, the learning rate is 10^−5. The network is trained by the Adam optimizer (Kingma & Ba, 2014) with batch size 500.

In Fig. 3, the samples are evenly spaced in [0, 1] with sample size 1001. We use a DNN with widths 1-4000-500-400-1 and full-batch training by the Adam optimizer (Kingma & Ba, 2014). The learning rate is 0.0005. β is 10. The parameters of the DNN are initialized following a Gaussian distribution with mean 0 and standard deviation 0.02.

In Fig. 4, the settings of (a) and (b) are the same as those in Fig. 1. For (c), we use a tanh-DNN with widths 10-500-100-1 and learning rate 0.0005 under full-batch training by the Adam optimizer (Kingma & Ba, 2014).
The parameters of the DNN are initialized by a Gaussian distribution with mean 0 and standard deviation 0.05." }, { "heading": "D CENTRAL DIFFERENCE SCHEME AND JACOBI METHOD", "text": "Consider a one-dimensional (1-d) Poisson's equation:

−∆u(x) = g(x), x ∈ Ω = (−1, 1), (9)
u(x) = 0, x = −1, 1.

[−1, 1] is uniformly discretized into n + 1 points with grid size h = 2/n. The Poisson's equation in Eq. (9) can be solved by the central difference scheme,

−∆u_i = −(u_{i+1} − 2u_i + u_{i−1}) / (δx)² = g(x_i), i = 1, 2, · · · , n − 1, (10)

resulting in a linear system

Au = g, (11)

where A is the (n−1) × (n−1) tridiagonal matrix

A = \begin{pmatrix} 2 & -1 & & & \\ -1 & 2 & -1 & & \\ & \ddots & \ddots & \ddots & \\ & & -1 & 2 & -1 \\ & & & -1 & 2 \end{pmatrix}, (12)

u = (u1, u2, · · · , u_{n−2}, u_{n−1})^T, g = (δx)² (g1, g2, · · · , g_{n−2}, g_{n−1})^T, x_i = 2i/n. (13)

One class of methods for solving this linear system is iterative schemes, for example, the Jacobi method. Let A = D − L − U, where D is the diagonal of A, and L and U are the strictly lower and upper triangular parts of −A, respectively. Then, we obtain

u = D⁻¹(L + U)u + D⁻¹g. (14)

At step t ∈ N, the Jacobi iteration reads

u^{t+1} = D⁻¹(L + U)u^t + D⁻¹g. (15)

We perform the standard error analysis of the above iteration process. Denote u* as the true value obtained by directly inverting A in Eq. (11). The error at step t + 1 is e^{t+1} = u^{t+1} − u*. Then e^{t+1} = R_J e^t, where R_J = D⁻¹(L + U). The converging speed of e^t is determined by the eigenvalues of R_J, that is,

λ_k = λ_k(R_J) = cos(kπ/n), k = 1, 2, · · · , n − 1, (16)

and the entries of the corresponding eigenvector v_k are

v_{k,i} = sin(ikπ/n), i = 1, 2, · · · , n − 1. (17)

So we can write

e^t = ∑_{k=1}^{n−1} α_k^t v_k, (18)

where α_k^t can be understood as the magnitude of e^t in the direction of v_k. Then,

e^{t+1} = ∑_{k=1}^{n−1} α_k^t R_J v_k = ∑_{k=1}^{n−1} α_k^t λ_k v_k, (19)

so α_k^{t+1} = λ_k α_k^t. Therefore, the converging rate of e^t in the direction of v_k is controlled by λ_k. Since

cos(kπ/n) = −cos((n − k)π/n), (20)

the frequencies k and (n − k) are closely related and converge at the same rate. Considering frequencies k < n/2, |λ_k| is larger for lower frequencies. Therefore, lower frequencies converge more slowly in the Jacobi method." }, { "heading": "E PROOF OF THEOREMS", "text": "The activation function we consider is σ(x) = tanh(x):

σ(x) = tanh(x) = (e^x − e^{−x}) / (e^x + e^{−x}), x ∈ R.

For a DNN of one hidden layer with m nodes, 1-d input x and 1-d output:

h(x) = ∑_{j=1}^{m} a_j σ(w_j x + b_j), a_j, w_j, b_j ∈ R, (21)

where w_j, a_j, and b_j are called parameters; in particular, w_j and a_j are called weights, and b_j is also known as a bias. In the sequel, we will also use the notation θ = {θ_{lj}} with θ_{1j} = a_j, θ_{2j} = w_j, and θ_{3j} = b_j, j = 1, · · · , m. Note that σ̂(k) = −iπ / sinh(πk/2), where the Fourier transform and its inverse are defined as

f̂(k) = ∫_{−∞}^{+∞} f(x) e^{−ikx} dx, f(x) = (1/2π) ∫_{−∞}^{+∞} f̂(k) e^{ikx} dk.

The Fourier transform of σ(w_j x + b_j) with w_j, b_j ∈ R, j = 1, · · · , m, reads

σ̂(w_j·+b_j)(k) = (2πi / |w_j|) exp(i b_j k / w_j) · 1 / ( exp(−πk/2w_j) − exp(πk/2w_j) ). (22)

Thus

ĥ(k) = ∑_{j=1}^{m} (2π a_j i / |w_j|) exp(i b_j k / w_j) · 1 / ( exp(−πk/2w_j) − exp(πk/2w_j) ). (23)

We define the amplitude deviation between the DNN output and the target function f(x) at frequency k as

D(k) ≜ ĥ(k) − f̂(k).

Write D(k) as D(k) = A(k) e^{iφ(k)}, where A(k) ∈ [0, +∞) and φ(k) ∈ R are the amplitude and phase of D(k), respectively. The loss at frequency k is L(k) = (1/2)|D(k)|², where |·| denotes the norm of a complex number. The total loss function is defined as L = ∫_{−∞}^{+∞} L(k) dk.
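As a brief aside before continuing with the proof, the Jacobi analysis of Appendix D can be verified numerically. The following is a minimal sketch (our own construction, not the authors' code): it applies R_J repeatedly to an error containing one low-frequency and one high-frequency eigenvector and confirms that the component along v_k decays like |cos(kπ/n)|^t, i.e., low frequencies converge more slowly under Jacobi iteration.

```python
import numpy as np

n = 64
# Tridiagonal Poisson matrix A (Eq. 12) and the Jacobi iteration matrix R_J.
A = 2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)
D = np.diag(np.diag(A))
R_J = np.linalg.inv(D) @ (D - A)          # R_J = D^{-1}(L + U)

i = np.arange(1, n)
def v(k):
    return np.sin(i * k * np.pi / n)       # eigenvector v_k (Eq. 17)

e = v(2) + v(30)                           # error with low- and high-frequency parts
for _ in range(60):
    e = R_J @ e                            # e^{t+1} = R_J e^t; g plays no role in the error

for k in (2, 30):
    vk = v(k)
    alpha = (e @ vk) / (vk @ vk)           # remaining magnitude along v_k (Eq. 18)
    pred = abs(np.cos(k * np.pi / n)) ** 60  # |lambda_k|^t (Eq. 16)
    print(f"k={k:2d}: |alpha| = {abs(alpha):.3e}, predicted |lambda_k|^60 = {pred:.3e}")
```

Running this shows the k = 2 component barely shrinks after 60 iterations while the k = 30 component is negligible, the opposite of the low-to-high ordering that the F-Principle identifies for DNN training.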
Note that according to Parseval's theorem, this loss function in the Fourier domain is equal to the commonly used mean-squared-error loss, that is, L = ∫_{−∞}^{+∞} (1/2)(h(x) − f(x))² dx. For readers' reference, we list the partial derivatives of L(k) with respect to the parameters:

∂L(k)/∂a_j = (2π/w_j) sin(b_j k/w_j − φ(k)) E0, (24)

∂L(k)/∂w_j = [ sin(b_j k/w_j − φ(k)) ( (π² a_j k / w_j³) E1 − 2π a_j / w_j² ) − (2π a_j b_j k / w_j³) cos(b_j k/w_j − φ(k)) ] E0, (25)

∂L(k)/∂b_j = (2π a_j k / w_j²) cos(b_j k/w_j − φ(k)) E0, (26)

where

E0 = sgn(w_j) A(k) / ( exp(πk/2w_j) − exp(−πk/2w_j) ),
E1 = ( exp(πk/2w_j) + exp(−πk/2w_j) ) / ( exp(πk/2w_j) − exp(−πk/2w_j) ).

The descent increment in any direction, say, with respect to parameter θ_{lj}, is

∂L/∂θ_{lj} = ∫_{−∞}^{+∞} ∂L(k)/∂θ_{lj} dk. (27)

The absolute contribution from frequency k to this total amount at θ_{lj} is

|∂L(k)/∂θ_{lj}| ≈ A(k) exp(−|πk/2w_j|) F_{lj}(θ_j, k), (28)

where θ_j ≜ {w_j, b_j, a_j}, θ_{lj} ∈ θ_j, and F_{lj}(θ_j, k) is a function of θ_j and k which can be read off from Eqs. (24, 25, 26).

When ĥ(k) is not close enough to f̂(k) at frequency k, exp(−|πk/2w_j|) dominates F_{lj}(θ_j, k) for small w_j. Through the above framework of analysis, we have the following theorem. Define

W = (w1, w2, · · · , wm)^T ∈ R^m. (29)

Theorem. Consider a one-hidden-layer DNN with activation function σ(x) = tanh(x). For any frequencies k1 and k2 such that |f̂(k1)| > 0, |f̂(k2)| > 0, and |k2| > |k1| > 0, there exist positive constants c and C such that for sufficiently small δ, we have

µ({W : |∂L(k1)/∂θ_{lj}| > |∂L(k2)/∂θ_{lj}| for all l, j} ∩ Bδ) / µ(Bδ) ≥ 1 − C exp(−c/δ), (30)

where Bδ ⊂ R^m is a ball with radius δ centered at the origin and µ(·) is the Lebesgue measure.

We remark that c and C depend on k1, k2, |f̂(k1)|, |f̂(k2)|, sup|a_i|, sup|b_i|, and m.

Proof. To prove the statement, it is sufficient to show that µ(S_{lj,δ})/µ(Bδ) ≤ C exp(−c/δ) for each l, j, where

S_{lj,δ} := {W ∈ Bδ : |∂L(k1)/∂θ_{lj}| ≤ |∂L(k2)/∂θ_{lj}|}. (31)

We prove this for S_{1j,δ}, that is, θ_{lj} = a_j; the proofs for θ_{lj} = w_j and b_j are similar. Without loss of generality, we assume that k1, k2 > 0, b_j > 0, and w_j ≠ 0, j = 1, · · · , m. According to Eq. (24), the inequality |∂L(k1)/∂a_j| ≤ |∂L(k2)/∂a_j| is equivalent to

(A(k2)/A(k1)) · | ( exp(πk1/2w_j) − exp(−πk1/2w_j) ) / ( exp(πk2/2w_j) − exp(−πk2/2w_j) ) | · |sin(b_j k2/w_j − φ(k2))| ≥ |sin(b_j k1/w_j − φ(k1))|. (32)

Note that |ĥ(k)| ≤ C ∑_{j=1}^{m} (|a_j|/|w_j|) exp(−πk/2|w_j|) for k > 0. Thus

lim_{W→0} ĥ(k) = 0 and lim_{W→0} D(k) = −f̂(k). (33)

Therefore,

lim_{W→0} A(k) = |f̂(k)| and lim_{W→0} φ(k) = π + arg(f̂(k)). (34)

For W ∈ Bδ with sufficiently small δ, A(k1) > (1/2)|f̂(k1)| > 0 and A(k2) < 2|f̂(k2)|. Also note that |sin(b_j k2/w_j − φ(k2))| ≤ 1 and that, for sufficiently small δ,

| ( exp(πk1/2w_j) − exp(−πk1/2w_j) ) / ( exp(πk2/2w_j) − exp(−πk2/2w_j) ) | ≤ 2 exp(−π(k2 − k1)/2|w_j|). (35)

Thus, inequality (32) implies that

|sin(b_j k1/w_j − φ(k1))| ≤ (8|f̂(k2)|/|f̂(k1)|) exp(−π(k2 − k1)/2|w_j|). (36)

Noticing that (2/π)|x| ≤ |sin x| for |x| ≤ π/2, together with Eq. (34), we have for W ∈ S_{lj,δ}, for some q ∈ Z,

| b_j k1/w_j − arg(f̂(k1)) − qπ | ≤ (8π|f̂(k2)|/|f̂(k1)|) exp(−π(k2 − k1)/(2δ)), (37)

that is,

−c1 exp(−c2/δ) + qπ + arg(f̂(k1)) ≤ b_j k1/w_j ≤ c1 exp(−c2/δ) + qπ + arg(f̂(k1)), (38)

where c1 = 8π|f̂(k2)|/|f̂(k1)| and c2 = π(k2 − k1). Define I := I+ ∪ I−, where

I+ := {w_j > 0 : W ∈ S_{1j,δ}}, I− := {w_j < 0 : W ∈ S_{1j,δ}}. (39)

For w_j > 0, we have, for some q ∈ Z,

0 < b_j k1 / ( c1 exp(−c2/δ) + qπ + arg(f̂(k1)) ) ≤ w_j ≤ b_j k1 / ( −c1 exp(−c2/δ) + qπ + arg(f̂(k1)) ). (40)
Since W ∈ Bδ and c1 exp(−c2/δ) + arg(f̂(k1)) ≤ 2π, we have b_j k1/(2π + qπ) ≤ w_j ≤ δ. Then Eq. (40) only holds for some large q; more precisely, q ≥ q0 := b_j k1/(πδ) − 2. Thus we obtain the following estimate for the (one-dimensional) Lebesgue measure of I+:

µ(I+) ≤ ∑_{q=q0}^{∞} | b_j k1 / ( −c1 exp(−c2/δ) + qπ + arg(f̂(k1)) ) − b_j k1 / ( c1 exp(−c2/δ) + qπ + arg(f̂(k1)) ) |
≤ 2|b_j| k1 c1 exp(−c2/δ) · ∑_{q=q0}^{∞} 1 / ( (qπ + arg(f̂(k1)))² − (c1 exp(−c2/δ))² )
≤ C exp(−c/δ). (41)

A similar estimate holds for µ(I−), and hence µ(I) ≤ C exp(−c/δ). For W ∈ Bδ, the (m−1)-dimensional vector (w1, · · · , w_{j−1}, w_{j+1}, · · · , wm)^T lies in a ball with radius δ in R^{m−1}. Therefore, we finally arrive at the desired estimate

µ(S_{1j,δ}) / µ(Bδ) ≤ µ(I) ω_{m−1} δ^{m−1} / (ω_m δ^m) ≤ C exp(−c/δ), (42)

where ω_m is the volume of a unit ball in R^m.

Theorem. Consider a DNN of one hidden layer with activation function σ(x) = tanh(x). Suppose the target function has only two non-zero frequencies k1 and k2, that is, |f̂(k1)| > 0, |f̂(k2)| > 0, |k2| > |k1| > 0, and |f̂(k)| = 0 for k ≠ k1, k2. Consider the loss function L = L(k1) + L(k2) with gradient descent training. Denote

S = {∂L(k1)/∂t ≤ 0, ∂L(k1)/∂t ≤ ∂L(k2)/∂t},

that is, L(k1) decreases faster than L(k2). There exist positive constants c and C such that for sufficiently small δ, we have

µ({W : S holds} ∩ Bδ) / µ(Bδ) ≥ 1 − C exp(−c/δ),

where Bδ ⊂ R^m is a ball with radius δ centered at the origin and µ(·) is the Lebesgue measure.

Proof. By the gradient descent algorithm, we obtain

∂L(k1)/∂t = ∑_{l,j} (∂L(k1)/∂θ_{lj}) (∂θ_{lj}/∂t) = −∑_{l,j} (∂L(k1)/∂θ_{lj}) ∂(L(k1) + L(k2))/∂θ_{lj} = −∑_{l,j} (∂L(k1)/∂θ_{lj})² − ∑_{l,j} (∂L(k1)/∂θ_{lj})(∂L(k2)/∂θ_{lj}),

∂L(k2)/∂t = −∑_{l,j} (∂L(k2)/∂θ_{lj})² − ∑_{l,j} (∂L(k1)/∂θ_{lj})(∂L(k2)/∂θ_{lj}),

and

∂L/∂t = ∂(L(k1) + L(k2))/∂t = −∑_{l,j} ( ∂L(k1)/∂θ_{lj} + ∂L(k2)/∂θ_{lj} )² ≤ 0. (43)

To obtain

∂L(k1)/∂t − ∂L(k2)/∂t = −∑_{l,j} [ (∂L(k1)/∂θ_{lj})² − (∂L(k2)/∂θ_{lj})² ] < 0, (44)

it is sufficient to have

|∂L(k1)/∂θ_{lj}| > |∂L(k2)/∂θ_{lj}| for all l, j. (45)

Eqs. (43, 44) also yield ∂L(k1)/∂t < 0. Therefore, Eq. (45) is a sufficient condition for S, and based on Theorem 1 we have proved Theorem 2.

F MEMORIZING 2-D IMAGE

We train a DNN to fit a natural image (see Fig. 6(a)), a mapping from coordinates (x, y) to gray-scale intensity, where the latter is subtracted by its mean and then normalized by the maximal absolute value. First, we initialize the DNN parameters by a Gaussian distribution with mean 0 and standard deviation 0.08 (initialization with small parameters). From the snapshots during the training process, we can see that the DNN captures the image from coarse-grained low frequencies to detailed high frequencies (Fig. 6(b)). As an illustration of the F-Principle, we study the Fourier transform of the image with respect to x for a fixed y (red dashed line in Fig. 6(a), denoted as the target function f(x) in the spatial domain). The DNN captures this 1-d slice well after training, as shown in Fig. 6(c). Fig. 6(d) displays the amplitudes |f̂(k)| of the first 40 frequency components. Due to the small initial parameters, as shown in Fig. 6(d), when the DNN is fitting low-frequency components, the high frequencies stay relatively small. As the relative error in Fig. 6(e) shows, the first five frequency peaks converge from low to high in order.

Next, we initialize the DNN parameters by a Gaussian distribution with mean 0 and standard deviation 1 (initialization with large parameters). After training, the DNN can capture the training data well, as shown on the left in Fig. 6(f).
However, the DNN output at the test pixels is very noisy, as shown on the right in Fig. 6(f). For the pixels along the red dashed line in Fig. 6(a), as shown in Fig. 6(g), the DNN output fluctuates substantially. Compared with the case of small initial parameters, as shown in Fig. 6(h), the convergence of the first five frequency peaks does not follow a clear order." } ]
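As a supplementary numerical note on this paper's Theorem 1 (a sketch of our own construction, not the authors' code): the claimed dominance of the low-frequency gradient for small weights can be checked with central finite differences. The target function, network sizes, and frequencies below are illustrative choices, and the continuous theory uses Fourier transforms over all of R, so this finite-grid computation is only an approximation of that setting.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 20
x = np.linspace(-np.pi, np.pi, 2048)
dx = x[1] - x[0]
f = np.sign(np.sin(5 * x))            # square wave: spectrum at k = 5, 15, 25, ...

w = rng.normal(0, 0.1, m)             # small weights: W close to the origin
b = rng.normal(0, 0.1, m)
a = rng.normal(0, 0.1, m)

def L_at_k(a_vec, k):
    """L(k) = 0.5 |h_hat(k) - f_hat(k)|^2, evaluated on the finite grid."""
    h = np.tanh(np.outer(x, w) + b) @ a_vec
    h_hat = np.sum(h * np.exp(-1j * k * x)) * dx
    f_hat = np.sum(f * np.exp(-1j * k * x)) * dx
    return 0.5 * np.abs(h_hat - f_hat) ** 2

def grad_a(k, j, eps=1e-6):
    """Central finite difference for dL(k)/da_j."""
    ap, am = a.copy(), a.copy()
    ap[j] += eps
    am[j] -= eps
    return (L_at_k(ap, k) - L_at_k(am, k)) / (2 * eps)

k1, k2 = 5, 15                        # |f_hat| > 0 at both frequencies, k2 > k1
wins = sum(abs(grad_a(k1, j)) > abs(grad_a(k2, j)) for j in range(m))
print(f"low-frequency gradient dominates in {wins}/{m} of the a_j directions")
```

For the vast majority of the a_j directions the low-frequency gradient magnitude is larger, in line with Theorem 1's statement that such dominance holds with high probability near the origin of weight space.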
2019
null
SP:b2a573333b5b1b89b68f307c2b5de571fc84a481
[ "This paper proposes a semi-supervised active learning method to reduce the labeling cost. In the proposed method, a selection criterion is designed to better integrate the AL selection mechanism into the SSL training framework. The simple metric aims to measure the inconsistency across a number of meaningful perturbations. It considers N perturbed samples of the original input data x, which can be obtained by standard augmentation operations (e.g. random crops and horizontal flips for image data). The variance is then adopted to quantify consistency. In this way, the proposed method prefers data samples with high values, which may possess varying levels of difficulty for the model to classify. To verify the effectiveness of the proposed method, several baseline methods are compared on several benchmark data sets, and the proposed method achieves better performance. Meanwhile, to deal with the \u201ccold start\u201d problem, a measure that is found to be empirically correlated with the AL target loss is proposed, and this measure can be used to assist in determining the proper start size. However, there are some minor concerns:", "This paper proposes a new combination method for active learning and semi-supervised learning, where the objective is to make predictions that are robust to perturbations (for SSL) and to select points for labeling whose labels differ under perturbations. This technique achieves 2x label efficiency over SSL with uniform-random sampling. Additionally, the authors assess (at least for CIFAR-10 with batch size 50) the best starting random seed set as 100 labels, known as K_0 in this work. This work yields good empirical results and has a conceptually unified approach to SSL and active learning, building off of recent works." ]
Active learning (AL) integrates data labeling and model training to minimize the labeling cost by prioritizing the selection of high value data that can best improve model performance. Readily-available unlabeled data are used for selection mechanisms, but are not used for model training in most conventional pool-based AL methods. To minimize the labeling cost, we unify unlabeled sample selection and model training based on two principles. First, we exploit both labeled and unlabeled data using semi-supervised learning (SSL) to distill information from unlabeled data that improves representation learning and sample selection. Second, we propose a simple yet effective selection metric that is coherent with the training objective such that the selected samples are effective at improving model performance. Experimental results demonstrate superior performance of our proposed principles for limited labeled data compared to alternative AL and SSL combinations. In addition, we study an important problem – “When can we start AL?”. We propose a measure that is empirically correlated with the AL target loss and can be used to assist in determining the proper start point.
[]
[ { "authors": [ "Ben Athiwaratkun", "Marc Finzi", "Pavel Izmailov", "Andrew Gordon Wilson" ], "title": "There are many consistent explanations of unlabeled data: Why you should average", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Aharon Azulay", "Yair Weiss" ], "title": "Why do deep convolutional networks generalize so poorly to small image transformations", "venue": "arXiv preprint arXiv:1805.12177,", "year": 2018 }, { "authors": [ "Maria-Florina Balcan", "Andrei Broder", "Tong Zhang" ], "title": "Margin based active learning", "venue": "In International Conference on Computational Learning Theory,", "year": 2007 }, { "authors": [ "Maria-Florina Balcan", "Alina Beygelzimer", "John Langford" ], "title": "Agnostic active learning", "venue": "Journal of Computer and System Sciences,", "year": 2009 }, { "authors": [ "William H Beluch", "Tim Genewein", "Andreas Nürnberger", "Jan M Köhler" ], "title": "The power of ensembles for active learning in image classification", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Yoshua Bengio", "Jérôme Louradour", "Ronan Collobert", "Jason Weston" ], "title": "Curriculum learning", "venue": "In Proceedings of the 26th annual international conference on machine learning,", "year": 2009 }, { "authors": [ "David Berthelot", "Nicholas Carlini", "Ian Goodfellow", "Nicolas Papernot", "Avital Oliver", "Colin Raffel" ], "title": "Mixmatch: A holistic approach to semi-supervised learning", "venue": null, "year": 1905 }, { "authors": [ "Klaus Brinker" ], "title": "Incorporating diversity in active learning with support vector machines", "venue": "In Proceedings of the 20th international conference on machine learning", "year": 2003 }, { "authors": [ "David Cohn", "Les Atlas", "Richard Ladner" ], "title": "Improving generalization with active learning", "venue": "Machine learning,", "year": 1994 }, { "authors": [ "Corinna Cortes", "Giulia DeSalvo", "Mehryar Mohri", "Ningshan Zhang" ], "title": "Agnostic active learning without constraints", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Corinna Cortes", "Giulia DeSalvo", "Mehryar Mohri", "Ningshan Zhang", "Claudio Gentile" ], "title": "Active learning with disagreement graphs", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Sanjoy Dasgupta", "Daniel Hsu" ], "title": "Hierarchical sampling for active learning", "venue": "In Proceedings of the 25th international conference on Machine learning,", "year": 2008 }, { "authors": [ "Sanjoy Dasgupta", "Daniel J Hsu", "Claire Monteleoni" ], "title": "A general agnostic active learning algorithm. 
In Advances in neural information processing", "venue": null, "year": 2008 }, { "authors": [ "Thomas Drugman", "Janne Pylkkonen", "Reinhard Kneser" ], "title": "Active and semi-supervised learning in asr: Benefits on the acoustic and language models", "venue": "arXiv preprint arXiv:1903.02852,", "year": 2019 }, { "authors": [ "Thomas Drugman", "Janne Pylkkönen", "Reinhard Kneser" ], "title": "Active and semi-supervised learning in ASR: benefits on the acoustic and language models", "venue": null, "year": 2019 }, { "authors": [ "Ehsan Elhamifar", "Guillermo Sapiro", "Allen Yang", "S Shankar Sasrty" ], "title": "A convex optimization framework for active learning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp", "year": 2013 }, { "authors": [ "Alexander Freytag", "Erik Rodner", "Joachim Denzler" ], "title": "Selecting influential examples: Active learning with expected model output changes", "venue": "In European Conference on Computer Vision,", "year": 2014 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q Weinberger" ], "title": "On calibration of modern neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Yuhong Guo" ], "title": "Active instance sampling via matrix partition", "venue": "In Advances in Neural Information Processing Systems,", "year": 2010 }, { "authors": [ "Mahmudul Hasan", "Amit K Roy-Chowdhury" ], "title": "Context aware active learning of activity recognition models", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp", "year": 2015 }, { "authors": [ "Neil Houlsby", "José Miguel Hernández-Lobato", "Zoubin Ghahramani" ], "title": "Cold-start active learning with robust ordinal matrix factorization", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Juan Eugenio Iglesias", "Ender Konukoglu", "Albert Montillo", "Zhuowen Tu", "Antonio Criminisi" ], "title": "Combining generative and discriminative models for semantic segmentation of ct scans via active learning", "venue": "In Biennial International Conference on Information Processing in Medical Imaging,", "year": 2011 }, { "authors": [ "Ajay J Joshi", "Fatih Porikli", "Nikolaos Papanikolopoulos" ], "title": "Multi-class active learning for image classification", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Ksenia Konyushkova", "Raphael Sznitman", "Pascal Fua" ], "title": "Learning active learning from data", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Samuli Laine", "Timo Aila" ], "title": "Temporal ensembling for semi-supervised learning", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Kimin Lee", "Honglak Lee", "Kibok Lee", "Jinwoo Shin" ], "title": "Training confidence-calibrated classifiers for detecting out-of-distribution samples", "venue": "Neural Information Processing Systems,", "year": 2018 }, { "authors": [ 
"David D Lewis", "Jason Catlett" ], "title": "Heterogeneous uncertainty sampling for supervised learning", "venue": "In Machine learning proceedings", "year": 1994 }, { "authors": [ "David D Lewis", "William A Gale" ], "title": "A sequential algorithm for training text classifiers", "venue": "In SIGIR94,", "year": 1994 }, { "authors": [ "Oisin Mac Aodha", "Neill DF Campbell", "Jan Kautz", "Gabriel J Brostow" ], "title": "Hierarchical subquery evaluation for active learning on a graph", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2014 }, { "authors": [ "Andrew Kachites McCallumzy", "Kamal Nigamy" ], "title": "Employing em and pool-based active learning for text classification", "venue": "In Proc. International Conference on Machine Learning (ICML),", "year": 1998 }, { "authors": [ "Hieu T Nguyen", "Arnold Smeulders" ], "title": "Active learning using pre-clustering", "venue": "In Proceedings of the twenty-first international conference on Machine learning,", "year": 2004 }, { "authors": [ "Avital Oliver", "Augustus Odena", "Colin A Raffel", "Ekin Dogus Cubuk", "Ian Goodfellow" ], "title": "Realistic evaluation of deep semi-supervised learning algorithms", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Phill Kyu Rhee", "Enkhbayar Erdenee", "Shin Dong Kyun", "Minhaz Uddin Ahmed", "Songguo Jin" ], "title": "Active and semi-supervised learning for object detection with imperfect data", "venue": "Cognitive Systems Research,", "year": 2017 }, { "authors": [ "Dan Roth", "Kevin Small" ], "title": "Margin-based active learning for structured output spaces", "venue": "In European Conference on Machine Learning,", "year": 2006 }, { "authors": [ "Nicholas Roy", "Andrew McCallum" ], "title": "Toward optimal active learning through monte carlo estimation of error reduction", "venue": "ICML, Williamstown, pp", "year": 2001 }, { "authors": [ "Ozan Sener", "Silvio Savarese" ], "title": "Active learning for convolutional neural networks: A core-set approach", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Burr Settles", "Mark Craven", "Soumya Ray" ], "title": "Multiple-instance active learning", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "H Sebastian Seung", "Manfred Opper", "Haim Sompolinsky" ], "title": "Query by committee", "venue": "In Proceedings of the fifth annual workshop on Computational learning theory,", "year": 1992 }, { "authors": [ "Katrin Tomanek", "Udo Hahn" ], "title": "Semi-supervised active learning for sequence labeling", "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume", "year": 2009 }, { "authors": [ "Simon Tong", "Daphne Koller" ], "title": "Support vector machine active learning with applications to text classification", "venue": "Journal of machine learning research,", "year": 2001 }, { "authors": [ "Vikas Verma", "Alex Lamb", "Juho Kannala", "Yoshua Bengio", "David Lopez-Paz" ], "title": "Interpolation consistency training for semi-supervised learning", "venue": "International Joint Conferences on Artifical Intelligence,", "year": 2019 }, { "authors": [ "Qizhe Xie", "Zihang Dai", "Eduard Hovy", "Minh-Thang Luong", "Quoc V Le" ], "title": "Unsupervised data augmentation for consistency training", "venue": null, "year": 1904 }, { 
"authors": [ "Yi Yang", "Zhigang Ma", "Feiping Nie", "Xiaojun Chang", "Alexander G Hauptmann" ], "title": "Multi-class active learning by uncertainty sampling with diversity maximization", "venue": "International Journal of Computer Vision,", "year": 2015 }, { "authors": [ "Donggeun Yoo", "In So Kweon" ], "title": "Learning loss for active learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Donggeun Yoo", "In So Kweon" ], "title": "Learning loss for active learning", "venue": "IEEE Conference on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "Stephan Zheng", "Yang Song", "Thomas Leung", "Ian Goodfellow" ], "title": "Improving the robustness of deep neural networks via stability training", "venue": "In Proceedings of the ieee conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Xiaojin Zhu", "John Lafferty", "Zoubin Ghahramani" ], "title": "Combining active learning and semisupervised learning using gaussian fields and harmonic functions. In ICML 2003 workshop on the continuum from labeled to unlabeled data in machine learning and data mining, volume", "venue": null, "year": 2003 } ]
[ { "heading": null, "text": "Active learning (AL) integrates data labeling and model training to minimize the labeling cost by prioritizing the selection of high value data that can best improve model performance. Readily-available unlabeled data are used for selection mechanisms, but are not used for model training in most conventional pool-based AL methods. To minimize the labeling cost, we unify unlabeled sample selection and model training based on two principles. First, we exploit both labeled and unlabeled data using semi-supervised learning (SSL) to distill information from unlabeled data that improves representation learning and sample selection. Second, we propose a simple yet effective selection metric that is coherent with the training objective such that the selected samples are effective at improving model performance. Experimental results demonstrate superior performance of our proposed principles for limited labeled data compared to alternative AL and SSL combinations. In addition, we study an important problem – “When can we start AL?”. We propose a measure that is empirically correlated with the AL target loss and can be used to assist in determining the proper start point." }, { "heading": "1 INTRODUCTION", "text": "Deep learning models improve when trained with more labeled data (Goodfellow et al., 2016). A standard deep learning development procedure involves constructing a large-scale labeled dataset and optimizing a model with it. Yet, in many real-world scenarios, large-scale labeled datasets can be very costly to acquire, especially when expert annotators are required, as in medical diagnosis and loan prediction. An ideal framework would integrate data labeling and model training to improve model performance with a minimal amount of labeled data.

Active learning (AL) (Balcan et al., 2009) assists the learning procedure by judicious selection of unlabeled samples for labeling, with the goal of maximizing the ultimate model performance with minimal labeling cost. We focus on pool-based AL, where an unlabeled data pool is given initially and the AL mechanism iteratively selects batches to label in conjunction with training. As the name “learning-based AL selection” suggests, each batch is selected with guidance from the previously-trained model, is labeled, and is then added to the labeled dataset on which the model is trained.

Maximizing performance with minimal labeled data requires properly leveraging model learning and AL sample selection, especially in early AL cycles. Since the unlabeled pool serves as the candidate set in the AL sample selection phase, it is natural for pool-based AL methods to integrate SSL objectives to improve performance by learning meaningful data representations from the unlabeled pool (Zhu et al., 2003; Tomanek & Hahn, 2009). However, fewer existing AL methods consider SSL during training (Drugman et al., 2019a; Rhee et al., 2017; Zhu et al., 2003; Sener & Savarese, 2018) compared to those utilizing only labeled samples. Moreover, we believe that the AL selection criterion should be coherent with the SSL objectives in order to select the most valuable samples, since (1) unsupervised losses can alter the learned representation and decision manifolds significantly (Oliver et al., 2018), and AL sample selection should reflect that; (2) SSL already embodies knowledge from unlabeled data in a meaningful way; thus, AL selection should reflect the extra value of labeled data on top of it. Motivated by these observations, we
Motivated by these observations, we\npropose an AL framework that combines SSL with AL and also a selection metric that is strongly related to the training objective.\nIn the absence of labeled data, a common practice to initiate AL is to uniformly select a small starting subset of data for labeling. Learning-based AL selection is then used in subsequent cycles. The size of the starting subset affects AL performance – when the start size is not sufficiently large, the models learned in subsequent AL cycles are highly-skewed and result in biased selection, a phenomenon commonly known as the cold start problem (Konyushkova et al., 2017; Houlsby et al., 2014). When cold start issues arise, learning-based selection yields samples that lead to lower performance improvement than using naive uniform sampling (Konyushkova et al., 2017). Increasing the start size alleviates the cold start problem, but consumes a larger portion of the labeling budget before learning-based AL selection is utilized. With better understanding of data, our method relieves this problem by allowing learning-based sample selection to be initialized from a much smaller start size. However, an ideal solution is determining a proper start size that is large enough in avoiding cold start problems, yet sufficiently small to minimize the labeling cost. To this end, we propose a measure that is empirically shown to be helpful in estimating the proper start size.\nContributions: We propose a simple yet effective selection metric that is in coherent with training objectives in SSL. The proposed AL method is based on an insight that has driven recent advances in SSL (Berthelot et al., 2019; Verma et al., 2019; Xie et al., 2019): a model should be consistent in its decisions between a sample and its meaningfully-distorted versions. This motivates us to introduce an AL selection principle: a sample along with its distorted variants that yields low consistency in predictions indicates that the SSL model is incapable of distilling useful information from that unlabeled sample, thus human labeling is needed. Experiments demonstrate that our proposed metric outperforms previous methods integrated with SSL. With various quantitative and qualitative analyses, we demonstrate the rationale behind why such a selection criteria is highly effective in AL. In addition, in an exploratory analysis we propose a measure that can be used to assist in determining the proper start size to mitigate cold start problems." }, { "heading": "2 RELATED WORK", "text": "Extensive research has been done in AL (Dasgupta et al., 2008; Dasgupta & Hsu, 2008; Balcan et al., 2009; Cortes et al., 2019a). Traditional AL methods can be roughly classified into three categories: uncertainty-based methods, diversity-based methods and expected model change-based methods. Among uncertainty-based ones, methods based on max entropy (Lewis & Catlett, 1994; Lewis & Gale, 1994) and max margin (Roth & Small, 2006; Balcan et al., 2007; Joshi et al., 2009) are popular for their simplicity. Some other uncertainty-based methods measure distances between samples and the decision boundary (Tong & Koller, 2001; Brinker, 2003). Most uncertainty-based methods use heuristics, while recent work (Yoo & Kweon, 2019a) directly learns the target loss of inputs jointly with the training phase and shows promising results. 
Diversity-based methods select diverse samples that span the input space the most (Nguyen & Smeulders, 2004; Mac Aodha et al., 2014; Hasan & Roy-Chowdhury, 2015; Sener & Savarese, 2018). There are also methods that consider both uncertainty and diversity (Guo, 2010; Elhamifar et al., 2013; Yang et al., 2015). The third category estimates the future model status and selects samples that encourage optimal model improvement (Roy & McCallum, 2001; Settles et al., 2008; Freytag et al., 2014).

Both AL and SSL aim to improve learning with limited labeled data, so they are naturally related. Only a few works have considered combining AL and SSL in different tasks. In Drugman et al. (2019b), joint application of SSL and AL is considered for speech understanding, and significant error reduction is demonstrated with limited labeled speech data. For AL, their selection criterion is based on a confidence score that quantifies the observed probabilities of words being correct. Rhee et al. (2017) propose an active semi-supervised learning system which demonstrates superior performance on the pedestrian detection task. Zhu et al. (2003) combine AL and SSL using Gaussian fields and validate their method on synthetic datasets. Sener & Savarese (2018) also consider SSL during AL cycles. However, in their setting, the performance improvement is marginal when adding SSL in comparison to their supervised counterpart, potentially due to the suboptimal SSL method.

Agreement-based methods, also referred to as “query-by-committee”, base the selection decisions on the opinions of a committee, which consists of independent AL metrics or models (Seung et al., 1992; Cohn et al., 1994; McCallumzy & Nigamy, 1998; Iglesias et al., 2011; Beluch et al., 2018; Cortes et al., 2019b). Our method is related to agreement-based AL in that samples are selected based on the conformity of different metrics or models." }, { "heading": "3 CONSISTENCY-BASED SEMI-SUPERVISED AL", "text": "" }, { "heading": "3.1 PROPOSED METHOD", "text": "We consider the setting of pool-based AL, where an unlabeled data pool is available for selection of samples to label. To minimize the labeling cost, we propose a method that unifies selection and model updates, overviewed in Algorithm 1.

Algorithm 1 A semi-supervised learning based AL framework
Require: Unlabeled data pool D, total number of steps T, AL batch size K, start size K0 ≪ |D|
  B0 ← uniformly sampled from D with |B0| = K0
  U0 ← D \ B0
  L0 ← {(x, J(x)) : x ∈ B0}, where J(x) stands for the assigned label of x
  for t = 0, . . . , T − 1 do
    (training) Mt ← argmin_M { (1/|Lt|) ∑_{(x,y)∈Lt} Ll(x, y, M) + (1/|Ut|) ∑_{x∈Ut} Lu(x, M) }
    (selection) Bt+1 ← argmax_{B⊂Ut} { C(B, Mt), s.t. |B| = K }
    (labeling) Lt+1 ← Lt ∪ {(x, J(x)) : x ∈ Bt+1}
    (pool update) Ut+1 ← Ut \ Bt+1
  end for
  MT ← argmin_M { (1/|LT|) ∑_{(x,y)∈LT} Ll(x, y, M) + (1/|UT|) ∑_{x∈UT} Lu(x, M) }
  return MT

The proposed method has two key aspects.

Most conventional AL methods base model learning only on the available labeled data, which ignores the useful information in the unlabeled data. Our first contribution is incorporating a semi-supervised learning (SSL) objective in the training phases of AL. Specifically, as shown in Algorithm 1, each model Mt is learned by minimizing an objective loss function of the form Ll + Lu.
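To make the loop concrete, here is a compact sketch of Algorithm 1 (our stand-in, not the paper's implementation). `train`, `augment`, and `oracle` are hypothetical placeholders: `train` is assumed to minimize Ll + Lu and to return a predictor of class probabilities, and the selection criterion sums per-class prediction variance over augmentations, anticipating Eq. (1) of Section 3.1.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

def consistency_score(predict, augment, x, n_aug=5):
    """Anticipates Eq. (1): per-class variance over augmentations, summed over classes."""
    probs = np.stack([predict(x)] + [predict(augment(x)) for _ in range(n_aug)])
    return probs.var(axis=0).sum()

def al_loop(pool, train, augment, oracle, K0=10, K=5, T=3):
    """Skeleton of Algorithm 1: seed labels, then T rounds of train/select/label."""
    labeled = {int(i): oracle(pool[i]) for i in rng.choice(len(pool), K0, replace=False)}
    for _ in range(T):
        predict = train(labeled, pool)        # assumed to minimize L_l + L_u
        unlab = [i for i in range(len(pool)) if i not in labeled]
        scores = {i: consistency_score(predict, augment, pool[i]) for i in unlab}
        for i in sorted(scores, key=scores.get, reverse=True)[:K]:
            labeled[i] = oracle(pool[i])      # human annotation of batch B_{t+1}
    return train(labeled, pool)               # final model M_T

# Dummy stand-ins so the skeleton runs end to end (placeholders only).
pool = rng.standard_normal((100, 8))
train = lambda labeled, pool: (lambda x: softmax(0.5 * rng.standard_normal(10)))
augment = lambda x: x + 0.05 * rng.standard_normal(x.shape)
oracle = lambda x: int(abs(x.sum())) % 10
model = al_loop(pool, train, augment, oracle)
```

In a real instantiation, `train` would run the SSL optimizer (e.g., a consistency-regularized method) over both the labeled set and the remaining pool.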
The model should both fit the labeled data well and obtain good representations of the unlabeled data.

The design of the selection criterion plays a crucial role in integrating SSL and AL. To this end, our second contribution is a selection criterion C that better integrates the AL selection mechanism into the SSL training framework.

It has been observed that predictions of deep neural networks are sensitive to small perturbations of the input data (Zheng et al., 2016; Azulay & Weiss, 2018). Recent successes in SSL (Athiwaratkun et al., 2019; Berthelot et al., 2019; Verma et al., 2019) are based on minimizing this sensitivity to perturbations via the idea of inducing “consistency”, i.e., imposing similarity in predictions when the input is perturbed in a way that would not change its perceptual content. For consistency-based semi-supervised training, a common choice of loss is Lu(x, M) = D(P(Ŷ = ℓ | x, M), P(Ŷ = ℓ | x̃, M)), where D is a distance function such as the KL divergence (Xie et al., 2019) or the L2 norm (Laine & Aila, 2017; Berthelot et al., 2019), and x̃ denotes a perturbation (augmentation) of the input x. Our proposal is motivated by the following intuition. First, the unsupervised objective exploits unlabeled data by encouraging consistent predictions across slightly distorted versions of each unlabeled sample. Labeling samples with highly inconsistent predictions is valuable, since their loss is hard to minimize using Lu alone; thus, they need human annotations to provide further useful supervision for model training. Second, the data that yields a large model performance gain is not necessarily the data with the highest uncertainty, since neural networks prefer learning with a particular curriculum (Bengio et al., 2009). The most uncertain data could be too hard to learn, and including them in training could be misleading. Thus, we argue that labeling samples that can be recognized to some extent, but not consistently, should benefit learning more than the most uncertain ones.

Specifically, we propose a simple metric C that measures the inconsistency across perturbations. There are various ways to quantify consistency. Due to its empirically-observed superior performance, we
In each cycle, Mt is initialized with Mt−1. We select K = 0.5 · |L0| samples for labeling by default. 50 augmentations of each image are obtained by horizontally flipping and random cropping, but we observe that 5 augmentations can produce comparable results. For a fair comparison, different selection methods start from the same initial model (M0) and the reported results are over 5 trials.\nWe consider three representative selection methods for comparison:\n• Uniform indicates random selection (no AL). • Entropy is widely considered as an uncertainty-based baseline in previous methods (Sener\n& Savarese, 2018; Yoo & Kweon, 2019a). It selects uncertain samples that have maximum entropy of its predicted class probabilities. • k-center (Sener & Savarese, 2018) selects representative samples by maximizing the distance\nbetween a selected sample and its nearest neighbor in the labeled pool. The feature from the last fully connected layer of the target model is used to calculate distances between samples.\nAs shown in Table 1, our method significantly outperforms the baseline methods which only learn from labeled data at each cycle. When 200 samples in total are labeled, our method outperforms kcenter by 39.24% accuracy. Next, we focus on comparing different methods in SSL framework. Figure 1 shows the effectiveness of our consistency-based selection in SSL setting by comparing with the baselines, when they are integrated into SSL. Our method outperforms baselines by a clear margin: on CIFAR-10, with 250 labeled images, our method outperforms uniform (passive selection) by ∼ 2.5% and outperforms k-center, the state-of-the-art method, by ∼ 1.5%. As the number of labels increases, it is harder to improve model performance, but our method outperforms the uniform selection with 4K labels using only 2K labels, halving the labeled data requirements for the similar performance. Given access to all the labels (50K) for the entire training set, a fully-supervised model\n1We follow https://github.com/google-research/mixmatch to perform data augmentation: the input images are randomly flipped and then randomly cropped.\nachieves an accuracy of 95.83% (Berthelot et al., 2019). Our method with 4K examples has 30% more error compared to the fully supervised method. CIFAR-100 is a more challenging dataset as it has 10× more categories. On CIFAR-100, we observe a consistent outperformance of our method at all AL cycles." }, { "heading": "3.3 ANALYSES OF CONSISTENCY-BASED SELECTION", "text": "To build insights on its superior performance, we analyze the samples selected by our method from several attributes, which are known to be important for AL.\nUncertainty and overconfident mis-classification: Uncertainty-based AL methods query the data samples close to the decision boundary. However, deep neural networks yield poorly-calibrated uncertainty estimates when the raw outputs are considered – they tend to be overconfident even when they are wrong (Guo et al., 2017; Lakshminarayanan et al., 2017). entropy-based AL metrics would not distinguished such overconfident mis-classifications, thus result in suboptimal selection. Figure 2 (left) demonstrates that our consistency-based selection is superior in detecting high-confident mis-classification cases than entropy. We use entropy to measure the uncertainty of the selected samples by different methods in Figure 2 (middle). 
It compares the different approaches and shows that uniform and k-center do not base selection on uncertainty at all, whereas consistency tends to select highly-uncertain samples but not necessarily the top ones. This difference likely contributes to the performance gap with entropy. Figure 2 (right) illustrates some selected samples that are mis-classified with high confidence.

Diversity: Diversity has been proposed as a key factor for AL (Yang et al., 2015). k-center is a state-of-the-art AL method based on diversity (it prefers to select data points that span the whole input space). To examine this aspect, Figure 3 (right) visualizes the diversity of the samples selected by different methods. We use principal component analysis to reduce the dimensionality of the embedded samples to a two-dimensional space. uniform chooses samples from the unlabeled pool with equal probability. Samples selected by entropy are clustered in certain regions. On the other hand, consistency selects data samples as diverse as those selected by k-center. The average distances between the top 1% of samples selected by different methods are shown in Figure 3 (top-left). We can see that entropy chooses samples with small average distances, while consistency yields a much larger average distance, comparable to uniform and k-center.

Class distribution complies with classification error: Figure 3 (bottom-left) shows the per-class classification error and the class distribution of the samples selected by different metrics. Samples selected by entropy and consistency are correlated with the per-class classification error, unlike the samples selected by uniform and k-center." }, { "heading": "4 WHEN CAN WE START LEARNING-BASED AL SELECTION?", "text": "" }, { "heading": "4.1 COLD-START FAILURE", "text": "When the size of the initial labeled dataset is too small, the learned decision boundaries could be skewed, and AL selection based on the model outputs could be biased. To illustrate the problem, Figure 4 shows the toy two-moons dataset using a simple support vector machine (in the supervised setting with the RBF kernel) to learn the decision boundary (Oliver et al., 2018). As can be seen, the naive uniform sampling approach achieves better predictive accuracy by exploring the whole space. On the other hand, the samples selected by max entropy concentrate around a poorly learned boundary. In another example, we study the effects of cold start using deep neural networks on CIFAR-10, shown in Figure 5. Using uniform sampling to select different starting sizes, AL methods achieve different predictive accuracy. For example, the model starting with K0 = 50 data points clearly under-performs the model starting with K0 = 100 samples when both models reach 150 labeled samples. This may be due to the cold start problem encountered when K0 = 50. Meanwhile, given a limited labeling budget, naively choosing a large start size is also undesirable in practice, because it may lead to under-utilization of learning-based selection. For example, our method starting from K0 = 100 labeled samples performs better than starting from 150 or 200, since we have more AL cycles in the former case given the same label budget.

The semi-supervised nature of our learning proposal encourages the practice of initiating learning-based sample selection from a much smaller start size. However, the learned model can still be skewed at extremely early AL stages. These observations motivate us to propose a systematic way of inferring a proper starting size. We analyze this problem and propose an approach to assist in determining the start size in practice.
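A minimal sketch of the two-moons cold-start illustration (our own construction with scikit-learn, not the paper's code): an RBF-kernel SVM is fit on a tiny seed set, and the spatial spread of boundary-focused queries is compared with uniform sampling. For simplicity, we use distance to the decision boundary as the uncertainty proxy rather than entropy.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_moons(n_samples=500, noise=0.1, random_state=0)

# Tiny seed set (3 per class) to provoke a skewed boundary.
seed = np.concatenate([rng.choice(np.where(y == c)[0], 3, replace=False) for c in (0, 1)])
clf = SVC(kernel="rbf", gamma=2.0).fit(X[seed], y[seed])

# Uncertainty proxy: distance to the learned boundary (smallest = most uncertain).
margin = np.abs(clf.decision_function(X))
queries = np.argsort(margin)[:20]
uniform = rng.choice(len(X), 20, replace=False)

print("accuracy of the seed-trained model:", clf.score(X, y))
print("spread of boundary-based queries:", X[queries].std(axis=0))
print("spread of uniform queries:       ", X[uniform].std(axis=0))
```

With such a small seed set, the boundary-based queries typically cluster around the poorly learned boundary (small spread), while uniform sampling keeps exploring the input space, mirroring the behavior shown in Figure 4.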
" }, { "heading": "4.2 AN EXPLORATORY ANALYSIS IN START SIZE SELECTION", "text": "Recall from the last step of Algorithm 1 that if T is set such that UT = ∅, i.e., if the entire dataset has been labeled, then the final model MT is trained to minimize the purely supervised loss Ll on the total labeled dataset LT. Consider the cross-entropy loss function for any classifier p(Ŷ|X), which we call the AL target loss:

Ll[LT, p(Ŷ|X)] = −(1/|LT|) ∑_{(x,y)∈LT} log p(Ŷ = y | X = x). (2)

Note that the goal of an AL method can be viewed as minimizing the AL target loss with a small subset of the entire training set LT (Zhu et al., 2003). In any intermediate AL step, we expect the loss on the current labeled subset to mimic the target loss. If cold start problems occur, the model does a poor job of approximating and minimizing equation 2, and the quality of the samples selected in the subsequent AL cycles is consequently poor. Therefore, it is crucial to understand the performance of the currently-learned model in minimizing the criterion in equation 2. However, since the labeled dataset Lt at cycle t is a strict subset of the total training set LT, it is impossible to simply plug the most recently learned model Ŷ into equation 2 for direct calculation.

Our approximation to the target loss is based on the following proposition, which gives upper and lower bounds on the expected loss that the target loss approximates:

Proposition 1. For any given distribution of Y and any learned model Ŷ, we have

H[p(Y), p(Ŷ)] − H[p(X)] ≤ RH[p(Ŷ|X)] = EX{H[p(Y|X), p(Ŷ|X)]} ≤ H[p(Y), p(Ŷ)] − H[p(X)] − log Ẑ, (3)

where H[p, q] is the cross-entropy between two distributions p and q, H[p(X)] is the entropy of the random variable X, and Ẑ = min_{x,y} p(X = x | Ŷ = y).

Proposition 1 indicates that the expected cross-entropy loss can be both upper and lower bounded. In particular, both bounds involve the quantity H[p(Y), p(Ŷ)], which suggests that H[p(Y), p(Ŷ)] could potentially be tracked to analyze RH[p(Ŷ|X)] for different numbers of samples. Unlike the unavailable target loss on the entire training set, H[p(Y), p(Ŷ)] does not need all data to be labeled. In fact, to compute H[p(Y), p(Ŷ)], we just need to specify a distribution for Y, which could be assumed from prior knowledge or estimated using all of the labels in the starting cycle.

In Figure 6, we observe a strong correlation between the target loss and H[p(Y), p(Ŷ)], where Y is assumed to be uniform. We see how H[p(Y), p(Ŷ)] can be used to identify the trend of when the actual target is minimized. In particular, in the SSL setting, a practitioner may set the starting set size to 100 or 150 labeled samples on CIFAR-10, as the value of H[p(Y), p(Ŷ)] essentially ceases decreasing there, which coincides with the oracle stopping points if we were given access to the target loss. In contrast, a start size of 50 has a much higher H[p(Y), p(Ŷ)], which leads to less favorable performance. A similar pattern in the supervised learning setting is shown in Figure 6." }, { "heading": "5 CONCLUSION AND FUTURE WORK", "text": "We presented a simple pool-based AL selection metric that selects data for labeling by leveraging unsupervised information from unlabeled data during training. Experiments show that our method outperforms previous state-of-the-art AL methods under the SSL setting.
Our proposed metric implicitly balances uncertainty and diversity when making selections. The design of our method focuses on the consistency principle in SSL. For alternative SSL methods based on other principles, it is necessary to revisit AL selection with respect to their training objectives, which we leave for future work. In addition, we study and address a practically valuable yet challenging question – “When can we start learning-based AL selection?”. We present a measure to assist in determining a proper start size. Experimental analysis demonstrates that the proposed measure correlates well with the AL target loss (i.e., the ultimate supervised loss on all labeled data). In practice, it can be tracked to examine the model without requiring a large validation set." }, { "heading": "A PROOF OF PROPOSITION 1", "text": "Figure A1: An illustration of Proposition 1: the blue curve represents the (expected) cross-entropy, and the two red curves are the lower and upper bounds. The value −log Ẑt characterizes the range of the bounds.

Proof. Denote X as the feature space and {1, . . . , J} as the label space. By Bayes' formula and the law of total probability, we have

RH[p(Ŷ|X)] = EX{H[p(Y|X), p(Ŷ|X)]}
= −∑_{x∈X} ∑_{y=1}^{J} p(Y = y | X = x) log p(Ŷ = y | X = x) · p(X = x)
= −∑_{y=1}^{J} ∑_{x∈X} p(X = x, Y = y) log[ p(Ŷ = y) p(X = x | Ŷ = y) / p(X = x) ]
= −∑_{y=1}^{J} ∑_{x∈X} p(X = x, Y = y) log p(Ŷ = y) − ∑_{y=1}^{J} ∑_{x∈X} p(X = x, Y = y) log[ p(X = x | Ŷ = y) / p(X = x) ]
= −∑_{y=1}^{J} p(Y = y) log p(Ŷ = y) − ∑_{x∈X} ∑_{y=1}^{J} p(X = x, Y = y) log p(X = x | Ŷ = y) + ∑_{x∈X} ∑_{y=1}^{J} p(X = x, Y = y) log p(X = x)
= H[p(Y), p(Ŷ)] + ∑_{x∈X} p(X = x) log p(X = x) − ∑_{x∈X} ∑_{y=1}^{J} p(X = x, Y = y) log p(X = x | Ŷ = y)
= H[p(Y), p(Ŷ)] − H[p(X)] − ∑_{x∈X} ∑_{y=1}^{J} p(X = x, Y = y) log p(X = x | Ŷ = y). (4)

We first give the lower bound. Note that p(X = x | Ŷ = y) ≤ 1 for any (x, y) ∈ X × [J], so equation 4 implies that

EX{H[p(Y|X), p(Ŷ|X)]} ≥ H[p(Y), p(Ŷ)] − H[p(X)].

To prove the upper bound, denote min_{(x,y)∈X×[J]} p(X = x | Ŷ = y) = Ẑ ∈ (0, 1). Then from equation 4,

EX{H[p(Y|X), p(Ŷ|X)]} ≤ H[p(Y), p(Ŷ)] − H[p(X)] − log Ẑ · ∑_{x∈X} ∑_{y=1}^{J} p(X = x, Y = y) = H[p(Y), p(Ŷ)] − H[p(X)] − log Ẑ." }, { "heading": "B MORE DISCUSSION", "text": "B.1 CONSISTENCY-BASED AL IN SUPERVISED LEARNING

We are also curious about how well our method performs under supervised learning, using only labeled samples. Following Yoo & Kweon (2019b), we start with 1000 labeled samples on CIFAR-10. As shown in Table 3, after 4 AL cycles (B = 500, totaling 3000 labels), uniform, k-center, entropy, and our method (consistency) achieve accuracies of 80.81%, 81.70%, 82.67%, and 82.75%, respectively. This shows that consistency still works even when our model is trained using only labeled samples. However, the improvement of consistency compared to the other baseline methods (especially entropy) is marginal.

B.2 OUT-OF-DISTRIBUTION AND CHALLENGING SAMPLES

In real-world scenarios, it is very likely that not all unlabeled data are relevant to the task. Therefore, if a sample retains high uncertainty under arbitrary perturbations, it is probably an out-of-distribution example (Lee et al., 2018). In addition, selecting the hardest samples is not preferred, because they could be “over-challenging” for the current model, as suggested by the study of curriculum learning (Bengio et al., 2009). It can be easily inferred that our proposed selection can avoid such cases (see equation 1).
More exploration of active learning with out-of-distribution samples is left for future work." } ]
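A sketch of how the start-size measure of Section 4.2 can be tracked in practice (our reading of the paper, not the authors' code): we assume p(Ŷ) is estimated as the average of the model's predicted class probabilities over the unlabeled pool, and p(Y) is taken to be uniform, as in Figure 6.

```python
import numpy as np

def start_size_measure(probs, p_y=None):
    """H[p(Y), p(Y_hat)] with p(Y_hat) estimated as the mean predicted distribution.

    probs: array of shape (num_pool_samples, J) of predicted class probabilities.
    p_y:   assumed label prior; defaults to uniform, as in Figure 6.
    """
    p_y_hat = probs.mean(axis=0)
    if p_y is None:
        p_y = np.full(probs.shape[1], 1.0 / probs.shape[1])
    return -np.sum(p_y * np.log(p_y_hat + 1e-12))

# Usage: retrain M_0 on start sets of increasing size K_0 and track the measure;
# once it essentially stops decreasing (around K_0 = 100 on CIFAR-10 in the
# paper's Figure 6), learning-based selection can be started.
probs = np.random.default_rng(0).dirichlet(np.ones(10), size=1000)
print(start_size_measure(probs))
```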
2019
CONSISTENCY-BASED SEMI-SUPERVISED ACTIVE LEARNING: TOWARDS MINIMIZING LABELING BUDGET
SP:b4c82616d2410a07ecce89da0e5dc9428f9209ae
[ "This paper proposes using Graph Neural Networks (GNNs) for type inference in dynamically typed languages. The key technique is to construct a type dependency graph and infer types on top of it. The type dependency graph contains edges specifying hard constraints derived from static analysis, as well as soft relationships specified by humans. Experiments on type prediction for TypeScript show better performance than previous methods, with or without user-specified types.", "A method to predict the likely types of program variables in TypeScript is presented. It consists of a translation of a program's type constraints and defined objects into a (hyper)graph, and a specialised neural message-passing architecture to learn from the generated graphs. Experiments show that the method substantially outperforms sound typing in the TypeScript compiler, as well as a recent method based on deep neural networks." ]
As gradual typing becomes increasingly popular in languages like Python and TypeScript, there is a growing need to infer type annotations automatically. While type annotations help with tasks like code completion and static error catching, these annotations cannot be fully determined by compilers and are tedious to annotate by hand. This paper proposes a probabilistic type inference scheme for TypeScript based on a graph neural network. Our approach first uses lightweight source code analysis to generate a program abstraction called a type dependency graph, which links type variables with logical constraints as well as name and usage information. Given this program abstraction, we then use a graph neural network to propagate information between related type variables and eventually make type predictions. Our neural architecture can predict both standard types, like number or string, as well as user-defined types that have not been encountered during training. Our experimental results show that our approach outperforms prior work in this space by 14% (absolute) on library types, while having the ability to make type predictions that are out of scope for existing techniques.
[ { "affiliations": [], "name": "Jiayi Wei" }, { "affiliations": [], "name": "Maruth Goyal" }, { "affiliations": [], "name": "Greg Durrett" }, { "affiliations": [], "name": "Isil Dillig" } ]
[ { "authors": [ "Miltiadis Allamanis", "Marc Brockschmidt", "Mahmoud Khademi" ], "title": "Learning to represent programs with graphs", "venue": null, "year": 2017 }, { "authors": [ "Davide Ancona", "Elena Zucca" ], "title": "Principal typings for java-like languages", "venue": "In ACM SIGPLAN Notices,", "year": 2004 }, { "authors": [ "Gavin Bierman", "Martı́n Abadi", "Mads Torgersen" ], "title": "Understanding typescript", "venue": "ECOOP 2014 – Object-Oriented Programming,", "year": 2014 }, { "authors": [ "Benjamin Chung", "Paley Li", "Francesco Zappa Nardelli", "Jan Vitek" ], "title": "Kafka: Gradual typing for objects", "venue": "In ECOOP 2018-2018 European Conference on Object-Oriented Programming,", "year": 2018 }, { "authors": [ "Yann Dauphin", "Gokhan Tur", "Dilek Z. Hakkani-Tur", "Larry P. Heck" ], "title": "Zero-shot learning for semantic utterance classification", "venue": "In ICLR,", "year": 2013 }, { "authors": [ "Yotam Eshel", "Noam Cohen", "Kira Radinsky", "Shaul Markovitch", "Ikuya Yamada", "Omer Levy" ], "title": "Named entity disambiguation for noisy text", "venue": "In CoNLL,", "year": 2017 }, { "authors": [ "Ali Farhadi", "Ian Endres", "Derek Hoiem", "David Forsyth" ], "title": "Describing objects by their attributes", "venue": null, "year": 2017 }, { "authors": [ "Zheng Gao", "Christian Bird", "Earl T. Barr" ], "title": "To type or not to type: Quantifying detectable bugs in javascript", "venue": "In Proceedings of the 39th International Conference on Software Engineering,", "year": 2017 }, { "authors": [ "Caglar Gulcehre", "Sungjin Ahn", "Ramesh Nallapati", "Bowen Zhou", "Yoshua Bengio" ], "title": "Pointing the unknown words", "venue": "In Proceedings of the ACL,", "year": 2016 }, { "authors": [ "Stefan Hanenberg", "Sebastian Kleinschmager", "Romain Robbes", "Éric Tanter", "Andreas Stefik" ], "title": "An empirical study on the impact of static typing on software maintainability", "venue": "Empirical Software Engineering,", "year": 2013 }, { "authors": [ "Vincent J. Hellendoorn", "Christian Bird", "Earl T. Barr", "Miltiadis Allamanis" ], "title": "Deep learning type inference", "venue": "In Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering,", "year": 2018 }, { "authors": [ "Abhinav Jangda", "Gaurav Anand" ], "title": "Predicting variable types in dynamically typed programming languages", "venue": "arXiv preprint arXiv:1901.05138,", "year": 2019 }, { "authors": [ "Diederik P. 
Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "Rabee Sohail Malik", "Jibesh Patra", "Michael Pradel" ], "title": "Nl2type: inferring javascript function types from natural language information", "venue": "In Proceedings of the 41st International Conference on Software Engineering,", "year": 2019 }, { "authors": [ "Lili Mou", "Ge Li", "Lu Zhang", "Tao Wang", "Zhi Jin" ], "title": "Convolutional neural networks over tree structures for programming language processing", "venue": "In AAAI,", "year": 2016 }, { "authors": [ "Benjamin C Pierce", "David N Turner" ], "title": "Local type inference", "venue": "ACM Transactions on Programming Languages and Systems (TOPLAS),", "year": 2000 }, { "authors": [ "Veselin Raychev", "Martin Vechev", "Andreas Krause" ], "title": "Predicting program properties from ”big code", "venue": "In Proceedings of the 42Nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages,", "year": 2015 }, { "authors": [ "Xiang Ren", "Wenqi He", "Meng Qu", "Clare R Voss", "Heng Ji", "Jiawei Han" ], "title": "Label noise reduction in entity typing by heterogeneous partial-label embedding", "venue": "In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2016 }, { "authors": [ "Jeremy G. Siek", "Walid Taha" ], "title": "Gradual typing for objects", "venue": "In ECOOP,", "year": 2007 }, { "authors": [ "Dmitriy Traytel", "Stefan Berghofer", "Tobias Nipkow" ], "title": "Extending hindley-milner type inference with coercive structural subtyping", "venue": "In Asian Symposium on Programming Languages and Systems,", "year": 2011 }, { "authors": [ "Michael M Vitousek", "Andrew M Kent", "Jeremy G Siek", "Jim Baker" ], "title": "Design and evaluation of gradual typing for python", "venue": "In ACM SIGPLAN Notices,", "year": 2014 }, { "authors": [ "Mingzhe Wang", "Yihe Tang", "Jian Wang", "Jia Deng" ], "title": "Premise selection for theorem proving by deep graph embedding", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Xiaolong Wang", "Yufei Ye", "Abhinav Gupta" ], "title": "Zero-shot recognition via semantic embeddings and knowledge graphs", "venue": null, "year": 2018 }, { "authors": [ "Zhaogui Xu", "Xiangyu Zhang", "Lin Chen", "Kexin Pei", "Baowen Xu" ], "title": "Python probabilistic type inference with natural language support", "venue": "In Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering,", "year": 2016 } ]
[ { "heading": null, "text": "As gradual typing becomes increasingly popular in languages like Python and TypeScript, there is a growing need to infer type annotations automatically. While type annotations help with tasks like code completion and static error catching, these annotations cannot be fully determined by compilers and are tedious to annotate by hand. This paper proposes a probabilistic type inference scheme for TypeScript based on a graph neural network. Our approach first uses lightweight source code analysis to generate a program abstraction called a type dependency graph, which links type variables with logical constraints as well as name and usage information. Given this program abstraction, we then use a graph neural network to propagate information between related type variables and eventually make type predictions. Our neural architecture can predict both standard types, like number or string, as well as user-defined types that have not been encountered during training. Our experimental results show that our approach outperforms prior work in this space by 14% (absolute) on library types, while having the ability to make type predictions that are out of scope for existing techniques." }, { "heading": "1 INTRODUCTION", "text": "Dynamically typed languages like Python, Ruby, and Javascript have gained enormous popularity over the last decade, yet their lack of a static type system comes with certain disadvantages in terms of maintainability (Hanenberg et al., 2013), the ability to catch errors at compile time, and code completion support (Gao et al., 2017). Gradual typing can address these shortcomings: program variables have optional type annotations so that the type system can perform static type checking whenever possible (Siek & Taha, 2007; Chung et al., 2018). Support for gradual typing now exists in many popular programming languages (Bierman et al., 2014; Vitousek et al., 2014), but due to their heavy use of dynamic language constructs and the absence of principal types (Ancona & Zucca, 2004), compilers cannot perform type inference using standard algorithms from the programming languages community (Bierman et al., 2014; Traytel et al., 2011; Pierce & Turner, 2000), and manually adding type annotations to existing codebases is a tedious and error-prone task. As a result, legacy programs in these languages do not reap all the benefits of gradual typing.\nTo reduce the human effort involved in transitioning from untyped to statically typed code, this work focuses on a learning-based approach to automatically inferring likely type annotations for untyped (or partially typed) codebases. Specifically, we target TypeScript, a gradually-typed variant of Javascript for which plenty of training data is available in terms of type-annotated programs. While there has been some prior work on inferring type annotations for TypeScript using machine learning (Hellendoorn et al., 2018; Raychev et al., 2015), prior work in this space has several shortcomings. First, inference is restricted to a finite dictionary of types that have been observed during training time—i.e., they cannot predict any user-defined data types. Second, even without considering user-defined types, the accuracy of these systems is relatively low, with the current state-of-theart achieving 56.9% accuracy for primitive/library types (Hellendoorn et al., 2018). 
Finally, these techniques can produce inconsistent results in that they may predict different types for different token-level occurrences of the same variable.\nIn this paper, we propose a new probabilistic type inference algorithm for TypeScript to address these shortcomings using a graph neural network architecture (GNN) (Veličković et al., 2018; Li et al., 2016; Mou et al., 2016). Our method uses lightweight source code analysis to transform the program into a new representation called a type dependency graph, where nodes represent type variables and labeled hyperedges encode relationships between them. In addition to expressing logical constraints (e.g., subtyping relations) as in traditional type inference, a type dependency graph also incorporates contextual hints involving naming and variable usage.\nGiven such a type dependency graph, our approach uses a GNN to compute a vector embedding for each type variable and then performs type prediction using a pointer-network-like architecture (Vinyals et al., 2015). The graph neural network itself requires handling a variety of hyperedge types—some with variable numbers of arguments—for which we define appropriate graph propagation operators. Our prediction layer compares the vector embedding of a type variable with vector representations of candidate types, allowing us to flexibly handle user-defined types that have not been observed during training. Moreover, our model predicts consistent type assignments by construction because it makes variable-level rather than token-level predictions.\nWe implemented our new architecture as a tool called LAMBDANET and evaluated its performance on real-world TypeScript projects from Github. When only predicting library types, LAMBDANET has a top1 accuracy of 75.6%, achieving a significant improvement over DeepTyper (61.5%). In terms of overall accuracy (including user-defined types), LAMBDANET achieves a top1 accuracy of around 64.2%, which is 55.2% (absolute) higher than the TypeScript compiler.\nContributions. This paper makes the following contributions: (1) We propose a probabilistic type inference algorithm for TypeScript that uses deep learning to make predictions from the type dependency graph representation of the program. (2) We describe a technique for computing vector embeddings of type variables using GNNs and propose a pointer-network-like method to predict user-defined types. (3) We experimentally evaluate our approach on hundreds of real-world TypeScript projects and show that our method significantly improves upon prior work." }, { "heading": "2 MOTIVATING EXAMPLE AND PROBLEM SETTING", "text": "Figure 1 shows a (type-annotated) TypeScript program. Our goal in this work is to infer the types shown in the figure, given an unannotated version of this code. We now justify various aspects of our solution using this example.\nTyping constraints. The use of certain functions/operators in Figure 1 imposes hard constraints on the types that can be assigned to program variables. For example, in the forward function, variables x, y must be assigned a type that supports a concat operation; hence, x, y could have types like string, array, or Tensor, but not, for example, boolean. This observation motivates us to incorporate typing constraints into our model.\nContextual hints. Typing constraints are not always sufficient for determining the intended type of a variable. 
For example, for variable network in function restore, the typing constraints require network’s type to be a class with a field called time, but there can be many classes that have such an attribute (e.g., Date). However, the similarity between the variable name network and the class name MyNetwork hints that network might have type MyNetwork. Based on this belief, we can further propagate the return type of the library function readNumber (assuming we know it is number) to infer that the type of the time field in MyNetwork is likely to be number.
Need for type dependency graph. There are many ways to view programs—e.g., as token sequences, abstract syntax trees, control flow graphs, etc. However, none of these representations is particularly helpful for inferring the most likely type annotations. Thus, our method uses static analysis to infer a set of predicates that are relevant to the type inference problem and represents these predicates using a program abstraction called the type dependency graph.
Handling user-defined types. As mentioned in Section 1, prior techniques can only predict types seen during training. However, the code from Figure 1 defines its own class called MyNetwork and later uses a variable of type MyNetwork in the restore method. A successful model for this task therefore must dynamically make inferences about user-defined types based on their definitions." }, { "heading": "2.1 PROBLEM SETTING", "text": "Our goal is to train a type inference model that can take as input an entirely (or partially) unannotated TypeScript project g and output a probability distribution of types for each missing annotation. The prediction space is Y(g) = Ylib ∪ Yuser(g), where Yuser(g) is the set of all user-defined types (classes/interfaces) declared within g, and Ylib is a fixed set of commonly-used library types. Following prior work in this space (Hellendoorn et al., 2018; Raychev et al., 2015; Xu et al., 2016), we limit the scope of our prediction to non-polymorphic and non-function types. That is, we do not distinguish between types such as List<T>, List<number>, List<string> etc., and consider them all to be of type List. Similarly, we also collapse function types like number → string and string → string into a single type called Function. We leave the extension of predicting structured types as future work." }, { "heading": "3 TYPE DEPENDENCY GRAPH", "text": "A type dependency graph G = (N,E) is a hypergraph where nodes N represent type variables and labeled hyperedges E encode relationships between them. We extract the type dependency graph of a given TypeScript program by performing static analysis on an intermediate representation of its source code, which allows us to associate a unique variable with each program sub-expression. As an illustration, Figure 2 shows the intermediate representation of the code from Figure 1.
Intuitively, a type dependency graph encodes properties of type variables as well as relationships between them. Each hyperedge corresponds to one of the predicates shown in Table 1. We partition our predicates (i.e., hyperedges) into two classes, namely Logical and Contextual, where the former category can be viewed as imposing hard constraints on type variables and the latter category encodes useful hints extracted from names of variables, functions, and classes.
Figure 3 shows some of the hyperedges in the type dependency graph G extracted from the intermediate representation in Figure 2. 
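Before walking through Figure 3, here is a rough sketch of how such a graph of type-variable nodes and labeled hyperedges could be represented in code; the data structures below are our own illustration, not LAMBDANET's actual implementation:

```python
# Illustrative encoding (ours): a type dependency graph as type-variable
# nodes plus labeled hyperedges, each with an ordered argument tuple.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Hyperedge:
    label: str     # e.g. "Subtype", "Object", "Name", "NameSimilar", "Usage"
    args: tuple    # the type variables this hyperedge connects, in order
    meta: tuple = ()  # extra identifiers, e.g. object field names

@dataclass
class TypeDependencyGraph:
    nodes: set = field(default_factory=set)   # type variables: "t1", "t2", ...
    edges: list = field(default_factory=list)

    def add(self, label, *args, meta=()):
        self.nodes.update(args)
        self.edges.append(Hyperedge(label, args, meta))

# Two of the hyperedges discussed for Figure 3, in this toy encoding:
g = TypeDependencyGraph()
g.add("Subtype", "t13", "t5")  # returned expression's type flows to return type
g.add("Object", "t8", "t1", "t2", "t9", meta=("name", "time", "forward"))
```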
As shown in Figure 3(A), our analysis extracts a predicate Subtype(τ13, τ5) from this code because the type variable associated with the returned expression v4 must be a subtype of the enclosing function’s return type. Similarly, as shown in Figure 3(B), our analysis extracts a predicate Object_{name,time,forward}(τ8, τ1, τ2, τ9) because τ8 is an object type whose name, time, and forward members are associated with type variables τ1, τ2, τ9, respectively.
In contrast to the Subtype and Object predicates that impose hard constraints on type variables, the next two hyperedges shown in Figure 3 encode contextual clues obtained from variable names. Figure 3(C) indicates that type variable τ14 is associated with an expression named restore. While this kind of naming information is invisible to TypeScript’s structural type system, it serves as a useful input feature for our GNN architecture described in Section 4.
In addition to storing the unique variable name associated with each type variable, the type dependency graph also encodes similarity between variable and class names. The names of many program variables mimic their types: for example, instances of a class called MyNetwork might often be called network or network1. To capture this correspondence, our type dependency graph also contains a hyperedge called NameSimilar that connects type variables α and β if their corresponding tokenized names have a non-empty intersection.1
As shown in Table 1, there is a final type of hyperedge called Usage that facilitates type inference of object types. In particular, if there is an object access var y = x.l, we extract the predicate Usage_l((τx, τy), (α1, β1), . . . , (αk, βk)) to connect x and y’s type variables with all classes αi that contain an attribute/method βi whose name is l. Figure 3 shows a Usage hyperedge extracted from the code in Figure 2. As we will see in the next section, our GNN architecture utilizes a special attention mechanism to pass information along these usage edges.
1During tokenization, we split identifier names into tokens based on underscores and camel case naming. More complex schemes are possible, but we found this simple method to be effective." }, { "heading": "4 NEURAL ARCHITECTURE", "text": "Our neural architecture for making type predictions consists of two main parts. First, a graph neural network passes information along the type dependency graph to produce a vector-valued embedding for each type variable based on its neighbors. Second, a pointer network compares each variable’s type embedding to the embedding vectors of candidate types (both computed from the previous phase) to place a distribution over possible type assignments.
Given a type dependency graph G = (N,E), we first compute a vector embedding $v_n$ for each n ∈ N such that these vectors implicitly encode type information. Because our program abstraction is a graph, a natural choice is to use a graph neural network architecture. From a high level, this architecture takes in initial vectors $v_n^0$ for each node n, performs K rounds of message-passing in the graph neural network, and returns the final representation for each type variable.
In more detail, let $v_n^t$ denote the vector representation of node n at the t-th step, where each round consists of a message passing and an aggregation step. The message passing step computes a vector-valued update to send to the j-th argument of each hyper-edge e ∈ E connecting nodes $p_1, \ldots, p_a$. 
Then, once all the messages have been computed, the aggregation step computes a new embedding $v_n^t$ for each n by combining all messages sent to n:
$$m^t_{e,p_j} = \mathrm{Msg}_{e,j}(v^{t-1}_{p_1}, \ldots, v^{t-1}_{p_a}), \qquad v^t_n = \mathrm{Aggr}(v^{t-1}_n, \{m^t_{e,n} \mid e \in \mathcal{N}(n)\})$$
Here, $\mathcal{N}$ is the neighborhood function, and $\mathrm{Msg}_e$ denotes a particular neural operation that depends on the type of the edge (FIXED, NARY, or NPAIRS), which we will describe later.
Initialization. In our GNN, nodes correspond to type variables and each type variable is associated either with a program variable or a constant. We refer to nodes representing constants (resp. variables) as constant (resp. variable) nodes, and our initialization procedure works differently depending on whether or not n is a constant node. Since the types of each constant are known, we set the initial embedding for each constant node of type τ (e.g., string) to be a trainable vector $c_\tau$ and do not update it during GNN iterations (i.e., $\forall t,\, v^t_n = c_\tau$). On the other hand, if n is a variable node, then we have no information about its type during initialization; hence, we initialize all variable nodes using a generic trainable initial vector (i.e., they are initialized to the same vector but updated to different values during GNN iterations).
Message passing. Our Msg operator depends on the category of edge it corresponds to (see Table 1); however, weights are shared between all instances of the same hyperedge type. In what follows, we describe the neural layer that is used to compute messages for each type of hyperedge:
• FIXED: Since these edges correspond to fixed arity predicates (and the position of each argument matters), we compute the message of the j-th argument by first concatenating the embedding vectors of all arguments and then feeding the result vector to a 2-layer MLP for the j-th argument. In addition, since hyperedges of type Access have an identifier, we also embed the identifier as a vector and treat it as an extra argument. (We describe the details of identifier embedding later in this section.)
• NARY: Since NARY edges connect a variable number of nodes, we need an architecture that can deal with this challenge. In our current implementation of LAMBDANET, we use a simple architecture that is amenable to batching. Specifically, given an NARY edge $E_{l_1,\ldots,l_k}(\alpha, \beta_1, \ldots, \beta_k)$ (for Function and Call, the labels $l_j$ are argument positions), the set of messages for α is computed as $\{\mathrm{MLP}_\alpha(v_{l_j} \,\|\, v_{\beta_j}) \mid j = 1 \ldots k\}$, and the message for each $\beta_j$ is computed as $\mathrm{MLP}_\beta(v_{l_j} \,\|\, v_\alpha)$. Observe that we compute k different messages for α, and the message for each $\beta_j$ only depends on the vector embedding of α and its position j, but not the vector embeddings of the other $\beta_j$’s.2
2In our current implementation, this is reducible to multiple FIXED edges. However, NARY edges could generally use more complex pooling over their arguments to send more sophisticated messages.
• NPAIRS: This is a special category associated with $\mathrm{Usage}_l((\alpha^*, \beta^*), (\alpha_1, \beta_1), \ldots, (\alpha_k, \beta_k))$. Recall that this kind of edge arises from expressions of the form b = a.l and is used to connect a and b’s type variables with all classes $\alpha_i$ that contain an attribute/method $\beta_i$ with label l. Intuitively, if a’s type embedding is very similar to a type C, then b’s type will likely be the same as C.l’s type. Following this reasoning, we use dot-product based attention to compute the messages for $\alpha^*$ and $\beta^*$. Specifically, we use $\alpha^*$ and the $\alpha_j$’s as attention keys and the $\beta_j$’s as attention values to compute the 
However, NARY edges could generally use more complex pooling over their arguments to send more sophisticated messages.\nmessage for β∗ (and switch the key-value roles to compute the message for α∗): mte,β∗ = ∑ j wjv t−1 βj w = softmax(a) aj = vαj · vα∗\nAggregation. Recall that the aggregation step combines all messages sent to node n to compute the new embedding vtn. To achieve this goal, we use a variant of the attention-based aggregation operator proposed in graph attention networks (Veličković et al., 2018).\nvtn = Aggr(v t−1 n , {mte,n|e ∈ N (n)}) = vt−1n + ∑ e∈N (n) weM1m t e,n (1)\nwhere we is the attention weight for the message coming from edge e. Specifically, the weights we are computed as softmax(a), where ae = LeakyReLu(vt−1n ·M2mte,n) , and M1 and M2 are trainable matrices. Similar to the original GAT architecture, we set the slope of the LeakyReLu to be 0.2, but we use dot-product to compute the attention weights instead of a linear model.\nIdentifier embedding. Like in Allamanis et al. (2017), we break variable names into word tokens according to camel case and underscore rules and assign a trainable vector for all word tokens that appear more than once in the training set. For all other tokens, unlike Allamanis et al. (2017), which maps them all into one single <Unknown> token, we randomly mapped them into one of the <Unknown-i> tokens, where i ranges from 0 to 50 in our current implementation. This mapping is randomly constructed every time we run the GNN and hence helps our neural networks to distinguish different tokens even if they are rare tokens. We train these identifier embeddings end-to-end along with the rest of our architecture.\nPrediction Layer. For each type variable n and each candidate type c ∈ Y(g), we use a MLP to compute a compatibility score sn,c = MLP(vn,uc), where uc is the embedding vector for c. If c ∈ Ylib, vc is a trainable vector for each library type c; if c ∈ Yuser(g), then it corresponds to a node nc in the type dependency graph of g, so we just use the embedding vector for nc and set uc = vnc . Formally, this approach looks like a pointer network (Vinyals et al., 2015), where we use the embeddings computed during the forward pass to predict “pointers” to those types.\nGiven these compatibility scores, we apply a softmax layer to turn them into a probability distribution. i.e., Pn(c|g) = exp(sn,c)/ ∑ c′ exp(sn,c′). During test time, we max over the probabilities to compute the most likely (or top-N) type assignments." }, { "heading": "5 EVALUATION", "text": "In this section, we describe the results of our experimental evaluation, which is designed to answer the following questions: (1) How does our approach compare to previous work? (2) How well can our model predict user-defined types? (3) How useful is each of our model’s components?\nDataset. Similar to Hellendoorn et al. (2018), we train and evaluate our model on popular opensource TypeScript projects taken from Github. Specifically, we collect 300 popular TypeScript projects from Github that contain between 500 to 10, 000 lines of code and where at least 10% of type annotations are user-defined types. Note that each project typically contains hundreds to thousands of type variables to predict, and these projects in total contain about 1.2 million lines of TypeScript code. Among these 300 projects, we use 60 for testing, 40 for validation, and the remainder for training.\nCode Duplication. We ran jscpd3 on our entire data set and found that only 2.7% of the code is duplicated. 
Furthermore, most of these duplicates are intra-project. Thus, we believe that code duplication is not a severe problem in our dataset.
Preprocessing. Because some of the projects in our benchmark suite are only sparsely type annotated, we augment our labeled training data by using the forward type inference functionality provided by the TypeScript compiler.4 The compiler cannot infer the type of every variable and leaves many labeled as any during failed inference; thus, we exclude any labels in our data set.
3A popular code duplication detection tool, available at https://github.com/kucherenko/jscpd.
4Like in many modern programming languages with forward type inference (e.g., Scala, C#, Swift), a TypeScript programmer does not need to annotate every definition in order to fully specify the types of a program. Instead, they only need to annotate some key places (e.g., function parameters and return types, class members) and let the forward inference algorithm figure out the rest of the types. Therefore, in our training set, we can keep the user annotations on these key places and run the TS compiler to recover these implicitly specified types as additional labels.
Furthermore, at test time, we evaluate our technique only on annotations that are manually added by developers. This is the same methodology used by Hellendoorn et al. (2018), and, since developers often add annotations where code is most unclear, this constitutes a challenging setting for type prediction.
Prediction Space. As mentioned in Section 2.1, our approach takes an entire TypeScript project g as its input, and the corresponding type prediction space is Y(g) = Ylib ∪ Yuser(g). In our experiments, we set Yuser(g) to be all classes/interfaces defined in g (except when comparing with DeepTyper, where we set Yuser(g) to be empty), and for Ylib, we select the top-100 most common types in our training set. Note that this covers 98% (resp. 97.5%) of the non-any annotations for the training (resp. test) set.
Hyperparameters. We selected hyperparameters by tuning on a validation set as we were developing our model. We use 32-dimensional type embedding vectors, and all MLP transformations in our model use one hidden layer of 32 units, except the MLP for computing scores in the prediction layer, which uses three hidden layers of sizes 32, 16, and 8 (and size 1 for output). GNN message-passing layers from different time steps have independent weights.
We train our model using Adam (Kingma & Ba, 2014) with default parameters ($\beta_1 = 0.9$, $\beta_2 = 0.999$) and set the learning rate to be $10^{-3}$ initially but linearly decrease it to $10^{-4}$ until the 30th epoch. We use a weight decay of $10^{-4}$ for regularization and stop the training once the loss on the validation set starts to increase (which usually happens around 30 epochs). We use the type annotations from a single project as a minibatch and limit the maximal batch size (via downsampling) to be the median of our training set to prevent any single project from having too much influence.
Implementation Details. We implemented LAMBDANET in Scala, building on top of the Java high-performance tensor library Nd4j (nd4), and used a custom automatic differentiation library to implement our GNN. Our GNN implementation does not use an adjacency matrix to represent GNN layers; instead, we build the hyperedge connections directly from our type dependency graph and perform batching when computing the messages for all hyperedges of the same type.
Code Repository. We have made our code publicly available on Github.5" }, { "heading": "5.1 COMPARISON WITH DEEPTYPER", "text": "In this experiment, we compare LAMBDANET’s performance with DeepTyper (Hellendoorn et al., 2018), which treats programs as sequences of tokens and uses a bidirectional RNN to make type predictions. 
Since DeepTyper can only predict types from a fixed vocabulary, we fix both LAMBDANET and DeepTyper’s prediction space to Ylib and measure their corresponding top-1 accuracy. The original DeepTyper model makes predictions for each variable occurrence rather than declaration. In order to conduct a meaningful comparison between DeepTyper and LAMBDANET, we implemented a variant of DeepTyper that makes a single prediction for each variable (by averaging over the RNN internal states of all occurrences of the same variable before making the prediction). Moreover, for a fair comparison, we made sure both DeepTyper and LAMBDANET are using the same improved naming feature that splits words into tokens.
Our main results are summarized below, where the Declaration (resp. Occurrence) column shows accuracy per variable declaration (resp. token-level occurrence). Note that we obtain occurrence-level accuracy from declaration-level accuracy by weighting each variable by its number of occurrences.

Top1 Accuracy (%):
Model                  | Declaration | Occurrence
DeepTyper              | 61.5        | 67.4
LAMBDANET_lib (K=6)    | 75.6        | 77.0

5See https://github.com/MrVPlusOne/LambdaNet.
As we can see from the table, LAMBDANET achieves significantly better results compared to DeepTyper. In particular, LAMBDANET outperforms DeepTyper by 14.1% (absolute) for declaration-level accuracy and by 9.6% for occurrence-level accuracy.
Note that the accuracy we report for DeepTyper (67.4%) is not directly comparable to the original accuracy reported in Hellendoorn et al. (2018) (56.9%) for the following reasons. While we perform static analysis and have a strict distinction between library and user-defined types and only evaluate both tools on library type annotations in this experiment, their implementation treats types as tokens and does not have this distinction. Hence, their model also considers a much larger prediction space consisting of many user-defined types—most of which are never used outside of the project in which they are defined—and is also evaluated on a different set of annotations than ours." }, { "heading": "5.2 PREDICTING USER-DEFINED TYPES", "text": "As mentioned earlier, our approach differs from prior work in that it is capable of predicting user-defined types; thus, in our second experiment, we extend LAMBDANET’s prediction space to also include user-defined types. However, since such types are not in the prediction space of prior work (Hellendoorn et al., 2018), we implemented two simpler baselines that can be used to calibrate our model’s performance. Our first baseline is the type inference performed by the TypeScript compiler, which is sound but incomplete (i.e., if it infers a type, it is guaranteed to be correct, but it infers type any for most variables).6 Our second baseline, called SIMILARNAME, is inspired by the similarity between variable names and their corresponding types; it predicts the type of each variable v to be the type whose name shares the largest number of common word tokens with v.
The results of this experiment are shown in Table 2, which shows the top-1 and top-5 accuracy for both user-defined and library types individually as well as overall accuracy. 
In terms of overall prediction accuracy, LAMBDANET achieves 64.2% for top-1 and 84.5% for top-5, significantly outperforming both baselines. Our results suggest that our fusion of logical and contextual information to predict types is far more effective than rule-based incorporation of these in isolation." }, { "heading": "5.3 ABLATION STUDY", "text": "Table 3 shows the results of an ablation study in which (a) we vary the number of message-passing iterations (left) and (b) disable various features of our architecture design (right). As we can see from the left table, accuracy continues to improve as we increase the number of message-passing iterations as high as 6; this gain indicates that our network learns to perform inference over long distances. The right table shows the impact of several of our design choices on the overall result.
6For inferring types from the TypeScript compiler, we use the code provided by Hellendoorn et al. (2018). We found this method had a slightly lower accuracy than reported in their work.
For example, if we do not use Contextual edges (resp. Logical edges), overall accuracy drops by 14.5% (resp. 25.8%). These drops indicate that both kinds of predicates are crucial for achieving good accuracy. We also see that the attention layer for NPAIR makes a significant difference for both library and user-defined types. Finally, Simple Aggregation is a variant of LAMBDANET that uses a simpler aggregation operation which replaces the attention-based weighted sum in Eq. (1) with a simple average. As indicated by the last row of Table 3 (right), attention-based aggregation makes a substantial difference for user-defined types." }, { "heading": "5.4 COMPARISON WITH JSNICE", "text": "Since JSNice (Raychev et al., 2015) cannot properly handle class definitions and user-defined types, for a meaningful comparison, we compared both tools’ performance on top-level functions randomly sampled from our test set. We filtered out functions whose parameters are not library types and manually ensured that all the dependency definitions are also included. In this way, we constructed a small benchmark suite consisting of 41 functions. Among the 107 function parameter and return type annotations, LAMBDANET correctly predicted 77 of them, while JSNice only got 48 of them right. These results suggest that LAMBDANET outperforms JSNice, even when evaluated only on the places where JSNice is applicable." }, { "heading": "6 RELATED WORK", "text": "Type Inference using Statistical Methods. There are several previous works on predicting likely type annotations for dynamically typed languages: Raychev et al. (2015) and Xu et al. (2016) use structured inference models for Javascript and Python, but their approaches do not take advantage of deep learning and are limited to a very restricted prediction space. Hellendoorn et al. (2018) and Jangda & Anand (2019) model programs as sequences and AST trees and apply deep learning models (RNNs and Tree-RNNs) for TypeScript and Python programs. Malik et al. (2019) make use of a different source of information and take documentation strings as part of their input. However, all these previous works are limited to predicting types from a fixed vocabulary.
Graph Embedding of Programs. Allamanis et al. (2017) are the first to use GNNs to obtain deep embeddings of programs, but they focus on predicting variable names and misuses for C# and rely on static type information to construct the program graph. Wang et al. 
(2017) use GNNs to encode mathematical formulas for premise selection in automated theorem proving. The way we encode types has some similarity to how they encode quantified formulas, but while their focus is on higher-order formulas, our problem requires encoding object types. Veličković et al. (2018) are the first to use an attention mechanism in GNNs. While they use attention to compute node embeddings from messages, we use attention to compute certain messages from node embeddings.
Predicting from an Open Vocabulary. Predicting unseen labels at test time poses a challenge for traditional machine learning methods. For computer vision applications, solutions might involve looking at object attributes (Farhadi et al., 2017) or label similarity (Wang et al., 2018); for natural language, similar techniques are applied to generalize across semantic properties of utterances (Dauphin et al., 2013), entities (Eshel et al., 2017), or labels (Ren et al., 2016). Formally, most of these approaches compare an embedding of an input to some embedding of the label; what makes our approach a pointer network (Vinyals et al., 2015) is that our type encodings are derived during the forward pass on the input, similar to unknown words for machine translation (Gulcehre et al., 2016)." }, { "heading": "7 CONCLUSIONS", "text": "We have presented LAMBDANET, a neural architecture for type inference that combines the strength of explicit program analysis with graph neural networks. LAMBDANET not only outperforms other state-of-the-art tools when predicting library types, but can also effectively predict user-defined types that have not been encountered during training. Our ablation studies demonstrate the usefulness of our proposed logical and contextual hyperedges.
For future work, there are several potential improvements and extensions to our current system. One limitation of our current architecture is the simplified treatment of function types and generic types (i.e., collapsing them into their non-generic counterparts). Extending the prediction space to also include structured types would allow us to make full use of the rich type systems many modern languages such as TypeScript provide. Another important direction is to enforce hard constraints during inference such that the resulting type assignments are guaranteed to be consistent." }, { "heading": "ACKNOWLEDGMENTS", "text": "We would like to thank the DeepTyper authors, Vincent J. Hellendoorn, Christian Bird, Earl T. Barr, and Miltiadis Allamanis, for sharing their data set and helping us set up our experimental comparisons. We also thank the ICLR reviewers for their insightful comments and constructive suggestions. Finally, we would also like to thank Shankara Pailoor, Yuepeng Wang, Jocelyn Chen, and other UToPiA group members for their kind support and useful feedback. This project was supported in part by NSF grant CCF-1762299." } ]
2,020
LAMBDANET: PROBABILISTIC TYPE INFERENCE USING GRAPH NEURAL NETWORKS
SP:3489d6d9dde3bec6f8d50f309d28572c393eac61
[ "C1. Transformers (without positional encodings and without layer normalization), with 2 attention heads of dimension 1 and feed-forward layers (FFN) with 4 hidden nodes, are universal approximators of continuous permutation-equivariant functions f of compact support, relative to any Lp metric (1 <= p < \\infty). (Thm. 2, p. 3). (Without positional encodings, a function f computed by a Transformer is permutation equivariant: f(P(X)) = P(f(X)) for any permutation P of the columns of X, which are the vector encodings of the input tokens.)", "This paper tries to analyse the Transformer, widely applied building block of a neural network component, to improve understanding of the internals of the model. The analysis starts showing that the transformer blocks generate permutation equivalent maps and then shows that the transformer can approximate any permutation equivalent map in a compact domain with arbitrary precision. Three key steps are developed and used to prove the universal approximation of arbitrary permutation equivalent map: 1) quantization of input via feed-forward layers, 2) contextual mapping via attention layers, and 3) value mapping via feed-forward layers. By introducing positional embeddings, the paper relaxes the restriction on permutation equivalence and proves that the Transformer is a universal approximator of any sequence to sequence function." ]
Despite the widespread adoption of Transformer models for NLP tasks, the expressive power of these models is not well-understood. In this paper, we establish that Transformer models are universal approximators of continuous permutation equivariant sequence-to-sequence functions with compact support, which is quite surprising given the amount of shared parameters in these models. Furthermore, using positional encodings, we circumvent the restriction of permutation equivariance, and show that Transformer models can universally approximate arbitrary continuous sequence-to-sequence functions on a compact domain. Interestingly, our proof techniques clearly highlight the different roles of the self-attention and the feed-forward layers in Transformers. In particular, we prove that fixed width self-attention layers can compute contextual mappings of the input sequences, playing a key role in the universal approximation property of Transformers. Based on this insight from our analysis, we consider other simpler alternatives to self-attention layers and empirically evaluate them.
[ { "affiliations": [], "name": "Chulhee Yun" }, { "affiliations": [], "name": "Srinadh Bhojanapalli" }, { "affiliations": [], "name": "Ankit Singh Rawat" }, { "affiliations": [], "name": "Sashank J. Reddi" } ]
[ { "authors": [ "Nir Ailon", "Bernard Chazelle" ], "title": "The fast Johnson–Lindenstrauss transform and approximate nearest neighbors", "venue": "SIAM Journal on computing,", "year": 2009 }, { "authors": [ "François Chollet" ], "title": "Xception: Deep learning with depthwise separable convolutions", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Kevin Clark", "Urvashi Khandelwal", "Omer Levy", "Christopher D Manning" ], "title": "What does BERT look at? an analysis of BERT’s attention", "venue": null, "year": 1906 }, { "authors": [ "Andy Coenen", "Emily Reif", "Ann Yuan", "Been Kim", "Adam Pearce", "Fernanda Viégas", "Martin Wattenberg" ], "title": "Visualizing and measuring the geometry of BERT", "venue": null, "year": 1906 }, { "authors": [ "George Cybenko" ], "title": "Approximation by superpositions of a sigmoidal function", "venue": "Mathematics of control, signals and systems,", "year": 1989 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Yunchao Gong", "Sanjiv Kumar", "Henry A Rowley", "Svetlana Lazebnik" ], "title": "Learning binary codes for high-dimensional data using bilinear projections", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2013 }, { "authors": [ "Boris Hanin", "Mark Sellke" ], "title": "Approximating continuous functions by relu nets of minimal width", "venue": "arXiv preprint arXiv:1710.11278,", "year": 2017 }, { "authors": [ "John Hewitt", "Christopher D Manning" ], "title": "A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)", "venue": null, "year": 2019 }, { "authors": [ "Kurt Hornik" ], "title": "Approximation capabilities of multilayer feedforward networks", "venue": "Neural networks,", "year": 1991 }, { "authors": [ "Lukasz Kaiser", "Aidan N Gomez", "Francois Chollet" ], "title": "Depthwise separable convolutions for neural machine translation", "venue": "arXiv preprint arXiv:1706.03059,", "year": 2017 }, { "authors": [ "Hongzhou Lin", "Stefanie Jegelka" ], "title": "ResNet with one-neuron hidden layers is a universal approximator", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "RoBERTa: A robustly optimized BERT pretraining approach", "venue": null, "year": 1907 }, { "authors": [ "Zhou Lu", "Hongming Pu", "Feicheng Wang", "Zhiqiang Hu", "Liwei Wang" ], "title": "The expressive power of neural networks: A view from the width", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Minh-Thang Luong", "Hieu Pham", "Christopher D. 
Manning" ], "title": "Effective approaches to attentionbased neural machine translation", "venue": "In Empirical Methods in Natural Language Processing (EMNLP),", "year": 2015 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Jorge Pérez", "Javier Marinković", "Pablo Barceló" ], "title": "On the Turing completeness of modern neural network architectures", "venue": "arXiv preprint arXiv:1901.03429,", "year": 2019 }, { "authors": [ "Alec Radford", "Karthik Narasimhan", "Tim Salimans", "Ilya Sutskever" ], "title": "Improving language understanding by generative pre-training", "venue": "Technical Report,", "year": 2018 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "Technical Report,", "year": 2019 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang" ], "title": "SQuAD: 100,000+ questions for machine comprehension of text", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Akiyoshi Sannai", "Yuuki Takai", "Matthieu Cordonnier" ], "title": "Universal approximations of permutation invariant/equivariant functions by deep neural networks", "venue": "arXiv preprint arXiv:1903.01939,", "year": 2019 }, { "authors": [ "Laurent Sifre", "Stéphane Mallat" ], "title": "Rigid-motion scattering for image classification", "venue": "Ph. D. dissertation,", "year": 2014 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Jesse Vig", "Yonatan Belinkov" ], "title": "Analyzing the structure of attention in a transformer language model", "venue": "arXiv preprint arXiv:1906.04284,", "year": 2019 }, { "authors": [ "Adina Williams", "Nikita Nangia", "Samuel Bowman" ], "title": "A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 1112–1122", "venue": "Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Felix Wu", "Angela Fan", "Alexei Baevski", "Yann N Dauphin", "Michael Auli" ], "title": "Pay less attention with lightweight and dynamic convolutions", "venue": null, "year": 1901 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime G. Carbonell", "Ruslan Salakhutdinov", "Quoc V. Le" ], "title": "XLNet: Generalized autoregressive pretraining for language understanding", "venue": null, "year": 1906 }, { "authors": [ "Yukun Zhu", "Ryan Kiros", "Rich Zemel", "Ruslan Salakhutdinov", "Raquel Urtasun", "Antonio Torralba", "Sanja Fidler" ], "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Self-attention based Transformer networks (Vaswani et al., 2017) have been at the center of the recent progress on various natural language processing (NLP) tasks, including machine translation (Vaswani et al., 2017), language modeling (Radford et al., 2018; 2019), and question answering (Devlin et al., 2018; Yang et al., 2019; Liu et al., 2019). All these tasks involve learning models that map an input sequence of tokens to an output sequence of tokens. Transformers make it feasible to train large models to approximate these sequence-to-sequence functions due to their ability to process the input tokens in a parallel way, as opposed to the sequential nature of RNNs and LSTMs.\nA Transformer block consists of two kinds of layers: a self-attention layer and a token-wise feedforward layer, with skip connections present in both layers. The self-attention layer transforms each input token embedding using a weighted combination of the embeddings of all tokens in the input sequence, where weights are generated by pairwise dot-products among the input token embeddings. The token-wise feed-forward layer then independently processes each of these modified input token embeddings without any interaction among them. Notably, Transformers employ parameter reuse across tokens, as both layers use the same parameters to process each token. Moreover, Transformers have to rely solely on the pairwise dot-products to capture interaction between the input tokens.\nGiven the parameter sharing and limited interactions between tokens, it is natural to wonder: what class of sequence-to-sequence functions can the Transformer networks represent? Also, what is the role of the two different kinds of layers? Are both layers needed to obtain the representation power of Transformers? In the existing literature, the advantage of Transformers has often been attributed to their capability of computing contextual embeddings/mappings of the input, as opposed to fixed word embeddings as in word2vec (Mikolov et al., 2013). Is it possible to formalize the notion of\n∗Based on work performed at Google Research New York\ncontextual mappings? If yes, can Transformers actually compute such mappings? Such questions still remain elusive.\nIn this paper, we provide a mathematical definition of contextual mappings and show that multi-head self-attention layers can indeed compute contextual mappings of the input sequences. We further show that this ability to compute contextual mappings coupled with the value mapping ability of the feed-forward layers makes Transformers universal approximators of any permutation equivariant sequence-to-sequence function. We also improve this result using positional encodings, and show that Transformers can represent any sequence-to-sequence function; i.e., the restriction of permutation equivariance can be removed by positional encodings.\nThese results on universal approximation of sequence-to-sequence functions raise a natural question: is it possible to have a more efficient architecture to compute contextual mappings, consequently, preserving the ability to universally approximate sequence-to-sequence functions? Towards this, we explore other architectures that can implement contextual mappings (to some extent), and experimentally evaluate their performance. In our experiments, we notice that the models that combine these simpler architectures with Transformers have better performance, compared to the standalone Transformers. 
We conclude the paper by presenting further discussion and interesting future research directions along these lines." }, { "heading": "1.1 SUMMARY OF OUR CONTRIBUTIONS", "text": "• We prove that Transformers are universal approximators of continuous and permutation equivariant sequence-to-sequence functions with compact support (Theorem 2). We also show that, if Transformers have trainable positional encodings added to the input, then they are universal approximators of continuous sequence-to-sequence functions on a compact domain (Theorem 3).
• We formalize the notion of contextual mappings and show that the attention layers can compute contextual mappings, where each unique context is mapped to a unique vector (Lemma 6).
• We experimentally evaluate other simpler layers that can compute contextual mappings to some extent, such as bi-linear projections and separable convolutions, and show that substituting some of the self-attention layers with these layers can result in better performance (Section 5)." }, { "heading": "1.2 RELATED WORKS & NOTATION", "text": "Analysis of attention-based models. Given the popularity of Transformers, there have been numerous works trying to understand the role of attention layers in natural language processing models. One such line of work focuses on probing the output of attention layers to understand the attention mechanism and internal language representation (Hewitt & Manning, 2019; Clark et al., 2019; Coenen et al., 2019; Vig & Belinkov, 2019). Although these results give valuable insights, a consistent theoretical analysis corroborating these findings is missing.
Universal approximation theorems. Universal approximation theorems are classical results in neural network theory, dating back many decades (Cybenko, 1989; Hornik, 1991). These results show that, given unbounded width, a one-hidden-layer neural network can approximate an arbitrary continuous function with compact support, up to any accuracy. Other results focusing on depth appeared more recently (Lu et al., 2017; Hanin & Sellke, 2017; Lin & Jegelka, 2018). In particular, Lu et al. (2017); Hanin & Sellke (2017) consider fully-connected ReLU networks whose input dimension is d, and show that networks with width d + 1 and unbounded depth are universal approximators of scalar-valued continuous functions. Lin & Jegelka (2018) show that a residual network with one hidden neuron per residual block is a universal approximator of scalar-valued functions, given unbounded depth. Although Transformer networks do have residual connections, due to their heavy parameter sharing, the existing analyses for residual networks do not extend to Transformers. Sannai et al. (2019) consider universally approximating permutation invariant/equivariant functions using fully-connected ReLU networks.
Turing completeness results on Transformers. Recently, Pérez et al. (2019) have shown that Transformers with infinite precision are Turing complete, which is not the case in the finite-precision setting (Dehghani et al., 2018). We note that Turing completeness deals with computation on formal languages (thus discrete objects), while universal approximation focuses on functions on a continuum. In other words, these are two different concepts, and one does not imply the other.
Notation. We use the following notation in the paper. Given a matrix $A$, let $A_{i,j}$, $A_{i,:}$, and $A_{:,j}$ denote its $(i,j)$-th entry, $i$-th row, and $j$-th column, respectively. We use $\|A\|_p$ to denote the entry-wise $\ell_p$ norm of $A$. 
Let σ[·] be the softmax operator, which takes a matrix as input and applies the softmax operation to each column of the matrix, which results in a column stochastic matrix, i.e., a matrix that has non-negative entries with each column summing to 1. We similarly define σH[·] to be the hardmax operator, which outputs the one-hot representation of the arg max entry for each column of the input matrix. If there are k arg max entries, then the output is 1/k for such entries. We use 1n to denote a vector of length n whose entries are all 1. We denote the 0-1 indicator function by 1{·}. We use d and n to denote the embedding dimension and the sequence length, respectively. We assume throughout that n ≥ 2, as the Transformers reduce to residual networks when n = 1." }, { "heading": "2 TRANSFORMER NETWORKS", "text": "A Transformer block is a sequence-to-sequence function mapping Rd×n to Rd×n. It consists of two layers: a self-attention layer and a token-wise feed-forward layer, with both layers having a skip connection. More concretely, for an input X ∈ Rd×n consisting of d-dimensional embeddings of n tokens, a Transformer block with multiplicative or dot-product attention (Luong et al., 2015) consists of the following two layers1:
Attn(X) = X + ∑hi=1 WiO WiV X · σ[(WiK X)T WiQ X], (1)
FF(X) = Attn(X) + W2 · ReLU(W1 · Attn(X) + b1 1Tn) + b2 1Tn, (2)
where WiO ∈ Rd×m, WiV, WiK, WiQ ∈ Rm×d, W2 ∈ Rd×r, W1 ∈ Rr×d, b2 ∈ Rd, b1 ∈ Rr, and FF(X) is the output of the Transformer block. The number of heads h and the head size m are two main parameters of the attention layer; and r denotes the hidden layer size of the feed-forward layer.
1 In our proof we use bias vectors biQ for query projections in attention layers. We omit them here for brevity.
Here, we would like to point out that our definition of the self-attention layer (1) is an equivalent reformulation of (Vaswani et al., 2017), where they concatenate attention heads and multiply a matrix WO ∈ Rd×mh to the concatenation. One difference in our setup is the absence of layer normalization, which simplifies our analysis while preserving the basic architecture of the Transformer.
We define the Transformer networks as the composition of Transformer blocks. The family of the sequence-to-sequence functions corresponding to the Transformers can be defined as:
T h,m,r := {g : Rd×n → Rd×n | g is a composition of Transformer blocks th,m,r’s},
where th,m,r : Rd×n → Rd×n denotes a Transformer block defined by an attention layer with h heads of size m each, and a feed-forward layer with r hidden nodes.
We say that a function f : Rd×n → Rd×n is permutation equivariant if for any permutation matrix P, we have f(XP) = f(X)P; i.e., if we permute the columns of X, then the columns of f(X) are permuted in the same way. A Transformer block is permutation equivariant, which we formally prove in Section A. This consequently establishes the permutation equivariance of the class T h,m,r. Claim 1. A Transformer block th,m,r defines a permutation equivariant map from Rd×n to Rd×n.
As seen above, both layers (cf. (1) and (2)) of a Transformer block employ parameter reuse/sharing, because each token/column undergoes the same transformations (e.g., WiQ, WiK, or W1) regardless of its position. Moreover, interactions between tokens can only be captured through pairwise dot-products in the softmax operator σ[·] (cf. (1)). Given such limitations in a single Transformer block’s representation power, it is not obvious what kinds of sequence-to-sequence functions T h,m,r can approximate; we provide the answer to this question in the next section." 
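To make the preceding definitions concrete, the following is a minimal NumPy sketch of a single Transformer block as in eqs. (1) and (2); the function names, random initialization, and tiny sizes are our own illustration rather than the authors' code, and layer normalization is omitted as in the paper's setup. The final assertion numerically checks Claim 1: permuting the input columns permutes the output columns in the same way.
```python
import numpy as np

def softmax_cols(A):
    # Column-wise softmax; each column of the output sums to 1.
    A = A - A.max(axis=0, keepdims=True)  # subtract max per column for stability
    E = np.exp(A)
    return E / E.sum(axis=0, keepdims=True)

def transformer_block(X, heads, ffn):
    # X: (d, n) matrix whose columns are the n token embeddings.
    # Self-attention layer with skip connection, eq. (1).
    attn = X.copy()
    for WO, WV, WK, WQ in heads:
        scores = (WK @ X).T @ (WQ @ X)        # (n, n) pairwise dot-products
        attn += WO @ (WV @ X) @ softmax_cols(scores)
    # Token-wise feed-forward layer with skip connection, eq. (2).
    W1, b1, W2, b2 = ffn
    return attn + W2 @ np.maximum(0.0, W1 @ attn + b1) + b2

rng = np.random.default_rng(0)
d, n, m, r, h = 3, 5, 1, 4, 2                 # h = 2, m = 1, r = 4 as in Theorem 2
heads = [(rng.normal(size=(d, m)), rng.normal(size=(m, d)),
          rng.normal(size=(m, d)), rng.normal(size=(m, d))) for _ in range(h)]
ffn = (rng.normal(size=(r, d)), rng.normal(size=(r, 1)),
       rng.normal(size=(d, r)), rng.normal(size=(d, 1)))
X = rng.normal(size=(d, n))
# Claim 1: permuting the columns of X permutes the columns of the output.
P = np.eye(n)[:, rng.permutation(n)]
Y = transformer_block(X, heads, ffn)
assert np.allclose(transformer_block(X @ P, heads, ffn), Y @ P)
```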
}, { "heading": "3 TRANSFORMERS ARE UNIVERSAL APPROXIMATORS OF SEQUENCE-TO-SEQUENCE FUNCTIONS", "text": "In this section, we present our theorems showing that the Transformer networks are universal approximators of sequence-to-sequence functions. Let us start by defining the target function class FPE, which consists of all continuous permutation equivariant functions with compact support that\n1In our proof we use bias vectors biQ for query projections in attention layers. We omit them here for brevity.\nmap Rd×n to Rd×n. Here, continuity is defined with respect to any entry-wise `p norm, 1 ≤ p <∞. Given two functions f1, f2 : Rd×n → Rd×n, for 1 ≤ p <∞, we define a distance between them as\ndp(f1, f2) := (∫ ‖f1(X)− f2(X)‖pp dX )1/p .\nThe following result shows that a Transformer network with a constant number of heads h, head size m, and hidden layer of size r can approximate any function in FPE. Theorem 2. Let 1 ≤ p < ∞ and > 0, then for any given f ∈ FPE, there exists a Transformer network g ∈ T 2,1,4, such that dp(f, g) ≤ .\nNext, we present our theorem on Transformers with positional encodings. In order to endow the Transformer networks with the ability to capture the information about the position of tokens in the input sequence, it is a common practice to add positional encodings E ∈ Rd×n to the input sequence before feeding it to the Transformer network (Vaswani et al., 2017; Devlin et al., 2018). Consider the functions represented by Transformers with positional encodings:\nT h,m,rP := {gP(X) = g(X + E) | g ∈ T h,m,r and E ∈ Rd×n}.\nHere we show that if E is trainable, these positional encodings are sufficient to remove the permutation equivariance restriction of the Transformers. Towards this, we define FCD to be the set of all continuous functions that map a compact domain in Rd×n to Rd×n. Note that FCD does not have the restriction of permutation equivariance as in FPE, but any f ∈ FCD is defined on a compact domain instead of the whole Rd×n. The following result states that, equipped with the trainable positional encodings, Transformers can approximate any sequence-to-sequence function in FCD. Theorem 3. Let 1 ≤ p < ∞ and > 0, then for any given f ∈ FCD, there exists a Transformer network g ∈ T 2,1,4P such that we have dp(f, g) ≤ .\nTheorems 2 and 3 provide an interesting characterization of the representation power of fixed-width Transformer networks. Since the function classes T h,m,r and T h,m,rP become richer as we increase the values of (h,m, r), our results establish that general Transformer networks are also universal approximators of sequence-to-sequence functions. Remarkably, none of the parameters (h,m, r) depend on the input sequence length n or embedding dimension d.\nHere, we would like to again point out that Theorems 2 and 3 appear quite surprising at a first glance, given the parameter sharing across all the tokens in a sequence, e.g., feed-forward layers are applied token-wise and the projection matrices in the self-attention layers are the same across different tokens. Furthermore, attention layers can only capture pairwise interaction between different tokens in the sequence. In the next subsection, we briefly describe one of our key steps in overcoming the aforementioned restrictions and proving universal approximation power of Transformers." }, { "heading": "3.1 A KEY STEP: SELF-ATTENTION LAYERS CAN IMPLEMENT CONTEXTUAL MAPPINGS", "text": "Let us consider a setting where we are interested in embedding two sentences: 1) I am happy; and 2) I am Bob. 
These sentences are fed to a sequence-to-sequence model as
X = [X:,1,X:,2,X:,3] = [vI,vam,vhappy] and X̃ = [X̃:,1, X̃:,2, X̃:,3] = [vI,vam,vBob],
where vI, vam, vhappy, and vBob denote the d-dimensional embeddings of the tokens ‘I’, ‘am’, ‘happy’, and ‘Bob’, respectively. Since the word ‘I’ occurs in different contexts in these sentences, in order to implement arbitrary sequence-to-sequence functions, the sequence-to-sequence model should map the two occurrences of ‘I’ to different values. We formally define this requirement below.
Definition 3.1 (Contextual mapping). Consider a finite set L ⊂ Rd×n. A map q : L → R1×n defines a contextual mapping if the map satisfies the following:
1. For any L ∈ L, the n entries in q(L) are all distinct. 2. For any L,L′ ∈ L, with L ≠ L′, all entries of q(L) and q(L′) are distinct.
In other words, a contextual mapping maps each token (column) of L ∈ L to a unique value which depends on the entire L, as a result capturing the precise context of L. This allows the subsequent token-wise function (e.g., defined by the feed-forward layers in the case of Transformer networks) to realize the outputs of any arbitrary sequence-to-sequence functions.
As a first thought, one might consider getting a contextual mapping by simply averaging all the tokens, because this can capture the one-word difference (e.g., “happy” vs. “Bob”) in two different contexts. However, if there are multiple words that are different, it is not guaranteed that the average will be different. Indeed, requiring unique mappings for all the tokens for any change in any number of tokens is a steep requirement.
While the self-attention layer does consider pair-wise interactions among different input tokens, it is not clear if this weak form of pair-wise interaction with shared projection weights is sufficient to extract the underlying context. The following result, which we sketch here, shows that self-attention layers can implement a permutation equivariant contextual mapping over almost all elements of a grid in [0, 1]d×n. We defer the full statement to Section 4.2. Lemma 6 (informal). Consider the grid Gδ := {0, δ, . . . , 1 − δ}d×n. Then, there exist a function gc : Rd×n → Rd×n composed of δ−d + 1 self-attention layers (h = 2,m = 1) and a vector u ∈ Rd such that q(L) := uT gc(L) satisfies the following properties, for a subset G̃δ ⊂ Gδ that contains almost all elements of Gδ:
1. For any L ∈ G̃δ, the entries of q(L) are all distinct. 2. For any L,L′ ∈ G̃δ such that L is not a permutation of L′, all entries of q(L), q(L′) are distinct.
Lemma 6 shows that a series of self-attention layers can implement contextual mappings, despite the apparent restriction that each of them can only capture pair-wise interaction. However, the restriction of permutation equivariance still exists because attention layers are inherently permutation equivariant. Coupled with the ability of token-wise feed-forward layers to map different values in q(L) to arbitrary output values, we can prove universal approximation capability of Transformers." }, { "heading": "3.2 PROOF OF THE UNIVERSAL APPROXIMATION THEOREM (THEOREM 2)", "text": "Next, we outline the proof of Theorem 2 in greater detail. We refer the reader to Section C for the proof of Theorem 3, since it is a modification of Theorem 2. Even though Theorems 2 and 3 do not specifically mention the required depth for approximation, our proof techniques do characterize it, and we show that our construction is tight in the number of parameters. 
We defer the discussion of depth to Section 4.4.
Recall that we want to show that given a function f ∈ FPE, we can find a Transformer network g ∈ T 2,1,4 such that dp(f, g) ≤ ε. Without loss of generality, we can assume that the compact support of f is contained in [0, 1]d×n. We achieve our desired objective in three key steps: Step 1. Approximate FPE with piece-wise constant functions. We first use (a variant of) the classical result that any continuous function can be approximated up to arbitrary accuracy by piecewise constant functions. For δ > 0, we define the following class of piece-wise constant functions:
FPE(δ) := { f : X 7→ ∑L∈Gδ AL 1{X ∈ SL} | f is permutation equivariant, AL ∈ Rd×n },
where Gδ := {0, δ, . . . , 1 − δ}d×n and, for a grid point L ∈ Gδ, SL := ∏dj=1 ∏nk=1 [Lj,k, Lj,k + δ) ⊂ [0, 1]d×n denotes the associated cube of width δ. Let f̄ ∈ FPE(δ) be such that dp(f, f̄) ≤ ε/3. Step 2. Approximate FPE(δ) with modified Transformers. We then consider a slightly modified architecture for Transformer networks, where the softmax operator σ[·] and ReLU(·) are replaced by the hardmax operator σH[·] and an activation function φ ∈ Φ, respectively. Here, the set of allowed activations Φ consists of all piece-wise linear functions with at most three pieces, where at least one piece is constant. Let T̄ h,m,r denote the function class corresponding to the sequence-to-sequence functions defined by the modified Transformer networks. The following result establishes that the modified Transformer networks in T̄ 2,1,1 can closely approximate functions in FPE(δ).
Proposition 4. For each f̄ ∈ FPE(δ) and 1 ≤ p < ∞, ∃ ḡ ∈ T̄ 2,1,1 such that dp(f̄, ḡ) = O(δd/p).
Step 3. Approximate modified Transformers with (original) Transformers. Finally, we show that ḡ ∈ T̄ 2,1,1 can be approximated by T 2,1,4. Let g ∈ T 2,1,4 be such that dp(ḡ, g) ≤ ε/3.
Theorem 2 now follows from these three steps, because we have
dp(f, g) ≤ dp(f, f̄) + dp(f̄, ḡ) + dp(ḡ, g) ≤ 2ε/3 + O(δd/p).
Choosing δ small enough ensures that dp(f, g) ≤ ε. We refer the reader to Sections B.1 and B.2 in the supplementary material for the formal statements and proofs of Steps 1 and 3, respectively. As for Step 2, which is the most critical step in establishing the universal approximation property of Transformers, we provide a sketch of the proof of Proposition 4 in the next section, and refer the reader to Section B.3 for the complete proof." }, { "heading": "4 PROOF SKETCH OF PROPOSITION 4: DIFFERENT ROLES OF TWO LAYERS", "text": "As mentioned earlier, the heavy parameter sharing in Transformers makes the goal of universally approximating sequence-to-sequence functions seemingly difficult. Both the self-attention and the feed-forward layer weights inside a Transformer block are fixed across n tokens. In this section, we show that Transformers are able to overcome this architectural constraint, and compute contextual mappings of the entire input sequence just based on the pair-wise interactions. The token-wise feed-forward layers then transform these contextual mappings to the desired output sequence.
We highlight these inner workings of Transformers en route to proving Proposition 4. We want to show that given a piece-wise constant function f̄ ∈ FPE(δ), there exists a modified Transformer network ḡ ∈ T̄ 2,1,1 that closely approximates f̄. We achieve this goal by establishing the following three claims, which correspond to Lemmas 5, 6, and 7.
1. 
Given an input X ∈ Rd×n, a series of feed-forward layers in the modified Transformer network can quantize X to an element L on the extended grid G+δ := {−δ−nd, 0, δ, . . . , 1− δ}d×n.
2. Next, a series of self-attention layers in the modified Transformer network can take the input L and implement a contextual mapping q such that, for L and L′ that are not permutations of each other, all the elements in q(L) and q(L′) are distinct.
3. Finally, a series of feed-forward layers in the modified Transformer network can map elements of the contextual embedding q(L) to the desired output value of f̄ ∈ FPE(δ) at the input X.
Before discussing these three claims in detail, we note that even though a Transformer network stacks self-attention and feed-forward layers in an alternate manner, the skip connections enable these networks to employ a composition of multiple self-attention or feed-forward layers. Furthermore, as alluded earlier, these three steps clearly highlight the different roles that self-attention and feed-forward layers play in realizing the ability to universally approximate sequence-to-sequence functions: 1) self-attention layers compute precise contextual maps; and 2) feed-forward layers then assign the results of these contextual maps to the desired output values." }, { "heading": "4.1 QUANTIZATION BY FEED-FORWARD LAYERS", "text": "Since our objective in Proposition 4 is to approximate the function f̄ ∈ FPE(δ), which takes a constant value on the cubes SL’s, the (modified) Transformer network approximating f̄ first quantizes the input X according to these cubes. In particular, we want each input X ∈ SL to be mapped to the point L. The following result shows that a modified Transformer network can indeed implement this quantization map with a composition of multiple feed-forward layers.
Lemma 5. Consider a scalar quantization map gentq : R → {−δ−nd, 0, δ, . . . , 1− δ}:
gentq(t) = { kδ if kδ ≤ t < (k + 1)δ, k = 0, . . . , 1/δ − 1; −δ−nd otherwise.
There exists a function gq : Rd×n 7→ G+δ composed of d/δ + d token-wise feed-forward layers with r = 1 and activations in Φ, which applies the scalar quantization gentq to each entry of its input.
As desired, the function gq maps any X ∈ SL to L. Furthermore, if any element of X is not in [0, 1], the element is mapped to −δ−nd, indicating that X is outside the compact support of f̄ ∈ FPE(δ)." }, { "heading": "4.2 CONTEXTUAL MAPPING BY SELF-ATTENTION LAYERS", "text": "In this subsection, we show that the (modified) Transformer network can compute contextual mappings (cf. Definition 3.1) from the output L ∈ G+δ of the map gq (cf. Section 4.1) by using a composition of self-attention layers. The following lemma, sketched earlier in Section 3.1, shows that the (modified) Transformer networks can implement a permutation equivariant contextual mapping over almost all elements of Gδ, while mapping the rest of elements in G+δ to a disjoint set. Lemma 6. Consider the following subset of Gδ = {0, δ, . . . , 1− δ}d×n:
G̃δ := {L ∈ Gδ | L:,i ≠ L:,j for all i ≠ j}. Assume that n ≥ 2 and δ−1 ≥ 2. Then, there exist a function gc : Rd×n → Rd×n composed of δ−d + 1 self-attention layers (h = 2,m = 1) that employ the σH operator, a vector u ∈ Rd, constants tl, tr ∈ R (0 < tl < tr), such that q(L) := uT gc(L) satisfies the following properties:
1. For any L ∈ G̃δ, the entries of q(L) are all distinct. 2. For any L,L′ ∈ G̃δ such that L is not a permutation of L′, all entries of q(L), q(L′) are distinct. 3. 
For any L ∈ G̃δ, all the entries of q(L) are in [tl, tr]. 4. For any L ∈ G+δ \ G̃δ, all the entries of q(L) are outside [tl, tr].
At this point, a few remarks about the result in Lemma 6 are in order. First, since the Transformer networks are bound to implement permutation equivariant maps, we require Property 6.2 to hold for pairs of sequences that cannot be mapped to each other via permutation of columns. Furthermore, the self-attention layers implement the desired contextual map only for G̃δ ⊆ Gδ, where all columns of L are distinct. Note that for small δ, Gδ \ G̃δ constitutes a negligible fraction of Gδ because |Gδ \ G̃δ| = O(δd|Gδ|). The function q in Lemma 6 maps the elements of G+δ \ G̃δ outside [tl, tr]—the interval where the outputs of the contextual mapping for G̃δ reside." }, { "heading": "4.2.1 PROOF SKETCH OF LEMMA 6", "text": "Since Lemma 6 is one of the major technical contributions of this paper, we provide a short sketch of its proof. The complete proof is presented in Section B.5. For simplicity, we consider the case d = 1, so the input L ∈ G+δ is a row vector of length n. The key idea of the proof is that, using two attention heads of size 1, one can implement a self-attention layer that shifts up input entries that are in a specific interval, while leaving all other entries intact. We call this the selective shift operation. Since the entries in L are quantized, we apply the selective shift operation to 0, δ, . . . , 1 − δ using 1/δ attention layers. Interestingly, the value of the largest output entry after these operations is unique for each L ∈ G̃δ up to permutations. Using the largest entry, one can add one last layer that shifts up the entire matrix and outputs q(L) that satisfies Properties 6.1 and 6.2 of the lemma.
More concretely, the following function Ψ : R1×n → R1×n, parametrized by b, b′ ∈ R satisfying b < b′, can be implemented with two attention heads of size 1 with the hardmax (σH) operator:
Ψ(Z; b, b′)1,j = { maxk Z1,k − mink Z1,k if b < Z1,j < b′; 0 if Z1,j < b or Z1,j > b′.
If we define an attention layer of the form Z 7→ Z + Ψ(Z; b, b′), then any entry Z1,j in (b, b′) is shifted up by maxk Z1,k − mink Z1,k, while all the other entries stay untouched. We can choose b and b′ to selectively shift certain entries, hence the name selective shift operation.
We stack 1/δ self-attention layers, with attention parts δ−1Ψ(·; l − δ/2, l + δ/2) for each l ∈ {0, δ, . . . , 1 − δ}, in increasing order of l. With these layers, we can apply the selective shift operations to input entries of values 0, δ, . . . , 1 − δ. To see how the shift operations modify the input, now consider n = 2 for simplicity, and let L = [l1 l2] ∈ G̃δ. Without loss of generality, we can assume l1 < l2. The selective shift operation is applied to l1 first, shifting it by δ−1(maxL − minL) = δ−1(l2 − l1), resulting in l̃1 = l1 + δ−1(l2 − l1) > l2. After that, the operation on l2 shifts it up by δ−1(l̃1 − l2). Thus, the first 1/δ layers map L = [l1 l2] (l1 < l2) to
L̃ = [ l̃1 l̃2 ] := [ l1 + δ−1(l2 − l1)   l2 + (δ−2 − δ−1)(l2 − l1) ].
We can show that the map from [l1 l2] ∈ {L ∈ G̃δ | l1 < l2} to l̃2 is one-to-one, and that 0 < l̃1 < l̃2 < δ−2. We then add one last layer that shifts all positive entries of L̃ by δ−3 max L̃ = δ−3 l̃2, whose output we denote by q(L) = [ δ−3 l̃2 + l̃1   δ−3 l̃2 + l̃2 ]. All entries of q(L) are in [δ−3 l̃2, δ−3 l̃2 + δ−2), and this interval is disjoint for different L’s because L 7→ l̃2 is one-to-one. 
Thus, q(L) satisfies Properties 6.1 and 6.2 of the lemma. The remaining details are in Section B.5." }, { "heading": "4.3 FUNCTION VALUE MAPPING BY FEED-FORWARD LAYERS", "text": "This brings us to the final step, which demonstrates the key utility of the feed-forward layers. After the contextual mapping by self-attention layers, each token captures the entire context available in the input sequence. The following result shows that token-wise application of a composition of feed-forward layers can map these tokens to the desired output values required by the function f̄.
Lemma 7. Let gc : Rd×n → Rd×n be the function from Lemma 6. Then, there exists a function gv : Rd×n → Rd×n composed of O(n(1/δ)dn/n!) token-wise feed-forward layers (r = 1) with activations in Φ such that gv is defined by a token-wise function gtknv : Rd → Rd on each column,
gv(Z) = [ gtknv(Z:,1) · · · gtknv(Z:,n) ],
where for all j ∈ {1, . . . , n},
gtknv(gc(L):,j) = { (AL):,j if L ∈ G̃δ; 0d if L ∈ G+δ \ G̃δ." }, { "heading": "4.4 TIGHTNESS OF CONSTRUCTIONS", "text": "We showed in this section that Theorem 2 requires O(n(1/δ)dn/n!) Transformer blocks for approximation, where δ is the width of the cubes. Each Transformer block is of constant width, so it has O(d) parameters; this means that the total number of parameters is O(dn(1/δ)dn/n!). We note that this exponential dependence cannot be avoided in the worst case. If we assume continuity without any additional smoothness, quantizing the domain to cubes and approximating the function with constants require memorizing (output dim) × (num cubes)/n! real numbers, where the factor of 1/n! is due to permutation equivariance. Thus, Theorem 2 is optimal in the order of parameters.
If we compare with the residual network result (Lin & Jegelka, 2018), we can consider “flattening” X into a dn-dimensional vector and fitting the function. The proof technique in (Lin & Jegelka, 2018) requires O((1/δ)dn) layers, where each layer has O(dn) parameters: the total parameter requirement is O(dn(1/δ)dn). This shows that Transformers can approximate permutation equivariant functions in a more efficient way than residual networks.
In Section C, our proof of Theorem 3 shows that we require O(n(1/δ)dn) layers to approximate continuous (not permutation equivariant) sequence-to-sequence functions. As seen from the argument above, this construction is also optimal in the order of parameters." }, { "heading": "5 DISCUSSION AND EXPERIMENTS", "text": "As detailed in Section 4, the ability of the self-attention layers to compute contextual mappings plays a crucial role in the universal approximation property. Interestingly, our analysis shows that replacing the dot-product attention in Transformers with any other component capable of computing contextual mappings should preserve this universal approximation property. This leads naturally to questions about the alternative architectures that realize certain kinds of contextual mappings at different computational and memory costs. We explore and discuss some examples of such alternatives in this section. Our preliminary empirical study demonstrates their practical utility." }, { "heading": "5.1 BI-LINEAR PROJECTION", "text": "Given token embeddings X as input, the bi-linear projection layer computes the following update:
BProj(X) = X + WO · X · WP.
The bi-linear projection layer (Gong et al., 2013) is motivated by the ability of random (Gaussian) matrices to map sparse differences to dense vectors (Ailon & Chazelle, 2009). 
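In code, the layer is a one-liner; the following minimal NumPy sketch (function names, shapes, and the demonstration are our own assumptions, not the authors' implementation) also previews the sparse-to-dense property discussed next:
```python
import numpy as np

def bproj(X, WO, WP):
    # BProj(X) = X + WO @ X @ WP; WP (n x n) mixes tokens, WO (d x d) mixes dims.
    return X + WO @ X @ WP

rng = np.random.default_rng(0)
d, n = 4, 8
WO = rng.normal(size=(d, d))
WP = rng.normal(size=(n, n)) / np.sqrt(n)   # random Gaussian token-mixing matrix

X1 = rng.normal(size=(d, n))
X2 = X1.copy()
X2[:, 3] += 1.0                             # two contexts differing in one token
diff = bproj(X1, WO, WP) - bproj(X2, WO, WP)
print(np.mean(diff != 0))                   # ~1.0: the sparse difference is now dense
```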
If there are two input contexts X1 and X2 that differ in one token, their difference X1 − X2 is sparse; however, after random projection, the difference (X1 − X2)WP will be dense, and the numbers are distinct with high probability, implementing a form of “pair-wise contextual mapping,”2 although different from the contextual mapping in Definition 3.1.
This layer advantageously incurs a smaller number of matrix multiplications compared to the dot-product attention. That said, the number of parameters in this layer depends on the sequence length, making it harder to reuse the model across tasks with different input sequence lengths. Moreover, the weights used to compute the contextual embeddings (WP) are independent of the inputs (X), whereas in self-attention the weights (σ[(WiK X)T WiQ X]) depend on X. The first drawback can be addressed by replacing the linear projection with a depth-wise separable convolution layer, which is discussed in the next subsection." }, { "heading": "5.2 DEPTH-WISE SEPARABLE CONVOLUTIONS", "text": "A depth-wise convolution layer (Sifre & Mallat, 2014; Chollet, 2017; Kaiser et al., 2017) involves convolving each dimension of X with a corresponding convolution filter of size k:
SepConv(X) = X + WO (X ∗ WC),
where WC ∈ Rd×k and (X ∗ WC)i,: := Xi,: ∗ (WC)i,:. Unlike bi-linear projection, this layer can be used across tasks with different input sequence lengths as the number of parameters is independent of the sequence length. While a single layer is unable to compute contextual mappings when the filter size is small, stacking multiple such layers can potentially provide a cheaper way to compute contextual mappings. In fact, based on depth-wise separable convolutions, Wu et al. (2019) proposed a light-weight dynamic convolution architecture that performs competitively with Transformers on machine translation." }, { "heading": "5.3 EXPERIMENTS", "text": "We now present our experiments with these other architectures, with the goal of understanding the extent to which computing contextual mappings can capture the performance of Transformers. As discussed earlier, BProj and SepConv do not implement contextual mappings (cf. Definition 3.1), so we do not expect either BProj or SepConv based models to have the same performance as the expensive Transformers. These models do not use input dependent weights to compute attention, and hence have weaker representation power. Instead, our goal is to see if we can use these cheaper layers to replace (some of) the expensive self-attention layers.
2 This guarantee only holds for a finite set (can be exponential in n) of fixed vectors in Rn.
We follow the experimental setting from Devlin et al. (2018) to train the Transformers, with the masked language model pre-training followed by a task-specific fine-tuning, and work with a 12-layer architecture based on BERTBASE. We present our results on a question answering task (SQuAD) (Rajpurkar et al., 2016) and a sentence entailment task (MNLI) (Williams et al., 2018). In our first set of experiments we train models that employ BProj and SepConv layers, instead of the self-attention layer in eq. (1). We notice that, as expected, these simpler models have weaker performance than the self-attention layer. See Table 1 in Section D for a comparison of these models on MNLI.
Next, we swap a varying number of the first few self-attention layers in BERTBASE with SepConv, implemented with filter reuse across dimensions (Wu et al., 2019). Fig. 
1 illustrates the performance of these hybrid models. Interestingly, models with 1 or 2 convolution layers, with the rest being self-attention layers, perform better than models with only self-attention layers. Note that replacing a self-attention layer with SepConv also reduces the computational cost and the number of parameters. One explanation we have is that the first few attention layers tend to attend broadly to the whole sequence (as empirically observed in (Clark et al., 2019)), and the cheaper convolution layers can perform this job more efficiently. A detailed evaluation of such hybrid architectures will be an interesting direction for future research.
Our experiments also call for a deeper understanding of the exact nature of the embeddings computed by practical attention models. Since Transformers in practice have fixed depth, we believe that they might not be able to exactly implement contextual mappings as we defined in Definition 3.1. However, there is some preliminary empirical evidence that Transformers do implement some sort of “contextual mappings.” For example, Fig. 4 of Coenen et al. (2019) presents visualizations of embeddings of a single word in different contexts (sentences). They experimentally notice that Transformers, in addition to computing contextual mappings, also map a word into semantic clusters. Formalizing and evaluating this property of Transformers is an interesting direction for future work. We again note that Wu et al. (2019) have proposed an alternative way to compute such embeddings based on dynamic convolution layers. Evaluating the mappings computed by these models should shed more light on the workings of attention models and inspire efficient and better performing architectures." }, { "heading": "A PROOF OF CLAIM 1", "text": "Suppose XP was given as input, where P is a permutation matrix. First note that
(WiK XP)T (WiQ XP) = PT (WiK X)T (WiQ X)P.
After the softmax operation, we get
σ[PT (WiK X)T (WiQ X)P] = PT σ[(WiK X)T (WiQ X)]P.
Then,
Attn(XP) = XP + ∑hi=1 WiO (WiV XP) · PT σ[(WiK X)T (WiQ X)]P = Attn(X)P,
where we used PPT = I. Permutation equivariance of the token-wise feed-forward layer can be shown similarly:
FF(XP) = Attn(X)P + W2 · ReLU(W1 · Attn(X)P + b1 1Tn P) + b2 1Tn P = Attn(X)P + W2 · ReLU(W1 · Attn(X) + b1 1Tn)P + b2 1Tn P = FF(X)P,
where ReLU(XP) = ReLU(X)P was used. This analysis shows that the function class T h,m,r is restricted to permutation equivariant functions." }, { "heading": "B PROOF DETAILS OF THEOREM 2", "text": "We first define some additional notation. For a, b ∈ N where a ≤ b, let [a] = {1, . . . , a} and [a : b] = {a, a + 1, . . . , b − 1, b}. For a, b, c ∈ R where b − a > 0 is an integer multiple of c > 0, we write [a : c : b] := {a, a + c, a + 2c, . . . , b − c, b}.
B.1 APPROXIMATING FPE WITH FPE(δ)
Lemma 8. For any given f ∈ FPE and 1 ≤ p < ∞, one can find a δ∗ > 0 such that ∃ f̄ ∈ FPE(δ∗) which satisfies dp(f, f̄) ≤ ε/3.
Proof Since f : Rd×n → Rd×n is a continuous function with compact support, the function is uniformly continuous. Since continuity is defined using the entry-wise ℓp norm, and the entry-wise ℓp norm is equivalent to the entry-wise ℓ∞ norm when the number of entries is finite, uniform continuity implies that
∀ε > 0, ∃δ > 0 such that ∀X,Y, ‖X − Y‖∞ < δ =⇒ ‖f(X) − f(Y)‖p < ε.
This means that given any ε/3 > 0, we have such a δ > 0. Using this δ, we can create a grid Gδ and corresponding cubes SL, as described in the main text. For any L ∈ Gδ, we define CL ∈ SL to be the center point of the cube SL. 
Then, we can define a piece-wise constant approximation f̄(X) = ∑L∈Gδ f(CL) 1{X ∈ SL}. Note that, for any X ∈ SL, we have ‖X − CL‖∞ < δ, so by uniform continuity, we have ‖f(X) − f̄(X)‖p = ‖f(X) − f(CL)‖p < ε/3. This proves that dp(f, f̄) < ε/3.
As for permutation equivariance, since f is permutation equivariant, we have f(CL P) = f(CL)P for any permutation matrix P. For any X ∈ SL, we have XP ∈ S_{LP}, so
f̄(XP) = f(C_{LP}) = f(CL P) = f(CL)P = f̄(X)P.
Thus, the approximation f̄ is also permutation equivariant. This proves the lemma.
B.2 APPROXIMATING T̄ 2,1,1 WITH T 2,1,4
Lemma 9. For each ḡ ∈ T̄ 2,1,1 and 1 ≤ p < ∞, ∃ g ∈ T 2,1,4 such that dp(ḡ, g) ≤ ε/3.
Proof Recall that T h,m,r refers to the class of functions representable with composition of Transformer blocks with h heads of size m in self-attention layers and r hidden nodes in feed-forward layers. The same notation holds for the modified Transformers T̄ h,m,r. Note that the softmax operator on a matrix A can be made arbitrarily close to hardmax by scaling up A. That is, σ[λA] → σH[A] as λ → ∞. This means that by scaling up parameters inside σ, we can approximate σH arbitrarily closely. Thus, the modified self-attention layers can be approximated with the original self-attention layers with the same number of heads h and head size m.
Also, any arbitrary (possibly discontinuous) piecewise linear function φ ∈ Φ can be approximated arbitrarily closely by four ReLU’s. Note that φ ∈ Φ has at most three pieces, and at least one of the pieces is constant. For example, consider the following function φ ∈ Φ:
φ(t) = { b1 if t < c1; a2t + b2 if c1 ≤ t < c2; a3t + b3 if c2 ≤ t.
This function can be approximated by four ReLU’s, as claimed by the lemma:
φ̃(t) = b1 + ((a2c1 + b2 − b1)/ε) ReLU(t − c1 + ε) + (a2 − (a2c1 + b2 − b1)/ε) ReLU(t − c1) + ((a3c2 + b3 − a2(c2 − ε) − b2)/ε − a2) ReLU(t − c2 + ε) + (a3 − (a3c2 + b3 − a2(c2 − ε) − b2)/ε) ReLU(t − c2)
= { b1 if t < c1 − ε; ((a2c1 + b2 − b1)/ε)(t − c1) + a2c1 + b2 if c1 − ε ≤ t < c1; a2t + b2 if c1 ≤ t < c2 − ε; ((a3c2 + b3 − a2(c2 − ε) − b2)/ε)(t − c2) + a3c2 + b3 if c2 − ε ≤ t < c2; a3t + b3 if c2 ≤ t.
Also, as we make ε → 0, we can approximate φ arbitrarily closely using φ̃. The cases where the second or third piece is constant can be shown similarly. This means that the modified feed-forward layers (whose activation is φ ∈ Φ) with a single hidden node can be approximated with the original feed-forward layers (ReLU) with four hidden nodes.
Thus, given any ḡ ∈ T̄ 2,1,1, there exists a function g ∈ T 2,1,4 arbitrarily close to ḡ, by appropriately choosing the parameters to be large enough. This finishes the proof." }, { "heading": "B.3 FINISHING PROOF OF PROPOSITION 4", "text": "As we have already discussed in Section 4, we establish Proposition 4 in three steps:
1. Given an input X, a group of feed-forward layers in the modified Transformer network can quantize X to an element L on the extended grid G+δ := {−δ−nd, 0, δ, . . . , 1− δ}d×n.
2. Next, a group of self-attention layers in the modified Transformer network can take the input L and produce a contextual mapping q such that, for L and L′ that are not permutations of each other, all the elements in q(L) and q(L′) are distinct.
3. Finally, a group of feed-forward layers in the modified Transformer network can map elements of the contextual embedding q(L) to the desired values, i.e., the output of f̄ ∈ FPE(δ) on the input X.
These steps are formally stated in Lemmas 5, 6, and 7 in the main text. 
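The scalar quantization in Step 1 (cf. Lemma 5) is simple enough to state directly in code; the following is a minimal NumPy sketch, with the function name and vectorized form our own:
```python
import numpy as np

def g_q_ent(t, delta, n, d):
    # Scalar quantization of Lemma 5: [k*delta, (k+1)*delta) -> k*delta for
    # k = 0, ..., 1/delta - 1; anything outside [0, 1) -> -delta^(-n*d).
    t = np.asarray(t, dtype=float)
    out = np.floor(t / delta) * delta
    out[(t < 0) | (t >= 1)] = -delta ** (-n * d)
    return out

# The full map g_q applies this entry-wise to a (d, n) input:
print(g_q_ent([0.0, 0.26, 0.99, 1.0, -0.1], delta=0.25, n=2, d=1))
# -> [  0.    0.25  0.75 -16.  -16. ]
```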
We present the proofs of these lemmas in the subsequent sections.
With the results established in these lemmas, we are now equipped with all the tools necessary to complete the proof of Proposition 4. Let us recall the functions gq, gc, and gv from Lemmas 5, 6, and 7, respectively. We now show that the (modified) Transformer network ḡ = gv ◦ gc ◦ gq approximates the underlying piecewise constant function f̄ ∈ FPE(δ) over all points in its support except for a set of measure O(δd).
Consider a point X ∈ SL ⊂ [0, 1]d×n, where L ∈ G̃δ. By Lemma 5, we have that gq(X) = L. Thus, it follows from Lemmas 6 and 7 that
gv ◦ gc ◦ gq(X) = gv ◦ gc(L) = [ gtknv(gc(L):,1) gtknv(gc(L):,2) · · · gtknv(gc(L):,n) ] = AL.
On the other hand, any point X ∈ ⋃L∈Gδ\G̃δ SL ∪ (Rd×n \ [0, 1]d×n) is mapped by gq to L ∈ G+δ \ G̃δ; as a result, we get gv ◦ gc ◦ gq(X) = gv ◦ gc(L) = 0. Therefore, we have ḡ(X) = gv ◦ gc ◦ gq(X) = AL = f̄(X) for X ∈ ⋃L∈G̃δ SL, and 0 everywhere else. Recall that f̄ has its compact support in [0, 1]d×n and is thus bounded; i.e., there exists B ≥ 0 such that ‖f̄(X)‖p ≤ B. The modified Transformer network ḡ takes the same value as f̄ on all points in [0, 1]d×n except for a set ⋃L∈Gδ\G̃δ SL that has measure O(δd). This implies that dp(f̄, ḡ) ≤ (Bp δd)1/p = O(δd/p)." }, { "heading": "B.4 PROOF OF LEMMA 5", "text": "The proof strategy is simple; using 1/δ + 1 token-wise feed-forward layers, we implement the quantization function gentq that works on the first row of the input. Then stack another 1/δ + 1 layers that quantize the second row, and so on.
Given input X, we first start by clipping X1,:: entries in the set (−∞, 0) ∪ [1, +∞) are mapped to −δ−nd. This can be done by the following layer:
Z 7→ Z + e(1)φ((e(1))T Z), φ(t) = { −t − δ−nd if t < 0 or t ≥ 1; 0 otherwise.
Next, add 1/δ layers of the following form, for k = 0, 1, . . . , 1/δ − 1: Z 7→ Z + e(1)φ((e(1))T Z − kδ1Tn), φ(t) = { 0 if t < 0 or t ≥ δ; −t if 0 ≤ t < δ.
Each layer quantizes X1,: in [kδ, kδ + δ) to kδ, without modifying other intervals.
Note that both φ’s used in this construction are piecewise linear functions with three pieces, and at least one of them is constant. Thus, both φ’s are in Φ. We can repeat the same thing for the other rows, and at the end we will get a map from Rd×n to G+δ." }, { "heading": "B.5 PROOF OF LEMMA 6", "text": "Selective shift operation. Before starting the proof, we first describe the key component of our proof, which we refer to as the selective shift operation. Consider the following function, which can be expressed with a multiplicative attention head, with head size m = 1 and hardmax σH:
ψ(Z; bQ) = e(1) uT Z σH[(uT Z)T (uT Z − bQ 1Tn)],
where u ∈ Rd is a vector that we will choose later, and e(1) = (1, 0, 0, . . . , 0) ∈ Rd is the standard basis vector.
To see what this function computes, first consider the j-th column of the attention score matrix: (uT Z)T (uT Z:,j − bQ). Note that, if uT Z:,j > bQ, σH will calculate arg max of uT Z, whereas if uT Z:,j < bQ, it will calculate arg min. Therefore, the (1, j)-th entry of ψ(Z; bQ) ∈ Rd×n can be written as
ψ(Z; bQ)1,j = uT Z σH[(uT Z)T (uT Z:,j − bQ)] = { maxk uT Z:,k if uT Z:,j > bQ; mink uT Z:,k if uT Z:,j < bQ,
for j ∈ [n]. Note that due to e(1), all rows of ψ(Z; bQ) except the first row are zero. 
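For intuition, here is a small NumPy sketch of ψ, with the hardmax σH realized as an exact column-wise max/min selection (the names and the d = 1 example are ours):
```python
import numpy as np

def psi(Z, u, b_Q):
    # psi(Z; b_Q): entry (1, j) equals max_k u^T Z_{:,k} if u^T Z_{:,j} > b_Q,
    # and min_k u^T Z_{:,k} if u^T Z_{:,j} < b_Q; all other rows are zero.
    # (The construction always places b_Q strictly between grid values.)
    s = u @ Z                       # row of "column ids" u^T Z
    out = np.zeros_like(Z)
    out[0, :] = np.where(s > b_Q, s.max(), s.min())
    return out

u = np.array([1.0])                 # d = 1 for simplicity
Z = np.array([[0.1, 0.4, 0.7]])
print(psi(Z, u, b_Q=0.5))           # -> [[0.1 0.1 0.7]]
```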
From this observation, one can define a function parametrized by bQ and b′Q, where bQ < b ′ Q, which consists of two attention heads:\nΨ(Z; bQ, b ′ Q) := ψ(Z; bQ)− ψ(Z; b′Q),\nΨ(Z; bQ, b ′ Q)1,j =\n{ maxk u\nTZ:,k −mink uTZ:,k if bQ < uTZ:,j < b′Q, 0 if uTZ:,j < bQ or uTZ:,j > b′Q.\nWhat this means is that, if we define an attention layer of the form Z 7→ Z + Ψ(Z; bQ, b′Q), then any column Z:,j satisfying uTZ:,j ∈ (bQ, b′Q) is shifted up in its first coordinate Z1,j by maxk u\nTZ:,k − mink uTZ:,k, while all the other coordinates stay untouched. We call this the selective shift operation, because we can choose bQ and b′Q to selectively shift certain entries of the input.\nBijective column id mapping. Recall that the input to this step is from the range of gq (Lemma 5), which is G+δ = {−δ−nd, 0, δ, . . . , 1 − δ}d×n. Now consider L ∈ G + δ and u = (1, δ−1, δ−2, . . . , δ−d+1).\nFor any j ∈ [n], it is easy to check two following facts:\n1. If Li,j 6= −δ−nd for all i ∈ [d], i.e., L:,j ∈ {0, δ, . . . , 1 − δ}d, then uTL:,j ∈ [0 : δ : δ−d+1 − δ], and the map L:,j 7→ uTL:,j from {0, δ, . . . , 1− δ}d to [0 : δ : δ−d+1 − δ] is a bijection.\n2. If there exists i ∈ [d] such that Li,j = −δ−nd, then uTL:,j ≤ −δ−nd + δ−d+1 − 1 < 0.\nTherefore, one can say that uTL:,j gives the “column id” for each possible value of L:,j ∈ {0, δ, . . . , 1− δ}d. The rough idea of the construction is to apply the selective shift operation to each column id, by setting u in the definition of Ψ(·) to be (1, δ−1, δ−2, . . . , δ−d+1) and choosing bQ = l − δ/2 and b′Q = l + δ/2 for each l ∈ [0 : δ : δ−d+1 − δ]. More concretely, we stack (1/δ)d attention layers, with attention parts δ−dΨ(·; l− δ/2, l+ δ/2) for each l ∈ [0 : δ : δ−d+1− δ], in increasing order of l. After that, we add an extra single-head attention layer with attention part δ−(n+1)dψ(·; 0).\nWe now divide possible input values L ∈ G+δ into three disjoint categories, and show how these layers change the input values at the end of all the layers. Recall the hierarchy G̃δ ⊂ Gδ ⊂ G+δ . The categories are defined as follows:\n1. L ∈ G̃δ . All entries are between 0 and 1− δ, and all columns are unique. 2. L ∈ Gδ \\ G̃δ . All entries are between 0 and 1− δ, but there are duplicate columns. 3. L ∈ G+δ \\Gδ . The point has at least one entry that equals to −δ−nd." }, { "heading": "B.5.1 CATEGORY 1", "text": "In Category 1, we have L ∈ G̃δ . Let lj := uTL:,j . Due to permutation equivariance, we can assume without loss of generality that lj’s are in increasing order: l1 < l2 < · · · < ln. The first (1/δ)d layers sweep the set [0 : δ : δ−d+1 − δ] and apply selective shift operation on each element in the set. This means that selective shift operation will be applied to l1 first, then l2, and then l3, and so on, regardless of the specific values of lj’s.\nFirst shift operation. In the first selective shift operation, the (1, 1)-th entry of L (L1,1) is shifted by the operation, while the other entries are left untouched. The updated value L̃1,1 is\nL̃1,1 = L1,1 + δ −d(maxk u TL:,k −mink uTL:,k) = L1,1 + δ−d(ln − l1). Therefore, after the operation, the output of the layer is [ L̃:,1 L:,2 . . . L:,n ] , and the new value\nof the first column L̃:,1 results in\nuT L̃:,1 = L̃1,1 + d∑ i=2 δ−i+1Li,1 = L1,1 + δ −d(ln − l1) + d∑ i=2 δ−i+1Li,1 = l1 + δ −d(ln − l1).\nLet us denote the updated “column id” uT L̃:,1 as l̃1. We can show that ln < l̃1, because\nl̃1 := l1 + δ −d(ln − l1) ≥ 0 + δ−d · δ = δ−d+1 > ln.\nTherefore, after updating, maxuT [ L̃:,1 L:,2 . . . L:,n ] = max{l̃1, l2, . 
. . , ln} = l̃1, and the new minimum is l2.
Second shift operation. The second selective shift operation is applied to l2, by which only one entry L1,2 will be shifted. The updated value L̃1,2 is
L̃1,2 = L1,2 + δ−d(l̃1 − l2) = L1,2 + δ−d(l1 − l2) + δ−2d(ln − l1).
After updating, the new inner product of u and L̃:,2 results in
l̃2 := uT L̃:,2 = l2 + δ−d(l1 − l2) + δ−2d(ln − l1).
We can show that l̃1 < l̃2, because l1 + δ−d(ln − l1) < l2 + δ−d(l1 − l2) + δ−2d(ln − l1) ⇔ (δ−d − 1)(l2 − l1) < δ−d(δ−d − 1)(ln − l1),
and the last inequality is true because δ−d > 1 and ln > l2. Since we have l̃1 < l̃2, the new maximum in uT [ L̃:,1 L̃:,2 L:,3 . . . L:,n ] is now l̃2, and the new minimum is l3.
Repeating the process. More generally, we can repeat this process, and show that the j-th shift operation shifts L1,j by δ−d(l̃j−1 − lj), and results in the new column id
l̃j := uT L̃:,j = lj + ∑j−1k=1 δ−kd(lj−k − lj−k+1) + δ−jd(ln − l1).
In the general case, l̃j−1 < l̃j holds for j ∈ [2 : n], because
l̃j−1 = lj−1 + ∑j−1k=2 δ−kd+d(lj−k − lj−k+1) + δ−(j−1)d(ln − l1)
< l̃j = lj + ∑j−1k=1 δ−kd(lj−k − lj−k+1) + δ−jd(ln − l1)
⇔ ∑j−1k=1 δ−kd+d(δ−d − 1)(lj−k+1 − lj−k) < δ−(j−1)d(δ−d − 1)(ln − l1),
and the last inequality holds because
δ−(j−1)d(ln − l1) > δ−(j−1)d ∑j−1k=1 (lj−k+1 − lj−k) > ∑j−1k=1 δ−kd+d(lj−k+1 − lj−k).
Therefore, after the j-th selective shift operation, l̃j is the new maximum among {l̃1, . . . , l̃j, lj+1, . . . , ln} and lj+1 is the new minimum, which makes it possible to continue the process until the n-th operation.
After n shift operations. As a result, after the whole sweep from 0 to δ−d+1 − δ by the first (1/δ)d layers, a total of n shift operations are applied, and the input L is mapped to a new point L̃, where uT L̃ = [ l̃1 l̃2 . . . l̃n ] and l̃1 < l̃2 < · · · < l̃n.
We can now prove the following technical lemma, whose proof is deferred to Section B.5.4:
Lemma 10. After n shift operations, l̃n = uT L̃:,n satisfies the following bounds:
δ−(n−1)d+1(δ−d − 1) ≤ l̃n ≤ δ−nd+1(δ−d − 1) − δ(δ−d − 1)2.
Also, the map from [l1 l2 · · · ln] ∈ [0 : δ : δ−d+1 − δ] (where l1 < l2 < · · · < ln) to l̃n is one-to-one.
Global shifting by the last layer. As mentioned earlier, after this sweep, there is another attention layer with attention part δ−(n+1)dψ(·; 0). Since 0 < l̃1 < · · · < l̃n, what it does to L̃ is that it adds δ−(n+1)d maxk uT L̃:,k = δ−(n+1)d l̃n to each entry in the first row of L̃. The output of this layer is defined to be the function gc(L).
Now, in summary, for any L ∈ G̃δ, i ∈ [d], and j ∈ [n], we have
gc(L)i,j = { L1,j + ∑j−1k=1 δ−kd(lj−k − lj−k+1) + δ−jd(ln − l1) + δ−(n+1)d l̃n if i = 1; Li,j if i ∈ [2 : d],
and for any L ∈ G̃δ and j ∈ [n],
uT gc(L):,j = l̃j + δ−(n+1)d l̃n.
Checking Properties 6.1 and 6.2. Given this result so far, it is now left to check if the constructed network is really a permutation equivariant contextual mapping, i.e., if it satisfies Properties 6.1 and 6.2 in Lemma 6.
First, for any L ∈ G̃δ, Property 6.1 holds because we already know l̃1 < l̃2 < · · · < l̃n, so they are all distinct. As for Property 6.2, note that the upper bound on l̃n from Lemma 10 also holds for other l̃j’s, so
uT gc(L):,j ∈ [δ−(n+1)d l̃n, δ−(n+1)d l̃n + δ−(n+1)d+1),
for all j ∈ [n]. Now, from Lemma 10, two L,L′ ∈ G̃δ (that are not permutations of each other) map to different l̃n and l̃′n, and they differ by at least δ. 
This means that two intervals [δ−(n+1)d l̃n, δ −(n+1)d l̃n+δ −(n+1)d+1) and [δ−(n+1)d l̃′n, δ −(n+1)d l̃′n+δ −(n+1)d+1) are guaranteed to be disjoint, so the entries of uT gc(L) and uT gc(L′) are all distinct. This proves Property 6.2.\nTherefore, we finished showing that the map gc(·) we constructed using (1/δ)d + 1 attention layers implements a permutation equivariant contextual mapping on G̃δ .\nChecking Property 6.3. It is now left to check if the map gc satisfies the other properties. At this point, we can check Property 6.3. From uT gc(L):,j ∈ [δ−(n+1)d l̃n, δ−(n+1)d l̃n + δ−(n+1)d+1) and Lemma 10, we can show that for any L ∈ G̃δ , we have\nδ−2nd+1(δ−d − 1) ≤ uT gc(L):,j < δ−(n+1)d(δ−nd+1(δ−d − 1)− δ(δ−d − 1)2) + δ−(n+1)d+1\n≤ δ−(2n+1)d+1(δ−d − 1),\nwhere we used δ−1 ≥ 2. This proves that all uT gc(L):,j are between tl = δ−2nd+1(δ−d − 1) and tr = δ\n−(2n+1)d+1(δ−d − 1). For the remaining input points L ∈ G+δ \\ G̃δ , we will check that uT gc(L):,j is outside the interval [tl, tr] (Property 6.4)." }, { "heading": "B.5.2 CATEGORY 2", "text": "In Category 2, we have L ∈ Gδ\\G̃δ . Here, all entries are between 0 and 1−δ, but there are duplicate columns. Again, let lj := uTL:,j , and assume without loss of generality that l1 ≤ l2 ≤ · · · ≤ ln.\nFor the input L in Category 2, there exist some j, j′ ∈ [n], j 6= j′, such that lj = lj′ . This means that when the input passes through the attention layer δ−dΨ(·; lj − δ/2, lj + δ/2), the selective shift operation for lj is applied to both j-th and j′-th columns; the two columns are coupled together. More generally, suppose we have n′ < n distinct columns.\nIf n′ = 1. In the extreme case of n′ = 1, we have maxj lj = minj lj , so the selective shift operation applied at lj does not shift the entry at all; therefore, at the end of the first (1/δ)d attention layers, L̃ = L.\nIf 1 < n′ ≤ n− 1. When 1 < n′ ≤ n− 1, let the n′ distinct values of lj’s be l′1, . . . , l′n′ . The shift operation is applied n′ times, to l′1, . . . , l ′ n′ , and shifts one or more entries at a time. After the first (1/δ)d layers, the output L̃ has n′ distinct l̃j = uT L̃:,j , 0 ≤ l̃1 ≤ l̃2 ≤ · · · ≤ l̃n, whose distinct values are the same as the numbers we get when we apply shift operations to a length-n′ sequence [l′1 . . . l ′ n′ ]. Then, applying the same calculations from Category 1 shows that\nl̃n = u T L̃:,n = l ′ n′ + n′−1∑ k=1 δ−kd(l′n′−k − l′n′−k+1) + δ−n ′d(l′n′ − l′1),\nand it follows from the upper bound in Lemma 10 that\nl̃n ≤ δ−n ′d+1(δ−d − 1)− δ(δ−d − 1)2 < δ−(n−1)d+1(δ−d − 1).\nNote that the RHS matches the lower bound in Lemma 10. This implies that the value of l̃n calculated from the input L ∈ Gδ \\ G̃δ (Category 2) is always strictly less (by at least δ) than that calculated from L ∈ G̃δ (Category 1).\nChecking Property 6.4. After the global shifting by the last layer with attention part δ−(n+1)dψ(·; 0), we get the output gc(L) which satisfies\nuT gc(L):,j = l̃j + δ −(n+1)d l̃n ≤ (δ−(n+1)d + 1)(δ−(n−1)d+1(δ−d − 1)− δ(δ−d − 1)2)\n< δ−2nd+1(δ−d − 1) =: tl.\nwhere the RHS is a lower bound on possible values of uT gc(L):,j for L ∈ G̃δ (Category 1). This means that the entries of uT gc(L) for Category 2 are outside [tl, tr], which satisfies Property 6.4." }, { "heading": "B.5.3 CATEGORY 3", "text": "In Category 3, we have L ∈ G+δ \\ Gδ; the point L has at least one entry that equals to −δ−nd. Let lj := u\nTL:,j , and recall that whenever a column L:,j has an entry that equals to −δ−nd, we have lj = u TL:,j ≤ −δ−nd+δ−d+1−1 < 0. 
Assume without loss of generality that l1 ≤ l2 ≤ · · · ≤ ln.\nRecall that the selective shift operation is applied to each element of [0 : δ : δ−d+1 − δ], not to negative values. In case of Category 3, we have mink uTL:,k = l1 < 0, and l1 never gets shifted upwards, so it remains as the minimum for the whole time.\nIf all lj’s are negative. In case where all lj’s are negative, selective shift operation never changes the input L, so we get L̃ = L. Since we have uT L̃ < 0Tn (entry-wise), the last layer with attention part δ−(n+1)dψ(·; 0) adds δ−(n+1)d mink uT L̃:,k < 0 to each entry in the first row of L̃, further pushing it to the negative side. Therefore, the final output gc(L) satisfies uT gc(L) < 0Tn < tl1 T n .\nIf not all lj’s are negative. Now consider the case where at least one lj is positive. Let i be the index that satisfies li−1 < 0 ≤ li. Then, selective shift operation does not affect l1, . . . , li−1, and then it shifts li by\nδ−d(max k uTL:,k −min k uTL:,k) = δ −d(ln − l1) ≥ δ−d(0 + δ−nd − δ−d+1 + 1) ≥ δ−(n+1)d+1,\nwhere we used δ−1 ≥ 2 at the last inequality. The next shift operations shift li+1, . . . , ln by even larger amount, so at the end of the first (1/δ)d layers, we have δ−(n+1)d+1 ≤ l̃i ≤ · · · ≤ l̃n, while l̃j = lj < 0 for j ∈ [i− 1].\nShifts by the last layer. Here, the last layer with attention part δ−(n+1)dψ(·; 0) acts differently for negative and positive l̃j’s. For negative l̃j’s, it adds δ−(n+1)d mink l̃k = δ−(n+1)dl1 < 0 to l̃1, . . . , l̃i−1, pushing them further to the negative side. For positive l̃j’s, the layer adds δ−(n+1)d maxk l̃k = δ\n−(n+1)d l̃n ≥ δ−(2n+2)d+1 to l̃i, . . . , l̃n, so that they are all greater than or equal to δ−(2n+2)d+1. Note that δ−(2n+2)d+1 > tr.\nChecking Property 6.4. Therefore, in both cases, we can see that the final output gc(L) satisfies uT gc(L):,j /∈ [tl, tr], for all j ∈ [n]. This completes the verification of Property 6.4." }, { "heading": "B.5.4 PROOF OF LEMMA 10", "text": "Proof of lower and upper bounds on l̃n are straightforward:\nl̃n := ln + n−1∑ k=1 δ−kd(ln−k − ln−k+1) + δ−nd(ln − l1)\n≥ δ−(n−1)d n−1∑ k=1 (ln−k − ln−k+1) + δ−nd(ln − l1) = (δ−nd − δ−(n−1)d)(ln − l1) ≥ δ−(n−1)d+1(δ−d − 1),\nl̃n ≤ ln + δ−d(l1 − ln) + δ−nd(ln − l1) ≤ δ−d+1 − δ + (δ−nd − δ−d)(δ−d+1 − δ) = δ−nd+1(δ−d − 1)− δ(δ−2d − 2δ−d + 1) = δ−nd+1(δ−d − 1)− δ(δ−d − 1)2.\nFor one-to-one property of the map, consider [l1 l2 · · · ln] and [l′1 l′2 · · · l′n] with increasing entries, which are mapped to l̃n and l̃′n, respectively. Suppose l̃n = l̃ ′ n. By definition,\nl̃n − l̃′n =(ln − l′n) + δ−d(ln−1 − ln − l′n−1 + l′n) + δ−2d(ln−2 − ln−1 − l′n−2 + l′n−1) + . . . + δ−(n−1)d(l1 − l2 − l′1 + l′2) + δ−nd(ln − l1 − l′n + l′1) = 0.\nNow assume for contradiction that ln 6= l′n. Then, we have −δ−d+1 + δ ≤ ln − l′n ≤ δ−d+1 − δ. However, the remaining terms have “coarse resolution”, and they can never cancel ln − l′n and make the sum zero, because for example, δ−d(ln−1 − ln − l′n−1 + l′n) can only have values 0, δ−d+1,−δ−d+1, 2δ−d+1,−2δ−d+1, . . . . Thus, ln = l′n must hold and the first term must be zero.\nSimilarly, assume that ln−1 6= l′n−1. Then, the second term is in the interval [−δ−2d+1 + δ−d+1, δ−2d+1 − δ−d+1]. Again, the remaining terms cannot cancel the second term, hence ln−1 = l ′ n−1 must hold. We can proceed this way, and show that lj = l ′ j must hold for all j ∈ [n], hence proving that the map is one-to-one." 
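The injectivity of the map L 7→ l̃n is also easy to confirm numerically for small cases. The following brute-force check (our own sketch, for d = 1, and not part of the formal proof) applies the displayed shift recursion to every increasing sequence on the grid and verifies that the resulting l̃n values are distinct:
```python
import itertools
import numpy as np

def shifted_ids(ls, delta, d=1):
    # Apply the n selective shift operations, in increasing order, to the
    # sorted column ids l_1 < ... < l_n; each shift adds delta^(-d)*(max - min).
    ls = list(ls)
    for j in range(len(ls)):
        ls[j] += delta ** (-d) * (max(ls) - min(ls))
    return ls

delta, n = 0.125, 3
grid = np.arange(0.0, 1.0, delta)           # {0, delta, ..., 1 - delta}, d = 1
outs = [shifted_ids(c, delta)[-1] for c in itertools.combinations(grid, n)]
# Distinct increasing input tuples yield distinct values of l_tilde_n:
assert len(set(np.round(outs, 9))) == len(outs)
```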
}, { "heading": "B.6 PROOF OF LEMMA 7", "text": "Note that |G+δ | = ( 1 δ + 1) dn, so the image of gc(G+δ ) (from Lemma 6) has finite number of distinct real numbers. Let M be the maximum over all these numbers. By construction of gc, we know that M > 0.\nTo construct a function gtknv that satisfies the statement of the lemma, we first implement the second part: gtknv (gc(L):,j) = 0d if L ∈ G+δ \\ G̃δ . Note from Lemma 6 that, for any L ∈ G̃δ , we have uT gc(L):,j ∈ [tl, tr] for all j, and for any L ∈ G+δ \\ G̃δ , uT gc(L):,j /∈ [tl, tr] for all j. Using this, we add the following feed-forward layer:\nZ 7→ Z − (M + 1)1nφ(uTZ), φ(t) = {\n0 if t ∈ [tl, tr] 1 if t /∈ [tl, tr].\nInput to this layer is gc(L). If L ∈ G̃δ , then φ(uT gc(L)) = 0Tn , so the output stays the same as the input. If L ∈ G+δ \\ G̃δ , then φ(uT gc(L)) = 1Tn , so all the entries of the input are shifted by −M − 1, and become strictly negative.\nRecall that by definition of G̃δ , all the entries of gc(L) for L ∈ G̃δ are nonnegative. So the next thing to do is mapping all strictly negative entries to zero. This can be done in a similar way as Lemma 5. For i ∈ [d], add the following layer:\nZ 7→ Z + e(i)φ((e(i))TZ), φ(t) = { −t if t < 0 0 if t ≥ 0.\nAfter these d layers, the output for L ∈ G+δ \\ G̃δ is a zero matrix, while the output for L ∈ G̃δ is gc(L).\nNow, it is left to map gc(L) to AL, for L ∈ G̃δ . Up to permutation equivariance, each different context L maps to n unique numbers uT gc(L), which are at least δ apart from each other. The idea of value mapping is to map each unique number to the corresponding output column.\nMore precisely, choose any L ∈ G̃δ . For each value of uT gc(L):,j , j ∈ [n], we add one feedforward layer\nZ 7→ Z + ((AL):,j − gc(L):,j)φ(u TZ − uT gc(L):,j1Tn ), φ(t) = { 0 t < −δ/2 or t ≥ δ/2, 1 −δ/2 ≤ t < δ/2.\nIf the input Z is a zero matrix, which is the case for L ∈ G+δ \\ G̃δ , uTZ = 0Tn . Since tl is much larger than 0, activation is all zero. Thus, zero input matrix remains the same at the output.\nIf the input Z is gc(L), where L ∈ G̃δ is not a permutation of L, then\nφ(uT gc(L)− uT gc(L):,j1Tn ) = 0Tn ,\nso gc(L) is left untouched.\nIf some other L is a permutation of L, and L:,i = L:,j , then\nφ(uT gc(L)− uT gc(L):,j1Tn ) = (e(i))T ,\nso i-th column of gc(L) will turn to\ngc(L):,i 7→ gc(L):,i + ((AL):,j − gc(L):,j) = gc(L):,i + ((AL):,i − gc(L):,i) = (AL):,i,\nwhich is the desired output. In conclusion, this layer maps the column gc(L):,j to (AL):,j , without affecting any other columns.\nAs seen above, we need one layer per each unique value of uT gc(L):,j for each L ∈ G̃δ . Note that there are O(n(1/δ)dn/n!) such numbers, so we can use O(n(1/δ)dn/n!) layers to finish our construction." }, { "heading": "C PROOF OF THEOREM 3", "text": "Proof of Theorem 3 can be done in a similar way as Theorem 2. As in the proof of Theorem 2, there are three parts: Lemma 8, Proposition 4, and Lemma 9. The statement and proof of Lemmas 8 and 9 can be done in almost the same way, this time without permutation equivariance.\nFor the proof of the second part, which corresponds to Proposition 4, we construct the network in a similar way. Recall that we can assume without loss of generality that X ∈ [0, 1]d×n. Choose\nE = 0 1 2 · · · n− 1 0 1 2 · · · n− 1 ... ... ...\n... 0 1 2 · · · n− 1 . Then, the first column of X + E is in [0, 1]d, second is in [1, 2]d, and so on; this means that for all rows, the coordinates are monotonically increasing. 
So we can use the same technique as the proof of Proposition 4 to divide the input values into cubes, quantize them to L, apply contextual mapping, and then value mapping. We describe each step in the following." }, { "heading": "C.1 QUANTIZATION BY FEED-FORWARD LAYERS", "text": "In a similar way as Lemma 5, the goal of this step is to quantize the input in [0, 1]d × [1, 2]d × · · · × [n− 1, n]d to its discrete version:\n[0 : δ : 1− δ]d × [1 : δ : 2− δ]d × · · · × [n− 1 : δ : n− δ]d.\nThis can be done by dn/δ feed-forward layers. We add dn/δ layers of the following form, for k = 0, δ, . . . , n− δ and i = 1, . . . , d:\nZ 7→ Z + e(i)φ((e(i))TZ − kδ1Tn ), φ(t) = {\n0 t < 0 or t ≥ δ −t 0 ≤ t < δ.\nAfter dn/δ layers, any input entry of X + E in [kδ, kδ + δ) is quantized to kδ." }, { "heading": "C.2 CONTEXTUAL MAPPING BY ATTENTION LAYERS", "text": "By Step 1, we quantized any input X + E to its quantized version. We call this quantized version L: L ∈ [0 : δ : 1− δ]d × [1 : δ : 2− δ]d × · · · × [n− 1 : δ : n− δ]d. As done in Lemma 6, we define u := (1, δ−1, . . . , δ−d+1) and lj := uTL:,j , for all j ∈ [n]. Note that, because L:,j ∈ [j − 1 : δ : j − δ]d, we have\n(j − 1)(1 + δ−1 + · · ·+ δ−d+1) ≤ lj ≤ (j − 1)(1 + δ−1 + · · ·+ δ−d+1) + δ−d+1 − δ,\nand l1 < l2 < · · · < ln. Notice that this corresponds to the Category 1 in the proof of Lemma 6. For simplicity of notation, let sj = (j − 1) ∑d−1 k=0 δ\n−k. We stack n(1/δ)d attention layers, with attention parts δ−dΨ(·; l − δ/2, l + δ/2) for each l ∈ ⋃n j=1[sj : δ : sj + δ\n−d+1 − δ], in increasing order of l.\nThese n(1/δ)d attention layers perform selective shift operations on lj’s, in increasing order of j. As seen in Appendix B.5.1, shift operations result in l̃1 < l̃2 < · · · < l̃n. Also, the map from L to l̃n is one-to-one, which can be shown in the same way as Appendix B.5.4. Since the range of lj’s are a bit different, we have a different upper bound on l̃n:\nl̃n := ln + n−1∑ k=1 δ−kd(ln−k − ln−k+1) + δ−nd(ln − l1)\n≤ ln + δ−d(l1 − ln) + δ−nd(ln − l1) ≤ sn + δ−d+1 − δ + (δ−nd − δ−d)(sn + δ−d+1 − δ) = (δ−nd − δ−d + 1) (\n(n− 1)δ −d − 1 δ−1 − 1\n+ δ−d+1 − δ )\n≤ (δ−nd − δ−d + 1)(δ−d − 1)(n− 1 + δ) < nδ−(n+1)d.\nFinally, we add an extra single-head attention layer with attention part nδ−(n+1)d−1ψ(·; 0). We define the output of this layer as gc(L). In a similar way as Appendix B.5.1, this layer shifts all the layers by nδ−(n+1)d−1 l̃n, thus making the intervals corresponding to different values of l̃n disjoint from each other. This ensures that different contexts L are mapped to distinct numbers in uT gc(L), thus implementing a contextual mapping." }, { "heading": "C.3 FUNCTION VALUE MAPPING BY FEED-FORWARD LAYERS", "text": "Now, it is left to map gc(L) to the desired output. As seen in the last step, each different context L maps to n unique numbers uT gc(L), which are at least δ apart from each other. The value mapping step can be done in a similar way as Lemma 7. The construction now requires O(n(1/δ)dn) layers because there is no permutation equivariance." }, { "heading": "D EXPERIMENTAL SETUP", "text": "For our experiments we follow the same setting as in BERT (Devlin et al., 2018). We first pre-train the models on the masked language modeling task and the next sentence prediction task. We use English Wikipedia corpus and BooksCorpus dataset (Zhu et al., 2015) for this pre-training. We use BERTBASE, a 12 layer Transformer model as the baseline. 
This model uses an embedding size of 768 and has self-attention layers with 12 heads and 3072-wide feed-forward layers. We train it with the Adam optimizer, with 0.01 dropout and weight decay. We pre-train for 250k steps with a batch size of 1024 and a maximum sequence length of 512. Pre-training takes around 2 days on 16 TPUv3 chips. We take the pre-trained models and fine-tune them on the MNLI and SQuAD datasets separately, using the same hyper-parameters as in Devlin et al. (2018). MNLI is a sentence entailment task in which, given a premise sentence, we must classify a hypothesis sentence as neutral, contradiction or entailment. We report the classification accuracy on this task. SQuAD is a question answering task in which, given a paragraph and a question, we must identify the answer as a span of words in the paragraph. For this task we report both the F1 score and the Exact Match (EM) percentage. The metrics are reported on the dev sets of these datasets.

For our experiments with the depth-wise separable convolution layers, we follow the implementation in (Wu et al., 2019). We first use a GLU layer followed by the convolution layer. We use 16 separable convolution filters of filter length 128, and reuse them, with each filter operating on 48 of the 768 dimensions of the input. This layer also has a skip connection, and the output is normalized using layer normalization, similar to the self-attention layer. In our experiments, we replace the self-attention layers of the Transformer, in the lower layers, with this convolution layer. We keep the feed-forward layer of the Transformer block the same.

For the experiments performed in this paper, one might consider an alternate explanation: that the tasks considered may be easy and do not require any advanced architecture to solve them, so that even a simple architecture (bi-linear projection or separable convolution) might suffice. To rule out this case we consider an even simpler architecture, namely average attention, as a baseline for our experiments.

Average attention. An average attention layer replaces the self-attention layer, and simply computes the average of the projections of all the other tokens. That is, we replace σ[(W_K^i X)^T W_Q^i X] in (1) with a matrix whose entries are all 1/n. The model still has the skip connections and the feed-forward layers of the Transformer." } ]
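To make the average-attention baseline concrete, here is a minimal sketch (numpy; the shapes, seed, and the name avg_attention are our own choices, not from the paper) of a layer that replaces the attention scores with a uniform 1/n matrix while keeping the value projection and skip connection:

```python
import numpy as np

def avg_attention(X, W_V, W_O):
    """Average-attention layer: every token attends uniformly (weight 1/n)
    to all tokens, so the attention output is the same mean vector at every
    position. X has shape (d, n); W_V, W_O are (d, d) projections."""
    d, n = X.shape
    A = np.full((n, n), 1.0 / n)     # replaces softmax((W_K X)^T (W_Q X))
    out = W_O @ (W_V @ X) @ A        # project values, then average columns
    return X + out                   # skip connection

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 5))
W_V = rng.normal(size=(8, 8)) / 8
W_O = rng.normal(size=(8, 8)) / 8
print(avg_attention(X, W_V, W_O).shape)  # (8, 5)
```

Since the attention matrix carries no input-dependent information, any gain over this baseline can be attributed to the content-based mixing of self-attention rather than to mere token averaging.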
2,020
ARE TRANSFORMERS UNIVERSAL APPROXIMATORS OF SEQUENCE-TO-SEQUENCE FUNCTIONS?
SP:214f7d764cebce811e175531d0b2e7f0c8dc18c3
[ "The paper proposes a new method for improving generative properties of VAE model. The idea is to train VAE in two stages: at first, train the vanilla VAE, then at the second stage freeze the encoder part and train the decoder part as a GAN generator with an additional regularizer which encourages cycle consistency in the latent space. Also the authors claim that other VAE-GAN hybrids which try to improve VAE model are “misguided” and poor samples and reconstructions of VAE are the consequence of minimum description length problem. ", "This paper proposes a hybrid VAE-GAN model, called the latent space renderer-GAN (LSR-GAN), with the goal to “imagine” the latent space of a VAE, and to improve the decoding and sampling quality of a VAE. First, a VAE-like model is trained, after which the encoder weights are frozen, and the decoder is trained as the generator of a GAN (together with an auxiliary discriminator). The generator loss also contains a reconstruction-like term in the latent space, described by the negative log density of the encoding distribution of the original latent conditioned on the output of the generator: -log q(z|g(z)). " ]
Variational Auto-Encoders (VAEs) are designed to capture compressible information about a dataset. As a consequence the information stored in the latent space is seldom sufficient to reconstruct a particular image. To help understand the type of information stored in the latent space we train a GAN-style decoder constrained to produce images that the VAE encoder will map to the same region of latent space. This allows us to “imagine” the information captured in the latent space. We argue that this is necessary to make a VAE into a truly generative model. We use our GAN to visualise the latent space of a standard VAE and of a β-VAE.
[]
[ { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-Encoding Variational Bayes", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Shengjia Zhao", "Jiaming Song", "Stefano Ermon" ], "title": "InfoVAE: Information Maximizing Variational Autoencoders", "venue": "URL http://arxiv.org/abs/ 1706.02262", "year": 2017 }, { "authors": [ "Bin Dai", "David Wipf" ], "title": "Diagnosing and enhancing VAE models", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Anders Boesen Lindbo Larsen", "Sren Kaae Snderby", "Hugo Larochelle", "Ole Winther" ], "title": "Autoencoding beyond pixels using a learned similarity metric", "venue": "Proceedings of The 33rd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Fei Gao", "Yi Wang", "Panpeng Li", "Min Tan", "Jun Yu", "Yani Zhu" ], "title": "Deepsim: Deep similarity for image quality assessment", "venue": null, "year": 2017 }, { "authors": [ "Andrew Brock", "Theodore Lim", "James M Ritchie", "Nick Weston" ], "title": "Neural photo editing with introspective adversarial networks", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "J. Rissanen" ], "title": "Modeling by shortest data", "venue": "description. Automatica,", "year": 1978 }, { "authors": [ "Xi Chen", "Diederik P. Kingma", "Tim Salimans", "Yan Duan", "Prafulla Dhariwal", "John Schulman", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Variational Lossy Autoencoder", "venue": "URL http://arxiv.org/abs/1611.02731", "year": 2016 }, { "authors": [ "Ian J. Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative Adversarial Networks", "venue": "URL http://arxiv.org/abs/1406.2661", "year": 2014 }, { "authors": [ "Alireza Makhzani", "Jonathon Shlens", "Navdeep Jaitly", "Ian Goodfellow", "Brendan Frey" ], "title": "Adversarial autoencoders", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Huaibo Huang", "Ran He", "Zhenan Sun", "Tieniu Tan" ], "title": "IntroVAE: Introspective variational autoencoders for photographic image synthesis", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Akash Srivastava", "Lazar Valkov", "Chris Russell", "Michael U Gutmann", "Charles Sutton" ], "title": "Veegan: Reducing mode collapse in GANs using implicit variational learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep Residual Learning for Image Recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Jose San Pedro", "Stefan Siersdorfer" ], "title": "Ranking and classifying attractiveness of photos in folksonomies", "venue": "Proceedings of the 18th international conference on World wide web,", "year": 2009 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "CANs trained by a two time-scale update 
rule converge to a local Nash equilibrium", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2013 }, { "authors": [ "Francesco Locatello", "Stefan Bauer", "Mario Lucic", "Gunnar Rätsch", "Sylvain Gelly", "Bernhard Schlkopf", "Olivier Bachem" ], "title": "Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations", "venue": "URL http: //arxiv.org/abs/1811.12359", "year": 2018 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih" ], "title": "Disentangling by factorising", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Tian Qi Chen", "Xuechen Li", "Roger B Grosse", "David K Duvenaud" ], "title": "Isolating sources of disentanglement in variational autoencoders", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Christopher P Burgess", "Irina Higgins", "Arka Pal", "Loic Matthey", "Nick Watters", "Guillaume Desjardins", "Alexander Lerchner" ], "title": "Understanding disentangling in β-VAE", "venue": "arXiv preprint arXiv:1804.03599,", "year": 2018 }, { "authors": [ "I. Higgins", "L. Matthey", "A. Pal", "C. Burgess", "X. Glorot", "M. Botvinick", "S. Mohamed", "A. Lerchner" ], "title": "β-VAE: Learning basic visual concepts with a constrained variational framework", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Samuel R Bowman", "Luke Vilnis", "Oriol Vinyals", "Andrew M Dai", "Rafal Jozefowicz", "Samy Bengio" ], "title": "Generating sentences from a continuous space", "venue": "arXiv preprint arXiv:1511.06349,", "year": 2015 }, { "authors": [ "Tong Che", "Yanran Li", "Athul Paul Jacob", "Yoshua Bengio", "Wenjie Li" ], "title": "Mode regularized generative adversarial networks", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Yujia Li", "Kevin Swersky", "Rich Zemel" ], "title": "Generative moment matching networks", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Variational auto-encoders (VAEs) have made a significant impact since their introduction by Kingma and Welling (2014). However, one of their perceived problems is their reconstruction performance. This has spawned a wave of research into trying to improve the reconstruction performance (Zhao et al., 2017; Dai and Wipf, 2019; Larsen et al., 2016; Gao et al., 2017; Brock et al., 2017). We argue that such attempts are misguided. The whole point of VAEs is to capture only compressible information and discard information specific to any particular image. This is a consequence of the well known evidence lower bound or ELBO objective function consisting of a negative log-probability of generating the original image from the latent representation (this is often implemented as a mean squared error between the image and the reconstruction, although as we argue in Appendix A this term should be proportional to the logarithm of the mean squared error) and a KL-divergence between the probability distribution representing a latent code and a ‘prior distribution’ (usually taken as a multivariate normal with mean zero and unit variance). These two terms have a nice interpretation in terms of the minimum description length (Rissanen, 1978)—this has been described elsewhere, for example, Chen et al. (2016). The KL-term can be viewed as a measure of the amount of information in the latent code while the log-probability of the image measures the amount of information required to change the image produced by the decoder into the input image (see Section 3 for details). That is, the latent space of a VAE can be viewed as a model of the dataset—capturing compressible information while not encoding any image specific information (which is cheaper to communicate using the reconstruction loss).\nThe great strength of a VAE is that it builds a model of the dataset that does not over-fit (i.e. code for in-compressible features found in specific images). However, because of this it typically will not do a good job of reconstructing images as the latent code does not contain enough information to do the reconstruction (for very restrictive dataset such as MNIST and Celeb-A a lot of information can be captured in the latent space, but for more complex datasets like ImageNet or CIFAR the reconstructions are poor). Of course, if you want good reconstructions on the training set then the simplest solution is to remove the KL-divergence term and just use an autoencoder. However, having a model that does not over-fit the dataset can be useful, but in this case the decoder of a standard VAE should not be regarded as a generative model—that is not its purpose. If we wish to generate realistic looking images we need to imagine the information discarded by the encoder. As a rather simplified analogy, consider a verbal description of an image “a five year old girl in a blue dress standing on a beach”. If we asked different artists to depict such scene there is clearly not enough information to provide pixel-wise or feature-wise similarity between their interpretation although each artist could render a convincing image that satisfies the description. In a similar manner if we\nwant a VAE to act as a generative model we need to build a renderer that will imagine an image consistent with the latent variable representation.\nA simple way to achieve this is using a modified Generative Adversarial Network (GAN). We call such a model a latent space renderer-GAN (or LSR-GAN). 
To generate an image we choose a latent vector z from the prior distribution for the VAE. This is passed to a generator network that generates an image, x̂, with the same dimensions as those of the dataset used to train the VAE. The generated image must both convince a discriminator network that it is a real image—as is usual for a GAN (Goodfellow et al., 2014)—and, at the same time, be mapped by the VAE encoder close to z. To accomplish this we add an additional cost to the normal GAN loss function for the generator (LGEN)

LGEN − λ log(qφ(z|x̂))   (1)

where qφ(·|x̂) is the probability distribution generated by the VAE encoder given an image x̂, and z is the latent vector that was put into the GAN generator. Note that when training the LSR-GAN we freeze the weights of the VAE encoder. The constant λ is an adjustable hyperparameter providing a trade-off between how realistic the image should look and how closely it captures the information in the latent space. This modification of the objective function can clearly be applied to any GAN or used with any VAE. Although the idea is simple, it provides a powerful method for visualising (imagining) the information stored in a latent space. Interestingly, it also appears to provide a powerful regularisation mechanism to stabilise the training of GANs.

Combinations of VAEs and GANs are, of course, not new (Makhzani et al., 2016; Larsen et al., 2016; Brock et al., 2017; Huang et al., 2018; Srivastava et al., 2017). In all cases we are aware of, GANs have been combined with VAEs to “correct” for the poor reconstruction performance of the VAE (see Appendix B for a more detailed discussion of the literature on VAE-GAN hybrids). As we have argued (and expound on in more detail in Section 3), we believe that the decoder of a VAE does the job it is designed to do. It cannot reconstruct images accurately, because the latent space of a VAE loses information about the image, by design. All we can do is imagine the type of image that a point in the latent space represents.

In the next section, we show examples of images generated by the LSR-GAN for both normal VAEs and β-VAEs (we also spend time describing VAEs, β-VAEs and the LSR-GAN in more detail). In addition, in this section we present a number of systematic experiments showing the performance of a VAE and LSR-GAN. In Section 3, we revisit the minimum description length formalism to explain why we believe a VAE is doomed to fail as a generative model. We conclude in Section 4. We cover more technical aspects in the appendices. In Appendix A we show that the correct loss function for a VAE requires minimising a term proportional to the logarithm of the mean squared error. In Appendix B we draw out the similarities and differences between our approach to hybridising VAEs with GANs and other work in this area. We present some additional experimental results in Appendix C. A detailed description of the architecture of LSR-GAN is given in Appendix D. We end the paper with Appendix E by showing some samples generated by randomly drawing latent variables and feeding them to the LSR-GAN." }, { "heading": "2 IMAGINING LATENT SPACES", "text": "A natural question to ask is what information about an image gets represented in the latent space of a VAE. To answer this we can use the VAE encoder to generate a distribution qφ(z|x) representing that image in the latent space (see Section 2.1 for details on VAEs). From this distribution we can sample points in the latent space and feed this to the LSR-GAN generator.
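This encode-sample-render loop is simple enough to express directly in code. The following sketch (PyTorch-style, under our own naming: encoder returns a pair (mu, log_var) and G is the trained LSR-GAN generator; neither name comes from the paper) produces several "imaginings" of a single test image:

```python
import torch

@torch.no_grad()
def imagine(x, encoder, G, n_samples=5):
    """Encode an image with the (frozen) VAE encoder, draw several latent
    samples from q(z|x), and render each with the LSR-GAN generator."""
    mu, log_var = encoder(x.unsqueeze(0))      # diagonal Gaussian q(z|x)
    std = (0.5 * log_var).exp()
    samples = []
    for _ in range(n_samples):
        z = mu + std * torch.randn_like(std)   # z ~ q(z|x)
        samples.append(G(z))                   # one "imagining" of x
    return torch.cat(samples, dim=0)
```

Each returned image is consistent with the information stored in the latent code, but the discarded, image-specific detail is filled in differently by the generator.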
We show examples of this sampling procedure for both CIFAR-10 and ImageNet (down-sampled to 64 × 64) in Figure 1. In all cases in this paper the input images are taken from a test set that is independent of the training set. Note that both CIFAR-10 and ImageNet are “hard” for VAEs in the sense that they represent extremely diverse sets of images. As a consequence, the VAE latent space will struggle to store detailed information about the images and the VAE reconstructions will be poor. We have repeated this for a β-VAE (see Section 2.3 for a full description of β-VAEs). We note that there is very little variation between the different samples drawn from qφ(z|x), particularly for the standard VAE (β = 1), showing that the latent space of the VAE is relatively smooth (there is more variation when β = 20).

To get a sense of the variation in the information stored in latent spaces we show in Figure 2 input-output pairs, where the left image is the input and the right image is the output generated by the LSR-GAN generator seeded with a latent vector encoding of the input image. The reconstructions capture the shape and background, but clearly lose a lot of detail. In some cases it appears that the type of object is being captured, although in the case of the boat with the β-VAE (with β = 20) the wrong object is being rendered." }, { "heading": "2.1 VARIATIONAL AUTOENCODERS", "text": "The structure of a VAE is represented schematically below.

x ∼ D → Encoder (parameters φ) → (µφ, σ²φ) → z ∼ q(z|x, φ) → Decoder (parameters θ) → x̂ = Dθ(z)

We sample an input x from some dataset, D. To be concrete we will consider the case where the inputs are images, although clearly a VAE can be used to represent many different types of data. For each input x the encoder outputs a mean vector, µ, and a standard deviation vector, σ, that describe an axis-aligned normal distribution, qφ(z|x) = N(z | µφ(x), diag(σ²φ(x))). A latent variable z is sampled from this distribution and then fed to a decoder. For simple black and white datasets such as MNIST the decoder outputs a scalar at each pixel location that can be interpreted as the probability that the pixel is black. For more complex datasets the decoder usually generates a “reconstruction” x̂ = Dθ(z). The probability of generating a pixel value xi is then usually taken as a normal distribution with mean x̂i (i.e. pθ(xi|z) = N(xi | x̂i, σ²)) and variance σ² that measures the expected size of the errors between the input images, x, and the reconstructions, x̂.

The loss function for a VAE is equal to the negative evidence lower bound (ELBO)

L = −Ex∼D[ Ez∼qφ(z|x)[log(pθ(x|z))] − KL(qφ(z|x) ‖ N(0, I)) ].   (2)

As explained in Appendix A, log(pθ(x|z)) is chosen to be proportional to the logarithm of the reconstruction error between x̂ and the input image x—in our experiments this produced better reconstructions than replacing log(pθ(x|z)) with the mean squared error." }, { "heading": "2.2 LSR-GAN", "text": "LSR-GAN is a novel hybridization of the VAE and GAN models. The most distinctive difference of LSR-GAN from previous work is that it is a two-stage model. In the first stage we train the VAE model. Having done this we freeze the weights of the VAE and train the GAN. We train the discriminator, D, of LSR-GAN in the same way as a normal GAN. That is, we minimise the loss function

LD = −Ex[log(D(x))] − Ez[log(1 − D(G(z)))]   (3)

where G is the generator (or the decoder) of LSR-GAN. The job of the discriminator, D, is to decide whether its input is a real image or not.
Thus, to optimise the loss function we need to maximize the log-probability of passing the real data, x, while minimising the log-probability of accepting a random sample G(z) generated by a generator G seeded with a random latent vector z. The architecture of the generator is the same as that of a normal GAN, but the loss function is slightly different. We add an additional term, giving

LG = Ez[log(D(G(z)))] + λ log(qφ(z|G(z))).   (4)

The parameters of the discriminator and generator are trained in the usual tick-tock fashion using gradient descent. We built the VAE and the generator of the GAN using a ResNet (He et al., 2016), as it gave slightly better performance than using a standard CNN. The architecture of the discriminator is the same as DCGAN (Radford et al., 2016). The architecture is described in Appendix D.

To test the LSR-GAN we use the VAE to generate a latent representation z for an image drawn from an independent test set. The latent vector is then used as a seed value for the generator in the LSR-GAN. The LSR-GAN can produce sharper reconstruction images than the VAE (see Figure 3). Although not visually so obvious, we have used a quantitative measure of sharpness computed as the luminance-normalised Laplacian (San Pedro and Siersdorfer, 2009, Section 3.1.2). For the reconstructed images from the VAE we obtained a measure of 0.17 ± 0.03, while for the LSR-GAN we obtain 0.28 ± 0.08 (i.e. an improvement of a factor of two). We have also computed the FID measure of image quality for CIFAR-10 (Heusel et al., 2017). For images seeded from a testing example the VAE achieved a score of 89.8 while LSR-GAN achieved a score of 44.1, while for images seeded with random latent variables (i.e. z ∼ N(0, I)) the FID score for the VAE is 138.6 while for the LSR-GAN it is 47.4. This should not be surprising. The decoder of the VAE is trained only where there are training images. Despite the fact that the KL-divergence tries to ensure that as much latent space as possible is used, the constraint of minimising the reconstruction loss means that most of the latent space is far from a training example. Although the VAE does not do too badly generating testing examples, these tend to be substantially closer in the latent space to the training examples than random samples. In contrast, the LSR-GAN is trained on random samples, so the generator has to produce “realistic” images over the whole latent space. Of course, whether these generated images represent anything recognisable is open to question. For diverse training sets such as CIFAR-10 and ImageNet this may be very difficult. What image should we expect from a latent vector halfway between a truck and a bird? In Appendix E we show images generated by seeding LSR-GAN with random latent variables for CIFAR-10, ImageNet, MNIST and Celeb-A." }, { "heading": "2.3 BETA-VAE", "text": "A goal of generating a latent representation is for the representation to be disentangled. Intuitively disentanglement seems clear: We would want information that is somehow causally independent to be encoded into orthogonal directions (or different variables) in our latent space (Bengio et al., 2013). Unfortunately, this is not only quite difficult to achieve in practice (at least, in an unsupervised setting), but it is even difficult to formulate (see Locatello et al. (2018)). Despite this difficulty, there have been many attempts to achieve disentanglement (Kim and Mnih, 2018; Chen et al., 2018; Burgess et al., 2018).
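Before discussing one of these attempts, it may help to see the two LSR-GAN updates of Section 2.2, equations (3) and (4), as code. The sketch below is schematic, under our own assumptions: encoder(x) is the frozen VAE encoder returning (mu, log_var), G and D are the generator and discriminator (D is assumed to output probabilities in (0, 1)), G.latent_dim is our own attribute name, and q_log_prob evaluates log qφ(z|G(z)) as a diagonal Gaussian. None of these names come from the paper.

```python
import torch

def q_log_prob(z, mu, log_var):
    # log-density of z under the diagonal Gaussian N(mu, diag(exp(log_var)))
    return (-0.5 * (log_var + (z - mu) ** 2 / log_var.exp()
                    + torch.log(torch.tensor(2 * torch.pi)))).sum(dim=1)

def lsr_gan_step(x, encoder, G, D, opt_d, opt_g, lam=1.0):
    z = torch.randn(x.size(0), G.latent_dim)          # z ~ N(0, I)

    # Discriminator step: minimise eq. (3).
    opt_d.zero_grad()
    loss_d = -(torch.log(D(x)).mean()
               + torch.log(1 - D(G(z).detach())).mean())
    loss_d.backward()
    opt_d.step()

    # Generator step: maximise eq. (4), i.e. minimise its negative.
    # The encoder is frozen; gradients flow through it into G only.
    opt_g.zero_grad()
    x_hat = G(z)
    mu, log_var = encoder(x_hat)
    loss_g = -(torch.log(D(x_hat)).mean()
               + lam * q_log_prob(z, mu, log_var).mean())
    loss_g.backward()
    opt_g.step()
```

The key design point is that the encoder parameters receive no updates: the λ-weighted term only pulls the generator towards images whose encoding matches the seed z.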
One of the most prominent of these attempts has been the β-VAE introduced by Higgins et al. (2017), where the KL-divergence term in a normal VAE is weighted by a parameter β:

L = −Ex∼D[ Ez∼qφ(z|x)[log(pθ(x|z))] − β KL(qφ(z|x) ‖ N(0, I)) ].   (5)

The argument is that by making β ≫ 1 we encourage disentanglement. Contrariwise, by making β ≪ 1 we make a VAE closer to an auto-encoder. This improves the reconstruction performance on the training examples, but at the cost of allowing the latent space to over-fit the training set.

In Figure 4 we show examples of input-output pairs for different values of β. We observe that for large β the outputs are quite different from the input images, in contrast to small β where many more details of the original input are captured.

Although the LSR-GAN model generates slightly clearer, less blurry, images, it has a higher reconstruction error than the VAE decoder. We show the mean squared error measured on a testing set from CIFAR-10 as a function of β in Figure 5(a). This poor performance of the LSR-GAN is unsurprising: it uses the same information as the VAE (i.e. the information stored in the latent space). By producing sharper images it will pay the price of getting the boundary wrong. The blurry edges from the VAE are a way to hedge its bets and reduce the mean squared error. Interestingly, the mean squared error remains fairly constant as we increase β from a low value, until we reach β = 1, after which it rapidly increases. One interpretation of this fact is that the VAE with β = 1 is successfully encoding all the useful information (i.e. compressible information), so for reconstructing unseen images it will perform as well as an auto-encoder. As we increase β above 1, the reconstruction error increases rapidly.

In Figure 5(b) we show the classification performance as measured by a simple classifier trained on the CIFAR-10 training set. The classifier achieved 84% correct classification on the raw images. We find little variation as we decrease β below 1. As we increase β above 1 the classification accuracy falls off. Again we can attribute this to the latent space of the VAE (with β = 1) capturing most useful information. Interestingly the high-β VAE fails to capture
To communicate the model and errors we need to use an optimal coding strategy. Rather than specifier and actual code we can use the Shannon bound (i.e. the negative log-probability of the tokens we transmit). For this to be meaningful, we need to specify both the errors and code to a finite precision. The precision of the errors will determine the accuracy of the data we communicate. If the ith component of the error is distributed according to p( i) then the cost of communicating the error to a precision of ∆ is approximately − log(p( i) ∆) = − log(p( i)) − log(∆). The factor − log(∆) is common to all coding schemes so is irrelevant to choosing optimal codes z. In contrast the precision to which we transmit the model will directly determine the cost M(z). There is a balance to be struck: a more precise model can potential lead to a better reconstruction x̂, reducing the reconstruction cost, E( ), but at the same time increasing the cost, M(z), of communicating the code z.\nThe KL-divergence term, KL ( q(z) ∥∥p(z)) (also known as the relative entropy) can be interrupted as the communication cost (in nats) of transmitting a random variable z with uncertainty given by q(z) assuming an underlying probability distribution of all random variables of p(z). Using this interpretation we see that the loss function of a VAE is equivalent to the expected message length (in nats) of communicating a sample from the dataset D by using a random variable z with uncertainty q(z). By minimising the loss function we find a coding scheme with the minimum description length (or, at least, an approximate local minimum). By encoding a message as a random variable z drawn from a distribution qφ(z|x) the VAE is able to find an optimal balance between accuracy to which it transmits the model (determined by the standard deviation vector, σ, generated by the VAE encoder) and the need to reduce the reconstruction error. From an MDL perspective the ELBO is the correct objective function, and should not be regarded as a approximate lower bound to what we really want to achieve. If there are too many dimensions in the latent space then some of the\ncomponents of z (channel in information theory terms) are such that zi is approximated distributed by N (zi|0, 1) for all inputs x. The channel is effectively “switched off” (and it will be ignored by the decoder as it is just a source of random noise). This is referred to as latent variable collapse and is sometimes viewed as problematic, however, from the MDL viewpoint it acts as an elegant automatic dimensionality selection technique.\nThe job of the decoder in a variational autoencoder is to reconstruct the image only using information that can be compressed. Image specific information is ignored. For example, information about the precise shape of an object is probably not compressible. As a result the decoder tends to hedge its bets and has a blurry outline. Of course, some encoders and decoders will be better than others, but to date there is little evidence in the literature that the performances of VAEs are massively suboptimal, at least, when working with images. With an extremely powerful encoder and decoder and a limited dataset it would be possible for the encoder to communicate an identifier of the input image and for the decoder to reproduce the image just from the identifier, thus avoiding communicating any information about the visual content of the image—this requires that the decoder memorises all the images. 
This would be an extreme case of what is sometimes called posterior collapse. There is some evidence that with very strong encoders and decoders the amount of information stored in the latent space (as measured by the KL-divergence) decreases (Bowman et al., 2015). This might point to a weakness of the VAE set-up—the MDL set-up really only makes sense when the dataset is arbitrarily large—but this problem could be ameliorated by data augmentation. However, using standard CNN encoders and decoders we found no evidence for memorisation of the images (for example, the VAE would produce a similar level of reconstruction for images from a separate test set). For language modelling there seems to be more evidence that VAEs often fail to extract information in the latent space, but for images it seems likely that a properly trained VAE will extract a good fraction of the compressible information. We believe that the failure of the VAE decoder to produce high quality reconstructions (except in the case of very simple datasets such as MNIST and possibly CELEB-A) is because to do so would require communicating information that is non-compressible. As a consequence we should not think of the decoder of a VAE as a generative model: It will, by design, produce blurry and poor quality reconstructions. We want this to ensure that the latent space only captures information that is common across many images. We see the mapping from images to latent space as a many-to-one mapping. Thus, the mapping from the latent space to images will be ambiguous, and the best we can do is imagine an image compatible with the latent variable: exactly what we have designed the LSR-GAN to do." }, { "heading": "4 CONCLUSION", "text": "VAEs are often taken to be a pauper’s GAN. That is, a method for generating samples that is easier to train than a GAN, but gives slightly worse results. If this is the only objective then it is clearly legitimate to modify the VAE in any way that will improve its performance. However, we believe that this risks losing one of their most desirable properties, namely their ability to learn features of the whole dataset while avoiding encoding information specific to particular images. We have argued that because of this property, a VAE is not an ideal generative model. It will not be able to reconstruct data accurately and consequently will struggle even more with generating new samples. One of the weaknesses of the vast literature on VAEs is that it often attempts to improve them without regard to what makes VAEs special.

As we have argued in this paper, a consistent way of using the latent space of a VAE is to use a GAN as a data renderer, using the VAE encoder to ensure that the GAN is generating images that represent the information encoded in the VAE’s latent space. This involves “imagining” the information that the VAE disregards. LSR-GAN can be particularly useful in generating random samples, although, as shown in Appendix E, for very diverse datasets the samples are often not recognisable as real world objects. Although there are already many VAE-GAN hybrids, to the best of our knowledge, they are all designed to “fix” the VAE. In our view VAEs are not broken, and “fixing” them is actually likely to break them (i.e. by encoding image specific information in the latent space).
Although the main idea in this paper is relatively simple, we believe its main contribution is as a corrective to the swath of literature on VAEs that, in our view, often throws the baby out with the bath water in an attempt to fix VAEs, despite the fact that they perform in exactly the way they were designed to." }, { "heading": "A ON THE ELBO", "text": "In the standard VAE we maximise the log-probability of generating the original image. In the original paper this was achieved by the decoder outputting a probability distribution akin to what happens in the latent space. More often it is assumed that the pixel errors are normally distributed with some variance σ². Thus the log-probability of generating all the images is

Σ_{x∈D} E_{z∼q(z|x)}[log(p(x|z))] = Σ_{i=1}^N log(N(xi|x̂i, σ²)) = −Σ_{i=1}^N (xi − x̂i)²/(2σ²) − (N/2) log(2πσ²)

where the sum is over all predicted pixels—i.e. the number of pixels in an image times the number of colour channels times the number of examples (or, more usually, the mini-batch size). However,

σ² = (1/N) Σ_{i=1}^N (xi − x̂i)²

(at least, if we make the natural assumption that the errors have mean zero). As a consequence

Σ_{x∈D} E_{z∼q(z|x)}[log(p(x|z))] = −N/2 − (N/2) log(2πσ²)

so that we should minimise (N/2) log(σ²). In information theory terms this tells us that it is cheaper to communicate the residues if they are more tightly concentrated. Note that since σ² is proportional to the mean squared error, EMSE, it suffices to minimise (N/2) log(EMSE). We note that

∂/∂x̂i [(N/2) log(2πσ²)] = (x̂i − xi)/σ²

which is precisely the gradient of

Σ_{i=1}^N (xi − x̂i)²/(2σ²)

if we ignored the dependence of σ² on x̂i. In many publicly available implementations of VAEs the algorithm minimises Σ_{i=1}^N (xi − x̂i)², which arbitrarily assumes σ² = 1/2 rather than its true value. This means that these implementations are effectively running a β-VAE with some unknown β (in our experience with β > 1). This makes comparing results from different VAE implementations difficult. For example, rescaling outputs to lie in the range [−1, 1] rather than [0, 1] would change the effective β-value.

B VAE-GAN HYBRIDS

The hybridisation of VAE (or autoencoder) and GAN models has been developed for several years. There have been many attempts in this area, and we compare LSR-GAN to the most related work in this section.

Adversarial-type autoencoders are the most intuitive and simplest way to combine a VAE or an autoencoder with a GAN. Most of these models introduce a discriminator into the autoencoder training. AAE (Makhzani et al., 2016) applies a discriminator to distinguish the output of the encoder from random samples from the prior distribution. It uses this discriminator to replace the KL term in the VAE. VAE/GAN (Larsen et al., 2016) is the first model that applied feature-wise errors, and the input of its generator contains three different types of images: the reconstructed images, the generated images and the real images. Like our model, it collapses the decoder and the generator into one. MDGAN (Che et al., 2017) is another AE-GAN hybrid which is close to VAE/GAN; it tries to match the manifold of the GAN to the real data by adding a geometric metrics regulariser and mode regulariser. None of these methods feed the output of the generator back into the encoder or train their network in two stages, which is the biggest difference between these methods and ours.
Also, many of these hybrid models adopt an autoencoder instead of a VAE, while the VAE in our model cannot be replaced by an autoencoder.

There are not many models that feed the output of the decoder back into the encoder. The Introspective Adversarial Network (IAN) (Brock et al., 2017) is a unified model, which means the discriminator is not separate. IAN only encodes the features extracted by the discriminator rather than the raw images. The discriminator of IAN extracts features from both raw images and synthetic images. The generator accepts both random samples and the output of the discriminator as inputs at the same time. In contrast, our model only accepts one input. Another model that adopts the introspective method is IntroVAE (Huang et al., 2018), which constructs the inference model E and generator model G in a circulation loop. IntroVAE has the ability to generate high-resolution images, but it does not contain any discriminator network.

The work most closely related to our LSR-GAN is VEEGAN (Srivastava et al., 2017). It introduces a second network Fθ into the GAN. The task of Fθ is to map both the real images and synthetic images to a Gaussian distribution, which is what we ask the encoder to do. When the input of Fθ is the output of the generator, the objective function minimises the distance between the input of the generator and the output of Fθ. If the input of Fθ is real data, the objective function minimises the cross entropy between the Gaussian prior and the output of Fθ. Another related model is the Generative Moment Matching Network (GMMN) (Li et al., 2015). In this model the autoencoder is frozen, and they then minimize the maximum mean discrepancy (MMD) between the generated representation and the data representation, using a uniform prior to generate the representations. In LSR-GAN, we instead match two Gaussian distributions by maximising the probability distance. None of these related works are two-stage models except GMMN. Also, to the best of our knowledge, LSR-GAN is the first VAE-GAN hybrid model that applies the probability distance in the loss function." }, { "heading": "C ADDITIONAL EXPERIMENTS", "text": "We briefly present some additional experimental data.

C.1 DEPENDENCE OF LSR-GAN ON β

In Table 1 we present measurements of the performance of outputs from both VAEs and LSR-GAN for different values of β. Some of this data is also presented graphically in Figure 5, but we have included additional measurements.

C.2 DEPENDENCE OF LSR-GAN ON λ

The performance of the LSR-GAN depends on the hyper-parameter λ. This balances the need to produce convincing images (from the discriminator’s point of view) with the requirement that the latent space of the GAN should be close to that of the VAE. These two objectives are not necessarily contradictory, although we will see that changing λ has benefits and drawbacks.

In Figure 6 we show the effect of changing λ over approximately three orders of magnitude on (a) the absolute classification accuracy, (b) the classification accuracy compared to the class labels predicted by the classifier on the raw images, (c) the mean squared reconstruction error, and (d) the variance in the predictions when choosing different samples from qφ(z|x). We see that increasing λ improves the classification performance (both relative and absolute). However, and perhaps surprisingly, increasing λ produces a significant reduction in the reconstruction error. More intuitively, it also causes a reduction in the variance between images sampled independently from qφ(z|x).
That is, using the encoder in the LSR-GAN acts as a regulariser, ensuring nearby points in latent space map to similar images. More details are given in Table 2." }, { "heading": "D ARCHITECTURE OF THE LSR-GAN", "text": "In this appendix we describe the detailed architecture of the VAE and LSR-GAN we used. Table 3 describes the structure of the VAE’s encoder and decoder and the GAN’s generator and discriminator networks. The encoder and decoder/generator are based on a ResNet. The ResNet block structure is shown in Figure 7. Both networks are optimized using Adam (Kingma and Ba, 2015) with a learning rate of 2 × 10⁻⁴ and β1 = 0.5. The code we used to implement the models is available at https://github.com/iclr-2020-zzz/LSR-GAN." }, { "heading": "E SAMPLING", "text": "In this appendix we show sample images generated by LSR-GAN starting with a random seed z ∼ N(0, I). These are shown in Figure 9 for an LSR-GAN trained on CIFAR-10 and ImageNet. Although the images superficially look reasonable, on close inspection it is clear that most samples for the LSR-GAN trained on CIFAR-10 and ImageNet are not real world objects. This reflects the fact that the images for these two datasets are very variable, leaving most of the latent space representing rather surreal objects.

We have also trained LSR-GAN on MNIST and Celeb-A, with samples shown in Figure 9. Perhaps unsurprisingly, most samples are identifiable." } ]
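As a postscript to Appendix A, the log-MSE reconstruction term it argues for is a one-liner to implement. The sketch below (PyTorch; the function name and the use of a batch mean are our own choices) contrasts it with the plain sum-of-squares term that implicitly fixes σ² = 1/2:

```python
import torch

def vae_recon_loss(x, x_hat, use_log_mse=True):
    """Reconstruction term of the (negative) ELBO over a batch.
    N counts every predicted value: batch * channels * height * width."""
    n = x.numel()
    mse = torch.mean((x - x_hat) ** 2)
    if use_log_mse:
        # (N/2) log(E_MSE): the effective sigma^2 tracks the actual errors,
        # as derived in Appendix A.
        return 0.5 * n * torch.log(mse)
    # Plain sum of squares: arbitrarily assumes sigma^2 = 1/2,
    # i.e. an unknown effective beta.
    return torch.sum((x - x_hat) ** 2)

x = torch.rand(8, 3, 32, 32)
x_hat = torch.rand(8, 3, 32, 32)
print(vae_recon_loss(x, x_hat).item())
```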
2,019
null
SP:de54df40429714a5d975e1b0ef2c4b529ba5f6e3
[ "In this paper, the doubly adaptive stochastic gradient method (DASGrad) is introduced via augmenting adaptive moment methods with adaptive (as opposed to uniform) probabilities for data sampling. The convergence of the proposed method is analyzed in terms of regret bound and is compared to similar results for ADAM. The method is validated with experiments on both convex and non-convex objectives as well as in applications to transfer learning.", "In the paper, the authors propose a general form that covers most stochastic gradient methods so far, e.g. stochastic gradient descent, Adam, or adaptive probabilities methods. Then, they also provide a convergence analysis of convex problems. In the experiments, they compared the proposed DASGRAD method with Adam, AMSGrad or SGD. Experimental results show that the proposed method converges faster than compared methods. Following are my concerns:" ]
Adaptive moment methods have been remarkably successful for optimization under the presence of high dimensional or sparse gradients. In parallel to this, adaptive sampling probabilities for SGD have allowed optimizers to improve convergence rates by prioritizing examples to learn efficiently. Numerous applications in the past have implicitly combined adaptive moment methods with adaptive probabilities, yet the theoretical guarantees of such procedures have not been explored. We formalize double adaptive stochastic gradient methods DASGRAD as an optimization technique and analyze its convergence improvements in a stochastic convex optimization setting; we provide empirical validation of our findings with convex and non-convex objectives. We observe that the benefits of the method increase with the model complexity and variability of the gradients, and we explore the resulting utility in extensions to transfer learning.
[]
[ { "authors": [ "Yoshua Bengio", "Jérôme Louradour", "Ronan Collobert", "Jason Weston" ], "title": "Curriculum learning", "venue": "In Proceedings of the 26th Annual International Conference on Machine Learning,", "year": 2009 }, { "authors": [ "Leon Bottou", "Frank E. Curtis", "Jorge Nocedal" ], "title": "Optimization methods for large-scale machine learning, 2016", "venue": "URL http://arxiv.org/abs/1606.04838. cite arxiv:1606.04838", "year": 2016 }, { "authors": [ "Kevin W. Bowyer", "Nitesh V. Chawla", "Lawrence O. Hall", "W. Philip Kegelmeyer" ], "title": "SMOTE: synthetic minority over-sampling technique", "venue": "CoRR, abs/1106.1813,", "year": 2011 }, { "authors": [ "Dominik Csiba", "Zheng Qu", "Peter Richtrik" ], "title": "Stochastic dual coordinate ascent with adaptive probabilities", "venue": "In Proceedings of The 32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "J. Mach. Learn. Res.,", "year": 2011 }, { "authors": [ "Charles Elkan" ], "title": "The foundations of cost-sensitive learning", "venue": "In Proceedings of the 17th International Joint Conference on Artificial Intelligence - Volume 2,", "year": 2001 }, { "authors": [ "Geoffrey E. Hinton" ], "title": "To recognize shapes, first learn to generate images", "venue": "Progress in brain research,", "year": 2007 }, { "authors": [ "Nitish Shirish Keskar", "Dheevatsa Mudigere", "Jorge Nocedal", "Mikhail Smelyanskiy", "Ping Tak Peter Tang" ], "title": "On large-batch training for deep learning: Generalization gap and sharp", "venue": "minima. CoRR,", "year": 2016 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "ADAM: A method for stochastic optimization", "venue": "CoRR, abs/1412.6980,", "year": 2014 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E. Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1,", "year": 2012 }, { "authors": [ "Andrew L. Maas", "Raymond E. Daly", "Peter T. Pham", "Dan Huang", "Andrew Y. Ng", "Christopher Potts" ], "title": "Learning word vectors for sentiment analysis", "venue": "In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1,", "year": 2011 }, { "authors": [ "Sashank J. Reddi", "Satyen Kale", "Sanjiv Kumar" ], "title": "On the convergence of ADAM and beyond", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Tom Schaul", "John Quan", "Ioannis Antonoglou", "David Silver" ], "title": "Prioritized experience replay", "venue": "CoRR, abs/1511.05952,", "year": 2015 }, { "authors": [ "Zebang Shen", "Hui Qian", "Tengfei Zhou", "Tongzhou Mu" ], "title": "Adaptive variance reducing for stochastic gradient descent", "venue": "In Proceedings of the 32nd International Joint Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Abhinav Shrivastava", "Abhinav Gupta", "Ross B. Girshick" ], "title": "Training region-based object detectors with online hard example mining", "venue": "CoRR, abs/1604.03540,", "year": 2016 }, { "authors": [ "Sebastian U. Stich", "Anant Raj", "Martin Jaggi" ], "title": "Safe adaptive importance sampling", "venue": "CoRR, abs/1711.02637,", "year": 2017 }, { "authors": [ "T. 
Tieleman", "G. Hinton" ], "title": "Lecture 6.5—RMSProp: Divide the gradient by a running average of its recent magnitude", "venue": "COURSERA: Neural Networks for Machine Learning,", "year": 2012 }, { "authors": [ "Matthew D. Zeiler" ], "title": "ADADELTA: an adaptive learning rate method", "venue": "CoRR, abs/1212.5701,", "year": 2012 }, { "authors": [ "R. Zhu" ], "title": "Gradient-based Sampling: An Adaptive Importance Sampling for Least-squares", "venue": "ArXiv e-prints,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION AND MOTIVATION", "text": "Stochastic gradient descent (SGD) is a widely used optimization method, and currently through backpropagation this algorithm has propelled the success of many deep learning applications. The Deep Learning community has particularly adopted variants of adaptive moment methods for SGD that specialize in high-dimensional features and non convex objectives, examples include ADAGRAD, ADADELTA, RMSPROP, ADAM and AMSGRAD (Duchi et al. (2011); Zeiler (2012); Tieleman & Hinton (2012); Kingma & Ba (2014); Reddi et al. (2018)). All these adaptive moment methods relied on the efficient use of the information of the geometry of the problem to improve the rate of convergence.\nIn parallel to the previous ideas, adaptive probabilities methods for SGD, traditionally focusing on convex objectives, have shown advantages over its uniform sampling baselines, by allowing a more efficient use of the gradient information (Zhu (2018); Shen et al. (2016); Bottou et al. (2016); Csiba et al. (2015); Stich et al. (2017)).\nAdaptive probabilities were introduced to the Deep Learning community by Hinton (2007) through the discriminative fine tuning procedure. The method was further explored in a range of applications like object detection, reinforcement learning and curriculum learning (Shrivastava et al. (2016); Schaul et al. (2015); Bengio et al. (2009)). In these examples the combination of adaptive moments and adaptive probabilities methods was implicit and its analysis as a pure optimization technique was still an open question.\nIn this paper we analyze the asymptotic convergence properties of combining adaptive probabilities and adaptive moments. To our knowledge such family has not yet been introduced as an optimization procedure. We will refer to this family of optimization algorithms as Double Adaptive Stochastic Gradient DASGRAD.\nWe show that the improvements of DASGRAD depend on its flexibility to control the variance of the adaptive moments methods. We prove the theoretical guarantees and improvements in a convex setting and validate these observations empirically in convex and deep learning objectives. Finally we demonstrate the generalization properties of the algorithm with a simple extension to importance weight transfer learning." }, { "heading": "2 ADAPTIVE GRADIENT METHODS", "text": "Notation. In order to facilitate the proofs and reading process we introduce some simplified notation that will be common to the analyzed algorithms. Let a, b ∈ Rd andM ∈ Sd+, then the multiplication of vector a by the inverse of M will be M−1a = a/M . Let √ a be the element-wise square root of vector a, a2 the element-wise square, a/b the element-wise division, and max(a, b) the elementwise max of vector a and vector b. Finally for any natural n the set {1, . . . , n} is denoted as [n]. Let T = {(xi,yi)}ni=1 be a training set; let f : Θ ×X × Y → R be a differentiable function that represents the empirical risk of an agent over T for the parameters θ ∈ Θ, with Θ ⊆ Rd a convex feasible set of parameters; let Sd+ the set of positive definite matrices in Rd×d, for a given matrix M ∈ Sd+ and parameter θ′ ∈ Θ; let ΠΘ,M be the projection operator over the feasible domain defined by ΠΘ,M (θ′) = arg minθ∈Θ ||M1/2(θ − θ′)||. 1\nFor the iterative stochastic optimization algorithmA, let it be a sampled index i at step t drawn from the training set indices [n], with it ∼ pt and pt ∈ ∆n+ = {p ∈ Rn : pi > 0 Σipi = 1}. 
We denote the evaluated risk f(θ, x_i, y_i) = f_i(θ), the complete gradient ∇f(θ_t) = (1/n) Σ_i ∇f_i(θ_t) and the stochastic gradient ∇f_{i_t}(θ_t); analogously, m_t = (1/n) Σ_i m_i is a full descent direction and m_{i_t} a stochastic descent direction.

Stochastic Optimization Framework. To analyze the convergence of the stochastic optimization algorithm A we use the convex optimization setting, where we assume that the objective function is convex with bounded gradients, that is ||∇f_i(θ)||_∞ ≤ G for all i ∈ [n], θ ∈ Θ, and that the parameter space Θ has bounded diameter, that is ||θ − θ′||_∞ ≤ D for all θ, θ′ ∈ Θ.

For our purposes, the algorithm A at time t chooses a distribution over the training set p̂_t ∈ Δ_+^n, obtains a training example i_t ∼ p̂_t and its importance weight ŵ_{i_t} = (1/n)/p̂_{i_t}, then updates its parameters θ_t ∈ Θ using the available data at time t and the importance weight ŵ_{i_t} to unbias the direction of the gradients. After the update, the algorithm incurs a loss from an unknown function f(θ_t). To assess the performance of the algorithm after T steps we use the expected regret, which measures the difference between the loss at time t and the loss for the optimal fixed parameter, along the possible trajectories induced by the chosen probabilities:

R(A) = Σ_{t=1}^T E_n[ f_i(θ_t) − min_θ E_n[f_i(θ)] ]

The goal is to design an algorithm A with sublinear expected regret, R(A) = o(T), which in turn implies that the algorithm converges on average to the optimal parameter.

Algorithm 1: Double Adaptive Methods
Input: θ_1 ∈ Θ, step sizes {α_t > 0}_{t=1}^T, functions {φ_t, ψ_t}_{t=1}^T
for t = 1 to T do
  Choose p̂_t ∈ Δ_+^n, and sample i_t ∼ p̂_t
  Calculate g_{i_t} = ∇f_{i_t}(θ_t) and ŵ_{i_t} = (1/n)/p̂_{i_t}
  m_{i_t} = φ_t(g_{i_1}, . . . , g_{i_t}) and V̂_{i_t} = ψ_t(g_{i_1}, . . . , g_{i_t})
  θ̂_{t+1} = θ_t − α_t ŵ_{i_t} V̂_{i_t}^{−1/2} m_{i_t}
  θ_{t+1} = Π_{Θ, V̂_{i_t}^{1/2}}(θ̂_{t+1})

Algorithm 1 constitutes the general family of double adaptive gradient methods. This family comprehends classical stochastic gradient descent, the adaptive moment methods (Zeiler (2012); Tieleman & Hinton (2012); Kingma & Ba (2014); Reddi et al. (2018)), and second order methods (Duchi et al. (2011)), varying the averaging functions of the past gradients φ_t : Θ^t → R^d, and the functions ψ_t : Θ^t → S_+^d approximating the Hessian matrix.

¹The projection operator enables the algorithm to deal with constrained optimization over compact convex domains that are equivalent to common regularization techniques like ridge and LASSO.

Adaptive Probabilities Methods. The classic stochastic gradient descent algorithm is recovered with the following step size, sampling probabilities and functions:

α_t = α/√t, p_{i_t} = 1/n for all t ∈ [T], i ∈ [n]
φ_t(g_{i_1}, . . . , g_{i_t}) = g_{i_t}, ψ_t(g_{i_1}, . . . , g_{i_t}) = I (SGD)

Adaptive probabilities methods can be obtained simply by allowing the algorithm to choose a different probability p̂_t at any time t:

α_t = α/√t, p̂_t ∈ Δ_+^n for all t ∈ [T]
φ_t(g_{i_1}, . . . , g_{i_t}) = g_{i_t}, ψ_t(g_{i_1}, . . . , g_{i_t}) = I (AP-SGD)

Significant improvements in the convergence rate of the algorithm can be obtained by cleverly choosing and computing such probabilities, which in turn enables the algorithm to use data in a more efficient manner (Stich et al., 2017). Fixed importance sampling is the case when p̂_t = p for all t ∈ [T].

Adaptive Moments Methods. Duchi et al. propelled interest and research on adaptive algorithms.
In their work they noticed that SGD lacked good convergence behavior in sparse settings, and proposed a family of algorithms that dynamically incorporate information about the geometry of the data (Duchi et al., 2011). Following the huge gains obtained with ADAGRAD, the deep learning community proposed variants based on exponential moving average functions for ψ_t like ADADELTA, RMSPROP, ADAM and most recently AMSGRAD (Zeiler (2012); Tieleman & Hinton (2012); Kingma & Ba (2014); Reddi et al. (2018)).

The first algorithm, ADAGRAD, is obtained by the following proximal functions:

α_t = 1/√t, p_{i_t} = 1/n for all t ∈ [T], i ∈ [n]
φ_t(g_{i_1}, . . . , g_{i_t}) = g_{i_t}
ψ_t(g_{i_1}, . . . , g_{i_t}) = (1/t) diag(Σ_{τ=1}^t g_{i_τ}²) (ADAGRAD)

The ADAM/AMSGRAD algorithm is obtained by setting:

α_t = 1/√t, p_{i_t} = 1/n for all t ∈ [T], i ∈ [n]
φ_t(g_{i_1}, . . . , g_{i_t}) = Σ_{τ=1}^t β_1(t)_τ g_{i_τ}
v_{i_t} = (1 − β_2) Σ_{τ=1}^t β_2^{t−τ} g_{i_τ}², v̂_{i_t} = max(v̂_{i_{t−1}}, v_{i_t})
ψ_t(g_{i_1}, . . . , g_{i_t}) = diag(v̂_{i_t}) (ADAM/AMSGRAD)

Fortunately, a very simple and computationally efficient way to implement ADAM is given by a recursion. RMSPROP is the particular case of ADAM with β_1 = 0 and without the maximum operator for the second moments vector, while ADAM is recovered without the maximum operator.

Double Adaptive Methods. The key idea behind both the adaptive probabilities methods and the adaptive moment methods is the efficient use of the information available in the training data to improve the convergence of the algorithms. In the case of adaptive moment methods, the diagonal approximations of the Hessian matrix use the information about the geometry of the problem, while for the adaptive sampling methods the probabilities p̂_{i_t} prioritize the examples with the highest impact on the learning progress. As these improvements rely on complementary sources of information, we can combine them into a general framework described by the double adaptive methods in Algorithm 1.

To analyze the theoretical improvement guarantees of the double adaptive methods we first extend the adaptive moments convergence guarantees to the stochastic case with uniform sampling, and then compare them to the convergence guarantees using optimal probabilities." }, { "heading": "3 CONVERGENCE ANALYSIS", "text": "" }, { "heading": "3.1 CONVERGENCE OF ADAPTIVE MOMENTS METHODS", "text": "We first provide a regret bound of ADAM for weakly convex objectives with uniform probabilities, adapting the arguments in Reddi et al. (2018) and Kingma & Ba (2014). Then we extend these results to the adaptive probabilities case.

Theorem 1. Let {θ_t}_{t=1}^T be the sequence obtained with ADAM; then the regret bound is:

R(ADAM) ≤ Σ_{t=1}^T [1/(2α_t(1 − β_{1t}))] E_n[ ||V̂_{i_t}^{1/4}(θ_t − θ*)||² − ||V̂_{i_t}^{1/4}(θ_{t+1} − θ*)||² ]
+ Σ_{t=1}^T [α_t/(2(1 − β_{1t}))] E_n[ ||V̂_{i_t}^{−1/4} m_{i_t}||² ]
+ Σ_{t=1}^T [α_t β_{1t}/(2(1 − β_{1t}))] ||V̂_t^{−1/4} m_{t−1}||² + Σ_{t=1}^T [β_{1t}/(2α_t(1 − β_{1t}))] ||V̂_t^{1/4}(θ_t − θ*)||²

Corollary 1.1. Follow the sequence {θ_t}_{t=1}^T of ADAM with step size α_t = α/√t, averaging parameters β_1 = β_{11}, β_{1t} ≤ β_1 for all t ∈ [T], γ = β_1/√β_2 < 1, and uniform probabilities p_{i_t} = 1/n. If we assume that Θ has bounded diameter D and ||∇f_{i_t}(θ)||_∞ ≤ G for all t ∈ [T] and θ ∈ Θ, then the expected regret bound is:

R(ADAM) ≤ [D²√T/(2α(1 − β_1))] E_n[ ||v̂_{i_T}^{1/4}||² ]
+ [α√(1 + log T)/(2(1 − β_1)²√((1 − β_2)(1 − γ)))] Σ_{h=1}^d || |ḡ|_{1:T,h} ||
+ αGd/(2α(1 − β_1)³√((1 − β_2)(1 − γ))) + [D²/(2α(1 − β_1))] Σ_{t=1}^T √t β_1^{T−t} ||v̂_t^{1/4}||²" }, { "heading": "3.2 CONVERGENCE OF DOUBLE ADAPTIVE METHODS", "text": "Theorem 2.
Let {θt}Tt=1 be a DASGRAD sequence, for a trajectory pt ∈ ∆n+ the regret bound is:\nR(DASGRAD) ≤ T∑ t=1\n1\n2αt(1− β1t) Ep1:t\n[ ||V̂ 1/4it (θt − θ ∗)||2 − ||V̂ 1/4it (θt+1 − θ ∗)||2 ] +\nT∑ t=1 αt 2(1− β1t) Ep1:t [ w2it ||V̂ −1/4 it mit ||2 ]\n+ T∑ t=1 αtβ1t 2(1− β1t) ||V̂ −1/4t mit−1 ||2 + T∑ t=1 β1t 2αt(1− β1t) ||V̂ 1/4t (θt − θ∗)||2\nCorollary 2.1. Following the sequence {θt}Tt=1 of DASGRAD, step size αt = α/ √ t, averaging\nparameters β1 = β11 , β1t ≤ β1 for all t ∈ [T ], γ = β1/ √ β2 < 1 and the optimal adaptive probabilities p̂it ∝ ||V̂ −1/4 it\nmit ||. If we assume that Θ has bounded diameter D and ||∇fit(θ)||∞ ≤ G for all t ∈ [T ] and θ ∈ Θ, then the expected regret bound is:\nR(DASGRAD) ≤ D 2 √ T\n2α(1− β1) Ep̂1:T\n[ ||v̂1/4iT || 2 ]\n+ α √ 1 + log(T )\n2(1− β1)2 √ (1− β2)(1− γ) d∑ h=1 || ¯| g |1:T,h || − T∑ t=1 Varn ( ||V̂ 1/4it mit || ) + αGd\n2α(1− β1)3 √ (1− β2)(1− γ) +\nD2\n2α(1− β1) T∑ t=1 √ tβT−t1 ||v̂ 1/4 t ||2" }, { "heading": "3.3 CONVERGENCE COMPARISON", "text": "Adaptive moment methods can improve classical gradient descent by integrating the geometry of the problem with a diagonal approximation of the Hessian an may achieve an exponentially smaller bound for the expected regret with respect to the dimensions of the input data d, when dealing with sparse features or small gradients in general. As shown by Duchi et al. (2011) for the adaptive moment methods in the sparse setting, the potential component and error component of the expected regret of Corollary 1.1 each will satisfy:\nEp1:T [ ||v̂1/4iT || 2 ] = Ep1:T\n[ d∑\nh=1\nv̂ 1/2 iT ,h\n] √ d and\nd∑ h=1 || ¯| g |1:T,h || √ dT\nwhich in turn translates to a much better expected regret bound than O( √ dT ) for classic SGD on weakly convex objectives and sparse inputs.\nIn parallel to the previous improvements, the adaptive probabilities methods can further speed the convergence by allowing the algorithm to evaluate the relative importance of each data point to maximize the expected learning progress, and minimize the variance of the stochastic gradient at each step. Given the optimal adaptive probabilities the error component of the expected regret in Corollary 2.1 satisfies:\nd∑ h=1 || ¯| g |1:T,h || − T∑ t=1 Varn ( ||V̂ 1/4it mit || )\nd∑ h=1 || ¯| g |1:T,h ||\nThis shows how the optimal adaptive sampling probabilities on a convex setting can further improve the convergence rate by allowing to flexibly control the variance of the gradients." }, { "heading": "4 DASGRAD IMPLEMENTATION", "text": "To obtain the optimal probabilities it is necessary to compute the norm of the gradient for each training sample at each step. Given the deep learning libraries today, this calculation renders optimal adaptive probabilities methods impractical for real applications 2. 
Still, for completeness of the exposition of the theoretical results we provide empirical evidence of the convergence improvements, to do it we use an approximation of the optimal algorithm from the double adaptive methods, following Algorithm 2.\nAlgorithm 2: DASGRAD approximation Input: θ1 ∈ Θ, functions {φt, ψt}Tt=1, frequency J for t = 1 to T do\nif t mod J = 0 then Compute p̂t ∈ ∆n+ setting p̂it ∝ ||V̂ −1/4 it\nmit ||+ Sample it ∼ p̂t using the segment tree Calculate git = ∇fit(θt) and ŵit = (1/n)/p̂it mt = β1tmt−1 + (1− β1t)gt and vt = β2vt−1 + (1− β2)g2t v̂t = max(v̂t−1,vt) and V̂t = diag(v̂t) θ̂t+1 = θt − αtŵit V̂ −1/2 it\nmit θt+1 = ΠΘ,V̂ 1/2it (θ̂t+1)\n2Current dynamical computational graph libraries compute the average gradient batches by default and available workarounds are still slow. See https://github.com/pytorch/pytorch/issues/7786 and https://github.com/tensorflow/tensorflow/issues/4897." }, { "heading": "5 EMPIRICAL RESULTS", "text": "In this section we provide empirical evidence of the convergence rates on classification problems using logistic regression and deep neural networks, using ADAM, AMSGRAD, and DASGRAD.\nLogistic Regression: For the convex setting we solve two classification problems with L2 regularization. For the non sparse feature experiment we use the MNIST digit dataset, which is composed of 60, 000 images of 28×28 hand written digits. For the sparse feature experiment we use the IMDB movie rating dataset which is composed of 25, 000 highly polar movie reviews and the sentiment label for the review Maas et al. (2011).3\nNeural Networks: For the non convex setting we perform one experiment, we use the CIFAR10 dataset, which is composed of 60, 000 colour images of 32 × 32 pixels labeled in 10 classes. For this multiclass classification problem we use a convolutional neural network following the SMALLCIFARNET architecture, consisting of two convolution filters combined with max pooling and local response normalization, followed by two fully connected layers of rectified linear units Krizhevsky et al. (2012). 4\n3For both experiments, we use a batch of size 32, with a probability update every 10 steps, and the step size αt = α/ √ t. We set β1 = 0.9, β2 = 0.99, and choose α through a grid search. For the MNIST dataset, for all three optimizers, the optimal learning rates are α = 0.01. For the IMDB dataset, we find the optimal learning rates to be α = 0.005 for ADAM, α = 0.006 for AMSGRAD, and α = 0.02 for DASGRAD.\n4For the experiment we use a batch size of 32, with a probability update every 300 steps, and step size of αt = α/ √ t. We set β1 = 0.9, β2 = 0.99, and choose α through a grid search, for which the optimal learning rate for all optimizers is α = 0.001.\nFrom the comparison in Figure 1, we observe that in all cases the DASGRAD optimization algorithm outperforms its adaptive moment counterparts represented by ADAM and AMSGRAD, as predicted by the theoretical analysis. The improvement is more significant for the IMDB dataset than it is for the MNIST dataset. From Figure 1 we can see that DASGRAD continues to outperform ADAM and AMSGRAD in the deep learning setting. These results reinforce the previous statement that the benefits from DASGrad increase with the complexity of the data and the models." 
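For concreteness, the following is a minimal NumPy sketch of the DASGRAD approximation of Algorithm 2 on a toy logistic-regression problem. It is an illustrative reading of the pseudocode, not the implementation behind the experiments above: the projection onto Θ is dropped (the toy problem is unconstrained), and the sampling scores use the current per-example gradients as a cheap stand-in for the per-example directions mit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary logistic regression; n is kept small so that the per-example
# gradient pass needed to refresh the sampling scores stays affordable.
n, d = 512, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

def per_example_grads(theta):
    """Gradients of the logistic loss, one row per training example."""
    p = 1.0 / (1.0 + np.exp(-X @ theta))
    return (p - y)[:, None] * X                      # shape (n, d)

theta = np.zeros(d)
m = np.zeros(d)                                      # first-moment average
v = np.zeros(d)                                      # second-moment average
v_hat = np.zeros(d)                                  # running max of v (AMSGRAD fix)
beta1, beta2, alpha, eps = 0.9, 0.99, 0.1, 1e-8
J, T = 10, 2000                                      # refresh frequency, horizon
probs = np.full(n, 1.0 / n)

for t in range(1, T + 1):
    if t % J == 0:
        # Refresh p_i proportionally to ||V^{-1/4} g_i||, a proxy for the
        # optimal probabilities of Corollary 2.1 (the expensive step).
        scores = np.linalg.norm(
            per_example_grads(theta) / (v_hat + eps) ** 0.25, axis=1) + 1e-12
        probs = scores / scores.sum()
    i = rng.choice(n, p=probs)
    w_i = (1.0 / n) / probs[i]                       # importance weight, unbiases the step
    p_i = 1.0 / (1.0 + np.exp(-X[i] @ theta))
    g = (p_i - y[i]) * X[i]                          # stochastic gradient
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    v_hat = np.maximum(v_hat, v)
    theta -= (alpha / np.sqrt(t)) * w_i * m / (np.sqrt(v_hat) + eps)
```

Refreshing the probabilities only every J steps, as in the experiments, amortizes the per-example gradient pass that makes the exact adaptive probabilities impractical at scale.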
}, { "heading": "6 DISCUSSION", "text": "" }, { "heading": "6.1 CONVERGENCE IMPROVEMENTS AND VARIANCE OF GRADIENTS", "text": "To further explore the relationship between variance and the improvements to the convergence rate of the DASGRAD algorithm, we implemented an online centroid learning experiment. Because of the linear relationship between the features and the gradients, we are able to explicitly control their variance. For this experiment, the empirical risk and gradients will be given by Rn(θ) = 1 2n ∑n i=1 ||θ − xi||2 and ∇f(θ,xi) = θ − xi.\nAs we can see from Figure 2 the greater the variance of the gradients, the greater the benefit that one can obtain from an adaptive probabilities method such as DASGRAD in convex objectives, since those probabilities will prioritize the data points with the most learning potential." }, { "heading": "6.2 FLEXIBLE CONTROL OF VARIANCE", "text": "Recent insights on the generalization properties of minibatch SGD for non convex objectives suggest that higher variance gradients tend to converge to flatter regions of the loss surfaces (Keskar et al. (2016)). Applications like curriculum learning that shape the learning procedure, by gradually making the task more difficult through importance sampling, may allow to maintain a higher variance of the gradients for longer steps, this combined with the previous intuitions offers an explanation of the mathematical basis of its success.\nWhile curriculum learning is contrary to the optimal probabilities of the double adaptive methods for convex settings, the underlying principle of flexible control of the variance of the gradients operates as the mechanism behind both procedures. This observations strengthens the argument that improving our understanding of the implicit optimization techniques in this algorithms can also improve our understanding that so far has relied mostly on intuitive explanations of their success." }, { "heading": "6.3 IMPORTANCE WEIGHT TRANSFER LEARNING", "text": "When the training T and test T ′ set do not share the same distribution, we may face a distribution mismatch problem. The DASGRAD algorithm is compatible with the cost re-weighting correction technique Elkan (2001); Bowyer et al. (2011) as we can set the importance weights wt for any trajectory of distributions pt, to unbias the gradients for the test distribution instead of the training.\nR(DASGRAD)T ′ = T∑ t=1 EpT ′ [ fi(θt)−minθEpT ′ [fi(θ)] ] =\nT∑ t=1 Ep1:t [ witfi(θt)−minθEpT ′ [fi(θ)] ] To test the generalization properties of the DASGRAD algorithm empirically, we unbalanced the MNIST training data set by reducing ninety percent the observations from the 1 and 3 digit. We set the importance weights to wit = (|Li|/m)/pit , where |Li| is the count of the label L associated with index i in test over m, the number of test samples. As we see in Figure 3 using DASGRAD with the correct importance weights has the desired generalization properties when facing a domain shift." }, { "heading": "7 CONCLUSION", "text": "Capability of learning from data efficiently is a prerequisite for practical success of complex learning models across various problem settings and application contexts. We have shown how double adaptive stochastic gradient descent methods enable efficient learning in a generalizable manner, while ensuring convergence improvement. 
We observed that DASGRAD algorithm outperforms currently prevalent variants of adaptive moment algorithms such as ADAM and AMSGRAD overall, in the context of the number of iterations required to achieve comparable performance, under the theoretical convergence guarantees in a stochastic convex optimization setting. With empirical validation in convex and non convex settings, we have shown that the advantages of DASGrad become more prominent with the increasing complexity of data and models, and with more variance in the gradients. We have also broadened our results to demonstrate generalization properties of our approach and its extensions to transfer learning, as well as intuitive connections to other learning scenarios." }, { "heading": "A APPENDIX", "text": "A.1 PROOF OF THEOREM 2\nThe proof of Theorem 2 assumes a convex differentiable objective function f , bounded diameter for the parameters, and bounded norm of the gradients for any trajectory of probabilities pt ∈ ∆n+.\nProof. We build an upper bound of the expected regret using the convexity of the loss: f(θt)− f(θ∗) ≤ 〈 gt, θt − θ∗〉 = En [〈 git , θt − θ∗〉]\nWhile using DASGRAD the update of the parameter will be given by the stochastic update dependent on the training example it and the current parameter θt:\nθt+1 = ΠΘ,V̂ 1/2it (θ̂t+1) = ΠΘ,V̂ 1/2it\n(θt − αtwitV̂ −1/2 it\nmit) = arg min θ∈Θ\n|| V̂ 1/4it (θt − αtwitV̂ −1/2 it mit) ||\nTo bound the expected regret of the algorithm, we use the fact that: θ̂t+1 − θ∗ = (θt − θ∗)− αtwitmit/ √ V̂it\n||V̂ 1/4it (θ̂t+1 − θ ∗)||2 = ||V̂ 1/4it (θt − θ ∗)||2 − 2αtwit〈mit , θt − θ∗〉+ α2tw2it ||V̂ −1/4 it mit ||2\n= ||V̂ 1/4it (θt − θ ∗)||2 − 2αtwit〈β1tmit−1 + (1− β1t)git , θt − θ∗〉+ α2tw2it ||V̂ −1/4 it mit ||2\nWe identify the first three components as the potential, the immediate cost, now with extra terms associated to the moving average, and the error.\nLemma 1. For any M ∈ Sd+ and convex feasible set Θ ⊆ Rd with the projection operator ΠΘ,M let u1 = ΠΘ,M (z1) and u2 = ΠΘ,M (z2) then:\n||M1/2 (u1 − u2) || ≤ ||M1/2 (z1 − z2) ||\nTaking the expectation at time t, and using the extended norm reduction property of the projections from Lemma 1 we obtain the following inequality:\nEpt [ ||V̂ 1/4it (θt+1 − θ)|| 2 ∣∣∣θt ] ≤ Ept [ ||V̂ 1/4it (θt − θ∗)||2 ∣∣∣θt ]\n− Ept [ 2αtwit〈β1tmit−1 + (1− β1t)git , θt − θ∗〉 ∣∣∣θt ]\n+ α2tEpt [ w2it ||V̂ 1/4 it mit ||2 ∣∣∣θt ] Since wt is such that the interior product will be unbiased, then:\nEpt [ ||V̂ 1/4it (θt+1 − θ)|| 2 ∣∣∣θt ] ≤ Ept [ ||V̂ 1/4it (θt − θ∗)||2 ∣∣∣θt ]\n− 2αt〈β1tmt−1 + (1− β1t)gt, θt − θ∗〉 + α2tEpt [ w2it ||V̂ 1/4 it mit ||2 ∣∣∣θt ] Finally rearranging the terms, summing until time T and taking expectations:\nR(DASGRAD)T ≤ T∑ t=1\n1\n2αt(1− β1t) Ep1:t\n[ ||V̂ 1/4it (θt − θ ∗)||2 − ||V̂ 1/4it (θt+1 − θ ∗)||2 ] +\nT∑ t=1 αt 2(1− β1t) Ep1:t [ w2it ||V̂ −1/4 it mit ||2 ]\n+ T∑ t=1 β1t 2(1− β1t) αt||V̂ −1/4t mt−1||2 + T∑ t=1 β1t 2αt(1− β1t) ||V̂ 1/4t (θt − θ∗)||2\n(1)\nLast line is Cauchy-Schwarz and Young’s inequality applied to the inner product of the extra terms associated with the moving average in the immediate cost.\nA.2 PROOF OF COROLLARY 2.1\nProof. The proof of Corollary 2 is in the line of the improvements provided by Reddi et al. to the convergence proof of Kingma & Ba for ADAM, we adapt the arguments to the stochastic case. We assess separately each component of the expected regret from Equation 1.\nLemma 2 addresses the potential, Lemma 3 the error, and Lemma 6 and Lemma 4 the moving average terms. 
The proof of Corollary 2.1 is a consequence of all the previous Lemmas using the optimal probabilities while Corollary 1.1 is the case with uniform probabilities. Following the sequence {θt}Tt=1 of DASGRAD, with step size αt = α/ √ t, averaging parameters\nβ1 = β11 and β1t ≤ β1 for all t ∈ [T ] and γ = β1/ √ β2 < 1. and bounded diameter D for Θ and ||∇fit(θ)||∞ ≤ G for all t ∈ [T ] and θ ∈ Θ. Lemma 2. From Equation 1 the potential component will satisfy: T∑ t=1 1 2αt(1− β1t) Ep1:t [ ||V̂ 1/4it (θt − θ ∗)||2 − ||V̂ 1/4it (θt+1 − θ ∗)||2 ] ≤ D 2 √ T 2α(1− β1) Ep1:T [ ||v̂1/4iT || 2 ]\nProof One can decompose the potential in the following manner: T∑ t=1 1 2αt(1− β1t) Ep1:t [ ||V̂ 1/4it (θt − θ ∗)||2 − ||V̂ 1/4it (θt+1 − θ ∗)||2 ] ≤\n1\n2α1(1− β1) Ep1\n[ ||V̂ 1/4i1 (θ1 − θ ∗)||2 ] − 1\n2αT (1− β1) Ep1:T\n[ ||V̂ 1/4iT (θT+1 − θ ∗)||2 ]\n+ 1\n2(1− β1) T∑ t=2 ( 1 αt Ep1:t [ ||V̂ 1/4it (θt − θ ∗)||2 ] − 1 αt−1 Ep1:t−1 [ ||V̂ 1/4it−1(θt−1 − θ ∗)||2 ])\n≤ 1 2α1(1− β1)\nEp1 [ ||v̂1/4i1 D 1|| 2 ]\n+ 1\n2(1− β1) T∑ t=2 ( 1 αt Ep1:t [ ||v̂1/4it D 1|| 2 ] − 1 αt−1 Ep1:t−1 [ ||v̂1/4it−1 D 1|| 2 ])\n≤ D 2\n2(1− β1)\n( 1\nα1 Ep1\n[ ||v̂i1 ||2 ] + T∑ t=2 ( 1 αt − 1 αt−1 ) Ept [ ||v̂it ||2 ])\n= D2\n2αT (1− β1) Ep1:T\n[ ||v̂1/4iT || 2 ] = D2 √ T\n2α(1− β1) Ep1:T\n[ ||v̂1/4iT || 2 ]\nThe first inequality comes from rearranging and the definition of β1t , the second inequality comes from the bounded diameter assumption applied to each entry of θt − θ∗ and using the Hadamard’s product to represent the original matrix multiplication, the third inequality5 comes from the definition of v̂it = max(v̂it−1 ,vit), the last equality comes from the property of the telescopic sequence. This completes the proof of Lemma 2.\n5The third inequality is of particular importance since Reddi et al. showed that it is one of the main flaws in the convergence analysis of ADAM and RMSPROP, and provided a simple fix to the adaptive moment methods that guarantees the non increasing property needed to achieve the telescopic sequence upper bound.\nLemma 3. From Equation 1 the error component, once evaluated in the optimal probabilities p̂t will satisfy:\nEp̂t [ ŵ2it ||V̂ 1/4 it mit ||2 ∣∣∣θt ] = En [ ||V̂ 1/4it mit ||2 ∣∣∣θt ]− Varn (||V̂ 1/4it mit ||) Proof Creating a lower bound with Cauchy-Schwarz and showing that it is achievable with the optimal probabilities p̂it ∝ ||V̂ 1/4 it mit ||.\nLemma 4. The first component of the extra terms associated with the moving average in Equation 1 will satisfy:\nT∑ t=1 αtβ1t 2αt(1− β1t) ||V̂ −1/4t mt−1||2 ≤ αGd 2α(1− β1)3 √ (1− β2)(1− γ)\nProof Following very similar arguments as those from Lemma 6, we can get:\nT∑ t=1 αtβ1t 2αt(1− β1t) ||V̂ −1/4t mt−1||2 ≤ α 2(1− β1)2 √ (1β2)(1− γ) T∑ t=1 βT−t1 ||gt||1\n≤ α 2(1− β1)2 √ (1β2)(1− γ) T∑ t=1 βT−t1 ||G 1||1 ≤ αGd 2α(1− β1)3 √ (1− β2)(1− γ)\nThis completes the proof of Lemma 4.\nLemma 5. To finish the second component of the extra terms associated with the moving average in Equation 1, will satisfy:\nT∑ t=1 β1t 2αt(1− β1t) ||V̂ 1/4t (θt − θ∗)||2 ≤ D2 2α(1− β1) T∑ t=1 √ tβT−t1 ||v̂ 1/4 t ||2\nProof\nT∑ t=1 β1t 2αt(1− β1t) ||V̂ 1/4t (θt − θ∗)||2 ≤ T∑ t=1 β1t 2αt(1− β1t) ||v̂1/4t D 1||2\n= D2\n2α T∑ t=1 √ t β1t (1− β1t) ||v̂1/4t ||2\n≤ D 2\n2α(1− β1) T∑ t=1 √ tβT−t1 ||v̂ 1/4 t ||2\nThis completes the proof of Lemma 5.\nLemma 6. 
From Equation 1 the error component, once evaluated in the optimal probabilities p̂t, and the total error will satisfy that:\nT∑ t=1 αt 2(1− β1t) Ep̂t [ ŵ2it ||V̂ 1/4 it mit ||2 ∣∣∣θt ] ≤ α √ 1 + log(T )\n2(1− β1)2 √ (1− β2)(1− γ) d∑ h=1 || ¯| g |1:T,h || − T∑ t=1 Varn ( ||V̂ 1/4it mit || )\nProof For Lemma 6 we follow Kingma & Ba, for every element at time t of the error component:\nαt 2(1-β1t)\nEn [ ||V̂ −1/4it mit || 2 ] ≤ αt 2(1-β1t) En [ ||V −1/4it mit || 2 ]\n= αt 2(1-β1t) En [ ||Σtτ=1β1(t)τgiτ ||2√ vit ] =\nαt 2(1-β1t)\nEn [ ||Σtτ=1β1(t)1/2τ β1(t)1/2τ giτ ||2√\nviτ ] ≤ αt\n2(1-β1t) En\n[( t∑\nτ=1\nβ1(t)τ\n)( t∑\nτ=1\nβ1(t)τ ||giτ ||2√ viτ\n)]\n≤ αt 2(1− β1) En\n[( t∑\nτ=1\nβt−τ1\n)( t∑\nτ=1\nβt−τ1 ||giτ ||2√ viτ\n)]\n≤ αt 2(1− β1)2 En t∑ τ=1 βt−τ1 ||giτ ||2√\n(1− β2)Σtτ=1βt−τ2 g2iτ ≤ αt\n2(1− β1)2 √ (1− β2) En t∑ τ=1 βt−τ1√ βt−τ2 ||giτ ||2 |giτ | = α\n2(1− β1)2 √ (1− β2) En [ 1√ t ( t∑\nτ=1\nγt−τ ||giτ ||1\n)]\nThe first inequality follows the definition of the auxiliary vectors v̂t = max(v̂t−1, vt), the second inequality comes from the non negativity of β1(t)τ . The third and fourth inequality comes from the decreasing property β1 ≤ β11 and β1t ≤ β11 and the property of the geometric sequence. The fifth inequality comes from the non negativity of β2 and ||giτ ||2, the last equality uses the definition of the step size. Using induction one can show that:\nT∑ t=1 αt 2(1− β1t) En [ ||V̂ 1/4it mit || 2 ] ≤ T∑ t=1\nα 2(1− β1)2 √ (1− β2) En [( T∑ τ=t γt−τ√ τ ||giτ ||1 )] (2)\nContinuing the proof of Lemma 6, let k = α 2(1−β1)2 √ (1−β2) , from Equation 2 we have that: T∑ t=1 αt 2(1− β1t) En [ ||V 1/4it mit || 2 ] ≤ T∑ t=1 k En [ 1√ t ( T∑ τ=t γt−τ ||giτ ||1 )]\n= k ( T∑ t=1 n∑ it=1 1 n ||git ||1 ( T∑ τ=t γt−τ√ τ ))\n≤ k ( T∑ t=1 n∑ it=1 1 n ||git ||1 ( T∑ τ=t γt−τ√ t ))\n= k ( T∑ t=1 n∑ it=1 1 n ||git ||1 1√ t ( T∑ τ=t γt−τ ))\n≤ k (1− γ) ( T∑ t=1 ( n∑\nit=1\n1 n ||git ||1 )( 1√ t ))\n= k\n(1− γ) d∑ h=1 ( T∑ t=1 ( n∑ it=1 1 n |git,h| )( 1√ t ))\n≤ k (1− γ) d∑ h=1\n √√√√ T∑\nt=1\n( n∑\nit=1\n1 n |git,h| )2√√√√ T∑ t 1 t ≤ k √ 1 + log(T )\n(1− γ)\nd∑ h=1 √√√√ T∑ t=1 ( n∑ it=1 1 n |git,h| )2 The first equality comes from a change of indexes, the second inequality is an upper bound for the arithmetic sequence that begins at t, the third inequality is an upper bound for the geometric sequence, the fourth inequality comes is an application of Cauchy-Schwarz inequality, finally the fifth inequality is an upper bound for the arithmetic sequence.\nT∑ t=1 Ep̂t [ ŵ2it ||V̂ 1/4 it mit ||2 ∣∣∣θt ] = T∑ t=1 En [ ||V̂ 1/4it mit || 2 ∣∣∣θt ]− T∑ t=1 Varn ( ||V̂ 1/4it mit || ) ≤ α √ 1 + log(T )\n2(1− β1)2 √ (1− β2)(1− γ) d∑ h=1 || ¯| g |1:T,h || − T∑ t=1 Varn ( ||V̂ 1/4it mit || ) With the optimal probabilities p̂it , we complete the proof of Lemma 6.\nFinally we can combine the results from Lemma 2 to 5 and obtain the following bound for the expected regret of the general double adaptive algorithms:\nCorollary 2.1\nR(DASGRAD) ≤ D 2 √ T\n2α(1− β1) Ep̂1:T\n[ ||v̂1/4iT || 2 ]\n+ α √ 1 + log(T )\n2(1− β1)2 √ (1− β2)(1− γ) d∑ h=1 || ¯| g |1:T,h || − T∑ t=1 Varn ( ||V̂ 1/4it mit || ) + αGd\n2α(1− β1)3 √ (1− β2)(1− γ) +\nD2\n2α(1− β1) T∑ t=1 √ tβT−t1 ||v̂ 1/4 t ||2" } ]
2019
null
SP:b05dc7c8eb4108fcf028d86de6dc4db5e70fed49
[ "The authors generalize Gaussian random vectors to a broader class of \"concentrated\" vectors which they use as their primary tool for analysis of the latent representations learned by GANs. They show that the spectral behavior (i.e. spectra and leading eigenspaces) of the Gram matrix computed over GAN representations is the same as those produced by a high dimensional Gaussian Mixture Models (GMMs). Furthermore, they show that the \"sufficient statistics\" (i.e. measures of information encoded in the latent representations) for GANs depend only on their first and second moments. Thus, for data that follows Gaussian mixture patterns, GANs and GMMs behave identically. The authors also show that common neural network operations (linear/convolutional layers, pooling, batch normalization, ReLU activations) are Lipschitz transformations, where the Lipschitz constants can be bounded using spectral normalization (this is borrowed from previous work). They also provide some empirical analysis to verify their theoretical findings.", "In this paper, the authors claim to establish a Lipschitz bound on neural networks, initialized randomly, and trained until convergence. They also claim to establish probabilistic concentration of the resolvent of the Gram matrix from a mixture of k distributions with varying means and covariances. The authors also present the spectrum and leading eigenspace of the Gram matrix for representations by CNNs of images generated by a GAN." ]
This paper shows that deep learning (DL) representations of data produced by generative adversarial nets (GANs) are random vectors which fall within the class of so-called concentrated random vectors. Further exploiting the fact that Gram matrices of the type G = XᵀX (with X = [x1, . . . ,xn] ∈ Rp×n and the xi independent concentrated random vectors from a mixture model) behave asymptotically (as n, p → ∞) as if the xi were drawn from a Gaussian mixture then suggests that DL representations of GAN-data can be fully described by their first two statistical moments for a wide range of standard classifiers. Our theoretical findings are validated by generating images with the BigGAN model and evaluating across different popular deep representation networks.
[ { "affiliations": [], "name": "GAUSSIAN MIXTURES" } ]
[ { "authors": [ "AK Suykens Johan" ], "title": "Least squares support vector machines", "venue": "World Scientific,", "year": 2002 }, { "authors": [ "Joseph Antognini", "Jascha Sohl-Dickstein" ], "title": "Pca of high dimensional random walks with comparison to neural network training", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Antreas Antoniou", "Amos Storkey", "Harrison Edwards" ], "title": "Data augmentation generative adversarial networks", "venue": "arXiv preprint arXiv:1711.04340,", "year": 2017 }, { "authors": [ "Florent Benaych-Georges", "Romain Couillet" ], "title": "Spectral analysis of the gram matrix of mixture models", "venue": "ESAIM: Probability and Statistics,", "year": 2016 }, { "authors": [ "Y. Bengio", "A. Courville", "P. Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2013 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale gan training for high fidelity natural image synthesis", "venue": "arXiv preprint arXiv:1809.11096,", "year": 2018 }, { "authors": [ "Olivier Chapelle", "Bernhard Scholkopf", "Alexander Zien" ], "title": "Semi-supervised learning (chapelle, o. et al., eds.; 2006)[book reviews", "venue": "IEEE Transactions on Neural Networks,", "year": 2009 }, { "authors": [ "Romain Couillet", "Florent Benaych-Georges" ], "title": "Kernel spectral clustering of large dimensional data", "venue": "Electronic Journal of Statistics,", "year": 2016 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In NIPS,", "year": 2014 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of wasserstein gans", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Walid Hachem", "Philippe Loubaton", "Jamal Najim" ], "title": "Deterministic equivalents for certain functionals of large random matrices", "venue": "The Annals of Applied Probability,", "year": 2007 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Michel Ledoux" ], "title": "The concentration of measure phenomenon", "venue": "Number 89. 
American Mathematical Soc.,", "year": 2005 }, { "authors": [ "Zhenyu Liao", "Romain Couillet" ], "title": "Random matrices meet machine learning: A large dimensional analysis of ls-svm", "venue": "In ICASSP,", "year": 2017 }, { "authors": [ "Cosme Louart", "Romain Couillet" ], "title": "Concentration of measure and large random matrices with an application to sample covariance matrices", "venue": null, "year": 2019 }, { "authors": [ "Xiaoyi Mai", "Romain Couillet" ], "title": "A random matrix analysis and improvement of semi-supervised learning for large dimensional data", "venue": "arXiv preprint arXiv:1711.03404,", "year": 2017 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional generative adversarial nets", "venue": "arXiv preprint arXiv:1411.1784,", "year": 2014 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "arXiv preprint arXiv:1802.05957,", "year": 2018 }, { "authors": [ "GrÊgoire Montavon", "Mikio L Braun", "Klaus-Robert" ], "title": "MÃller. Kernel analysis of deep networks", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Andrew Y Ng", "Michael I Jordan", "Yair Weiss" ], "title": "On spectral clustering: Analysis and an algorithm", "venue": "In Advances in neural information processing systems,", "year": 2002 }, { "authors": [ "Kevin Roth", "Aurelien Lucchi", "Sebastian Nowozin", "Thomas Hofmann" ], "title": "Stabilizing training of generative adversarial networks through regularization", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "David E Rumelhart", "Geoffrey E Hinton", "Ronald J Williams" ], "title": "Learning representations by back-propagating errors", "venue": "Cognitive modeling,", "year": 1988 }, { "authors": [ "Jack W Silverstein", "Sang-Il Choi" ], "title": "Analysis of the limiting spectral distribution of large dimensional random matrices", "venue": "Journal of Multivariate Analysis,", "year": 1995 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Chih-Kuan Yeh", "Joon Kim", "Ian En-Hsu Yen", "Pradeep K Ravikumar" ], "title": "Representer point selection for explaining deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "The performance of machine learning methods depends strongly on the choice of the data representation (or features) on which they are applied. This data representation should ideally contain relevant information about the learning task in order to achieve learning with simple models and small amount of samples. Deep neural networks (Rumelhart et al., 1988) have particularly shown impressive results by automatically learning representations from raw data (e.g., images). However, due to the complex structure of deep learning models, the characterization of their hidden representations is still an open problem (Bengio et al., 2009).\nSpecifically, quantifying what makes a given deep learning representation better than another is a fundamental question in the field of Representation Learning (Bengio et al., 2013). Relying on (Montavon et al., 2011) a data representation is said to be good when it is possible to build simple models on top of it that are accurate for the given learning problem. Montavon et al. (2011) have notably quantified the layer-wise evolution of the representation in deep networks by computing the principal components of the Gram matrix G` = {φ`(xi)ᵀφ`(xj)}ni,j=1 at each layer for n input data x1, . . . ,xn, where φ`(x) is the representation of x at layer ` of the given DL model, and the number of components controls the model simplicity. In their study, the impact of the representation at each layer is quantified through the prediction error of a linear predictor trained on the principal subspace of G`.\nPursuing on this idea, given a certain representation model x 7→ φ(x), we aim in this article at theoretically studying the large dimensional behavior, and in particular the spectral information (i.e., eigenvalues and dominant eigenvectors), of the corresponding Gram matrix G = {φ(xi)ᵀφ(xj)}ni,j=1 in order to determine the information encoded (i.e., the sufficient statistics) by the representation model on a set of real data x1, . . . ,xn. Indeed, standard classification and regression algorithms –along with the last layer of a neural network (Yeh et al., 2018)– retrieve the data information directly from functionals or the eigenspectrum of G1. To this end, though, one needs a statistical model for the representations given the distribution of the raw data (e.g., images) which is generally unknown. Yet, due to recent advances in generative models since the advent of Generative Adversarial Nets (Goodfellow et al., 2014), it is now possible to generate complex data\n1For instance, spectral clustering uses the dominant eigenvectors of G, while support vector machines use functionals (quadratic forms) involving G.\nstructures by applying successive Lipschitz operations to Gaussian random vectors. In particular, GAN-data are used in practice as substitutes of real data for data augmentation (Antoniou et al., 2017). On the other hand, the fundamental concentration of measure phenomenon (Ledoux, 2005) tells us that Lipschitz-ally transformed Gaussian vectors satisfy a concentration property. Precisely, defining the class of concentrated vectors x ∈ E through concentration inequalities of f(x), for any real Lipschitz observation f : E → R, implies that deep learning representations of GAN-data fall within this class of random vectors, since the mapping x 7→ φ(x) is Lipschitz. 
Thus, GAN-data are concentrated random vectors and thus an appropriate statistical model of realistic data.\nTargeting classification applications by assuming a mixture of concentrated random vectors model, this article studies the spectral behavior of Gram matrices G in the large n, p regime. Precisely, we show that these matrices have asymptotically (as n, p → ∞ with p/n → c < ∞) the same firstorder behavior as for a Gaussian Mixture Model (GMM). As a result, by generating images using the BigGAN model (Brock et al., 2018) and considering different commonly used deep representation models, we show that the spectral behavior of the Gram matrix computed on these representations is the same as on a GMM model with the same p-dimensional means and covariances. A surprising consequence is that, for GAN data, the aforementioned sufficient statistics to characterize the quality of a given representation network are only the first and second order statistics of the representations. This behavior is shown by simulations to extend beyond random GAN-data to real images from the Imagenet dataset (Deng et al., 2009).\nThe rest of the paper is organized as follows. In Section 2, we introduce the notion of concentrated vectors and their main properties. Our main theoretical results are then provided in Section 3. In Section 4 we present experimental results. Section 5 concludes the article.\nNotation: In the following, we use the notation from (Goodfellow et al., 2016). [n] denotes the set {1, . . . , n}. Given a vector x ∈ Rn, the `2-norm of x is given as ‖x‖2 = ∑n i=1 x 2 i . Given a p× n\nmatrix M , its Frobenius norm is defined as ‖M‖2F = ∑p i=1 ∑n j=1 M 2 ij and its spectral norm as ‖M‖ = sup‖x‖=1 ‖Mx‖. for the Hadamard product. An application F : E → F is said to be ‖F‖lip-Lipschitz, if ∀(x,y) ∈ E2, ‖F(x)−F(y)‖F ≤ ‖F‖lip · ‖x− y‖E and ‖F‖lip is finite." }, { "heading": "2 BASIC NOTIONS OF CONCENTRATED VECTORS", "text": "Being the central tool of our study, we start by introducing the notion of concentrated vectors. While advanced concentration notions have been recently developed in (Louart & Couillet, 2019) in order to specifically analyze the behavior of large dimensional sample covariance matrices, for simplicity, we restrict ourselves here to the sufficient so-called q-exponentially concentrated random vectors. Definition 2.1 (q-exponential concentration). Given a normed space (E, ‖ · ‖E) and a real q, a random vector x ∈ E is said to be q-exponentially concentrated if for any 1-Lipschitz real function f : E → R, there exists C ≥ 0 independent of dim(E) and σ > 0 such that for all t ≥ 0\nP {|f(x)− Ef(x)| > t} ≤ C e−(t/σ) q\n(1) which we denote x ∈ Eq(σ |E, ‖ · ‖E). We simply write x ∈ Eq(1 |E, ‖ · ‖E) if the tail parameter σ does not depend on dim(E), and x ∈ Eq(1) for x a scalar real random variable.\nTherefore, concentrated vectors are defined through the concentration of any 1-Lipschitz real scalar “observation”. One of the most important examples of concentrated vectors are standard Gaussian vectors. Precisely, we have the following proposition. See (Ledoux, 2005)) for more examples such as uniform and Gamma distribution. Proposition 2.2 (Gaussian vectors (Ledoux, 2005)). Let d ∈ N and x ∼ N (0, Id). Then x is a 2-exponentially concentrated vector independently on the dimension d, i.e. x ∈ E2(1 |Rd, ‖ · ‖).\nConcentrated vectors have the interesting property of being stable by application of Rd → Rp vector-Lipschitz transformations. 
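A short simulation illustrates this dimension-free behavior and previews the stability property stated next. The 1-Lipschitz observation f(x) = ‖x‖ and the spectrally normalized ReLU map below are our own illustrative choices, not constructions from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def fluctuations(d, n_trials=1000):
    """Std of the 1-Lipschitz observation f(x) = ||x|| for x ~ N(0, I_d),
    before and after the 1-Lipschitz map G(x) = ReLU(Wx) with ||W|| = 1."""
    X = rng.normal(size=(n_trials, d))
    W = rng.normal(size=(d, d)) / np.sqrt(d)
    W /= np.linalg.norm(W, 2)          # spectral normalization, so ||G||_lip <= 1
    G = np.maximum(X @ W.T, 0.0)       # ReLU(Wx) applied to every sample
    return np.linalg.norm(X, axis=1).std(), np.linalg.norm(G, axis=1).std()

for d in (10, 100, 1000, 2000):
    s_x, s_g = fluctuations(d)
    print(f"d = {d:>4}:  std ||x|| = {s_x:.3f},  std ||ReLU(Wx)|| = {s_g:.3f}")
```

Both fluctuations remain of order one as d grows (for ‖x‖ the standard deviation stays close to 1/√2 for every d), which is exactly the dimension-independence required by Definition 2.1.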
Indeed, Lipschitz-ally transformed concentrated vectors remain concentrated according to the following proposition. Proposition 2.3 (Lipschitz stability (Louart & Couillet, 2019)). Let x ∈ Eq(1 |E, ‖ · ‖E) and G : E → F a Lipschitz application with Lipschitz constant ‖G‖lip which may depend on dim(F ). Then the concentration property on x is transferred to G(x), precisely\nx ∈ Eq(1 |E, ‖ · ‖E) ⇒ G(x) ∈ Eq(‖G‖lip |F, ‖ · ‖F ). (2)\nNote importantly for the following that the Lipschitz constant of the transformation G must be controlled, in order to constrain the tail parameter of the obtained concentration.\nIn particular, we have the coming corollary to Proposition 2.3 of central importance in the following.\nCorollary 2.4. Let G1, . . . ,Gn : Rd → Rp a set of n Lipschitz applications with Lipschitz constants ‖Gi‖lip. Let G : Rd×n → Rp×n be defined for each X ∈ Rd×n as G(X) = [G1(X:,1), . . . ,Gn(X:,n)]. Then,\nZ ∈ Eq(1 |Rd×n, ‖ · ‖F ) ⇒ G(Z) ∈ Eq (\nsup i ‖Gi‖lip | Rp×n, ‖ · ‖F\n) . (3)\nProof. This is a consequence of Proposition 2.3 since the map G is supi ‖Gi‖lip-Lipschitz with respect to (w.r.t.) the Frobenius norm. Indeed, for X,H ∈ Rd×n : ‖G(X + H) − G(X)‖2F ≤∑n i=1 ‖Gi‖2lip · ‖H:,i‖2 ≤ supi ‖Gi‖2lip · ‖H‖2F ." }, { "heading": "3 MAIN RESULTS", "text": "" }, { "heading": "3.1 GAN DATA: AN EXAMPLE OF CONCENTRATED VECTORS", "text": "Concentrated random vectors are particularly interesting from a practical standpoint for real data modeling. In fact, unlike simple Gaussian vectors, the former do not suffer from the constraint of having independent entries which is quite a restrictive assumption when modeling real data such as images or their non-linear features (e.g., DL representations). The other modeling interest of concentrated vectors lies in their being already present in practice as alternatives to real data. Indeed, adversarial neural networks (GANs) have the ability nowadays to generate random realistic data (for instance realistic images) by applying successive Lipschitz operations to standard Gaussian vectors (Goodfellow et al., 2014).\nA GAN architecture involves two networks, a generator model which maps random Gaussian noise to new plausible synthetic data and a discriminator model which classifies real data as real (from the dataset) or fake (for the generated data). The discriminator is updated directly through a binary classification problem, whereas the generator is updated through the discriminator. As such, the two models are trained alternatively in an adversarial manner, where the generator seeks to better deceive the discriminator and the former seeks to better identify the fake data (Goodfellow et al., 2014).\nIn particular, once both models are trained (when they reach a Nash equilibrium), DL representations of GAN-data –and GAN-data themselves– are schematically constructed in practice as follows:\nReal Data ≈ GAN Data = FN ◦ · · · ◦ F1(z), where z ∼ N (0, Id), (4)\nwhere d stands for the input dimension of the generator model, N the number of layers, and the Fi’s either Fully Connected Layers, Convolutional Layers, Pooling Layers, Up-sampling Layers and Activation Functions, Residual Layers or Batch Normalizations. All these operations happen to be Lipschitz applications. Precisely,\n• Fully Connected Layers and Convolutional Layers: These are affine operations which can be expressed as\nFi(x) = Wix + bi, for Wi the weight matrix and bi the bias vector. 
Here the Lipschitz constant is the operator norm (the largest singular value) of the weight matrix Wi, that is ‖Fi‖lip = supu6=0 ‖Wiu‖2 ‖u‖2 .\n• Pooling Layers and Activation Functions: Most commonly used activation functions and pooling operations are\nReLU(x) = max(0,x), MaxPooling(x) = [max(xS1), . . . ,max(xSq )] ᵀ,\nwhere Si’s are patches (i.e., subsets of [dim(x)]). These are at most 1-Lipschitz operations with respect to the Frobenius norm. Specifically, the maximum absolute sub-gradient of the ReLU activation function is 1, thus the ReLU operation has a Lipschitz constant of 1. Similarly, we can show that the Lipschitz constant of MaxPooling layers is also 1.\n• Residual Connections: Residual layers act the following way\nFi(x) = x + F (1)i ◦ · · · ◦ F (`) i (x),\nwhere the F (j)i ’s are Fully Connected Layers or Convolutional Layers with Activation Functions, and which are Lipschitz operations. Thus Fi is a Lipschitz operation with Lipschitz constant bounded by 1 + ∏` j=1 ‖F (j) i ‖lip.\n• Batch Normalization (BN) Layers: They consist in statistically standardizing (Ioffe & Szegedy, 2015) the vectors of a small batch B = {xi}bi=1 ⊂ Rd as follows: for each xk ∈ B\nFi(xk) = diag ( a√ σ2B + ) (xk − µB1d) + b\nwhere µB = 1db ∑b k=1 ∑d i=1[xk]i, σ 2 B = 1 db ∑b k=1 ∑d i=1([xk]i − µB)2, a, b ∈ Rd are parameters to be learned and diag(v) transforms a vector v to a diagonal matrix with its diagonal entries being those of v. Thus BN is a Lipschitz transformation with Lipschitz constant ‖Fi‖lip = supi | ai√σ2B+ |.\nTherefore, as illustrated in Figure 1, since standard Gaussian vectors are concentrated vectors as mentioned in Proposition 2.2 and since the notion of concentrated vectors is stable by Lipschitz transformations thanks to Proposition 2.3, GAN-data (and their DL representations) are concentrated vectors by design given the construction in Equation (4). Moreover, in order to generate data belonging to a specific class, Conditional GANs have been introduced (Mirza & Osindero, 2014); once again data generated by these models are concentrated vectors as a consequence of Corollary 2.4. Indeed, a generator of a Conditional GAN model can be seen as a set of multiple generators where each generates data of a specific class conditionally on the class label (e.g., BigGAN model (Brock et al., 2018)).\nYet, in order to ensure that the resulting Lipschitz constant of the combination of the above operations does not scale with the network or data size, so to maintain good concentration behaviors, a careful control of the learned network parameters is needed. This control happens to be already considered in practice in order to ensure the stability of GANs during the learning phase, notably to generate realistic and high-resolution images (Roth et al., 2017; Brock et al., 2018). The control of the Lipschitz constant of representation networks is also needed in practice in order to make them robust against adversarial examples (Szegedy et al., 2013; Gulrajani et al., 2017). This control is particularly ensured through spectral normalization of the affine layers (Brock et al., 2018), such as Fully Connected Layers, Convolutional Layers and Batch Normalization. Indeed, spectral normalization (Miyato et al., 2018) consists in applying the operation W ← W /σ1(W ) to the affine layers at each backward iteration of the back-propagation algorithm, where σ1(W ) stands for the largest singular value of the weight matrix W . Brock et al. 
(2018), have notably observed that, without spectral constraints, a subset of the generator layers grow throughout their GAN training and explode at collapse. They thus suggested the following spectral normalization –which happens to be less restrictive than the standard spectral normalization W ← W /σ1(W ) (Miyato et al., 2018)– to the affine layers:\nW ←W − (σ1(W )− σ∗)u1(W )v1(W )ᵀ (5)\nwhere u1(W ) and v1(W ) denote respectively the left and right largest singular vectors of W , and σ∗ is an hyper-parameter fixed during training.\nTo get an insight about the influence of this operation and to ensure that it controls the Lipschitz constant of the generator, the following proposition provides the dynamics of a random walk in the space of parameters along with the spectral normalization in Equation (5). Indeed, since stochastic gradient descent (SGD) consists in estimating the gradient of the loss function on randomly selected batches of data, it can be assimilated to a random walk in the space of parameters (Antognini & Sohl-Dickstein, 2018). Proposition 3.1 (Lipschitz constant control). Let σ∗ > 0 and G be a neural network composed ofN affine layers, each one of input dimension di−1 and output dimension di for i ∈ [N ], with 1-Lipschitz activation functions. Assume that the weights of G at layer i+ 1 are initialized as U([− 1√\ndi , 1√ di ]),\nand consider the following dynamics with learning rate η:\nW ←W − ηE, with Ei,j ∼ N (0, 1) W ←W −max(0, σ1(W )− σ∗)u1(W )v1(W )ᵀ.\n(6)\nThen, ∀ε > 0, the Lipschitz constant of G is bounded at convergence with high probability as:\n‖G‖lip ≤ N∏ i=1 ( ε+ √ σ2∗ + η 2didi−1 ) . (7)\nProof. The proof is provided in Appendix B.\nProposition 3.1 shows that the Lipschitz constant of a neural network is controlled when trained with the spectral normalization in Equation (5). In particular, recalling the notations in Proposition 3.1, in the limit where di →∞ with didi−1 → γi ∈ (0,∞) for all i ∈ [N ] and choosing the learning rate η = O(d−10 ), the Lipschitz constant of G is of order O(1) if it has finitely many layers N and σ∗ is constant. Therefore, with this spectral normalization, it can be assumed that ‖G‖lip = O(1) when dimensions grow. Figure 2 depicts the behavior of the Lipschitz constant of a linear layer with and without spectral normalization in the setting of Proposition 3.1, which confirms the obtained bound." }, { "heading": "3.2 MIXTURE OF CONCENTRATED VECTORS", "text": "In this section, we assume data to be a mixture of concentrated random vectors with controlledO(1) Lipschitz constant (e.g., DL representations of GAN-data as we discussed in the previous section). Precisely, let x1, . . . ,xn be a set of mutually independent random vectors in Rp. We suppose that these vectors are distributed as one of k classes of distribution laws µ1, . . . , µk with distinct means {m`}k`=1 and “covariances” {C`}k`=1 defined receptively as\nm` = Exi∼µ` [xi], C` = Exi∼µ` [xix ᵀ i ]. (8)\nFor some q > 0, we consider a q-exponential concentration property on the laws µ`, in the sense that for any family of independent vectors y1, . . . ,ys sampled from µ`, [y1, . . . ,ys] ∈ Eq(1 |Rp×s, ‖ · ‖F ). Without loss of generality, we arrange the xi’s in a data matrix X = [x1, . . . ,xn] such that, for each ` ∈ [k], x1+∑`−1j=1 nj , . . . ,x∑`j=1 nj ∼ µ`, where n` stands for the number of xi’s sampled from µ`. In particular, we have the concentration of X as X ∈ Eq(1 |Rp×n, ‖ · ‖F ). 
(9) Such a data matrix X can be constructed through Lipschitz-ally transformed Gaussian vectors (q = 2), with controlled Lipschitz constant, thanks to Corollary 2.4. In particular, DL representations of GAN-data are constructed as such, as shown in Section 3.1. We further introduce the following notations that will be used subsequently.\nM = [m1, . . . ,mk] ∈ Rp×k, J = [j1, . . . , jk] ∈ Rn×k and Z = [z1, . . . ,zn] ∈ Rp×n, where j` ∈ Rn stands for the canonical vector selecting the xi’s of distribution µ`, defined by (j`)i = 1xi∼µ` , and the zi’s are the centered versions of the xi’s, i.e. zi = xi −m` for xi ∼ µ`." }, { "heading": "3.3 GRAM MATRICES OF CONCENTRATED VECTORS", "text": "Now we study the behavior of the Gram matrix G = 1pX ᵀX in the large n, p limit and under the model of the previous section. Indeed, G appears as a central component in many classification, regression and clustering methods. Precisely, a finer description of the behavior of G provides access to the internal functioning and performance evaluation of a wide range of machine learning methods such as Least Squares SVMs (AK et al., 2002), Semi-supervised Learning (Chapelle et al., 2009) and Spectral Clustering (Ng et al., 2002). Indeed, the performance evaluation of these methods has already been studied under GMM models in (Liao & Couillet, 2017; Mai & Couillet, 2017; Couillet & Benaych-Georges, 2016) through RMT. On the other hand, analyzing the spectral behavior of G for DL representations quantifies their quality –through its principal subspace (Montavon et al., 2011)– as we have discussed in the introduction. In particular, the Gram matrix decomposes as\nG = 1\np JMᵀMJᵀ +\n1 p ZᵀZ + 1 p (JMᵀZ + ZᵀMJᵀ). (10)\nIntuitively G decomposes as a low-rank informative matrix containing the class canonical vectors through J and a noise term represented by the other matrices and essentially ZᵀZ. Given the form of this decomposition, RMT predicts –through an analysis of the spectrum of G and under a GMM model (Benaych-Georges & Couillet, 2016)– the existence of a threshold ξ function of the ratio p/n and the data statistics for which the dominant eigenvectors of G contain information about the classes only when ‖MᵀM‖ ≥ ξ asymptotically (i.e., only when the means of the different classes are sufficiently distinct).\nIn order to characterize the spectral behavior (i.e., eigenvalues and leading eigenvectors) of G under the concentration assumption in Equation (9) on X , we will be interested in determining the spectral distribution L = 1n ∑n i=1 δλi of G, with λ1, . . . , λn the eigenvalues of G, where δx stands for the Dirac measure at point x. Essentially, to determine the limiting eigenvalue distribution as p, n→∞ and p/n→ c ∈ (0,∞), a conventional approach in RMT consists in determining an estimate of the Stieltjes transform (Silverstein & Choi, 1995) mL of L, which is defined for some z ∈ C \\Supp(L)\nmL(z) = ∫ λ dL(λ) λ− z = 1 n tr ( (G− zIn)−1 ) . (11)\nHence, quantifying the behavior of the resolvent of G defined as R(z) = (G + zIn)−1 determines the limiting measure of L throughmL(z). Furthermore, since R(z) and G share the same eigenvectors with associated eigenvalues 1λi−z , the projector matrix corresponding to the top m eigenvectors U = [u1, . . . 
,um] of G can be calculated through a Cauchy integral UUᵀ = 12πi ∮ γ R(−z)dz where γ is an oriented complex contour surrounding the top m eigenvalues of G.\nTo study the behavior of R(z), we look for a so-called deterministic equivalent (Hachem et al., 2007) R̃(z) for R(z), which is a deterministic matrix that satisfies for all A ∈ Rn×n and all u,v ∈ Rn of respectively bounded spectral and Eucildean norms, 1n tr(AR(z)) − 1 n tr(AR̃(z)) → 0 and uᵀ(R(z) − R̃(z))v → 0 almost surely as n → ∞. In the following, we present our main result which gives such a deterministic equivalent under the concentration assumption on X in Equation (9) and under the following assumptions.\nAssumption 3.2. As p→∞,\n1. p/n→ c ∈ (0,∞), 2. The number of classes k is bounded, 3. ‖m`‖ = O( √ p).\nTheorem 3.3 (Deterministic Equivalent for R(z)). Under the model described in Section 3.2 and Assumptions 3.2, we have R(z) ∈ Eq(p−1/2 |Rn×n, ‖ · ‖F ). Furthermore,∥∥∥ER(z)− R̃(z)∥∥∥ = O( √ log(p)\np\n) , R̃(z) = 1\nz diag\n{ In`\n1 + δ∗` (z) }k `=1 + 1 p z JΩzJ ᵀ (12)\nwith Ωz = MᵀQ̃(z)M diag { δ∗` (z)−1 δ∗` (z)+1 }k `=1 and Q̃(z) = ( 1 c k ∑k `=1 C` 1+δ∗` (z) + zIp )−1 ,\nwhere δ∗(z) = [δ∗1(z), . . . , δ ∗ k(z)] ᵀ is the unique fixed point of the system of equations\nδ`(z) = 1\np tr C` 1 c k k∑ j=1 Cj 1 + δj(z) + zIp −1 for each ` ∈ [k].\nSketch of proof. The first step of the proof is to show the concentration of R(z). This comes from the fact that the application X 7→ R(z) is 2z−3/2p−1/2-Lipschitz w.r.t. the Frobenius norm, thus we have by Proposition 2.3 that R(z) ∈ Eq(p−1/2 |Rn×n, ‖ · ‖F ). The second step consists in estimating ER(z) through a deterministic matrix R̃(z). Indeed, R(z) can be expressed as a function of Q(z) = (XXᵀ/p + zIp)−1 as R(z) = z−1(In − XᵀQ(z)X/p), and exploiting the result of (Louart & Couillet, 2019) which shows that EQ(z) can be estimated through Q̃(z), we obtain the estimator R̃(z) for ER(z). A more detailed proof is provided in Section A.3 of the Appendix.\nThis result allows specifically to (i) describe the limiting eigenvalues distribution of G, (ii) determine the spectral detectability threshold mentioned above, (iii) evaluate the asymptotic “content” of the leading eigenvectors of G and, much more fundamentally, (iv) infer the asymptotic performances of machine learning algorithms that are based on simple functionals of G (e.g., LS-SVM, spectral clustering etc.). Looking carefully at Theorem 3.3 we see that the spectral behavior of the Gram matrix G computed on concentrated vectors only depends on the first and second order statistics of the laws µ` (their means m` and “covariances” C`). This suggests the surprising result that G has the same behavior as when the data follow a GMM model with the same means and covariances. The asymptotic spectral behavior of G is therefore universal with respect to the data distribution laws which satisfy the aforementioned concentration properties (for instance DL representations of GAN-data). We illustrate this universality result in the next section by considering data as CNN representations of GAN generated images." }, { "heading": "4 APPLICATION TO CNN REPRESENTATIONS OF GAN-GENERATED IMAGES", "text": "In this section, we consider n = 1500 data x1, . . . ,xn ∈ Rp as CNN representations –across popular CNN architectures of different sizes p– of GAN-generated images using the generator of the Big-GAN model (Brock et al., 2018). We further use real images from the Imagenet dataset (Deng et al., 2009) for comparison. 
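Before detailing the comparison, here is a schematic Python sketch of the pipeline it relies on. This is not code from the paper: load_representations is a hypothetical helper returning an n × p matrix of deep features with class labels, and the surrogate resamples each class from a Gaussian matching its empirical mean and covariance (the two moments that, by Theorem 3.3, determine the spectral behavior of G).

```python
import numpy as np

def matched_gmm_sample(X, labels, rng):
    """Resample every class from a Gaussian with that class's empirical
    mean and covariance, i.e. a GMM matching the first two moments."""
    Z = np.empty_like(X)
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        mu = X[idx].mean(axis=0)
        cov = np.cov(X[idx], rowvar=False)  # rank-deficient if len(idx) < p; fine for sampling
        Z[idx] = rng.multivariate_normal(mu, cov, size=len(idx))
    return Z

def gram_eigs(X):
    """Eigenvalues of G = X^T X / p, with the samples stored as rows of X."""
    p = X.shape[1]
    return np.linalg.eigvalsh(X @ X.T / p)

rng = np.random.default_rng(0)
X, labels = load_representations()          # hypothetical: (n, p) features, (n,) labels
lam_data = gram_eigs(X)
lam_gmm = gram_eigs(matched_gmm_sample(X, labels, rng))
# Overlaying histograms of lam_data and lam_gmm, and comparing the leading
# eigenvectors of the two Gram matrices, reproduces the match of Figure 4.
```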
In particular, we empirically compare the spectrum of the Gram matrix of this data with the Gram matrix of a GMM model with the same means and covariances. We also consider the leading 2-dimensional eigenspace of the Gram matrix which contains clustering information as detailed in the previous section. Figure 3 depicts some images generated using the Big-GAN model (Top) and the corresponding real class images from the Imagenet dataset (Bottom). The Big-GAN model is visually able to generate highly realistic images which are by construction concentrated vectors, as discussed in Section 3.1.\nFigure 4 depicts the spectrum and leading 2D eigenspace of the Gram matrix computed on CNN representations of GAN generated and real images (in gray), and the corresponding GMM model with same first and second order statistics (in green). The Gram matrix is seen to follow the same spectral behavior for GAN-data as for the GMM model which is a natural consequence of the universality result of Theorem 3.3 with respect to the data distribution. Besides, and perhaps no longer surprisingly, we further observe that the spectral properties of G for real data (here CNN representations of real images) are conclusively matched by their Gaussian counterpart. This both theoretically and empirically confirms that the proposed random matrix framework is fully compliant with the theoretical analysis of real machine learning datasets." }, { "heading": "5 CONCLUSION", "text": "Leveraging on random matrix theory (RMT) and the concentration of measure phenomenon, we have shown through this paper that DL representations of GAN-data behave as Gaussian mixtures for linear classifiers, a fundamental universal property which is only valid in high-dimension of data. To the best of our knowledge, this result constitutes a new approach towards the theoretical understanding of complex objects such as DL representations, as well as the understanding of the behavior of more elaborate machine learning algorithms for complex data structures. In addition, the article explicitly demonstrated our ability, through RMT, to anticipate the behavior of a wide range of standard classifiers for data as complex as DL representations of the realistic and surprising images generated by GANs. This opens the way to a more systematic analysis and improvement of machine learning algorithms on real datasets by means of large dimensional statistics." }, { "heading": "A PROOF OF THEOREM 3.3", "text": "A.1 SETTING OF THE PROOF\nFor simplicity, we will only suppose the case k = 1 and we consider the following notations that will be used subsequently.\nx̄ = Exi, C = E[xixᵀi ], X0 = X − x̄1 ᵀ n, C0 = E[X0X ᵀ 0 /n].\nLet X−i = (x1, . . . ,xi−1, 0,xi, . . . ,xn)\nthe matrix X with a vector of zeros at its ith column.\nDenote the resolvents\nR =\n( XᵀX\np + zIn\n)−1 , Q = ( XXᵀ\np + zIp\n)−1 , Q−i = ( XXᵀ\np − xix\nᵀ i\np + zIp )−1 (13)\nAnd let\nQ̃ =\n( 1\nc\nC\n1 + δ + zIp\n)−1 , (14)\nwhere δ is the solution to the fixed point equation\nδ = 1\np tr\n( C ( 1\nc\nC\n1 + δ + zIp\n)−1) .\nA.2 BASIC TOOLS\nLemma A.1 ((Ledoux, 2005)). Let z ∈ Eq(1 |Rp, ‖ · ‖) and M ∈ Eq(1 |Rp×n, ‖ · ‖F ). Then, for some numerical constant C > 0\n• E ‖z‖ ≤ ‖Ez‖+ C√p, E ‖z‖∞ ≤ ‖Ez‖∞ + C √ log p. • E ‖M‖ ≤ ‖EM‖+ C √ p+ n, E ‖M‖F ≤ ‖EM‖F + C √ pn.\nLemma A.2. Denote Qx̄ = (x̄x̄ᵀ + zIp)−1, we have:\nQx̄x̄ = x̄\n‖x̄‖2 + z and ‖Q̃x̄‖, x̄Q̃x̄ = O(1).\nMoreover, if ‖x̄‖ ≥ √p, ‖Q̃x̄‖ = O(p−1/2).\nProof. 
Since zQx̄ = Ip −Qx̄x̄x̄ᵀ :\nzQx̄x̄ = x̄− ‖x̄‖2Qx̄x̄,\nand we recover the first identity of the Lemma.\nAnd since the matrix C0 is nonnegative symmetric, we have :\nQ̃x̄ =\n( 1\nc\nC0 + x̄x̄ ᵀ\n1 + δ + zIp\n)−1 x̄ ≤ c(1 + δ)x̄\n‖x̄‖2 + zc(1 + δ) .\nTherefore, x̄Q̃x̄ = c(1+δ)‖x̄‖ 2\n‖x̄‖2+zc(1+δ) = O(1) and:\n‖Q̃x̄‖ = c(1 + δ)‖x̄‖ ‖x̄‖2 + zc(1 + δ) ≤ ‖x̄‖ z = O(1) if ‖x̄‖ ≤ 1, c(1 + δ)\n‖x̄‖ = O(1) if ‖x̄‖ ≥ 1.\nProposition A.3. x̄ᵀE[Q]x̄ = x̄ᵀQ̃x̄ +O (√\nlog p p\n)\nProof. Let us bound:∣∣∣x̄ᵀQx̄− x̄ᵀQ̃x̄∣∣∣ ≤ c−1 1 + δ ∣∣∣∣E [x̄Qxixᵀi Q̃x̄(1pxᵀiQ−ixi − δ )] + 1 p E [ x̄ᵀQ−ixix ᵀ iQCQ̃x̄ ]∣∣∣∣ Now let us consider a supplementary random vector xn+1 following the same low as the xi’s and independent of X . We divide the set I = [n+1] into two sets I 1\n2 and I 2 2 of same cardinality (bn+12 c ≤\n#I 1 2 ,#I 2 2 ≤ dn+12 e), we note X 12 = (xi | i ∈ I 12 ), X 22 = (xi | i ∈ I 22 ) and we introduce the diagonal matrices ∆ = diag (\n1 px ᵀ iQ−ixi − δ | i ∈ I 12\n) , D = diag ( 1 + 1p+1x ᵀ iQxi | i ∈ I 22 ) .\nWe have the bound:∣∣∣∣E [x̄Qxixᵀi Q̃x̄(1pxᵀiQ−ixi − δ )]∣∣∣∣\n= ∣∣∣∣E [(1 + 1pxᵀn+1Qxn+1 ) xn+1Q+(n+1)xix ᵀ i Q̃x̄ ( 1 p xᵀiQ−ixi − δ )]∣∣∣∣ = 1\np2 ∣∣∣E [1ᵀDXᵀ2 2 Q+(n+1)X 1 2 ∆Xᵀ1 2 Q̃x̄ ]∣∣∣\n≤ √∣∣∣∣E [ 1p3 1ᵀDXᵀ22Q+(n+1)X 12 ∆2Xᵀ12Q+(n+1)X 22D1 ] E [ 1 p x̄ᵀQ̃X 1 2 Xᵀ1 2 Q̃x̄ ]∣∣∣∣ ≤ √√√√∣∣∣∣∣E [∥∥∥∥1pXᵀ22Q+(n+1)X 12 ∥∥∥∥2 ‖D‖2 ‖∆‖2 ] E [ x̄Q̃CQ̃x̄ ]∣∣∣∣∣ ≤ O (√ log p p ) ,\nthanks to Lemma A.1 and Lemma A.2 (the spectral norm of ∆ and D is just an infinity norm if we see them as random vectors of Rn). We can bound 1p ∣∣∣E [x̄ᵀQ−ixixᵀiQCQ̃x̄]∣∣∣ the same way to obtain the result of the proposition.\nProposition A.4. ‖E[xᵀiQ−iX−i]− x̄ᵀQ̃x̄1ᵀ 1+δ ‖ = O( √ log p)\nProof. Considering u ∈ Rn such that ‖u‖ = 1:∣∣∣∣∣E[xᵀiQ−iX−iu]− x̄ᵀQ̃x̄1ᵀu1 + δ ∣∣∣∣∣\n= ∣∣∣∣∣∣∣ n∑\nj=1 j 6=i\nujE\n[ xᵀiQ−i−j xj\n1 + 1px ᵀ jQ−j−i\nxj − x ᵀ i Q̃xj 1 + δ ]∣∣∣∣∣∣∣ ≤ √ n ∣∣∣∣∣E [ xᵀiQ−i−j xj\n1 + 1px ᵀ jQ−j−i\nxj −\nxᵀiQ−i−j xj\n1 + δ ]∣∣∣∣∣+ ∣∣∣∣ 11 + δE [xᵀiQ−i−j xj − xᵀi Q̃xj] ∣∣∣∣ (where i 6= j) ≤ √ n ∣∣∣∣E [x̄ᵀQxj (1pxᵀjQ−j−i xj − δ )]∣∣∣∣+√n ∣∣∣E [x̄ᵀQ−i−j x̄− x̄ᵀQ̃x̄]∣∣∣ ,\nwhere the first term is treated the same way as we did in the proof of Proposition A.3 and the second term is bounded thanks to Proposition A.3\nA.3 MAIN BODY OF THE PROOF\nProof of Theorem 3.3. Recall the definition of the resolvents R and Q in Equation (13). The first step of the proof is to show the concentration of R. This comes from the fact that the application Φ : X 7→ (XᵀX + zIn)−1 is 2z−3/2-Lipschitz w.r.t. the Frobenius norm. Indeed, by the matrix identity A−B = A(B−1 −A−1)B, we have\nΦ(X)− Φ(X + H) = Φ(X)(HᵀX + (X + H)ᵀH)Φ(X + H)\nAnd by the bounds ‖AB‖F ≤ ‖A‖ · ‖B‖F , ‖Φ(X)Xᵀ‖ ≤ z−1/2 and ‖Φ(X)‖ ≤ z−1, we have\n‖Φ(X + H)− Φ(X)‖F ≤ 2\nz3/2 ‖H‖F .\nTherefore, given X ∈ Eq(1 |Rp×n, ‖ · ‖F ) and since the application X 7→ R = Φ(X/ √ p) is 2z−3/2p−1/2-Lipschitz, we have by Proposition 2.3 that R ∈ Eq(p−1/2 |Rn×n, ‖ · ‖F ).\nThe second step consists in estimating ER(z) through a deterministic matrix R̃. Indeed, by the identity (MᵀM+zI)−1Mᵀ = Mᵀ(MMᵀ+zI)−1, the resolvent R can be expressed in function\nof Q as follows\nR = 1\nz\n( In − XᵀQX\np\n) , (15)\nthus a deterministic equivalent for R can therefore be obtained through a deterministic equivalent of the matrix XᵀQX . However, as demonstrated in Louart & Couillet (2019), the matrix Q has as a deterministic equivalent the matrix Q̃ defined in equation 14. In the following, we aim at deriving a deterministic equivalent for 1pX\nᵀQX in function of Q̃. 
Let $u$ and $v$ be two unit vectors in $\mathbb{R}^n$, and let us estimate\n$$\Delta \equiv \mathbb{E}\left[u^\top\left(\frac{X^\top Q X}{p} - \frac{X^\top \tilde{Q} X}{p}\right)v\right] = \frac{1}{p}\,\mathbb{E}\left[\frac{u^\top X^\top Q C \tilde{Q} X v}{1+\delta} - \frac{1}{p}u^\top X^\top Q X X^\top \tilde{Q} X v\right].$$\nWith the following matrix identities (to exploit the independence of the columns of $X$):\n$$Q = Q_{-i} - \frac{1}{p}Q_{-i}x_i x_i^\top Q, \qquad Qx_i = \frac{Q_{-i}x_i}{1 + \frac{1}{p}x_i^\top Q_{-i}x_i}, \qquad A - B = A(B^{-1} - A^{-1})B,$$\nand the decomposition $QXX^\top = \sum_{i=1}^n Qx_i x_i^\top$, we obtain:\n$$\Delta = \frac{1}{p^2}\,\mathbb{E}\left[\sum_{i=1}^n \frac{u^\top X^\top Q_{-i} C \tilde{Q} X v}{1+\delta} - \frac{u^\top X^\top Q_{-i} x_i x_i^\top \tilde{Q} X v}{1 + \frac{1}{p}x_i^\top Q_{-i}x_i} - \frac{1}{p}\,\frac{u^\top X^\top Q_{-i} x_i x_i^\top Q C \tilde{Q} X v}{1+\delta}\right]$$\n$$= \frac{1}{p^2}\sum_{i=1}^n \mathbb{E}\Bigg[\frac{u^\top X_{-i}^\top Q_{-i} C \tilde{Q} X_{-i} v}{1+\delta} - \frac{u^\top X_{-i}^\top Q_{-i} x_i x_i^\top \tilde{Q} X_{-i} v}{1 + \frac{1}{p}x_i^\top Q_{-i}x_i} + \frac{u_i\, x_i^\top Q_{-i} C \tilde{Q} X_{-i} v}{1+\delta} + \frac{v_i\, u^\top X_{-i}^\top Q_{-i} C \tilde{Q} x_i}{1+\delta} + u_i v_i\,\frac{x_i^\top Q_{-i} C \tilde{Q} x_i}{1+\delta}$$\n$$- \frac{u_i\, x_i^\top Q_{-i} x_i x_i^\top \tilde{Q} X_{-i} v}{1 + \frac{1}{p}x_i^\top Q_{-i}x_i} - \frac{v_i\, u^\top X_{-i}^\top Q_{-i} x_i x_i^\top \tilde{Q} x_i}{1 + \frac{1}{p}x_i^\top Q_{-i}x_i} - u_i v_i\,\frac{x_i^\top Q_{-i} x_i x_i^\top \tilde{Q} x_i}{1 + \frac{1}{p}x_i^\top Q_{-i}x_i} - \frac{1}{p}\,\frac{u^\top X^\top Q_{-i} x_i x_i^\top Q C \tilde{Q} X v}{1+\delta}\Bigg].$$\nWe can show with Hölder's inequality and the concentration bounds (mainly the fact that $\frac{1}{p}x_i^\top Q_{-i}x_i$ concentrates around $\delta$) developed in Louart & Couillet (2019) that most of the above quantities vanish asymptotically. As a toy example, we consider the following term:\n$$\left|\frac{1}{p^2}\sum_{i=1}^n \mathbb{E}\left[\frac{u^\top X_{-i}^\top Q_{-i} C \tilde{Q} X_{-i} v}{1+\delta} - \frac{u^\top X_{-i}^\top Q_{-i} x_i x_i^\top \tilde{Q} X_{-i} v}{1 + \frac{1}{p}x_i^\top Q_{-i}x_i}\right]\right| = \left|\frac{1}{p^2}\sum_{i=1}^n \mathbb{E}\left[u^\top X_{-i}^\top Q_{-i} x_i x_i^\top \tilde{Q} X_{-i} v\;\frac{\delta - \frac{1}{p}x_i^\top Q_{-i}x_i}{(1+\delta)\left(1 + \frac{1}{p}x_i^\top Q_{-i}x_i\right)}\right]\right|$$\n$$\le \left|\frac{1}{p^2}\sum_{i=1}^n \mathbb{E}\left[(u^\top X_{-i}^\top Q_{-i} x_i)(x_i^\top \tilde{Q} X_{-i} v)\left(\delta - \frac{1}{p}x_i^\top Q_{-i}x_i\right)\right]\right|$$\n$$\le \left|\frac{1}{p}\sum_{i=1}^n \left(\mathbb{E}\left[\left(\frac{1}{\sqrt{p}}u^\top X_{-i}^\top Q_{-i} x_i\right)^3\right]\mathbb{E}\left[\left(\frac{1}{\sqrt{p}}x_i^\top \tilde{Q} X_{-i} v\right)^3\right]\mathbb{E}\left[\left(\delta - \frac{1}{p}x_i^\top Q_{-i}x_i\right)^3\right]\right)^{\frac{1}{3}}\right| = O\left(\frac{1}{\sqrt{p}}\right).$$\nSimilarly, we can show that:\n$$\left|\frac{1}{p^2}\sum_{i=1}^n \mathbb{E}\left[\frac{u_i\, x_i^\top Q_{-i} C \tilde{Q} X_{-i} v}{1+\delta} + \frac{v_i\, u^\top X_{-i}^\top Q_{-i} C \tilde{Q} x_i}{1+\delta} + u_i v_i\,\frac{x_i^\top Q_{-i} C \tilde{Q} x_i}{1+\delta} - \frac{1}{p}\,\frac{u^\top X^\top Q_{-i} x_i x_i^\top Q C \tilde{Q} X v}{1+\delta}\right]\right| = O\left(\frac{1}{\sqrt{p}}\right).$$\nFinally, the remaining terms in $\Delta$ can be estimated as follows:\n$$\Delta = \frac{1}{p^2}\sum_{i=1}^n \mathbb{E}\left[-\frac{u_i\, x_i^\top Q_{-i} x_i x_i^\top \tilde{Q} X_{-i} v}{1 + \frac{1}{p}x_i^\top Q_{-i}x_i} - \frac{v_i\, u^\top X_{-i}^\top Q_{-i} x_i x_i^\top \tilde{Q} x_i}{1 + \frac{1}{p}x_i^\top Q x_i} - u_i v_i\,\frac{x_i^\top Q_{-i} x_i x_i^\top \tilde{Q} x_i}{1 + \frac{1}{p}x_i^\top Q_{-i}x_i}\right] + O\left(\frac{1}{\sqrt{p}}\right)$$\n$$= -\frac{2}{p}\,\frac{\delta\, u^\top \mathbf{1}\,\bar{x}^\top \tilde{Q}\bar{x}\,\mathbf{1}^\top v}{1+\delta} - \frac{\delta^2 u^\top v}{1+\delta} + O\left(\sqrt{\frac{\log p}{p}}\right),$$\nwhere the last equality is obtained through the following estimation:\n$$\frac{1}{p^2}\sum_{i=1}^n \mathbb{E}\left[\frac{v_i\, u^\top X_{-i}^\top Q_{-i} x_i x_i^\top \tilde{Q} x_i}{1 + \frac{1}{p}x_i^\top Q_{-i}x_i}\right] = \frac{1}{p}\sum_{i=1}^n \mathbb{E}\left[\frac{v_i\, u^\top X_{-i}^\top Q_{-i} x_i\left(\frac{1}{p}x_i^\top \tilde{Q} x_i(1+\delta) - \delta\left(1 + \frac{1}{p}x_i^\top \tilde{Q} x_i\right)\right)}{\left(1 + \frac{1}{p}x_i^\top Q_{-i}x_i\right)(1+\delta)}\right] + \frac{1}{p}\sum_{i=1}^n \frac{v_i\,\delta\,\mathbb{E}[u^\top X_{-i}^\top Q_{-i} x_i]}{1+\delta}.$$\nWith the following bound:\n$$\left|\frac{1}{p}x_i^\top \tilde{Q} x_i(1+\delta) - \delta\left(1 + \frac{1}{p}x_i^\top \tilde{Q} x_i\right)\right| = \left|\frac{1}{p}x_i^\top \tilde{Q} x_i(1+\delta) - \delta(1+\delta) + \delta(1+\delta) - \delta\left(1 + \frac{1}{p}x_i^\top \tilde{Q} x_i\right)\right| \le \left|\frac{1}{p}x_i^\top \tilde{Q} x_i - \delta\right|(1+2\delta),$$\nwe have again, with Hölder's inequality and Proposition A.4:\n$$\frac{1}{p^2}\sum_{i=1}^n \mathbb{E}\left[\frac{v_i\, u^\top X_{-i}^\top Q_{-i} x_i x_i^\top \tilde{Q} x_i}{1 + \frac{1}{p}x_i^\top Q x_i}\right] = \frac{1}{p}\sum_{i=1}^n \frac{v_i\,\delta\, u^\top \mathbf{1}\,\bar{x}^\top \tilde{Q}\bar{x}}{1+\delta} + O\left(\sqrt{\frac{\log p}{p}}\right).$$\nNow that we have estimated $\Delta$, it remains to estimate $\mathbb{E}[\frac{1}{p}X^\top \tilde{Q} X]$. Indeed, given two unit norm vectors $u, v \in \mathbb{R}^n$ we have:\n$$\mathbb{E}\left[\frac{1}{p}u^\top X^\top \tilde{Q} X v\right] = \frac{1}{p}\sum_{i,j=1}^n u_i v_j\,\mathbb{E}[x_i^\top \tilde{Q} x_j] = \frac{1}{p}\sum_{i=1}^n \sum_{\substack{j=1 \\ j \ne i}}^n u_i v_j\,\bar{x}^\top \tilde{Q}\bar{x} + \sum_{i=1}^n u_i v_i\,\delta$$\n$$= \frac{1}{p}\bar{x}^\top \tilde{Q}\bar{x}\, u^\top \mathbf{1}\mathbf{1}^\top v + \left(\delta - \frac{1}{p}\bar{x}^\top \tilde{Q}\bar{x}\right)u^\top v = \frac{1}{p}\bar{x}^\top \tilde{Q}\bar{x}\, u^\top M_1 v + \delta\, u^\top v + O\left(\frac{1}{p}\right),$$\nsince we have $\bar{x}^\top \tilde{Q}\bar{x} = O(1)$ by Lemma A.2; we introduced the matrix $M_1 = \mathbf{1}\mathbf{1}^\top$. Therefore we have the following estimation:\n$$\frac{1}{p}\mathbb{E}[X^\top Q X] = \frac{\delta}{1+\delta}I_n + \frac{1}{p}\,\frac{1-\delta}{1+\delta}\,\bar{x}^\top \tilde{Q}\bar{x}\, M_1 + O_{\|\cdot\|}\left(\sqrt{\frac{\log p}{p}}\right),$$\nwhere $A = B + O_{\|\cdot\|}(\alpha(p))$ means that $\|A - B\| = O(\alpha(p))$. Finally, since $R$ concentrates around its mean, we can then conclude:\n$$R = \frac{1}{z}\left(I_n - \frac{1}{p}X^\top Q X\right) = \frac{1}{z}\,\frac{1}{1+\delta}I_n + \frac{\delta - 1}{pz(\delta+1)}\,\bar{x}^\top \tilde{Q}\bar{x}\, M_1 + O_{\|\cdot\|}\left(\sqrt{\frac{\log p}{p}}\right)." }, { "heading": "B PROOF OF PROPOSITION 3.1", "text": "Proof. 
Since the Lipschitz constant of a composition of Lipschitz functions is bounded by the product of their Lipschitz constants, we consider the case $N = 1$ and a linear activation function. In this case, the Lipschitz constant corresponds to the largest singular value of the weight matrix. We use the following notations for the proof:\n$$\bar{W}_t = W_t - \eta E_t \quad \text{with} \quad [E_t]_{i,j} \sim \mathcal{N}(0, 1), \qquad W_{t+1} = \bar{W}_t - \max(0, \bar{\sigma}_{1,t} - \sigma_*)\,\bar{u}_{1,t}\bar{v}_{1,t}^\top,$$\nwhere $\bar{\sigma}_{1,t} = \sigma_1(\bar{W}_t)$, $\bar{u}_{1,t} = u_1(\bar{W}_t)$ and $\bar{v}_{1,t} = v_1(\bar{W}_t)$. The effect of spectral normalization is observed in the case where $\bar{\sigma}_{1,t} > \sigma_*$; otherwise the Lipschitz constant is already bounded by $\sigma_*$. We therefore have\n$$\|\bar{W}_t\|_F^2 \le \|W_t\|_F^2 + \eta^2 d_1 d_0 \qquad (16)$$\n$$\|W_{t+1}\|_F^2 = \|\bar{W}_t\|_F^2 + \sigma_*^2 - \bar{\sigma}_{1,t}^2 \qquad (17)$$\n• If $\|W_{t+1}\|_F \ge \|W_t\|_F$, we have by Equation (16) and Equation (17)\n$$\|\bar{W}_t\|_F^2 \le \|\bar{W}_t\|_F^2 + \sigma_*^2 - \bar{\sigma}_{1,t}^2 + \eta^2 d_1 d_0 \;\Rightarrow\; \|\bar{W}_t\| = \bar{\sigma}_{1,t} \le \sqrt{\sigma_*^2 + \eta^2 d_1 d_0} = \delta,$$\nand since $\|W_{t+1}\| \le \|\bar{W}_t\|$, we have $\|W_{t+1}\| \le \delta$.\n• Otherwise, if there exists $\tau$ such that $\|W_{\tau+1}\|_F < \|W_\tau\|_F$, then for all $\varepsilon > 0$ there exists an iteration $\tau' \ge \tau$ such that $\|W_{\tau'}\| \le \delta + \varepsilon$. Indeed, suppose otherwise and denote $\varepsilon_t = \|W_t\|^2 - \delta^2$, so that $\varepsilon_t > 0$ for all $t \ge \tau$. If for all $t \ge \tau$, $\|W_{t+1}\|_F \le \|W_t\|_F$, we have by Equation (16) and Equation (17)\n$$\|W_t\|_F^2 - \|W_{t+1}\|_F^2 \ge \|\bar{W}_t\|^2 - \delta^2 \ge \|W_{t+1}\|^2 - \delta^2 = \varepsilon_{t+1}.$$\nSumming the above expression from $\tau$ to $T - 1 \ge \tau$, we end up with\n$$\|W_\tau\|_F^2 - \|W_T\|_F^2 \ge \sum_{t=\tau}^{T-1}\varepsilon_t \;\Rightarrow\; 0 \le \|W_T\|_F^2 \le \|W_\tau\|_F^2 - \sum_{t=\tau}^{T-1}\varepsilon_t.$$\nTherefore, when $T \to \infty$, $\varepsilon_t$ has to tend to $0$; otherwise the right-hand side of the last inequality would tend to $-\infty$, which is absurd." } ]
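The empirical comparison described in Section 4 can be illustrated with a short script. The sketch below is not the authors' code: numpy is the only assumed dependency, the features are random stand-ins, and it simply builds the Gram matrix of a set of feature vectors and of a Gaussian surrogate with matched first- and second-order statistics, then compares their spectra and leading 2D eigenspaces.

```python
# Minimal sketch of the Section 4 experiment: compare the Gram-matrix spectrum of
# feature data against a Gaussian mixture surrogate with matched mean/covariance.
import numpy as np

def gaussian_surrogate(features):
    """Resample each class from N(mean, cov) matched to the given features."""
    rng = np.random.default_rng(0)
    out = []
    for F in features:                       # F has shape (n_c, p)
        mu = F.mean(axis=0)
        cov = np.cov(F, rowvar=False)
        out.append(rng.multivariate_normal(mu, cov, size=F.shape[0]))
    return out

def gram_spectrum(features):
    """Eigenvalues and leading 2D eigenspace of the n x n Gram matrix X X^T / p."""
    X = np.concatenate(features, axis=0)     # (n, p): rows are feature vectors
    n, p = X.shape
    G = X @ X.T / p
    vals, vecs = np.linalg.eigh(G)
    return vals, vecs[:, -2:]                # spectrum + top-2 eigenspace (clustering info)

# Usage with random stand-in features playing the role of two CNN-feature classes:
feats = [np.random.randn(200, 512) + m for m in (0.0, 0.5)]
real_vals, _ = gram_spectrum(feats)
gmm_vals, _ = gram_spectrum(gaussian_surrogate(feats))
print(real_vals[-5:], gmm_vals[-5:])         # leading eigenvalues should roughly agree
```

Under Theorem 3.3, for high-dimensional concentrated features the two spectra should agree up to vanishing fluctuations, which is what Figure 4 reports for actual CNN representations.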
2019
null
SP:b16e4a52ca4ae6105ae7506b5446d31901b71d3f
[ "This paper tackles the problem of learning to label individual timesteps of sequential data, when given labels only for the sequence as a whole. The authors take an approach derived from the multiple-instance learning (MIL) literature that involves pooling the per-timestep predictions into a sequence-level prediction and to learn the per-timestep predictions without having explicit labels. They evaluate several pooling techniques and conclude that the log-sum-exp pooling approach is superior. The learned segmentations are used to train policies for multiple control skills, and these are used to solve hierarchical control tasks where the correct skill sequence is known.", "The paper presents a weakly supervised method for segmentation of trajectories into sub-skills inspired by multi-instance learning (MIL) in image classification by Andrews et al. (2002). This is done via training a classifier to label each observation per time-step with the probability of skills corresponding to that observation. These predictions are then accumulated throughout the trajectory to compute the probability of the skill in that trajectory. There is only a trajectory level supervision provided which specifies which skills are present with no specification of the order in which they appear. They empirically show that their model can achieve decent skill level classification scores on multiple environments provided that there is a large variety of demonstrations provided." ]
Learning useful and reusable skills, or sub-task primitives, is a long-standing problem in sensorimotor control. This is challenging because it is hard to define what constitutes a useful skill. Instead of direct manual supervision, which is tedious and prone to bias, our goal in this work is to extract reusable skills from a collection of human demonstrations collected directly for several end-tasks. We propose a weakly-supervised approach for trajectory segmentation following the classic work on multiple instance learning. Our approach is end-to-end trainable, works directly from high-dimensional input (e.g., images), and only requires knowledge of which skill primitives are present at training time, without any need for segmentation or ordering of primitives. We evaluate our approach via rigorous experimentation across four environments, ranging from simulation to real-world robots, procedurally generated to human-collected demonstrations, and discrete to continuous action spaces. Finally, we leverage the generated skill segmentation to demonstrate preliminary evidence of zero-shot transfer to new combinations of skills. Result videos at https://sites.google.com/view/trajectory-segmentation/.
[]
[ { "authors": [ "Jean-Baptiste Alayrac", "Piotr Bojanowski", "Nishant Agrawal", "Josef Sivic", "Ivan Laptev", "Simon Lacoste-Julien" ], "title": "Learning from narrated instruction videos", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "K. Ali", "K. Saenko" ], "title": "Confidence-rated multiple instance boosting for object detection", "venue": "In CVPR,", "year": 2014 }, { "authors": [ "Jacob Andreas", "Dan Klein", "Sergey Levine" ], "title": "Modular multitask reinforcement learning with policy sketches", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Stuart Andrews", "Ioannis Tsochantaridis", "Thomas Hofmann" ], "title": "Support vector machines for multipleinstance learning", "venue": "In NIPS,", "year": 2002 }, { "authors": [ "Marcin Andrychowicz", "Filip Wolski", "Alex Ray", "Jonas Schneider", "Rachel Fong", "Peter Welinder", "Bob McGrew", "Josh Tobin", "Pieter Abbeel", "Wojciech Zaremba" ], "title": "Hindsight experience replay", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Brenna D Argall", "Sonia Chernova", "Manuela Veloso", "Brett Browning" ], "title": "A survey of robot learning from demonstration", "venue": "Robotics and autonomous systems,", "year": 2009 }, { "authors": [ "Pierre-Luc Bacon", "Jean Harb", "Doina Precup" ], "title": "The option-critic architecture", "venue": "In Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Peter Dayan", "Geoffrey E Hinton" ], "title": "Feudal reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 1993 }, { "authors": [ "Thomas G Dietterich", "Richard H Lathrop", "Tomás Lozano-Pérez" ], "title": "Solving the multiple instance problem with axis-parallel rectangles", "venue": "Artificial intelligence,", "year": 1997 }, { "authors": [ "Yan Duan", "Marcin Andrychowicz", "Bradly Stadie", "OpenAI Jonathan Ho", "Jonas Schneider", "Ilya Sutskever", "Pieter Abbeel", "Wojciech Zaremba" ], "title": "One-shot imitation learning", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Benjamin Eysenbach", "Abhishek Gupta", "Julian Ibarz", "Sergey Levine" ], "title": "Diversity is all you need: Learning skills without a reward function", "venue": "arXiv preprint,", "year": 2018 }, { "authors": [ "Pedro F Felzenszwalb", "Ross B Girshick", "David McAllester", "Deva Ramanan" ], "title": "Object detection with discriminatively trained part-based models", "venue": "IEEE Tran. 
PAMI,", "year": 2010 }, { "authors": [ "Alex Graves", "Santiago Fernández", "Faustino Gomez", "Jürgen Schmidhuber" ], "title": "Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks", "venue": "In Proceedings of the 23rd international conference on Machine learning,", "year": 2006 }, { "authors": [ "David Heckerman" ], "title": "A tractable inference algorithm for diagnosing multiple diseases", "venue": "arXiv preprint arXiv:1304.1511,", "year": 2013 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Ahmed Hussein", "Mohamed Medhat Gaber", "Eyad Elyan", "Chrisina Jayne" ], "title": "Imitation learning: A survey of learning methods", "venue": "ACM Computing Surveys (CSUR),", "year": 2017 }, { "authors": [ "Thomas Kipf", "Yujia Li", "Hanjun Dai", "Vinicius Zambaldi", "Alvaro Sanchez-Gonzalez", "Edward Grefenstette", "Pushmeet Kohli", "Peter Battaglia" ], "title": "Compile: Compositional imitation learning and execution", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Ajay Mandlekar", "Yuke Zhu", "Animesh Garg", "Jonathan Booher", "Max Spero", "Albert Tung", "Julian Gao", "John Emmons", "Anchit Gupta", "Emre Orbay", "Silvio Savarese", "Li Fei-Fei" ], "title": "Roboturk: A crowdsourcing platform for robotic skill learning through imitation", "venue": "In Conference on Robot Learning,", "year": 2018 }, { "authors": [ "Iftekhar Naim", "Young Chol Song", "Qiguang Liu", "Henry Kautz", "Jiebo Luo", "Daniel Gildea" ], "title": "Unsupervised alignment of natural language instructions with video segments", "venue": "In Twenty-Eighth AAAI Conference on Artificial Intelligence,", "year": 2014 }, { "authors": [ "Ashvin V Nair", "Vitchyr Pong", "Murtaza Dalal", "Shikhar Bahl", "Steven Lin", "Sergey Levine" ], "title": "Visual reinforcement learning with imagined goals", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Scott Niekum", "Sarah Osentoski", "George Konidaris", "Andrew G Barto" ], "title": "Learning and generalization of complex tasks from unstructured demonstrations", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Junhyuk Oh", "Satinder Singh", "Honglak Lee", "Pushmeet Kohli" ], "title": "Zero-shot task generalization with multi-task deep reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Deepak Pathak", "Philipp Krähenbühl", "Trevor Darrell" ], "title": "Constrained convolutional neural networks for weakly supervised segmentation", "venue": "In ICCV, 2015a", "year": 2015 }, { "authors": [ "Deepak Pathak", "Evan Shelhamer", "Jonathan Long", "Trevor Darrell" ], "title": "Fully convolutional multi-class multiple instance learning", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Deepak Pathak", "Parsa Mahmoudieh", "Guanghao Luo", "Pulkit Agrawal", "Dian Chen", "Yide Shentu", "Evan Shelhamer", "Jitendra Malik", "Alexei A. 
Efros", "Trevor Darrell" ], "title": "Zero-shot visual imitation", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Pedro O Pinheiro", "Ronan Collobert" ], "title": "Weakly supervised semantic segmentation with convolutional networks", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Alexander Richard", "Hilde Kuehne", "Juergen Gall" ], "title": "Weakly supervised action learning with rnn based fine-to-coarse modeling", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Tom Schaul", "Daniel Horgan", "Karol Gregor", "David Silver" ], "title": "Universal value function approximators", "venue": "In ICML,", "year": 2015 }, { "authors": [ "Pratyusha Sharma", "Lekha Mohan", "Lerrel Pinto", "Abhinav Gupta" ], "title": "Multiple interactions made easy (mime): Large scale demonstrations data for imitation", "venue": "arXiv preprint arXiv:1810.07121,", "year": 2018 }, { "authors": [ "Kyriacos Shiarlis", "Markus Wulfmeier", "Sasha Salter", "Shimon Whiteson", "Ingmar Posner. Taco" ], "title": "Learning task decomposition via temporal alignment for control", "venue": "arXiv preprint arXiv:1803.01840,", "year": 2018 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction", "venue": "MIT press Cambridge,", "year": 1998 }, { "authors": [ "Richard S Sutton", "Doina Precup", "Satinder Singh" ], "title": "Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning", "venue": "Artificial intelligence,", "year": 1999 }, { "authors": [ "Alexander Sasha Vezhnevets", "Simon Osindero", "Tom Schaul", "Nicolas Heess", "Max Jaderberg", "David Silver", "Koray Kavukcuoglu" ], "title": "Feudal networks for hierarchical reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Cha Zhang", "John C Platt", "Paul A Viola" ], "title": "Multiple instance boosting for object detection", "venue": "In NIPS,", "year": 2005 }, { "authors": [ "Dimitri Zhukov", "Jean-Baptiste Alayrac", "Ramazan Gokberk Cinbis", "David Fouhey", "Ivan Laptev", "Josef Sivic" ], "title": "Cross-task weakly supervised learning from instructional videos", "venue": "arXiv preprint arXiv:1903.08225,", "year": 1903 } ]
[ { "heading": null, "text": "Learning useful and reusable skill, or sub-task primitives, is a long-standing problem in sensorimotor control. This is challenging because it’s hard to define what constitutes a useful skill. Instead of direct manual supervision which is tedious and prone to bias, in this work, our goal is to extract reusable skills from a collection of human demonstrations collected directly for several end-tasks. We propose a weakly-supervised approach for trajectory segmentation following the classic work on multiple instance learning. Our approach is end-to-end trainable, works directly from high-dimensional input (e.g., images) and only requires the knowledge of what skill primitives are present at training, without any need of segmentation or ordering of primitives. We evaluate our approach via rigorous experimentation across four environments ranging from simulation to real world robots, procedurally generated to human collected demonstrations and discrete to continuous action space. Finally, we leverage the generated skill segmentation to demonstrate preliminary evidence of zero-shot transfer to new combinations of skills. Result videos at https: //sites.google.com/view/trajectory-segmentation/." }, { "heading": "1 INTRODUCTION", "text": "Humans have an uncanny ability to generalize from one task to another using either few, or at times, no new examples. This wouldn’t be possible if they were to learn each new task from scratch. Humans rather extract reusable skills from already learned tasks and compose them to generalize to new tasks seamlessly. However, learning such repeatable skills has been a long standing challenge in sensorimotor control, partly because it is hard to define what constitutes a useful skill in itself.\nOne way to layout the scope of a skill is either by designing a corresponding reward function, or collecting expert demonstrations. For instance, consider a skill of reaching for an object. One can easily learn a policy for this skill by either using reinforcement learning with l2 distance as reward (Sutton & Barto, 1998), or by imitation learning from kinesthetic demonstrations (Argall et al., 2009; Hussein et al., 2017). However, neither of these approaches provide a natural form of supervision because the way this skill is performed in isolation can be drastically different from the way it could be used as part of some end task. For instance, reaching for a cup for pushing is very different from the way one would reach for a cup to pick it up for pouring. Therefore, defining a skill directly in a supervised manner could easily lead to biased set of examples.\nA promising alternative is to learn skills that are already embedded in some useful end tasks. Previous works have explored this in the context of an agent’s own exploration (Eysenbach et al., 2018; Nair et al., 2018; Pathak et al., 2018), where, the agent learns goal conditioned skill policies using data collected during its exploration phase. These skills are then used to plan for novel tasks at inference. However, exploration in itself is an open research problem, and hence, such approaches have difficulty in scaling to complex skills.\nIn this work, we follow an alternative paradigm where we extract skills from a collection of human demonstrations gathered to perform different end tasks. A straightforward way to extract reusable skills would be to get an expert to label each time-step of demonstrations with the corresponding skill label. 
However, this per-step segmentation supervision is tedious for the expert and too expensive to scale. Moreover, such a labeling would be biased towards what the expert thinks is the right segmentation rather than towards a segmentation that helps learn the skills better. This leads to the question: is it possible to use the expert knowledge only to know the types of skills present in a demonstration, and figure out the segmentation from the data itself?\nInspired by the classic work in multiple instance learning (MIL) (Andrews et al., 2002), we propose a weakly-supervised approach for segmenting human demonstration trajectories into primitive, repeatable skills. Our approach assumes access to only trajectory-level labels, i.e., which primitive skills, aka sub-tasks, are present in the demonstration. The key insight is to learn a primitive-skill classifier conditioned on the input sensory data (e.g., demonstration images), and to incorporate a per-time-step reasoning structure in this classifier. An overview of our approach is shown in Figure 1. Our model generates per time-step primitive skill label predictions which are then accumulated via a differentiable function to generate trajectory-level predictions. In contrast to classic MIL, where only the most confident prediction across time-steps is trained, our full model is trained end-to-end using a trajectory-level multi-class loss directly from raw sensory images.\nIf we are training with only trajectory-level supervision, why should the per-step predictions of our segmentation model converge to meaningful skill primitives? Data comes to the rescue! Since our model trains across a variety of demonstrations, it will have seen plenty of demonstration trajectories that contain a certain skill primitive (positives) and plenty that do not (negatives). The classification loss forces the segmentation model to focus on discriminative cues that are common across all positives and absent from negatives. These discriminative cues, corresponding to each skill, encourage the per time-step predictions to gradually correspond to the correct ground-truth time-step labels.\nWe evaluate our approach in four different environments: (a) As a proof of concept, we start with a simple 2D navigation task in a grid-world setup where the demonstrations are programmatically generated. (b) We then discuss results in a robotic setup with continuous control actions, in particular a Jaco robotic arm performing procedurally generated button-pushing demonstrations on a touch pad. (c) We test our approach in a robotic setup with actual human demonstrations collected on the RoboSuite benchmark. (d) Finally, we evaluate on a real robot dataset with actual human demonstrations collected kinesthetically. Across all these environments, our approach outperforms the other variants of MIL and achieves decent segmentation accuracy. We then show zero-shot generalization to tasks containing novel permutations of skills in the Jaco environment." }, { "heading": "2 METHOD: SEGMENTING DEMONSTRATIONS INTO SKILL PRIMITIVES", "text": "Given a collection of human demonstration trajectories, our goal is to learn a labeling of skill primitives at each time-step of the sequence, i.e., per time-step skill segmentation. Let $X$ be a human demonstration trajectory denoted by $X = \{x_1, x_2, x_3, \ldots, x_T\}$, where $T$ is the length of the demonstration and $x_t$ denotes a tuple of the observation (a raw sensory image in our case) at time $t$ and the action taken from it. 
Note that the action data is optional for the purpose of skill segmentation, but can be useful post-segmentation for learning skill policies via imitation. Let $Y = \{y_1, y_2, y_3, \ldots, y_T\}$ be the latent ground-truth labeling of skill primitives in the sequence. Each label $y_t$ belongs to one of the $k$ labels from the set of all skill classes $C = \{1, \ldots, k\}$, i.e., $y_t \in C$. These per time-step labels are not only tedious for the expert to annotate, but also difficult to scale. In this work, we do not assume access to $y_t$ during training, and we learn the per time-step segmentation in a weakly-supervised fashion using only a trajectory-level 1-bit label during training, i.e., whether a skill class is present in the trajectory or not. After training, our model is able to directly segment demonstrations at inference, without requiring labels of any kind.\nThe marginal probability of a skill primitive at each time-step of the demonstration can be written as $P(y_t \mid \theta, \{x_t, x_{t-1}, \ldots, x_1\})$, where $\theta$ is the parameter vector of the segmentation model, represented by a neural network in our case. If we had access to the true sequence labels $Y$, the network parameters $\theta$ could easily be learned via maximum log-likelihood by representing the probability as follows:\n$$P(y_t \mid \theta, \{x_t, x_{t-1}, \ldots, x_1\}) = \frac{1}{Z_t}\exp\big(f(y_t;\, \theta, \{x_t, x_{t-1}, \ldots, x_1\})\big) \qquad (1)$$\nwhere $Z_t$ is the partition function at $t$, defined as $Z_t = \sum_{k \in C}\exp\big(f(k;\, \theta, \{x_t, x_{t-1}, \ldots, x_1\})\big)$. The output of the function $f$ corresponds to the logit score generated by the neural network. In order to model temporal dependency across observation time-steps $x_t$, we represent $f(\cdot)$ via a recurrent neural network, in particular an LSTM (Hochreiter & Schmidhuber, 1997)." }, { "heading": "2.1 WEAKLY-SUPERVISED TRAJECTORY SEGMENTATION", "text": "We are given a dataset of demonstration trajectories during training, $D = \{X_1, \ldots, X_n\}$, where $n$ is the total number of demonstrations available for training. Each demonstration trajectory is weakly labelled with the skill primitives contained within it. Neither do we have access to which time-steps correspond to which skill primitive, nor to the permutation in which the skills are executed. Instead, we are only given the set of skill primitive labels $C_X \subseteq C$ present in the demonstration trajectory $X$.\nAlthough our supervision is only at the trajectory level, we do not directly predict the output labeling $C_X$ from the input demonstration $X$. Instead, we instill the structure of per-step prediction in our weakly-supervised segmentation model by first computing the per-step classification score $f(y_t;\, \theta, \{x_t, x_{t-1}, \ldots, x_1\})$ and then accumulating it across all time-steps to compute the probability of a class in the whole trajectory. This weakly-supervised setup is captured by the classical paradigm of multiple instance learning (MIL) (Andrews et al., 2002). At inference, we use this per time-step score to compute the probability of skill primitives at each time $t$ as described in Equation (1). There are multiple ways one could accumulate these per time-step scores, discussed as follows." }, { "heading": "2.2 ACCUMULATION OF TIME-STEP PREDICTIONS", "text": "Intuitively, we would like an estimator that generates an aggregated score for a class depending on how much each time-step votes for that class. We would ideally like the highest score value across time-steps to contribute most to the decision of whether a class label $\hat{Y}_X$ is present in the trajectory $X$ or not. 
One simple way to achieve that is to employ the element-wise max operator, which is also the de facto approach in MIL to accumulate element-level scores. However, this would amount to passing gradients only to the most confident time-step and would completely eradicate the role of the other time-steps. This is especially problematic in the case of sequential trajectories, because no skill primitive is only 1 time-step long. Hence, instead of the max, we use a soft approximation of it that can take into account the contribution of all time-steps. In particular, we use the log-sum-exp operator. Given the logit score $f(y_t;\, \theta, \{x_t, x_{t-1}, \ldots, x_1\})$ at each time-step, the trajectory-level logit score $g$ for class $c \in C$ is computed as follows:\n$$g(c;\, \theta, X) = \log\left(\sum_{t=1}^{T}\exp\big(f(y_t = c;\, \theta, \{x_t, x_{t-1}, \ldots, x_1\})\big)\right) \qquad (2)$$\nWe perform this operation for all $c \in C$ and use a softmax over $g(c;\, \theta, X)$ to compute the trajectory-level probability distribution $Q(c \mid X, \theta)$. Finally, the parameters $\theta$ are optimized to maximize $Q(c \mid X, \theta)$ for each class $c$ with respect to the ground-truth trajectory-level tags $C_X$. Note that this optimization is fully differentiable through Equation 2, and hence can be performed via stochastic gradient descent in an end-to-end fashion. Pinheiro & Collobert (2015) have also shown the effectiveness of a temperature-based variant of the log-sum-exp operation for semantic segmentation in images.\nHowever, these per-step scores $f$ would be almost uniformly random at the beginning of the training process due to the absence of per-step supervision. Since we are training with only trajectory-level supervision, why should these per-step predictions ever converge to a meaningful skill segmentation? It turns out they do, because we are learning across a large variety of demonstration examples. Hence, for each skill primitive we will have seen plenty of positive as well as negative trajectories. The loss suppresses the negative classes and encourages the positive ones; hence our segmentation model is forced to focus on discriminative cues that are common exclusively to the trajectories containing a given skill primitive and distinct from the cues that distinguish other skills. Since our trajectory-level segmentation is based on a deterministic transformation of the per-step predictions, each per-step score is then forced to focus on those discriminative cues. This discriminative nature encourages the per time-step predictions to slowly drift towards the true latent ground-truth segmentation, which is not available directly." }, { "heading": "3 IMPLEMENTATION DETAILS, SETUP AND BASELINES", "text": "Training and Evaluation Details Our segmentation model consists of a convolutional neural network encoder for each image of a trajectory and a one-layer fully-connected encoder of the action into a 32-dimensional feature, which is concatenated to the image feature before being fed into an LSTM with 100 hidden units. We train with a batch size of 64 for the Dial, RoboSuite, and MIME environments and a batch size of 128 for the Grid-world environment. All models were trained with Adam with a learning rate of 1e-4. For training, we use 50000, 2000, 1000, and 1600 trajectories for the 2D Navigation, Dial, RoboSuite, and MIME environments respectively. We evaluate our method on the training set, a validation set that consists of the same number of skill primitives (sub-tasks) per trajectory as in training, and a test set that consists of more skill primitives per trajectory than seen in training. 
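To make the accumulation of Eq. (2) and the trajectory-level objective of Section 2.2 concrete, the following is a minimal sketch (not the authors' released code); PyTorch is assumed, all tensor shapes and the example tags are illustrative, and `pool="max"` reproduces the classic MIL baseline for comparison.

```python
# Minimal sketch of log-sum-exp accumulation (Eq. 2) and the trajectory-level loss.
import torch

def trajectory_logits(step_logits, pool="lse"):
    """step_logits: (T, K) per time-step logits f(y_t = c) over K skill classes.
    Returns (K,) trajectory-level logits g(c; theta, X)."""
    if pool == "lse":
        return torch.logsumexp(step_logits, dim=0)   # soft max over time, Eq. (2)
    return step_logits.max(dim=0).values             # classic MIL: hardest step only

T, K = 100, 10
f = torch.randn(T, K, requires_grad=True)            # stand-in for per-step LSTM outputs
g = trajectory_logits(f)
log_Q = torch.log_softmax(g, dim=0)                  # trajectory distribution Q(c | X)
present = torch.tensor([2, 7])                       # illustrative 1-bit trajectory tags C_X
loss = -log_Q[present].sum()                         # maximize Q(c | X) for present skills
loss.backward()                                      # log-sum-exp routes gradient to every step
```

Unlike the max pooling used in classic MIL, the backward pass here distributes gradient across all time-steps in proportion to their softmax weight, which matches the motivation given above.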
The segmentation quality is measured by the classification accuracy of the learned model averaged across all time-steps. The time-step ground truth is only used for evaluation and not for training.\nBaselines We compare our approach to different formulations of weakly-supervised classification proposed earlier. None of these methods has been applied to temporal trajectory segmentation before; in our work, we adapt them for sub-task segmentation. In particular, we compare to: (a) MIL (Andrews et al., 2002): In this baseline, we train the deep segmentation model using the classic MIL formulation as proposed by Andrews et al. (2002). We penalize the most confident time-step in the output corresponding to each sub-class to correctly predict the trajectory-level classes. (b) FCN-MIL (Pathak et al., 2015b): This approach is an adaptation of MIL for deep networks. The MIL objective is treated as the loss function for training the segmentation network, but instead of training only the most confident output, we train a neighborhood of k elements around the most confident time-step, where k is a hyper-parameter chosen on the validation set. We found k=3, i.e., one extra time-step on each side of the argmax, to work best across all datasets. (c) CCNN (Pathak et al., 2015a): This is an alternative approach to tackle the weakly-supervised\nsetup. Instead of training the most confident output, a per time-step pseudo ground-truth is generated in a way that ensures it is the closest pseudo ground-truth that contains only the classes specified by the trajectory-level tags. One can additionally put threshold constraints on the lower bound of time-steps devoted to the present classes in the pseudo ground truth. The segmentation is then pushed towards these per time-step pseudo labels, and a new pseudo ground-truth is generated after each iteration. This approach provides per time-step supervision using only trajectory-level tags. The CCNN approach was originally applied to per-pixel segmentation of images in the computer vision literature (Pathak et al., 2015a), and we adapt it for temporal trajectories. (d) Random: As a sanity check, we randomly pick a sub-task class at every time-step with uniform probability. (e) Random-Cls: This is a random baseline with privileged class information even at test time. In this case, we sample uniformly at random only from the set of classes that are already present in the trajectory. This is not a fair comparison but provides an estimate of the difficulty of the problem. Note that we use the same network architecture and training details across all baseline implementations. However, we tune each baseline separately in a thorough manner on the validation sets. This is crucial to ensure an “apples-to-apples” comparison." }, { "heading": "4 RESULTS", "text": "We evaluate our approach and the baselines on four different datasets with very diverse characteristics. The 2D Grid-world environment has a discrete action space, while the Dial, RoboSuite, and MIME environments have continuous action spaces. The Grid-world and Dial environments have demonstrations collected procedurally by hand-designed controllers, while the RoboSuite and MIME environments have demonstrations collected by humans. Human demonstrations in RoboSuite are collected via teleoperation, and kinesthetically in MIME. Grid-world, Dial and RoboSuite are in simulation, while MIME is from a real robot. A snapshot of these environments is shown in Figure 2."
}, { "heading": "4.1 PROOF OF CONCEPT: 2D NAVIGATION IN DISCRETE TOY GRID-WORLD", "text": "In the grid-world environment, the action space consists of moving up, moving down, moving left, moving right, and picking up object it is hovering over. There are 5 different types of objects uniquely identified by their color and an end task would consist of picking up some subset of all the objects in a particular order. The primitives are defined as picking up a specific type of object. We train our segmentation model on trajectories with 2-4 skill primitives and test with 5 skill trajectories. Each instantiation of the environment has a different starting position of the agent, different starting position of the objects, and different set of objects needed to be grabbed. The image inputs used are 33 by 30 resolution color images, and the max trajectory lengths are 50. This environment serves as toy scenario for proof of concept. At the bottom-left of the image, there is an indicator which suggests which skill is being executed. Hence, an efficient approach should\nachieve 100% accuracy, as is the case with our method as shown in Table 1." }, { "heading": "4.2 DIAL CONTROL ENVIRONMENT: JACO ROBOTIC ARM BASED MANIPULATION", "text": "In the Dial environment, proposed in (Shiarlis et al., 2018), there is a torque-controlled JACO 6 DoF arm and a dial pad that the arm can interact with which is simulated in MuJoCo. There are naturally 10 different types of primitives available in this environment corresponding to pressing numbers zero through nine. We train our segmentation model on trajectories with two to four sub-tasks and test with five sub-task trajectories. Each instantiation of the environment has a different sequence of numbers it expects to be dialed in the correct order. The image inputs used are 112 by 112 resolution gray-scale images, and the max trajectory lengths are 100.\nOur method performs similar to FCN-MIL on the train/val, and better on test. It significantly outperforms the other baselines (Table 2). However, we show that our segmentation are more useful for zero-shot execution performance as explained in Section 4.5. Learning perfect segmentation in the Dial environment is very challenging because there is little signal in most of the trajectory for each skill to signify exactly which digit will be pressed until the arm reaches proximity of the digit." }, { "heading": "4.3 ROBOSUITE ENVIRONMENT: SAWYER ROBOTIC ARM BASED OBJECT PICK AND PLACE", "text": "The RoboSuite environment (Mandlekar et al., 2018) has a Sawyer 7 DoF arm and four different objects (bread, can, cereal, and milk) that the arm can interact with simulated with MuJoCo physics engine. There are four different types of primitives available in this environment corresponding to pick and place of bread, can, cereal, and milk to correct corresponding bin. We train for trajectories with two to three skill primitives and test on trajectories with four skill primitives. Each instantiation of the environment has a different sequence and set of objects that need to be picked up and placed into their corresponding bins. The image inputs used are 128 by 128 resolution color images and the max trajectory lengths are 100.\nOur method significantly outperforms all baselines in full trajectory segmentation by a significant margin. Only MIL performs above random present class on validation and test datasets. 
Learning perfect segmentation in the RoboSuite environment is also very challenging because there is very little signal in most of each sub-task's trajectory to signify exactly which object will be picked up until near the end of the primitive, when the object has been picked up. The “pick object” portion of human demonstrations is usually much longer than the “place object” part because, with the tele-operation setup, the human stumbles a little until fully gripping the object. After the object is in the gripper, placing the object in a bin is a quick reach to the correct bin for that object." }, { "heading": "4.4 MIME ENVIRONMENT: BAXTER ROBOTIC ARM BASED MANIPULATION", "text": "MIME is a robotic-demonstration dataset that contains 8260 human-robot video demonstrations of 20 different robotic tasks (Sharma et al., 2018). We defined the following primitives for a subset of this dataset: reach for object, pour out object, stir inside object, stack objects, place object in box, and wipe with rag (6 primitives). All videos have two primitives, where one is to reach for an object and the other is the action to perform with or on the object. There is a held-out test dataset for each robotic task which we use for evaluation. The image inputs used are 120 by 320 resolution grayscale images and the max trajectory length is 100. Our method beats all other baselines in full trajectory segmentation by at least 1.8x on the test set (Table 3)." }, { "heading": "4.5 ZERO-SHOT RESULTS: JACO MANIPULATION", "text": "We use our segmentation model to create sub-datasets for each of our primitives and train a behavior-cloned skill policy for each. We then test our skill policies on performing higher sequence-length tasks not seen in the training data. During the creation of the sub-datasets, we rejected all segments shorter than 5 consecutive time-steps of the same labelled primitive. We applied Gaussian smoothing on the segmentation prior to extraction to filter out noisy predictions.\nWe demonstrate the zero-shot capability of our model and the baselines on the Dial control environment in Table 4. Our model performs at least 1.25x better than all baselines. We also show that, although FCN-MIL has the same segmentation accuracy as our method, after rejecting segments shorter than 5 time-steps our method has a significantly higher post-rejection segmentation accuracy. We speculate this is because our model commits less to wrong predictions than the baselines; therefore, wrong predictions are more easily rejected with our segmentation model." }, { "heading": "5 RELATED WORK", "text": "Multiple Instance Learning is the sub-field that deals with learning from weak labels (Dietterich et al., 1997), formulated mostly as max-margin. Classic formulations include MI-SVM (Andrews et al., 2002) or LSVM (Felzenszwalb et al., 2010). There have also been boosting (Ali & Saenko, 2014; Zhang et al., 2005) and Noisy-OR (Heckerman, 2013) formulations. This has been extensively explored in image segmentation (Pinheiro & Collobert, 2015; Pathak et al., 2015a). There is also work on learning to segment videos into the primitive components that take place in each video using human narration of the videos (Alayrac et al., 2018), (Naim et al., 2014), (Zhukov et al., 2019), (Richard et al., 2017). However, collecting human narrations is an expensive process that cannot scale to very large datasets. 
In our setup, we need to label each time-step of a video, with the further preferable constraint of having the labels as contiguous as possible.\nTraining goal-conditioned policies has become very popular recently, since it yields policies that are reusable for new tasks. The goals can be defined in state space (Schaul et al., 2015), (Andrychowicz et al., 2017) or in image space (Pathak et al., 2018), (Nair et al., 2018). These methods, however, suffer from not being able to reach goals that are out of the training distribution. They also tend to greedily reach goals, which makes them suffer on long-horizon tasks. This leads to having to break the problem down into checkpoint states, and defining these for each new task can still be expensive and cumbersome for a human (Pathak et al., 2018). A similar way to define a new task is one-shot imitation learning, where a new task is defined with one demonstration of the task; this can also be expensive if one needs to learn new tasks for which demonstrations are cumbersome for the human to collect (Duan et al., 2017).\nHierarchical policies have been used to solve long-horizon tasks by having a meta-policy or manager that takes in states at a sparser time scale and chooses which sub-policy or short-term goal the worker should achieve at a denser time scale (Dayan & Hinton, 1993), (Vezhnevets et al., 2017), (Sutton et al., 1999), (Bacon et al., 2017). However, the sub-policies learned in this type of options framework are not interpretable in their specialization and therefore tend to be hard to reuse for new tasks.\nLearning reusable and interpretable primitives from scratch has been very challenging. One way to tackle this is for humans to define the high-level primitives we care about beforehand. We can then decompose complex tasks into sub-tasks / primitives and learn the primitives first (Oh et al., 2017). By learning the primitives first, an agent can then perform any new task that can be decomposed into those primitives without needing any new human demonstration data (Andreas et al., 2017). The human only needs to specify the order in which the primitives need to be performed. In the presence of only trajectory-level, ordered high-level tags of which primitives were performed, the optimization is non-trivial. Shiarlis et al. (2018) provide one approach, using dynamic programming to efficiently compute gradients (Graves et al., 2006) through all possible trajectories of a given high-level permutation of primitives performed in each trajectory. We, however, have presented an approach that does not need the order in which the primitives were performed, thereby making the labeling process of videos much cheaper. (Kipf et al., 2019) and (Niekum et al., 2012) perform unsupervised primitive segmentation in environments using the state space, whereas we perform weakly-supervised primitive segmentation in image space in all of our environments ((Kipf et al., 2019) only use image input for their toy grid-world environment)." }, { "heading": "6 DISCUSSION", "text": "Obtaining primitives from existing experience and composing them to perform novel tasks presents a viable paradigm for tackling generalization to novel tasks. Due to the combinatorial nature of composition, this approach allows generalization to exponentially many scenarios with a linear number of primitives. This work provides an end-to-end trainable formulation for extracting primitives. Our results demonstrate strong segmentation accuracy, and show early evidence for zero-shot generalization via compositionality. 
Our primitive segmentations, obtained from demonstrations of end tasks, are more naturally extracted ‘options’ than those defined by an expert, which may be prone to bias. Another alternative, besides zero-shot composition, is to treat these primitives as new atomic actions to develop a hierarchical framework for modeling long-horizon tasks. Deciding the threshold for segmenting actions into different levels of the controller is one of the main bottlenecks in hierarchical control, and current approaches usually resort to domain knowledge to solve the issue. These extracted primitives, a.k.a. ‘macro’ actions, provide an alternative scaffolding which could bootstrap the hierarchy in a bottom-up manner. To promote these follow-up ideas, we will publicly release our code and environments." } ]
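The segment-extraction post-processing described in Section 4.5 (Gaussian smoothing followed by rejection of runs shorter than 5 time-steps) can be sketched as follows. This is an illustration rather than the authors' code: numpy and scipy are assumed, the 5-step threshold is the paper's stated rejection rule, and the smoothing width is an assumed parameter.

```python
# Sketch of the Section 4.5 post-processing: smooth per time-step class probabilities,
# take the argmax labeling, and keep only runs of >= 5 consecutive identical labels.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def extract_segments(probs, min_len=5, sigma=2.0):
    """probs: (T, K) per time-step class probabilities. Returns a list of
    (class_id, start, end) segments surviving the minimum-length rejection."""
    smoothed = gaussian_filter1d(probs, sigma=sigma, axis=0)  # filter out noisy flips
    labels = smoothed.argmax(axis=1)
    segments, start = [], 0
    for t in range(1, len(labels) + 1):
        if t == len(labels) or labels[t] != labels[start]:
            if t - start >= min_len:                          # reject short, noisy runs
                segments.append((int(labels[start]), start, t))
            start = t
    return segments

probs = np.random.dirichlet(np.ones(10), size=100)            # stand-in for model output
print(extract_segments(probs))
```

The surviving segments can then be grouped per class into the sub-datasets used to behavior-clone one policy per primitive, as described in Section 4.5.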
2019
null
SP:745b6d9c4d1e7c3d0b52c91984356c66e3c7ba5b
[ "This paper presents a new one-shot NAS approach. Parameter updating and structure updating are optimized separately. In the process of parameter optimization, different from the previous methods, the network samples the structure according to the uniform probability distribution. This sampling method can avoid the coupling caused by optimizing the structural parameters at the same time. After training the supernet, the network uses the genetic algorithm to search the structure.", "Authors revise the one-shot NAS algorithm in this work. One-shot NAS that employs a supernet to share the weights between subnets is an efficient NAS algorithm. Authors develop a new training paradigm to train the supernet sufficiently. Specifically, they uniformly sample a single path from supernet at each iteration to make the training effective and stable." ]
We revisit the one-shot Neural Architecture Search (NAS) paradigm and analyze its advantages over existing NAS approaches. The existing one-shot method (Bender et al., 2018), however, is hard to train and not yet effective on large-scale datasets like ImageNet. This work proposes a Single Path One-Shot model to address the challenge in training. Our central idea is to construct a simplified supernet, where all architectures are single paths, so that the weight co-adaptation problem is alleviated. Training is performed by uniform path sampling. All architectures (and their weights) are trained fully and equally. Comprehensive experiments verify that our approach is flexible and effective. It is easy to train and fast to search. It effortlessly supports complex search spaces (e.g., building blocks, channels, mixed-precision quantization) and different search constraints (e.g., FLOPs, latency). It is thus convenient to use for various needs. It achieves state-of-the-art performance on the large dataset ImageNet.
[]
[ { "authors": [ "Bowen Baker", "Otkrist Gupta", "Nikhil Naik", "Ramesh Raskar" ], "title": "Designing neural network architectures using reinforcement learning", "venue": "arXiv preprint arXiv:1611.02167,", "year": 2016 }, { "authors": [ "Gabriel Bender", "Pieter-Jan Kindermans", "Barret Zoph", "Vijay Vasudevan", "Quoc Le" ], "title": "Understanding and simplifying one-shot architecture search", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Andrew Brock", "Theodore Lim", "James M Ritchie", "Nick Weston" ], "title": "Smash: one-shot model architecture search through hypernetworks", "venue": "arXiv preprint arXiv:1708.05344,", "year": 2017 }, { "authors": [ "Han Cai", "Ligeng Zhu", "Song Han" ], "title": "Proxylessnas: Direct neural architecture search on target task and hardware", "venue": "arXiv preprint arXiv:1812.00332,", "year": 2018 }, { "authors": [ "Jungwook Choi", "Zhuo Wang", "Swagath Venkataramani", "Pierce I-Jen Chuang", "Vijayalakshmi Srinivasan", "Kailash Gopalakrishnan" ], "title": "Pact: Parameterized clipping activation for quantized neural networks", "venue": "arXiv preprint arXiv:1805.06085,", "year": 2018 }, { "authors": [ "Xuanyi Dong", "Yi Yang" ], "title": "Searching for a robust neural architecture in four gpu hours", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Andrew G Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications", "venue": "arXiv preprint arXiv:1704.04861,", "year": 2017 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Liam Li", "Ameet Talwalkar" ], "title": "Random search and reproducibility for neural architecture search", "venue": "arXiv preprint arXiv:1902.07638,", "year": 2019 }, { "authors": [ "Chenxi Liu", "Barret Zoph", "Maxim Neumann", "Jonathon Shlens", "Wei Hua", "Li-Jia Li", "Li Fei-Fei", "Alan Yuille", "Jonathan Huang", "Kevin Murphy" ], "title": "Progressive neural architecture search", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "Darts: Differentiable architecture search", "venue": "arXiv preprint arXiv:1806.09055,", "year": 2018 }, { "authors": [ "Zechun Liu", "Baoyuan Wu", "Wenhan Luo", "Xin Yang", "Wei Liu", "Kwang-Ting Cheng" ], "title": "Bi-real net: Enhancing the performance of 1-bit cnns with improved representational capability and advanced training algorithm", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Ningning Ma", "Xiangyu Zhang", "Hai-Tao Zheng", "Jian Sun" ], "title": "Shufflenet v2: Practical guidelines for efficient cnn architecture design", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Hieu Pham", "Melody Y Guan", "Barret Zoph", "Quoc V Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameter sharing", "venue": "arXiv preprint arXiv:1802.03268,", "year": 2018 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V Le" ], "title": "Regularized 
evolution for image classifier architecture search", "venue": "arXiv preprint arXiv:1802.01548,", "year": 2018 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "Mark Sandler", "Andrew Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Dimitrios Stamoulis", "Ruizhou Ding", "Di Wang", "Dimitrios Lymberopoulos", "Bodhi Priyantha", "Jie Liu", "Diana Marculescu" ], "title": "Single-path nas: Designing hardware-efficient convnets in less than 4 hours", "venue": null, "year": 1904 }, { "authors": [ "Mingxing Tan", "Bo Chen", "Ruoming Pang", "Vijay Vasudevan", "Quoc V Le" ], "title": "Mnasnet: Platformaware neural architecture search for mobile", "venue": "arXiv preprint arXiv:1807.11626,", "year": 2018 }, { "authors": [ "Tom Véniat", "Ludovic Denoyer" ], "title": "Learning time/memory-efficient deep architectures with budgeted super networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Bichen Wu", "Xiaoliang Dai", "Peizhao Zhang", "Yanghan Wang", "Fei Sun", "Yiming Wu", "Yuandong Tian", "Peter Vajda", "Yangqing Jia", "Kurt Keutzer" ], "title": "Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search", "venue": "arXiv preprint arXiv:1812.03443,", "year": 2018 }, { "authors": [ "Bichen Wu", "Yanghan Wang", "Peizhao Zhang", "Yuandong Tian", "Peter Vajda", "Kurt Keutzer" ], "title": "Mixed precision quantization of convnets via differentiable neural architecture search", "venue": "arXiv preprint arXiv:1812.00090,", "year": 2018 }, { "authors": [ "Sirui Xie", "Hehui Zheng", "Chunxiao Liu", "Liang Lin" ], "title": "Snas: stochastic neural architecture search", "venue": "arXiv preprint arXiv:1812.09926,", "year": 2018 }, { "authors": [ "Quanming Yao", "Ju Xu", "Wei-Wei Tu", "Zhanxing Zhu" ], "title": "Differentiable neural architecture search via proximal iterations", "venue": "arXiv preprint arXiv:1905.13577,", "year": 2019 }, { "authors": [ "Dongqing Zhang", "Jiaolong Yang", "Dongqiangzi Ye", "Gang Hua" ], "title": "Lq-nets: Learned quantization for highly accurate and compact deep neural networks", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Xiangyu Zhang", "Xinyu Zhou", "Mengxiao Lin", "Jian Sun" ], "title": "Shufflenet: An extremely efficient convolutional neural network for mobile devices", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Xinbang Zhang", "Zehao Huang", "Naiyan Wang" ], "title": "You only search once: Single shot neural architecture search via direct sparse optimization", "venue": "arXiv preprint arXiv:1811.01567,", "year": 2018 }, { "authors": [ "Zhao Zhong", "Junjie Yan", "Wei Wu", "Jing Shao", "Cheng-Lin Liu" ], "title": "Practical block-wise neural network architecture generation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Zhao Zhong", "Zichen Yang", 
"Boyang Deng", "Junjie Yan", "Wei Wu", "Jing Shao", "Cheng-Lin Liu" ], "title": "Blockqnn: Efficient block-wise neural network architecture generation", "venue": "arXiv preprint arXiv:1808.05584,", "year": 2018 }, { "authors": [ "Shuchang Zhou", "Yuxin Wu", "Zekun Ni", "Xinyu Zhou", "He Wen", "Yuheng Zou" ], "title": "Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients", "venue": "arXiv preprint arXiv:1606.06160,", "year": 2016 }, { "authors": [ "Barret Zoph", "Quoc V Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "arXiv preprint arXiv:1611.01578,", "year": 2016 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep learning automates feature engineering and solves the weight optimization problem. Neural Architecture Search (NAS) aims to automate architecture engineering by solving one more problem, architecture design. Early NAS approaches (Zoph et al., 2018; Zhong et al., 2018a;b; Liu et al., 2018a; Real et al., 2018; Tan et al., 2018) solves the two problems in a nested manner. A large number of architectures are sampled and trained from scratch. The computation cost is unaffordable on large datasets.\nRecent approaches (Wu et al., 2018a; Cai et al., 2018; Liu et al., 2018b; Xie et al., 2018; Pham et al., 2018; Zhang et al., 2018c; Brock et al., 2017; Bender et al., 2018) adopt a weight sharing strategy to reduce the computation. A supernet subsuming all architectures is trained only once. Each architecture inherits its weights from the supernet. Only fine-tuning is performed. The computation cost is greatly reduced.\nMost weight sharing approaches use a continuous relaxation to parameterize the search space (Wu et al., 2018a; Cai et al., 2018; Liu et al., 2018b; Xie et al., 2018; Zhang et al., 2018c). The architecture distribution parameters are jointly optimized during the supernet training via gradient based methods. The best architecture is sampled from the distribution after optimization. There are two issues in this formulation. First, the weights in the supernet are deeply coupled. It is unclear why inherited weights for a specific architecture are still effective. Second, joint optimization introduces further coupling between the architecture parameters and supernet weights. The greedy nature of the gradient based methods inevitably introduces bias during optimization and could easily mislead the architecture search. They adopted complex optimization techniques to alleviate the problem.\nThe one-shot paradigm (Brock et al., 2017; Bender et al., 2018) alleviates the second issue. It defines the supernet and performs weight inheritance in a similar way. However, there is no architecture relaxation. The architecture search problem is decoupled from the supernet training and addressed in a separate step. Thus, it is sequential. It combines the merits of both nested and joint optimization approaches above. The architecture search is both efficient and flexible.\nThe first issue is still problematic. Existing one-shot approaches (Brock et al., 2017; Bender et al., 2018) still have coupled weights in the supernet. Their optimization is complicated and involves sensitive hyper parameters. They have not shown competitive results on large datasets.\nThis work revisits the one-shot paradigm and presents a new approach that further eases the training and enhances architecture search. Based on the observation that the accuracy of an architecture using inherited weights should be predictive for the accuracy using optimized weights, we propose that the supernet training should be stochastic. All architectures have their weights optimized simultaneously. This gives rise to a uniform sampling strategy. To reduce the weight coupling in the supernet, a simple search space that consists of single path architectures is proposed. The training is hyperparameter-free and easy to converge.\nThis work makes the following contributions.\n1. We present a principled analysis and point out drawbacks in existing NAS approaches that use nested and joint optimization. 
Consequently, we hope this work will renew interest in the one-shot paradigm, which combines the merits of both via sequential optimization.\n2. We present a single path one-shot approach with uniform sampling. It overcomes the drawbacks of existing one-shot approaches. Its simplicity enables a rich search space, including novel designs for channel size and bit width, all addressed in a unified manner. Architecture search is efficient and flexible, and an evolutionary algorithm is used to easily support real-world constraints such as low latency.\nComprehensive ablation experiments and comparisons to previous works on a large dataset (ImageNet) verify that the proposed approach is state-of-the-art in terms of accuracy, memory consumption, training time, and architecture search efficiency and flexibility." }, { "heading": "2 REVIEW OF NAS APPROACHES", "text": "Without loss of generality, the architecture search space A is represented by a directed acyclic graph (DAG). A network architecture is a subgraph a ∈ A, denoted as N(a, w) with weights w. Neural architecture search aims to solve two related problems. The first is weight optimization,\n$w_a = \arg\min_{w} L_{\text{train}}(N(a, w))$, (1)\nwhere $L_{\text{train}}(\cdot)$ is the loss function on the training set. The second is architecture optimization: it finds the architecture that, trained on the training set, has the best accuracy on the validation set,\n$a^* = \arg\max_{a \in A} \mathrm{ACC}_{\text{val}}(N(a, w_a))$, (2)\nwhere $\mathrm{ACC}_{\text{val}}(\cdot)$ is the accuracy on the validation set. Early NAS approaches perform the two optimization problems in a nested manner (Zoph & Le, 2016; Zoph et al., 2018; Zhong et al., 2018a;b; Baker et al., 2016). Numerous architectures are sampled from A and trained from scratch as in Eq. (1). Each training is expensive, so only small datasets (e.g., CIFAR-10) and small search spaces (e.g., a single block) are affordable.\nRecent NAS approaches adopt a weight sharing strategy (Cai et al., 2018; Liu et al., 2018b; Wu et al., 2018a; Xie et al., 2018; Bender et al., 2018; Brock et al., 2017; Zhang et al., 2018c; Pham et al., 2018). The architecture search space A is encoded in a supernet1, denoted as N(A, W), where W is the set of weights in the supernet. The supernet is trained once, and all architectures inherit their weights directly from W; thus, they share the weights in their common graph nodes. Fine-tuning of an architecture is performed when needed, but no training from scratch is incurred. Therefore, architecture search is fast and suitable for large datasets like ImageNet.\nMost weight sharing approaches convert the discrete architecture search space into a continuous one (Wu et al., 2018a; Cai et al., 2018; Liu et al., 2018b; Xie et al., 2018; Zhang et al., 2018c). Formally, the space A is relaxed to A(θ), where θ denotes the continuous parameters that represent the distribution of the architectures in the space. Note that the new space subsumes the original one, A ⊆ A(θ), so an architecture sampled from A(θ) could be invalid in A.\n1“Supernet” is used as a general concept in this work. It has different names and implementations in previous approaches.\nAn advantage of the continuous search space is that gradient-based methods (Liu et al., 2018b; Cai et al., 2018; Wu et al., 2018a; Véniat & Denoyer, 2018; Xie et al., 2018; Zhang et al., 2018c) become applicable. Both weights and architecture distribution parameters are jointly optimized, as\n$(\theta^*, W_{\theta^*}) = \arg\min_{\theta, W} L_{\text{train}}(N(A(\theta), W))$. (3)\nAfter optimization, the best architecture $a^*$ is sampled from A(θ∗). Note that it could be invalid in A; if so, it must first be made valid (e.g., by binarization of θ (Liu et al., 2018b)). It then inherits the weights from $W_{\theta^*}$ and is fine-tuned.
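For concreteness, here is a minimal sketch, written by us purely for illustration, of how such a continuous relaxation can be realized: a choice among candidate operations is replaced by a softmax-weighted mixture parameterized by θ. The class name and interface are assumptions, not code from any cited method.

```python
import torch
import torch.nn as nn

class RelaxedChoiceBlock(nn.Module):
    """A block whose output is a softmax-weighted mixture of candidate operations."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)                      # candidate operations
        self.theta = nn.Parameter(torch.zeros(len(ops)))   # architecture parameters

    def forward(self, x):
        w = torch.softmax(self.theta, dim=0)               # continuous relaxation
        # theta and all op weights receive gradients from the same loss,
        # which is precisely the coupling criticized in the text
        return sum(wi * op(x) for wi, op in zip(w, self.ops))
```

Note that an architecture obtained by discretizing such θ drops the mixture the weights were trained under, which is one concrete source of the difficulties discussed next.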
Optimization of Eq. (3) is challenging. First, the weights of the graph nodes in the supernet depend on each other and become deeply coupled during optimization. A specific architecture inherits certain node weights from W; while these weights are decoupled from the others, it is unclear why they are still effective.\nSecond, joint optimization of the architecture parameters θ and the weights W introduces further complexity. Solving Eq. (3) inevitably introduces bias toward certain areas in θ and certain nodes in W as optimization progresses. The bias leaves some nodes in the graph well trained and others poorly trained. With different levels of maturity in the weights, different architectures are actually not comparable. Nevertheless, their prediction accuracy is used as guidance for sampling in A(θ) (e.g., as the reward in policy gradient (Cai et al., 2018)), which further misleads the architecture sampling. This problem is analogous to the exploration-exploitation dilemma in reinforcement learning. To alleviate such problems, existing approaches adopt complicated optimization techniques (see Table 7 for a summary). Nevertheless, a comprehensive evaluation of their effectiveness is still lacking (Li & Talwalkar, 2019).\nTask constraints. Real-world tasks usually have additional requirements on a network’s memory consumption, FLOPs, latency, energy consumption, etc. These requirements depend only on the architecture a, not on the weights $w_a$; thus, they are called architecture constraints in this work. A typical constraint is that the network’s latency is no more than a preset budget, as\n$\mathrm{Latency}(a^*) \leq \mathrm{Lat}_{\max}$. (4)\nNote that it is challenging for most previous approaches to satisfy Eq. (2) and Eq. (4) simultaneously. Some works augment the loss function $L_{\text{train}}$ in Eq. (3) with soft loss terms that account for the architecture latency (Cai et al., 2018; Wu et al., 2018a; Xie et al., 2018; Véniat & Denoyer, 2018). However, it is hard, if not impossible, to guarantee a hard constraint like Eq. (4)." }, { "heading": "3 OUR SINGLE PATH ONE-SHOT APPROACH", "text": "As analyzed above, the coupling between architecture parameters and weights is problematic. It is caused by the joint optimization of both. A natural remedy is to decouple the supernet training and the architecture search into two sequential steps, which leads to the so-called one-shot approaches (Brock et al., 2017; Bender et al., 2018).\nIn general, the two steps are formulated as follows. First, the supernet weights are optimized as\n$W_A = \arg\min_{W} L_{\text{train}}(N(A, W))$. (5)\nCompared to Eq. (3), the continuous parameterization of the search space is absent; only the weights are optimized.\nSecond, the architecture search is performed as\n$a^* = \arg\max_{a \in A} \mathrm{ACC}_{\text{val}}(N(a, W_A(a)))$. (6)\nDuring search, each sampled architecture a inherits its weights from $W_A$ as $W_A(a)$. The key difference of Eq. (6) from Eq. (1) and (2) is that the architecture weights are ready for use: evaluating $\mathrm{ACC}_{\text{val}}(\cdot)$ only requires inference, so the search is very efficient. The search is also flexible. Any adequate search algorithm is feasible, and an architecture constraint like Eq. (4) can be satisfied exactly. The search can be repeated many times on the same supernet once it is trained, using different constraints (e.g., 100 ms latency and 200 ms latency). These properties are absent in previous approaches; a minimal sketch of such a constrained search step is given below.
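In the sketch, Eq. (6) is evaluated under the hard constraint of Eq. (4) by plain rejection; `sample_arch`, `latency`, and `accuracy_val` are hypothetical helpers assumed for illustration, and in practice a smarter search algorithm (such as the evolutionary search described later) replaces blind sampling.

```python
def one_shot_search(supernet, sample_arch, accuracy_val, latency, lat_max,
                    n_candidates=1000):
    """Eq. (6) under the hard constraint of Eq. (4): inference only, no training."""
    best_arch, best_acc = None, -1.0
    for _ in range(n_candidates):
        arch = sample_arch()                 # candidate a from the search space A
        if latency(arch) > lat_max:          # hard constraint: reject outright
            continue
        acc = accuracy_val(supernet, arch)   # weights inherited from W_A, no fine-tuning
        if acc > best_acc:
            best_arch, best_acc = arch, acc
    return best_arch, best_acc
```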
They make the one-shot paradigm attractive for real-world tasks.\nOne problem from Sec. 2 still remains: the weights of the graph nodes in the supernet training of Eq. (5) are coupled, and it is unclear why the inherited weights $W_A(a)$ are still good for an arbitrary architecture a.\nThe recent one-shot approach of (Bender et al., 2018) attempts to decouple the weights using a “path dropout” strategy. During an SGD step on Eq. (5), each edge in the supernet graph is randomly dropped, with the probability controlled via a dropout rate parameter. In this way, the co-adaptation of the node weights is reduced during training. Experiments in (Bender et al., 2018) indicate that training is very sensitive to the dropout rate parameter, which makes the supernet training hard, and a carefully tuned heat-up strategy is used. In our implementation of this work, we also found that the validation accuracy is very sensitive to the dropout rate parameter.\nSingle Path Supernet and Uniform Sampling. Let us rethink the fundamental principle behind the idea of weight sharing. The key to the success of the architecture search in Eq. (6) is that the accuracy of any architecture a on the validation set using inherited weights $W_A(a)$ (without extra fine-tuning) is highly predictive of the accuracy of a when fully trained. Ideally, this requires the weights $W_A(a)$ to approximate the optimal weights $w_a$ of Eq. (1). The quality of the approximation depends on how well the training loss $L_{\text{train}}(N(a, W_A(a)))$ is minimized. This gives rise to the principle that the supernet weights $W_A$ should be optimized in a way that all architectures in the search space are optimized simultaneously. This is expressed as\n$W_A = \arg\min_{W} \mathbb{E}_{a \sim \Gamma(A)}[L_{\text{train}}(N(a, W(a)))]$, (7)\nwhere Γ(A) is a prior distribution over a ∈ A. Note that Eq. (7) is an implementation of Eq. (5). In each optimization step, an architecture a is randomly sampled, and only the weights W(a) are activated and updated, so the memory usage is efficient. In this sense, the supernet is no longer a valid network; it behaves as a stochastic supernet (Véniat & Denoyer, 2018). This is different from (Bender et al., 2018).\nTo reduce the co-adaptation between node weights, we propose a supernet structure in which each architecture is a single path, as shown in Fig. 3. Compared to the path dropout strategy in (Bender et al., 2018), the single path strategy is hyperparameter-free. We compared the two strategies within the same search space (as in this work). Note that the original path dropout in (Bender et al., 2018) may drop all operations in a block, resulting in an identity shortcut; in our implementation, one random path is forced to be kept in this case, since our choice block does not have an identity branch. We randomly select sub-networks and evaluate their validation accuracy during the training stage. Results in Fig. 1 show that the drop rate parameter matters a lot. Our single path strategy corresponds to using a drop rate of 1. It works the best, which also verifies the benefit of weight decoupling by our single path strategy.\nThe prior distribution Γ(A) is important. In this work, we empirically find that uniform sampling is good. This is not much of a surprise: a recent work also finds that purely random search is competitive with several SOTA NAS approaches (Li & Talwalkar, 2019). We also experimented with a variant that samples the architectures uniformly subject to their constraints, named uniform constraint sampling; a sketch of the plain uniform-sampling training step of Eq. (7) is given below, and the constrained variant is described right after.
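Concretely, one SGD step of Eq. (7) might look as follows; the `supernet(images, arch)` interface and the per-block choice encoding are assumptions of this sketch rather than the released implementation.

```python
import random
import torch.nn.functional as F

def train_step(supernet, optimizer, images, labels, choices_per_block):
    """One SGD step of Eq. (7): a single path is sampled from the uniform prior."""
    arch = [random.randrange(n) for n in choices_per_block]  # Gamma(A): uniform
    optimizer.zero_grad()
    logits = supernet(images, arch)       # only the ops on the sampled path run
    loss = F.cross_entropy(logits, labels)
    loss.backward()                       # only the path weights W(a) get gradients
    optimizer.step()
    return loss.item()
```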
For the constrained variant, specifically, we randomly choose a FLOPs range and then repeatedly sample architectures until the FLOPs of the sampled architecture fall within the range. This is because a real task usually requires finding multiple architectures satisfying different constraints. In this work, we find the uniform constraint sampling method slightly better, so we use it by default in this paper.\nWe note that sampling a path according to an architecture distribution during optimization is already used in previous weight sharing approaches (Pham et al., 2018; Véniat & Denoyer, 2018; Wu et al., 2018a; Cai et al., 2018; Xie et al., 2018; Zhang et al., 2018c; Yao et al., 2019; Dong & Yang, 2019; Stamoulis et al., 2019). The difference is that the distribution Γ(A) is a fixed prior during our training (Eq. (7)), while it is learnable and updated (Eq. (3)) in previous approaches (e.g., RL (Pham et al., 2018), policy gradient (Véniat & Denoyer, 2018; Cai et al., 2018), Gumbel Softmax (Wu et al., 2018a; Xie et al., 2018), APG (Zhang et al., 2018c)). As analyzed in Sec. 2, the latter makes the supernet weights and architecture parameters highly correlated and optimization difficult.\nComprehensive experiments in Sec. 4 show that our approach achieves better results than the SOTA methods. Note that there is no theoretical guarantee that using a fixed prior distribution is inherently better than optimizing the distribution during training. Our better results likely indicate that the joint optimization in Eq. (3) is too difficult for existing optimization techniques.\nSupernet Architecture and Novel Choice Block Design. Choice blocks are used to build a stochastic architecture. Fig. 3 illustrates an example case. A choice block consists of multiple architecture choices. For our single path supernet, each choice block has only one choice invoked at a time. A path is obtained by sampling all the choice blocks.\nThe simplicity of our approach enables us to define different types of choice blocks to search various architecture variables. Specifically, we propose two novel choice blocks to support complex search spaces.\nChannel Number Search. We propose a new choice block based on weight sharing, as shown in Fig. 4. The main idea is to preallocate a weight tensor with the maximum number of channels; the system randomly selects a channel number and slices out the corresponding subtensor for convolution. With this weight sharing strategy, we found that the supernet converges quickly.\nIn detail, assume the dimensions of the preallocated weights are (max_c_out, max_c_in, ksize). For each batch in supernet training, the number of current output channels c_out is randomly sampled. Then, we slice out the weights for the current batch in the form Weights[:c_out, :c_in, :], which is used to produce the output. The optimal number of channels is determined later in the search step; a minimal sketch of such a sliced convolution follows.
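In the sketch below, the 4-D kernel layout (c_out, c_in, k, k) and the `forward(x, c_out)` interface are our own conventions for illustration, not the paper's released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlicedConv2d(nn.Module):
    """Channel-search choice block: preallocate max channels, slice per batch."""
    def __init__(self, max_c_in, max_c_out, ksize):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(max_c_out, max_c_in, ksize, ksize) * 0.01)

    def forward(self, x, c_out):
        c_in = x.shape[1]
        w = self.weight[:c_out, :c_in, :, :]    # shared subtensor of the full kernel
        return F.conv2d(x, w, padding=w.shape[-1] // 2)
```

Because every sampled width reuses a slice of the same kernel, all widths are trained jointly, which is consistent with the fast convergence observed above.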
Mixed-Precision Quantization Search. In this work, we design a novel choice block to search the bit widths of the weights and feature maps, as shown in Fig. 5. We also combine the channel search space discussed earlier with our mixed-precision quantization search space. During supernet training, the feature bit width and weight bit width are randomly sampled for each choice block; they are determined in the evolutionary step. See Sec. 4 and Fig. 5 for details.\nEvolutionary Architecture Search. For the architecture search in Eq. (6), previous one-shot works (Brock et al., 2017; Bender et al., 2018) use random search. This is not effective for a large search space, so this work uses an evolutionary algorithm. Note that evolutionary search has been used in NAS before (Real et al., 2018), but there it is costly because each architecture is trained from scratch; in our search, each architecture only performs inference, which is very efficient.\nThe algorithm is elaborated in Algorithm 1. For all experiments, the population size is P = 50, the maximum number of iterations is T = 20, and k = 10. For crossover, two randomly selected candidates are crossed to produce a new one. For mutation, a randomly selected candidate mutates each of its choice blocks with probability 0.1 to produce a new candidate. Crossover and mutation are repeated to generate enough new candidates that meet the given architecture constraints. Before the inference of an architecture, the statistics of all Batch Normalization (BN) (Ioffe & Szegedy, 2015) operations are recalculated on a random subset of the training data (20000 images on ImageNet), which takes a few seconds. This is because the BN statistics from the supernet are usually not applicable to the candidate nets; this is also noted in (Bender et al., 2018).\nFig. 2 plots the validation accuracy over generations, using both evolutionary and random search methods. It is clear that evolutionary search is more effective. Experiment details are in Sec. 4.\nThe evolutionary algorithm is flexible in dealing with different constraints in Eq. (4), because the mutation and crossover processes can be directly controlled to generate proper candidates that satisfy the constraints. Previous RL-based (Tan et al., 2018) and gradient-based (Cai et al., 2018; Wu et al., 2018a; Véniat & Denoyer, 2018) methods design tricky rewards or loss functions to deal with such constraints. For example, (Wu et al., 2018a) uses a loss function $\mathrm{CE}(a, w_a) \cdot \alpha \log(\mathrm{LAT}(a))^{\beta}$ to balance accuracy and latency, and it is hard to tune the hyperparameter β so as to satisfy a hard constraint like Eq. (4).
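As an illustration of how the constraints can be enforced by construction inside the search (complementing Algorithm 1 in the appendix), consider the following sketch; architectures are encoded as lists of per-block choice indices, and `flops` is an assumed helper, so this is our paraphrase rather than the released implementation.

```python
import random

def mutate_with_constraint(parent, n_choices, flops, flops_max, prob=0.1, max_tries=100):
    """Generate a mutated child that is guaranteed to satisfy the FLOPs budget."""
    for _ in range(max_tries):
        child = [random.randrange(n_choices) if random.random() < prob else c
                 for c in parent]
        if flops(child) <= flops_max:     # hard constraint checked before any evaluation
            return child
    return parent                          # fall back to the (feasible) parent

def crossover_with_constraint(p1, p2, flops, flops_max, max_tries=100):
    """Child inherits each choice block from either parent; resample until feasible."""
    for _ in range(max_tries):
        child = [random.choice(pair) for pair in zip(p1, p2)]
        if flops(child) <= flops_max:
            return child
    return p1
```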
Summary. The combination of the single path supernet, the uniform sampling training strategy, the evolutionary architecture search, and the rich search space design makes our approach simple, efficient and flexible. Table 7 in the Appendix gives a comprehensive comparison of our approach against previous weight sharing approaches on various aspects. Ours is the easiest to train, occupies the smallest memory, best satisfies the architecture (latency) constraint, and easily supports large datasets. Extensive results in Sec. 4 verify that our approach is state-of-the-art." }, { "heading": "4 EXPERIMENT RESULTS", "text": "Dataset. All experiments are performed on ImageNet (Russakovsky et al., 2015). We randomly split the original training set into two parts: 50000 images for validation (exactly 50 images for each class) and the rest as the training set. The original validation set is used for testing, on which all the evaluation results are reported, following (Cai et al., 2018).\nTraining. For the training of the supernet and the retraining of the best architecture (after evolutionary search) from scratch, we use the same settings (including data augmentation strategy, learning rate schedule, etc.) as (Ma et al., 2018). The batch size is 1024. The supernet is trained for 120 epochs (150000 iterations) and the best architecture for 240 epochs (300000 iterations). Training uses 8 NVIDIA GTX 1080Ti GPUs.\nSearch Space: Building Blocks. First, we evaluate our method on the task of building block selection, i.e., finding the optimal combination of building blocks under a certain complexity constraint. Our basic building block design is inspired by a state-of-the-art manually-designed network, ShuffleNet v2 (Ma et al., 2018). Table 1 shows the overall architecture of the supernet. There are 20 choice blocks in total. Each choice block has 4 candidates, namely “choice 3”, “choice 5”, “choice 7” and “choice x” (see Fig. 6 in the Appendix for details). They differ in kernel sizes and the number of depthwise convolutions. The size of the search space is 4^20.\nWe use FLOPs ≤ 330M as the complexity constraint, as the FLOPs of plenty of previous networks lie in [300M, 330M], including manually-designed networks (Howard et al., 2017; Sandler et al., 2018; Zhang et al., 2018b; Ma et al., 2018) and those obtained by NAS (Cai et al., 2018; Wu et al., 2018a; Tan et al., 2018).\nTable 2 shows the results. For comparison, we set up a series of baselines as follows: 1) select a certain block choice only (denoted by “all choice *” entries); note that different choices have different FLOPs, so we adjust the channels to meet the constraint; 2) randomly select some candidates from the search space; 3) replace our evolutionary architecture optimization with the random search used in (Brock et al., 2017; Bender et al., 2018). The results show that random search equipped with our single path supernet finds an architecture only slightly better than random selection (73.8 vs. 73.7). This does not mean that our single path supernet is less effective; rather, random search is too naive to pick good candidates from the large search space. Using evolutionary search, our approach finds an architecture that achieves superior accuracy (74.3) over all the baselines.\nSearch Space: Channels. Based on our novel choice block for channel number search, we first evaluate channel search on the baseline structure “all choice 3” (refer to Table 2): for each building block, we search the number of “mid-channels” (output channels of the first 1x1 conv in each building block) varying from 0.2x to 1.6x (with stride 0.2), where “k-x” means k times the number of default channels. As above, we set the complexity constraint FLOPs ≤ 330M. Table 3 (first part) shows the result. Our channel search method has higher accuracy (73.9) than the baselines.\nTo further boost the accuracy, we search building blocks and channels jointly. There are two alternatives: 1) running channel search on the best building block search result above; or 2) searching the combined search space directly. In our experiments, we find the results of the first pipeline slightly better. As shown in Table 3, searching in the joint space achieves the best accuracy (74.7%), surpassing all the previous state-of-the-art manually-designed (Ma et al., 2018; Sandler et al., 2018) and automatically-searched models (Tan et al., 2018; Zoph et al., 2018; Liu et al., 2018a;b; Cai et al., 2018; Wu et al., 2018a) under a complexity of ∼300M FLOPs.\nComparison with State-of-the-arts. The results in Table 3 show that our method is superior. Nevertheless, the comparisons could be unfair because different search spaces and training methods are used in previous works (Cai et al., 2018). To make direct comparisons, we benchmark our approach in the same search space as (Cai et al., 2018; Wu et al., 2018a). 
In addition, we retrain the searched models reported in (Cai et al., 2018; Wu et al., 2018a) under the same settings to guarantee a fair comparison.\nThe search space and supernet architecture of ProxylessNAS (Cai et al., 2018) are inspired by MobileNet v2 (Sandler et al., 2018) and MnasNet (Tan et al., 2018). It contains 21 choice blocks; each choice block has 7 choices (6 different building blocks and one skip layer). The size of the search space is 7^21. FBNet (Wu et al., 2018a) also uses a similar search space.\nTable 4 reports the accuracy and complexities (FLOPs and latency on our device) of 5 models searched by (Cai et al., 2018; Wu et al., 2018a), as the baselines. Then, for each baseline, our search method runs under the constraint of the same FLOPs or the same latency, respectively. The results show that in all cases our method achieves comparable or higher accuracy than the counterpart baselines. We also point out that since the target devices in (Cai et al., 2018; Wu et al., 2018a) are different from ours, the reported results may be sub-optimal on our platform.\nFurthermore, it is worth noting that all our 10 architectures in Table 4 are searched on the same supernet, demonstrating the flexibility and efficiency of our approach in dealing with different complexity constraints: the supernet is trained once and searched multiple times. In contrast, previous methods (Wu et al., 2018a; Cai et al., 2018) have to train multiple supernets under the various constraints. According to Table 6, searching is much cheaper than supernet training.\nApplication: Mixed-Precision Quantization. We evaluate our method on ResNet-18 and ResNet-34, as is common practice in previous quantization works (e.g., (Choi et al., 2018; Wu et al., 2018b; Liu et al., 2018c; Zhou et al., 2016; Zhang et al., 2018a)). Following (Zhou et al., 2016; Choi et al., 2018; Wu et al., 2018b), we only search and quantize the res-blocks, excluding the first convolutional layer and the last fully-connected layer. In the search space, the choices of weight and feature bit widths include {(1, 2), (2, 2), (1, 4), (2, 4), (3, 4), (4, 4)}. As for channel search, we search the number of “bottleneck channels” (i.e., the output channels of the first convolutional layer in each residual block) in {0.5x, 1.0x, 1.5x}, where “k-x” means k times the number of original channels. The size of the search space is (3×6)^N = 18^N, where N is the number of choice blocks (N = 8 for ResNet-18 and N = 16 for ResNet-34). Note that for each building block we use the same bit widths for the two convolutions. We use PACT (Choi et al., 2018) as the quantization algorithm.
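A minimal sketch of how such a mixed-precision choice block might sample its configuration per training batch is shown below; the `quantize` function is a plain uniform quantizer standing in for PACT (whose learned clipping threshold is omitted here), and all names are illustrative assumptions.

```python
import random
import torch

BIT_CHOICES = [(1, 2), (2, 2), (1, 4), (2, 4), (3, 4), (4, 4)]   # (weight, feature) bits
WIDTH_CHOICES = [0.5, 1.0, 1.5]                                  # bottleneck multipliers

def quantize(x, bits):
    """Uniform quantization of a tensor clipped to [0, 1] (stand-in for PACT)."""
    scale = 2 ** bits - 1
    return torch.round(x.clamp(0, 1) * scale) / scale

def sample_block_config():
    """One random (bit widths, width) configuration per choice block, per batch."""
    return random.choice(BIT_CHOICES), random.choice(WIDTH_CHOICES)
```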
Table 5 reports the results. The baselines are denoted as kWkA (k = 2, 3, 4), which means uniform quantization of weights and activations with k bits. Our search method then runs under the constraints of the corresponding BitOps. We also compare with a recent mixed-precision quantization search approach (Wu et al., 2018b). The results show that our method achieves superior accuracy in most cases. Also note that all our results for ResNet-18 and ResNet-34 are searched on the same supernet, which is very efficient.\nSearch Cost Analysis. The search cost is a matter of concern in NAS methods, so we analyze the search cost of our method and of previous methods (Wu et al., 2018a; Cai et al., 2018) (reimplemented by us). We use the search space of our building blocks to measure the memory cost of training the supernet and the overall time cost. All the supernets are trained for 150000 iterations with a batch size of 256. All models are trained with 8 GPUs. Table 6 shows that our approach clearly uses less memory than the other two methods because of the single path supernet. Our approach is also much more efficient overall, although it has an extra search step that costs less than 1 GPU day. Note that Table 6 only compares a single run. In practice, our approach is even more advantageous and more convenient to use when multiple searches are needed. As summarized in Table 7, it is guaranteed to find an architecture satisfying the constraints within one search, and repeated search is easily supported." }, { "heading": "A APPENDIX", "text": "Algorithm 1: Evolutionary Architecture Search\nInput: supernet weights $W_A$, population size P, architecture constraints C, max iteration T, validation dataset $D_{val}$\nOutput: the architecture with the highest validation accuracy under the architecture constraints\n1: $P_0$ := Initialize_population(P, C); Topk := ∅\n2: n := P/2 (crossover number)\n3: m := P/2 (mutation number)\n4: prob := 0.1 (mutation probability)\n5: for i = 1 : T do\n6:  $ACC_{i-1}$ := Inference($W_A$, $D_{val}$, $P_{i-1}$)\n7:  Topk := Update_Topk(Topk, $P_{i-1}$, $ACC_{i-1}$)\n8:  $P_{crossover}$ := Crossover(Topk, n, C)\n9:  $P_{mutation}$ := Mutation(Topk, m, prob, C)\n10:  $P_i$ := $P_{crossover}$ ∪ $P_{mutation}$\n11: end for\n12: return the architecture with the highest accuracy in Topk" } ]
2019
null
SP:b451949bef9c16fe4dea78ef337d7f7dcbae3f90
[ "This paper proposes to learn static and dynamic channel pruning policies for convolutional neural networks. Static pruning depends only on the training dataset and is computed once before the model is deployed. Dynamic pruning is input-dependent. The policies are obtained with deep reinforcement learning on the training dataset using a combination of the loss function and storage/computation resource budgets as a reward signal. The key novelty in this paper is to combine static and dynamic pruning which can obtain the benefits from both worlds. Experimentally, the learned pruning policies are competitive with recent dynamic pruning approaches on CIFAR-10 and ILSVRC2012, in terms of both final test accuracy and number of parameters/inference time.", "This work introduces a Reinforcement Learning based framework that simultaneously learns both a static and dynamic pruning strategy. The combination allows the static pruner to decrease the required storage while the dynamic pruning can optimize the required compute using input-dependent pruned weights. The RL agent can dynamically learn the optimal sparsity distribution among the different layer of the networks while staying under a resource constraint as opposed to other methods which often enforce a layer level sparsity ratio. It demonstrates the efficacy of the algorithm on CIFAR10 and ILSVRC2012 and showed the effect of the tradeoff between static and dynamic pruning." ]
In this paper, we propose a deep reinforcement learning (DRL) based framework to efficiently perform runtime channel pruning on convolutional neural networks (CNNs). Our DRL-based framework aims to learn a pruning strategy that determines how many and which channels to prune in each convolutional layer, depending on each specific input instance at runtime. The learned policy optimizes the performance of the network by restricting the computational resources of the layers under an overall computation budget. Furthermore, unlike other runtime pruning methods, which require storing all channel parameters for inference, our framework can reduce parameter storage consumption for deployment by introducing a static pruning component. Experimental comparisons with existing runtime and static pruning methods on state-of-the-art CNNs demonstrate that our proposed framework is able to provide a trade-off between dynamic flexibility and storage efficiency in runtime channel pruning.
[]
[ { "authors": [ "Tolga Bolukbasi", "Joseph Wang", "Ofer Dekel", "Venkatesh Saligrama" ], "title": "Adaptive neural networks for efficient inference", "venue": "In Proceedings of the 34th International Conference on Machine LearningVolume", "year": 2017 }, { "authors": [ "Matthieu Courbariaux", "Itay Hubara", "Daniel Soudry", "Ran El-Yaniv", "Yoshua Bengio" ], "title": "Binarized neural networks: Training deep neural networks with weights and activations constrained to+ 1 or-1", "venue": "arXiv preprint arXiv:1602.02830,", "year": 2016 }, { "authors": [ "Misha Denil", "Babak Shakibi", "Laurent Dinh", "Marc’Aurelio Ranzato", "Nando de Freitas" ], "title": "Predicting parameters in deep learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2013 }, { "authors": [ "Xin Dong", "Shangyu Chen", "Sinno Pan" ], "title": "Learning to prune deep neural networks via layer-wise optimal brain surgeon", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Xitong Gao", "Yiren Zhao", "ukasz Dudziak", "Robert Mullins", "Cheng zhong Xu" ], "title": "Dynamic channel pruning: Feature boosting and suppression", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "K. He", "G. Gkioxari", "P. Dollr", "R. Girshick" ], "title": "Mask r-cnn", "venue": "IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Yang He", "Ping Liu", "Ziwei Wang", "Zhilan Hu", "Yi Yang" ], "title": "Filter pruning via geometric median for deep convolutional neural networks acceleration", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Yihui He", "Xiangyu Zhang", "Jian Sun" ], "title": "Channel pruning for accelerating very deep neural networks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Yihui He", "Ji Lin", "Zhijian Liu", "Hanrui Wang", "Li-Jia Li", "Song Han" ], "title": "Amc: Automl for model compression and acceleration on mobile devices", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Jie Hu", "Li Shen", "Gang Sun" ], "title": "Squeeze-and-excitation networks", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Weizhe Hua", "Christopher De Sa", "Zhiru Zhang", "G Edward Suh" ], "title": "Channel gating neural networks. 
2018", "venue": null, "year": 2018 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2012 }, { "authors": [ "Hei Law", "Jia Deng" ], "title": "Cornernet: Detecting objects as paired keypoints", "venue": "In The European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Ji Lin", "Yongming Rao", "Jiwen Lu", "Jie Zhou" ], "title": "Runtime neural pruning", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Lanlan Liu", "Jia Deng" ], "title": "Dynamic deep neural networks: Optimizing accuracy-efficiency trade-offs by selective execution", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Liu Liu", "Lei Deng", "Xing Hu", "Maohua Zhu", "Guoqi Li", "Yufei Ding", "Yuan Xie" ], "title": "Dynamic sparse graph for efficient deep learning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Zhuang Liu", "Jianguo Li", "Zhiqiang Shen", "Gao Huang", "Shoumeng Yan", "Changshui Zhang" ], "title": "Learning efficient convolutional networks through network slimming", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Jian-Hao Luo", "Jianxin Wu" ], "title": "Autopruner: An end-to-end trainable filter pruning method for efficient deep model inference", "venue": "CoRR, abs/1805.08941,", "year": 2018 }, { "authors": [ "Jian-Hao Luo", "Jianxin Wu", "Weiyao Lin" ], "title": "Thinet: A filter level pruning method for deep neural network compression", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Marc Masana", "Joost van de Weijer", "Luis Herranz", "Andrew D. Bagdanov", "Jose M. Alvarez" ], "title": "Domain-adaptive deep network compression", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Hanyu Peng", "Jiaxiang Wu", "Shifeng Chen", "Junzhou Huang" ], "title": "Collaborative channel pruning for deep networks", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Clemens Rosenbaum", "Tim Klinger", "Matthew Riemer" ], "title": "Routing networks: Adaptive selection of non-linear functions for multi-task learning", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein", "Alexander C. Berg", "Li FeiFei" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International Journal of Computer Vision,", "year": 2015 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017", "venue": null, "year": 2017 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Richard S. 
Sutton", "Andrew G. Barto" ], "title": "Introduction to Reinforcement Learning", "venue": null, "year": 1998 }, { "authors": [ "Xin Wang", "Fisher Yu", "Zi-Yi Dou", "Trevor Darrell", "Joseph E Gonzalez" ], "title": "Skipnet: Learning dynamic routing in convolutional networks", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Wei Wen", "Chunpeng Wu", "Yandan Wang", "Yiran Chen", "Hai Li" ], "title": "Learning structured sparsity in deep neural networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Yiren Zhao", "Xitong Gao", "Robert Mullins", "Chengzhong Xu" ], "title": "Mayo: A framework for autogenerating hardware friendly deep neural networks", "venue": "In Proceedings of the 2Nd International Workshop on Embedded and Mobile Deep Learning,", "year": 2018 }, { "authors": [ "Xingyi Zhou", "Dequan Wang", "Philipp Krähenbühl" ], "title": "Objects as points", "venue": "In arXiv preprint arXiv:1904.07850,", "year": 2019 }, { "authors": [ "Yi Zhu", "Karan Sapra", "Fitsum A. Reda", "Kevin J. Shih", "Shawn Newsam", "Andrew Tao", "Bryan Catanzaro" ], "title": "Improving semantic segmentation via video propagation and label relaxation", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Zhuangwei Zhuang", "Mingkui Tan", "Bohan Zhuang", "Jing Liu", "Yong Guo", "Qingyao Wu", "Junzhou Huang", "Jinhui Zhu" ], "title": "Discrimination-aware channel pruning for deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "In recent years, convolutional neural networks (CNNs) have been proven to be effective in a wide range of computer vision tasks, such as image classification (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016), objection detection (He et al., 2017; Zhou et al., 2019; Law & Deng, 2018), segmentation (He et al., 2017; Zhu et al., 2019). Therefore, nowadays, many computervision-based systems, such as automatic-driving cars, security surveillance cameras, and robotics, are built on the power of CNNs. However, since most state-of-the-art CNNs require expensive computation power for inference and huge storage space to store large amount of parameters, the limitation of energy, computation and storage on mobile or edge devices has become the major bottleneck on real-world deployments of CNNs. Existing studies have been focused on speeding up the execution of CNNs for inference on edge devices by model compression using matrix decomposition (Denil et al., 2013; Masana et al., 2017), network quantization (Courbariaux et al., 2016), network pruning (Dong et al., 2017), etc. Among these approaches, channel pruning has shown promising performance (He et al., 2017; Luo et al., 2017; Zhuang et al., 2018; Peng et al., 2019). Specifically, channel pruning discards an entire input or output channel and keep the rest of the model with structures.\nMost channel pruning approaches can be categorized into two types: runtime approaches and static approaches. Static channel pruning approaches aim to design a measurement to evaluate the importance of each channel over the whole training dataset and remove the least important channels to minimize the loss of performance after pruning. By permanently pruning a number of channels, the computation and storage cost of CNNs can be dramatically reduced when being deployed, and the inference execution can be accelerated consequently. Runtime channel pruning approaches have been recently proposed to achieve dynamic channel pruning on each specific instance (Gao et al., 2019; Luo & Wu, 2018). To be specific, the goal of runtime approaches aims to evaluate the channel importance at runtime, which is assumed to be different on different input instances. By pruning channels dynamically, different pruned structures can be considered as different routing of data stream inside CNNs. This kind of approaches is able to significantly improve the representation capability of CNNs, and thus achieve better performance in terms of prediction accuracy compared with static approaches. However, previous runtime approaches trade storage cost off dynamic flex-\nibility of pruning. To achieve dynamic pruning on different specific instances, all parameters of kernels are required to be stored (or even more parameters are introduced). This makes runtime approaches not applicable on resource-limited edge devices. Moreover, most of previous runtime approaches only evaluate the importance among channels in each single layer independently, without considering the difference in efficiency among layers.\nIn this paper, to address the aforementioned issues of runtime channel pruning approaches, we propose a deep reinforcement learning (DRL) based pruning framework. Basically, we aim to apply DRL to prune CNNs by maximizing received rewards, which are designed to satisfy the overall budget constraints along side with network’s training accuracy. 
Note that automatic channel pruning by DRL is a difficult task because the action space is usually very large. Specifically, the discrete action space for the DRL agent is as large as the number of channels at each layer, and the action spaces may vary among layers since there are different numbers of channels in different layers. To facilitate pruning CNNs by DRL, for each layer, we first design a novel prediction component to estimate the importance of channels, and then develop a DRL-based component to learn the sparsity ratio of the layer, i.e., how many channels should be pruned.\nMore specifically, different from previous runtime channel pruning approaches, which only learn the runtime importance of each channel, we propose to learn both a runtime importance and, additionally, a static importance for each channel. While the runtime importance maintains the saliency of specific channels for each specific input, the static importance captures the overall saliency of the corresponding channel over the whole dataset. According to each type of channel importance, we further design different DRL agents (i.e., a runtime agent and a static agent) to learn a sparsity ratio in a layer-wise manner. The sparsity ratio learned by the runtime agent, together with the estimated runtime importance of channels, is used to generate runtime pruning structures, while the sparsity ratio learned by the static agent, together with the estimated static importance of channels, is used to generate static (permanent) pruning structures. By considering both pruning structures, our framework is able to provide a trade-off between storage efficiency and dynamic flexibility for runtime channel pruning.\nIn summary, our contributions are two-fold. First, we propose to prune channels by taking both runtime and static information of the environment into consideration. Runtime information endows pruning with flexibility based on different input instances, while static information reduces the number of parameters in deployment, leading to storage reduction, which cannot be achieved by conventional runtime pruning approaches. Second, we propose to use DRL to determine sparsity ratios, which is different from previous pruning approaches that manually set sparsity ratios. Extensive experiments demonstrate the effectiveness of our method." }, { "heading": "2 RELATED WORK AND PRELIMINARY", "text": "" }, { "heading": "2.1 STRUCTURE PRUNING", "text": "Wen et al. (2016) pioneered structure pruning in deep neural networks by imposing the L2,1 norm during training. Under the same framework, Liu et al. (2017) regarded parameters in batch normalization as a channel selection signal, which is minimized to achieve pruning during training. He et al. (2017) formulated channel pruning as a two-step iterative process including LASSO-regression-based channel selection and least-squares reconstruction. Luo et al. (2017) formulated channel pruning as the minimization of the difference of output features, which is solved by greedy selection. Zhuang et al. (2018) further considered early prediction, reconstruction loss and final loss to select important channels. Overall, structure pruning methods accelerate inference by producing regular and compact models. However, the imposed regularity requires preserving more parameters to ensure performance." }, { "heading": "2.2 DYNAMIC PRUNING", "text": "Dynamic pruning provides different pruning strategies according to the input data.
Wang et al. (2018) proposed to reduce computation by skipping layers or channels based on an analysis of input features. Luo & Wu (2018) proposed to use the layer input to learn channel importance, which is then binarized for pruning. Gao et al. (2019) applied the same framework and extended feature selection to both input and output features. Similarly, Liu & Deng (2018) introduced multiple branches for runtime inference according to the inputs; a gating module is learned to guide the flow of feature maps. Bolukbasi et al. (2017) learned to adaptively choose the components of a deep network to be evaluated for each input; early exit is introduced to accelerate computation. Dynamic pruning adaptively takes different actions for different inputs, which is able to accelerate the overall inference time. However, the original high-precision model needs to be stored, together with extra parameters for making the specified pruning actions. Rosenbaum et al. (2018) proposed to learn routers that route layer outputs to different subsequent layers, in order to adapt a network to multi-task learning." }, { "heading": "2.3 DEEP REINFORCEMENT LEARNING IN PRUNING", "text": "Channel selection has also been explored using deep reinforcement learning. Lin et al. (2017) trained an LSTM model to remember and provide a channel pruning strategy for a backbone CNN model, which is conducted using reinforcement learning techniques. He et al. (2018) proposed to determine the compression ratio of each layer by training an agent, regarding the pruning-retraining process as an environment." }, { "heading": "2.4 PRELIMINARY", "text": "Reinforcement Learning. We consider a standard setup of reinforcement learning: an agent sequentially takes actions over a sequence of time steps in an environment, in order to maximize the cumulative reward (Sutton & Barto, 1998). This problem can be formulated as a Markov Decision Process (MDP) given by a tuple (S, A, P, R, γ), where S is the state space, A is the action space, P : S × A × S → [0, 1] gives the transition probabilities, R : S × A → R is the reward function, and γ ∈ [0, 1) is the discount factor. The goal of reinforcement learning is to learn a policy π(a|s) that maximizes the cumulative reward over finite time steps,\n$\max_{\pi} \sum_{t=0}^{T} R(s_t, a_t)$,\nwhere $s_t ∈ S$ and $a_t ∈ A$ are the state and the action taken at time step t, respectively." }, { "heading": "3 DRL-BASED RUNTIME PRUNING FRAMEWORK", "text": "The overview of our proposed framework is presented in Fig. 1. To prune convolutional layer t, we learn two types of learnable channel importance: the runtime channel importance $u_r ∈ R^{C×1}$ and the static channel importance $u_s ∈ R^{C×1}$, where C is the number of channels in layer t. The runtime channel importance $u_r$ is generated by a subnetwork importance predictor f(·), which takes the input feature map $F_{in}$ as input, while the static channel importance $u_s$ is randomly initialized and updated during training. Both $u_r$ and $u_s$ indicate the channel importance of the full precision output feature map $F_{out}$ of the convolution layer. Channels are selected to be pruned according to the values of the elements of $u_r$ and $u_s$, and how many channels are selected is decided by the sparsity ratios $d_r$ and $d_s$, respectively. To learn the sparsity ratios $d_r$ and $d_s$, two DRL agents, the runtime agent and the static agent, are introduced, where actions $a^r_t$ and $a^s_t$ are defined to set the values of $d_r$ and $d_s$, respectively. The details of the two DRL agents are described in Sec. 3.3.
Consequently, a trade-off pruner g(·) is applied to balance the runtime and static pruning results; it outputs a decision mask M of binary values (1/0) indicating which channels are to be pruned (1: pruned, 0: preserved), as well as a unified channel importance vector $u ∈ R^{C×1}$, as follows:\n$[M, u] = g(u_r, u_s, d_r, d_s)$. (1)\nThe final output after pruning is constructed by multiplying the full precision output feature map $F_{out}$ by 1 − M and u as\n$\hat{F}_{out} = F_{out} ⊗ (1 − M) ⊗ u$, (2)\nwhere ⊗ is the broadcast element-wise multiplier, and 1 is the matrix of the same size as M with all elements being 1. In the following, we introduce how to learn the runtime channel importance vector $u_r$ and the static channel importance vector $u_s$ in Sec. 3.1, how to construct the trade-off pruner g(·) in Sec. 3.2, and how to design the two DRL agents in Sec. 3.3." }, { "heading": "3.1 LEARNABLE CHANNEL IMPORTANCE", "text": "We consider a convolutional layer that takes an input feature map $F_{in} ∈ R^{C_{in}×H_{in}×W_{in}}$ and generates an output feature map $F_{out} ∈ R^{C_{out}×H_{out}×W_{out}}$, where $C_*$, $H_*$ and $W_*$ are the number of channels, the width and the height of the feature map $F_*$, respectively. Each element of the channel importance vectors $u_r ∈ R^{C_{out}}$ and $u_s ∈ R^{C_{out}}$ represents the importance value of the corresponding channel. In the following, we drop the subscript out for simplicity of presentation." }, { "heading": "3.1.1 RUNTIME CHANNEL IMPORTANCE", "text": "As mentioned above, the runtime channel importance $u_r$ of the output feature $F_{out}$ is predicted by an importance predictor f(·), which takes $F_{in}$ as input. Therefore, $u_r$ can be considered a function of $F_{in}$, whose values vary over different input instances. In this paper, we design a subnetwork to approximate f(·), which is expected to be of small size and computationally efficient. Similar to many existing dynamic network pruning methods (Gao et al., 2019; Hu et al., 2018; Luo & Wu, 2018), we use a global pooling layer as the first layer of f(·), because global pooling is computationally efficient and reduces the dimension of $F_{in}$ dramatically. We then feed the output of the global pooling into a fully-connected layer without any activation function. The output of the fully-connected layer is the runtime channel importance vector $u_r$.\nWhich channels are preserved or pruned at runtime is determined according to the values of $u_r$. We denote by $M_r ∈ \{0, 1\}^C$ a mask for pruning, where a value of 0 means the corresponding channel is preserved, and 1 means it is pruned. For now, suppose a sparsity ratio $d_r$ for runtime pruning has already been generated via the runtime DRL agent, which will be introduced in Sec. 3.3. We then prune the $(C − \lceil d_r C \rceil)$ channels with the smallest importance values in $u_r$; accordingly, an element of $M_r$ is set to 1 if the corresponding channel is pruned, and 0 otherwise." }, { "heading": "3.1.2 STATIC CHANNEL IMPORTANCE", "text": "The static channel importance vector $u_s$ captures the global information for pruning, and is thus learned from the whole dataset. It is randomly initialized and learned through backpropagation. Similar to runtime channel pruning, given a sparsity ratio $d_s$ learned by the static DRL agent, the $(C − \lceil d_s C \rceil)$ channels with the smallest importance values in $u_s$ are pruned, and a mask $M_s ∈ \{0, 1\}^C$ is generated to indicate the static pruning results. (A minimal sketch of the importance predictor and the mask construction follows below.)" },
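The sketch below illustrates the importance predictor f(·) and the top-k mask construction just described (mask convention: 1 = pruned, 0 = preserved); the module layout is our own reading of the text, not the authors' released code.

```python
import math
import torch
import torch.nn as nn

class ImportancePredictor(nn.Module):
    """f(.): global average pooling followed by one linear layer (no activation)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.fc = nn.Linear(c_in, c_out)

    def forward(self, f_in):                 # f_in: (N, C_in, H, W)
        pooled = f_in.mean(dim=(2, 3))       # global average pooling -> (N, C_in)
        return self.fc(pooled)               # runtime importance u_r: (N, C_out)

def prune_mask(u, d):
    """Prune the (C - ceil(d*C)) channels with the smallest importance (1 = pruned)."""
    c = u.shape[-1]
    n_keep = math.ceil(d * c)
    idx = u.argsort(dim=-1, descending=True)       # most important channels first
    mask = torch.ones_like(u)
    mask.scatter_(-1, idx[..., :n_keep], 0.0)      # keep the top-n_keep channels
    return mask
```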
{ "heading": "3.2 TRADE-OFF PRUNER", "text": "With the runtime and static pruning decisions $M_r$ and $M_s$, we now propose a trade-off pruner to generate a unified channel pruning decision. The main idea behind the trade-off pruner is to 1) prune those channels which both decisions agree should be pruned, and 2) prune a portion of the remaining channels by weighted votes from both decisions.\nTo be specific, we define the mask representing channels pruned by both decisions as\n$M_o = M_s ∧ M_r$, (3)\nwhere ∧ is the element-wise logical AND, and 1/0 in a mask represents logical true or false. The channels indicated to be pruned by $M_o$ (i.e., whose corresponding values are 1) are pruned in the final decision. The channels determined to be pruned by $M_r$ but not by $M_s$ can be represented by a new mask $\bar{M}_r = M_r − M_o$. Similarly, the channels determined to be pruned by $M_s$ but not by $M_r$ can be represented by another new mask $\bar{M}_s = M_s − M_o$. To control the trade-off between $M_r$ and $M_s$, we define a rate $R_r$ denoting how much we trust the pruning decision made by $M_r$, while $1 − R_r$ is the trust placed in $M_s$. That means the channels selected by $\bar{M}_r$ will finally be pruned at the rate $R_r$. Specifically, the number of channels which are selected by $\bar{M}_r$ and will finally be pruned is\n$C'_r = \lfloor R_r(\mathbf{1}^\top \bar{M}_r) \rfloor$, (4)\nwhere $\mathbf{1}^\top \bar{M}_r$ returns the number of channels selected by $\bar{M}_r$. We then select the $C'_r$ channels with the smallest importance among those recommended to be pruned by $\bar{M}_r$ to form a mask $\hat{M}_r$. Similarly, for static pruning, we select the $C'_s$ channels with the smallest importance among those recommended to be pruned by $\bar{M}_s$ to form another mask $\hat{M}_s$, where $C'_s = \lfloor (1 − R_r)(\mathbf{1}^\top \bar{M}_s) \rfloor$. The final trade-off pruning mask is defined as\n$M = M_o + \hat{M}_r + \hat{M}_s$. (5)\nMoreover, in this work, the unified channel importance is simply defined as\n$u = u_r ⊗ u_s$. (6)\nWith the trade-off pruning mask M and the unified channel importance u, the pruned output feature $\hat{F}_{out}$ can be generated by Eq. (2)." },
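A minimal sketch of this pruner for a single layer, following Eqs. (3)-(6) with 1-D float masks (1 = pruned), could look as follows; it is our paraphrase of the equations rather than the authors' implementation.

```python
import torch

def tradeoff_prune(m_r, m_s, u_r, u_s, rate_r):
    """Combine the runtime and static masks (Eqs. (3)-(6)); 1 = pruned, 0 = kept."""
    m_o = m_r * m_s                                  # Eq. (3): channels both agree on
    only_r, only_s = m_r - m_o, m_s - m_o            # disputed channels (bar masks)
    m = m_o.clone()
    for only, u, rate in ((only_r, u_r, rate_r), (only_s, u_s, 1.0 - rate_r)):
        n = int(rate * only.sum().item())            # Eq. (4): floor of rate * count
        if n > 0:
            # among the disputed channels, prune the n least important ones
            masked_u = torch.where(only.bool(), u, torch.full_like(u, float("inf")))
            idx = masked_u.argsort()[:n]
            m[idx] = 1.0                             # contributes M_hat to Eq. (5)
    u = u_r * u_s                                    # Eq. (6): unified importance
    return m, u
```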
{ "heading": "3.3 DRL BASED PRUNING", "text": "In this section, we present how to formulate the problems of learning the ratios $d_r$ and $d_s$ for runtime and static pruning as MDPs, and how to solve them via DRL." }, { "heading": "3.3.1 DRL FOR RUNTIME PRUNING", "text": "In the MDP for runtime pruning, we consider the t-th layer of the network as the t-th time step. The details of the MDP are as follows.\nState. Given an input feature map $F_{in}$ of layer t, we pass it to a global pooling layer to reduce its dimension to $R^{C_{in}}$, where $C_{in}$ is the number of input channels of layer t. Since $C_{in}$ varies among layers, we feed the output of the global pooling to a layer-dependent encoder that projects it to a fixed-length vector $s^r_t$, which is considered the state representation of the DRL in the context of runtime pruning.\nAction. The action $a^r_t$ is defined as the sparsity ratio at layer t, instantiating $d_r$ in runtime pruning as described in Section 3.1.1. The existing DRL-based pruning method RNP (Lin et al., 2017) uses a unified discrete action space with k actions, which is too coarse to achieve high accuracy; however, a fine-grained discrete action space as large as the number of channels suffers from exploration difficulty. Therefore, instead of using discrete action spaces, we propose a continuous action space with $a^r_t ∈ (0, 1]$. To avoid over-pruning the filters and crashing the training, we set a minimum sparsity ratio α such that $a^r_t ∈ (α, 1]$.\nReward. The reward function is designed to consider both network accuracy and the computation budget. We define the accuracy-related reward based on the loss of the pruned backbone network,\n$R^r_{acc} = −L_{CNN}$, (7)\nwhere $L_{CNN}$ is the loss of the CNN. Its scale may vary among different training stages, i.e., large at the beginning of training and small near convergence. To avoid the instability brought by the reward scale, $R^r_{acc}$ is normalized by a moving average,\n$R^{r'}_{acc} = R^r_{acc} / β_b$, (8)\n$β_b = λ β_{b−1} + (1 − λ) R^r_{acc}$, (9)\nwhere $β_b$ is the moving average at the b-th training batch and λ is the moving weight.\nTo force the computation of the pruned network under a given computation budget, we define an exponential budget-related reward $R^r_{bud}$:\n$R^r_{bud} = \exp(α_1(\bar{B}_{com} − B_{com})) − 1$ if $B_{com} > \bar{B}_{com}$, and 0 otherwise, (10)\nwhere $B_{com}$ is the computation consumption, calculated based on the current pruning strategy, and $\bar{B}_{com}$ is the given computation budget constraint. Finally, we sum up the two rewards to form sparse rewards, which are non-zero only at the terminal step T and zero at other time steps t < T:\n$R^r_t = R^{r'}_{acc} + R^r_{bud}$ if $t = T$, and 0 if $t < T$. (11)\nActor-Critic Agent. To handle the continuous action space, we choose a commonly used actor-critic agent with a Gaussian policy. The actor-critic agent consists of two components: 1) the actor outputs the mean and variance that form the Gaussian policy from which the continuous action is sampled; 2) the critic outputs a scalar predicting the future discounted accumulated reward and assists the policy training. The actor and critic networks share a one-layer RNN that takes the state $s^r_t$ as input. The output of the RNN is fed into an actor-specific network constructed from two branches of fully-connected layers, producing the mean and variance of the Gaussian policy. The action is sampled from the Gaussian distribution output by the actor:\n$a^r_t ∼ N(μ(s^r_t; θ^r), σ(s^r_t; θ^r))$, (12)\nwhere $μ(s^r_t; θ^r)$ and $σ(s^r_t; θ^r)$ are the mean and variance output by the actor network. The critic-specific network has one fully-connected layer after the shared RNN and outputs the predicted value $V(s^r_t; θ^r)$.\nTo optimize the actor-critic agent, Proximal Policy Optimization (PPO) (Schulman et al., 2017) is used. Note that we relax the action $a^r_t$ to (−∞, +∞) in PPO, and use a truncation function to clip $a^r_t$ into (α, 1] when performing pruning.\nBesides, an additional regularizer is introduced to restrict the relaxed $a^r_t$ to the range (α, 1]:\n$L_a = \frac{1}{2} \| a^r_t − \max(\min(a^r_t, 1), α) \|^2_2$. (13)" }, { "heading": "3.3.2 DRL FOR STATIC PRUNING", "text": "Similar to runtime pruning, the MDP for static pruning is also formulated layer by layer. The differences from runtime pruning are the definitions of the state and the reward.\nState. The state $s^s_t$ in static pruning is defined by the full shape of $F_{out}$; it does not depend on the values of $F_{out}$ or on the current input data.\nAction. The action $a^s_t$ is sampled from the actor’s output Gaussian policy and sets the sparsity $d_s$ in static pruning as described in Section 3.1.2.\nReward. The reward function takes both network accuracy and the parameter budget into consideration. The accuracy-related reward is defined the same as in runtime pruning:\n$R^s_{acc} = R^{r'}_{acc}$. (14)\nTo reduce the number of parameters of the network so as to satisfy the parameter storage budget, the parameter-related reward is defined in an exponential form as\n$R^s_{param} = \exp(α_2(\bar{B}_{param} − B_{param})) − 1$ if $B_{param} > \bar{B}_{param}$, and 0 otherwise, (15)\nwhere $B_{param}$ is the number of preserved parameters after static pruning and $\bar{B}_{param}$ is the parameter storage budget.\nActor-Critic Agent. This agent is similar to the one in runtime pruning. It has the same architecture but differs in introducing a fully-connected layer as the encoder before the RNN. This agent is also optimized by PPO." },
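For concreteness, the reward shaping of Eqs. (8)-(11) and (15) can be sketched as follows; the initialization of the moving average β and the handling of the very first batch are assumptions, since the text does not specify them.

```python
import math

class RewardTracker:
    """Sparse terminal reward: normalized accuracy term plus exponential budget penalty."""
    def __init__(self, lam=0.99, alpha=1.0):
        # beta's starting value is an assumption; the paper only gives the update rule
        self.beta, self.lam, self.alpha = 1.0, lam, alpha

    def accuracy_reward(self, cnn_loss):
        r_acc = -cnn_loss                                          # Eq. (7)
        self.beta = self.lam * self.beta + (1 - self.lam) * r_acc  # Eq. (9)
        return r_acc / self.beta                                   # Eq. (8)

    def budget_reward(self, used, budget):
        if used <= budget:
            return 0.0                                             # within budget
        return math.exp(self.alpha * (budget - used)) - 1.0        # Eq. (10)/(15): < 0
```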
}, { "heading": "3.4 INFERENCE", "text": "In inference, the static agent is not required any more because the static pruning strategy does not depend on individual input data points but the full shape of Fin. Therefore, the output action ast is fixed to each layer t. With the action ast and the rate Rr, we can decide which filters can be pruned permanently. Specifically, channels with ((1 − ast )(1 − Rr))-smallest static importance values are pruned permanently." }, { "heading": "4 EXPERIMENT", "text": "We evaluate our DRL pruning framework on two popular datasets: CIFAR-10 (Krizhevsky, 2009) and ImageNet ILSVRC2012 (Russakovsky et al., 2015), to show the advantage over other channel pruning methods. We analyze the effect of hyper-parameters and different sparsity settings on CIFAR-10. For CIFAR-10, we use M-CifarNet (Zhao et al., 2018) as the backbone CNN. On ImageNet ILSVRC2012, ResNet-18 is used as the backbone CNN." }, { "heading": "4.1 IMPLEMENTATION DETAILS", "text": "We start with a pretrained backbone CNN. Firstly we finetune the backbone CNN and train runtime importance predictor jointly, with sparsity dr = 1 and fixed all static pruning importance us to 1. Then we remove the restriction on the static pruning importance us, and train static pruning importance, the backbone CNN and the runtime importance predictor, with sparsity ds = 1 and runtime pruning sparsity fixed as dr = 1. After finetuning, we use the DRL agents to predict the sparsity given computation and storage constraints. The DRL agents and the CNN with runtime/static importance are trained in alternating manner: We first fix the CNN as well as runtime/static importance and train two DRL agents, regarding the CNN as environments. Then we fix two agents and finetune the CNN and runtime/static importance. We repeat these two steps until convergence is achieved. We use Adam optimizer for both DRL agent and CNN, and set learning rate at 10−6 for the DRL agents. For CNN finetuning and runtime/static importance training, the learning rate is set to 10−3 on CIFAR-10. On ImageNet ILSVRC2012, the learning rate starts from 10−3 and is divided by 10 after 15 millions iterations." }, { "heading": "4.2 EXPERIMENTAL RESULTS ON CIFAR-10", "text": "We compare our proposed method with the following state-of-the-art runtime pruning methods: FBS (Gao et al., 2019), RNP (Lin et al., 2017) on CIFAR-10. The comparison results at sparsity 0.5 and 0.7 are shown in Table 1 and Table 2 respectively. Note that for fair comparison with other methods, the computation and storage budget constraints in our method is calculated according to the sparsity of other methods. Under these constraints, our method does not necessarily lead to the same sparsity as other methods in each layer. RNP cannot set exact sparsity ratio. Instead, its average sparsity ratio is accessible only during testing, which is 0.537 in Table 1. The result of FBS is reproduced using the released code1. The column #Params represents the number of parameters compared to the backbone CNN.\nTable 1 shows that our method outperforms other state-of-the-art methods, achieving highest accuracy at an overall sparsity ratio of 0.5. Our method has very close computation speed-up compared to FBS, but outperforms FBS around 0.48% to 0.76%. When the runtime pruning strategy is solely considered by setting Rr = 1, our method surpasses other comparison methods, indicating that our DRL-based framework improves the performance of channel runtime pruning. 
By balancing runtime and static pruning via setting Rr = 0.5, our method reduces the number of overall stored parameters and achieves a lower accuracy drop than other methods. Table 2 shows that our method outperforms FBS at a sparsity of 0.7. When Rr = 0.5, our method achieves better performance than the baseline CNN with a 2× speed-up and contains fewer parameters. We also study the relation between Rr and network compactness in our framework. Fig. 2 demonstrates the impact of Rr when the sparsity is 0.45. The hyper-parameter Rr determines how much we trust runtime pruning. With Rr close to 1, the accuracy becomes higher due to the greater dynamic network flexibility, but the parameter storage space also increases. When Rr diminishes, the network accuracy decreases but the parameter storage is reduced.\nFig. 3 shows the performance of various sparsity ratios in our method. Again, our method does not prune with one single sparsity ratio for all layers, but uses the sparsity ratio to calculate computation and storage constraints, with which the sparsity ratio is learned for each layer. Fig. 3 demonstrates that our method maintains the accuracy when the sparsity is larger than about 0.5, which corresponds to about 4× computational acceleration.\n1https://github.com/deep-fry/mayo" }, { "heading": "4.3 EXPERIMENTAL RESULTS ON IMAGENET ILSVRC2012", "text": "We compare our method with state-of-the-art channel pruning methods on ImageNet ILSVRC2012, as shown in Table 3. In this experiment, we use ResNet-18 as the backbone CNN. Among the state-of-the-art pruning methods for comparison, FBS (Gao et al., 2019) and CGNN (Hua et al., 2018) are runtime pruning methods. The overall sparsity ratio of our method is 0.7, which is the same setting as FBS. Our method with Rr = 0.5 achieves the smallest top-1 accuracy drop compared with other methods, and also achieves the highest top-1 accuracy after pruning. Overall, our proposed method achieves comparable or better performance compared to other methods with more acceleration. Our method has MACs very close to those of FBS, while the number of preserved parameters is reduced to 81.2% of the baseline." }, { "heading": "5 CONCLUSION", "text": "In this paper, we present a deep reinforcement learning based framework for deep neural network channel pruning in both runtime and static schemes. Specifically, channels are pruned according to the input feature in runtime pruning, and based on the entire training dataset in static pruning, with two reinforcement learning agents determining the corresponding sparsity. Our method combines the merits of runtime and static pruning, and provides a trade-off between storage and dynamic flexibility. Extensive experiments demonstrate the effectiveness of our proposed method." }, { "heading": "A ADDITIONAL EXPERIMENTAL RESULTS", "text": "A.1 COMPARISON TO SEPARATE STATIC AND RUNTIME PRUNING\nIn this section, we compare our method with two additional baseline methods. One is a variation of our method that trains static pruning and runtime pruning separately. In this method, we start from a pretrained backbone CNN, f(·) and us. Then we add the static DRL agent to prune channels statically by learning the static policy and us. Finally, we add the runtime DRL agent to prune channels dynamically, by fixing the static DRL agent and us, and updating the runtime DRL agent and f(·) only. The other method is to combine state-of-the-art static and runtime pruning methods.
We start from a pretrained backbone CNN, then prune channels with the static pruning method FPGM (He et al., 2019), and finally prune channels with the runtime method FBS (Gao et al., 2019). The experimental results are shown in Table 4.\nA.2 STORAGE/ACCURACY TRADE-OFF\nBesides Fig. 2, to further illustrate the trade-off between storage and accuracy, we show additional results in Fig. 4." }, { "heading": "B PSEUDOCODE OF TRAINING PROCESS", "text": "Algorithm 1: Training process\nINPUT: pretrained backbone CNN, computation budget B̄com, storage budget B̄param\nOUTPUT: backbone CNN, importance predictor f(·), static pruning importance us, runtime and static DRL agents\nAdd the runtime importance predictor f(·) and static pruning importance us; us ← 1; ds ← 1; dr ← 1;\nwhile not converged do\n  Fix us, update f(·) and the backbone CNN;\nend\nwhile not converged do\n  Update us, f(·) and the backbone CNN;\nend\nAdd the runtime and static DRL agents to predict actions a^r_t and a^s_t, which take the place of dr and ds at each layer t;\nwhile not converged do\n  for i ← 1 to N1 do\n    Forward the entire model;\n    Compute rewards R^r_t and R^s_t using budgets B̄com and B̄param;\n    Fix us, f(·) and the backbone CNN; update the runtime and static DRL agents by the PPO loss;\n  end\n  for i ← 1 to N2 do\n    Fix the runtime and static DRL agents; update us, f(·) and the backbone CNN by the cross-entropy loss;\n  end\nend" }, { "heading": "C TRAINING CURVE", "text": "We show the training curve of our method in Fig. 5. We train our method on CIFAR-10 at sparsity 0.5 with Rr = 0.5. The figure includes the accuracy curves evaluated on the training set and the test set, showing that our method is stable during training." } ]
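As a companion to the pseudocode above, here is a minimal Python sketch of the same alternating schedule. The concrete components (warm-up steps, PPO updates, finetuning) are abstracted behind callables, and all hook names here are hypothetical rather than taken from a released implementation.

```python
from typing import Callable


def alternating_train(
    warmup_predictor: Callable[[], bool],  # one step of phase 1 (us fixed to 1); True when converged
    warmup_static: Callable[[], bool],     # one step of phase 2 (us also learned); True when converged
    agent_step: Callable[[], None],        # one PPO update of both DRL agents, CNN frozen
    finetune_step: Callable[[], None],     # one cross-entropy update of CNN/f/us, agents frozen
    n_outer: int = 100,
    N1: int = 10,
    N2: int = 10,
) -> None:
    """Structure of Algorithm 1: two warm-up phases, then alternation."""
    while not warmup_predictor():   # finetune CNN and runtime importance predictor f
        pass
    while not warmup_static():      # additionally learn the static importance us
        pass
    for _ in range(n_outer):        # alternating optimization
        for _ in range(N1):
            agent_step()            # forward the model, compute budgeted rewards, PPO loss
        for _ in range(N2):
            finetune_step()         # finetune the network with the agents fixed
```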
2019
null
SP:95408bc20d6c07d7f4d7239faf8f08969e2f8722
[ "This paper presents a framework for learning hierarchical policies using a latent variable conditioned policy operating at the low level, with model based planning at the high level. Unlike prior work which does hierarchical reinforcement learning, the key technical contribution of this work is that they use planning with a latent dynamics model as their high level policy. They demonstrate the method on a humanoid walking task in the DeepMimic [1] environment.", "This paper proposes a latent variable model to perform imitation learning. The authors propose the model in the control-as-inference framework and introduce two additional latent variables: one that represents a latent state (z) and another that represents a latent action (h). For the generative model, the authors use a sequence latent variable model. For inferring the latent action, the authors use a particle filter. For inferring the states, the authors use an \"Adaptive path-integral autoencoder,\" though it was unclear where the controls \"u\" come from. (I assume u is the same as the actions, at which point inferring the states amounts to rollout the policy in the sequence latent variable model). The authors compare to not having the latent states and/or not having the latent actions, and demonstrate that they get better imitation learning scores." ]
We present a hierarchical planning and control framework that enables an agent to perform various tasks and adapt to a new task flexibly. Rather than learning an individual policy for each particular task, the proposed framework, DISH, distills a hierarchical policy from a set of tasks by self-supervision and reinforcement learning. The framework is based on the idea of latent variable models that represent high-dimensional observations using low-dimensional latent variables. The resulting policy consists of two levels of hierarchy: (i) a planning module that reasons about a sequence of latent intentions that would lead to an optimistic future and (ii) a feedback control policy, shared across the tasks, that executes the inferred intention. Because the reasoning is performed in a low-dimensional latent space, the learned policy can immediately be used to solve or adapt to new tasks without additional training. We demonstrate that the proposed framework can learn compact representations (3- and 1-dimensional latent states and commands for a humanoid with 197- and 36-dimensional state features and actions) while solving a small number of imitation tasks, and the resulting policy is directly applicable to other types of tasks, i.e., navigation in cluttered environments. The supplementary video is available at: https://bit.ly/2rwIfQn
[]
[ { "authors": [ "Brandon Amos", "Ivan Jimenez", "Jacob Sacks", "Byron Boots", "J Zico Kolter" ], "title": "Differentiable MPC for end-to-end planning and control", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Kurtland Chua", "Roberto Calandra", "Rowan McAllister", "Sergey Levine" ], "title": "Deep reinforcement learning in a handful of trials using probabilistic dynamics models", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "John D Co-Reyes", "YuXuan Liu", "Abhishek Gupta", "Benjamin Eysenbach", "Pieter Abbeel", "Sergey Levine" ], "title": "Self-consistent trajectory autoencoder: Hierarchical reinforcement learning with trajectory embeddings", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Erwin Coumans" ], "title": "Bullet physics library", "venue": "Open source: bulletphysics. org,", "year": 2013 }, { "authors": [ "Marc Peter Deisenroth", "Dieter Fox", "Carl Edward Rasmussen" ], "title": "Gaussian processes for dataefficient learning in robotics and control", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2015 }, { "authors": [ "Benjamin Eysenbach", "Abhishek Gupta", "Julian Ibarz", "Sergey Levine" ], "title": "Diversity is all you need: Learning skills without a reward function", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Carlos Florensa", "Yan Duan", "Pieter Abbeel" ], "title": "Stochastic neural networks for hierarchical reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Marco Fraccaro", "Simon Kamronn", "Ulrich Paquet", "Ole Winther" ], "title": "A disentangled recognition and nonlinear dynamics model for unsupervised learning", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2017 }, { "authors": [ "Abhishek Gupta", "Russell Mendonca", "YuXuan Liu", "Pieter Abbeel", "Sergey Levine" ], "title": "Metareinforcement learning of structured exploration strategies", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "David Ha", "Jürgen Schmidhuber" ], "title": "Recurrent world models facilitate policy evolution", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jung-Su Ha", "Hyeok-Joo Chae", "Han-Lim Choi" ], "title": "Approximate inference-based motion planning by learning and exploiting low-dimensional latent variable models", "venue": "IEEE Robotics and Automation Letters,", "year": 2018 }, { "authors": [ "Jung-Su Ha", "Young-Jin Park", "Hyeok-Joo Chae", "Soon-Seo Park", "Han-Lim Choi" ], "title": "Adaptive pathintegral autoencoders: Representation learning and planning for dynamical systems", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Tuomas Haarnoja", "Kristian Hartikainen", "Pieter Abbeel", "Sergey Levine" ], "title": "Latent space policies for hierarchical reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Danijar Hafner", "Timothy P. 
Lillicrap", "Ian Fischer", "Ruben Villegas", "David Ha", "Honglak Lee", "James Davidson" ], "title": "Learning latent dynamics for planning from pixels", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Karol Hausman", "Jost Tobias Springenberg", "Ziyu Wang", "Nicolas Heess", "Martin Riedmiller" ], "title": "Learning an embedding space for transferable robot skills", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Lukasz Kaiser", "Mohammad Babaeizadeh", "Piotr Milos", "Blazej Osinski", "Roy H Campbell", "Konrad Czechowski", "Dumitru Erhan", "Chelsea Finn", "Piotr Kozakowski", "Sergey Levine" ], "title": "Model-based reinforcement learning for atari", "venue": null, "year": 1903 }, { "authors": [ "Hilbert Johan Kappen", "Hans Christian Ruiz" ], "title": "Adaptive importance sampling for control and inference", "venue": "Journal of Statistical Physics,", "year": 2016 }, { "authors": [ "Maximilian Karl", "Maximilian Soelch", "Justin Bayer", "Patrick van der Smagt" ], "title": "Deep variational bayes filters: Unsupervised learning of state space models from raw data", "venue": "International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Rahul G Krishnan", "Uri Shalit", "David Sontag" ], "title": "Structured inference networks for nonlinear state space models", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Youngwoon Lee", "Shao-Hua Sun", "Sriram Somasundaram", "Edward Hu", "Joseph J Lim" ], "title": "Composing complex skills by learning transition policies with proximity reward induction", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Sergey Levine" ], "title": "Reinforcement learning and control as probabilistic inference: Tutorial and review", "venue": "arXiv preprint arXiv:1805.00909,", "year": 2018 }, { "authors": [ "Corey Lynch", "Mohi Khansari", "Ted Xiao", "Vikash Kumar", "Jonathan Tompson", "Sergey Levine", "Pierre Sermanet" ], "title": "Learning latent plans from play", "venue": null, "year": 1903 }, { "authors": [ "Josh Merel", "Arun Ahuja", "Vu Pham", "Saran Tunyasuvunakool", "Siqi Liu", "Dhruva Tirumala", "Nicolas Heess", "Greg Wayne" ], "title": "Hierarchical visuomotor control of humanoids", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Josh Merel", "Leonard Hasenclever", "Alexandre Galashov", "Arun Ahuja", "Vu Pham", "Greg Wayne", "Yee Whye Teh", "Nicolas Heess" ], "title": "Neural probabilistic motor primitives for humanoid control", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Mustafa Mukadam", "Jing Dong", "Xinyan Yan", "Frank Dellaert", "Byron Boots" ], "title": "Continuous-time gaussian process motion planning via probabilistic inference", "venue": "The International Journal of Robotics Research,", "year": 2018 }, { "authors": [ "Masashi Okada", "Luca Rigazio", "Takenobu Aoshima" ], "title": "Path integral networks: End-to-end 
differentiable optimal control", "venue": "arXiv preprint arXiv:1706.09597,", "year": 2017 }, { "authors": [ "Xue Bin Peng", "Pieter Abbeel", "Sergey Levine", "Michiel van de Panne" ], "title": "DeepMimic: Exampleguided deep reinforcement learning of physics-based character skills", "venue": "ACM Trans. Graph.,", "year": 2018 }, { "authors": [ "Xue Bin Peng", "Michael Chang", "Grace Zhang", "Pieter Abbeel", "Sergey Levine" ], "title": "Mcp: Learning composable hierarchical control with multiplicative compositional policies", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Alexandre Piche", "Valentin Thomas", "Cyril Ibrahim", "Yoshua Bengio", "Chris Pal" ], "title": "Probabilistic planning with sequential monte carlo methods", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Konrad Rawlik", "Marc Toussaint", "Sethu Vijayakumar" ], "title": "On stochastic optimal control and reinforcement learning by approximate inference", "venue": "In Robotics: Science and Systems,", "year": 2012 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Archit Sharma", "Shixiang Gu", "Sergey Levine", "Vikash Kumar", "Karol Hausman" ], "title": "Dynamics-aware unsupervised discovery of skills", "venue": "arXiv preprint arXiv:1907.01657,", "year": 2019 }, { "authors": [ "David Silver", "Julian Schrittwieser", "Karen Simonyan", "Ioannis Antonoglou", "Aja Huang", "Arthur Guez", "Thomas Hubert", "Lucas Baker", "Matthew Lai", "Adrian Bolton" ], "title": "Mastering the game of go without human knowledge", "venue": null, "year": 2017 }, { "authors": [ "Kihyuk Sohn", "Honglak Lee", "Xinchen Yan" ], "title": "Learning structured output representation using deep conditional generative models", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Aviv Tamar", "Yi Wu", "Garrett Thomas", "Sergey Levine", "Pieter Abbeel" ], "title": "Value iteration networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Emanuel Todorov" ], "title": "Optimality principles in sensorimotor control", "venue": "Nature neuroscience,", "year": 2004 }, { "authors": [ "Emanuel Todorov" ], "title": "General duality between optimal control and estimation", "venue": "In IEEE Conference on Decision and Control,", "year": 2008 }, { "authors": [ "Emanuel Todorov", "Zoubin Ghahramani" ], "title": "Unsupervised learning of sensory-motor primitives", "venue": "In Engineering in Medicine and Biology Society,", "year": 2003 }, { "authors": [ "Marc Toussaint" ], "title": "Robot trajectory optimization using approximate inference", "venue": "In Proceedings of the 26th annual international conference on machine learning,", "year": 2009 }, { "authors": [ "Marc Toussaint", "Kelsey Allen", "Kevin Smith", "Joshua B Tenenbaum" ], "title": "Differentiable physics and stable modes for tool-use and manipulation planning", "venue": "In Robotics: Science and Systems,", "year": 2018 }, { "authors": [ "Paul Vernaza", "Daniel D Lee" ], "title": "Learning and exploiting low-dimensional structure for efficient holonomic motion planning in high-dimensional spaces", "venue": "The International Journal of Robotics Research,", "year": 2012 }, { "authors": [ "Manuel Watter", "Jost Springenberg", "Joschka 
Boedecker", "Martin Riedmiller" ], "title": "Embed to control: A locally linear latent dynamics model for control from raw images", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Grady Williams", "Nolan Wagener", "Brian Goldfain", "Paul Drews", "James M Rehg", "Byron Boots", "Evangelos A Theodorou" ], "title": "Information theoretic MPC for model-based reinforcement learning", "venue": "In International Conference on Robotics and Automation (ICRA),", "year": 2017 }, { "authors": [ "Brian D Ziebart", "Andrew L Maas", "J Andrew Bagnell", "Anind K Dey" ], "title": "Maximum entropy inverse reinforcement learning", "venue": "In AAAI,", "year": 2008 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) aims to compute the optimal control policy while an agent interacts with the environment. Recent advances in deep learning enable RL frameworks to utilize deep neural networks to efficiently represent and learn a policy having a flexible and expressive structure. As a result, we’ve been witnessing RL agents that already achieved or even exceeded human-level performances in particular tasks (Mnih et al., 2015; Silver et al., 2017). The core of intelligence, however, is not just to learn a policy for a particular problem instance, but to solve various multiple tasks or immediately adapt to a new task. Given that a huge computational burden makes it unrealistic to learn an individual policy for each task, an agent should be able to reason about its action. If predictions about consequences of actions are available, e.g., by using an internal model (Ha & Schmidhuber, 2018; Kaiser et al., 2019), an intelligent agent can plan a sequence of its actions. Involving planning procedures in a control policy could provide adaptiveness to an agent, but it is often not trivial to learn such a prediction & planning framework: First, it is difficult to obtain the exact internal dynamic model directly represented in high-dimensional state (observation) space. Model errors inevitably become larger in the high-dimensional space and are accumulated along the prediction/planning horizon. This prohibits planning methods from producing a valid prediction and so a sensible plan. Second, and perhaps more importantly, planning methods cannot help but relying on some dynamic programming or search procedures, which quickly become intractable for problems with high degrees of freedom (DOFs) because the size of search space grows exponentially with DOFs, i.e., the curse of dimensionality (LaValle, 2006).\nCrucial evidence found in the cognitive science field is that there exists a certain type of hierarchical structure in the humans’ motor control scheme addressing the aforementioned fundamental difficulty (Todorov & Ghahramani, 2003; Todorov, 2004). Such a hierarchical structure is known to utilize two levels of parallel control loops, operating in different time scales; in a coarser scale, the high-level loop generates task-relevant commands for the agent to perform a given task, and then in a finer time scale, the low-level loop maps those commands into control signals while actively reacting to disturbances that the high-level loop could not consider (e.g., the spinal cord) (Todorov\n& Ghahramani, 2003). Because the low-level loop does not passively generate control signals from high-level commands, the high-level loop is able to focus only on the task-relevant aspects of the environment dynamics that can be represented in a low-dimensional form. Consequently, this hierarchical structure allows us for efficiently predicting and planning the future states to compute the commands.\nMotivated by this evidence, we propose a framework, termed \"DISH\", that DIStills a Hierarchical structure for reasoning and control. As depicted in Fig. 1, the proposed framework has two levels of hierarchy. The high-level loop represents an agent’s current state as a lowdimensional latent state and generates/reasons task-relevant high-level commands by predicting and planning the future in the latent space. The low-level loop receives the high-level commands as well as the current states and maps them into the high-dimensional control signal. 
Two different types of learning are required to build such a framework: (i) a lowdimensional latent representation for an internal model should be obtained from agent’s own experiences via self-supervised learning; (ii) a control policy should be learned while interacting with the environment via reinforcement learning.\nWe combined these two learning problems by transforming a multitask RL problem into generative model learning using the control-inference duality (Levine, 2018; Todorov, 2008; Rawlik et al., 2012). In this perspective, an agent equipped with a low-level control policy is viewed as a generative model that outputs trajectories according to high-level commands. Reasoning the high-level commands is then considered as a posterior inference problem; we introduce a low-dimensional internal model to make this inference tractable. We demonstrate that the proposed framework can learn the compact representation (3-dimensional latent states for a humanoid robot having 90-dimensional states) and the control policy while solving a small number of imitation tasks, and the learned planning and control scheme is immediately applicable to new tasks, e.g., navigation through a cluttered environment." }, { "heading": "2 RELATED WORK", "text": "Hierarchical RL: To apply task-specific policies learned from individual RL problems to various tasks, hierarchical structures are often considered where each learned policy serves as a low-level controlller, i.e., as a \"skill\", and a high-level controller selects which skills to perform in the context the agent lies at (Peng et al., 2018; 2019; Merel et al., 2019a; Lee et al., 2019). Peng et al. (2018; 2019) trained robust control policies for imitating a broad range of example motion clips and integrated multiple skills into a composite policy capable of executing various tasks. Merel et al. (2019a) similarly trained many imitation policies and utilized them as individual skills that a high-level controller chooses based on the visual inputs. Lee et al. (2019) included transition policies which help the agent smoothly switch between the skills. Another line of approaches is using continuous-valued latent variables to represent skills (Co-Reyes et al., 2018; Gupta et al., 2018; Eysenbach et al., 2019; Florensa et al., 2017; Hausman et al., 2018). Co-Reyes et al. (2018) proposed an autoencoder-like framework where an encoder compresses trajectories into latent variables, a state decoder reconstructs trajectories, and a policy decoder provides a control policy to follow the reconstructed trajectory. Gupta et al. (2018); Eysenbach et al. (2019); Florensa et al. (2017) also introduced latent variables to efficiently represent various policies. Instead of using one static latent variable, Merel et al. (2019b) proposed a framework that encodes expert’s demonstrations as latent trajectories and infers a latent trajectory from an unseen skill for one-shot imitation. Haarnoja et al. (2018a) proposed a hierarchical structure for RL problems where marginalization of low-level actions provides a new system for high-level action. In their framework, policies at all levels can be learned with different reward functions such that a high-level policy becomes easier to be optimized from the marginalization.\nNote that the above hierarchical RL approaches train the high-level policy by solving another RL problem; because the individual skill or the latent variables compress dynamics of the agent,\nvariations of them provide efficient exploration for the high-level RL. 
Our framework also considers low-dimensional and continuous latent trajectories to represent various policies. Rather than learning a high-level policy, however, our framework learns an internal model with which the high-level module performs reasoning; the agent can efficiently reason its high-level commands by searching the low-dimensional latent space with the learned internal model. The learned planning/control structure is then directly applicable to new sets of tasks the agent hasn’t met during training. Only a few recent works (Hafner et al., 2019; Sharma et al., 2019) incorporated reasoning processes into high-level modules, but neither of them exploits low-dimensional latent space for planning (Sharma et al., 2019) nor low-dimensional commands (Hafner et al., 2019). Our ablation study in Section 4.1 shows the effectiveness of utilizing both latent states and commands and, to our best knowledge, DISH is the first framework doing so.\nModel-based RL & Learning to Plan: Model-based RL algorithms attempt to learn the agent’s dynamics and utilize the planning and control methods to perform tasks (Williams et al., 2017; Deisenroth et al., 2015; Chua et al., 2018). Williams et al. (2017); Chua et al. (2018) utilized deep neural networks to model the dynamics and adopted the model predictive control method on the learned dynamics; Deisenroth et al. (2015) used the Gaussian processes as system dynamics, which leads to the efficient and stable policy search. Though these methods have shown impressive results, they are not directly applicable to systems having high DOFs because high-dimensional modeling is hard to be exact and even advanced planning and control methods are not very scalable to such systems. One exceptional work was proposed by Ha & Schmidhuber (2018), where the variational autoencoder and the recurrent neural network are combined to model the dynamics of the observation. They showed that a simple linear policy w.r.t the low-dimensional latent state can control the low DOFs agent, but (i) high-DOFs systems require a more complicated policy structure to output highdimensional actions and (ii) reasoning (or planning) by predicting the future is essential to solve a set of complex tasks. On the other hand, Ha et al. (2018a;b) trained the low-dimensional latent dynamics from expert’s demonstrations and generated motion plans using the learned dynamics; the high-dimensional motion plans were able to be computed efficiently, but the control policy for executing those plans was not considered. Some recent works have attempted to build the policy network in such way that resembles the advanced planning and optimal control methods: Tamar et al. (2016) encoded the value iteration procedures into the network; Okada et al. (2017); Amos et al. (2018) wired the network so as to resemble the path-integral control and the iterative LQR methods, respectively. The whole policy networks are trained end-to-end and, interestingly, system dynamics and a cost function emerge during the learning procedure. However, these methods were basically designed just to mimic the expert’s behaviors, i.e., addressing inverse RL problems, and also tried to find the control policy directly in the (possibly high-dimensional) state space." 
}, { "heading": "3 DISH: DISTILLING HIERARCHY FOR PLANNING AND CONTROL", "text": "" }, { "heading": "3.1 MULTITASK RL AS LATENT VARIABLE MODEL LEARNING", "text": "Suppose that a dynamical system with states s ∈ S is controlled by actions a ∈ A, where the states evolve with the stochastic dynamics p(sk+1|sk,ak) from the initial states p(s1). Let r̃k(sk,ak) denote a reward function that the agent wants to maximize with the control policy πθ(ak|sk). Reinforcement learning problems are then formulated as the following optimization problem:\nθ∗ = arg max θ Eqθ(s1:K ,a1:K) [ K∑ k=1 r̃k(sk,ak) ] , (1)\nwhere the controlled trajectory distribution qθ is given by:\nqθ(s1:K ,a1:K) ≡ p(s1) K∏ k=1 p(sk+1|sk,ak)πθ(ak|sk). (2)\nBy introducing an artificial binary random variable ot, called the optimality variable, whose emission probability is given by exponential of a state-dependent reward, i.e. p(Ok = 1|sk) = exp (rk(sk)), and by defining an appropriate action prior p(a) and corresponding the uncontrolled trajectory distribution, p(s1:K ,a1:K) ≡ p(s1) ∏K k=1 p(sk+1|sk,ak)p(ak), we can view the above RL problem as a probabilistic inference problem for a graphical model in Fig 2(a). The objective of such an\ninference problem is to find the optimal variational parameter, θ, such that the controlled trajectory distribution qθ(s1:K ,a1:K) fits the posterior distribution p(s1:K ,a1:K |O1:K = 1) best. More detailed derivations of this duality can be found in Appendix A.2 or in the tutorial paper (Levine, 2018).\nRather than solving one particular task, i.e., one reward function, agents are often required to perform various tasks. Let T be a set of tasks, and πθ∗t (ak|sk) be the optimal policy for t th task, i.e.,\nθ∗t = arg max θt Eqθt (s1:K ,a1:K) [ K∑ k=1 r̃ (t) k (sk,ak) ] , ∀t ∈ T . (3)\nFor high DOF systems, where policies πθt represent a mapping from a high-dimensional state space to a high-dimensional action space, individually optimizing each policy is computationally too expensive. Instead of doing so, we can assume that tasks the agent needs to perform require similar solution properties and consequently the optimal policies have some sort of common structures. We can then introduce a low-dimensional latent variable h(t) that, by compressing a particular aspect of πθt over all the policies, each policy can be conditioned on as πθ(ak|sk,h(t)). Such a hierarchical structure is depicted as Fig. 2(b), where h can be interpreted as high-level commands. We can then define the uncontrolled and the task t’s controlled trajectory distributions as\np(s1:K ,a1:K ,h1:K) ≡ p(s1) K∏ k=1 p(sk+1|sk,ak)p(ak)p(hk), (4)\nq (t) θ (s1:K ,a1:K ,h1:K) ≡ p(s1) K∏ k=1 p(sk+1|sk,ak)πθ(ak|sk,hk)q(t)(hk|sk), (5)\nreceptively. In other words, the control policy πθ is shared across all the tasks, actively mapping high-level commands h, into actual actions, a. Only high-level commands vary with the given task specifications. In the perspective of control as inference, a corresponding inference problem now has two parts: one for the policy parameter θ and another for the task-specific commands h. Note that, if a high-level policy π̄θ(h|s) is used to compute high-level commands, the learning problem then becomes the standard Hierarchical RL (HRL). We instead introduce a reasoning module to generate high-level commands which infers the optimal h for a given task t and a current state s by predicting futures. 
As often used in many HRL methods, the high-level module of the proposed framework operates at a coarser time scale than the low-level policy does.\nSimilar to the latent model learning in Appendix A.3 and the control-inference duality in Appendix A.2, we can derive the following lower bound of the optimality likelihood L(t) for a task t:\n$\log p_\theta(O^{(t)}_{1:K} = 1) = \log \int p(O^{(t)}_{1:K} = 1|s_{1:K})\,\frac{p(\tau)}{q^{(t)}_\theta(\tau)}\,q^{(t)}_\theta(\tau)\,d\tau \geq \mathbb{E}_{q^{(t)}_\theta(\tau)}\left[\sum_{k=1}^{K} r^{(t)}_k(s_k) - \log\frac{\pi_\theta(a_k|s_k, h_k)}{p(a_k)} - \log\frac{q^{(t)}(h_k|s_k)}{p(h_k)}\right] \equiv \mathcal{L}^{(t)}(\theta, q)$, (6)\nwhere τ ≡ (s1:K, a1:K, h1:K). This suggests a novel learning scheme for the hierarchical policy in Equation 5: (i) For a given task t and a fixed low-level policy πθ, the high-level commands hk are computed via variational inference. This inference procedure q(h|s) should take predictions about future rewards into account to generate h, which can be interpreted as planning. To do so, we build an internal model via self-supervised learning and perform planning with the internal model. (ii) With the planning module equipped, the low-level policy πθ(a|s, h) generates control actions a as in RL problems, and it can be trained using standard deep RL algorithms (Schulman et al., 2017; Haarnoja et al., 2018b)." }, { "heading": "3.2 SELF-SUPERVISED LEARNING OF INTERNAL MODEL", "text": "The role of q(h|s) is to compute the high-level commands that would lead to maximum accumulated rewards in the future; as shown in Equation 6, this infers the commands that maximize the likelihood of the optimality variables when O1:K = 1 is observed. Given that the ELBO gap is the KL-divergence between the posterior and variational distributions, it is obvious that more exact variational inference will make the lower bound tighter, thereby directly leading to the agent’s better performance as well as better policy learning. What would the exact posterior be like? Fig. 2(c) shows the graphical model of the inference problem that q(h|s) should address, which is obtained by marginalizing the actions out of Fig. 2(b); as also shown in (Haarnoja et al., 2018a), such marginalization results in a new system with a new control input h, thus the inference problem at this level is again an RL/OC problem. To get the current command h1, the inference procedure should compute the posterior command trajectories h1:K by considering the dynamics and observations (the optimality variables), and marginalize the future commands h2:K out. Though the dimensionality of h is much lower than that of a, this inference problem is still not trivial to solve, for two reasons: (i) The dynamics of the states, pθ(s′|s, h) = ∫ p(s′|s, a)πθ(a|s, h)da, contains the environment component, whose information can be obtained only through expensive interactions with the environment. (ii) One might consider building a surrogate model pφ(s′|s, h) via supervised learning with transition data obtained during low-level policy learning and utilizing the learned model for inference. However, learning a high-dimensional transition model accurately is hard, and inference (planning) in the high-dimensional space is intractable because of, e.g., the curse of dimensionality (Ha et al., 2018a).\nHowever, we can reasonably assume that the configurations that should be considered during planning form some sort of low-dimensional manifold in the original space (Vernaza & Lee, 2012), and the closed-loop system with high-level commands provides stochastic dynamics on that manifold. That is, the high-dimensional transition model in Fig. 2(c) can be represented as a latent variable model (LVM) as in Fig. 2(d). Once this low-dimensional representation is obtained, any motion planning or inference algorithm can solve the variational inference problem very efficiently within the vastly restricted search space.\nOur framework collects the trajectories from low-level policies and utilizes them to learn an LVM for inference, which is formulated as a maximum likelihood estimation (MLE) problem. Suppose that we have collected a set of state trajectories and latent commands {s(n)1:K, h(n)1:K}n=1,...,N. We then formulate the MLE problem as:\n$\phi^* = \arg\max_\phi \sum_n \log p_\phi(s^{(n)}_{1:K}|h^{(n)}_{1:K})$. (7)\nAs in Fig. 2(d), the states are assumed to emerge from a latent dynamical system, where a latent state trajectory, z1:K, lies on a low-dimensional latent space Z:\n$p_\phi(s_{1:K}|h_{1:K}) = \int p_\phi(s_{1:K}|z_{1:K})\,p_\phi(z_{1:K}|h_{1:K})\,dz_{1:K}$. (8)\nIn particular, we consider the state-space model where the latent states follow stochastic transition dynamics with h as inputs, i.e., the prior pφ(z1:K|h1:K) is the probability measure of the following system:\n$z_{k+1} = f_\phi(z_k) + \sigma_\phi(z_k)(h_k + w_k), \quad w_k \sim \mathcal{N}(0, I)$, (9)\nand the conditional likelihood of a state trajectory is assumed to be factorized along the time axis as sk ∼ N(µφ(zk), Σφ(zk)) ∀k. The resulting sequence modeling is a self-supervised learning problem that has been extensively studied recently (Karl et al., 2017; Krishnan et al., 2017; Fraccaro
2(c) can be represented as a latent variable model (LVM) in Fig. 2(d). Once this low-dimensional representation is obtained, any motion planning or inference algorithms can solve the variational inference problem very efficiently with the vastly restricted search space.\nOur framework collects the trajectories from low-level policies and utilize them to learn a LVM for inference, which is formulated as a maximum likelihood estimation (MLE) problem. Suppose that we have collected a set of state trajectories and latent commands {s(n)1:K ,h (n) 1:K}n=1,...,N . We then formulate the MLE problem as:\nφ∗ = arg max φ ∑ n log pφ(s (n) 1:K |h (n) 1:K). (7)\nAs in Fig. 2(d), the states are assumed to be emerged frwwwwwwwwwwom a latent dynamical system, where a latent state trajectory, z1:K , lies on a low-dimensional latent space Z:\npφ(s1:K |h1:K) = ∫ pφ(s1:K |z1:K)pφ(z1:K |h1:K)dz1:K . (8)\nIn particular, we consider the state space model where latent states follow stochastic transition dynamics with h as inputs, i.e., the prior pφ(z1:K |h1:K) is a probability measure of a following system:\nzk+1 = fφ(zk) + σφ(zk) (hk + wk) , wk ∼ N (0, I) (9)\nand also a conditional likelihood of a state trajectory is assumed to be factorized along the time axis as: sk ∼ N (µφ(zk),Σφ(zk)) ∀k. The resulting sequence modeling is a self-supervised learning problem that has been extensively studied recently (Karl et al., 2017; Krishnan et al., 2017; Fraccaro\net al., 2017; Ha et al., 2018b). In particular, we adopt the idea of Adaptive path-integral autoencoder in (Ha et al., 2018b), where the variational distribution is parameterized by the controls, u, and an initial distribution, q0, i.e., the proposal qu(z[0,T ]) is a probability measure of a following system:\nzk+1 = fφ(zk) + σφ(zk) (hk + uk + wk) , wk ∼ N (0, I). (10)\nCompared to the original formulation in (Ha et al., 2018b), the probability model here is conditioned on the commands, h1:K , making the learning problem conditional generative model learning (Sohn et al., 2015).1 Note that it is also possible to first obtain a low-dimensional representation considering each state (not trajectory) independently and then fit their dynamics using RNNs like World Model (Ha & Schmidhuber, 2018), or to stack two consecutive observations and learn the dynamical model considering the stacked data as one observation like E2C (Watter et al., 2015). However, Ha et al. (2018b) showed that the representations learned from the short horizon data easily fail to extract enough temporal information and a latent dynamical model suitable for planning can be well-obtained only when considering long trajectories." }, { "heading": "3.3 REASONING (PLANNING) WITH LEARNED INTERNAL MODEL", "text": "Once the LVM is trained, a planning module can efficiently explore the state space S through the latent state z and infer the latent commands h1:K that are likely to result in high rewards; in particular, we adopt a simple particle filter algorithm for inference, because it is known to perform well with non-linear and non-Gaussian systems (Ha et al., 2018a; Piche et al., 2019). Particle filtering, which is also called the sequential Monte-Carlo, utilizes a set of samples and their weights to represent a posterior trajectory distribution. Starting from the initial state, it propagates a set of samples according to the dynamics (Equation 9) and updates the weights using the observation likelihood as w′ ∝ w × p(Ok = 1|sk). 
It also resamples the low-weighted particles to maintain the effective sample size. From the perspective of this work, this procedure can be viewed as the agent simulating multiple future state trajectories with the internal model, weighting each of them according to the reward, and reasoning about the command that leads to the best possible future. The detailed explanation is elaborated in Appendix A.4 and in Algorithm 2. Note that for more complex environments, we can also iterate the whole procedure multiple times to compute a better command; the planning algorithm then becomes the adaptive path-integral method (Kappen & Ruiz, 2016; Williams et al., 2017; Ha et al., 2018b). If the resampling procedure is eliminated, it is equivalent to the widely used cross-entropy method (Hafner et al., 2019). Any other inference/planning algorithm compatible with the graphical model of Fig. 2(d) can also be used." }, { "heading": "3.4 MAIN LEARNING ALGORITHM", "text": "Algorithm 1 DIStilling Hierarchy for Planning and Control\n1: Initialize policy θ and latent model φ\n2: for l = 1, ..., L do\n3:   while not converged do\n4:     Sample a task t ∈ T\n5:     Run the policy a ∼ πθ(a|s, h) with high-level commands h ∼ qφ(h|s; t)\n6:     Store trajectories τ into the experience buffer\n7:     Train the policy πθ using e.g. PPO ▷ Equation 6\n8:   end while\n9:   Randomly sample h and collect rollouts\n10:  Train the internal model using e.g. APIAE ▷ Equation 7\n11: end for\n1Effectively, it only shifts the control input prior from N(0, I) to N(h, I), as written in Equation 9 and Equation 10 (Williams et al., 2017).\nThe overall learning procedure is summarized in Algorithm 1. The procedure consists of an outer internal-model learning loop and an inner policy update loop. During the policy update stage (inner loop), the algorithm samples a task, solves the sampled task by using the hierarchical policy, and collects trajectories into the experience buffer. At each time step, the low-level policy decides the actions the agent takes under the high-level commands determined by the planning module equipped with the
The low-level control policy and the internal latent model are trained through the imitation learning, where three locomotion data from the Carnegie Mellon University motion capture (CMU mocap) database are used as target motions of imitation. The control policy is trained with the DeepMimic imitation reward (Peng et al., 2018) by using proximal policy optimization (PPO) (Schulman et al., 2017), while the internal model is learned to maximize the likelihood of experience data (i.e. Equation 7) by using the APIAE approach (Ha et al., 2018b). The internal model of DISH is constructed to have a 3-dimensional latent state and a 1-dimensional latent command for all experiments. The low-level policy and the internal model are operated in different time scales, 30Hz and 1Hz, respectively. The learned hierarchical model is then evaluated on trajectory following and navigation tasks in Section 4.1 and 4.2, respectively. For planning and execution, the model predictive control (MPC) scheme with particle filtering (A.4) is used; a 5-second trajectory is planned and the first reasoned high-level command is applied to the low-level policy at 1Hz and 4Hz for each task.\nWe refer to the appendix for the reward functions, hyperparmeters, and network architectures (A.5 and A.6), task configurations (A.7), and more experimental results (A.8). Our TensorFlow (Abadi et al., 2015) implementation will be made available in the final manuscript. The videos of the training procedure and the resulting policy are available at: https://bit.ly/2rwIfQn" }, { "heading": "4.1 ABLATION STUDY: LEARNING HIERARCHICAL STRUCTURE", "text": "In the first experiment, we examine how effectively the proposed framework learns and exploits the internal model. To investigate the effectiveness of each component introduced, we conduct ablation studies by considering three baselines: (i) sas′ that does not have neither the hierarchical structure nor LVMs (Fig. 2(a)), (ii) shs′ that utilizes the hierarchical policy but doesn’t learn the lowdimensional latent dynamics (Fig. 2(c)), and (iii) zaz′ that considers the latent dynamics but without\nthe hierarchical structure (no latent commands, a LVM version of Fig. 2(a)2). Given the rollouts {τ (i)} = {s(i)1:K ,a (i) 1:K ,h (i) 1:K}, learning sas′ and shs′ are simply supervised learning problems. For the zaz′ model, the variational autoencoder (VAE) approach (Kingma & Welling, 2013) is taken to train mappings between the observation and the latent space, and then the latent dynamics is trained via supervised learning, following the idea of (Ha & Schmidhuber, 2018). Note that most HRL frameworks can be categorized as either zaz′ e.g., (Ha & Schmidhuber, 2018; Hafner et al., 2019) or shs′ e.g., (Sharma et al., 2019). The similar network structures are used for the baselines; implementation details of the baselines also can be found in A.6. Table 1 summarizes the different features of the models with the related works.\nFigs. 3(a) and 3(b) show the learned latent space colored by the moving-averaged angular velocity of the ground truth motion. In the case of DISH, the latent state forms a manifold of a cylindrical shape in 3-dimensional space where the locomotion phase and the angular velocity are well encoded along the manifold. In contrast, the latent state structure of the zaz′ model does not capture the phase information and failed to construct a periodic manifold, which prevents a valid latent dynamics from being learned. Figs. 
3(c) and 3(d) show the rollout trajectories from each internal model, colored by the values of the high-level commands h. The high-level commands of DISH are learned to control the heading direction of the humanoid so that the agent can perform structured exploration in the configuration space. The shs′ model, on the other hand, fails to learn a valid controlled dynamics (since its space is too large) and consequently just generates noisy trajectories.\nTo quantitatively evaluate the reasoning performance of DISH and its ability to flexibly perform different tasks, we compare DISH to the baseline models on three trajectory following tasks: going straight, turning left, and turning right. Table 2 reports the RMS errors for reconstruction and the differences between the reference, planned, and executed trajectories. There are three things we can observe from the table: (i) Although sas′ has the lowest reconstruction error, the computed action from its internal model cannot even make the humanoid walk. This is because the humanoid has highly unstable dynamics and reasoning about the high-dimensional action is not accurate enough to stabilize them, i.e., searching over the 36-dimensional action space with a limited number of particles (1024 in this case) is not feasible. For the same reason, zaz′ also fails to make the humanoid walk. (ii) Only the models considering the hierarchical policies can make the humanoid walk, and the DISH framework generates the most executable and valuable plans; the humanoid with the shs′ model just walks in random directions rather than following a planned trajectory (see Fig. 3(d)), which implies that the high-level command h does not provide any useful information regarding the navigation. (iii) By iterating the low-level policy and internal model learning further, DISH+ becomes able to reason better plans as well as execute them better. Further analysis can be found in A.8.\n2Note that Fig. 2(d) depicts an LVM version of Fig. 2(c)." }, { "heading": "4.2 PLANNING AND CONTROL WITH LEARNED HIERARCHY", "text": "In the second experiment, we further demonstrate the capability of the DISH framework to perform navigation tasks in cluttered environments (shown in Fig. 4). Since the humanoid character with the baseline models either kept falling or failed to walk in a desired direction, we omit the comparisons with the baselines in this task. The navigation reward is designed as a sum of two components: a penalty for the distance from the goal and a penalty for collision with obstacles. As shown in Figs. 4(c) and 4(d) as well as in the supplementary video, the humanoid equipped with the DISH policy is able to not only escape a bug trap that cannot be overcome with greedy algorithms (i.e., without planning), but also navigate through obstacle regions successfully. Note that, unlike the HRL algorithms, the proposed hierarchical policy trained using the imitation tasks can be directly applied to the navigation tasks. This shows the generalization power of the reasoning process; utilizing the internal model and the command-conditioned policy, the agent becomes able to plan and control its motions to adapt to new tasks and environments." }, { "heading": "5 CONCLUSION", "text": "We proposed a framework to learn a hierarchical policy for an RL agent.
In the proposed policy, the high-level loop plans the agent’s motion by predicting its low-dimensional \"task-specific\" futures, and the low-level loop maps the high-level commands into actions while actively reacting to the environment using its own state feedback loop. This sophisticated separation is able to emerge because the two loops operate at different scales; the high-level planning loop focuses only on task-specific low-dimensional aspects at a coarser time scale, which enables it to plan relatively long-term futures. In order to learn the internal model for planning, we took advantage of recent advances in self-supervised learning of sequential data, while the low-level control policy is learned using a deep RL algorithm. By alternately optimizing both the LVM and the policy, the proposed framework was able to construct a meaningful internal model as well as a versatile control policy.\nAs future work, it would be interesting to incorporate visual inputs into the high-level reasoning module as suggested by Merel et al. (2019a). Though only continuous latent variables were considered in our framework, utilizing discrete variables such as notions of logic or modes (Toussaint et al., 2018) also seems to be a promising direction. Lastly, besides imitation of experts, an agent should be able to learn from play data (Lynch et al., 2019) or from its own intrinsic motivation (Sharma et al., 2019)." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 CONTROL-INFERENCE DUALITY", "text": "One theoretical concept this work extensively takes advantage of is the duality between optimal control (OC) and probabilistic inference (Levine, 2018; Todorov, 2008; Rawlik et al., 2012). The idea is that, if we consider an artificial binary observation whose emission probability is given by the exponential of a negative cost, an OC problem can be reformulated as an equivalent inference problem. In this case, the objective is to find the trajectory or control policy that maximizes the likelihood of the observations along the trajectory. One advantage of this perspective is that, in order to solve OC or RL problems, we can adopt any powerful and flexible inference method, e.g., expectation propagation (Toussaint, 2009), particle filtering (Ha et al., 2018a; Piche et al., 2019), or inference for Gaussian processes (Mukadam et al., 2018). In addition to utilizing efficient inference methods, this work also exploits the duality to transform a multi-task RL problem into a generative model learning problem, which enables an agent to distill a low-dimensional representation and a versatile control policy in a combined framework." }, { "heading": "A.2 REINFORCEMENT LEARNING AS PROBABILISTIC INFERENCE", "text": "For easier reference, we restate the RL problem and the controlled trajectory distribution here:\n$\theta^* = \arg\max_\theta \mathbb{E}_{q_\theta(s_{1:K}, a_{1:K})}\left[\sum_{k=1}^{K} \tilde{r}_k(s_k, a_k)\right]$, (11)\n$q_\theta(s_{1:K}, a_{1:K}) \equiv p(s_1)\prod_{k=1}^{K} p(s_{k+1}|s_k, a_k)\,\pi_\theta(a_k|s_k)$, (12)\nrespectively. It is well known in the literature that the above optimization (Equation 11) can also be viewed as a probabilistic inference problem for a certain type of graphical model (Levine, 2018; Todorov, 2008; Rawlik et al., 2012). Suppose we have an artificial binary random variable Ok, called the optimality variable, whose emission probability is given by the exponential of a state-dependent reward, i.e.,\n$p(O_k = 1|s_k) = \exp(r_k(s_k))$, (13)\nand the action prior p(ak) defines the uncontrolled trajectory distribution (see also Fig.
Suppose we have an artificial binary random variable ok, called the optimality variable, whose emission probability is given by the exponential of a state-dependent reward, i.e., p(ok = 1|sk) = exp (rk(sk)) , (13) and the action prior p(ak) defines the uncontrolled trajectory distribution (see also Fig. 2(a)):\np(s1:K ,a1:K) ≡ p(s1) K∏ k=1 p(sk+1|sk,ak)p(ak). (14)\nThen we can derive the evidence lower bound (ELBO) for variational inference:\nlog p(O1:K) = log ∫ p(O1:K |s1:K) p(s1:K ,a1:K) ds1:K da1:K\n= log ∫ qθ(s1:K ,a1:K) (p(O1:K |s1:K) p(s1:K ,a1:K) / qθ(s1:K ,a1:K)) ds1:K da1:K\n≥ Eqθ(s1:K ,a1:K) [ K∑ k=1 ( log p(Ok|sk) − log (πθ(ak|sk)/p(ak)) ) ]\n= Eqθ(s1:K ,a1:K) [ K∑ k=1 rk(sk) − log (πθ(ak|sk)/p(ak)) ] ≡ L(θ). (15)\nThe ELBO maximization in Equation 15 becomes equivalent to the reinforcement learning problem in Equation 11 by choosing an action prior p(ak) and a parameterized policy family πθ(ak|sk) such that r̃k = rk − log (πθ/p)3. Similar to Equation 19, the above maximization amounts to finding the control policy πθ whose variational distribution best approximates the posterior trajectory distribution when all the optimality variables are observed, p(s1:K ,a1:K |O1:K = 1).\n3For example, when p(ak) and πθ(ak|sk) are given as Gaussians with the same covariance, log (πθ/p) encodes a quadratic penalty on the control effort; when p(a) is given as an uninformative uniform distribution, log (πθ/p) becomes the entropy regularization term in maximum entropy reinforcement learning (Ziebart et al., 2008; Haarnoja et al., 2018b)." }, { "heading": "A.3 SELF-SUPERVISED LEARNING OF LATENT DYNAMICAL MODELS", "text": "Self-supervised learning is an essential approach that allows an agent to learn underlying dynamics from sequential high-dimensional sensory inputs. The learned dynamical model can be utilized to predict and plan the future state of the agent. By assuming that observations emerge from low-dimensional latent states, the learning problem is formulated as latent model learning, which involves an intractable posterior inference of latent states for given input data (Karl et al., 2017; Krishnan et al., 2017; Fraccaro et al., 2017; Ha et al., 2018b).\nSuppose that a set of observation sequences {s(n)1:K}n=1,...,N is given, where s(n)1:K ≡ {sk; ∀k = 1, ...,K}(n) are i.i.d. sequences of observations that lie on a (possibly high-dimensional) data space S. The goal of the self-supervised learning problem of interest is to build a probabilistic model that well describes the given observations. The problem is formulated as a maximum likelihood estimation (MLE) problem by parameterizing a probabilistic model with φ:\nφ∗ = arg max φ ∑ n log pφ(s(n)1:K). (16)\nFor latent dynamical models, we assume that the observations emerge from a latent dynamical system, where a latent state trajectory, z1:K ≡ {zk; ∀k ∈ 1, ...,K}, lies on a (possibly low-dimensional) latent space Z:\npφ(s1:K) = ∫ pφ(s1:K |z1:K) pφ(z1:K) dz1:K , (17)\nwhere pφ(s1:K |z1:K) and pφ(z1:K) are called the conditional likelihood and the prior distribution, respectively. Since the objective function (Equation 16) contains an intractable integration, it cannot be optimized directly. To circumvent the intractable inference, a variational distribution q(·) is introduced and then a surrogate loss function L(q, φ; s1:K), called the evidence lower bound (ELBO), can be considered instead:\nlog pφ(s1:K) = log ∫ pφ(s1:K |z1:K) pφ(z1:K) dz1:K\n≥ Eq(z1:K) [ log (pφ(s1:K |z1:K) pφ(z1:K) / q(z1:K)) ] ≡ L(q, φ; s1:K), (18)\nwhere q(·) can be any probability distribution over Z whose support includes that of pφ(·). Note that the gap between the log-likelihood and the ELBO is the Kullback-Leibler (KL) divergence between q(z1:K) and the posterior pφ(z1:K |s1:K):\nlog pφ(s1:K) − L(q, φ; s1:K) = DKL(q(z1:K)||pφ(z1:K |s1:K)). (19)
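To make Equation 18 concrete, here is a minimal sketch of a Monte Carlo ELBO estimate for a single sequence; `sample_q`, `log_q`, `log_prior`, and `log_lik` are hypothetical callables for q(z1:K|s1:K), log q, log pφ(z1:K), and log pφ(s1:K|z1:K).

```python
import numpy as np

def elbo_estimate(s, sample_q, log_q, log_prior, log_lik, n_samples=16):
    # E_q[log p(s|z) + log p(z) - log q(z|s)] <= log p(s)  (Equation 18)
    vals = []
    for _ in range(n_samples):
        z = sample_q(s)  # z_{1:K} ~ q(.|s_{1:K})
        vals.append(log_lik(s, z) + log_prior(z) - log_q(z, s))
    return float(np.mean(vals))
```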
One of the most general approaches is expectation-maximization (EM) style optimization where, alternately, (i) the E-step is an inference procedure in which an optimal variational distribution q∗ is computed for a given φ, and (ii) the M-step maximizes the ELBO w.r.t. the model parameter φ for the given q∗. Note that if we construct the whole inference and generative procedures as one computational graph, all the components can be learned by efficient end-to-end training (Karl et al., 2017; Krishnan et al., 2017; Fraccaro et al., 2017; Ha et al., 2018b). In particular, Ha et al. (2018b) proposed the adaptive path-integral autoencoder (APIAE), a framework that utilizes an optimal control method; this framework is suitable for this work because we want to perform planning in the learned latent space. APIAE considers the state-space model in which the latent states are governed by a stochastic dynamical model, i.e., the prior pφ(z1:K) is the probability measure of the following system:\nzk+1 = fφ(zk) + σφ(zk)wk, z0 ∼ p0(·), wk ∼ N (0, I). (20)\nAdditionally, the conditional likelihood of sequential observations is factorized along the time axis:\npφ(s1:K |z1:K) = K∏ k=1 pφ(sk|zk). (21)\nIf the variational distribution is parameterized by the control input u1:K−1 and the initial state distribution q0 as:\nzk+1 = fφ(zk) + σφ(zk) (uk + wk) , z0 ∼ q0(·), wk ∼ N (0, I), (22)\nthe ELBO can be written in the following form:\nL = Equ [ log pφ(s1:K |z1:K) + log (p0(z0)/q0(z0)) − K−1∑ k=1 ( 1/2 ‖uk‖² + u⊤k wk ) ] . (23)\nThen, the problem of finding the optimal variational parameters u∗ and q∗0 (or equivalently, the best approximate posterior) can be formulated as a stochastic optimal control (SOC) problem:\nu∗, q∗0 = arg min u,q0 Equ(z1:K) [ V (z1:K) + K−1∑ k=1 ( 1/2 ‖uk‖² + u⊤k wk ) ] , (24)\nwhere V (z1:K) ≡ − log (p0(z0)/q0(z0)) − ∑K k=1 log pφ(sk|zk) serves as a state cost of the SOC problem. Ha et al. (2018b) constructed a differentiable computational graph that resembles the path-integral control procedure to solve the above SOC problem, and trained the whole architecture, including the latent dynamics, p0(z), fφ(z) and σφ(z), and the generative network, pφ(s|z), through end-to-end training." }, { "heading": "A.4 PLANNING BY PARTICLE FILTERING", "text": "Algorithm 2 Particle Filtering with Internal Model for Planning\n1: Initialize ∀i ∈ {1, ..., Nparticle} : z(i)1 ∼ qφ(·|s:cur) and w(i)1 = 1/Nparticle\n2: for k = 2, ...,Kplan do\n3: for i = 1, ..., Nparticle do\n4: z(i)k = fφ(z(i)k−1) + σφ(z(i)k−1) ( h(i)k−1 + w(i)k−1 ) , w(i)k−1 ∼ N (0, I)\n5: s(i)k ∼ N ( µφ(z(i)k ), Σφ(z(i)k ) )\n6: w(i)k = w(i)k−1 exp(rk(s(i)k ))\n7: end for\n8: w(i)k = w(i)k / ∑ j w(j)k , ∀i ∈ {1, ..., Nparticle}\n9: Resample {z(i)1:k, w(i)1:k} if ( ∑ i (w(i)k)² )−1 < Nparticle/3\n10: end for\n11: return h∗1 = ∑ i w(i)Kplan w(i)1 (h∗k = ∑ i w(i)Kplan w(i)k , ∀k = 1, ..,KMPC for general MPC cases)\nAt each time step δt, the high-level planner takes the current state as an argument and is required to output the commands by predicting the future trajectory and the corresponding reward rk(·). We adopted the particle filter algorithm to perform such reasoning, and the pseudocode is shown in Algorithm 2. The particle filter algorithm attempts to represent the posterior distribution using a set of samples. The algorithm first samples Nparticle initial latent states using the inference network (which is a part of the learned internal model) and assigns equal weights to them. During the forward recursion, the particles are propagated using the latent dynamics of the internal model (line 4), and the corresponding configurations are generated through the learned model (line 5). The weights of all particles are then updated based on the reward of the generated configurations (lines 6 and 8); i.e., the particles that induce higher reward values get higher weights. If only a few samples have effective weight, i.e., if the weights collapse, the algorithm resamples the particles from the current approximate posterior distribution to maintain the effective sample size (line 9). After the forward recursion over the planning horizon, the optimal commands are computed as a linear combination of the initial disturbances; i.e., they are given by the expected disturbance under the posterior transition dynamics (Kappen & Ruiz, 2016)." },
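The following Python sketch mirrors Algorithm 2 under one plausible reading of line 11 (the returned command is the posterior-weighted average of the first-step disturbed commands); `f`, `sigma`, `decode_mean`, and `reward` are hypothetical callables standing in for fφ, σφ, µφ, and rk, all assumed vectorized over particles, and the command/noise dimension is assumed to match the latent dimension.

```python
import numpy as np

def plan_first_command(z0, h_nom, f, sigma, decode_mean, reward):
    # z0: (N, dz) initial particles from the inference network (line 1).
    # h_nom: (K_plan, dz) nominal command sequence to be perturbed.
    N = z0.shape[0]
    K_plan = h_nom.shape[0]
    z, w = z0.copy(), np.full(N, 1.0 / N)          # uniform initial weights
    eps = np.random.randn(N, K_plan, z0.shape[1])  # per-particle disturbances
    for k in range(1, K_plan):
        u = h_nom[k - 1] + eps[:, k - 1]           # line 4: command + noise
        z = f(z) + sigma(z) * u                    # propagate particles
        s = decode_mean(z)                         # line 5: decode configurations
        w = w * np.exp(reward(s, k))               # line 6: reweight by reward
        w = w / w.sum()                            # line 8: normalize
        if 1.0 / np.sum(w ** 2) < N / 3.0:         # line 9: resample if ESS is low
            idx = np.random.choice(N, size=N, p=w)
            z, eps, w = z[idx], eps[idx], np.full(N, 1.0 / N)
    # line 11: expected first command under the posterior weights
    return ((w[:, None]) * (h_nom[0] + eps[:, 0])).sum(axis=0)
```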
{ "heading": "A.5 TRAINING LOW-LEVEL POLICY", "text": "As the training algorithm for the low-level policy network (πθ), we extend the motion imitation approach (Peng et al., 2018) to a multi-task scheme; we construct value networks parameterized by neural networks of size [197, 1024, 512, 1] for each task (three in our experiments), and the low-level policy network (actor network) takes a state feature s and a latent variable h as inputs to determine an action a, as illustrated in Fig. 5. The imitation reward is given as follows (see also the sketch after this section):\nrt = 0.3 r^root_t + 0.2 r^pose_t + 0.15 r^vel_t + 0.15 r^ee_t + 0.2 r^com_t (25)\nr^root_t = exp( −0.5 ||p̂^r_t − p^r_t||²₂ − 0.5 ||ˆṗ^r_t − ṗ^r_t||²₂ − 0.5 ||q̂^r_t − q^r_t||²₂ − 0.05 ||ˆq̇^r_t − q̇^r_t||²₂ )\nr^pose_t = exp( −2 ∑ ||q̂^j_t − q^j_t||²₂ )\nr^vel_t = exp( −0.1 ∑ ||ˆq̇^j_t − q̇^j_t||²₂ )\nr^ee_t = exp( −40 ∑ ||p̂^e_t − p^e_t||²₂ )\nr^com_t = exp( −||ˆṗ^c_t − ṗ^c_t||²₂ )\nwhere qt and pt represent angles and global positions, while ˆ· denotes those of the reference.4 As reference motion data, three motion capture clips from the Carnegie Mellon University motion capture (CMU mocap) database are utilized: turning left (t = [1, 0, 0]), going straight (t = [0, 1, 0]), and turning right (t = [0, 0, 1]). Following the reference, PPO with the same hyperparameters is used as the RL algorithm. Since the internal model does not exist at the first iteration (l = 1), the high-level planner is initialized as qφ(h|s; t) = w⊤t, where w = [−1, 0, 1]. After the first iteration, the high-level planner computes a command ht that makes the model best follow the horizontal position of the reference motion for 5 seconds (δt = 0.1s and Kplan = 50). The corresponding reward function is as follows:\nrk = −||p̂^r_{h,k} − p^r_{h,k}||²₂ (26)\nwhere p_{h,k} denotes the horizontal components of the position vector at time step k." },
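As a concrete reading of Eq. 25, the following sketch computes the imitation reward from the simulated character's and the reference motion's kinematic quantities; the dictionary keys are hypothetical names for the root, joint, end-effector, and center-of-mass terms.

```python
import numpy as np

def imitation_reward(sim, ref):
    sq = lambda a, b: np.sum((np.asarray(a) - np.asarray(b)) ** 2)
    r_root = np.exp(-0.5 * sq(ref["p_root"], sim["p_root"])    # root position
                    - 0.5 * sq(ref["v_root"], sim["v_root"])   # root velocity
                    - 0.5 * sq(ref["q_root"], sim["q_root"])   # root orientation
                    - 0.05 * sq(ref["w_root"], sim["w_root"])) # root angular velocity
    r_pose = np.exp(-2.0 * sq(ref["q_joints"], sim["q_joints"]))
    r_vel = np.exp(-0.1 * sq(ref["dq_joints"], sim["dq_joints"]))
    r_ee = np.exp(-40.0 * sq(ref["p_ee"], sim["p_ee"]))
    r_com = np.exp(-sq(ref["v_com"], sim["v_com"]))
    return 0.3 * r_root + 0.2 * r_pose + 0.15 * r_vel + 0.15 * r_ee + 0.2 * r_com
```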
{ "heading": "A.6 TRAINING INTERNAL MODELS", "text": "The internal models of DISH are trained to maximize the ELBO in Equation 23 using the APIAE approach (Ha et al., 2018b) with the following hyperparameters: 3 adaptations (R = 4), 10 time steps (K = 10), 32 samples (L = 32), and a time interval of 0.1s (δt = 0.1). The network architectures of the transition and inference networks are shown in Fig. 6.\nFor the baselines, the transition functions, fφ(xk+1|xk,yk), were parameterized by neural networks having the same architectures as DISH except for the input variables, as shown in Table 3.\n4Each superscript denotes the following: r: root (pelvis), j: local joints, e: end-effectors (hands and feet), c: center-of-mass.\nTable 3: Input variables of baseline transition models\ninput variable | DISH | sas′ | shs′ | zaz′\nxk | zk | sk | sk | zk\nyk | hk | ak | hk | ak\nFigure 6: The network architecture of the internal model: (a) transition network, (b) generative network.\nThe loss functions for the baselines are as follows:\nLsas′(φ) = K∑ k=1 ||sk − s̃k||²₂, s̃k = f^{sas′}_φ(s̃k−1, ak−1), (27)\nLshs′(φ) = K∑ k=1 ||sk − s̃k||²₂, s̃k = f^{shs′}_φ(s̃k−1, hk−1), (28)\nLzaz′(φ) = K∑ k=1 ||sk − g(z̃k)||²₂, z̃k = f^{zaz′}_φ(z̃k−1, ak−1), (29)\nwhere s̃1 = s1, z̃1 is the latent state for s1 inferred by the VAE, and g(·) is the learned generative network of the VAE." }, { "heading": "A.7 TASK CONFIGURATIONS", "text": "Trajectory Following Tasks: The planning reward rk penalizes the distance between the horizontal position of the root of the humanoid character, p^r_{h,k}, and that of the reference trajectory, p̄k:\nrk = −||p̄k − p^r_{h,k}||²₂. (30)\nNavigation Tasks: The planning reward rk penalizes the distance between the horizontal position of the root of the humanoid character and that of the goal, pgoal, as well as collisions with obstacles, while giving a reward on arrival:\nrk = −||pgoal − p^r_{h,k}||²₂ − 10^5 × (IS_CRASHED) + 10^4 × (IS_REACHED). (31)" }, { "heading": "A.8 FURTHER RESULTS", "text": "Table 4 reports the RMS errors between the reference, planned, and executed trajectories for each task. As illustrated in the table, the DISH models showed the best performance. Although shs′ sometimes showed smaller errors for the difference between the planned and reference trajectories, the errors between the reference and executed trajectories of the DISH models are always the smallest. This demonstrates that the DISH models best learn the internal dynamics of the humanoid, making the most valid predictions of future motion.\nComparing DISH (L = 1) and DISH+ (L = 2), we can observe that DISH performs better in the turning tasks while performing worse in going straight. This is because the high-level planner is initialized to output only one of {−1, 0, 1} (as shown in Appendix A.5), so the corresponding low-level policy of DISH is trained only around h ∈ {−1, 0, 1} rather than along continuous h values. As a result, the DISH agent is only good at radical turns (not smooth turns), making it difficult to stabilize its heading direction properly. The ability to turn smoothly is obtained in the next iteration, where a proper reasoning module is equipped; thus, although it loses some ability to turn fast, the DISH+ agent achieves a better ability to walk straight and increased average performance (see Table 2).\nFig. 7 shows rollout samples obtained by varying the control values. Except for the DISH models, the generated trajectories are very noisy, which indicates that the baseline internal models are not suitable for planning the future movements of the humanoid." } ]
2019
null
SP:31950e78fd806be2e0f7cc4728251fdc1a3c437d
[ "The paper proposes to combine the video modeling approaches based on autoregressive flows (e.g. Kumar’19) with amortized variational inference (e.g. Denton’18), wherein an autoregressive latent variable model optimized with variational inference is extended with an autoregressive flow that further transforms the output of the latent variable model while allowing to compute exact conditional probability. This is motivated with a physical intuition, where a dynamics model can benefit from decorrelating the inputs, and it is demonstrated that layers of autoregressive flows can represent derivatives of the original signal. In a proof-of-concept experiment, it is shown that using a layer of autoregressive flow improves NLL of a latent variable model.", "This paper proposes to model temporal sequences using autoregressive flows across time steps, that allow to model more explicitly temporal changes of the input, i.e. how the input x_t has changed w.r.t x_{<t}. As also stated by the authors, this is a generalization of other work that instead of modelling the input at each time step, models temporal differences between consecutive time steps." ]
We propose an approach for sequence modeling based on autoregressive normalizing flows. Each autoregressive transform, acting across time, serves as a moving reference frame for modeling higher-level dynamics. This technique provides a simple, general-purpose method for improving sequence modeling, with connections to existing and classical techniques. We demonstrate the proposed approach both with standalone models, as well as part of larger sequential latent variable models. Results are presented on three benchmark video datasets, where flow-based dynamics improve log-likelihood performance over baseline models.
[]
[ { "authors": [ "Justin Bayer", "Christian Osendorfer" ], "title": "Learning stochastic recurrent networks", "venue": "In NeurIPS 2014 Workshop on Advances in Variational Inference,", "year": 2014 }, { "authors": [ "Yoshua Bengio", "Samy Bengio" ], "title": "Modeling high-dimensional discrete data with multi-layer neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2000 }, { "authors": [ "Kurtland Chua", "Roberto Calandra", "Rowan McAllister", "Sergey Levine" ], "title": "Deep reinforcement learning in a handful of trials using probabilistic dynamics models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Junyoung Chung", "Kyle Kastner", "Laurent Dinh", "Kratarth Goel", "Aaron C Courville", "Yoshua Bengio" ], "title": "A recurrent latent variable model for sequential data", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Marc Peter Deisenroth", "Dieter Fox", "Carl Edward Rasmussen" ], "title": "Gaussian processes for dataefficient learning in robotics and control", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2013 }, { "authors": [ "Emily Denton", "Rob Fergus" ], "title": "Stochastic video generation with a learned prior", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Laurent Dinh", "David Krueger", "Yoshua Bengio" ], "title": "Nice: Non-linear independent components estimation", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using real nvp", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Conor Durkan", "Artur Bekasov", "Iain Murray", "George Papamakarios" ], "title": "Neural spline flows", "venue": "arXiv preprint arXiv:1906.04032,", "year": 2019 }, { "authors": [ "Frederik Ebert", "Chelsea Finn", "Alex X Lee", "Sergey Levine" ], "title": "Self-supervised visual planning with temporal skip connections", "venue": "In Conference on Robot Learning,", "year": 2017 }, { "authors": [ "Marco Fraccaro", "Søren Kaae Sønderby", "Ulrich Paquet", "Ole Winther" ], "title": "Sequential neural models with stochastic layers", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Brendan J Frey", "Geoffrey E Hinton", "Peter Dayan" ], "title": "Does the wake-sleep algorithm produce good density estimators? 
In Advances in neural information processing", "venue": null, "year": 1996 }, { "authors": [ "Karl Friston" ], "title": "Hierarchical models in the brain", "venue": "PLoS computational biology,", "year": 2008 }, { "authors": [ "Zhe Gan", "Chunyuan Li", "Ricardo Henao", "David E Carlson", "Lawrence Carin" ], "title": "Deep temporal sigmoid belief networks for sequence modeling", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Mevlana Gemici", "Chia-Chun Hung", "Adam Santoro", "Greg Wayne", "Shakir Mohamed", "Danilo J Rezende", "David Amos", "Timothy Lillicrap" ], "title": "Generative temporal models with memory", "venue": "arXiv preprint arXiv:1702.04649,", "year": 2017 }, { "authors": [ "Alex Graves" ], "title": "Generating sequences with recurrent neural networks", "venue": "arXiv preprint arXiv:1308.0850,", "year": 2013 }, { "authors": [ "David Ha", "Jürgen Schmidhuber" ], "title": "Recurrent world models facilitate policy evolution", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Ian Fischer", "Ruben Villegas", "David Ha", "Honglak Lee", "James Davidson" ], "title": "Learning latent dynamics for planning from pixels", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Jiawei He", "Andreas Lehrmann", "Joseph Marino", "Greg Mori", "Leonid Sigal" ], "title": "Probabilistic video generation using holistic attribute control", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Gustav Eje Henter", "Simon Alexanderson", "Jonas Beskow" ], "title": "Moglow: Probabilistic and controllable motion synthesis using normalising flows", "venue": "arXiv preprint arXiv:1905.06598,", "year": 2019 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Chin-Wei Huang", "David Krueger", "Alexandre Lacoste", "Aaron Courville" ], "title": "Neural autoregressive flows", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Priyank Jaini", "Kira A Selby", "Yaoliang Yu" ], "title": "Sum-of-squares polynomial flow", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Michael I Jordan", "Zoubin Ghahramani", "Tommi S Jaakkola", "Lawrence K Saul" ], "title": "An introduction to variational methods for graphical models", "venue": "NATO ASI SERIES D BEHAVIOURAL AND SOCIAL SCIENCES,", "year": 1998 }, { "authors": [ "Rudolph Emil Kalman" ], "title": "A new approach to linear filtering and prediction problems", "venue": "Journal of basic Engineering,", "year": 1960 }, { "authors": [ "Maximilian Karl", "Maximilian Soelch", "Justin Bayer", "Patrick van der Smagt" ], "title": "Deep variational bayes filters: Unsupervised learning of state space models from raw data", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Sungwon Kim", "Sang-Gil Lee", "Jongyoon Song", "Jaehyeon Kim", "Sungroh Yoon" ], "title": "Flowavenet: A generative flow for raw audio", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": 
[ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Stochastic gradient vb and the variational auto-encoder", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Durk P Kingma", "Tim Salimans", "Rafal Jozefowicz", "Xi Chen", "Ilya Sutskever", "Max Welling" ], "title": "Improved variational inference with inverse autoregressive flow", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Manoj Kumar", "Mohammad Babaeizadeh", "Dumitru Erhan", "Chelsea Finn", "Sergey Levine", "Laurent Dinh", "Durk Kingma" ], "title": "Videoflow: A flow-based generative model for video", "venue": null, "year": 1903 }, { "authors": [ "Alex X Lee", "Richard Zhang", "Frederik Ebert", "Pieter Abbeel", "Chelsea Finn", "Sergey Levine" ], "title": "Stochastic adversarial video prediction", "venue": "arXiv preprint arXiv:1804.01523,", "year": 2018 }, { "authors": [ "Yingzhen Li", "Stephan Mandt" ], "title": "A deep generative model for disentangled representations of sequential data", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Joseph Marino", "Milan Cvitkovic", "Yisong Yue" ], "title": "A general method for amortizing variational filtering", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Kevin P Murphy" ], "title": "Machine learning: a probabilistic perspective", "venue": "MIT press,", "year": 2012 }, { "authors": [ "Junier Oliva", "Avinava Dubey", "Manzil Zaheer", "Barnabas Poczos", "Ruslan Salakhutdinov", "Eric Xing", "Jeff Schneider" ], "title": "Transformation autoregressive networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "George Papamakarios", "Theo Pavlakou", "Iain Murray" ], "title": "Masked autoregressive flow for density estimation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Wei Ping", "Kainan Peng", "Jitong Chen" ], "title": "Clarinet: Parallel wave generation in end-to-end text-tospeech", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ryan Prenger", "Rafael Valle", "Bryan Catanzaro" ], "title": "Waveglow: A flow-based generative network for speech synthesis", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "arXiv preprint arXiv:1511.06434,", "year": 2015 }, { "authors": [ "Danilo Rezende", "Shakir Mohamed" ], "title": "Variational inference with normalizing flows", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Nicholas Rhinehart", 
"Kris M Kitani", "Paul Vernaza" ], "title": "R2p2: A reparameterized pushforward policy for diverse, precise generative path forecasting", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Nicholas Rhinehart", "Rowan McAllister", "Kris Kitani", "Sergey Levine" ], "title": "Precog: Prediction conditioned on goals in visual multi-agent settings", "venue": null, "year": 2019 }, { "authors": [ "Oren Rippel", "Ryan Prescott Adams" ], "title": "High-dimensional probability estimation with deep density models", "venue": "arXiv preprint arXiv:1302.5125,", "year": 2013 }, { "authors": [ "Christian Schuldt", "Ivan Laptev", "Barbara Caputo" ], "title": "Recognizing human actions: a local svm approach", "venue": "In International Conference on Pattern Recognition,", "year": 2004 }, { "authors": [ "Nitish Srivastava", "Elman Mansimov", "Ruslan Salakhudinov" ], "title": "Unsupervised learning of video representations using lstms", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Aaron van den Oord", "Sander Dieleman", "Heiga Zen", "Karen Simonyan", "Oriol Vinyals", "Alex Graves", "Nal Kalchbrenner", "Andrew Senior", "Koray Kavukcuoglu" ], "title": "Wavenet: A generative model for raw audio", "venue": "arXiv preprint arXiv:1609.03499,", "year": 2016 }, { "authors": [ "Aaron van den Oord", "Nal Kalchbrenner", "Koray Kavukcuoglu" ], "title": "Pixel recurrent neural networks", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Igor Babuschkin", "Karen Simonyan", "Oriol Vinyals", "Koray Kavukcuoglu", "George Driessche", "Edward Lockhart", "Luis Cobo", "Florian Stimberg" ], "title": "Parallel wavenet: Fast high-fidelity speech synthesis", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Tianfan Xue", "Jiajun Wu", "Katherine Bouman", "Bill Freeman" ], "title": "Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Zachary Ziegler", "Alexander Rush" ], "title": "Latent normalizing flows for discrete sequences", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Marino" ], "title": "Finally, in the filtering setting, we can rewrite the expectation, bringing it inside of the sum (see Gemici et al", "venue": null, "year": 2018 } ]
[ { "heading": null, "text": "We propose an approach for sequence modeling based on autoregressive normalizing flows. Each autoregressive transform, acting across time, serves as a moving reference frame for modeling higher-level dynamics. This technique provides a simple, general-purpose method for improving sequence modeling, with connections to existing and classical techniques. We demonstrate the proposed approach both with standalone models, as well as a part of larger sequential latent variable models. Results are presented on three benchmark video datasets, where flowbased dynamics improve log-likelihood performance over baseline models." }, { "heading": "1 INTRODUCTION", "text": "Data often contain sequential structure, providing a rich signal for learning models of the world. Such models are useful for learning self-supervised representations of sequences (Li & Mandt, 2018; Ha & Schmidhuber, 2018) and planning sequences of actions (Chua et al., 2018; Hafner et al., 2019). While sequential models have a longstanding tradition in probabilistic modeling (Kalman et al., 1960), it is only recently that improved computational techniques, primarily deep networks, have facilitated learning such models from high-dimensional data (Graves, 2013), particularly video and audio. Dynamics in these models typically contain a combination of stochastic and deterministic variables (Bayer & Osendorfer, 2014; Chung et al., 2015; Gan et al., 2015; Fraccaro et al., 2016), using simple distributions (e.g. Gaussian) to directly model the likelihood of data observations. However, attempting to capture all sequential dependencies with relatively unstructured dynamics may make it more difficult to learn such models. Intuitively, the model should use its dynamical components to track changes in the input instead of simultaneously modeling the entire signal. Rather than expanding the computational capacity of the model, we seek a method for altering the representation of the data to provide a more structured form of dynamics.\nTo incorporate more structured dynamics, we propose an approach for sequence modeling based on autoregressive normalizing flows (Kingma et al., 2016; Papamakarios et al., 2017), consisting of one or more autoregressive transforms in time. A single transform is equivalent to a Gaussian autoregressive model. However, by stacking additional transforms or latent variables on top, we can arrive at more expressive models. Each autoregressive transform serves as a moving reference frame in which higher-level structure is modeled. This provides a general mechanism for separating different forms of dynamics, with higher-level stochastic dynamics modeled in the simplified space provided by lower-level deterministic transforms. In fact, as we discuss, this approach generalizes the technique of modeling temporal derivatives to simplify dynamics estimation (Friston, 2008).\nWe empirically demonstrate this approach, both with standalone autoregressive normalizing flows, as well as by incorporating these flows within more flexible sequential latent variable models. While normalizing flows have been applied in a few sequential contexts previously, we emphasize the use of these models in conjunction with sequential latent variable models. We present experimental results on three benchmark video datasets, showing improved quantitative performance in terms of log-likelihood. 
In formulating this general technique for improving dynamics estimation in the framework of normalizing flows, we also help to contextualize previous work." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 AUTOREGRESSIVE MODELS", "text": "Consider modeling discrete sequences of observations, x1:T ∼ pdata(x1:T ), using a probabilistic model, pθ(x1:T ), with parameters θ. Autoregressive models (Frey et al., 1996; Bengio & Bengio, 2000) use the chain rule of probability to express the joint distribution over all time steps as the product of T conditional distributions. Because of the forward nature of the world, as well as for handling variable-length sequences, these models are often formulated in forward temporal order:\npθ(x1:T ) = T∏ t=1 pθ(xt|x<t). (1)\nEach conditional distribution, pθ(xt|x<t), models the temporal dependence between time steps, i.e. a prediction of the future. For continuous variables, we often assume that each distribution takes a relatively simple form, such as a diagonal Gaussian density:\npθ(xt|x<t) = N (xt;µθ(x<t),diag(σ2θ(x<t))), (2) where µθ(·) and σθ(·) are functions denoting the mean and standard deviation, often sharing parameters over time steps. While these functions may take the entire past sequence of observations as input, e.g. through a recurrent neural network, they may also be restricted to a convolutional window (van den Oord et al., 2016a). Autoregressive models can also be applied to non-sequential data (van den Oord et al., 2016b), where they excel at capturing local dependencies. However, due to their restrictive distributional forms, such models often struggle to capture higher-level structure." }, { "heading": "2.2 AUTOREGRESSIVE LATENT VARIABLE MODELS", "text": "Autoregressive models can be improved by incorporating latent variables, often represented as a corresponding sequence, z1:T . Classical examples include Gaussian state space models and hidden Markov models (Murphy, 2012). The joint distribution, pθ(x1:T , z1:T ), has the following form:\npθ(x1:T , z1:T ) = T∏ t=1 pθ(xt|x<t, z≤t)pθ(zt|x<t, z<t). (3)\nUnlike the simple, parametric form in Eq. 2, evaluating pθ(xt|x<t) now requires integrating over the latent variables,\npθ(xt|x<t) = ∫ pθ(xt|x<t, z≤t)pθ(z≤t|x<t)dz≤t, (4)\nyielding a more flexible distribution. However, performing this integration in practice is typically intractable, requiring approximate inference techniques, like variational inference (Jordan et al.,\n1998). Recent works have parameterized these models with deep neural networks, e.g. (Chung et al., 2015; Gan et al., 2015; Fraccaro et al., 2016; Karl et al., 2017), using amortized variational inference (Kingma & Welling, 2014; Rezende et al., 2014) for inference and learning. Typically, the conditional likelihood, pθ(xt|x<t, z≤t), and the prior, pθ(zt|x<t, z<t), are Gaussian densities, with temporal conditioning handled through deterministic recurrent networks and the stochastic latent variables. Such models have demonstrated success in audio (Chung et al., 2015; Fraccaro et al., 2016) and video modeling (Xue et al., 2016; Gemici et al., 2017; Denton & Fergus, 2018; He et al., 2018; Li & Mandt, 2018). However, design choices for these models remain an active area of research, with each model proposing new combinations of deterministic and stochastic dynamics." }, { "heading": "2.3 AUTOREGRESSIVE FLOWS", "text": "Our approach is based on affine autoregressive normalizing flows (Kingma et al., 2016; Papamakarios et al., 2017). 
Here, we review this basic concept, continuing with the perspective of temporal sequences, however, it is worth noting that these flows were initially developed and demonstrated in static settings. Kingma et al. (2016) noted that sampling from an autoregressive Gaussian model is an invertible transform, resulting in a normalizing flow (Rippel & Adams, 2013; Dinh et al., 2015; 2017; Rezende & Mohamed, 2015). Flow-based models transform between simple and complex probability distributions while maintaining exact likelihood evaluation. To see their connection to autoregressive models, we can express sampling a Gaussian random variable, xt ∼ pθ(xt|x<t) (Eq. 2), using the reparameterization trick (Kingma & Welling, 2014; Rezende et al., 2014):\nxt = µθ(x<t) + σθ(x<t) yt, (5) where yt ∼ N (yt; 0, I) is an auxiliary random variable and denotes element-wise multiplication. Thus, xt is an invertible transform of yt, with the inverse given as\nyt = xt − µθ(x<t)\nσθ(x<t) , (6)\nwhere division is performed element-wise. The inverse transform in Eq. 6 acts to normalize (hence, normalizing flow) and therefore decorrelate x1:T . Given the functional mapping between yt and xt in Eq. 5, the change of variables formula converts between probabilities in each space:\nlog pθ(x1:T ) = log pθ(y1:T )− log ∣∣∣∣det(∂x1:T∂y1:T )∣∣∣∣ . (7) By the construction of Eqs. 5 and 6, the Jacobian in Eq. 7 is triangular, enabling efficient evaluation as the product of diagonal terms:\nlog ∣∣∣∣det(∂x1:T∂y1:T )∣∣∣∣ = T∑\nt=1 ∑ i log σθ,i(x<t), (8)\nwhere i denotes the observation dimension, e.g. pixel. For a Gaussian autoregressive model, pθ(y1:T ) = N (y1:T ; 0, I). With these components, the change of variables formula (Eq. 7) provides an equivalent method for sampling and evaluating the model, pθ(x1:T ), from Eqs. 1 and 2.\nWe can improve upon this simple set-up by chaining together multiple transforms, effectively resulting in a hierarchical autoregressive model. Letting ym1:T denote the variables after them\nth transform, the change of variables formula for M transforms is\nlog pθ(x1:T ) = log pθ(y M 1:T )− log ∣∣∣∣det(∂x1:T∂y11:T )∣∣∣∣−M−1∑\nm=1\nlog ∣∣∣∣det( ∂ym1:T∂ym+11:T )∣∣∣∣ . (9)\nAutoregressive flows were initially considered in the contexts of variational inference (Kingma et al., 2016) and generative modeling (Papamakarios et al., 2017). These approaches are, in fact, generalizations of previous approaches with affine transforms (Dinh et al., 2015; 2017). While autoregressive flows are well-suited for sequential data, as mentioned previously, these approaches, as well as many recent approaches (Huang et al., 2018; Oliva et al., 2018; Kingma & Dhariwal, 2018), were initially applied in static settings, such as images.\nMore recent works have started applying flow-based models to sequential data. For instance, van den Oord et al. (2018) and Ping et al. (2019) distill autoregressive speech models into flow-based models.\nPrenger et al. (2019) and Kim et al. (2019) instead train these models directly. Kumar et al. (2019) use a flow to model individual video frames, with an autoregressive prior modeling dynamics across time steps. Rhinehart et al. (2018) and Rhinehart et al. (2019) use autoregressive flows for modeling vehicle motion, and Henter et al. (2019) use flows for motion synthesis with motion-capture data. Ziegler & Rush (2019) learn distributions over sequences of discrete observations (e.g., text) by using flows to model dynamics of continuous latent variables. 
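To make Eqs. 5-8 concrete, the following minimal sketch inverts one affine autoregressive transform over a sequence and accumulates the log-determinant; `mu_fn` and `sigma_fn` are hypothetical callables mapping the past observations x<t to µθ(x<t) and σθ(x<t) (and are assumed to handle an empty history at t = 0, e.g. by returning defaults).

```python
import numpy as np

def inverse_and_logdet(x, mu_fn, sigma_fn):
    # x: array of shape (T, D); returns y (Eq. 6) and log|det(dx/dy)| (Eq. 8).
    T = x.shape[0]
    y = np.empty_like(x)
    log_det = 0.0
    for t in range(T):
        mu, sigma = mu_fn(x[:t]), sigma_fn(x[:t])
        y[t] = (x[t] - mu) / sigma         # Eq. 6: normalize (inverse transform)
        log_det += np.sum(np.log(sigma))   # Eq. 8: sum of log-scales
    return y, log_det                      # log p(x) = log p(y) - log_det (Eq. 7)
```

Sampling (Eq. 5) runs the same recursion forward, generating xt from yt one step at a time.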
Like these recent works, we apply flow-based models to sequential data. However, we demonstrate that autoregressive flows can serve as a useful, general-purpose technique for improving sequence modeling as components of sequential latent variable models. To the best of our knowledge, our work is the first to focus on the aspect of using flows to pre-process sequential data to improve downstream dynamics modeling.\nFinally, we utilize affine flows (Eq. 5) in this work. This family of flows includes methods like NICE (Dinh et al., 2015), RealNVP (Dinh et al., 2017), IAF (Kingma et al., 2016), MAF (Papamakarios et al., 2017), and GLOW (Kingma & Dhariwal, 2018). However, there has been recent work in non-affine flows (Huang et al., 2018; Jaini et al., 2019; Durkan et al., 2019), which may offer further flexibility. We chose to investigate affine flows for their relative simplicity and connections to previous techniques; however, the use of non-affine flows could result in additional improvements." }, { "heading": "3 METHOD", "text": "We now describe our approach for sequence modeling with autoregressive flows. Although the core idea is a relatively straightforward extension of autoregressive flows, we show how this simple technique can be incorporated within autoregressive latent variable models (Section 2.2), providing a general-purpose approach for improving dynamics modeling. We first motivate the benefits of affine autoregressive transforms in the context of sequence modeling with a simple example." }, { "heading": "3.1 A MOTIVATING EXAMPLE", "text": "Consider the discrete dynamical system defined by the following set of equations:\nxt = xt−1 + ut, (10)\nut = ut−1 + wt, (11)\nwhere wt ∼ N (wt; 0,Σ). We can express xt and ut in probabilistic terms as\nxt ∼ N (xt; xt−1 + ut−1,Σ), (12)\nut ∼ N (ut; ut−1,Σ). (13)\nPhysically, this describes the noisy dynamics of a particle with momentum and mass 1, subject to Gaussian noise. That is, x represents position, u represents velocity, and w represents stochastic forces. If we consider the dynamics at the level of x, we can use the fact that ut−1 = xt−1 − xt−2 to write\np(xt|xt−1,xt−2) = N (xt; xt−1 + xt−1 − xt−2,Σ). (14)\nThus, we see that in the space of x, the dynamics are second-order Markov, requiring knowledge of the past two time steps. However, at the level of u (Eq. 13), the dynamics are first-order Markov, requiring only the previous time step. Yet, note that ut is, in fact, an affine autoregressive transform of xt, because ut = xt − xt−1 is a special case of the general form (xt − µθ(x<t)) / σθ(x<t). In Eq. 10, we see that the Jacobian of this transform is ∂xt/∂ut = I, so, from the change of variables formula, we have p(xt|xt−1,xt−2) = p(ut|ut−1). In other words, an affine autoregressive transform has allowed us to convert a second-order Markov system into a first-order Markov system, thereby simplifying the dynamics. Continuing this process to move to wt = ut − ut−1, we arrive at a representation that is entirely temporally decorrelated, i.e. no dynamics, because p(wt) = N (wt; 0,Σ). A sample from this system is shown in Figure 2, illustrating this process of temporal decorrelation.\nThe special case of modeling temporal changes, ut = xt − xt−1 = ∆xt, is a common pre-processing technique; for recent examples, see Deisenroth et al. (2013); Chua et al. (2018); Kumar et al. (2019). In fact, ∆xt is a finite-differences approximation of the generalized velocity (Friston, 2008) of x, a classic modeling technique in dynamical models and control (Kalman et al., 1960), redefining the state-space to be first-order Markov. Affine autoregressive flows offer a generalization of this technique, allowing for non-linear transform parameters and flows consisting of multiple transforms, with each transform serving to successively decorrelate the input sequence in time. In analogy with generalized velocity, each transform serves as a moving reference frame, allowing us to focus model capacity on less correlated fluctuations rather than the highly temporally correlated raw signal." },
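This decorrelation can be checked numerically with a small simulation of Eqs. 10-11 (a sketch; a one-dimensional case with Σ = 0.1²·I is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
w = 0.1 * rng.standard_normal(T)  # w_t ~ N(0, Sigma)
u = np.cumsum(w)                  # u_t = u_{t-1} + w_t
x = np.cumsum(u)                  # x_t = x_{t-1} + u_t

def lag1_corr(s):
    return np.corrcoef(s[:-1], s[1:])[0, 1]

print(lag1_corr(x))                 # near 1: positions are highly correlated
print(lag1_corr(np.diff(x)))        # one difference recovers u, still correlated
print(lag1_corr(np.diff(x, n=2)))   # two differences recover w: near 0
```

Each difference plays the role of one inverse affine transform with µθ(x<t) = xt−1 and σθ = 1.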
{ "heading": "3.2 AUTOREGRESSIVE FLOWS ON SEQUENCES", "text": "We apply autoregressive flows across time steps within a sequence, x1:T ∈ R^{T×D}. That is, the observation at each time step, xt ∈ R^D, is modeled as an autoregressive function of past observations, x<t ∈ R^{(t−1)×D}, and a random variable, yt ∈ R^D (Figure 3a). We consider flows of the form given in Eq. 5, where µθ(x<t) and σθ(x<t) are parameterized by neural networks. In constructing chains of flows, we denote the shift and scale functions at the m-th transform as µ^m_θ(·) and σ^m_θ(·) respectively. We then calculate y^m using the corresponding inverse transform:\ny^m_t = (y^{m−1}_t − µ^m_θ(y^{m−1}_{<t})) / σ^m_θ(y^{m−1}_{<t}). (15)\nAfter the final (M-th) transform, the base distribution, pθ(y^M_1:T), can range from a simple distribution, e.g. N (y^M_1:T; 0, I), in the case of a flow-based model, up to more complicated distributions in the case of other latent variable models (Section 3.3). While flows of greater depth can improve model capacity, such transforms have limiting drawbacks. In particular, 1) they require that the outputs maintain the same dimensionality as the inputs, R^{T×D}, 2) they are restricted to affine transforms, and 3) these transforms operate element-wise within a time step. As we discuss in the next section, we can combine autoregressive flows with non-invertible sequential latent variable models (Section 2.2), which do not have these restrictions." }, { "heading": "3.3 LATENT VARIABLE MODELS WITH AUTOREGRESSIVE FLOWS", "text": "We can use autoregressive flows as a component in parameterizing the dynamics within autoregressive latent variable models. To simplify notation, we consider this set-up with a single transform, but a chain of multiple transforms (Section 3.2) can be applied within each flow." }, { "heading": "3.3.1 MODEL FORMULATION", "text": "Let us consider parameterizing the conditional likelihood, pθ(xt|x<t, z≤t), within a latent variable model using an autoregressive flow (Figure 3b). To do so, we express a base conditional distribution for yt, denoted as pθ(yt|y<t, z≤t), which is then transformed into xt via the affine transform in Eq. 5. We have written pθ(yt|y<t, z≤t) with conditioning on y<t; however, by removing temporal correlations to arrive at y1:T, our hope is that these dynamics can be primarily modeled through z1:T. Using the change of variables formula, we can express the latent variable model’s log-joint distribution as\nlog pθ(x1:T, z1:T) = log pθ(y1:T, z1:T) − log |det(∂x1:T/∂y1:T)|, (16)\nwhere the joint distribution over y1:T and z1:T, in general, is given as\npθ(y1:T, z1:T) = T∏ t=1 pθ(yt|y<t, z≤t)pθ(zt|y<t, z<t). (17)\nNote that the latent prior, pθ(zt|y<t, z<t), can be equivalently conditioned on x<t or y<t, as there is a one-to-one mapping between these variables. 
We could also consider parameterizing the prior with autoregressive flows, or even constructing a hierarchy of latent variables. However, we leave these extensions for future work, opting to first introduce the basic concept here." }, { "heading": "3.3.2 VARIATIONAL INFERENCE & LEARNING", "text": "Training a latent variable model via maximum likelihood requires marginalizing over the latent variables to evaluate the marginal log-likelihood of observations: log pθ(x1:T) = log ∫ pθ(x1:T, z1:T) dz1:T. This marginalization is typically intractable, requiring the use of approximate inference methods. Variational inference (Jordan et al., 1998) introduces an approximate posterior distribution, q(z1:T|x1:T), which provides a lower bound on the marginal log-likelihood:\nlog pθ(x1:T) ≥ L(x1:T; q, θ) ≡ Eq(z1:T|x1:T) [log pθ(x1:T, z1:T) − log q(z1:T|x1:T)], (18)\nreferred to as the evidence lower bound (ELBO). Often, we assume q(z1:T|x1:T) is a structured distribution, attempting to explicitly capture the model’s temporal dependencies across z1:T. We can consider both filtering and smoothing inference; however, we focus on the case of filtering, with\nq(z1:T|x1:T) = T∏ t=1 q(zt|x≤t, z<t). (19)\nThe conditional dependencies in q can be modeled through a direct, amortized function, e.g. using a recurrent network (Chung et al., 2015), or through optimization (Marino et al., 2018). Again, note that we can condition q on x≤t or y≤t, as there exists a one-to-one mapping between these variables. With the model’s joint distribution (Eq. 16) and approximate posterior (Eq. 19), we can then evaluate the ELBO. We derive the ELBO for this set-up in Appendix A, yielding\nL = T∑ t=1 Eq(z≤t|y≤t) [ log pθ(yt|y<t, z≤t) − log (q(zt|y≤t, z<t) / pθ(zt|y<t, z<t)) − log |det(∂xt/∂yt)| ]. (20)\nThis expression makes it clear that a flow-based conditional likelihood amounts to learning a latent variable model on top of the intermediate learned space provided by y, with an additional factor in the objective penalizing the scaling between x and y." },
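For one Monte Carlo sample of zt from q, each per-step term of Eq. 20 reduces to simple arithmetic; the sketch below assumes the three log-densities have already been evaluated at that sample, and that `log_sigma_t` holds log σθ for frame t, whose sum equals log |det(∂xt/∂yt)| for the affine transform.

```python
import numpy as np

def elbo_term_t(log_p_y_t, log_q_z_t, log_p_z_t, log_sigma_t):
    # Reconstruction in y-space, a KL-like ratio for z_t, and the flow's
    # log-determinant penalty from Eq. 20; summing over t and averaging over
    # samples from q yields the full ELBO estimate.
    return log_p_y_t - (log_q_z_t - log_p_z_t) - np.sum(log_sigma_t)
```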
{ "heading": "4 EVALUATION", "text": "We demonstrate and evaluate the proposed framework on three benchmark video datasets: Moving MNIST (Srivastava et al., 2015), KTH Actions (Schuldt et al., 2004), and BAIR Robot Pushing (Ebert et al., 2017). Experimental setups are described in Section 4.1, followed by a set of qualitative experiments in Section 4.2. In Section 4.3, we provide quantitative comparisons across different model classes. Further implementation details and visualizations can be found in Appendix B. Anonymized code is available at the following link." }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "We implement three classes of models: 1) standalone autoregressive flow-based models, 2) sequential latent variable models, and 3) sequential latent variable models with flow-based conditional likelihoods. Flows are implemented with convolutional networks, taking in a fixed window of previous frames and outputting shift, µθ, and scale, σθ, parameters. The sequential latent variable models consist of convolutional and recurrent networks for both the encoder and decoder networks, following the basic form of architecture that has been previously employed in video modeling (Denton & Fergus, 2018; Ha & Schmidhuber, 2018; Hafner et al., 2019).\nIn the case of a regular sequential latent variable model, the conditional likelihood is a Gaussian that models the frame, xt. In the case of a flow-based conditional likelihood, we model the noise variable, yt, with a Gaussian. In our experiments, the flow components have vastly fewer parameters than the sequential latent variable models. In addition, for models with flow-based conditional likelihoods, we restrict the number of parameters to enable a fairer comparison. These models have fewer parameters than the baseline sequential latent variable models (with non-flow-based conditional likelihoods). See Appendix B for parameter comparisons and architecture details. Finally, flow-based conditional likelihoods only add a constant computational cost per time step, requiring a single forward pass per time step for both evaluation and generation." }, { "heading": "4.2 QUALITATIVE EVALUATION", "text": "To better understand the behavior of autoregressive flows on sequences, we visualize each component as an image. In Figure 4, we show the data, xt, shift, µθ, scale, σθ, and noise variable, yt, for standalone flow-based models (left) and flow-based conditional likelihoods (right) on random sequences from the Moving MNIST and BAIR Robot Pushing datasets. Similar visualizations for KTH Actions are shown in Figure 8 in the Appendix. In Figure 9 in the Appendix, we also visualize these quantities for a flow-based conditional likelihood with two transforms.\nFrom these visualizations, we can make a few observations. The shift parameters (second row) tend to capture the static background, blurring around regions of uncertainty. The scale parameters (third row), on the other hand, tend to focus on regions of higher uncertainty, as expected. The resulting noise variables (bottom row) display any remaining structure not modeled by the flow. In comparing standalone flow-based models with flow-based conditional likelihoods in sequential latent variable models, we see that the latter qualitatively contain more structure in y, e.g. dots (Figure 4b, fourth row) or sharper edges (Figure 4d, fourth row). This is expected, as the noise distribution is more expressive in this case. With a relatively simple dataset, like Moving MNIST, a single flow can reasonably decorrelate the input, yielding white noise images (Figure 4a, fourth row). However, with natural image datasets like KTH Actions and BAIR Robot Pushing, a large degree of structure is still present in these images, motivating the use of additional model capacity to model this signal. In Appendix C.1, we quantify the degree of temporal decorrelation performed by flow-based models by evaluating the empirical correlation between frames at successive time steps for both the data, x, and the noise variables, y. In Appendix C.2, we provide additional qualitative results." }, { "heading": "4.3 QUANTITATIVE EVALUATION", "text": "Log-likelihood results for each model class are shown in Table 1. We report the average test log-likelihood in nats per pixel per channel for flow-based models and the lower bound on this quantity for sequential latent variable models. Standalone flow-based models perform surprisingly well, even outperforming sequential latent variable models in some cases. Increasing flow depth from 1 to 2 generally results in improved performance. Sequential latent variable models with flow-based conditional likelihoods outperform their baseline counterparts, despite having fewer parameters. One reason for this disparity is overfitting. 
Comparing with the training performance reported in Table 3, we see that sequential latent variable models with flow-based conditional likelihoods overfit less. This is particularly apparent on KTH Actions, which contains training and test sets with a high degree of separation (different identities and activities). This suggests that removing static components, like backgrounds, yields a reconstruction space that is better for generalization.\nThe quantitative results in Table 1 are for a representative sequential latent variable model with a standard convolutional encoder-decoder architecture and fully-connected latent variables. However, many previous works do not evaluate proper lower bounds on log-likelihood, using techniques like down-weighting KL divergences (Denton & Fergus, 2018; Ha & Schmidhuber, 2018; Lee et al., 2018). Indeed, Marino et al. (2018) train SVG (Denton & Fergus, 2018) with a proper lower bound and report a lower bound of −2.86 nats per pixel on KTH Actions, on par with our results. Kumar et al. (2019) report log-likelihood results on BAIR Robot Pushing, obtaining −1.3 nats per pixel, substantially higher than our results. However, their model is significantly larger than the models presented here, consisting of 3 levels of latent variables, each containing 24 steps of flows." }, { "heading": "5 CONCLUSION", "text": "We have presented a technique for improving sequence modeling based on autoregressive normalizing flows. This technique uses affine transforms to temporally decorrelate sequential data, thereby simplifying the estimation of dynamics. We have drawn connections to classical approaches, which involve modeling temporal derivatives. Finally, we have empirically shown how this technique can improve sequential latent variable models." }, { "heading": "A LOWER BOUND DERIVATION", "text": "Consider the model defined in Section 3.3.1, with the conditional likelihood parameterized with autoregressive flows. That is, we parameterize\nxt = µθ(x<t) + σθ(x<t) ⊙ yt, (21)\nyielding\npθ(xt|x<t, z≤t) = pθ(yt|y<t, z≤t) |det(∂xt/∂yt)|^{−1}. (22)\nThe joint distribution over all time steps is then given as\npθ(x1:T, z1:T) = T∏ t=1 pθ(xt|x<t, z≤t)pθ(zt|x<t, z<t) (23)\n= T∏ t=1 pθ(yt|y<t, z≤t) |det(∂xt/∂yt)|^{−1} pθ(zt|x<t, z<t). (24)\nTo perform variational inference, we consider a filtering approximate posterior of the form\nq(z1:T|x1:T) = T∏ t=1 q(zt|x≤t, z<t). (25)\nWe can then plug these expressions into the evidence lower bound:\nL ≡ Eq(z1:T|x1:T) [log pθ(x1:T, z1:T) − log q(z1:T|x1:T)] (26)\n= Eq(z1:T|x1:T) [ log ( T∏ t=1 pθ(yt|y<t, z≤t) |det(∂xt/∂yt)|^{−1} pθ(zt|x<t, z<t) ) − log ( T∏ t=1 q(zt|x≤t, z<t) ) ] (27)\n= Eq(z1:T|x1:T) [ T∑ t=1 log pθ(yt|y<t, z≤t) − log (q(zt|x≤t, z<t) / pθ(zt|x<t, z<t)) − log |det(∂xt/∂yt)| ]. (28)\nFinally, in the filtering setting, we can rewrite the expectation, bringing it inside of the sum (see Gemici et al. (2017); Marino et al. (2018)):\nL = T∑ t=1 Eq(z≤t|x≤t) [ log pθ(yt|y<t, z≤t) − log (q(zt|x≤t, z<t) / pθ(zt|x<t, z<t)) − log |det(∂xt/∂yt)| ]. (29)\nBecause there exists a one-to-one mapping between x1:T and y1:T, we can equivalently condition the approximate posterior and the prior on y, i.e.\nL = T∑ t=1 Eq(z≤t|y≤t) [ log pθ(yt|y<t, z≤t) − log (q(zt|y≤t, z<t) / pθ(zt|y<t, z<t)) − log |det(∂xt/∂yt)| ]. (30)" }, { "heading": "B EXPERIMENT DETAILS", "text": "We store a fixed number of past frames in the buffer of each transform, to generate the shift and scale for the transform. 
For each stack of flow, 4 convolutional layers with kernel size (3, 3), stride 1, and padding 1 are first applied to each data observation in the buffer, preserving the data shape. The outputs are concatenated along the channel dimension and go through another four convolutional layers, also with kernel size (3, 3), stride 1, and padding 1. Finally, separate convolutional layers with the same kernel size, stride, and padding are used to generate the shift and scale, respectively.\nFor latent variable models, we use a DC-GAN structure (Radford et al., 2015), with 4 convolutional layers of kernel size (4, 4), stride 2, and padding 1, followed by another convolutional layer of kernel size (4, 4), stride 1, and no padding to encode the data. The encoded data is sent to an LSTM (Hochreiter & Schmidhuber, 1997) followed by fully connected layers to generate the mean and log-variance for estimating the approximate posterior distribution of the latent variable, zt. The conditional prior distribution is modeled with another LSTM followed by fully connected layers, taking the previous latent variable as input. The decoder takes the inverse structure of the encoder. In the SLVM, we use 2 LSTM layers for modeling the conditional prior and approximate posterior distributions, while in the combined model we use 1 LSTM layer for each.\nWe use the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 1 × 10−4 to train all the models. For Moving MNIST, we use a batch size of 16 and train for 200,000 iterations for latent variable models and 100,000 iterations for flow-based models and latent variable models with flow-based likelihoods. For BAIR Robot Pushing, we use a batch size of 8 and train for 200,000 iterations for all models. For the KTH dataset, we use a batch size of 8 and train for 90,000 iterations for all models. Batch norm (Ioffe & Szegedy, 2015) is applied to all convolutional layers that do not directly generate distribution or transform parameters. We randomly crop sequences of length 13 from all sequences and evaluate on the last 10 frames. (For 2-flow models, we crop sequences of length 16 to fill up all buffers.) Anonymized code is available at the following link.\nFigure 5: Implementation visualization of the autoregressive flow. All convolution layers are set up with kernel size (3, 3), stride 1, and padding 1." }, { "heading": "C ADDITIONAL EXPERIMENTAL RESULTS", "text": "" }, { "heading": "C.1 QUANTITATIVE EVALUATION OF TEMPORAL DECORRELATION", "text": "The qualitative results in Figures 4 and 8 demonstrate that flows are capable of removing much of the structure of the observations, resulting in whitened noise images. To quantitatively confirm the temporal decorrelation resulting from this process, we evaluate the empirical correlation between successive frames, averaged over spatial locations and channels, for the data observations and noise variables. This is an average normalized version of the auto-covariance of each signal with a time delay of 1 time step. Specifically, we estimate the temporal correlation as\ncorr_x ≡ (1/(C·W·H)) ∑^{H,W,C}_{i,j,k} E_{x^{(i,j,k)}_t, x^{(i,j,k)}_{t+1} ∼ D} [ (x^{(i,j,k)}_t − µ^{(i,j,k)})(x^{(i,j,k)}_{t+1} − µ^{(i,j,k)}) / (σ^{(i,j,k)})² ], (31)\nwhere x^{(i,j,k)} denotes the value of the image at location (i, j) and channel k, µ^{(i,j,k)} denotes the mean of this dimension, and σ^{(i,j,k)} denotes the standard deviation of this dimension. H, W, and C respectively denote the height, width, and number of channels of the observations. 
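A direct reading of Eq. 31 as a sketch, estimating µ and σ per pixel and channel from the sequence itself; x is assumed to be an array of shape (T, H, W, C).

```python
import numpy as np

def temporal_correlation(x):
    mu = x.mean(axis=0, keepdims=True)
    sigma = x.std(axis=0, keepdims=True) + 1e-8  # guard against constant pixels
    xc = (x - mu) / sigma
    # E[(x_t - mu)(x_{t+1} - mu)] / sigma^2, averaged over H, W, C (Eq. 31)
    return float((xc[:-1] * xc[1:]).mean())
```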
We evaluated this quantity for data examples, x, and noise variables, y, for SLVM w/ 1-AF. The results for training sequences are shown in Table 4. In Figure 7, we plot this quantity during training for KTH Actions. We see that flows do indeed result in a decrease in temporal correlation. Note that because correlation is a measure of linear dependence, one cannot conclude from these results alone that the flows have resulted in simplified temporal structure. However, these results agree with the qualitative and quantitative results presented in Section 4, suggesting that autoregressive flows can yield sequences with simpler dynamics.\nC.2 ADDITIONAL QUALITATIVE RESULTS" }
]
2019
IMPROVING SEQUENTIAL LATENT VARIABLE MODELS
SP:ce4cfc10fe405005267e62712d939275d2847128
[ "In this paper, an unbiased estimator for expectations over discrete random variables is developed based on a sampling-without-replacement strategy. The proposed estimator is shown to be a Rao-Blackwellization of three existing unbiased estimators with guaranteed reduction in estimation variance. The connections of the method to other gradient estimators are discussed. Experimental results on several toy and real-data DL/RL problems are reported to demonstrate the applicability of the proposed estimators in the practice of machine learning. ", "This paper introduces an gradient estimator for loss functions that are expectations over discrete random variables. The basic idea is that an estimator over a discrete distribution can be Rao-Blackwellized by conditioning on the event that the discrete realization was produced by being the first sample drawn from an unordered set of samples drawn with replacement. Much of the paper is spent showing how this Rao-Blackwellized estimator can be computed in practice and how it compares to other known estimators." ]
We derive an unbiased estimator for expectations over discrete random variables based on sampling without replacement, which reduces variance as it avoids duplicate samples. We show that our estimator can be derived as the Rao-Blackwellization of three different estimators. Combining our estimator with REINFORCE, we obtain a policy gradient estimator and we reduce its variance using a built-in control variate which is obtained without additional model evaluations. The resulting estimator is closely related to other gradient estimators. Experiments with a toy problem, a categorical Variational Auto-Encoder and a structured prediction problem show that our estimator is the only estimator that is consistently among the best estimators in both high and low entropy settings.
[ { "affiliations": [], "name": "Wouter Kool" }, { "affiliations": [], "name": "Herke van Hoof" } ]
[ { "authors": [ "Dzmitry Bahdanau", "Philemon Brakel", "Kelvin Xu", "Anirudh Goyal", "Ryan Lowe", "Joelle Pineau", "Aaron Courville", "Yoshua Bengio" ], "title": "An actor-critic algorithm for sequence prediction", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Irwan Bello", "Hieu Pham", "Quoc V Le", "Mohammad Norouzi", "Samy Bengio" ], "title": "Neural combinatorial optimization with reinforcement learning", "venue": "arXiv preprint arXiv:1611.09940,", "year": 2016 }, { "authors": [ "Yoshua Bengio", "Nicholas Léonard", "Aaron Courville" ], "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", "venue": "arXiv preprint arXiv:1308.3432,", "year": 2013 }, { "authors": [ "Randal Douc", "Olivier Cappé" ], "title": "Comparison of resampling schemes for particle filtering", "venue": "ISPA", "year": 2005 }, { "authors": [ "Nick Duffield", "Carsten Lund", "Mikkel Thorup" ], "title": "Priority sampling for estimation of arbitrary subset sums", "venue": "Journal of the ACM (JACM),", "year": 2007 }, { "authors": [ "Sergey Edunov", "Myle Ott", "Michael Auli", "David Grangier" ], "title": "Classical structured prediction losses for sequence to sequence learning", "venue": "In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers),", "year": 2018 }, { "authors": [ "Paul Fearnhead", "Peter Clifford" ], "title": "On-line inference for hidden markov models via particle filters", "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology),", "year": 2003 }, { "authors": [ "Will Grathwohl", "Dami Choi", "Yuhuai Wu", "Geoffrey Roeder", "David Duvenaud" ], "title": "Backpropagation through the void: Optimizing control variates for black-box gradient estimation", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Karol Gregor", "Ivo Danihelka", "Andriy Mnih", "Charles Blundell", "Daan Wierstra" ], "title": "Deep autoregressive networks", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Aditya Grover", "Eric Wang", "Aaron Zweig", "Stefano Ermon" ], "title": "Stochastic optimization of sorting networks via continuous relaxations", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jiatao Gu", "Daniel Jiwoong Im", "Victor OK Li" ], "title": "Neural machine translation with Gumbel-greedy decoding", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence (AAAI),", "year": 2018 }, { "authors": [ "Shixiang Gu", "Sergey Levine", "Ilya Sutskever", "Andriy Mnih" ], "title": "Muprop: Unbiased backpropagation for stochastic neural networks", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Di He", "Yingce Xia", "Tao Qin", "Liwei Wang", "Nenghai Yu", "Tie-Yan Liu", "Wei-Ying Ma" ], "title": "Dual learning for machine translation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with gumbel-softmax", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Carolyn Kim", "Ashish Sabharwal", "Stefano Ermon" ], "title": "Exact sampling with integer linear programs and random perturbations", "venue": "In 
Thirtieth AAAI Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational Bayes", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Wouter Kool", "Herke van Hoof", "Max Welling" ], "title": "Attention, learn to solve routing problems", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Wouter Kool", "Herke van Hoof", "Max Welling" ], "title": "Buy 4 reinforce samples, get a baseline for free! In Deep Reinforcement Learning Meets Structured Prediction", "venue": "Workshop at the International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Wouter Kool", "Herke Van Hoof", "Max Welling" ], "title": "Stochastic beams and where to find them: The gumbel-top-k trick for sampling sequences without replacement", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Hugo Larochelle", "Iain Murray" ], "title": "The neural autoregressive distribution estimator", "venue": "In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics,", "year": 2011 }, { "authors": [ "Rémi Leblond", "Jean-Baptiste Alayrac", "Anton Osokin", "Simon Lacoste-Julien" ], "title": "Searnn: Training RNNs with global-local losses", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "EL Lehmann", "Henry Scheffé" ], "title": "Completeness, similar regions, and unbiased estimation", "venue": "Part i. 
Sankhyā: The Indian Journal of Statistics,", "year": 1950 }, { "authors": [ "Chen Liang", "Mohammad Norouzi", "Jonathan Berant", "Quoc V Le", "Ni Lao" ], "title": "Memory augmented policy optimization for program synthesis and semantic parsing", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Runjing Liu", "Jeffrey Regier", "Nilesh Tripuraneni", "Michael Jordan", "Jon Mcauliffe" ], "title": "RaoBlackwellized stochastic gradients for discrete distributions", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Guy Lorberbom", "Andreea Gane", "Tommi Jaakkola", "Tamir Hazan" ], "title": "Direct optimization through argmax for discrete variational auto-encoder", "venue": "arXiv preprint arXiv:1806.02867,", "year": 2018 }, { "authors": [ "Guy Lorberbom", "Chris J Maddison", "Nicolas Heess", "Tamir Hazan", "Daniel Tarlow" ], "title": "Direct policy gradients: Direct optimization of policies in discrete action spaces", "venue": null, "year": 1906 }, { "authors": [ "R Duncan Luce" ], "title": "Individual choice behavior: A theoretical analysis", "venue": "John Wiley,", "year": 1959 }, { "authors": [ "Chris J Maddison", "Andriy Mnih", "Yee Whye Teh" ], "title": "The concrete distribution: A continuous relaxation of discrete random variables", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Andriy Mnih", "Karol Gregor" ], "title": "Neural variational inference and learning in belief networks", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Andriy Mnih", "Danilo Rezende" ], "title": "Variational inference for Monte Carlo objectives", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "MN Murthy" ], "title": "Ordered and unordered estimators in sampling without replacement", "venue": "Sankhyā: The Indian Journal of Statistics (1933-1960),", "year": 1957 }, { "authors": [ "Renato Negrinho", "Matthew Gormley", "Geoffrey J Gordon" ], "title": "Learning beam search policies via imitation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Mohammad Norouzi", "Samy Bengio", "Navdeep Jaitly", "Mike Schuster", "Yonghui Wu", "Dale Schuurmans" ], "title": "Reward augmented maximum likelihood for neural structured prediction", "venue": "In Advances In Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "John Paisley", "David M Blei", "Michael I Jordan" ], "title": "Variational Bayesian inference with stochastic search", "venue": "In International Conference on Machine Learning,", "year": 2012 }, { "authors": [ "Robin L Plackett" ], "title": "The analysis of permutations", "venue": "Journal of the Royal Statistical Society: Series C (Applied Statistics),", "year": 1975 }, { "authors": [ "Des Raj" ], "title": "Some estimators in sampling with varying probabilities without replacement", "venue": "Journal of the American Statistical Association,", "year": 1956 }, { "authors": [ "Rajesh Ranganath", "Sean Gerrish", "David Blei" ], "title": "Black box variational inference", "venue": "In Artificial Intelligence and Statistics,", "year": 2014 }, { "authors": [ "Marc’Aurelio Ranzato", "Sumit Chopra", "Michael Auli", "Wojciech Zaremba" ], "title": "Sequence level training with recurrent neural networks", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Steven J Rennie", 
"Etienne Marcheret", "Youssef Mroueh", "Jerret Ross", "Vaibhava Goel" ], "title": "Self-critical sequence training for image captioning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Geoffrey Roeder", "Yuhuai Wu", "David K Duvenaud" ], "title": "Sticking the landing: Simple, lower-variance gradient estimators for variational inference", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ruslan Salakhutdinov", "Iain Murray" ], "title": "On the quantitative analysis of deep belief networks", "venue": "In International Conference on Machine Learning,", "year": 2008 }, { "authors": [ "John Schulman", "Nicolas Heess", "Theophane Weber", "Pieter Abbeel" ], "title": "Gradient estimation using stochastic computation graphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Shiqi Shen", "Yong Cheng", "Zhongjun He", "Wei He", "Hua Wu", "Maosong Sun", "Yang Liu" ], "title": "Minimum risk training for neural machine translation", "venue": "In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2016 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction", "venue": "MIT press,", "year": 2018 }, { "authors": [ "Michalis K Titsias", "Miguel Lázaro-Gredilla" ], "title": "Local expectation gradients for black box variational inference", "venue": "In Advances in Neural Information Processing Systems-Volume", "year": 2015 }, { "authors": [ "George Tucker", "Andriy Mnih", "Chris J Maddison", "John Lawson", "Jascha Sohl-Dickstein" ], "title": "Rebar: Low-variance, unbiased gradient estimates for discrete latent variable models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Tim Vieira" ], "title": "Estimating means in a finite universe, 2017", "venue": "URL https://timvieira.github", "year": 2017 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "John I Yellott" ], "title": "The relationship between Luce’s choice axiom, Thurstone’s theory of comparative judgment, and the double exponential distribution", "venue": "Journal of Mathematical Psychology,", "year": 1977 }, { "authors": [ "Mingzhang Yin", "Yuguang Yue", "Mingyuan Zhou" ], "title": "Arsm: Augment-reinforce-swap-merge estimator for gradient backpropagation through categorical variables", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Maddison" ], "title": "B COMPUTATION OF p(S), pD\\C(S \\ C) AND R(S, s) We can sample the set S from the Plackett-Luce distribution using the Gumbel-Top-k trick by drawing Gumbel variables Gφi ∼ Gumbel(φi) for each element and returning the indices of the k", "venue": null, "year": 2014 }, { "authors": [ "Liu" ], "title": "sum-and-sample estimator is unbiased for any set C", "venue": null, "year": 2019 }, { "authors": [ "Liu" ], "title": "2019), one can trade off the number of summed terms and number", "venue": null, "year": 2019 }, { 
"authors": [], "title": "EXPERIMENTAL DETAILS We use the code6 by Yin et al. (2019) to reproduce their categorical VAE experiment, of which we include details here for self-containment. The dataset is MNIST, statically binarized by thresholding", "venue": null, "year": 2019 }, { "authors": [ "Kool" ], "title": "locations) and the decoder produces a tour, which is sequence of nodes, selecting one note at the time using an attention mechanism, and uses this autoregressively as input to select the next node", "venue": "We use the code by Kool et al", "year": 2019 }, { "authors": [ "Kool" ], "title": "2019a)), and minimize the expected length of a tour predicted by the model, using different gradient estimators. We did not do any hyperparameter optimization and used the exact same training details, using the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 10−4 (no decay) for 100 epochs for all estimators. For the baselines, we used the same batch size of 512, but for estimators that use k = 4 samples, we used a batch size", "venue": null, "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Put replacement in your basement! We derive the unordered set estimator1: an unbiased (gradient) estimator for expectations over discrete random variables based on (unordered sets of) samples without replacement. In particular, we consider the problem of estimating (the gradient of) the expectation of f(x) where x has a discrete distribution p over the domain D, i.e.\nEx∼p(x)[f(x)] = ∑\nx∈D p(x)f(x). (1)\nThis expectation comes up in reinforcement learning, discrete latent variable modelling (e.g. for compression), structured prediction (e.g. for translation), hard attention and many other tasks that use models with discrete operations in their computational graphs (see e.g. Jang et al. (2016)). In general, x has structure (such as a sequence), but we can treat it as a ‘flat’ distribution, omitting the bold notation, so x has a categorical distribution over D given by p(x), x ∈ D. Typically, the distribution has parameters θ, which are learnt through gradient descent. This requires estimating the gradient ∇θEx∼pθ(x)[f(x)], using a set of samples S. A gradient estimate e(S) is unbiased if\nES [e(S)] = ∇θEx∼pθ(x)[f(x)]. (2)\nThe samples S can be sampled independently or using alternatives such as stratified sampling which reduce variance to increase the speed of learning. In this paper, we derive an unbiased gradient estimator that reduces variance by avoiding duplicate samples, i.e. by sampling S without replacement. This is challenging as samples without replacement are dependent and have marginal distributions that are different from p(x). We further reduce the variance by deriving a built-in control variate, which maintains the unbiasedness and does not require additional samples.\nRelated work. Many algorithms for estimating gradients for discrete distributions have been proposed. A general and widely used estimator is REINFORCE (Williams, 1992). Biased gradients based on a continuous relaxations of the discrete distribution (known as Gumbel-Softmax or Concrete) were jointly introduced by Jang et al. (2016) and Maddison et al. (2016). These can be combined with the straight through estimator (Bengio et al., 2013) if the model requires discrete samples or be used to construct control variates for REINFORCE, as in REBAR (Tucker et al., 2017) or\n1Code available at https://github.com/wouterkool/estimating-gradients-without-replacement.\nRELAX (Grathwohl et al., 2018). Many other methods use control variates and other techniques to reduce the variance of REINFORCE (Paisley et al., 2012; Ranganath et al., 2014; Gregor et al., 2014; Mnih & Gregor, 2014; Gu et al., 2016; Mnih & Rezende, 2016).\nSome works rely on explicit summation of the expectation, either for the marginal distribution (Titsias & Lázaro-Gredilla, 2015) or globally summing some categories while sampling from the remainder (Liang et al., 2018; Liu et al., 2019). Other approaches use a finite difference approximation to the gradient (Lorberbom et al., 2018; 2019). Yin et al. 
Yin et al. (2019) introduced ARSM, which uses multiple model evaluations where the number adapts automatically to the uncertainty.

In the structured prediction setting, there are many algorithms for optimizing a quantity under a sequence of discrete decisions, using (weak) supervision, multiple samples (or deterministic model evaluations), or a combination of both (Ranzato et al., 2016; Shen et al., 2016; He et al., 2016; Norouzi et al., 2016; Bahdanau et al., 2017; Edunov et al., 2018; Leblond et al., 2018; Negrinho et al., 2018). Most of these algorithms are biased and rely on pretraining using maximum likelihood or gradually transitioning from supervised to reinforcement learning. Using Gumbel-Softmax based approaches in a sequential setting is difficult, as the bias accumulates because of mixing errors (Gu et al., 2018)." }, { "heading": "2 PRELIMINARIES", "text": "Throughout this paper, we will denote with $B^k$ an ordered sample without replacement of size $k$ and with $S^k$ an unordered sample (of size $k$) from the categorical distribution $p$.

Restricted distribution. When sampling without replacement, we remove the set $C \subset D$ already sampled from the domain, and we denote with $p^{D \setminus C}$ the distribution restricted to the domain $D \setminus C$:

$p^{D \setminus C}(x) = \frac{p(x)}{1 - \sum_{c \in C} p(c)}, \quad x \in D \setminus C.$ (3)

Ordered sample without replacement $B^k$. Let $B^k = (b_1, ..., b_k)$, $b_i \in D$ be an ordered sample without replacement, which is generated from the distribution $p$ as follows: first, sample $b_1 \sim p$, then sample $b_2 \sim p^{D \setminus \{b_1\}}$, $b_3 \sim p^{D \setminus \{b_1, b_2\}}$, etc., i.e. elements are sampled one by one without replacement. Using this procedure, $B^k$ can be seen as a (partial) ranking according to the Plackett-Luce model (Plackett, 1975; Luce, 1959), and the probability of obtaining the vector $B^k$ is

$p(B^k) = \prod_{i=1}^{k} p^{D \setminus B^{i-1}}(b_i) = \prod_{i=1}^{k} \frac{p(b_i)}{1 - \sum_{j<i} p(b_j)}.$ (4)

We can also restrict $B^k$ to the domain $D \setminus C$, which means that $b_i \notin C$ for $i = 1, ..., k$:

$p^{D \setminus C}(B^k) = \prod_{i=1}^{k} \frac{p^{D \setminus C}(b_i)}{1 - \sum_{j<i} p^{D \setminus C}(b_j)} = \prod_{i=1}^{k} \frac{p(b_i)}{1 - \sum_{c \in C} p(c) - \sum_{j<i} p(b_j)}.$ (5)

Unordered sample without replacement. Let $S^k \subseteq D$ be an unordered sample without replacement from the distribution $p$, which can be generated simply by generating an ordered sample and discarding the order. We denote elements in the sample with $s \in S^k$ (so without index) and we write $\mathcal{B}(S^k)$ as the set of all $k!$ permutations (orderings) $B^k$ that correspond to (could have generated) $S^k$. It follows that the probability of sampling $S^k$ is given by:

$p(S^k) = \sum_{B^k \in \mathcal{B}(S^k)} p(B^k) = \sum_{B^k \in \mathcal{B}(S^k)} \prod_{i=1}^{k} \frac{p(b_i)}{1 - \sum_{j<i} p(b_j)} = \prod_{s \in S^k} p(s) \cdot \sum_{B^k \in \mathcal{B}(S^k)} \prod_{i=1}^{k} \frac{1}{1 - \sum_{j<i} p(b_j)}.$ (6)

The last step follows since $B^k \in \mathcal{B}(S^k)$ is an ordering of $S^k$, such that $\prod_{i=1}^{k} p(b_i) = \prod_{s \in S^k} p(s)$. Naive computation of $p(S^k)$ is $O(k!)$, but in Appendix B we show how to compute it efficiently.

When sampling from the distribution restricted to $D \setminus C$, we sample $S^k \subseteq D \setminus C$ with probability:

$p^{D \setminus C}(S^k) = \prod_{s \in S^k} p(s) \cdot \sum_{B^k \in \mathcal{B}(S^k)} \prod_{i=1}^{k} \frac{1}{1 - \sum_{c \in C} p(c) - \sum_{j<i} p(b_j)}.$ (7)

The Gumbel-Top-k trick. As an alternative to sequential sampling, we can also sample $B^k$ and $S^k$ by taking the top $k$ of Gumbel variables (Yellott, 1977; Vieira, 2014; Kim et al., 2016). Following notation from Kool et al. (2019c), we define the perturbed log-probability $g_{\phi_i} = \phi_i + g_i$, where $\phi_i = \log p(i)$ and $g_i \sim \text{Gumbel}(0)$. Then let $b_1 = \arg\max_{i \in D} g_{\phi_i}$, $b_2 = \arg\max_{i \in D \setminus \{b_1\}} g_{\phi_i}$, etc., so $B^k$ is the top $k$ of the perturbed log-probabilities in decreasing order.
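The trick is a few lines of code; the following NumPy sketch (function name is ours) draws an ordered sample without replacement by perturbing the log-probabilities and taking the top k.

```python
import numpy as np

def gumbel_top_k(log_p, k, rng):
    """Gumbel-Top-k trick: ordered sample B^k without replacement.

    log_p: log-probabilities phi_i of a categorical distribution over D.
    Returns the indices of the k largest perturbed log-probabilities
    (a Plackett-Luce sample, eq. 4) and the perturbations themselves;
    discarding the order of the indices gives the unordered sample S^k.
    """
    g = rng.gumbel(loc=log_p)        # perturbed log-probs g_{phi_i}
    order = np.argsort(-g)           # decreasing order
    # Note: g[order[k]] (if it exists) is kappa, the (k+1)-th largest
    # perturbed log-probability, used by the importance-weighted estimator.
    return order[:k], g

rng = np.random.default_rng(0)
log_p = np.log(np.array([0.5, 0.3, 0.15, 0.05]))
B, g = gumbel_top_k(log_p, k=3, rng=rng)
```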
The probability of obtaining $B^k$ using this procedure is given by equation 4, so this provides an alternative sampling method which is effectively a (non-differentiable) reparameterization of sampling without replacement. For a differentiable reparameterization, see Grover et al. (2019).

It follows that taking the top $k$ perturbed log-probabilities without order, we obtain the unordered sample set $S^k$. This way of sampling underlies the efficient computation of $p(S^k)$ in Appendix B." }, { "heading": "3 METHODOLOGY", "text": "In this section, we derive the unordered set policy gradient estimator: a low-variance, unbiased estimator of $\nabla_\theta \mathbb{E}_{p_\theta(x)}[f(x)]$ based on an unordered sample without replacement $S^k$. First, we derive the generic (non-gradient) estimator for $\mathbb{E}[f(x)]$ as the Rao-Blackwellization of a single-sample Monte Carlo estimator (and two other estimators!). Then we combine this estimator with REINFORCE (Williams, 1992) and we show how to reduce its variance using a built-in baseline." }, { "heading": "3.1 RAO-BLACKWELLIZATION OF THE SINGLE SAMPLE ESTIMATOR", "text": "A very crude but simple estimator for $\mathbb{E}[f(x)]$ based on the ordered sample $B^k$ is to only use the first element $b_1$, which by definition is a sample from the distribution $p$. We define this estimator as the single sample estimator, which is unbiased, since

$\mathbb{E}_{B^k \sim p(B^k)}[f(b_1)] = \mathbb{E}_{b_1 \sim p(b_1)}[f(b_1)] = \mathbb{E}_{x \sim p(x)}[f(x)].$ (8)

Discarding all but one sample, the single sample estimator is inefficient, but we can use Rao-Blackwellization (Casella & Robert, 1996) to significantly improve it. To this end, we consider the distribution $B^k | S^k$, which is, knowing the unordered sample $S^k$, the conditional distribution over ordered samples $B^k \in \mathcal{B}(S^k)$ that could have generated $S^k$.2 Using $B^k | S^k$, we rewrite $\mathbb{E}[f(b_1)]$ as

$\mathbb{E}_{B^k \sim p(B^k)}[f(b_1)] = \mathbb{E}_{S^k \sim p(S^k)}\left[\mathbb{E}_{B^k \sim p(B^k|S^k)}[f(b_1)]\right] = \mathbb{E}_{S^k \sim p(S^k)}\left[\mathbb{E}_{b_1 \sim p(b_1|S^k)}[f(b_1)]\right].$

The Rao-Blackwellized version of the single sample estimator computes the inner conditional expectation exactly. Since $B^k$ is an ordering of $S^k$, we have $b_1 \in S^k$ and we can compute this as

$\mathbb{E}_{b_1 \sim p(b_1|S^k)}[f(b_1)] = \sum_{s \in S^k} P(b_1 = s | S^k) f(s)$ (9)

where, in a slight abuse of notation, $P(b_1 = s | S^k)$ is the probability that the first sampled element $b_1$ takes the value $s$, given that the complete set of $k$ samples is $S^k$. Using Bayes' Theorem we find

$P(b_1 = s | S^k) = \frac{p(S^k | b_1 = s) P(b_1 = s)}{p(S^k)} = \frac{p^{D \setminus \{s\}}(S^k \setminus \{s\}) \, p(s)}{p(S^k)}.$ (10)

The step $p(S^k | b_1 = s) = p^{D \setminus \{s\}}(S^k \setminus \{s\})$ comes from analyzing sequential sampling without replacement: given that the first element sampled is $s$, the remaining elements have a distribution restricted to $D \setminus \{s\}$, so sampling $S^k$ (including $s$) given the first element $s$ is equivalent to sampling the remainder $S^k \setminus \{s\}$ from the restricted distribution, which has probability $p^{D \setminus \{s\}}(S^k \setminus \{s\})$ (see equation 7).

2Note that $B^k | S^k$ is not a Plackett-Luce distribution restricted to $S^k$!

The unordered set estimator. For notational convenience, we introduce the leave-one-out ratio.

Definition 1. The leave-one-out ratio of $s$ w.r.t. the set $S^k$ is given by $R(S^k, s) = \frac{p^{D \setminus \{s\}}(S^k \setminus \{s\})}{p(S^k)}$.

Rewriting equation 10 as $P(b_1 = s | S^k) = p(s) R(S^k, s)$ shows that the probability of sampling $s$ first, given $S^k$, is simply the unconditional probability multiplied by the leave-one-out ratio. We now define the unordered set estimator as the Rao-Blackwellized version of the single-sample estimator.

Theorem 1. The unordered set estimator, given by

$e_{\text{US}}(S^k) = \sum_{s \in S^k} p(s) R(S^k, s) f(s)$ (11)

is the Rao-Blackwellized version of the (unbiased!) single sample estimator.

Proof. Using $P(b_1 = s | S^k) = p(s) R(S^k, s)$ in equation 9 we have

$\mathbb{E}_{b_1 \sim p(b_1|S^k)}[f(b_1)] = \sum_{s \in S^k} P(b_1 = s | S^k) f(s) = \sum_{s \in S^k} p(s) R(S^k, s) f(s).$ (12)

The implication of this theorem is that the unordered set estimator, in explicit form given by equation 11, is an unbiased estimator of $\mathbb{E}[f(x)]$, since it is the Rao-Blackwellized version of the unbiased single sample estimator. Also, as expected when taking multiple samples, it has variance equal to or lower than the single sample estimator by the Rao-Blackwell Theorem (Lehmann & Scheffé, 1950).
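As an illustration, the following NumPy sketch computes the unordered set estimator of equation 11 exactly for small k, using the naive O(k!) computation of p(S^k) from equation 6 (Appendix B gives efficient alternatives); all names are ours.

```python
import numpy as np
from itertools import permutations

def p_set(p, S):
    """p(S^k) by naive O(k!) enumeration over orderings (eq. 6).

    p: probability vector over the full domain D; S: tuple of indices.
    """
    total = 0.0
    for B in permutations(S):
        prob, mass = 1.0, 1.0
        for b in B:
            prob *= p[b] / mass   # p(b_i) / (1 - sum_{j<i} p(b_j))
            mass -= p[b]
        total += prob
    return total

def unordered_set_estimator(p, f, S):
    """e_US(S^k) = sum_s p(s) R(S^k, s) f(s), with R the leave-one-out ratio."""
    pS = p_set(p, S)
    est = 0.0
    for s in S:
        rest = tuple(x for x in S if x != s)
        p_wo_s = p / (1.0 - p[s])          # restrict to D \ {s} (eq. 3)
        p_wo_s[s] = 0.0
        R = p_set(p_wo_s, rest) / pS       # leave-one-out ratio (Definition 1)
        est += p[s] * R * f[s]
    return est
```

Averaging this estimator over many sets S^k sampled without replacement (e.g. via the Gumbel-Top-k sketch above) recovers the exact expectation, which is a quick way to verify unbiasedness numerically.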
" }, { "heading": "3.2 RAO-BLACKWELLIZATION OF OTHER ESTIMATORS", "text": "The unordered set estimator is also the result of Rao-Blackwellizing two other unbiased estimators: the stochastic sum-and-sample estimator and the importance-weighted estimator.

The sum-and-sample estimator. We define as a sum-and-sample estimator any estimator that relies on the identity that for any $C \subset D$

$\mathbb{E}_{x \sim p(x)}[f(x)] = \mathbb{E}_{x \sim p^{D \setminus C}(x)}\left[\sum_{c \in C} p(c) f(c) + \left(1 - \sum_{c \in C} p(c)\right) f(x)\right].$ (13)

For the derivation, see Appendix C.1 or Liang et al. (2018); Liu et al. (2019). In general, a sum-and-sample estimator with a budget of $k > 1$ evaluations sums expectation terms for a set of categories $C$ (s.t. $|C| < k$) explicitly (e.g. selected by their value $f$ (Liang et al., 2018) or probability $p$ (Liu et al., 2019)), and uses $k - |C|$ (down-weighted) samples from $D \setminus C$ to estimate the remaining terms. As is noted by Liu et al. (2019), selecting $C$ such that $\frac{1 - \sum_{c \in C} p(c)}{k - |C|}$ is minimized guarantees a variance reduction compared to a standard minibatch of $k$ samples (which is equivalent to setting $C = \emptyset$). See also Fearnhead & Clifford (2003) for a discussion on selecting $C$ optimally. The ability to optimize $C$ depends on whether $p(c)$ can be computed efficiently a priori (before sampling). This is difficult in high-dimensional settings, e.g. sequence models, which compute the probability incrementally during ancestral sampling. An alternative is to select $C$ stochastically (as equation 13 holds for any $C$), and we choose $C = B^{k-1}$ to define the stochastic sum-and-sample estimator:

$e_{\text{SSAS}}(B^k) = \sum_{j=1}^{k-1} p(b_j) f(b_j) + \left(1 - \sum_{j=1}^{k-1} p(b_j)\right) f(b_k).$ (14)

For simplicity, we consider the version that sums $k - 1$ terms here, but the following results also hold for a version that sums $k - m$ terms and uses $m$ samples (without replacement) (see Appendix C.3). Sampling without replacement, it holds that $b_k | B^{k-1} \sim p^{D \setminus B^{k-1}}$, so the unbiasedness follows from equation 13 by separating the expectation over $B^k$ into expectations over $B^{k-1}$ and $b_k | B^{k-1}$:

$\mathbb{E}_{B^{k-1} \sim p(B^{k-1})}\left[\mathbb{E}_{b_k \sim p(b_k | B^{k-1})}\left[e_{\text{SSAS}}(B^k)\right]\right] = \mathbb{E}_{B^{k-1} \sim p(B^{k-1})}\left[\mathbb{E}[f(x)]\right] = \mathbb{E}[f(x)].$

In general, a sum-and-sample estimator reduces variance if the probability mass is concentrated on the summed categories. As typically high-probability categories are sampled first, the stochastic sum-and-sample estimator sums high-probability categories, similar to the estimator by Liu et al. (2019), which we refer to as the deterministic sum-and-sample estimator. As we show in Appendix C.2, Rao-Blackwellizing the stochastic sum-and-sample estimator also results in the unordered set estimator. This even holds for a version that uses $m$ samples and $k - m$ summed terms (see Appendix C.3), which means that the unordered set estimator has variance equal to or lower than the optimal (in terms of $m$) stochastic sum-and-sample estimator, but conveniently does not need to choose $m$.
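A minimal NumPy sketch of the stochastic sum-and-sample estimator in equation 14 (names are ours):

```python
import numpy as np

def stochastic_sum_and_sample(p, f, B):
    """e_SSAS(B^k), eq. 14: sum the first k-1 sampled categories explicitly
    and correct with a down-weighted final sample.

    B: an ordered sample without replacement (e.g. from the Gumbel-Top-k
    sketch in Section 2); p, f: vectors over the full domain D.
    """
    B = np.asarray(B)
    summed, last = B[:-1], B[-1]
    return p[summed] @ f[summed] + (1.0 - p[summed].sum()) * f[last]
```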
The importance-weighted estimator. The importance-weighted estimator (Vieira, 2017) is

$e_{\text{IW}}(S^k, \kappa) = \sum_{s \in S^k} \frac{p(s)}{q(s, \kappa)} f(s).$ (15)

This estimator is based on the idea of priority sampling (Duffield et al., 2007). It does not use the order of the sample, but assumes sampling using the Gumbel-Top-k trick and requires access to $\kappa$, the $(k+1)$-th largest perturbed log-probability, which can be seen as the 'threshold', since $g_{\phi_s} > \kappa \; \forall s \in S^k$. $q(s, a) = P(g_{\phi_s} > a)$ can be interpreted as the inclusion probability of $s \in S^k$ (assuming a fixed threshold $a$ instead of a fixed sample size $k$). For details and a proof of unbiasedness, see Vieira (2017) or Kool et al. (2019c). As the estimator has high variance, Kool et al. (2019c) resort to normalizing the importance weights, resulting in biased estimates. Instead, we use Rao-Blackwellization to eliminate the stochasticity introduced by $\kappa$. Again, the result is the unordered set estimator (see Appendix D.1), which thus has equal or lower variance." }, { "heading": "3.3 THE UNORDERED SET POLICY GRADIENT ESTIMATOR", "text": "Writing $p_\theta$ to indicate the dependency on the model parameters $\theta$, we can combine the unordered set estimator with REINFORCE (Williams, 1992) to obtain the unordered set policy gradient estimator.

Corollary 1. The unordered set policy gradient estimator, given by

$e_{\text{USPG}}(S^k) = \sum_{s \in S^k} p_\theta(s) R(S^k, s) \nabla_\theta \log p_\theta(s) f(s) = \sum_{s \in S^k} \nabla_\theta p_\theta(s) R(S^k, s) f(s),$ (16)

is an unbiased estimate of the policy gradient.

Proof. Using REINFORCE (Williams, 1992) combined with the unordered set estimator we find:

$\nabla_\theta \mathbb{E}_{p_\theta(x)}[f(x)] = \mathbb{E}_{p_\theta(x)}[\nabla_\theta \log p_\theta(x) f(x)] = \mathbb{E}_{S^k \sim p_\theta(S^k)}\left[\sum_{s \in S^k} p_\theta(s) R(S^k, s) \nabla_\theta \log p_\theta(s) f(s)\right].$

Variance reduction using a built-in control variate. The variance of REINFORCE can be reduced by subtracting a baseline from $f$. When taking multiple samples (with replacement), a simple and effective baseline is the mean of the other (independent!) samples (Mnih & Rezende, 2016). Sampling without replacement, we can use the same idea to construct a baseline based on the other samples, but we have to correct for the fact that the samples are not independent.

Theorem 2. The unordered set policy gradient estimator with baseline, given by

$e_{\text{USPGBL}}(S^k) = \sum_{s \in S^k} \nabla_\theta p_\theta(s) R(S^k, s) \left(f(s) - \sum_{s' \in S^k} p_\theta(s') R^{D \setminus \{s\}}(S^k, s') f(s')\right),$ (17)

where

$R^{D \setminus \{s\}}(S^k, s') = \frac{p_\theta^{D \setminus \{s, s'\}}(S^k \setminus \{s, s'\})}{p_\theta^{D \setminus \{s\}}(S^k \setminus \{s\})}$ (18)

is the second-order leave-one-out ratio, is an unbiased estimate of the policy gradient.

Proof. See Appendix E.1.

This theorem shows how to include a built-in baseline based on dependent samples (without replacement), without introducing bias. By having a built-in baseline, the value $f(s)$ for sample $s$ is compared against an estimate of its expectation $\mathbb{E}[f(s)]$, based on the other samples. The difference is an estimate of the advantage (Sutton & Barto, 2018), which is positive if the sample $s$ is 'better' than average, causing $p_\theta(s)$ to be increased (reinforced) through the sign of the gradient, and vice versa. By sampling without replacement, the unordered set estimator forces the estimator to compare different alternatives, and reinforces the best among them.
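In an automatic differentiation framework, equation 17 can be implemented as a surrogate loss whose gradient equals the estimator. The PyTorch sketch below assumes the (second-order) leave-one-out ratios have been precomputed without gradient tracking (e.g. as in Appendix B); it is an illustration with our own names, not the released implementation.

```python
import torch

def uspg_surrogate_loss(log_p, f, R, R2):
    """Surrogate loss whose gradient is e_USPGBL of eq. 17 (a sketch).

    log_p: (k,) log-probabilities log p_theta(s) of the k distinct samples,
           with gradients attached.
    f:     (k,) rewards f(s).
    R:     (k,) leave-one-out ratios R(S^k, s), no gradients.
    R2:    (k, k) second-order ratios R^{D\{s}}(S^k, s'), no gradients.
    """
    with torch.no_grad():               # never track gradients through ratios
        p = log_p.exp()
        baseline = R2 @ (p * f)         # sum_s' p(s') R^{D\{s}}(S^k,s') f(s')
        weight = p * R                  # detached p_theta(s) R(S^k, s)
        adv = f - baseline              # advantage estimate per sample
    # grad of -loss = sum_s p(s) R(S^k,s) adv(s) grad log p(s), i.e. eq. 17
    return -(weight * adv * log_p).sum()
```

The `torch.no_grad()` block reflects the note in the next paragraph that the leave-one-out ratios (and the baseline) must be treated as constants via STOP_GRADIENT/DETACH.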
Including the pathwise derivative. So far, we have only considered the scenario where $f$ does not depend on $\theta$. If $f$ does depend on $\theta$, for example in a VAE (Kingma & Welling, 2014; Rezende et al., 2014), then we use the notation $f_\theta$ and we can write the gradient (Schulman et al., 2015) as

$\nabla_\theta \mathbb{E}_{p_\theta(x)}[f_\theta(x)] = \mathbb{E}_{p_\theta(x)}\left[\nabla_\theta \log p_\theta(x) f_\theta(x) + \nabla_\theta f_\theta(x)\right].$ (19)

The additional second ('pathwise') term can be estimated (using the same samples) with the standard unordered set estimator. This results in the full unordered set policy gradient estimator:

$e_{\text{FUSPG}}(S^k) = \sum_{s \in S^k} \nabla_\theta p_\theta(s) R(S^k, s) f_\theta(s) + \sum_{s \in S^k} p_\theta(s) R(S^k, s) \nabla_\theta f_\theta(s) = \sum_{s \in S^k} R(S^k, s) \nabla_\theta \left(p_\theta(s) f_\theta(s)\right).$ (20)

Equation 20 is straightforward to implement using an automatic differentiation library. We can also include the baseline (as in equation 17), but we must make sure to call STOP_GRADIENT (DETACH in PyTorch) on the baseline (but not on $f_\theta(s)$!). Importantly, we should never track gradients through the leave-one-out ratio $R(S^k, s)$, which means it can be efficiently computed in pure inference mode.

Scope & limitations. We can use the unordered set estimator for any discrete distribution from which we can sample without replacement, by treating it as a univariate categorical distribution over its domain. This includes sequence models, from which we can sample using Stochastic Beam Search (Kool et al., 2019c), as well as multivariate categorical distributions, which can also be treated as sequence models (see Section 4.2). In the presence of continuous variables or a stochastic function $f$, we may separate this stochasticity from the stochasticity over the discrete distribution, as in Lorberbom et al. (2019). The computation of the leave-one-out ratios adds some overhead, although they can be computed efficiently, even for large $k$ (see Appendix B). For a moderately sized model, the costs of model evaluation and backpropagation dominate the cost of computing the estimator." }, { "heading": "3.4 RELATION TO OTHER MULTI-SAMPLE ESTIMATORS", "text": "Relation to Murthy's estimator. We found out that the 'vanilla' unordered set estimator (equation 11) is actually a special case of the estimator by Murthy (1957), known in the statistics literature for estimation of a population total $\Theta = \sum_{i \in D} y_i$. Using $y_i = p(i) f(i)$, we have $\Theta = \mathbb{E}[f(i)]$, so Murthy's estimator can be used to estimate expectations (see equation 11). Murthy derives the estimator by 'unordering' a convex combination of Raj (1956) estimators, which, using $y_i = p(i) f(i)$, are stochastic sum-and-sample estimators in our analogy.

Murthy (1957) also provides an unbiased estimator of the variance, which may be interesting for future applications. Since Murthy's estimator can be used with an arbitrary sampling distribution, it is straightforward to derive importance-sampling versions of our estimators. In particular, we can sample $S$ without replacement using $q(x) > 0$, $x \in D$, and use equations 11, 16, 17 and 20, as long as we compute the leave-one-out ratio $R(S^k, s)$ using $q$.

While part of our derivation coincides with Murthy (1957), we are not aware of previous work using this estimator to estimate expectations. Additionally, we discuss practical computation of $p(S)$ (Appendix B), we show the relation to the importance-weighted estimator, and we provide the extension to estimating policy gradients, especially including a built-in baseline without adding bias.
Relation to the empirical risk estimator. The empirical risk loss (Edunov et al., 2018) estimates the expectation in equation 1 by summing only a subset $S$ of the domain, using normalized probabilities $\hat{p}_\theta(s) = \frac{p_\theta(s)}{\sum_{s' \in S} p_\theta(s')}$. Using this loss, the (biased) estimate of the gradient is given by

$e_{\text{RISK}}(S^k) = \sum_{s \in S^k} \nabla_\theta \left( \frac{p_\theta(s)}{\sum_{s' \in S^k} p_\theta(s')} \right) f(s).$ (21)

The risk estimator is similar to the unordered set policy gradient estimator, with two important differences: 1) the individual terms are normalized by the total probability mass rather than the leave-one-out ratio, and 2) the gradient w.r.t. the normalization factor is taken into account. As a result, samples 'compete' for probability mass and only the best can be reinforced. This has the same effect as using a built-in baseline, which we prove in the following theorem.

Theorem 3. By taking the gradient w.r.t. the normalization factor into account, the risk estimator has a built-in baseline, which means it can be written as

$e_{\text{RISK}}(S^k) = \sum_{s \in S^k} \nabla_\theta p_\theta(s) \frac{1}{\sum_{s'' \in S^k} p_\theta(s'')} \left( f(s) - \sum_{s' \in S^k} \frac{p_\theta(s')}{\sum_{s'' \in S^k} p_\theta(s'')} f(s') \right).$ (22)

Proof. See Appendix F.1.

This theorem highlights the similarity between the biased risk estimator and our unbiased estimator (equation 17), and suggests that their only difference is the weighting of terms. Unfortunately, the implementation by Edunov et al. (2018) has more sources of bias (e.g. length normalization), which are not compatible with our estimator. However, we believe that our analysis helps analyze the bias of the risk estimator and is a step towards developing unbiased estimators for structured prediction.

Relation to VIMCO. VIMCO (Mnih & Rezende, 2016) is an estimator that uses $k$ samples (with replacement) to optimize an objective of the form $\log \frac{1}{k} \sum_i f(x_i)$, which is a multi-sample stochastic lower bound in the context of variational inference. VIMCO reduces the variance by using a local baseline for each of the $k$ samples, based on the other $k - 1$ samples. While we do not have a log term, as our goal is to optimize the general $\mathbb{E}[f(x)]$, we adopt the idea of forming a baseline based on the other samples, and we define REINFORCE with replacement (with built-in baseline) as the estimator that computes the gradient estimate using samples with replacement $X^k = (x_1, ..., x_k)$ as

$e_{\text{RFWR}}(X^k) = \frac{1}{k} \sum_{i=1}^{k} \nabla_\theta \log p_\theta(x_i) \left( f(x_i) - \frac{1}{k-1} \sum_{j \neq i} f(x_j) \right).$ (23)

This estimator is unbiased, as $\mathbb{E}_{x_i, x_j}[\nabla_\theta \log p_\theta(x_i) f(x_j)] = 0$ for $i \neq j$ (see also Kool et al. (2019b)). We think of the unordered set estimator as the without-replacement version of this estimator, which weights terms by $p_\theta(s) R(S^k, s)$ instead of $\frac{1}{k}$. This puts more weight on higher-probability elements to compensate for sampling without replacement. If probabilities are small and (close to) uniform, there are (almost) no duplicate samples and the weights will be close to $\frac{1}{k}$, so the gradient estimates of the with- and without-replacement versions are similar.

Relation to ARSM. ARSM (Yin et al., 2019) also uses multiple evaluations ('pseudo-samples') of $p_\theta$ and $f$. This can be seen as similar to sampling without replacement, and the estimator also has a built-in control variate. Compared to ARSM, our estimator allows direct control over the computational cost (through the sample size $k$) and has wider applicability, for example it also applies to multivariate categorical variables with different numbers of categories per dimension.

Relation to stratified/systematic sampling. Our estimator aims to reduce variance by changing the sampling distribution for multiple samples by sampling without replacement. There are alternatives, such as using stratified or systematic sampling (see, e.g., Douc & Cappé (2005)). Both partition the domain $D$ into $k$ strata and take a single sample from each stratum, where systematic sampling uses common random numbers for each stratum. In applications involving high-dimensional or structured domains, it is unclear how to partition the domain and how to sample from each partition. Additionally, as samples are not independent, it is non-trivial to include a built-in baseline, which we find is a key component that makes our estimator perform well.
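For comparison, REINFORCE with replacement and built-in baseline (equation 23) takes only a few lines as a surrogate loss; a sketch with our own names:

```python
import torch

def reinforce_with_replacement_loss(log_p, f):
    """Surrogate loss for e_RFWR (eq. 23), a sketch for comparison.

    log_p: (k,) log-probabilities of k samples drawn WITH replacement,
           with gradients attached.
    f:     (k,) rewards.
    """
    k = f.shape[0]
    baseline = (f.sum() - f) / (k - 1)   # mean of the other k-1 rewards
    adv = (f - baseline).detach()        # advantage, treated as constant
    return -(log_p * adv).mean()         # grad of -loss equals eq. 23
```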
" }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 BERNOULLI TOY EXPERIMENT", "text": "We use the code by Liu et al. (2019) to reproduce their Bernoulli toy experiment. Given a vector $p = (0.6, 0.51, 0.48)$, the goal is to minimize the loss

$L(\eta) = \mathbb{E}_{x_1, x_2, x_3 \sim \text{Bernoulli}(\sigma(\eta))}\left[\sum_{i=1}^{3} (x_i - p_i)^2\right].$

Here $x_1, x_2, x_3$ are i.i.d. from the Bernoulli($\sigma(\eta)$) distribution, parameterized by a scalar $\eta \in \mathbb{R}$, where $\sigma(\eta) = (1 + \exp(-\eta))^{-1}$ is the sigmoid function. We compare different estimators, with and without baseline (either 'built-in' or using additional samples, referred to as REINFORCE+ in Liu et al. (2019)). We report the (log-)variance of the scalar gradient $\frac{\partial L}{\partial \eta}$ as a function of the number of model evaluations, which is twice as high when using a sampled baseline (for each term).

As can be seen in Figure 1, the unordered set estimator is the only estimator that has consistently the lowest (or comparable) variance in both the high-entropy ($\eta = 0$) and low-entropy ($\eta = -4$) regimes and for different numbers of samples/model evaluations. This suggests that it combines the advantages of the other estimators. We also ran the actual optimization experiment, where with as few as $k = 3$ samples the trajectory was indistinguishable from using the exact gradient (see Liu et al. (2019))." }, { "heading": "4.2 CATEGORICAL VARIATIONAL AUTO-ENCODER", "text": "We use the code from Yin et al. (2019) to train a categorical Variational Auto-Encoder (VAE) with a 20-dimensional latent space, with 10 categories per dimension (details in Appendix G.1). To use our estimator, we treat this as a single factorized distribution with $10^{20}$ categories, from which we can sample without replacement using Stochastic Beam Search (Kool et al., 2019c), sequentially sampling each dimension as if it were a sequence model. We also perform experiments with a $10^2$ latent space, which provides a lower-entropy setting, to highlight the advantage of our estimator.

Measuring the variance. In Table 1, we report the variance of different gradient estimators with $k = 4$ samples, evaluated on a trained model. The unordered set estimator has the lowest variance in both the small and large domain (low- and high-entropy) settings, being on par with the best of the (stochastic³) sum-and-sample estimator and REINFORCE with replacement⁴. This confirms the toy experiment, suggesting that the unordered set estimator provides the best of both estimators. In Appendix G.2 we repeat the same experiment at different stages of training, with similar results.

³We cannot use the deterministic version by Liu et al. (2019) since we cannot select the top $k$ categories. ⁴We cannot compare against VIMCO (Mnih & Rezende, 2016) as it optimizes a different objective.

ELBO optimization. We use different estimators to optimize the ELBO (details in Appendix G.1). In addition to the baselines by Yin et al. (2019), we compare against REINFORCE with replacement and the stochastic sum-and-sample estimator. In Figure 2 we observe that our estimator performs on par with REINFORCE with replacement (with built-in baseline, equation 23) and outperforms other estimators in at least one of the settings. There are many other factors, e.g. exploration, that may explain why we do not get a strictly better result despite the lower variance.
We note some overfitting (see validation curves in Appendix G.2), but since our goal is to show improved optimization, and to keep results directly comparable to Yin et al. (2019), we consider regularization a separate issue outside the scope of this paper. These results use MNIST binarized by a threshold of 0.5. In Appendix G.2 we report results using the standard binarized MNIST dataset from Salakhutdinov & Murray (2008)." }, { "heading": "4.3 STRUCTURED PREDICTION FOR THE TRAVELLING SALESMAN PROBLEM", "text": "To show the wide applicability of our estimator, we consider the structured prediction task of predicting routes (sequences) for the Travelling Salesman Problem (TSP) (Vinyals et al., 2015; Bello et al., 2016; Kool et al., 2019a). We use the code by Kool et al. (2019a)⁵ to reproduce their TSP experiment with 20 nodes. For details, see Appendix H.

We implement REINFORCE with replacement (and built-in baseline) as well as the stochastic sum-and-sample estimator and our estimator, using Stochastic Beam Search (Kool et al., 2019c) for sampling. Also, we include results using the biased normalized importance-weighted policy gradient estimator with built-in baseline (derived in Kool et al. (2019b), see Appendix D.2). Additionally, we compare against REINFORCE with a greedy rollout baseline (Rennie et al., 2017) used by Kool et al. (2019c), and a batch-average baseline. For reference, we also include the biased risk estimator, either 'sampling' using stochastic or deterministic beam search (as in Edunov et al. (2018)).

In Figure 3a, we compare training progress (measured on the validation set) as a function of the number of training steps, where we divide the batch size by $k$ to keep the total number of samples equal. Our estimator outperforms REINFORCE with replacement, the stochastic sum-and-sample estimator and the strong greedy rollout baseline (which uses additional baseline model evaluations) and performs on par with the biased risk estimator. In Figure 3b, we plot the same results against the number of instances, which shows that, compared to the single-sample estimators, we can train with less data and less computational cost (as we only need to run the encoder once for each instance).

⁵https://github.com/wouterkool/attention-learn-to-route" }, { "heading": "5 DISCUSSION", "text": "We introduced the unordered set estimator, a low-variance, unbiased gradient estimator based on sampling without replacement, which can be used as an alternative to the popular biased Gumbel-Softmax estimator (Jang et al., 2016; Maddison et al., 2016). Our estimator is the result of Rao-Blackwellizing three existing estimators, which guarantees equal or lower variance, and it is closely related to a number of other estimators. It has wide applicability, is parameter-free (except for the sample size $k$) and has performance competitive with the best of the alternatives in both high- and low-entropy regimes.

In our experiments, we found that REINFORCE with replacement, with multiple samples and a built-in baseline as inspired by VIMCO (Mnih & Rezende, 2016), is a simple yet strong estimator which has performance similar to our estimator in the high-entropy setting. We are not aware of any recent work on gradient estimators for discrete distributions that has considered this estimator as a baseline, while it may often be preferred given its simplicity.
In future work, we want to investigate if we can apply our estimator to estimate gradients 'locally' (Titsias & Lázaro-Gredilla, 2015), as locally we have a smaller domain and expect more duplicate samples." }, { "heading": "ACKNOWLEDGMENTS", "text": "This research was funded by ORTEC. We would like to thank the anonymous reviewers for their feedback that helped improve the paper." }, { "heading": "A NOTATION", "text": "Throughout this appendix we will use the following notation from Maddison et al. (2014):

$e_\phi(g) = \exp(-g + \phi), \quad F_\phi(g) = \exp(-\exp(-g + \phi)), \quad f_\phi(g) = e_\phi(g) F_\phi(g).$

This means that $F_\phi(g)$ is the CDF and $f_\phi(g)$ the PDF of the Gumbel($\phi$) distribution. Additionally, we will use the identities by Maddison et al. (2014):

$F_\phi(g) F_\gamma(g) = F_{\log(\exp(\phi) + \exp(\gamma))}(g)$ (24)

$\int_{g=a}^{b} e_\gamma(g) F_\phi(g) \, \partial g = \left(F_\phi(b) - F_\phi(a)\right) \frac{\exp(\gamma)}{\exp(\phi)}.$ (25)

Also, we will use the following notation, definitions and identities (see Kool et al. (2019c)):

$\phi_i = \log p(i)$ (26)

$\phi_S = \log \sum_{i \in S} p(i) = \log \sum_{i \in S} \exp \phi_i$ (27)

$\phi_{D \setminus S} = \log \sum_{i \in D \setminus S} p(i) = \log\left(1 - \sum_{i \in S} p(i)\right) = \log(1 - \exp(\phi_S))$ (28)

$G_{\phi_i} \sim \text{Gumbel}(\phi_i)$ (29)

$G_{\phi_S} = \max_{i \in S} G_{\phi_i} \sim \text{Gumbel}(\phi_S)$ (30)

For a proof of equation 30, see Maddison et al. (2014).

B COMPUTATION OF $p(S^k)$, $p^{D \setminus C}(S \setminus C)$ AND $R(S^k, s)$

We can sample the set $S^k$ from the Plackett-Luce distribution using the Gumbel-Top-k trick by drawing Gumbel variables $G_{\phi_i} \sim \text{Gumbel}(\phi_i)$ for each element and returning the indices of the $k$ largest Gumbels. If we ignore the ordering, this means we will obtain the set $S^k$ if $\min_{i \in S^k} G_{\phi_i} > \max_{i \in D \setminus S^k} G_{\phi_i}$. Omitting the superscript $k$ for clarity, we can use the Gumbel-Max trick, i.e. that $G_{\phi_{D \setminus S}} = \max_{i \notin S} G_{\phi_i} \sim \text{Gumbel}(\phi_{D \setminus S})$ (equation 30), and marginalize over $G_{\phi_{D \setminus S}}$:

$p(S) = P\left(\min_{i \in S} G_{\phi_i} > G_{\phi_{D \setminus S}}\right) = P\left(G_{\phi_i} > G_{\phi_{D \setminus S}}, \; i \in S\right)$

$= \int_{-\infty}^{\infty} f_{\phi_{D \setminus S}}(g_{\phi_{D \setminus S}}) \, P\left(G_{\phi_i} > g_{\phi_{D \setminus S}}, \; i \in S\right) \partial g_{\phi_{D \setminus S}}$

$= \int_{-\infty}^{\infty} f_{\phi_{D \setminus S}}(g_{\phi_{D \setminus S}}) \prod_{i \in S} \left(1 - F_{\phi_i}(g_{\phi_{D \setminus S}})\right) \partial g_{\phi_{D \setminus S}}$ (31)

$= \int_{0}^{1} \prod_{i \in S} \left(1 - F_{\phi_i}\left(F_{\phi_{D \setminus S}}^{-1}(u)\right)\right) \partial u$ (32)

Here we have used a change of variables $u = F_{\phi_{D \setminus S}}(g_{\phi_{D \setminus S}})$. This expression can be efficiently numerically integrated (although another change of variables may be required for numerical stability depending on the values of $\phi$).

Exact computation in $O(2^k)$. The integral in equation 31 can be computed exactly using the identity

$\prod_{i \in S} (a_i - b_i) = \sum_{C \subseteq S} (-1)^{|C|} \prod_{i \in C} b_i \prod_{i \in S \setminus C} a_i$

which gives

$p(S) = \int_{-\infty}^{\infty} f_{\phi_{D \setminus S}}(g_{\phi_{D \setminus S}}) \prod_{i \in S} \left(1 - F_{\phi_i}(g_{\phi_{D \setminus S}})\right) \partial g_{\phi_{D \setminus S}}$

$= \sum_{C \subseteq S} (-1)^{|C|} \int_{-\infty}^{\infty} f_{\phi_{D \setminus S}}(g_{\phi_{D \setminus S}}) \prod_{i \in C} F_{\phi_i}(g_{\phi_{D \setminus S}}) \, \partial g_{\phi_{D \setminus S}}$

$= \sum_{C \subseteq S} (-1)^{|C|} \int_{-\infty}^{\infty} e_{\phi_{D \setminus S}}(g_{\phi_{D \setminus S}}) F_{\phi_{D \setminus S}}(g_{\phi_{D \setminus S}}) F_{\phi_C}(g_{\phi_{D \setminus S}}) \, \partial g_{\phi_{D \setminus S}}$

$= \sum_{C \subseteq S} (-1)^{|C|} \int_{-\infty}^{\infty} e_{\phi_{D \setminus S}}(g_{\phi_{D \setminus S}}) F_{\phi_{(D \setminus S) \cup C}}(g_{\phi_{D \setminus S}}) \, \partial g_{\phi_{D \setminus S}}$

$= \sum_{C \subseteq S} (-1)^{|C|} (1 - 0) \frac{\exp(\phi_{D \setminus S})}{\exp(\phi_{(D \setminus S) \cup C})}$

$= \sum_{C \subseteq S} (-1)^{|C|} \frac{1 - \sum_{i \in S} p(i)}{1 - \sum_{i \in S \setminus C} p(i)}.$ (33)

Computation of $p^{D \setminus C}(S \setminus C)$. When using the Gumbel-Top-k trick over the restricted domain $D \setminus C$, we do not need to renormalize the log-probabilities $\phi_s$, $s \in D \setminus C$, since the Gumbel-Top-k trick applies to unnormalized log-probabilities. Also, assuming $C \subseteq S^k$, it holds that $(D \setminus C) \setminus (S \setminus C) = D \setminus S$. This means that we can compute $p^{D \setminus C}(S \setminus C)$ similar to equation 31:

$p^{D \setminus C}(S \setminus C) = P\left(\min_{i \in S \setminus C} G_{\phi_i} > G_{\phi_{(D \setminus C) \setminus (S \setminus C)}}\right) = P\left(\min_{i \in S \setminus C} G_{\phi_i} > G_{\phi_{D \setminus S}}\right) = \int_{-\infty}^{\infty} f_{\phi_{D \setminus S}}(g_{\phi_{D \setminus S}}) \prod_{i \in S \setminus C} \left(1 - F_{\phi_i}(g_{\phi_{D \setminus S}})\right) \partial g_{\phi_{D \setminus S}}.$ (34)
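The following NumPy sketch numerically integrates equation 32, using the identity $F_{\phi_i}(F_{\phi_{D \setminus S}}^{-1}(u)) = u^{\exp(\phi_i - \phi_{D \setminus S})}$ that is derived in the 'Details of numerical integration' paragraph below; the released code adds a further change of variables for numerical stability, which this sketch omits, and all names are ours.

```python
import numpy as np

def log_p_set_numint(log_p, S, num_points=1000):
    """log p(S) via trapezoid-rule integration of eq. 32 (a sketch).

    log_p: log-probabilities phi_i over the full domain; S: indices in S.
    Suitable for larger k, where the O(k!) enumeration is infeasible.
    """
    phi_S = np.asarray(log_p)[list(S)]
    # phi_{D\S} = log(1 - sum_{i in S} p(i)), eq. 28
    phi_rest = np.log1p(-np.exp(phi_S).sum())
    u = np.linspace(1e-6, 1 - 1e-6, num_points)
    # F_{phi_i}(F^{-1}_{phi_{D\S}}(u)) = u ** exp(phi_i - phi_{D\S})
    exponents = np.exp(phi_S - phi_rest)
    integrand = np.prod(1.0 - u[None, :] ** exponents[:, None], axis=0)
    return np.log(np.trapz(integrand, u))
```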
Computation of $R(S^k, s)$. Note that, using equation 10, it holds that

$\sum_{s \in S^k} \frac{p^{D \setminus \{s\}}(S^k \setminus \{s\}) \, p(s)}{p(S^k)} = \sum_{s \in S^k} P(b_1 = s | S^k) = 1$

from which it follows that

$p(S^k) = \sum_{s \in S^k} p^{D \setminus \{s\}}(S^k \setminus \{s\}) \, p(s)$

such that

$R(S^k, s) = \frac{p^{D \setminus \{s\}}(S^k \setminus \{s\})}{p(S^k)} = \frac{p^{D \setminus \{s\}}(S^k \setminus \{s\})}{\sum_{s' \in S^k} p^{D \setminus \{s'\}}(S^k \setminus \{s'\}) \, p(s')}.$ (35)

This means that, to compute the leave-one-out ratio for all $s \in S^k$, we only need to compute $p^{D \setminus \{s\}}(S^k \setminus \{s\})$ for $s \in S^k$. When using the numerical integration or the summation in $O(2^k)$, we can reuse computation, whereas using the naive method, the cost is $O(k \cdot (k-1)!) = O(k!)$, making the total computational cost comparable to computing just $p(S^k)$, and the same holds when computing the 'second-order' leave-one-out ratios for the built-in baseline (equation 17).

Details of numerical integration. For computation of the leave-one-out ratio (equation 35) for large $k$ we can use the numerical integration, where we need to compute equation 34 with $C = \{s\}$. For this purpose, we rewrite the integral as

$p^{D \setminus C}(S \setminus C) = \int_{-\infty}^{\infty} f_{\phi_{D \setminus S}}(g_{\phi_{D \setminus S}}) \prod_{i \in S \setminus C} \left(1 - F_{\phi_i}(g_{\phi_{D \setminus S}})\right) \partial g_{\phi_{D \setminus S}}$

$= \int_{0}^{1} \prod_{i \in S \setminus C} \left(1 - F_{\phi_i}\left(F_{\phi_{D \setminus S}}^{-1}(u)\right)\right) \partial u$

$= \int_{0}^{1} \prod_{i \in S \setminus C} \left(1 - u^{\exp(\phi_i - \phi_{D \setminus S})}\right) \partial u$

$= \exp(b) \cdot \int_{0}^{1} v^{\exp(b) - 1} \prod_{i \in S \setminus C} \left(1 - v^{\exp(\phi_i - \phi_{D \setminus S} + b)}\right) \partial v$

$= \exp(a + \phi_{D \setminus S}) \cdot \int_{0}^{1} v^{\exp(a + \phi_{D \setminus S}) - 1} \prod_{i \in S \setminus C} \left(1 - v^{\exp(\phi_i + a)}\right) \partial v.$

Here we have used the change of variables $v = u^{\exp(-b)}$ and $a = b - \phi_{D \setminus S}$. This form allows us to compute the integrands efficiently, as

$\prod_{i \in S \setminus C} \left(1 - v^{\exp(\phi_i + a)}\right) = \frac{\prod_{i \in S} \left(1 - v^{\exp(\phi_i + a)}\right)}{\prod_{i \in C} \left(1 - v^{\exp(\phi_i + a)}\right)}$

where the numerator only needs to be computed once, and, since $C = \{s\}$ when computing equation 35, the denominator consists of only a single term.

The choice of $a$ may depend on the setting, but we found that $a = 5$ is a good default option which leads to an integral that is generally smooth and can be accurately approximated using the trapezoid rule. We compute the integrands in logarithmic space and sum the terms using the stable LOGSUMEXP trick. In our code we provide an implementation which also computes all second-order leave-one-out ratios efficiently." }, { "heading": "C THE SUM-AND-SAMPLE ESTIMATOR", "text": "" }, { "heading": "C.1 UNBIASEDNESS OF THE SUM-AND-SAMPLE ESTIMATOR", "text": "We show that the sum-and-sample estimator is unbiased for any set $C \subset D$ (see also Liang et al. (2018); Liu et al. (2019)):

$\mathbb{E}_{x \sim p^{D \setminus C}(x)}\left[\sum_{c \in C} p(c) f(c) + \left(1 - \sum_{c \in C} p(c)\right) f(x)\right]$

$= \sum_{c \in C} p(c) f(c) + \left(1 - \sum_{c \in C} p(c)\right) \mathbb{E}_{x \sim p^{D \setminus C}(x)}[f(x)]$

$= \sum_{c \in C} p(c) f(c) + \left(1 - \sum_{c \in C} p(c)\right) \sum_{x \in D \setminus C} \frac{p(x)}{1 - \sum_{c \in C} p(c)} f(x)$

$= \sum_{c \in C} p(c) f(c) + \sum_{x \in D \setminus C} p(x) f(x)$

$= \sum_{x \in D} p(x) f(x) = \mathbb{E}_{x \sim p(x)}[f(x)]$" }, { "heading": "C.2 RAO-BLACKWELLIZATION OF THE STOCHASTIC SUM-AND-SAMPLE ESTIMATOR", "text": "In this section we give the proof that Rao-Blackwellizing the stochastic sum-and-sample estimator results in the unordered set estimator.

Theorem 4. Rao-Blackwellizing the stochastic sum-and-sample estimator results in the unordered set estimator, i.e.

$\mathbb{E}_{B^k \sim p(B^k | S^k)}\left[\sum_{j=1}^{k-1} p(b_j) f(b_j) + \left(1 - \sum_{j=1}^{k-1} p(b_j)\right) f(b_k)\right] = \sum_{s \in S^k} p(s) R(S^k, s) f(s).$ (36)

Proof. To give the proof, we first prove three Lemmas.

Lemma 1.

$P(b_k = s | S^k) = \frac{p(S^k \setminus \{s\})}{p(S^k)} \cdot \frac{p(s)}{1 - \sum_{s' \in S^k \setminus \{s\}} p(s')}$ (37)

Proof.
Similar to the derivation of $P(b_1 = s | S^k)$ (equation 10 in the main paper), we can write:

$P(b_k = s | S^k) = \frac{P(S^k \cap b_k = s)}{p(S^k)} = \frac{p(S^k \setminus \{s\}) \, p^{D \setminus (S^k \setminus \{s\})}(s)}{p(S^k)} = \frac{p(S^k \setminus \{s\})}{p(S^k)} \cdot \frac{p(s)}{1 - \sum_{s' \in S^k \setminus \{s\}} p(s')}.$

The step from the first to the second expression comes from analyzing the event $S^k \cap b_k = s$ using sequential sampling: to sample $S^k$ (including $s$) with $s$ being the $k$-th element means that we should first sample $S^k \setminus \{s\}$ (in any order), and then sample $s$ from the distribution restricted to $D \setminus (S^k \setminus \{s\})$.

Lemma 2.

$p(S) + \frac{1 - \sum_{s' \in S} p(s')}{1 - \sum_{s' \in S \setminus \{s\}} p(s')} \, p(S \setminus \{s\}) = p^{D \setminus \{s\}}(S \setminus \{s\})$ (38)

Proof. Dividing equation 33 by $1 - \sum_{s' \in S} p(s')$ on both sides, we obtain

$\frac{p(S)}{1 - \sum_{s' \in S} p(s')} = \sum_{C \subseteq S} (-1)^{|C|} \frac{1}{1 - \sum_{s' \in S \setminus C} p(s')}$

$= \sum_{C \subseteq S \setminus \{s\}} \left( (-1)^{|C|} \frac{1}{1 - \sum_{s' \in S \setminus C} p(s')} + (-1)^{|C \cup \{s\}|} \frac{1}{1 - \sum_{s' \in S \setminus (C \cup \{s\})} p(s')} \right)$

$= \sum_{C \subseteq S \setminus \{s\}} (-1)^{|C|} \frac{1}{1 - \sum_{s' \in S \setminus C} p(s')} + \sum_{C \subseteq S \setminus \{s\}} (-1)^{|C \cup \{s\}|} \frac{1}{1 - \sum_{s' \in S \setminus (C \cup \{s\})} p(s')}$

$= \sum_{C \subseteq S \setminus \{s\}} (-1)^{|C|} \frac{1}{1 - p(s) - \sum_{s' \in (S \setminus \{s\}) \setminus C} p(s')} - \sum_{C \subseteq S \setminus \{s\}} (-1)^{|C|} \frac{1}{1 - \sum_{s' \in (S \setminus \{s\}) \setminus C} p(s')}$

$= \frac{1}{1 - p(s)} \sum_{C \subseteq S \setminus \{s\}} (-1)^{|C|} \frac{1}{1 - \sum_{s' \in (S \setminus \{s\}) \setminus C} \frac{p(s')}{1 - p(s)}} - \frac{p(S \setminus \{s\})}{1 - \sum_{s' \in S \setminus \{s\}} p(s')}$

$= \frac{1}{1 - p(s)} \cdot \frac{p^{D \setminus \{s\}}(S \setminus \{s\})}{1 - \sum_{s' \in S \setminus \{s\}} \frac{p(s')}{1 - p(s)}} - \frac{p(S \setminus \{s\})}{1 - \sum_{s' \in S \setminus \{s\}} p(s')}$

$= \frac{p^{D \setminus \{s\}}(S \setminus \{s\})}{1 - p(s) - \sum_{s' \in S \setminus \{s\}} p(s')} - \frac{p(S \setminus \{s\})}{1 - \sum_{s' \in S \setminus \{s\}} p(s')}$

$= \frac{p^{D \setminus \{s\}}(S \setminus \{s\})}{1 - \sum_{s' \in S} p(s')} - \frac{p(S \setminus \{s\})}{1 - \sum_{s' \in S \setminus \{s\}} p(s')}.$

Multiplying by $1 - \sum_{s' \in S} p(s')$ and rearranging terms proves Lemma 2.

Lemma 3.

$p(s) + \left(1 - \sum_{s' \in S^k} p(s')\right) P(b_k = s | S^k) = p(s) R(S^k, s)$ (39)

Proof. First using Lemma 1 and then Lemma 2, we find

$p(s) + \left(1 - \sum_{s' \in S^k} p(s')\right) P(b_k = s | S^k) = p(s) + \left(1 - \sum_{s' \in S^k} p(s')\right) \frac{p(S^k \setminus \{s\})}{p(S^k)} \cdot \frac{p(s)}{1 - \sum_{s' \in S^k \setminus \{s\}} p(s')}$

$= \frac{p(s)}{p(S^k)} \left( p(S^k) + \frac{1 - \sum_{s' \in S^k} p(s')}{1 - \sum_{s' \in S^k \setminus \{s\}} p(s')} \, p(S^k \setminus \{s\}) \right) = \frac{p(s)}{p(S^k)} \, p^{D \setminus \{s\}}(S^k \setminus \{s\}) = p(s) R(S^k, s).$

Now we can complete the proof of Theorem 4 by adding $p(b_k) f(b_k) - p(b_k) f(b_k) = 0$ to the estimator, moving the terms independent of $B^k$ outside the expectation and using Lemma 3:

$\mathbb{E}_{B^k \sim p(B^k | S^k)}\left[\sum_{j=1}^{k-1} p(b_j) f(b_j) + \left(1 - \sum_{j=1}^{k-1} p(b_j)\right) f(b_k)\right]$

$= \mathbb{E}_{B^k \sim p(B^k | S^k)}\left[\sum_{j=1}^{k} p(b_j) f(b_j) + \left(1 - \sum_{j=1}^{k} p(b_j)\right) f(b_k)\right]$

$= \sum_{s \in S^k} p(s) f(s) + \mathbb{E}_{B^k \sim p(B^k | S^k)}\left[\left(1 - \sum_{s' \in S^k} p(s')\right) f(b_k)\right]$

$= \sum_{s \in S^k} p(s) f(s) + \sum_{s \in S^k} \left(1 - \sum_{s' \in S^k} p(s')\right) P(b_k = s | S^k) f(s)$

$= \sum_{s \in S^k} \left( p(s) + \left(1 - \sum_{s' \in S^k} p(s')\right) P(b_k = s | S^k) \right) f(s)$

$= \sum_{s \in S^k} p(s) R(S^k, s) f(s).$" }, { "heading": "C.3 THE STOCHASTIC SUM-AND-SAMPLE ESTIMATOR WITH MULTIPLE SAMPLES", "text": "As was discussed in Liu et al. (2019), one can trade off the number of summed terms and the number of sampled terms to maximize the achieved variance reduction. As a generalization of Theorem 4 (the stochastic sum-and-sample estimator with $k - 1$ summed terms), we introduce here the stochastic sum-and-sample estimator that sums $k - m$ terms and samples $m > 1$ terms without replacement. To estimate the sampled term, we use the unordered set estimator on the $m$ samples, on the domain restricted to $D \setminus B^{k-m}$. In general, we denote the unordered set estimator restricted to the domain $D \setminus C$ by

$e_{\text{US}, D \setminus C}(S^k) = \sum_{s \in S^k \setminus C} p(s) R^{D \setminus C}(S^k, s) f(s)$ (40)

where $R^{D \setminus C}(S^k, s)$ is the leave-one-out ratio restricted to the domain $D \setminus C$, similar to the second-order leave-one-out ratio in equation 18:

$R^{D \setminus C}(S^k, s) = \frac{p_\theta^{(D \setminus C) \setminus \{s\}}((S^k \setminus C) \setminus \{s\})}{p_\theta^{D \setminus C}(S^k \setminus C)}.$ (41)

While we can also constrain $S^k \subseteq (D \setminus C)$, this definition is consistent with equation 18 and allows simplified notation.

Theorem 5.
Rao-Blackwellizing the stochastic sum-and-sample estimator with m > 1 samples results in the unordered set estimator, i.e.\nEBk∼p(Bk|Sk) k−m∑ j=1 p(bj)f(bj) + 1− k−m∑ j=1 p(bj) eUS,D\\Bk−m(Sk) = ∑ s∈Sk p(s)R(Sk, s)f(s).\n(42)\nProof. Recall that for the unordered set estimator, it holds that eUS(Sk) = Eb1∼p(b1|Sk) [f(b1)] = Ex∼p(x) [ f(x) ∣∣x ∈ Sk] (43)\nwhich for the restricted equivalent (with restricted distribution pD\\C) translates into eUS,D\\C(Sk) = Ex∼pD\\C(x) [ f(x) ∣∣x ∈ Sk] = Ex∼p(x) [f(x)∣∣x ∈ Sk, x 6∈ C] . (44) Now we consider the distribution bk−m+1|Sk, Bk−m: the distribution of the first element sampled (without replacement) after sampling Bk−m, given (conditionally on the event) that the set of k samples is Sk, so we have bk−m+1 ∈ Sk and bk−m+1 6∈ Bk−m. This means that its conditional expectation of f(bk−m+1) is the restricted unordered set estimator for C = Bk−m since\neUS,D\\B k−m (Sk) = Ex∼p(x) [ f(x) ∣∣x ∈ Sk, x 6∈ Bk−m] = Ebk−m+1∼p(bk−m+1|Sk,Bk−m) [f(bk−m+1)] . (45)\nObserving that the definition (equation 42) of the stochastic sum-and-sample estimator does not depend on the actual order of the m samples, and using equation 45, we can reduce the multisample estimator to the stochastic sum-and-sample estimator with k′ = k − m + 1, such that the result follows from equation 36.\nEBk∼p(Bk|Sk) k−m∑ j=1 p(bj)f(bj) + 1− k−m∑ j=1 p(bj) eUS,D\\Bk−m(Sk) \n=EBk−m∼p(Bk−m|Sk) k−m∑ j=1 p(bj)f(bj) + 1− k−m∑ j=1 p(bj) eUS,D\\Bk−m(Sk) \n=EBk−m∼p(Bk−m|Sk) k−m∑ j=1 p(bj)f(bj) + 1− k−m∑ j=1 p(bj) Ebk−m+1∼p(bk−m+1|Sk,Bk−m) [f(bk−m+1)] \n=EBk−m+1∼p(Bk−m+1|Sk) k−m∑ j=1 p(bj)f(bj) + 1− k−m∑ j=1 p(bj) f(bk−m+1) \n=ESk−m+1|Sk EBk−m+1∼p(Bk−m+1|Sk−m+1) k−m∑ j=1 p(bj)f(bj) + 1− k−m∑ j=1 p(bj) f(bk−m+1) \n=ESk−m+1|Sk ∑ s∈Sk p(s)R(Sk, s)f(s) = ∑ s∈Sk p(s)R(Sk, s)f(s). (46)" }, { "heading": "D THE IMPORTANCE-WEIGHTED ESTIMATOR", "text": "" }, { "heading": "D.1 RAO-BLACKWELLIZATION OF THE IMPORTANCE-WEIGHTED ESTIMATOR", "text": "In this section we give the proof that Rao-Blackwellizing the importance-weighted estimator results in the unordered set estimator. Theorem 6. Rao-Blackwellizing the importance-weighted estimator results in the unordered set estimator, i.e.:\nEκ∼p(κ|Sk) ∑ s∈Sk\np(s)\n1− Fφs(κ) f(s) = ∑ s∈Sk p(s)R(Sk, s)f(s). (47)\nHere we have slightly rewritten the definition of the importance-weighted estimator, using that q(s, a) = P (gφs > a) = 1 − Fφs(a), where Fφs is the CDF of the Gumbel distribution (see Appendix A).\nProof. We first prove the following Lemma:\nLemma 4. Eκ∼p(κ|Sk)\n[ 1\n1− Fφs(κ)\n] = R(Sk, s) (48)\nProof. Conditioning on Sk, we know that the elements in Sk have the k largest perturbed logprobabilities, so κ, the (k + 1)-th largest perturbed log-probability is the largest perturbed logprobability in D \\Sk, and satisfies κ = maxs∈D\\Sk gφs = gφD\\Sk ∼ Gumbel(φD\\Sk). Computing p(κ|Sk) using Bayes’ Theorem, we have\np(κ|Sk) = p(S k|κ)p(κ) p(Sk) =\n∏ s∈Sk(1− Fφs(κ))fφD\\Sk (κ)\np(Sk) (49)\nwhich allows us to compute (using equation 34 with C = {s} and gφD\\S = κ) Eκ∼p(κ|Sk) [ 1\n1− Fφs(κ) ] =\n∫ ∞ κ=−∞ p(κ|Sk) 1 1− Fφs(κ) ∂κ\n= ∫ ∞ κ=−∞ ∏ s∈Sk(1− Fφs(κ))fφD\\Sk (κ) p(Sk)\n1\n1− Fφs(κ) ∂κ\n= 1\np(Sk) ∫ ∞ κ=−∞ ∏ s∈Sk\\{s} (1− Fφs(κ))fφD\\Sk (κ)∂κ\n= 1\np(Sk) pD\\{s}(S \\ {s})\n=R(Sk, s).\nUsing Lemma 4 we find\nEκ∼p(κ|Sk) ∑ s∈Sk\np(s)\n1− Fφs(κ) f(s) = ∑ s∈Sk p(s)Eκ∼p(κ|Sk) [ 1 1− Fφs(κ) ] f(s)\n= ∑ s∈Sk p(s)R(Sk, s)f(s)." 
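As a quick sanity check of the identities above (an illustrative addition on our part, not a claim about the released implementation), the following NumPy script verifies the unbiasedness of the unordered set estimator by brute force on a toy domain: p(S^k) and the leave-one-out ratios R(S^k, s) are computed by enumerating all orderings (cheap for small k), and the expectation of the estimator over all size-k subsets is compared against E_{x~p}[f(x)].

import itertools
import numpy as np

rng = np.random.default_rng(0)

def set_prob(S, p, domain):
    # p(S): probability of drawing the set S without replacement from the
    # distribution p restricted (and renormalized) to `domain`,
    # summed over all orderings of S.
    Z = sum(p[i] for i in domain)
    total = 0.0
    for order in itertools.permutations(S):
        prob, rest = 1.0, Z
        for b in order:
            prob *= p[b] / rest
            rest -= p[b]
        total += prob
    return total

n, k = 5, 3
p = rng.random(n); p /= p.sum()
f = rng.random(n)
domain = list(range(n))

expected = float(np.dot(p, f))  # E_{x~p}[f(x)]

# Enumerate all size-k subsets S^k, weight by p(S^k), apply the estimator.
total = 0.0
for S in itertools.combinations(domain, k):
    pS = set_prob(S, p, domain)
    est = 0.0
    for s in S:
        rest = [i for i in domain if i != s]
        R = set_prob(tuple(x for x in S if x != s), p, rest) / pS
        est += p[s] * R * f[s]
    total += pS * est

print(expected, total)  # the two numbers agree up to floating-point error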
}, { "heading": "D.2 THE IMPORTANCE-WEIGHTED POLICY GRADIENT ESTIMATOR WITH BUILT-IN BASELINE", "text": "For self-containment we include this section, which is adapted from our unpublished workshop paper (Kool et al., 2019b). The importance-weighted policy gradient estimator combines REINFORCE (Williams, 1992) with the importance-weighted estimator (Duffield et al., 2007; Vieira, 2017) in equation 15 which results in an unbiased estimator of the policy gradient∇θEpθ(x)[fθ(x)]:\neIWPG(Sk, κ) = ∑ s∈Sk pθ(s) qθ,κ(s) ∇θ log pθ(s)f(s) = ∑ s∈Sk ∇θpθ(s) qθ,κ(s) f(s) (50)\nRecall that κ is the (k + 1)-th largest perturbed log-probability (see Section 3.2). We compute a lower variance but biased variant by normalizing the importance weights using the normalization W (Sk) = ∑ s∈Sk pθ(s) qθ,κ(s) .\nAs we show in Kool et al. (2019b), we can include a ‘baseline’ B(Sk) = ∑ s∈Sk pθ(s) qθ,κ(s)\nf(s) and correct for the bias (since it depends on the complete sample Sk) by weighting individual terms of the estimator by 1− pθ(s) + pθ(s)qθ,κ(s) :\neIWPGBL(Sk, κ) = ∑ s∈Sk ∇θpθ(s) qθ,κ(s) ( f(s) ( 1− pθ(s) + pθ(s) qθ,κ(s) ) −B(Sk) ) (51)\nFor the normalized version, we use the normalizationW (Sk) = ∑ s∈Sk pθ(s) qθ,κ(s) for the baseline, and Wi(S k) = W (Sk)− pθ(s)qθ,κ(s) + pθ(s) to normalize the individual terms:\n∇θEy∼pθ(y) [f(y)] ≈ ∑ s∈Sk\n1 Wi(Sk) · ∇θpθ(s) qθ,κ(s)\n( f(s)− B(S k)\nW (Sk)\n) (52)\nIt seems odd to normalize the terms in the outer sum by 1 Wi(Sk) instead of 1 W (Sk) , but equation 52 can be rewritten into a form similar to equation 17, i.e. with a different baseline for each sample, but this form is more convenient for implementation (Kool et al., 2019b)." }, { "heading": "E THE UNORDERED SET POLICY GRADIENT ESTIMATOR", "text": "" }, { "heading": "E.1 PROOF OF UNBIASEDNESS OF THE UNORDERED SET POLICY GRADIENT ESTIMATOR WITH BASELINE", "text": "To prove the unbiasedness of result we need to prove that the control variate has expectation 0: Lemma 5.\nESk∼pθ(Sk) ∑ s∈Sk ∇θpθ(s)R(Sk, s) ∑ s′∈Sk pθ(s ′)RD\\{s}(Sk, s′)f(s′) = 0. (53) Proof. Similar to equation 10, we apply Bayes’ Theorem conditionally on b1 = s to derive for s′ 6= s\nP (b2 = s ′|Sk, b1 = s) = P (Sk|b2 = s′, b1 = s)P (b2 = s′|b1 = s′) P (Sk|b1 = s)\n= p D\\{s,s′} θ (S k \\ {s, s′})pD\\{s}θ (s′) p D\\{s} θ (S k \\ {s})\n= pθ(s\n′)\n1− pθ(s) RD\\{s}(Sk, s′). 
(54)\nFor s′ = s we have RD\\{s}(Sk, s′) = 1 by definition, so using equation 54 we can show that∑ s′∈Sk pθ(s ′)RD\\{s}(Sk, s′)f(s′)\n= pθ(s)f(s) + ∑\ns′∈Sk\\{s}\npθ(s ′)RD\\{s}(Sk, s′)f(s′)\n= pθ(s)f(s) + (1− pθ(s)) ∑\ns′∈Sk\\{s}\npθ(s ′)\n1− pθ(s) RD\\{s}(Sk, s′)f(s′)\n= pθ(s)f(s) + (1− pθ(s)) ∑\ns′∈Sk\\{s}\nP (b2 = s ′|Sk, b1 = s)f(s′)\n= pθ(s)f(s) + (1− pθ(s))Eb2∼pθ(b2|Sk,b1=s) [f(b2)] = Eb2∼pθ(b2|Sk,b1=s) [pθ(b1)f(b1) + (1− pθ(b1))f(b2)] .\nNow we can show that the control variate is actually the result of Rao-Blackwellization:\nESk∼pθ(Sk) ∑ s∈Sk ∇θpθ(s)R(Sk, s) ∑ s′∈Sk pθ(s ′)RD\\{s}(Sk, s′)f(s′) = ESk∼pθ(Sk) ∑ s∈Sk pθ(s)R(S k, s)∇θ log pθ(s) ∑ s′∈Sk pθ(s ′)RD\\{s}(Sk, s′)f(s′)\n = ESk∼pθ(Sk) ∑ s∈Sk P (b1 = s|Sk)∇θ log pθ(s) ∑ s′∈Sk pθ(s ′)RD\\{s}(Sk, s′)f(s′)\n = ESk∼pθ(Sk) Eb1∼pθ(b1|Sk) ∇θ log pθ(b1) ∑\ns′∈Sk pθ(s\n′)RD\\{b1}(Sk, s′)f(s′) = ESk∼pθ(Sk) [ Eb1∼pθ(b1|Sk) [ ∇θ log pθ(b1)Eb2∼pθ(b2|Sk,b1) [pθ(b1)f(b1) + (1− pθ(b1))f(b2)]\n]] = ESk∼pθ(Sk) [ EBk∼pθ(Bk|Sk) [∇θ log pθ(b1) (pθ(b1)f(b1) + (1− pθ(b1))f(b2))]\n] = EBk∼pθ(Bk) [∇θ log pθ(b1) (pθ(b1)f(b1) + (1− pθ(b1))f(b2))]\nThis expression depends only on b1 and b2 and we recognize the stochastic sum-and-sample estimator for k = 2 used as ‘baseline’. As a special case of equation 13 for C = {b1}, we have\nEb2∼pθ(b2|b1) [(pθ(b1)f(b1) + (1− pθ(b1))f(b2))] = Ei∼pθ(i) [f(i)] . (55)\nUsing this, and the fact that Eb1∼pθ(b1) [∇θ log pθ(b1)] = ∇θEb1∼pθ(b1) [1] = ∇θ1 = 0 we find\nESk∼pθ(Sk) ∑ s∈Sk ∇θpθ(s)R(Sk, s) ∑ s′∈Sk pθ(s ′)RD\\{s}(Sk, s′)f(s′) = EBk∼pθ(Bk) [∇θ log pθ(b1) (pθ(b1)f(b1) + (1− pθ(b1))f(b2))] = Eb1∼pθ(b1) [ ∇θ log pθ(b1)Eb2∼pθ(b2|b1) [(pθ(b1)f(b1) + (1− pθ(b1))f(b2))]\n] = Eb1∼pθ(b1) [ ∇θ log pθ(b1)Ex∼pθ(x) [f(x)]\n] = Eb1∼pθ(b1) [∇θ log pθ(b1)]Ex∼pθ(x) [f(x)] = 0 · Ex∼pθ(x) [f(x)] = 0" }, { "heading": "F THE RISK ESTIMATOR", "text": "" }, { "heading": "F.1 PROOF OF BUILT-IN BASELINE", "text": "We show that the RISK estimator, taking gradients through the normalization factor actually has a built-in baseline. We first use the log-derivative trick to rewrite the gradient of the ratio as the ratio times the logarithm of the gradient, and then swap the summation variables in the double sum that arises:\neRISK(S) = ∑ s∈S ∇θ ( pθ(s)∑ s′∈S pθ(s ′) ) f(s)\n= ∑ s∈S pθ(s)∑ s′∈S pθ(s ′) ∇θ log ( pθ(s)∑ s′∈S pθ(s ′) ) f(s)\n= ∑ s∈S pθ(s)∑ s′∈S pθ(s ′)\n( ∇θ log pθ(s)−∇θ log\n∑ s′∈S pθ(s ′)\n) f(s)\n= ∑ s∈S pθ(s)∑ s′∈S pθ(s ′) ( ∇θpθ(s) pθ(s) − ∑ s′∈S ∇θpθ(s′)∑ s′∈S pθ(s ′) ) f(s)\n= ∑ s∈S ∇θpθ(s)f(s)∑ s′∈S pθ(s ′) − ∑ s,s′∈S pθ(s)∇θpθ(s′)f(s)(∑ s′∈S pθ(s ′) )2\n= ∑ s∈S ∇θpθ(s)f(s)∑ s′∈S pθ(s ′) − ∑ s,s′∈S pθ(s ′)∇θpθ(s)f(s′)(∑ s′∈S pθ(s ′) )2\n= ∑ s∈S ∇θpθ(s)∑ s′∈S pθ(s ′) ( f(s)− ∑ s′∈S pθ(s ′)f(s′)∑ s′∈S pθ(s ′) )\n= ∑ s∈S ∇θpθ(s)∑ s′′∈S pθ(s ′′)\n( f(s)−\n∑ s′∈S pθ(s ′)∑ s′′∈S pθ(s ′′) f(s′)\n) ." }, { "heading": "G CATEGORICAL VARIATIONAL AUTO-ENCODER", "text": "" }, { "heading": "G.1 EXPERIMENTAL DETAILS", "text": "We use the code6 by Yin et al. (2019) to reproduce their categorical VAE experiment, of which we include details here for self-containment. The dataset is MNIST, statically binarized by thresholding at 0.5 (although we include results using the standard binarized dataset by Salakhutdinov & Murray (2008); Larochelle & Murray (2011) in Section G.2). The latent representation z is K = 20 dimensional with C = 10 categories per dimension with a uniform prior p(zk = c) = 1/C, k = 1, ...,K. 
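To make this prior setup concrete: assuming posterior logits of shape (batch, K, C) for the factorized categorical posterior qφ(z|x) = ∏k qφ(zk|x) described next, the analytic KL term to the uniform prior reduces, per dimension, to log C minus the posterior entropy. A minimal PyTorch sketch (ours, for illustration only):

import math
import torch
import torch.nn.functional as F

K, C = 20, 10  # latent dimensions and categories per dimension, as above

def analytic_kl_to_uniform(logits):
    # KL(q_phi(z|x) || p(z)) for a factorized categorical posterior with
    # logits of shape (batch, K, C) and the uniform prior p(z_k = c) = 1/C.
    # Per dimension: sum_c q_c (log q_c - log(1/C)) = log C - H(q).
    log_q = F.log_softmax(logits, dim=-1)
    kl_per_dim = (log_q.exp() * log_q).sum(-1) + math.log(C)
    return kl_per_dim.sum(-1)  # sum over the K independent dimensions

logits = torch.randn(4, K, C)
print(analytic_kl_to_uniform(logits))  # shape (4,), entries are >= 0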
The encoder is parameterized by φ as qφ(z|x) = ∏k qφ(zk|x) and has two fully connected hidden layers with 512 and 256 hidden nodes respectively, with LeakyReLU (α = 0.1) activations. The decoder, parameterized by θ, is given by pθ(x|z) = ∏i pθ(xi|z), where xi ∈ {0, 1} are the pixel values, and has fully connected hidden layers with 256 and 512 nodes and LeakyReLU activations.

ELBO optimization. The evidence lower bound (ELBO) that we optimize is given by
$$\mathcal{L}(\phi, \theta) = \mathbb{E}_{z \sim q_\phi(z|x)}\left[\ln p_\theta(x|z) + \ln p(z) - \ln q_\phi(z|x)\right] \qquad (56)$$
$$= \mathbb{E}_{z \sim q_\phi(z|x)}\left[\ln p_\theta(x|z)\right] - \mathrm{KL}\!\left(q_\phi(z|x) \,\|\, p(z)\right). \qquad (57)$$

For the decoder parameters θ, since qφ(z|x) does not depend on θ, it follows that
$$\nabla_\theta \mathcal{L}(\phi, \theta) = \mathbb{E}_{z \sim q_\phi(z|x)}\left[\nabla_\theta \ln p_\theta(x|z)\right]. \qquad (58)$$

For the encoder parameters φ, we can write ∇φL(φ, θ) using equation 57 and equation 19 as
$$\nabla_\phi \mathcal{L}(\phi, \theta) = \mathbb{E}_{z \sim q_\phi(z|x)}\left[\nabla_\phi \ln q_\phi(z|x)\, \ln p_\theta(x|z)\right] - \nabla_\phi \mathrm{KL}\!\left(q_\phi(z|x) \,\|\, p(z)\right). \qquad (59)$$

This assumes we can compute the KL divergence analytically. Alternatively, we can use a sample estimate of the KL divergence, and use equation 56 with equation 19 to obtain
$$\nabla_\phi \mathcal{L}(\phi, \theta) = \mathbb{E}_{z \sim q_\phi(z|x)}\left[\nabla_\phi \ln q_\phi(z|x)\left(\ln p_\theta(x|z) + \ln p(z) - \ln q_\phi(z|x)\right) + \nabla_\phi \ln q_\phi(z|x)\right] \qquad (60)$$
$$= \mathbb{E}_{z \sim q_\phi(z|x)}\left[\nabla_\phi \ln q_\phi(z|x)\left(\ln p_\theta(x|z) - \ln q_\phi(z|x)\right)\right]. \qquad (61)$$

Here we have left out the term $\mathbb{E}_{z \sim q_\phi(z|x)}[\nabla_\phi \ln q_\phi(z|x)] = 0$, similar to Roeder et al. (2017), and, assuming a uniform (i.e. constant) prior $\ln p(z)$, the term $\mathbb{E}_{z \sim q_\phi(z|x)}[\nabla_\phi \ln q_\phi(z|x) \ln p(z)] = 0$. With a built-in baseline, this second term cancels out automatically, even if it is implemented. Despite the similarity of equation 56 and equation 57, their gradient estimates (equation 60 and equation 59) are structurally dissimilar, and care should be taken to implement the REINFORCE estimator (or related estimators such as ARSM and the unordered set estimator) correctly using automatic differentiation software. Using Gumbel-Softmax and RELAX, we take gradients 'directly' through the objective in equation 57.

We optimize the ELBO using the analytic KL for 1000 epochs using the Adam (Kingma & Ba, 2015) optimizer. We use a learning rate of 10−3 for all estimators except Gumbel-Softmax and RELAX, which use a learning rate of 10−4, as we found they diverged with a higher learning rate. For ARSM, as an exception, we use the sample KL and a learning rate of 3 · 10−4, as suggested by the authors. All reported ELBO values are computed using the analytic KL. Our code is publicly available7.

6https://github.com/ARM-gradient/ARSM" }, { "heading": "G.2 ADDITIONAL RESULTS", "text": "Gradient variance during training. We also evaluate the gradient variance of different estimators during different stages of training. We measure the variance of the different estimators with k = 4 samples during training with REINFORCE with replacement, such that all estimators are computed for the same model parameters. The results during training, given in Figure 4, are similar to the results for the trained model in Table 1, except at the beginning of training, although the rankings of the different estimators are mostly the same.

Negative ELBO on validation set. Figure 5 shows the -ELBO evaluated during training on the validation set. For the large latent space, we see the validation error quickly increase (after reaching a minimum), which is likely because of overfitting (due to improved optimization), a phenomenon observed before (Tucker et al., 2017; Grathwohl et al., 2018). Note that before the overfitting starts, both REINFORCE without replacement and the unordered set estimator achieve a validation error similar to the other estimators, such that in a practical setting, one can use early stopping.
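The variance measurement itself can be done generically; the following sketch (our own convention for illustration, not the exact protocol of the released code) estimates the total gradient variance of any single-sample surrogate objective by repeated backward passes:

import torch

def empirical_grad_variance(surrogate_fn, params, n_trials=100):
    # `surrogate_fn` is a zero-argument callable returning a scalar whose
    # backward pass produces one gradient sample (e.g. a REINFORCE
    # surrogate with freshly drawn samples on every call). Returns the
    # sum of per-parameter gradient variances.
    samples = []
    for _ in range(n_trials):
        for p in params:
            p.grad = None
        surrogate_fn().backward()
        samples.append(torch.cat([p.grad.flatten() for p in params]))
    g = torch.stack(samples)
    return g.var(dim=0, unbiased=True).sum()

# Tiny usage example: score-function gradient of E_{x~Cat(theta)}[f(x)].
theta = torch.zeros(5, requires_grad=True)
f = torch.randn(5)

def reinforce_surrogate():
    dist = torch.distributions.Categorical(logits=theta)
    x = dist.sample()
    return dist.log_prob(x) * f[x]

print(empirical_grad_variance(reinforce_surrogate, [theta]))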
Results using standard binarized MNIST dataset. Instead of using the MNIST dataset binarized by thresholding values at 0.5 (as in the code and paper by Yin et al. (2019)), we also experiment with the standard (fixed) binarized dataset by Salakhutdinov & Murray (2008); Larochelle & Murray (2011), for which we plot train and validation curves for two runs on the small and large domain in Figure 6. This gives more realistic (higher) -ELBO scores, although we still observe the effect of overfitting. As this is a somewhat more unstable setting, one of the runs using REINFORCE with replacement diverged, but in general the relative performance of the estimators is similar to using the dataset with the 0.5 threshold.

7https://github.com/wouterkool/estimating-gradients-without-replacement" }, { "heading": "H TRAVELLING SALESMAN PROBLEM", "text": "The Travelling Salesman Problem (TSP) is a discrete optimization problem that consists of finding the order in which to visit a set of locations, given as x, y coordinates, to minimize the total length of the tour, starting and ending at the same location. As a tour can be considered a sequence of locations, this problem can be set up as a sequence modelling problem that can be addressed using either supervised (Vinyals et al., 2015) or reinforcement learning (Bello et al., 2016; Kool et al., 2019a).

Kool et al. (2019a) introduced the Attention Model, an encoder-decoder model which considers a TSP instance as a fully connected graph. The encoder computes embeddings for all nodes (locations), and the decoder produces a tour, which is a sequence of nodes, selecting one node at a time using an attention mechanism and feeding it back autoregressively as input to select the next node. In Kool et al. (2019a), this model is trained using REINFORCE, with a greedy rollout used as baseline to reduce variance.

We use the code by Kool et al. (2019a) to train the exact same Attention Model (for details we refer to Kool et al. (2019a)), and minimize the expected length of a tour predicted by the model, using different gradient estimators. We did not do any hyperparameter optimization and used the exact same training details, using the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 10−4 (no decay) for 100 epochs for all estimators. For the baselines, we used the same batch size of 512, but for estimators that use k = 4 samples, we used a batch size of 512/4 = 128 to compensate for the additional samples (this actually makes multi-sample methods faster, since the encoder still needs to be evaluated only once).
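For reference, a minimal sketch of a k-sample REINFORCE-with-replacement surrogate with a leave-one-out mean baseline, one standard construction for the multi-sample baselines compared in these experiments (our paraphrase; the names and shapes are illustrative, not the exact code of the cited implementation):

import torch

def reinforce_with_replacement_loss(log_probs, costs):
    # Surrogate loss for k i.i.d. sampled tours whose gradient is an
    # unbiased estimate of grad E[cost]. The baseline for sample i is the
    # mean cost of the other k-1 samples, so it does not depend on
    # sample i and unbiasedness is preserved.
    k = costs.shape[0]
    baseline = (costs.sum() - costs) / (k - 1)  # leave-one-out mean
    advantage = (costs - baseline).detach()
    return (advantage * log_probs).mean()

# Shapes only: k = 4 sampled tours per instance.
log_probs = torch.randn(4, requires_grad=True)  # sum of log p over tour steps
costs = torch.rand(4)                           # tour lengths
reinforce_with_replacement_loss(log_probs, costs).backward()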
" } ]
2020
null
SP:e677ee557b7b802ce2588bfc05b16054913f9662
[ "The authors developed a novel quantization technique that yields layer-wise different mixed-precision quantization. To do so, they alternatively update the pre-trained weights and the quantizer, which they call cursor. The following two features distinguish this paper: using two precision values around the cursor's value (instead of the closest one) and regularizing the parameter size using a new loss function. Because the whole process is differentiable, the appropriate precision to each layer can be found fast. Thanks to these efforts, this method balances the compression rate and accuracy well on CIFAR-10 and ImageNet.", "This paper is about using quantization to compress the DNN models. The main idea is to use NAS to obtain the mixed precision model. More specifically, it adaptively chooses the number of quantization bit for each layer using NAS by minimizing the cross-entropy loss and the total number of bits (or model size) used to compress the model. The experiment is on CIFAR and ImageNet, and compared with other quantization methods showing better accuracy." ]
Deep neural networks (DNNs) have rapidly found many applications in different scenarios. However, their large computational cost and memory consumption are barriers for computation-constrained applications. DNN model quantization is a widely used method to reduce the DNN storage and computation burden by decreasing the bit width. In this paper, we propose a novel cursor based adaptive quantization method using differentiable architecture search (DAS). The mixed-bit quantization mechanism is formulated as a DAS process with a continuous cursor that represents the possible quantization bit width. The cursor-based DAS adaptively searches for the desired quantization bit for each layer. The DAS process can be solved via an alternative approximate optimization process, which is designed for the mixed quantization scheme of a DNN model. We further devise a new loss function in the search process to simultaneously optimize the accuracy and parameter size of the model. In the quantization step, based on a new strategy, the two integers closest to the cursor are adopted together as the bits to quantize the DNN, in order to reduce the quantization noise and avoid the local convergence problem. Comprehensive experiments on benchmark datasets show that our cursor based adaptive quantization approach can efficiently obtain smaller models with comparable or even better classification accuracy.
[]
[ { "authors": [ "Sajid Anwar", "Kyuyeon Hwang", "Wonyong Sung" ], "title": "Structured pruning of deep convolutional neural networks", "venue": "CoRR, abs/1512.08571,", "year": 2015 }, { "authors": [ "Han Cai", "Jiacheng Yang", "Weinan Zhang", "Song Han", "Yong Yu" ], "title": "Path-level network transformation for efficient architecture", "venue": "search. CoRR,", "year": 2018 }, { "authors": [ "Jungwook Choi", "Zhuo Wang", "Swagath Venkataramani", "Pierce I-Jen Chuang", "Vijayalakshmi Srinivasan", "Kailash Gopalakrishnan" ], "title": "PACT: parameterized clipping activation for quantized neural networks", "venue": "CoRR, abs/1805.06085,", "year": 2018 }, { "authors": [ "Matthieu Courbariaux", "Yoshua Bengio" ], "title": "Binarynet: Training deep neural networks with weights and activations constrained to +1 or -1", "venue": "CoRR, abs/1602.02830,", "year": 2016 }, { "authors": [ "Ahmed T. Elthakeb", "Prannoy Pilligundla", "Amir Yazdanbakhsh", "Sean Kinzer", "Hadi Esmaeilzadeh" ], "title": "Releq: A reinforcement learning approach for deep quantization of neural networks", "venue": "CoRR, abs/1811.01704,", "year": 2018 }, { "authors": [ "Yang Gao", "Hong Yang", "Peng Zhang", "Chuan Zhou", "Yue Hu" ], "title": "Graphnas: Graph neural architecture search with reinforcement learning", "venue": "CoRR, abs/1904.09981,", "year": 2019 }, { "authors": [ "Minghao Guo", "Zhao Zhong", "Wei Wu", "Dahua Lin", "Junjie Yan" ], "title": "IRLAS: inverse reinforcement learning for architecture", "venue": "search. CoRR,", "year": 2018 }, { "authors": [ "Yunhui Guo" ], "title": "A survey on methods and theories of quantized neural networks", "venue": "CoRR, abs/1808.04752,", "year": 2018 }, { "authors": [ "Song Han", "Huizi Mao", "William J. Dally" ], "title": "Deep compression: Compressing deep neural network with pruning, trained quantization and huffman", "venue": "coding. CoRR,", "year": 2015 }, { "authors": [ "Kohei Hayashi", "Taiki Yamaguchi", "Yohei Sugawara", "Shin ichi Maeda" ], "title": "Einconv: Exploring unexplored tensor decompositions for convolutional neural networks. ArXiv", "venue": null, "year": 1908 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the Knowledge in a Neural Network", "venue": "arXiv e-prints, art", "year": 2015 }, { "authors": [ "Jason Zhi Liang", "Elliot Meyerson", "Risto Miikkulainen" ], "title": "Evolutionary architecture search for deep multitask networks. CoRR, abs/1803.03745, 2018", "venue": null, "year": 2018 }, { "authors": [ "Darryl Dexu Lin", "Sachin S. Talathi", "V. Sreekanth Annapureddy" ], "title": "Fixed point quantization of deep convolutional networks", "venue": "CoRR, abs/1511.06393,", "year": 2015 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "DARTS: differentiable architecture", "venue": "search. 
CoRR,", "year": 2018 }, { "authors": [ "Christos Louizos", "Matthias Reisser", "Tijmen Blankevoort", "Efstratios Gavves", "Max Welling" ], "title": "Relaxed quantization for discretized neural networks", "venue": "CoRR, abs/1810.01875,", "year": 2018 }, { "authors": [ "Eunhyeok Park", "Dongyoung Kim", "Sungjoo Yoo", "Peter Vajda" ], "title": "Precision highway for ultra lowprecision quantization", "venue": "CoRR, abs/1812.09818,", "year": 2018 }, { "authors": [ "Eunhyeok Park", "Sungjoo Yoo", "Peter Vajda" ], "title": "Value-aware quantization for training and inference of neural networks. CoRR, abs/1804.07802, 2018b", "venue": "URL http://arxiv.org/abs/1804", "year": 2018 }, { "authors": [ "Hanyu Peng", "Jiaxiang Wu", "Shifeng Chen", "Junzhou Huang" ], "title": "Collaborative channel pruning for deep networks", "venue": "In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Hieu Pham", "Melody Y. Guan", "Barret Zoph", "Quoc V. Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameter sharing", "venue": "CoRR, abs/1802.03268,", "year": 2018 }, { "authors": [ "Antonio Polino", "Razvan Pascanu", "Dan Alistarh" ], "title": "Model compression via distillation and quantization", "venue": "CoRR, abs/1802.05668,", "year": 2018 }, { "authors": [ "Mohammad Rastegari", "Vicente Ordonez", "Joseph Redmon", "Ali Farhadi" ], "title": "Xnor-net: Imagenet classification using binary convolutional neural networks", "venue": "CoRR, abs/1603.05279,", "year": 2016 }, { "authors": [ "Mark Sandler", "Andrew G. Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Inverted residuals and linear bottlenecks: Mobile networks for classification, detection and segmentation", "venue": "CoRR, abs/1801.04381,", "year": 2018 }, { "authors": [ "Cheng Tai", "Tong Xiao", "Yi Zhang", "Xiaogang Wang", "Weinan E" ], "title": "Convolutional neural networks with low-rank regularization", "venue": "arXiv e-prints, art", "year": 2015 }, { "authors": [ "Xin Wei", "Wenchao Liu", "Lei Chen", "Long Ma", "He Chen", "Yin Zhuang" ], "title": "Fpga-based hybrid-type implementation of quantized neural networks for remote sensing applications", "venue": "doi: 10.3390/s19040924. URL", "year": 2019 }, { "authors": [ "Bichen Wu", "Yanghan Wang", "Peizhao Zhang", "Yuandong Tian", "Peter Vajda", "Kurt Keutzer" ], "title": "Mixed precision quantization of convnets via differentiable neural architecture", "venue": "search. CoRR,", "year": 2018 }, { "authors": [ "Jiaxiang Wu", "Cong Leng", "Yuhang Wang", "Qinghao Hu", "Jian Cheng" ], "title": "Quantized convolutional neural networks for mobile", "venue": "devices. 
CoRR,", "year": 2015 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer", "venue": "CoRR, abs/1612.03928,", "year": 2016 }, { "authors": [ "Dongqing Zhang", "Jiaolong Yang", "Dongqiangzi Ye", "Gang Hua" ], "title": "Lq-nets: Learned quantization for highly accurate and compact deep neural networks", "venue": "CoRR, abs/1807.10029,", "year": 2018 }, { "authors": [ "Shuchang Zhou", "Zekun Ni", "Xinyu Zhou", "He Wen", "Yuxin Wu", "Yuheng Zou" ], "title": "Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients", "venue": "CoRR, abs/1606.06160,", "year": 2016 }, { "authors": [ "Zhuangwei Zhuang", "Mingkui Tan", "Bohan Zhuang", "Jing Liu", "Yong Guo", "Qingyao Wu", "Junzhou Huang", "Jin-Hui Zhu" ], "title": "Discrimination-aware channel pruning for deep neural networks", "venue": "CoRR, abs/1810.11809,", "year": 2018 }, { "authors": [ "Barret Zoph", "Quoc V. Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "CoRR, abs/1611.01578,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep learning (DL) has achieved great successes in varied fields such as gaming, natural language processing, speech recognition, computer vision and so on. However, its huge computational burden and large memory consumption still intimidate many potential applications, especially for mobile devices and embedded systems.\nA number of efforts have been devoted to compress the DL model size and accelerate its training and test speed. These efforts can be roughly categorized into four major classes: network pruning (Han et al. (2015); Anwar et al. (2015); Peng et al. (2019); Zhuang et al. (2018)), low rank approximation (Tai et al. (2015); Wang et al. (2018); Hayashi et al. (2019)), knowledge distillation (Hinton et al. (2015); Zagoruyko & Komodakis (2016)), and network quantization (Courbariaux & Bengio (2016); Lin et al. (2015); Wu et al. (2015); Polino et al. (2018); Zhang et al. (2018)). Among them, network quantization methods, jointly optimizing the whole network weights, activations or gradients with low bit (such as 8 bits or even 1 bit), show great potential in compressing model size and accelerating inference time. In addition, quantization based approaches are preferable for mobile devices and embedded systems since these devices are gradually equipped by specifically designed low bit computing hardware.\nAlthough existing quantization based approaches, which mainly use one fixed bit to represent the whole DNN model, yields encouraging compression ratio while keeping the model’s performance, we argue that simply using only a fixed bit for quantization is not the optimal choice for the trade-off between a model size and its performance. For example, to run a model on chips with strict memory limitations, 1 bit or 2 bits’ quantization suffers from severe accuracy loss (Rastegari et al. (2016)) while 16 bits’ or 8 bits’ quantization cannot significantly reduce the model size.\nTo address the above problem, we propose a cursor based adaptive quantization method to derive a different number of bits in different layers for DNN model compression, i.e., we search for the best\nconfiguration of different bit quantization for different layers in a neural network model. Distinctive from most algorithms aforementioned, our approach is motivated by recent neural architecture search (NAS) that aims to find better performance neural architecture with less calculations or less size automatically. The key in our algorithm is using a continuous cursor that represents the bit quantization scheme for each layer. For different layers, many cursors will be adaptively searched during the NAS process. Since the cursor itself is continuous and the whole search procedure can be considered as a differentiable architecture search (DAS) process, which can be effectively solved based on an alternative optimization strategy. A novel cost function that considers the model compression and prediction accuracy is also proposed in the DAS process. In the searching process of the cursors, a quantization process is applied to compress the model size at the same time. To reduce the possible quantization noise and local convergence problem, we make use of the closest two integer bits to the cursor to quantize the weights for each layer in a DNN model. We validate our proposed method with image classification tasks on CIFAR10,CIFAR100 and ImageNet. 
Comprehensive experiments on backbone DNN models such as ResNet20, ResNet18, ResNet56 and MobileNetV2 show that the proposed cursor based quantization method achieves remarkably better compression ratios with negligible accuracy drop or even better accuracy.

In summary, the contributions of this work are four-fold:

• We cast the adaptive quantization of a neural network as a neural architecture search problem. A continuous cursor is proposed to represent the possible quantization bit, leading to a more efficient search space.

• A novel regularization function is proposed to optimize model compression in the search process. Thus the search for the cursor positions and weights can be efficiently solved in an alternative optimization manner.

• The two integers nearest to the cursor are adopted, with a carefully designed strategy, to implement the quantization of the network, which reduces the quantization noise and avoids possible local convergence.

• We comprehensively evaluate the proposed adaptive quantization method on benchmark datasets and achieve new state-of-the-art performance for mixed-bit quantization of neural networks." }, { "heading": "2 RELATED WORK", "text": "Recently, a number of new quantization approaches have been proposed, enabling quantized, compressed models to compete with their full precision counterparts. A clustering method is applied for the weight codebook representation in (Han et al. (2015)), and the network is then retrained to obtain better quantized centroids. In (Zhang et al. (2018)), the authors jointly trained a DNN and its associated quantizers to reduce the noticeable prediction accuracy gap between the quantized model and its full precision one. A direct differentiable quantization method was introduced in (Louizos et al. (2018)) with promising test accuracy. A new activation quantization method that takes an activation clipping parameter was proposed in (Choi et al. (2018)) to ensure a suitable quantization scale.

Some efforts have been devoted to quantizing a neural network with a different number of bits in different layers. In (Lin et al. (2015)), the signal-to-quantization-noise ratio (SQNR) is applied to the layer weights to evaluate the effects of quantization error. Based on SQNR, different bits were used for the quantization of each layer, yielding about 20% model size reduction without accuracy loss in their tests. The authors of (Wang et al. (2018)) proposed an automated mixed precision quantization scheme based on reinforcement learning (RL) to achieve better latency on different hardware platforms such as edge devices and cloud data centers. They also claimed that their actor-critic model produced efficient actions that result in better latency and less energy consumption with negligible loss of accuracy.

In the past few years, a new trend has been witnessed in network design, i.e., neural architecture search (NAS). RL based approaches were first utilized to generate networks with high accuracy (Zoph & Le (2016)), and they also built a strong basis for more recent works such as (Gao et al. (2019); Guo et al. (2018)). Evolution based approaches (Liang et al. (2018)) were then applied to obtain a possibly optimal solution in the large search space.
Both categories of approaches tend to incur a large computational burden because NAS is treated as a black-box optimization problem in a discrete domain, yielding a large number of architecture evaluations, and they thus run very slowly even on the most advanced GPU machines. To alleviate this problem, in 2018, the authors of (Liu et al. (2018)) proposed a differentiable approach that accelerates the search for a good neural network by a relaxation of the possible operations in the cell level structure. Wu et al. recently proposed a new approach to find mixed bits for different layers by applying a differentiable NAS (DNAS) method based on a super net model (Wu et al. (2018)), which is a kind of directed acyclic graph. They considered quantization as a problem of sampling on a stochastic super net, where a Gumbel softmax function is applied to make the sampling process differentiable.

We cast the mixed-bit quantization of a DNN as a cursor based adaptive architecture search problem, which is different from both traditional direct quantization works and learning based mixed-bit quantization approaches. Moreover, it is also distinctive from DARTs and DNAS in the methodology itself." }, { "heading": "3 CURSOR-BASED ADAPTIVE QUANTIZATION", "text": "" }, { "heading": "3.1 NEURAL ARCHITECTURE SEARCH", "text": "It is well known that designing the structure of a DNN model takes much time. Neural architecture search (NAS) recently emerged as a new methodology to overcome this challenge. It designs the optimal architecture of a neural network by considering all possible factors such as the number of layers, the width of each layer, the different operators in each layer and so on. Two key concepts are related to a NAS process, i.e., the search space and the search strategy. All the possible combinations of the major factors that determine a network structure constitute the search space. Generally speaking, the search space of a DNN is very large, leading to a huge computational task. As such, previous NAS works first devise normal and reduction cells (Pham et al. (2018)); such a motif is then repeated to build the final neural network. The other concept is the search strategy, that is, how to traverse such a large search space: with each searched network structure, its performance will be evaluated. A typical search method is random search; however, its computational efficiency is not ideal. Therefore, most recent works (Cai et al. (2018); Liu et al. (2018)) have been proposed along this broad direction to improve the search efficiency as much as possible." }, { "heading": "3.2 SEARCH SPACE FOR QUANTIZATION PROBLEM", "text": "Quantization has also been a very active research topic in the past few years. A rounding function, vector quantization or a stochastic function is typically applied to implement quantization, compacting the model size while maintaining equivalent performance or acceptable loss. Some other approaches use stochastic or probabilistic methods to quantize the neural network. Most previous methods simply apply one bit width to the whole network due to the simplicity of implementation. A few recent works have begun to utilize mixed-bit quantization schemes to further improve the compression ratio and prediction accuracy. If we consider the quantization choice as a part of the neural architecture, we can estimate its corresponding search space.
Let us take ResNet20 as an example: if we decide to quantize the neural network with the possible bit widths of 1, 2, 4, 8, 16, 32, then the number of possible quantization choices for ResNet20 will be $6^{20}$. In the context of NAS, this is a very large search space. Hence, evaluating so many designs one by one seems infeasible right now." }, { "heading": "3.3 DIFFERENTIABLE CURSOR SEARCH FOR ADAPTIVE QUANTIZATION", "text": "The discrete search space of the above quantization scheme is very large. If we instead consider the possible bit choice for each layer as a virtual continuous cursor in the range [1, 32], the cursors become significant parts of the architecture of a neural network model. Here, we define the cursor as a position that is related to the quantization choice of each layer; its value is a floating-point number within [1, 32]. If we assume a DNN has N layers, and each layer has its own cursor, denoted by $c_1, c_2, \ldots, c_N$, together with the weights $W_C$, our goal is to find a good combination of $c_1, c_2, \ldots, c_N$ that achieves better prediction accuracy and compression ratio. As such, for the whole neural network, this can be described as an optimization problem that minimizes the loss on the validation data after training, through the minimization of the loss on the training data, as follows (Liu et al. (2018)):

$$\min_{C}\ \mathbb{E}_{(x', y') \sim D_V}\!\left[\mathrm{Loss}_V(C, W_C^*)\right] \quad \text{s.t.} \quad W_C^* = \arg\min_{W_C} \mathbb{E}_{(x, y) \sim D_T}\!\left[\mathrm{Loss}_T(C, W_C)\right] \qquad (1)$$

where $C$ represents the cursor vector, $W_C^*$ is the weights corresponding to the optimal $C$, and $\mathrm{Loss}_T(C, W_C)$ and $\mathrm{Loss}_V(C, W_C)$ are the respective training and validation loss functions based on the cursors and the weights conditioned on $C$. $D_T$ and $D_V$ represent the training and validation datasets respectively, and $(x, y)$ and $(x', y')$ denote data from the training and validation datasets. It should be noted that using both training and validation data is a tradition for deriving the weight parameters and the architecture parameters in the field of NAS, which differs slightly from other problems in deep learning. For simplicity and efficiency, in this paper we let $\mathrm{Loss}_T(C, W_C)$ equal $\mathrm{Loss}_V(C, W_C)$ and assume they share the same form $\mathrm{Loss}(C, W_C)$. To consider both the prediction accuracy and the model size, we design the loss function as a combination of cross entropy and model size as follows:

$$\mathrm{Loss}(C, W_C) = \mathrm{CrossEntropy}(C, W_C) + \lambda \times \mathrm{Loss}_C \qquad (2)$$

where $\mathrm{CrossEntropy}(C, W_C)$ is the widely used cross entropy function, encoding the prediction accuracy of the model. We add this regularization term first because it determines the model's compromise between accuracy and quantization, which directly determines the model size; $\lambda$ is a regularization coefficient that adjusts the trade-off between accuracy and compression. In addition, the term may prevent overfitting to some extent. Concerning the loss term $\mathrm{Loss}_C$, we focus on the change of model size with quantization, so we design it in the form of Eq.(3), which will be introduced in detail in the next subsection.
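A minimal sketch of this combined objective (our illustration; `size_loss` is a runnable stand-in for $\mathrm{Loss}_C$, whose concrete form, Eq.(3)-(4), appears in the next subsection, and λ = 0.25 is the setting used later in the experiments):

import torch
import torch.nn.functional as F

def size_loss(cursors):
    # Placeholder for Loss_C; the concrete form of Eq.(3)-(4) is given
    # in the next subsection. A mean cursor value is used here only to
    # keep the sketch self-contained.
    return cursors.mean()

def loss(logits, targets, cursors, lam=0.25):
    # Eq.(2): cross entropy plus lambda * Loss_C, shared by the
    # training and validation phases.
    return F.cross_entropy(logits, targets) + lam * size_loss(cursors)

logits, targets = torch.randn(8, 10), torch.randint(0, 10, (8,))
cursors = torch.full((20,), 4.0, requires_grad=True)  # one cursor per layer
loss(logits, targets, cursors).backward()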
The above process is a bi-level optimization problem, which requires higher order derivatives and is hard to solve exactly. An approximate iterative solution can be applied instead, so we alternately optimize in the weight and cursor spaces, updating $W_C$ based on training losses from $D_T$ and renewing $C$ based on validation losses from $D_V$. By solving the bi-level optimization problem with such an alternative approximation approach, the cursors can be efficiently searched with a gradient based optimizer such as Adam. Our later experimental results also show that this alternative optimization method may yield a good solution with high compression ratio and accuracy. Compared to the original discrete search space, this search method is more efficient because of the design of the continuous cursor and the direct gradient based optimization. The whole differentiable cursor search for adaptive quantization, based on the alternative optimization of $W_C$ and $C$, is described in Algorithm 1. With the training and validation sets, the initialized cursor values and a pretrained model as inputs, our algorithm first quantizes the weights in each layer of the network using the two integers closest to the cursor and calculates the loss on the training data in the forward process, and then updates the original 32 bit weights by gradient descent. For the subsequent validation, it also first quantizes the network with the two integers closest to the cursor and uses them to obtain the validation error; the algorithm then utilizes this error to update the cursors with gradient descent. It should be emphasized that we keep the original 32 bit weights in the above training and validation steps and share them when implementing quantization with the cursor's two neighboring integers. The key step of the algorithm, quantization with the two integers nearest to the cursor, will be elaborated in the subsequent section. The outputs of the whole algorithm are the rounded cursor values for each layer and the corresponding quantized network model.

It should be noted that our proposed cursor based differentiable search differs from DARTs (Liu et al. (2018)) in the following three aspects. First, the DARTs method considers the possible operation in each layer as a mixture of primitive operations; here, we directly make use of a cursor to represent the quantization bit for each layer, and no similar mixture operation exists in the whole search algorithm. Second, in DARTs, each primitive operation is assigned a probability through a softmax function; the cursor based search is optimized directly, without probabilities. Third, the DARTs approach concentrates on the cell structure, but we apply the DAS directly to the whole network. Compared to DNAS (Wu et al. (2018)), our approach is also distinctive. In DNAS, the authors first build a stochastic super net to describe all the possible quantization choices; a sampling approach based on a Gumbel softmax function, which makes the discrete distribution continuous and differentiable, is then applied in each layer of the super net. Our cursor based differentiable search has no super net or sampling process in the pipeline. Hence, the subsequent solutions to the optimization problem are also completely different.
In short, the proposed method requires no relaxation, unlike both the DARTs and DNAS approaches.

input: the training set DT, the validation set DV, initialized cursors C, pretrained 32-bit weights W, and the batch size n
while not reaching the target number of epochs and not converged do
    Sample a batch from the training set DT;
    Quantize the network using the two integers closest to each cursor and calculate the loss L on the training batch with Eq.(2);
    Update the weights by gradient descent: W ← W − ∇W L(C, W);
    Sample a batch from the validation set DV;
    Quantize the network using the two integers closest to each cursor and calculate the loss L on the validation batch with Eq.(2);
    Update the cursors by gradient descent: C ← C − ∇C L(C, W);
end
output: rounded cursor values for each layer and the quantized network
Algorithm 1: Differentiable Cursor Search for Adaptive Quantization" }, { "heading": "3.4 TRAINING FOR NETWORK QUANTIZATION", "text": "Aiming at DNN quantization, we apply the cursor that represents the bit width to quantize the weight layers. Unfortunately, the cursor obtained during the search is a fractional number, which cannot be directly used for quantization. One choice is to round the cursor to some integer, but choosing a distant bit width may cause a rather large quantization error. On the other hand, if we directly round the cursor to its nearest integer, the variation of the cursor may not be represented efficiently. For example, if cursor1 and cursor2 at different epochs in the same layer are 2.6 and 2.8 respectively, they will both be rounded to the integer 3, yielding no change in the model size for this layer when implementing quantization. In addition, in the whole search process, such a single-integer choice may result in local convergence, because the iteration process of one-integer quantization may get stuck in a local minimum region of the cursor search. To alleviate these two problems, we propose instead to make use of the nearest lower and upper integer bounds at the same time in the search and training process. Compared to directly quantizing with the single nearest neighbor, the lower and upper integer bounds produce more variation in the loss function that describes the quantization effects, yielding effective gradient changes that update the cursors more efficiently. Subsequent experiments also demonstrate that this design obtains better quantization performance than simply applying a rounding function to the searched cursor. As such, the model size part of the loss function in Eq.(2) is designed as follows:

$$\mathrm{Loss}_C = \left(\mathrm{Cost}(C)\right)^{\gamma} \qquad (3)$$

where $\gamma$ is a positive coefficient and $\mathrm{Cost}(C)$ is a quantization related continuous cost function with the cursor $C$ as its variable. In this work, for convenience of implementation, we further design it as:

$$\mathrm{Loss}_C = \left(\frac{\sum_{i=1}^{N} S_i \times c_i}{\sum_{i=1}^{N} S_i}\right)^{\gamma} \qquad (4)$$

where $S_i$ is defined as the size of the $i$-th layer in bits when each of its parameters is represented by 1 bit. In fact, for a trained model, the size of a layer in bits, i.e., $S_i$, is a constant for layer $i$. Since $c_i$ is a continuous cursor, we may consider the above equation differentiable with respect to $c_i$.
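Putting Algorithm 1 and the loss of Eq.(2)-(4) together, the following toy-scale PyTorch sketch illustrates the alternating update structure. It is ours for illustration only: it omits the quantized forward pass (sketched after Eq.(9) below) and uses a single layer, so that the ratio in Eq.(4) reduces to the cursor value itself.

import torch
import torch.nn.functional as F

torch.manual_seed(0)
w = torch.randn(4, 10, requires_grad=True)           # "network weights" W
cursors = torch.full((1,), 4.0, requires_grad=True)  # one cursor, init 4 bits
w_opt = torch.optim.Adam([w], lr=1e-3)
c_opt = torch.optim.Adam([cursors], lr=1e-4)

def loss_fn(x, y, cursors):
    # Eq.(2) with lambda = 0.25 and gamma = 0.3; single layer, so the
    # Eq.(4) ratio equals the cursor value.
    return F.cross_entropy(x @ w, y) + 0.25 * cursors.mean() ** 0.3

def search_step(train_batch, val_batch):
    # One iteration of Algorithm 1: weight update on a training batch,
    # then cursor update on a validation batch (alternative optimization).
    x, y = train_batch
    w_opt.zero_grad(); loss_fn(x, y, cursors).backward(); w_opt.step()
    x, y = val_batch
    c_opt.zero_grad(); loss_fn(x, y, cursors).backward(); c_opt.step()
    with torch.no_grad():
        cursors.clamp_(1.0, 8.0)  # keep the cursor in the searchable range

for _ in range(3):
    search_step((torch.randn(16, 4), torch.randint(0, 10, (16,))),
                (torch.randn(16, 4), torch.randint(0, 10, (16,))))
print(cursors)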
When implementing the quantization for each layer, we utilize the DoReFa-Net (Zhou et al. (2016)) quantization:

$$w_k = 2\,Q_k\!\left(\frac{\tanh(w)}{2\max(|\tanh(w)|)} + 0.5\right) - 1 \qquad (5)$$

where $w$ represents the full precision weights of a model and $Q_k(\cdot)$ is the $k$-bit quantization function that transforms a continuous value $x \in [0, 1]$ to a $k$-bit output $y \in [0, 1]$ as below:

$$y = \frac{\mathrm{round}\!\left((2^k - 1) \times x\right)}{2^k - 1} \qquad (6)$$

where the round function is the typical rounding operation used in quantization. In other words, after searching for the possible quantization bit $c_i$ of each layer, its two nearest neighboring integers are applied to Eq.(6) and Eq.(5) to quantize that layer of the network.

In the forward process of the proposed quantization scheme, the output of the $i$-th layer, denoted $O_i$, can be described with the following equation:

$$O_i = \sum_{j=1}^{M} \mathrm{ReLu}\!\left(1 - |c_i - q_j|\right) \times O_j \quad \text{s.t.} \quad \sum_{j=1}^{M} \mathrm{ReLu}\!\left(1 - |c_i - q_j|\right) = 1 \qquad (7)$$

where $\mathrm{ReLu}(x) = \max(0, x)$ is the Rectified Linear Unit that zeros out negative values, $c_i$ is the cursor of the layer, $q_j$ denotes the $j$-th quantization bit width among $M$ total quantization choices, and $O_j$ is the corresponding output of the convolution layer when quantizing with $q_j$. Due to the introduction of $\mathrm{ReLu}(x)$ in the forward process, the whole loss function may not be differentiable at the point of zero. In such a corner case, fortunately, the gradient can still be estimated through the straight-through estimator (STE) (Guo (2018)), whose details are beyond the scope of this paper. In this work, we mainly consider $q_j \in \{1, 2, 3, 4, 5, 6, 7, 8\}$, $M = 8$ and $c_i \in [1, 8]$, because it has been observed that with more than 8 bits, the neural network's performance shows almost no degradation (Elthakeb et al. (2018)). We concentrate on this design in all our experiments. In fact, there is also a new trend of investigating quantization with bit widths that are not powers of 2 (Elthakeb et al. (2018); Park et al. (2018b;a); Wang et al. (2018)), and some hardware such as FPGAs is gradually gaining efficient support for quantization with such bit widths (Wang et al. (2018); Wei et al. (2019)).

Based on Eq.(7), it can be seen that only the two integers closest to the cursor take effect in each layer's output. As such, our proposed search algorithm greatly reduces the coupling of different quantization operations, leading to a more effective search for a good cursor. Denoting the cursor's lower and upper bound integers in the $i$-th layer by $a_{i1}$ and $a_{i2}$, we can define two coefficients $d_{i1}$ and $d_{i2}$ as below:

$$d_{i1} = 1 - |c_i - a_{i1}|; \qquad d_{i2} = 1 - |a_{i2} - c_i| \qquad (8)$$

where $c_i$ represents the cursor searched in the $i$-th layer, and $d_{i1}$ and $d_{i2}$ represent how close the respective bounds are to the cursor: the closer a bound is, the larger its coefficient, and a continuous move of the cursor adjusts the weighting continuously. Based on Eq.(7), given the input $X$ of one convolution layer, its output in the forward process of quantization (we only consider the quantization of convolution layers in this work) can be simplified to the following equation:

$$O_i = d_{i1} \times \mathrm{Conv}(X, W_{i1}) + d_{i2} \times \mathrm{Conv}(X, W_{i2}) \qquad (9)$$

where $W_{i1}$ and $W_{i2}$ are the temporary weights of the layer after quantization with $a_{i1}$ and $a_{i2}$, computed from its corresponding 32 bit weights, and $\mathrm{Conv}$ is the convolution operation. The 32 bit weights are updated as the whole algorithm iterates, while $W_{i1}$ and $W_{i2}$ are recalculated in the forward process based on the new $a_{i1}$ and $a_{i2}$. In other words, we activate the two bits closest to the cursor and sum the convolution results of these two quantization choices with coefficients based on the L1 distance. As such, the whole process is differentiable.
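A compact PyTorch sketch of Eqs.(5)-(9) follows (our illustration; the clamping convention at the boundary of [1, 8] and the exact STE placement are our assumptions about details not fully specified above):

import torch
import torch.nn.functional as F

def quantize_k(w, k):
    # Eqs.(5)-(6): DoReFa-style k-bit weight quantization, with the
    # rounding wrapped in a straight-through estimator (STE) so that
    # gradients flow to the shared full-precision weights.
    levels = 2 ** k - 1
    x = torch.tanh(w) / (2 * torch.tanh(w).abs().max()) + 0.5  # x in [0, 1]
    y = torch.round(levels * x) / levels                       # Eq.(6)
    y = x + (y - x).detach()   # STE: forward uses y, backward uses x
    return 2 * y - 1                                           # Eq.(5)

def cursor_conv2d(x, w, cursor, **conv_kwargs):
    # Eqs.(8)-(9): convolve with the two quantizations nearest to the
    # cursor, mixed with coefficients d1 + d2 = 1. `w` is the shared
    # full-precision weight; `cursor` is a scalar tensor in [1, 8].
    a1 = cursor.detach().floor().clamp(1, 7)  # lower neighbor (integer)
    a2 = a1 + 1                               # upper neighbor
    d1 = 1 - (cursor - a1)                    # = 1 - |c - a1|, carries grad
    d2 = 1 - (a2 - cursor)                    # = 1 - |a2 - c|
    o1 = F.conv2d(x, quantize_k(w, int(a1)), **conv_kwargs)
    o2 = F.conv2d(x, quantize_k(w, int(a2)), **conv_kwargs)
    return d1 * o1 + d2 * o2

# Quick shape/gradient check on random data.
x = torch.randn(2, 3, 8, 8)
w = torch.randn(4, 3, 3, 3, requires_grad=True)
c = torch.tensor(2.6, requires_grad=True)
cursor_conv2d(x, w, c, padding=1).sum().backward()
print(c.grad, w.grad.abs().sum())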
The design above can also be explained intuitively: the outcome of the desired quantization scheme for each layer in the forward process may be represented by a weighted sum of the two quantization schemes using the approximate closest bit choices. Hence, the proposed scheme may find the best quantization scheme for the whole network in the cursor searching process, based on the alternative optimization solution.

After the approximate alternative optimization approach converges or reaches the target epoch number, the final quantization bit in each layer is obtained by applying a rounding operation to each cursor for inference. The final quantized model may also need to be finetuned based on the obtained quantization bits." }, { "heading": "4 EXPERIMENTS", "text": "Currently, we only apply quantization to the weights and use full precision activations. In addition, we follow the tradition in the domain of DNN quantization of not quantizing the first and last layers of a model. In all the experiments, we take ResNet18, ResNet20, ResNet56 (He et al. (2015)) or MobileNetV2 (Sandler et al. (2018)) as the backbone models. Please note that these models should first be pretrained to obtain the floating point models. For the initialization of the cursors in each layer, all of them are set to 4 bits for the convenience of iteration. Once the cursors are obtained by our method, the model may be further fine-tuned to get its final accuracy, which is a practical tradition in the fields of NAS and quantization.

As for the parameter λ in Eq.(2) and γ in the model size loss of Eq.(3), a rather optimal setting of (0.25, 0.3) was chosen after trials. Based on our experiments, we also generalize a rule of thumb for λ: we control the balance of the two loss parts in Eq.(2) to ensure that the ratio of the first to the second part lies within [0.5, 2.0]. We also study the influence of λ in the experiments to show that in most cases, the cursor based adaptive quantization scheme is not sensitive to its change over the larger interval λ ≥ 0.1. Concerning the learning rate schedules of the weights and cursors, we apply a cosine annealing method to adjust them; the minimum learning rates are 0.001 and 0.0001 respectively." }, { "heading": "4.1 COMPARISON OF MODEL SIZE LOSS", "text": "To show the validity of the quantization approach using the two integer bounds nearest to the cursor, we implement the search process while comparing it to using only the single nearest integer of the cursor. We analyze their model size losses, i.e., Eq.(3), to show the clear distinction in the training process.

Here we apply ResNet20 on the CIFAR10 dataset to demonstrate the optimization process. For illustrative purposes, we only draw the change in the training and validation loss of the model size, i.e., the second term of Eq.(2), over the first 20 epochs. As shown in Figure 1, the red curve represents the training and validation loss of model size using the single nearest integer to implement quantization, while the blue curve denotes the training and validation loss of size obtained with the two neighboring integers nearest to the cursor searched by the proposed scheme. The major difference between these two tests lies in the quantization choices. In fact, we also tried other parameters and random initializations for the one-integer quantization scheme, and similar curves were found. Clearly, the blue curve looks smoother and more natural for a convergence process.
The red loss, in contrast, strongly suggests that the cursors get stuck in a local minimum region. From this figure, we can clearly see that the training and validation loss of the model size using only one-integer quantization becomes constant immediately in the first epoch, which is NOT desired for the cursor search because it causes no further change in the model size. In fact, the cursor values obtained by the one-neighbor scheme tend to be 1 bit for all layers. The reason the one-integer quantization scheme fails may be that, in most cases, the weights in one layer span a rather small range, so quantizing with the single lower integer may lead to the same quantization results on the weights throughout the training process. Such identical quantization results in turn yield almost no change in the backward gradient process, which is not beneficial for the optimal cursor search. The designed two-integer quantization process, on the other hand, can map the cursor to two different integer values, leading to effective changes in the model size even for weights in a rather small value range. Figure 1 also shows that Eq.(4) is continuous and differentiable." }, { "heading": "4.2 SEARCH PROCESS ANALYSIS", "text": "To get some insight into our adaptive cursor search algorithm, we investigate its iteration process in this subsection. For illustration only, we take MobileNetV2 on CIFAR10 as an example. Its search process is depicted in Figure 2, with the quantization bits omitted due to space limitations. Here the abscissa and the vertical coordinate respectively represent the compression ratio and the prediction accuracy. It should be noted that our proposed algorithm runs for only 10 epochs here, to clearly show the variation of performance. In addition, because of the cosine annealing scheduler, such an iteration process should also be representative. From Figure 2, we observe that our proposed adaptive cursor search scheme first begins in the lower left region (lower accuracy and compression) and then gradually assembles in the upper right region (higher accuracy and compression). Meanwhile, there are some small vibrations in the whole process: for example, from epoch 8 to epoch 9, there is some increase in accuracy as well as compression ratio, but from epoch 9 to epoch 10, there is a slight reduction in both measures. It can also be noticed that the search process is rather stable and gathers to the final upper right region with better accuracy and compression ratio. We observed a similar pattern for ResNet20 and ResNet56 on CIFAR10, but we omit their pictures because of space limitations. The reason the search process of our method can reach a region with high prediction accuracy and compression ratio may be the alternative optimization approach used to solve this bi-level problem with two goals. In addition, the regularization term may also play a positive role in this process." }, { "heading": "4.3 IMPACT OF REGULARIZATION COEFFICIENT", "text": "The regularization coefficient λ in Eq.(2) determines the balance between the model precision and size. In this part, we carry out some experiments to analyze its influence on the overall performance. We choose λ = 0.9, 0.7, 0.5, 0.25, 0.1, 0.05, 0.01 and test its effects on the quantized model. For the purpose of illustration, we test ResNet20 on CIFAR10. To directly show the effects of our cursor based differentiable search, we do NOT implement the finetune step for any of these results after finishing the cursor search.
The results for the quantized ResNet20 on CIFAR10 are demonstrated in Table 1, and all the results are obtained by implementing the search with 200 epochs.

From Table 1, we can observe that for λ ≥ 0.1, the overall performance of the proposed quantization method is rather steady, that is, the accuracy and compression ratio of the quantized model stay in a concentrated region with an accuracy of about 90% and a compression ratio of about 29.00. When λ < 0.1, the cursor based adaptive quantization approach may still yield good prediction performance but gradually loses its effect on model compression. This can be explained by the fact that as the regularization becomes gradually weaker, it does NOT exert its compression effects as well as when the coefficient is large. This further validates the effectiveness of the regularization function proposed in this paper." }, { "heading": "4.4 CIFAR10 RESULTS", "text": "We demonstrate our cursor based adaptive quantization algorithm on the CIFAR10 benchmark dataset with ResNet20, ResNet56 and MobileNetV2. For ResNet20, we compare the accuracy and compression ratio of the proposed approach to some related or similar works such as DNAS (Wu et al. (2018)), TTQ (Zhu et al. (2016)), PACT (Choi et al. (2018)) and LQE (Zhang et al. (2018)) on CIFAR-10, and the details of accuracy and compression ratio are shown in Table 2. It can be noticed that, compared to the other related works, our method achieves a much better compression ratio while achieving comparable or better classification accuracy on the CIFAR10 dataset. The reason the proposed approach outperforms quantization methods such as LQE, TTQ and PACT may be the adaptive cursor based search mechanism: by considering both model accuracy and compression ratio, the cursor based approach can effectively search a different quantization bit for each layer as a whole, leading to a better compression ratio with better accuracy. Compared to DNAS, the reason for our better performance in terms of CR is partially that the two closest integers' quantization scheme produces less quantization error in each layer. In addition, it may also be because of our multiple lower bits' design in the search process.

We also apply the proposed approach to ResNet56 and compare its performance with DNAS (Wu et al. (2018)); the results are recorded in Table 5. We further test the proposed approach on MobileNetV2, with the results shown in Table 6. Because of space limitations, we put the detailed tables and descriptions in the Appendix." }, { "heading": "4.5 CIFAR100 RESULTS", "text": "To further show the effectiveness of the proposed scheme, we test our method on the CIFAR100 dataset using ResNet20, ResNet56 and MobileNetV2. We illustrate the compressed ResNet20's performance compared to the original one on CIFAR100 in Table 3; it should be pointed out that here we also do not fine tune the original model, so its accuracy may not be the best one in the literature. For ResNet20, our approach achieves a good compression ratio of 11.6 while maintaining a comparable accuracy of 68.18%.

The performances of the quantized ResNet56 and MobileNetV2 networks on the CIFAR100 dataset are presented in Table 7 and Table 8 respectively, which are also placed in the Appendix due to limited space. We notice that both quantized models show slightly better accuracy, with impressive compression ratios of 17.2 and 12.9 for ResNet56 and MobileNetV2 respectively.
}, { "heading": "4.6 IMAGENET RESULTS", "text": "In this subsection, we apply ResNet18 and MobileNetV2 to ImageNet dataset, which is a much larger dataset compared to CIFAR10 and CIFAR100. Here, as in (Han et al. (2015); Wang et al. (2018)), we present two sets of our results, i.e., the most efficient result and most accurate one to compare more conveniently.\nWe record the performance of the proposed method with ResNet18 on ImageNet in Table 4 as well as some comparisons to LQE (Zhang et al. (2018)), TTQ (Zhu et al. (2016)), PACT (Choi et al. (2018)) methods. From Table 4, it can be noticed that, compared to the original 32 bit model, the most accurate result of our algorithm achieves a promising compression rate of 13.9 with a slight accuracy drop of 0.15%, and for the most efficient one, our algorithm achieves an accuracy of 68.80% and an impressive compression ratio of 18.1. The most accurate result of our algorithm shows much better accuracy over LQE, TTQ and PACT methods although the compression ratio is a little bit smaller. As for the most efficient one, both the accuracy and compression ratio are better than those of LQE, TTQ and PACT, validating the effectiveness of the proposed scheme. The results of MobileNetV2 on ImageNet are illustrated in Appendix, please refer to Table 9 for details." }, { "heading": "5 CONCLUSIONS", "text": "In this paper, we have proposed a novel cursor based DAS algorithm for obtaining the mixed precision DNN model. Different from most of the traditional approaches, which choose quantization configuration using heuristics or learning based rules, we adaptively choose the quantization bit for each layer in the DNN model from the perspective of NAS. A cursor based search algorithm with alternative manner is applied for efficient optimization. The nearest two neighbor integers to the cursor are used to implement the quantization in the training process to reduce the quantization noise and avoid local convergence. The result of our algorithm is the adaptive bit width choice for different layers as a whole. Extensive experiments with some typical models demonstrate that the proposed approach provides dramatic compression capability with accuracy on par with or better than the state-of-the-art of methods on benchmark datasets. In the near future, we will apply the proposed scheme to object detection tasks to further show its possible wider application. We may also utilize some hardware platforms to test some performance such as inference time and resource consumption." }, { "heading": "A APPENDIX", "text": "Due to space limitations, we put a number of experimental results here.\nA.1 RESNET56 ON CIFAR10\nWe apply the proposed approach to ResNet56 and compare its performance with DNAS (Wu et al. (2018)), and the results are recorded in Table 5. In this Table, we notice that ResNet56 compressed using our quantization approach shows a higher compression ratio of 20.6 compared to the most efficient DNAS’s result of 18.9, while its accuracy is almost the same. As for the most accurate one of DNAS, our quantized model is still comparable, while the compression ratio is much higher compared to 14.6. It should also be pointed out that due to different environments and other implementation differences in the whole process, the accuracy of our ResNet56 baseline is not as good as the one in DNAS. It should also be noted that DNAS takes cutout operation in its experiments, we also test such an operation to show the comparison. 
It may be noticed that our test with cutout achieves an accuracy of 94.48% with an impressive compression ratio of 23.3.\nA.2 MOBILENETV2 ON CIFAR10\nWe further apply the proposed approach to MobileNetV2, a typical deep learning model for mobile devices and embedded systems, with the results shown in Table 6. It can be noticed that our adaptive cursor-based quantization achieves a better classification accuracy of 93.28% as well as a promising compression ratio of 12.4.\nA.3 RESNET56 ON CIFAR100\nWe also test ResNet56 on CIFAR100 and present the corresponding results in Table 7. We can see that the proposed algorithm yields slightly better accuracy (by 1.05%) together with an impressive compression ratio of 17.2.\nA.4 MOBILENETV2 ON CIFAR100\nThe results for the quantized MobileNetV2 on CIFAR100 are shown in Table 8. For MobileNetV2, the proposed algorithm also yields slightly better accuracy together with an impressive compression ratio of 12.9.\nA.5 MOBILENETV2 ON IMAGENET\nThe performance of MobileNetV2 on ImageNet is shown in Table 9, together with comparisons to related works such as HAQ (Wang et al. (2018)) and deep compression (Han et al. (2015)). In Table 9, we notice that, for the most accurate result, the quantized MobileNetV2 model using our approach shows a slight accuracy loss (71.65% vs. 72.19% for the original 32-bit model) while achieving an encouraging compression ratio of 9.1. The accuracy of our most accurate result is also slightly higher than the corresponding most accurate results of HAQ and deep compression, together with a better compression ratio. For the most efficient result, our algorithm shows a compression ratio of 14.3, which is better than that of HAQ but smaller than that of deep compression. However, our approach demonstrates a dramatically better accuracy of 70.59%, compared to the corresponding 66.75% of HAQ and 58.07% of deep compression, which again validates the advantage of our algorithm." } ]
2019
null
SP:0b1459e58145faa54216d433648da41f64c39a23
[ "The paper proposes a novel architecture for spatially structured memory. The main idea is to incorporate inductive bias/invariance derived from projective geometry arguments. The experiments seem to clearly show that this new architecture improves previous approaches to tasks which require spatial reasoning and memory, and the ablations studies and visualizations provide useful insights into the workings of the agent. One thing I'm missing is an experiment showing that this inductive bias also doesn't degrade performance on tasks where spatial reasoning is not necessary (as compared to vanilla GRU/LSTM).", "This paper studies how to build semantic spatial maps for the purpose of navigation in 3D environments. The paper presents a differentiable policy network that pastes together semantic map predictions into a spatial map. Information is read out from this map using a global read operation (that looks at the entire map) and a self-attention read operation. This information is used to produce actions. The paper presents experimental results in 3D VizDoom scenarios and reports improvements over a vanilla LSTM, and another spatial memory based method (Neural Map)." ]
Tasks involving localization, memorization and planning in partially observable 3D environments are an ongoing challenge in Deep Reinforcement Learning. We present EgoMap, a spatially structured neural memory architecture. EgoMap augments a deep reinforcement learning agent's performance in 3D environments on challenging tasks with multi-step objectives. The EgoMap architecture incorporates several inductive biases, including a differentiable inverse projection of CNN feature vectors onto a top-down spatially structured map. The map is updated with ego-motion measurements through a differentiable affine transform. We show this architecture outperforms both standard recurrent agents and state-of-the-art agents with structured memory. We demonstrate that incorporating these inductive biases into an agent's architecture allows for stable training with reward alone, circumventing the expense of acquiring and labelling expert trajectories. A detailed ablation study demonstrates the impact of key aspects of the architecture, and through extensive qualitative analysis we show how the agent exploits its structured internal memory to achieve higher performance.
[]
[ { "authors": [ "Peter Anderson", "Qi Wu", "Damien Teney", "Jake Bruce", "Mark Johnson", "Niko Sünderhauf", "Ian Reid", "Stephen Gould", "Anton den Hengel" ], "title": "Vision-and-language navigation: Interpreting visuallygrounded navigation instructions in real environments", "venue": null, "year": 2018 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "arXiv preprint arXiv:1409.0473,", "year": 2014 }, { "authors": [ "A. Banino", "C. Barry", "B. Uria", "C. Blundell", "T. Lillicrap", "P. Mirowski", "A. Pritzel", "M.J. Chadwick", "T. Degris", "J. Modayil", "G. Wayne", "H. Soyer", "F. Viola", "B. Zhang", "R. Goroshin", "N. Rabinowitz", "R. Pascanu", "C. Beattie", "S. Petersen", "A. Sadik", "S. Gaffney", "H. King", "K. Kavukcuoglu", "D. Hassabis", "R. Hadsell", "D. Kumaran" ], "title": "Vector-based navigation using grid-like representations in artificial agents", "venue": null, "year": 2018 }, { "authors": [ "Edward Beeching", "Christian Wolf", "Jilles Dibangoye", "Olivier Simonin" ], "title": "Deep reinforcement learning on a budget: 3d control and reasoning without a supercomputer", "venue": "CoRR, abs/1904.01806,", "year": 2019 }, { "authors": [ "Shehroze Bhatti", "Alban Desmaison", "Ondrej Miksik", "Nantas Nardelli", "N. Siddharth", "Philip H.S. Torr" ], "title": "Playing doom with slam-augmented deep reinforcement learning", "venue": "arxiv", "year": 2016 }, { "authors": [ "Simon Brodeur", "Ethan Perez", "Ankesh Anand", "Florian Golemo", "Luca Celotti", "Florian Strub", "Jean Rouat", "Hugo Larochelle", "Aaron Courville" ], "title": "HoME: a Household Multimodal Environment", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Tao Chen", "Saurabh Gupta", "Abhinav Gupta" ], "title": "Learning exploration policies for navigation", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Junyoung Chung", "Caglar Gulcehre", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Gated Feedback Recurrent Neural Networks", "venue": "In ICML,", "year": 2015 }, { "authors": [ "J. Civera", "D. Galvez-Lopez", "L. Riazuelo", "J.D. Tardós", "J.M.M. Montiel" ], "title": "Towards semantic SLAM using a monocular camera", "venue": "In IROS,", "year": 2011 }, { "authors": [ "C.J. Cueva", "X.-X. 
Wei" ], "title": "Emergence of grid-like representations by training recurrent neural networks to perform spatial localization", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Kuan Fang", "Alexander Toshev", "Li Fei-Fei", "Silvio Savarese" ], "title": "Scene memory transformer for embodied agents in long-horizon tasks", "venue": "IEEE Conference on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "Yves Frégnac", "Alice René", "Jean Baptiste Durand", "Yves Trotter" ], "title": "Brain encoding and representation of 3d-space using different senses, in different species", "venue": "Journal of physiology, Paris,", "year": 2004 }, { "authors": [ "Daniel Gordon", "Aniruddha Kembhavi", "Mohammad Rastegari", "Joseph Redmon", "Dieter Fox", "Ali Farhadi" ], "title": "Iqa: Visual question answering in interactive environments", "venue": "In CVPR", "year": 2018 }, { "authors": [ "Alex Graves", "Greg Wayne", "Ivo Danihelka" ], "title": "Neural turing machines", "venue": "arXiv preprint arXiv:1410.5401,", "year": 2014 }, { "authors": [ "Alex Graves", "Greg Wayne", "Malcolm Reynolds", "Tim Harley", "Ivo Danihelka", "Agnieszka GrabskaBarwińska", "Sergio Gómez Colmenarejo", "Edward Grefenstette", "Tiago Ramalho", "John Agapiou" ], "title": "Hybrid computing using a neural network with dynamic external memory", "venue": null, "year": 2016 }, { "authors": [ "S. Gupta", "J. Davidson", "S. Levine", "R. Sukthankar", "J. Malik" ], "title": "Cognitive mapping and planning for visual navigation", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Torkel Hafting", "Marianne Fyhn", "Sturla Molden", "May-Britt Moser", "Edvard Moser" ], "title": "Microstructure of a spatial map in the entorhinal cortex", "venue": "Nature, 436:801–6,", "year": 2005 }, { "authors": [ "J. Henriques", "A. Vedaldi" ], "title": "Mapnet: An allocentric spatial memory for mapping environments", "venue": null, "year": 2018 }, { "authors": [ "Peter Henry", "Michael Krainin", "Evan Herbst", "Xiaofeng Ren", "Dieter Fox" ], "title": "Rgb-d mapping: Using depth cameras for dense 3d modeling of indoor environments", "venue": "In Experimental robotics,", "year": 2014 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long Short-Term Memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Shahram Izadi", "David Kim", "Otmar Hilliges", "David Molyneaux", "Richard Newcombe", "Pushmeet Kohli", "Jamie Shotton", "Steve Hodges", "Dustin Freeman", "Andrew Davison" ], "title": "Kinectfusion: real-time 3d reconstruction and interaction using a moving depth camera", "venue": "In Proceedings of the 24th annual ACM symposium on User interface software and technology,", "year": 2011 }, { "authors": [ "Max Jaderberg", "Karen Simonyan", "Andrew Zisserman", "koray kavukcuoglu" ], "title": "Spatial transformer networks", "venue": "In NIPS", "year": 2015 }, { "authors": [ "Max Jaderberg", "Volodymyr Mnih", "Wojciech Marian Czarnecki", "Tom Schaul", "Joel Z. 
Leibo", "David Silver", "Koray Kavukcuoglu" ], "title": "Reinforcement learning with unsupervised auxiliary tasks", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Leslie Pack Kaelbling", "Michael L Littman", "Anthony R Cassandra" ], "title": "Planning and acting in partially observable stochastic domains", "venue": "Artificial intelligence,", "year": 1998 }, { "authors": [ "Michal Kempka", "Marek Wydmuch", "Grzegorz Runc", "Jakub Toczek", "Wojciech Jaskowski" ], "title": "ViZDoom: A Doom-based AI research platform for visual reinforcement learning", "venue": "IEEE Conference on Computatonal Intelligence and Games, CIG,", "year": 2017 }, { "authors": [ "Ilya Kostrikov" ], "title": "Pytorch implementations of reinforcement learning algorithms", "venue": "https://github. com/ikostrikov/pytorch-a2c-ppo-acktr,", "year": 2018 }, { "authors": [ "Guillaume Lample", "Devendra Singh Chaplot" ], "title": "Playing FPS games with deep reinforcement learning", "venue": "In AAAI,", "year": 2017 }, { "authors": [ "Yann Lecun", "L Eon Bottou", "Yoshua Bengio", "Patrick Haaner" ], "title": "Gradient-Based Learning Applied to Document Recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "X. Liang", "Y. Wei", "X. Shen", "J. Yang", "L. Lin", "S. Yan" ], "title": "Proposal-Free Network for Instance-Level Object Segmentation", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2018 }, { "authors": [ "Manolis Savva", "Abhishek Kadian", "Oleksandr Maksymets", "Dhruv Batra" ], "title": "Habitat: A platform for embodied ai research", "venue": null, "year": 2019 }, { "authors": [ "Piotr Mirowski", "Razvan Pascanu", "Fabio Viola", "Hubert Soyer", "Andrew J. Ballard", "Andrea Banino", "Misha Denil", "Ross Goroshin", "Laurent Sifre", "Koray Kavukcuoglu", "Dharshan Kumaran", "Raia Hadsell" ], "title": "Learning to Navigate in Complex Environments", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Piotr Mirowski", "Matthew Koichi Grimes", "Mateusz Malinowski", "Karl Moritz Hermann", "Keith Anderson", "Denis Teplyashin", "Karen Simonyan", "Koray Kavukcuoglu", "Andrew Zisserman", "Raia Hadsell" ], "title": "Learning to Navigate in Cities Without a Map", "venue": null, "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A. Rusu", "Joel Veness", "Marc G. Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K. Fidjeland", "Georg Ostrovski", "Stig Petersen", "Charles Beattie", "Amir Sadik", "Ioannis Antonoglou", "Helen King", "Dharshan Kumaran", "Daan Wierstra", "Shane Legg", "Demis Hassabis" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": null, "year": 2016 }, { "authors": [ "H. Moravec" ], "title": "Sensor fusion in certainty grids for mobile robots", "venue": "AI magazine,", "year": 1988 }, { "authors": [ "J. O’Keefe", "J. Dostrovsky" ], "title": "The hippocampus as a spatial map. 
preliminary evidence from unit activity in the freely-moving rat", "venue": "Brain Research,", "year": 1971 }, { "authors": [ "Emilio Parisotto", "Ruslan Salakhutdinov" ], "title": "Neural map: Structured memory for deep reinforcement learning", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": "NIPS-W,", "year": 2017 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "venue": null, "year": 2015 }, { "authors": [ "L. Rummelhard", "A. Nègre", "C. Laugier" ], "title": "Conditional Monte Carlo Dense Occupancy Tracker", "venue": "In International Conference on Intelligent Transportation Systems,", "year": 2015 }, { "authors": [ "David Silver", "Thomas Hubert", "Julian Schrittwieser", "Ioannis Antonoglou", "Matthew Lai", "Arthur Guez", "Marc Lanctot", "Laurent Sifre", "Dharshan Kumaran", "Thore Graepel", "Timothy Lillicrap", "Karen Simonyan", "Demis Hassabis" ], "title": "A general reinforcement learning algorithm that masters chess, shogi, and go through self-play", "venue": null, "year": 2018 }, { "authors": [ "Keisuke Tateno", "Federico Tombari", "Iro Laina", "Nassir Navab" ], "title": "Cnn-slam: Real-time dense monocular slam with learned depth prediction", "venue": null, "year": 2017 }, { "authors": [ "T. Tieleman", "G. Hinton" ], "title": "Lecture 6.5—RmsProp: Divide the gradient by a running average of its recent magnitude", "venue": "COURSERA: Neural Networks for Machine Learning,", "year": 2012 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Ł ukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Jane X. Wang", "Zeb Kurth-Nelson", "Dhruva Tirumala", "Hubert Soyer", "Joel Z. Leibo", "Rémi Munos", "Charles Blundell", "Dharshan Kumaran", "Matthew Botvinick" ], "title": "Learning to reinforcement learn", "venue": "arxiv pre-print", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "A critical part of intelligence is navigation, memory and planning. An animal that is able to store and recall pertinent information about their environment is likely to exceed the performance of an animal whose behavior is purely reactive. Many control problems in partially observed 3D environments involve long term dependencies and planning. Solving these problems requires agents to learn several key capacities: spatial reasoning — to explore the environment in an efficient manner and to learn spatio-temporal regularities and affordances. The agent needs to autonomously navigate, discover relevant objects, store their positions for later use, their possible interactions and the eventual relationships between the objects and the task at hand. Semantic mapping is a key feature in these tasks. A second feature is discovering semantics from interactions — while solutions exist for semantic mapping and semantic SLAM Civera et al. (2011); Tateno et al. (2017), a more interesting problem arises when the semantics of objects and their affordances are not supervised, but defined through the task and thus learned from reward.\nA typical approach for these types of problems are agents based on deep neural networks including recurrent hidden states, which encode the relevant information of the history of observations Mirowski et al. (2017); Jaderberg et al. (2017). If the task requires navigation, the hidden state will naturally be required to store spatially structured information. It has been recently reported that spatial structure as inductive bias can improve the performance on these tasks. In Parisotto & Salakhutdinov (2018), for instance, different cells in a neural map correspond to different positions of the agent.\nIn our work, we go beyond structuring the agent’s memory with respect to the agent’s position. We use projective geometry as an inductive bias to neural networks, allowing the agent to structure its memory with respect to the locations of objects it perceives, as illustrated in Figure 1b. The model performs an inverse projection of CNN feature vectors in order to map and store observations in an egocentric (bird’s eye view) spatially structured memory. The EgoMap is complementary to the hidden state vector of the agent and is read with a combination of a global convolutional read operation and an attention model allowing the agent to query the presence of specific content. We show that incorporating projective spatial memory enables the agent to learn policies that exceed the performance of a standard recurrent agent. Two different objects visible in the same input image\ncould be at very different places in the environment. In contrast to Parisotto & Salakhutdinov (2018), our model will map these observations to their respective locations, and not to cells corresponding to the agent’s position, as shown in Figure 1a.\nThe model bears a certain structural resemblance with Bayesian occupancy grids (BOG), which have been used in mobile robotics for many years Moravec (1988); Rummelhard et al. (2015). As in BOGs, we perform inverse projections of observations and dynamically resample the map to take into account ego-motion. However, in contrast to BOGs, our model does not require a handcrafted observation model and it learns semantics directly from interactions with the environment through reward. It is fully differentiable and trained end-to-end with backpropagation of derivatives calculated with policy gradient methods. 
Our contributions are as follows:\n• To our knowledge, we present the first method using a differentiable SLAM-like mapping of visual features into a top-down egocentric feature map using projective geometry while at the same time training this representation using RL from reward. • Our spatial map can be translated and rotated through a differentiable affine transform and read globally and through self-attention. • We show that the mapping, spatial memory and self-attention can be learned end-to-end with RL, avoiding the cost of labelling trajectories from domain experts, auxiliary supervision or pre-training specific parts of the architecture. • We demonstrate the improvement in performance over recurrent and spatially structured baselines without projective geometry. • We illustrate the reasoning abilities of the agent by visualizing the content of the spatial memory and the self-attention process, tying it to the different objects and affordances related to the task. • Experiments with noisy actions demonstrate the agent is robust to action tolerances of up to 10%. The code will be made publicly available on acceptance." }, { "heading": "2 RELATED WORK", "text": "Reinforcement learning — In recent years the field of Deep Reinforcement Learning (RL) has gained attention with successes on board games Silver et al. (2018) and Atari games Mnih et al. (2015). One key component was the application of deep neural networks Lecun et al. (1998) to frames from the environment or game board states. Recent works that have applied Deep RL for the control of an agent in 3D environments such as maze navigation are Mirowski et al. (2017) and Jaderberg et al. (2017), which explored the use of auxiliary tasks such as depth prediction, loop detection and reward prediction to accelerate learning. Meta-RL approaches for 3D navigation have been applied by Wang et al. (2016), and Lample & Chaplot (2017) also accelerated the learning process in 3D environments by predicting tailored game features. There has also been recent work in the use of street-view scenes to train an agent to navigate in city environments Mirowski et al. (2018). In order to infer long-term dependencies and store pertinent information about the partially observable environment, network architectures typically incorporate recurrent memory such as Gated Recurrent Units Chung et al. (2015) or Long Short-Term Memory Hochreiter & Schmidhuber (1997).\nDifferentiable memory — Differentiable memory such as Neural Turing Machines Graves et al. (2014) and Differentiable Neural Computers Graves et al. (2016) has shown promise where long-term dependencies and storage are required. Neural networks augmented with these memory structures have been shown to learn tasks such as copying, repeating and sorting. Some recent works for control in 2D and 3D environments have included structured memory-based architectures and mapping of observations. Neural SLAM Zhang et al. (2017) aims to incorporate a SLAM-like mapping module as part of the network architecture, but uses simulated sensor data rather than RGB observations from the environment, so the agent is unable to extract semantic meaning from its observations; the experimental results focus on 2D environments and the 3D results are limited. Playing Doom with SLAM-augmented memory Bhatti et al. (2016) implements a non-differentiable inverse projective mapping with a fixed feature extractor based on Faster-RCNN Ren et al. (2015), pre-trained in a supervised manner. 
A downside of this approach is that the network does not learn to extract features pertinent to the task at hand, as it is not trained end-to-end with RL. Fang et al. (2019) replace recurrent memory with a transformer (Vaswani et al. (2017)) attention distribution over previous observation embeddings, highlighting that recurrent architectures can struggle to capture long-term dependencies. The downside is that the storage of previous observations grows linearly with each step in the environment, and the agent cannot choose to discard redundant information.\nGrid cells — There is evidence that biological agents learn to encode spatial structure. Rats develop grid cells/neurons, which fire at different locations with different frequencies and phases, a discovery that led to the 2014 Nobel prize in medicine O'Keefe & Dostrovsky (1971); Hafting et al. (2005). A similar structure seems to emerge in artificial neural networks trained to localize themselves in a maze, discovered independently in 2018 by two different research groups Cueva & Wei (2018); Banino et al. (2018).\nProjective geometry and spatial memory — Our work encodes spatial structure directly into the agent as additional inductive bias. We argue that projective geometry is a strong law imposed on any vision system working from egocentric observations, justifying a fully differentiable model of perception. To our knowledge, we present the first method which uses projective geometry as inductive bias while at the same time learning spatial semantic features with RL from reward.\nThe past decade has seen an influx of affordable depth sensors. This has led to many works on the reconstruction of 3D environments, which can be incorporated into robotic systems. Seminal works in this field include Izadi et al. (2011), who performed 3D reconstruction of scenes using a moving Kinect sensor, and Henry et al. (2014), who created dense 3D maps using RGB-D cameras.\nNeural Map Parisotto & Salakhutdinov (2018) implements a structured 2D differentiable memory which was tested in both egocentric and world reference frames, but does not map observations in a SLAM-like manner and instead stores a single feature vector at the agent's current location. The agent's position is also discretized to fixed cells and its orientation quantized to four angles (North, South, East, West). A further downside is that the movement of the memory is fixed to discrete translations and the map is not rotated to the agent's current viewpoint.\nMapNet Henriques & Vedaldi (2018) includes an inverse mapping of CNN features and is trained in a supervised manner to predict x,y position and rotation from human trajectories, but does not use the map for control in an environment. Visual Question Answering in Interactive Environments Gordon et al. (2018) creates semantic maps from 3D observations for planning and question answering and is applied in a discrete state space.\nUnsupervised Predictive Memory in a Goal-Directed Agent Wayne et al. (2018) incorporates a Differentiable Neural Computer in an RL agent's architecture and was applied to simulated memory-based tasks. The architecture achieves improved performance over a typical LSTM Hochreiter & Schmidhuber (1997) based RL agent, but does not include spatial structure or projective mapping. In addition, visual features and neural memory are learned through the reconstruction of observations and actions, rather than for a specific task.\nCognitive Mapping and Planning for Visual Navigation Gupta et al. 
(2017) applies a differentiable mapping process on 3D viewpoints in a discrete grid-world, trained with imitation learning, which provides supervision on expert trajectories. The downside of discretization is that affine sampling is trivial only for rotations in 90-degree increments, and this motion is not representative of the real world. Their tasks are simple point-goal problems of up to 32 time-steps, whereas our work focuses on complex multi-step objectives in a continuous state space. Their reliance on imitation learning highlights the challenge of training complex neural architectures with reward alone, particularly on tasks with sparse rewards such as the ones presented in this paper.\nLearning Exploration Policies for Navigation Chen et al. (2019) do not learn a perception module but instead map the depth buffer to a 2D map to provide a map-based exploration reward. Our work learns the features that can be mapped so the agent can query not only occupancy, but task-related semantic content.\nOur work greatly exceeds the performance of Neural Map Parisotto & Salakhutdinov (2018) by embedding a differentiable inverse projective transform and a continuous egocentric map into the agent's network architecture. The mapping of the environment is in the agent's reference frame, including translation and rotation with a differentiable affine transform. We demonstrate stable training with reinforcement learning alone, over several challenging tasks and random initializations, and do not require the expense of acquiring expert trajectories. We detail the key similarities and differences with related work in Table 1." }, { "heading": "3 EGOMAP", "text": "We consider partially observable Markov decision processes (POMDPs) Kaelbling et al. (1998) in 3D environments and extend recent Deep-RL models, which include a recurrent hidden layer to store pertinent long-term information Mirowski et al. (2017); Jaderberg et al. (2017). In particular, RGBD observations $I_t$ at time step $t$ are passed through a perception module extracting features $s_t$, which are used to update the recurrent state:\n$$s_t = f_p(I_t; \theta_p) \qquad h_t = f_r(h_{t-1}, s_t; \theta_r) \qquad (1)$$\nwhere $f_p$ is a convolutional neural network and $f_r$ is a recurrent neural network in the Gated Recurrent Unit variant Chung et al. (2015). Gates and their equations have been omitted for simplicity. Above and in the rest of this paper, $\theta_*$ are trainable parameters; exact architectures are provided in the appendix. The controller outputs an estimate of the policy (the action distribution) and the value function given its hidden state:\n$$\pi_t = f_\pi(h_t; \theta_\pi) \qquad v_t = f_v(h_t; \theta_v) \qquad (2)$$\nThe proposed model is motivated by the regularities which govern 3D physical environments. When an agent perceives an observation of the 3D world, it observes a 2D planar perspective projection of the world based on its current viewpoint. This projection is a well-understood physical process; we aim to imbue the agent's architecture with an inductive bias based on inverting the 3D-to-2D planar projective process. This inverse mapping operation appears to be second nature to many organisms, with the initial step of depth estimation being well studied in the field of physiology Frégnac et al. (2004). We believe that providing this mechanism implicitly in the agent's architecture will improve its reasoning capabilities in new environments and bypass a large part of the learning process.
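A minimal sketch of one step of this recurrent baseline, written around Eqs. (1)-(2); here `f_p` is assumed to already project its features to the GRU input size, and `f_r` is assumed to be a `torch.nn.GRUCell`:

```python
def agent_step(obs, h_prev, f_p, f_r, f_pi, f_v):
    """One agent step following Eqs. (1)-(2)."""
    s = f_p(obs)               # s_t = f_p(I_t; theta_p), CNN features
    h = f_r(s, h_prev)         # h_t = f_r(h_{t-1}, s_t; theta_r), a GRUCell
    return f_pi(h), f_v(h), h  # policy logits pi_t, value v_t, hidden state
```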
The overall concept is that as the agent explores the environment, the perception module fp produces a 2D feature map st, in which each feature vector represents a learned semantic representation of a small receptive field from the agent's egocentric observation. While these features are integrated into the flat (not spatially structured) recurrent hidden state ht through the function fr (Equation 1), we propose their additional integration into a second tensor Mt, a top-down egocentric memory, which we call EgoMap. The feature vectors are mapped to their egocentric positions using the inverse projection matrix and depth estimates. This requires an agent with a calibrated camera (known intrinsic parameters), which is a soft constraint that is easily satisfied. The map can then be read by the agent in two ways: a global convolutional read operation and a self-attention operation.\nFormally, let the agent's position and angle at time t be (xt, yt) and φt respectively; Mt is the current EgoMap, st are the feature vectors extracted by the perception module, and Dt are the depth buffer values. The changes in agent position and orientation in the agent's frame of reference between time-steps t−1 and t are (dxt, dyt, dφt). There are three key steps to the operation of the EgoMap:\n1. Transform the map to the agent's egocentric frame of reference:\n$$\hat{M}_t = \mathrm{Affine}(M_{t-1}, dx_t, dy_t, d\phi_t) \qquad (3)$$\n2. Update the map to include new observations:\n$$\tilde{M}_t = \mathrm{InverseProject}(s_t, D_t), \qquad M'_t = \mathrm{Combine}(\hat{M}_t, \tilde{M}_t) \qquad (4)$$\n3. Perform a global read and an attention-based read, the outputs of which are fed into the policy and value heads:\n$$r_t = \mathrm{Read}(M'_t), \qquad c_t = \mathrm{Context}(M'_t, s_t, r_t) \qquad (5)$$\nThese three operations will be further detailed below in individual subsections. Projective mapping and spatially structured memory should augment the agent's performance where spatial reasoning and long-term recollection are required. On simpler tasks the network can still perform as well as the baseline, assuming the extra parameters do not cause further instability in the RL training process.\nAffine transform — At each time-step we wish to translate and rotate the map into the agent's frame of reference; this is achieved with a differentiable affine transform, popularized by the well-known Spatial Transformer Networks Jaderberg et al. (2015). Relying on the simulator to act as an oracle and provide the change in position (dx, dy) and orientation dφ, we convert the deltas to the agent's egocentric frame of reference and transform the map with a differentiable affine transform. The effect of noise in these ego-motion measurements on the agent's performance is analysed in the experimental section.\nInverse projective mapping — We take the agent's current observation, extract relevant semantic embeddings and map them from a 2D planar projection to their 3D positions in an egocentric frame of reference. At each time-step, the agent's egocentric observation is encoded by the perception module (a convolutional neural network) to produce feature vectors; this step is a mapping from $\mathbb{R}^{4\times 64\times 112} \rightarrow \mathbb{R}^{16\times 4\times 10}$. Given the inverse camera projection matrix and the depth buffer provided by the simulator, we can compute the approximate location of the features in the agent's egocentric frame of reference. As the mapping is a many-to-one operation, several features can be mapped to the same location; features that share the same spatial location are averaged element-wise.
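The two geometric operations above can be sketched in PyTorch (the framework used in this work). This is a minimal illustration rather than the authors' implementation: the normalized-coordinate convention for (dx, dy), the sign conventions of the warp, the grid resolution and the intrinsics handling are our assumptions.

```python
import math
import torch
import torch.nn.functional as F

def egocentric_affine(m_prev, dx, dy, dphi):
    """Eq. (3): rotate/translate the (1, C, H, W) map into the agent's new
    frame with a differentiable warp (Spatial Transformer style). dx, dy
    are floats in normalized map coordinates [-1, 1]; theta encodes the
    sampling grid used by grid_sample, so signs depend on conventions."""
    c, s = math.cos(dphi), math.sin(dphi)
    theta = torch.tensor([[[c, -s, dx], [s, c, dy]]], dtype=m_prev.dtype)
    grid = F.affine_grid(theta, list(m_prev.shape), align_corners=False)
    return F.grid_sample(m_prev, grid, align_corners=False)

def inverse_project(feat, depth, inv_K, grid_hw, cell_m):
    """Eq. (4), first half: scatter (C, h, w) feature vectors onto a
    top-down (C, grid_hw, grid_hw) egocentric grid using per-feature depth
    and inverse intrinsics; co-located features are averaged."""
    C, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    pix = torch.stack([xs.float(), ys.float(), torch.ones(h, w)])   # (3, h, w)
    pts = (inv_K @ pix.reshape(3, -1)) * depth.reshape(1, -1)       # 3D points
    gx = (pts[0] / cell_m + grid_hw // 2).long().clamp(0, grid_hw - 1)
    gz = (pts[2] / cell_m).long().clamp(0, grid_hw - 1)             # forward z
    idx = gz * grid_hw + gx
    m_new = torch.zeros(C, grid_hw * grid_hw)
    cnt = torch.zeros(grid_hw * grid_hw)
    m_new.index_add_(1, idx, feat.reshape(C, -1))
    cnt.index_add_(0, idx, torch.ones(idx.numel()))
    return (m_new / cnt.clamp(min=1.0)).reshape(C, grid_hw, grid_hw)
```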
The newly mapped features must then be combined with the translated map from the previous time-step. We found that a momentum hyper-parameter, α, enabled a smooth blending of new and previously observed features; we use an α value of 0.9 for the tests presented in the paper. We ensured that blending only occurs where the newly projected features and the map from the previous time-step are co-located; this criterion is detailed in Equation 6:\n$$M'^{(x,y)}_t = \eta\,\hat{M}^{(x,y)}_t + (1-\eta)\,\tilde{M}^{(x,y)}_t, \qquad \eta = \begin{cases} 1.0, & \text{if } \tilde{M}^{(x,y)}_t = 0 \text{ and } \hat{M}^{(x,y)}_t \neq 0 \\ 0.0, & \text{if } \tilde{M}^{(x,y)}_t \neq 0 \text{ and } \hat{M}^{(x,y)}_t = 0 \\ \alpha, & \text{otherwise} \end{cases} \qquad (6)$$\nSampling from a global map — A naive approach to the storage and transformation of the egocentric feature map would be to apply an affine transformation to the map at each time-step. A fundamental downside of repeated affine transforms is that a bilinear interpolation is applied at each step, which causes smearing and degradation of the features in the map. We mitigated this issue by storing the map in a global reference frame and mapping the agent's observations to that global frame; for the read operation an offline affine transform is applied. For further details see Appendix B.\nRead operations — We wanted the agent to be able to summarize the whole spatial map and also to selectively query it for pertinent information. This was achieved by incorporating two types of read operation into the agent's architecture: a Global Read and a Self-attention Read.\nThe global read operation is a CNN that takes as input the egocentric map and outputs a 32-dimensional feature vector that summarizes the map's contents. The output of the global read is concatenated with the visual CNN output.\nTo query for relevant features in the map, the agent's controller network can output a query vector qt; the network then compares this vector to each location in the map with a cosine similarity function in order to produce scores, which have the same width and height as the map. The scores are normalized with a softmax operation to produce a soft attention along the lines of Bahdanau et al. (2014) and used to compute a weighted average of the map, allowing the agent to selectively query and focus on parts of the map. This querying mechanism was used in both the Neural Map Parisotto & Salakhutdinov (2018) and MERLIN Wayne et al. (2018) RL agents. We made the following improvements: Attention Temperature and Query Position.\n$$\sigma(x)_i = \frac{e^{\beta x_i}}{\sum_j e^{\beta x_j}} \qquad (7)$$\nQuery Position: A limitation of self-attention is that the agent can query what it has observed but not where it observed it. To improve the spatial reasoning performance of the agent, we augmented the neural memory with two fixed additional coordinate planes representing the x,y egocentric coordinate system normalized to (−1.0, 1.0), as introduced for segmentation in Liang et al. (2018). The agent still queries based on the features in the map, but the returned context vector includes two extra scalar quantities which are the weighted averages of the x,y planes. The impact of these additions is discussed and quantified in the ablation study in Section 4.\nAttention Temperature: To provide the agent with the ability to learn to modify the attention distribution, the query includes an additional learnable temperature parameter, β, which can adjust the softmax distribution detailed in Equation 7. This parameter can vary from query to query and is constrained to be one or greater by a Oneplus function. The use of temperature in neural memory architectures was first introduced in Neural Turing Machines Graves et al. (2014)."
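Continuing the sketch, the blending rule of Eq. (6) and the tempered attention read of Eq. (7) might look as follows. The per-cell emptiness test (channel-wise sum of absolute values) is our proxy for the paper's per-location occupancy check, and the query shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def combine_maps(m_hat, m_new, alpha=0.9):
    """Eq. (6): keep old features where nothing new was observed, take new
    features on previously empty cells, and blend with momentum alpha
    where both are present."""
    old = m_hat.abs().sum(0, keepdim=True) != 0    # occupied in old map
    new = m_new.abs().sum(0, keepdim=True) != 0    # occupied by new features
    eta = torch.full_like(m_hat[:1], alpha)
    eta = torch.where(old & ~new, torch.ones_like(eta), eta)
    eta = torch.where(new & ~old, torch.zeros_like(eta), eta)
    return eta * m_hat + (1.0 - eta) * m_new

def attention_read(ego_map, query, raw_beta):
    """Eq. (7): cosine-similarity scores over all map cells, sharpened by a
    learnable temperature beta = oneplus(raw_beta) >= 1, then a weighted
    average of the map. If the map carries the two fixed x/y coordinate
    planes, the returned context vector also encodes *where*."""
    C, H, W = ego_map.shape
    flat = ego_map.reshape(C, -1)                            # (C, H*W)
    scores = F.cosine_similarity(flat, query.reshape(C, 1), dim=0)
    beta = 1.0 + F.softplus(raw_beta)                        # oneplus
    attn = torch.softmax(beta * scores, dim=0)               # (H*W,)
    return flat @ attn                                       # (C,) context
```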
}, { "heading": "4 EXPERIMENTS", "text": "The EgoMap and baseline architectures were evaluated on four challenging 3D scenarios, which require navigation and different levels of spatial memory. The scenarios are taken from Beeching et al. (2019), who extended the 3D ViZDoom environment Kempka et al. (2017) with various scenarios that are partially observable, require memory and spatial understanding, and have long horizons and sparse rewards. Whilst more visually realistic simulators are available, such as Gibson Xia et al. (2018), Matterport Anderson et al. (2018), Home Brodeur et al. (2018) and Habitat Manolis Savva* et al. (2019), the tasks available there are simple point-goal tasks which do not require long-term memory and recollection. We target the following three tasks:\nLabyrinth: The agent must find the exit in the fastest time possible; the reward is a sparse positive reward for finding the exit. This tests an agent's ability to explore in an efficient manner.\nOrdered k-item: An agent must find k objects in a fixed order. It tests three aspects of an agent: its ability to explore the environment efficiently, its ability to learn to collect items in a predefined order, and its ability to store as part of its hidden state where items were located so they can be retrieved in the correct order. We tested two versions of this scenario, with 4 items or 6 items.\nFind and return: The agent starts next to a green totem, must explore the environment to find a red totem and then return to the starting point. This is our implementation of the \"Minotaur\" scenario from Parisotto & Salakhutdinov (2018). The scenario tests an agent's ability to navigate and retain information over long time periods.\nAll the tasks require different levels of spatial memory and reasoning. For example, if an agent observes an item out of order, it can store the item's location in its spatial memory and navigate back to it later. We observe that scenarios requiring more spatial reasoning, long-term planning and recollection are where the agent achieves the greatest improvement in performance. In all scenarios there is a small negative reward at each time-step to encourage the agent to complete the task quickly.\nExperimental strategy and generalization to unseen environments — Many configurations of each scenario were created through procedural generation and partitioned into separate training and testing sets of size 256 and 64, respectively, for each scenario type. Although the task in a scenario is fixed, we vary the locations of the walls, the item locations, and the start and end points; thus we ensure a diverse range of possible scenario configurations. A limited hyper-parameter sweep was undertaken with the baseline architecture to select the hyper-parameters, which were then fixed for the baseline, Neural Map and EgoMap agents. Three independent experiments were conducted per task to evaluate the algorithmic stability of the training process. To avoid information asymmetry, we provide the baseline agent with dx, dy, sin(dθ), cos(dθ) concatenated with its visual features.\nTraining Details — The model parameters were optimized with an on-policy policy gradient algorithm, batched Advantage Actor Critic (A2C) Mnih et al. (2016); we used the popular PyTorch Paszke et al. 
(2017) implementation of A2C Kostrikov (2018). We sampled trajectories from 16 parallel agents and updated every 128 environment steps, with discounted returns bootstrapped from value estimates for non-terminal states. The gamma factor was 0.99, the entropy weight was 0.001, and the RMSProp Tieleman & Hinton (2012) optimizer was used with a learning rate of 7e-4. The EgoMap agent's map size was 16×24×24, with a grid sampling chosen to cover the environment size with 20% padding. The agent's policy was updated over 1.2B environment steps, with a frame skip of 4. Training took 36 hours for the baseline and 8 days for the EgoMap, on 4 Xeon E5-2640v3 CPUs, with 32GB of memory and one NVIDIA GK210 GPU.\nResults — Results from the baseline and EgoMap policies evaluated on the 4 scenarios are shown in Table 2. All tasks benefit from the inclusion of inverse projective mapping and spatial memory in the agent's network architecture, with the largest improvement on the Find and Return scenario. We postulate that the greater improvement in performance is due to two factors: first, this scenario always requires spatial memory, as the agent must return to its starting point; second, the objects in this scenario are larger and occupy more space in the map. We also compared to the state of the art in spatially structured neural memory, Neural Map Parisotto & Salakhutdinov (2018). Figure 2 shows agent training curves for the recurrent baseline, Neural Map and EgoMap on the Find and Return test set configurations.\nAblation study — An ablation study was carried out on the improvements made by the EgoMap architecture. We were interested in the influence of key options such as the global and attention-based reads, the similarity function used when querying the map, the learnable temperature parameter, and the incorporation of location-based querying. The Cartesian product of these options is large and it was not feasible to test them all; we therefore selectively switched off key options to understand which aspects contribute to the improvement in performance. The results of the ablation study are shown in Table 2. Both the global and self-attention reads provide large improvements in performance over the baseline recurrent agent. The position-based query provides a small improvement. A comparison of the similarity metrics of the attention mechanism shows that the L1 similarity achieved higher performance than the cosine similarity. A qualitative analysis of the self-attention mechanism is shown in the next section." }, { "heading": "5 ANALYSIS", "text": "Noisy Actions — One common criticism of agents trained in simulation is that the agent can query its environment for information that would not be readily available in the real world. In the case of EgoMap, the agent is trained with ground-truth ego-motion measurements. Real-world robots have noisy estimates of ego-motion due to the tolerances of available hardware. We therefore analysed the EgoMap agent trained in the presence of a noisy oracle, which adds noise to the ego-motion measurement. Noise is drawn from a normal distribution centred at one and is multiplied by the agent's ground-truth motion; the effect of the noise is cumulative but unbiased. Tests were conducted with standard deviations of up to 0.2, which is a tolerance of more than 20% on the agent's ego-motion measurements; results are shown in Figure 3. 
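The multiplicative noise model just described is simple enough to state in a few lines; applying an independent draw per motion component is our assumption:

```python
import torch

def noisy_ego_motion(dx, dy, dphi, sigma=0.1):
    """Unbiased multiplicative noise ~ N(1, sigma^2) applied to the
    ground-truth motion; the error accumulates along a trajectory but
    has mean one at every step."""
    n = torch.normal(mean=torch.ones(3), std=sigma)
    return dx * n[0].item(), dy * n[1].item(), dphi * n[2].item()
```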
We observed that the agent retains its performance increase over the baseline for noise levels of up to 10%; beyond that, performance degrades to that of the baseline agent.\nVisualization — The EgoMap architecture is highly interpretable and provides insights into how the agent reasons in 3D environments. In Figures 4a and 4b we show an analysis of the spatially structured memory and of how the agent has learned to query and self-attend to recall pertinent information. The figures show key steps during an episode in the Ordered 6-item and Find and Return scenarios, including the first three principal components of a dimensionality reduction of the 16-dimensional EgoMap, the attention distribution and the vector returned from position queries. Refer to the caption for further details. The agent is seen to attend to key objects at certain phases of the task: in the Ordered 6-item scenario the agent attends to the next item in the sequence, and in the Find and Return scenario the agent attends to the green totem located at the start/return point once it has found the intermediate goal." }, { "heading": "6 CONCLUSION", "text": "We have presented EgoMap, an egocentric spatially structured neural memory that augments an RL agent's performance in 3D navigation, spatial reasoning and control tasks. EgoMap includes a differentiable inverse projective transform that maps learned task-specific semantic embeddings of agent observations to their world positions. We have shown that through the use of global and self-attentive read mechanisms an agent can learn to focus on important features from the environment. We demonstrate that an RL agent can benefit from spatial memory, particularly in 3D scenarios with sparse rewards that require localization and memorization of objects. EgoMap outperforms existing state-of-the-art baselines, including Neural Map, a spatial memory architecture. The increase in performance compared to Neural Map is due to two aspects. 1) The differentiable projective transform maps what the objects are to where they are in the map, which allows for direct localization with attention queries and global reads. In comparison, Neural Map writes what the agent observes to where the agent is on the map; this means that the same object viewed from two different directions will be written to two different locations on the map, which leads to poorer localization of the object. 2) Neural Map splits the map into four 90-degree angles to alleviate the blurring highlighted in the appendix; our novel solution to this issue stores a single unified map in an allocentric frame of reference and performs an offline egocentric read, which allows an agent to act in state spaces where the angle is continuous, without the need to quantize the agent's angle to 90-degree increments.\nWe have shown, with detailed analysis, how the agent has learned to interact with its structured internal memory through self-attention. The ablation study has shown that the agent benefits from both the global and self-attention operations and that these can be augmented with temperature, position querying and other similarity metrics. We have demonstrated that the EgoMap architecture is robust to actions with tolerances of up to 10%. Future work in this area would be to apply the mapping and memory architecture in more visually realistic domains and to incorporate both dynamic and static objects into the agent's network architecture and update mechanisms."
}, { "heading": "A APPENDIX", "text": "" }, { "heading": "B AFFINE TRANSFORM", "text": "A naive implementation of the repeated affine transforms leads to smearing of features Figure 5 demonstrates the degradation of features on synthetic RGB images with repeated rotations and translations. We should how storing the features in an allocentric frame of reference and performing offline transforms for read operations can greatly mitigate this issue." }, { "heading": "C ARCHITECTURES", "text": "To encourage reproducibility, we detail the exact architectures of the agents. Figure 6 shows an overview.\nBaseline Model — is comprised of the following: Perception Module fp: A 3 layer CNN with kernel sizes of 8,4,3, strides of 4,2,1, no padding and filter sizes of 16,32,16, respectively and ReLU activation. fp is a mapping from an RGBD input observation of R4×64×112 → R16×4×10. Recurrent Module fr: A FC layer reduces the output of fp from 640 values to a vector of size 128 and includes a ReLU activation; we then use a GRU layer with 128 hidden units. Policy and Value Heads fπ & fv: These layers receive as input the output of fr. The policy layer is a FC layer with 5 output units corresponding to the 5 discrete actions available to the agent (through a softmax activation). The value layer is a FC layer with one output unit.\nEgoMap Model — is comprised of the following: Perception Module fp: The same as the baseline architecture, apart from that the mapping operation is applied before the final ReLU activation function. EgoMap Global Read Module: A 3 layer CNN with kernel sizes of 3,4,4, strides of 1,2,2 and filter sizes of 16,16,16 respectively, and no padding. Followed by two linear layers of 256 and 32 hidden units, and ReLU activations, apart from the last which was tanh. Recurrent Module fr: The output of fp and the global read module were concatenated to form a vector of size 672 and fed into a FC layer later with 128 output units. The recurrent module was a GRU with 128 units. Self-Attention Read Head: The query head is a linear layer with 17 output units, 16 for the calculation of the EgoMap similarity scores and one for the β temperature parameter. The query head returns a vector of size 18 which includes two more scalar values for the average position of the query. Policy\nand Value Heads fπ & fv: Are the same as the baseline but their input is the concatenation of the output of the fr and the attention head." }, { "heading": "D READ MECHANISMS", "text": "In figure 7 we provide further details of the operation of the global read, context read and xy-querying." }, { "heading": "E ADDITIONAL RESULTS", "text": "In figure 8 we provide the curves of agent performance on held out test configurations for three scenarios: 4-item, 6-item and labyrinth." } ]
2019
null
SP:dab57601f3910855870d72fb2729f4ce011f11a7
[ "This paper tackles the problem of solving a black-box optimization problem where only some samples have been observed. This task requires a good model that can be both expressive and generalizable. Instead of learning only a single forward model of x -> y, this paper proposes to additionally use a mapping from y -> x. Optimizing in the space of z instead of x can be much simpler, and this should also act as a strong regularizer during training. Specifically, the paper uses a GAN that transforms [y,z] -> x, where z is stochastically sampled. This paper further proposes a reweighting scheme that interpolates between a uniform weighting and weighting the best sample so far, as well as a sampling procedure that iteratively samples points and refits a second model, which was inspired by Thompson sampling.", "The paper prposes to learn an inverse network to predict x given a target y for optimisation, instead of the traditional way of optimisation (e.g. using Bayesian optimisation for the complex cases considered in the paper). However, unfortunately, this paper is too close in concept, and in my understanding lower in the solution quality to this recent paper:" ]
In this work, we aim to solve data-driven optimization problems, where the goal is to find an input that maximizes an unknown score function given access to a dataset of (input, score) pairs. Inputs may lie on extremely thin manifolds in high-dimensional spaces, making the optimization prone to falling off the manifold. Further, evaluating the unknown function may be expensive, so the algorithm should be able to exploit static, offline data. We propose model inversion networks (MINs) as an approach to solve such problems. Unlike prior work, MINs scale to extremely high-dimensional input spaces and can efficiently leverage offline logged datasets for optimization in both contextual and non-contextual settings. We show that MINs can also be extended to the active setting, commonly studied in prior work, via a simple, novel and effective scheme for active data collection. Our experiments show that MINs act as powerful optimizers on a range of contextual/non-contextual, static/active problems including optimization over images and protein designs and learning from logged bandit feedback.
[]
[ { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale GAN training for high fidelity natural image synthesis", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "David Brookes", "Hahnbeom Park", "Jennifer Listgarten" ], "title": "Conditioning by adaptive sampling for robust design", "venue": "In Proceedings of the 36th International Conference on Machine Learning. PMLR,", "year": 2019 }, { "authors": [ "Paul F. Christiano", "Jan Leike", "Tom B. Brown", "Miljan Martic", "Shane Legg", "Dario Amodei" ], "title": "Deep reinforcement learning from human preferences", "venue": null, "year": 2017 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using real nvp", "venue": "CoRR, abs/1605.08803,", "year": 2016 }, { "authors": [ "Marta Garnelo", "Dan Rosenbaum", "Christopher Maddison", "Tiago Ramalho", "David Saxton", "Murray Shanahan", "Yee Whye Teh", "Danilo Rezende", "S.M. Ali Eslami" ], "title": "Conditional neural processes", "venue": "In Proceedings of the 35th International Conference on Machine Learning. PMLR,", "year": 2018 }, { "authors": [ "Rafael Gómez-Bombarelli", "David Duvenaud", "José Miguel Hernández-Lobato", "Jorge AguileraIparraguirre", "Timothy D. Hirzel", "Ryan P. Adams", "Alán Aspuru-Guzik" ], "title": "Automatic chemical design using a data-driven continuous representation of molecules", "venue": "In ACS central science,", "year": 2018 }, { "authors": [ "Anvita Gupta", "James Zou" ], "title": "Feedback gan (fbgan) for dna: a novel feedback-loop architecture for optimizing protein", "venue": "functions. ArXiv,", "year": 2018 }, { "authors": [ "Warren Hoburg", "Pieter Abbeel" ], "title": "Geometric programming for aircraft design", "venue": "optimization. volume 52,", "year": 2012 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with gumbel-softmax", "venue": "CoRR, abs/1611.01144,", "year": 2016 }, { "authors": [ "Thorsten Joachims", "Adith Swaminathan", "Maarten de Rijke" ], "title": "Deep learning with logged bandit feedback", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih", "Jonathan Schwarz", "Marta Garnelo", "Ali Eslami", "Dan Rosenbaum", "Oriol Vinyals", "Yee Whye Teh" ], "title": "Attentive neural processes", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "URL http:// arxiv.org/abs/1312.6114. cite arxiv:1312.6114", "year": 2013 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Yann LeCun", "Corinna Cortes. MNIST handwritten digit database." 
], "title": "URL http://yann", "venue": "lecun.com/exdb/mnist/.", "year": 2010 }, { "authors": [ "Thomas Liao", "Grant Wang", "Brian Yang", "Rene Lee", "Kristofer Pister", "Sergey Levine", "Roberto Calandra" ], "title": "Data-efficient learning of morphology and controller for a microrobot", "venue": "IEEE International Conference on Robotics and Automation,", "year": 2019 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Alberto Maria Metelli", "Matteo Papini", "Francesco Faccio", "Marcello Restelli" ], "title": "Policy optimization via importance sampling. NIPS’18, 2018", "venue": "URL http://dl.acm.org/citation.cfm?", "year": 2018 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional generative adversarial nets, 2014", "venue": "URL http: //arxiv.org/abs/1411.1784. cite arxiv:1411.1784", "year": 2014 }, { "authors": [ "Xue Bin Peng", "Angjoo Kanazawa", "Sam Toyer", "Pieter Abbeel", "Sergey Levine" ], "title": "Variational discriminator bottleneck: Improving imitation learning, inverse RL, and GANs by constraining information flow", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Rasmus Rothe", "Radu Timofte", "Luc Van Gool" ], "title": "Dex: Deep expectation of apparent age from a single image", "venue": "In IEEE International Conference on Computer Vision Workshops (ICCVW),", "year": 2015 }, { "authors": [ "Rasmus Rothe", "Radu Timofte", "Luc Van Gool" ], "title": "Deep expectation of real and apparent age from a single image without facial landmarks", "venue": "International Journal of Computer Vision (IJCV),", "year": 2016 }, { "authors": [ "Reuven Y. Rubinstein" ], "title": "Optimization of computer simulation models with rare events", "venue": "European Journal of Operations Research,", "year": 1996 }, { "authors": [ "Reuven Y. Rubinstein", "Dirk P. Kroese" ], "title": "The Cross Entropy Method: A Unified Approach To Combinatorial Optimization, Monte-carlo Simulation (Information Science and Statistics)", "venue": null, "year": 2004 }, { "authors": [ "Daniel Russo", "Benjamin Van Roy" ], "title": "An information-theoretic analysis of thompson sampling", "venue": "J. Mach. Learn. Res.,", "year": 2016 }, { "authors": [ "Bobak Shahriari", "Kevin Swersky", "Ziyu Wang", "Ryan P. Adams", "Nando de Freitas" ], "title": "Taking the human out of the loop: A review of bayesian optimization", "venue": "Proceedings of the IEEE,", "year": 2016 }, { "authors": [ "Jasper Snoek", "Oren Rippel", "Kevin Swersky", "Ryan Kiros", "Nadathur Satish", "Narayanan Sundaram", "Mostofa Patwary", "Mr Prabhat", "Ryan Adams" ], "title": "Scalable bayesian optimization using deep neural networks", "venue": "In Proceedings of the 32nd International Conference on Machine Learning. PMLR,", "year": 2015 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Igor Babuschkin", "et.al" ], "title": "Parallel WaveNet: Fast high-fidelity speech synthesis", "venue": "In Proceedings of the 35th International Conference on Machine Learning", "year": 2018 }, { "authors": [ "Jun-Yan Zhu", "Philipp Krähenbühl", "Eli Shechtman", "Alexei A. Efros" ], "title": "Generative visual manipulation on the natural image manifold", "venue": "In Proceedings of European Conference on Computer Vision (ECCV),", "year": 2016 }, { "authors": [ "Barret Zoph", "Quoc V. 
Le" ], "title": "Neural architecture search with reinforcement learning. 2017", "venue": "URL https://arxiv.org/abs/1611.01578", "year": 2017 }, { "authors": [ "Russo", "Van Roy" ], "title": "We first define information ratio and then use it to prove the regret bound", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Data-driven optimization problems arise in a range of domains: from protein design (Brookes et al., 2019) to automated aircraft design (Hoburg & Abbeel, 2012), from the design of robots (Liao et al., 2019) to the design of neural net architectures (Zoph & Le, 2017) and learning from logged feedback, such as optimizing user preferences in recommender systems. Such problems require optimizing unknown reward or score functions using previously collected data consisting of pairs of inputs and corresponding score values, without direct access to the score function being optimized. This can be especially challenging when valid inputs lie on a low-dimensional manifold in the space of all inputs, e.g., the space of valid aircraft designs or valid images. Existing methods to solve such problems often use derivative-free optimization (Snoek et al.). Most of these techniques require active data collection where the unknown function is queried at new inputs. However, when function evaluation involves a complex real-world process, such as testing a new aircraft design or evaluating a new protein, such active methods can be very expensive. On the other hand, in many cases there is considerable prior data – existing aircraft and protein designs, and advertisements and user click rates, etc. – that could be leveraged to solve the optimization problem.\nIn this work, our goal is to develop an optimization approach to solve such optimization problems that can (1) readily operate on high-dimensional inputs comprising a narrow, low-dimensional manifold, such as natural images, (2) readily utilize offline static data, and (3) learn with minimal active data collection if needed. We can define this problem setting formally as the optimization problem\nx? = arg max x f(x), (1)\nwhere the function f(x) is unknown, and we have access to a dataset D = {(x1, y1), . . . , (xN , yN )}, where yi denotes the value f(xi). If no further data collection is possible, we call this the data-driven model-based optimization setting. This can also be extended to the contextual setting, where the aim is to optimize the expected score function value across a context distribution. That is,\nπ? = arg max π Ec∼p0(·)[f(c, π(c))], (2)\nwhere π? maps contexts c to inputs x, such that the expected score under the context distribution p0(c) is optimized. As before, f(c,x) is unknown and we have access to a dataset D = {(ci,xi, yi)}Ni=1,\nwhere yi is the value of f(ci,xi). Such contextual problems with logged datasets have been studied in the context of contextual bandits (Swaminathan & Joachims, a; Joachims et al., 2018).\nA simple way to approach these model-based optimization problems is to train a proxy function fθ(x) or fθ(c,x), with parameters θ, to approximate the true score, using the dataset D. However, directly using fθ(x) in place of the true function f(x) in Equation (1) generally works poorly, because the optimizer will quickly find an input x for which fθ(x) outputs an erroneously large value. This issue is especially severe when the inputs x lie on a narrow manifold in a high-dimensional space, such as the set of natural images (Zhu et al., 2016). The function fθ(x) is only valid near the training distribution, and can output erroneously large values when queried at points chosen by the optimizer. Prior work has sought to addresses this issue by using uncertainty estimation and Bayesian models (Snoek et al., 2015) for fθ(x), as well as active data collection (Snoek et al.). 
However, explicit uncertainty estimation is difficult when the function f_θ(x) is very complex or when x is high-dimensional.

Instead of learning f_θ(x), we propose to learn the inverse function, mapping from values y to corresponding inputs x. This inverse mapping is one-to-many, and therefore requires a stochastic mapping, which we can express as f^{-1}_θ(y, z) → x, where z is a random variable. We term such models model inversion networks (MINs). MINs provide us with a number of desirable properties: they can utilize static datasets, handle high-dimensional input spaces such as images, solve contextual problems, and accommodate both static datasets and active data collection. We discuss how to design simple active data collection methods for MINs, leverage advances in deep generative modeling (Goodfellow et al.; Brock et al., 2019), and scale to very high-dimensional input spaces. We experimentally demonstrate MINs in a range of settings, showing that they outperform prior methods on high-dimensional input spaces, perform competitively with Bayesian optimization methods on tasks with active data collection and lower-dimensional inputs, and substantially outperform prior methods on contextual optimization from logged data (Swaminathan & Joachims, a)." }, { "heading": "2 RELATED WORK", "text": "Bayesian optimization. In this paper, we aim to solve data-driven optimization problems. Most prior work aimed at solving such optimization problems has focused on the active setting. This includes algorithms such as the cross entropy method (CEM) and related derivative-free methods (Rubinstein, 1996; Rubinstein & Kroese, 2004), reward-weighted regression (Peters & Schaal), Bayesian optimization methods based on Gaussian processes (Shahriari et al., 2016; Snoek et al., 2015), and variants that replace GPs with parametric acquisition function approximators such as Bayesian neural networks (Snoek et al., 2015) and latent variable models (Kim et al., 2019; Garnelo et al., 2018b;a), as well as more recent methods such as CbAS (Brookes et al., 2019). These methods require the ability to query the true function f(x) at each iteration to iteratively arrive at a near-optimal solution. We show in Section 3.4 that MINs can be applied to such an active setting as well, and in our experiments we show that MINs can perform competitively with these prior methods. Additionally, we show that MINs can be applied to the static setting, where these prior methods are not applicable. Furthermore, most conventional BO methods do not scale favourably to high-dimensional input spaces, such as images, while MINs can handle image inputs effectively.

Contextual bandits. Equation 2 captures the class of contextual bandit problems. Prior work on batch contextual bandits has focused on batch learning from bandit feedback (BLBF), where the learner needs to produce the best possible policy that optimizes the score function from logged experience. Existing approaches build on the counterfactual risk minimization (CRM) principle (Swaminathan & Joachims, a;b), and have been extended to work with deep nets (Joachims et al., 2018). In our comparisons, we find that MINs substantially outperform these prior methods in the batch contextual bandit setting.

Deep generative modeling. 
Recently, deep generative modeling approaches have been very successful at modeling high-dimensional manifolds such as natural images (Goodfellow et al.; Van Den Oord et al.; Dinh et al., 2016), speech (van den Oord et al., 2018), text (Yu et al.), alloy composition prediction (Nguyen et al.), etc. MINs combine the strength of such generative models with important algorithmic decisions to solve model-based optimization problems. In our experimental evaluation, we show that these design decisions are important for adapting deep generative models to model-based optimization, and it is difficult to perform effective optimization without them." }, { "heading": "3 MODEL INVERSION NETWORKS", "text": "In this section, we describe our model inversion networks (MINs) method, which can perform both active and passive model-based optimization over high-dimensional input spaces.

Problem statement. Our goal is to solve optimization problems of the form x* = arg max_x f(x), where the function f(x) is not known, but we must instead use a dataset of input-output tuples D = {(x_i, y_i)}. In the contextual setting described in Equation (2), each datapoint is also associated with a context c_i. For clarity, we present our method in the non-contextual setting, but the contextual setting can be derived analogously by conditioning all functions on the context. In the active setting, which is most often studied in prior work, the algorithm is allowed to actively query f(x) one or more times on each iteration to augment the dataset, while in the static setting, only an initial static dataset is available. The goal is to obtain the best possible x* (i.e., the one with the highest possible value of f(x*)).

One naïve way of solving MBO problems is to learn a proxy score function f_θ(x) via standard empirical risk minimization. We could then maximize this learned function with respect to x via standard optimization methods. However, naïve applications of such a method would fail for two reasons. First, the proxy function f_θ(x) may not be accurate outside the samples on which it is trained, and optimization with respect to it may simply lead to values of x for which f_θ(x) makes the largest mistake in the negative direction. The second problem is more subtle. When x lies on a narrow manifold in very high-dimensional space (such as the space of natural images), the optimizer can produce invalid values of x, which result in arbitrary outputs when fed into f_θ(x). Since the shape of this manifold is unknown, it is difficult to constrain the optimizer to prevent this. This second problem is rarely addressed or discussed in prior work, which typically focuses on optimization over low-dimensional and compact domains with known bounds." }, { "heading": "3.1 OPTIMIZATION VIA INVERSE MAPS", "text": "Part of the reason for the brittleness of the naïve approach above is that f_θ(x) has a high-dimensional input space, making it easy for the optimizer to find inputs x for which the proxy function produces an unreasonable output. Can we instead learn a function with a small input space, which implicitly understands the space of valid, in-distribution values for x? The main idea behind our approach is to model an inverse map that produces a value of x given a score value y, given by f^{-1}_θ : Y → X. The input to the inverse map is a scalar, making it comparatively easy to constrain to valid values, and by directly generating the inputs x, an approximation to the inverse function must implicitly understand which input values are valid.
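
To make this contrast concrete, the following is a minimal sketch (PyTorch; all module sizes and names are illustrative placeholders, not the architectures used in our experiments) of the two objects discussed above: a forward proxy whose input can be driven off the data manifold, and a stochastic inverse map whose inputs are only a scalar score and a low-dimensional latent.

import torch
import torch.nn as nn

class ForwardProxy(nn.Module):
    # Proxy score model f_theta(x); optimizing directly over its input can
    # leave the data manifold and exploit regions where it is inaccurate.
    def __init__(self, x_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 1))
    def forward(self, x):
        return self.net(x)

class InverseMap(nn.Module):
    # Stochastic inverse map f^{-1}_theta(z, y) -> x: conditioned on a scalar
    # score y and a latent z, so every output is a generated x rather than an
    # arbitrary point of the input space.
    def __init__(self, x_dim, z_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim + 1, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim))
    def forward(self, z, y):
        return self.net(torch.cat([z, y], dim=-1))

# Querying the inverse map: sample z ~ p_0(z) and condition on a target score.
inv = InverseMap(x_dim=784, z_dim=32)
z = torch.randn(16, 32)                  # z ~ p_0(z)
y = torch.full((16, 1), 5.0)             # target score to condition on
x_candidates = inv(z, y)                 # candidate inputs from the learned manifold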
As multiple x values can correspond to the same y, we design f^{-1}_θ as a stochastic map that maps a score value along with a d_z-dimensional random vector to an x, f^{-1}_θ : Y × Z → X, where z is distributed according to a prior distribution p_0(z). To define the inverse map objective, let the data distribution be denoted p_D(x, y), let p_D(y) be the marginal over y, and let p(y) be any distribution defined on Y (which could be equal to p_D(y)). We can train the proxy inverse map f^{-1}_θ under distribution p(y) by minimizing the following objective:

L_p(D) = E_{y∼p(y)}[D(p_D(x|y), p_{f^{-1}_θ}(x|y))],    (3)

where p_{f^{-1}_θ}(x|y) is obtained by marginalizing over z, p_{f^{-1}_θ}(x|y) = ∫_z p_0(z) · 1[x = f^{-1}_θ(z, y)] dz, and D is a measure of divergence between the two distributions. Using the Kullback-Leibler divergence leads to maximum likelihood learning, while the Jensen-Shannon divergence motivates a GAN-style training objective. MINs can be adapted to the contextual setting by passing in the context as an input and learning f^{-1}_θ(y_i, z, c_i). In standard empirical risk minimization, we would choose p(y) to be the data distribution p_D(y), such that the expectation can be approximated simply by sampling training tuples (x_i, y_i) from the training set. However, as we will discuss in Section 3.3, a more careful choice for p(y) can lead to better performance. The MIN algorithm is based on training an inverse map, and then using it via the inference procedure in Section 3.2 to infer the x that approximately optimizes f(x). The structure of the MIN algorithm is shown in Algorithm 1.

Algorithm 1 Generic Algorithm for MINs
1: Input: p_D(y): distribution of y in D
2: Train inverse map f^{-1}_θ : Y × Z → X using objective (Equation 3) with reweighting, and active data collection if needed
3: x* ← APPROX-INFER(f^{-1}_θ, p_D(y))
4: return x*" }, { "heading": "3.2 INFERENCE WITH INVERSE MAPS (APPROX-INFER)", "text": "Once the inverse map is trained, the goal of our algorithm is to generate the best possible x*, which will maximize the true score function as well as possible under the dataset. Since a score y needs to be provided as input to the inverse map, we must select for which score y to query the inverse map to obtain a near-optimal x. One naïve heuristic is to pick the best observed score y_max ∈ D and produce x_max ∼ f^{-1}_θ(y_max) as the output. However, the method should be able to extrapolate beyond the best score seen in the dataset, especially in contextual settings, where a good score may not have been observed for all contexts.

In order to extrapolate as far as possible, while still staying on the valid data manifold, we need to measure the validity of the generated values of x. One way to do this is to measure the agreement between the learned inverse map and an independently trained forward model f_θ: the values of y for which the generated samples x are predicted to have a score similar to y are likely in-distribution, whereas those where the forward model predicts a very different score may be too far outside the training distribution. Since the latent variable z captures the multiple possible outputs of the one-to-many inverse map, we can further optimize over z for a given y to find the best, most trustworthy output x. This can be formalized as the following optimization:

ỹ*, z̃* := arg max_{y,z} f_θ(f^{-1}_θ(z, y)) − λ_1 ||y − f_θ(f^{-1}_θ(z, y))||_2 + λ_2 log p_0(z)    (4)

This optimization can be motivated as finding an extrapolated score that corresponds to values of x that lie on the valid input manifold, and for which independently trained forward and inverse maps agree. Although this optimization uses an approximate forward map f_θ(x), we show in our experiments in Section 4 that it produces substantially better results than optimizing with respect to a forward model alone. The inverse map substantially constrains the search space, requiring an optimization over a 1-dimensional y and a (relatively) low-dimensional z, rather than the full space of inputs. This scheme can be viewed as a special (deterministic) case of a probabilistic optimization procedure described in Appendix A.
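
A minimal sketch of this inference procedure (PyTorch; the coefficients, step count, and learning rate are illustrative placeholders rather than tuned values, and gen/fwd stand for the trained inverse and forward maps) is:

import torch

def approx_infer(gen, fwd, z_dim, lam1=1.0, lam2=0.1, steps=200, lr=1e-2):
    # Jointly optimize a score y and latent z (Eq. 4) so that the forward
    # and inverse maps agree on the generated input.
    y = torch.zeros(1, 1, requires_grad=True)
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([y, z], lr=lr)
    for _ in range(steps):
        x = gen(z, y)
        y_hat = fwd(x)
        log_p0_z = -0.5 * (z ** 2).sum()   # log-density of z under N(0, I), up to a constant
        loss = -(y_hat.sum() - lam1 * ((y - y_hat) ** 2).sum() + lam2 * log_p0_z)
        opt.zero_grad(); loss.backward(); opt.step()
    return gen(z, y).detach()              # the inferred, most trustworthy x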
" }, { "heading": "3.3 REWEIGHTING THE TRAINING DISTRIBUTION", "text": "A naïve implementation of the training objective in Equation (3) samples y from the data distribution p_D(y). However, as we are most interested in the inverse map's predictions for high values of y, it is much less important for the inverse map to predict accurate x values for values of y that are far from the optimum. We could consider increasing the weights on datapoints with larger values of y. In the extreme case, we could train only on the best datapoint – either the single datapoint with the largest y or, in the contextual case, the datapoint with the largest y for each context. More generally, we can define the optimal y distribution p*(y), which is simply the delta function centered on the best y, p*(y) = δ_{y*}(y), in the deterministic case. If we instead assume that the observed scores have additive noise (i.e., we observe f(x) + ε, ε ∼ N), then p*(y) would be a distribution centered around the optimal y. Of course, training on p*(y) is not practical, since it heavily down-weights most of the training data, leading to a very high-variance training objective, and it is not even known in general, since the optimal data point is likely not in our training set. In this section, we will propose a better choice for p(y) that trades off the variance due to an overly peaked training distribution and the bias due to training on the \"wrong\" distribution (i.e., anything other than p*(y)).

We can train under a distribution other than the empirical distribution by using importance sampling, such that we sample from p_D and assign an importance weight, given by w_i = p(y_i)/p_D(y_i), to each datapoint (x_i, y_i), where p(y_i) is our desired distribution. The reweighted objective is given by L̂_p(D) := (1/|D|) ∑_i w_i · D̂(x_i, f^{-1}_θ(y_i)). By bounding the variance and the bias of the gradient of the L̂_p(D) estimate, with respect to the reweighted objective without sampling error under y drawn from p*(y), we obtain the following result (proof in Appendix B):

Theorem 3.1 ((Informal) Bias + variance bound in MINs). Let L(p*) be the objective under p*(y) without sampling error: L(p*) = E_{y∼p*(y)}[D(p(x|y), f^{-1}(y))]. Let N_y be the number of datapoints with the particular y value observed in D. For some constants C_1, C_2, C_3, with high confidence,

E[ ||∇_θ L̂_p(D) − ∇_θ L(p*)||²_2 ] ≤ C_1 E_{y∼p(y)}[ 1/N_y ] + C_2 d_2(p||p_D)/|D| + C_3 · D_TV(p*, p)²

Theorem 3.1 suggests a tradeoff between being close to the optimal distribution p*(y) and reducing variance by covering the full data distribution p_D. We observe that the distribution p(y) that minimizes the RHS bound in Theorem 3.1 has the following form: p(y) ∝ N_y/(N_y + K) · g(p*(y)), where g(p*) is a linear function of p*(y) that ensures that the distributions p and p* are close. Theoretically, g(◦) is an increasing, piece-wise linear function of ◦. We can interpret the expression for p(y) as a product of two likelihoods – the optimality of a particular y value and the likelihood of a particular y not being rare in D. We empirically choose an exponential parametric form for this function, which we describe in Section 3.5. This upweights the samples with higher scores and reduces the weight on rare y-values (i.e., those with low N_y), while preventing the weight on common y-values from growing, since N_y/(N_y + K) saturates to 1 for large N_y. This is consistent with our intuition: we would like to upweight datapoints with high y-values, provided the number of samples at those values is not too low. Of course, for continuous-valued scores, we rarely see the same score twice. Therefore, we bin the y-values into discrete bins for the purpose of weighting, as we discuss in Section 3.5.
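
A minimal sketch of this binned reweighting (numpy; the bin count, λ, and τ defaults are illustrative, and the exponential form anticipates the parametric choice of Section 3.5) is:

import numpy as np

def importance_weights(y, n_bins=20, lam=0.003, tau=1.0):
    # Discretize scores into bins b, weight each bin by
    # N_b/(N_b + lam*|D|) * exp(-|b - y*| / tau), then convert to
    # per-example weights w_i = p(y_i) / p_D(y_i).
    edges = np.histogram_bin_edges(y, bins=n_bins)
    bin_idx = np.clip(np.digitize(y, edges) - 1, 0, n_bins - 1)
    counts = np.bincount(bin_idx, minlength=n_bins).astype(float)
    centers = 0.5 * (edges[:-1] + edges[1:])
    y_star = y.max()
    w_bin = counts / (counts + lam * len(y)) * np.exp(-np.abs(centers - y_star) / tau)
    w_bin /= w_bin.sum()                      # normalized p(b)
    p_data = counts[bin_idx] / len(y)          # empirical p_D(b) per example
    return w_bin[bin_idx] / np.maximum(p_data, 1e-8)

y = np.random.randn(1000)                      # toy scores
w = importance_weights(y)                      # multiplies per-example losses in Eq. (3)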
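
Line 3 of Algorithm 2 admits a very small implementation; the following sketch (numpy; the noise scale and sample count are illustrative, and X_train/y_train stand in for the arrays backing D_t) pairs noise-perturbed inputs with scores above the observed maximum:

import numpy as np

def randomized_labels(X, y, n_synthetic=64, noise_scale=0.01):
    # Build S_t = {(x~_j, y~_j)}: noisy copies of observed inputs paired with
    # score values not seen in D_t.
    idx = np.random.randint(0, len(X), size=n_synthetic)
    X_syn = X[idx] + noise_scale * np.random.randn(*X[idx].shape)
    # Unseen y values: sample above the observed maximum, so that training on
    # D_t U S_t implicitly commits to one optimistic function sample.
    y_gap = y.max() - np.median(y)
    y_syn = y.max() + np.abs(np.random.randn(n_synthetic)) * y_gap
    return X_syn, y_syn

X_syn, y_syn = randomized_labels(X_train, y_train)
X_aug = np.concatenate([X_train, X_syn]); y_aug = np.concatenate([y_train, y_syn])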
" }, { "heading": "3.4 ACTIVE DATA COLLECTION VIA RANDOMIZED LABELING", "text": "While the passive setting requires care in finding the best value of y for the inverse map, the active setting presents a different challenge: choosing a new query point x at each iteration to augment the dataset D and make it possible to find the best possible optimum. Prior work on bandits and Bayesian optimization often uses Thompson sampling (TS) (Russo & Van Roy, 2016; Russo et al., 2018; Srinivas et al.) as the data-collection strategy. TS maintains a posterior distribution over functions p(f_t|D_{1:t}). At each iteration, it samples a function from this distribution and queries the point x*_t that greedily maximizes this function. TS offers an appealing query mechanism, since it achieves sub-linear Bayesian regret (defined as the expected cumulative difference between the value of the optimal input and the selected input), given by O(√T), where T is the number of queries.

Maintaining a posterior over high-dimensional parametric functions is generally intractable. However, we can devise a scheme to approximate Thompson sampling with MINs. To derive this method, first note that sampling f_t from the posterior is equivalent to sampling (x, y) pairs consistent with f_t – given sufficiently many (x, y) pairs, there is a unique smooth function f_t that satisfies y_i = f_t(x_i). For example, we can infer a quadratic function exactly from three points. For a more formal description, we refer readers to the notion of Eluder dimension (Russo & Van Roy). Thus, instead of maintaining intractable beliefs over the function, we identify a function by the samples it generates, and define a way to sample synthetic (x, y) points such that they implicitly define a unique function sample from the posterior.

To apply this idea to MINs, we train the inverse map f^{-1}_{θ_t} at each iteration t with an augmented dataset D'_t = D_t ∪ S_t, where S_t = {(x̃_j, ỹ_j)}_{j=1}^K is a dataset of synthetically generated input-score pairs corresponding to unseen y values in D_t. Training f^{-1}_{θ_t} on D'_t corresponds to training f^{-1}_{θ_t} to be an approximate inverse map for a function f_t sampled from p(f_t|D_{1:t}), as the synthetically generated samples S_t implicitly induce a model of f_t. We can then approximate Thompson sampling by obtaining x*_t from f^{-1}_{θ_t}, labeling it via the true function, and adding it to D_t to produce D_{t+1}. Pseudocode for this method, which we call \"randomized labeling,\" is presented in Algorithm 2. In Appendix C, we further derive O(√T) regret guarantees under mild assumptions. Implementation-wise, this method is simple, does not require estimating explicit uncertainty, and works with arbitrary function classes, including deep neural networks.

Algorithm 2 Active Data Collection with Model Inversion Networks via Randomized Labeling
1: Initialize inverse map, f^{-1}_θ : Y × Z → X, dataset D_0 = {}
2: for step t in {0, . . . , T−1} do
3:   Sample synthetic samples S_t = {(x_i, y_i)}_{i=1}^K corresponding to unseen data points y_i (by randomly pairing noisy observed x_i values with unobserved y values)
4:   Train inverse map f^{-1}_t on D'_t = D_t ∪ S_t, using the reweighting described in Section 3.3
5:   Query function f at x_t = f^{-1}_t(max_{D'_t} y)
6:   Observe outcome (x_t, f(x_t)) and update D_{t+1} = D_t ∪ {(x_t, f(x_t))}
7: end for
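
A single training step of this reweighted conditional GAN can be sketched as follows (PyTorch; gen, disc, and the optimizers are assumed to be defined by the user, and w holds the per-example importance weights described above, broadcastable against the discriminator logits):

import torch
import torch.nn.functional as F

def cgan_step(gen, disc, x, y, w, z_dim, opt_g, opt_d):
    # One sketched step: gen(z, y) is the inverse map f^{-1}_theta and
    # disc(x, y) returns logits; real (x, y) pairs -> 1, generated pairs -> 0.
    z = torch.randn(x.size(0), z_dim)
    x_fake = gen(z, y)
    d_loss = (w * (F.binary_cross_entropy_with_logits(
                       disc(x, y), torch.ones_like(y), reduction='none')
                   + F.binary_cross_entropy_with_logits(
                       disc(x_fake.detach(), y), torch.zeros_like(y), reduction='none'))).mean()
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool the discriminator at the same conditioning scores.
    g_loss = (w * F.binary_cross_entropy_with_logits(
                      disc(gen(z, y), y), torch.ones_like(y), reduction='none')).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()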
" }, { "heading": "3.5 PRACTICAL IMPLEMENTATION OF MINS", "text": "In this section, we describe our instantiation of MINs for high-dimensional inputs with deep neural network models. GANs (Goodfellow et al.) have been successfully used to model the manifold of high-dimensional inputs, without the need for explicit density modeling, and are known to produce more realistic samples than other models such as VAEs (Kingma & Welling, 2013) or Flows (Dinh et al., 2016). The inverse map in MINs needs to model the manifold of valid x, thus making GANs a suitable choice. We can instantiate our inverse map with a GAN by choosing D in Equation 3 to be the Jensen-Shannon divergence measure. Since we generate x conditioned on y, the discriminator is parameterized as Disc(x|y), and trained to output 1 for a valid (x, y) pair (i.e., where y = f(x) and x comes from the data) and 0 otherwise. Thus, we optimize the following objective:

min_θ max_Disc L_p(D) = E_{y∼p(y)}[ E_{x∼p_D(x|y)}[log Disc(x|y)] + E_{z∼p_0(z)}[log(1 − Disc(f^{-1}_θ(z, y)|y))] ]

This model is similar to a conditional GAN (cGAN), which has been used in the context of modeling the distribution of x conditioned on a discrete-valued label (Mirza & Osindero, 2014). As discussed in Section 3.3, we additionally reweight the data distribution using importance sampling. To that end, we discretize the space Y into B discrete bins b_1, · · · , b_B and, following Section 3.3, weight each bin b_i according to p(b_i) ∝ N_{b_i}/(N_{b_i} + λ) · exp(−|b_i − y*|/τ), where N_{b_i} is the number of datapoints in the bin, y* is the maximum score observed, and τ is a hyperparameter. (After discretization, using notation from Section 3.3, for any y that lies in bin b, p*(y) := p*(b) ∝ exp(−|b − y*|/τ) and p(y) := p(b) ∝ N_b/(N_b + λ) · exp(−|b − y*|/τ).) Experimental details are provided in Appendix C.4.
In the active setting, we perform active data collection using the synthetic relabeling algorithm described in Section 3.4. In practice, we train two copies of f^{-1}_θ. The first, which we call the exploration model f^{-1}_expl, is trained with data augmented via synthetically generated samples (i.e., D'_t). The other copy, called the exploitation model f^{-1}_exploit, is trained on only real samples (i.e., D_t). This improves stability during training, while still performing data collection as dictated by Algorithm 2. To generate the augmented dataset D'_t in practice, we sample y values from p*(y) (the distribution over high-scoring ys observed in D_t) and add positive-valued noise, thus making the augmented y values higher than those in the dataset, which promotes exploration. The corresponding inputs x are simply sampled from the dataset D_t, or uniformly sampled from the bounded input domain when one is provided in the problem statement (for example, in benchmark function optimization). After training, we infer the best possible x* from the trained model using the inference procedure described in Section 3.2. In the active setting, the inference procedure is applied to f^{-1}_exploit, the inverse map that is trained only on real data points." }, { "heading": "4 EXPERIMENTAL EVALUATION", "text": "The goal of our empirical evaluation is to answer the following questions. (1) Can MINs successfully solve optimization problems of the form shown in Equations 1 and 2, in static settings and active settings, better than or comparably to prior methods? (2) Can MINs generalize to high-dimensional spaces, where valid inputs x lie on a lower-dimensional manifold, such as the space of natural images? (3) Is reweighting the data distribution important for effective data-driven model-based optimization? (4) Does our proposed inference procedure effectively discover valid inputs x with better values than any value seen in the dataset? (5) Does randomized labeling help in active data collection?" }, { "heading": "4.1 DATA-DRIVEN OPTIMIZATION WITH STATIC DATASETS", "text": "We first study the data-driven model-based optimization setting. This requires generating points that achieve a better function value than any point in the training set or, in the contextual setting, better than the policy that generated the dataset for each context. We evaluate our method on a batch contextual bandit task proposed in prior work (Joachims et al., 2018) and on a high-dimensional contextual image optimization task. We also evaluate our method on several non-contextual tasks that require optimizing over high-dimensional image inputs to evaluate a semantic score function, including hand-written characters and real-world photographs.

Batch contextual bandits. We first study the contextual optimization problem described in Equation 2. The goal is to learn a policy, purely from static data, that predicts the correct bandit arm x for each context c, such that the policy achieves a high overall score f(c, π(c)) on average across contexts drawn from a distribution p_0(c). We follow the protocol set out by Joachims et al. (2018), which evaluates contextual bandit policies trained on a static dataset for simulated classification tasks. The data is constructed by selecting images from the (MNIST/CIFAR) dataset as the context c, a random label as the input x, and a binary indicator of whether or not the label is correct as the score y. Multiple schemes can be used for selecting random labels for generating the dataset, and we evaluate on two such schemes, as described below. We report the average score on a set of new contexts, which is equal to the average 0-1 accuracy of the learned model on a held-out test set of images (contexts). We compare our method to previously proposed techniques, including the BanditNet model proposed by Joachims et al. (2018), on the MNIST and CIFAR-10 (Krizhevsky, 2009) datasets. Note that this task is different from regular classification, in that the observed feedback ((c_i, x_i, y_i) pairs) is partial, i.e., we do not observe the correct label for each context (image) c_i, but only whether or not the label in the training tuple is correct. We evaluate on two datasets: (1) data generated by selecting random labels x_i for each context c_i, and (2) data where the correct label is used 49% of the time, which matches the protocol in prior work (Joachims et al., 2018). We compare to BanditNet (Joachims et al., 2018) on identical dataset splits. We report the average 0-1 test accuracy for all methods in Table 1. The results show that MINs drastically outperform BanditNet on both the MNIST and CIFAR-10 datasets, indicating that MINs can successfully perform contextual model-based optimization in the static (data-driven) setting. The results also show that utilizing the inference procedure in Section 3.2 produces an improvement of about 1.5% and 1.0% in test accuracy on MNIST and CIFAR-10, respectively.

Character stroke width optimization. In the next experiment, we study how well MINs optimize over high-dimensional inputs, where valid inputs lie on a lower-dimensional manifold. We constructed an image optimization task out of the MNIST (LeCun & Cortes, 2010) dataset. The goal is to optimize directly over the image pixels, to produce images with the thickest stroke width, such that the image corresponds either (a) to any valid character or (b) to a valid instance of a particular character class. A successful algorithm will produce the thickest character that is still recognizable. In Figure 1, we observe that MINs generate images x that maximize the respective score functions in each case. We also evaluate on a harder task where the goal is to maximize the number of disconnected blobs of black pixels in an image of a digit. For comparison, we evaluate a method that directly optimizes the image pixels with respect to a forward model of the form f_θ(x). In this case, the solutions are far off the manifold of valid characters. We also compare to MINs without the reweighting scheme and the inference procedure, where the inverse map is simply queried at the maximum y in the dataset, to demonstrate the benefits of these two aspects.

Semantic age optimization. We also study a task where the goal is to optimize face images from the IMDB-Wiki dataset (Rothe et al., 2015; 2016) so that the person appears as young as possible. One model is trained only on faces of people 15 years or older, while a second model is trained on all faces older than 25 years. This ensures that our model cannot simply copy the youngest face. To obtain ground truth scores for the generated faces, we use subjective judgement from human participants. We perform a study with 13 users. Each user was asked to answer a set of 35 binary-choice questions, each asking the user to pick the older image of the two provided alternatives. We then fit an age function to this set of binary preferences, analogously to Christiano et al. (2017).

Table 2: Semantic age optimization (scores are negative ages). Task ≥ 15: MIN -13.6, MIN (best) -12.2. Task ≥ 25: MIN -26.2, MIN (best) -23.9.

Figure 2 shows the images produced by MINs. For comparison, we also present some samples of images from the dataset, partitioned by the ground truth score. We find that the most likely age for optimal images produced by training MINs on images of people 15 years or older was 13.6 years, with the best image having an age of 12.2. The model trained on ages 25 and above produced more mixed results, with an average age of 26.2 and a minimum age of 23.9. We report these results in Table 2. This task is exceptionally difficult, since the model must extrapolate outside of the ages seen in the training set, picking up on patterns in the images that can be used to produce faces that appear younger than any face that the model had seen, while avoiding unrealistic images.

We also conducted experiments on contextual image optimization with MINs. 
We studied contextual optimization over hand-written digits to maximize stroke width, using either the character category as the context c, or the top one-fourth or top half of the image. In the latter case, MINs must learn to complete the image while maximizing the stroke width. In the case of class-conditioned optimization, MINs attain an average score over the classes of 237.6, while the dataset average is 149.0. In the case where the context is the top half or quarter of the image, MINs obtain average scores of 223.57 and 234.32, respectively, while the dataset average is 149.0 for both tasks. We report these results in Table 3. We also conducted a contextual optimization experiment on faces from the Celeb-A dataset, with some example images shown in Figure 3. The context corresponds to the choice of the attributes brown hair, black hair, bangs, or moustache. The optimization score is given by the sum of the attributes wavy hair, eyeglasses, smiling, and no beard. Qualitatively, we can see that MINs successfully optimize the score while obeying the target context, though evaluating the true score is impossible without subjective judgement on this task. We discuss these experiments in more detail in Appendix D.1." }, { "heading": "4.2 OPTIMIZATION WITH ACTIVE DATA COLLECTION", "text": "In the active MBO setting, MINs must select which new datapoints to query to improve their estimate of the optimal input. In this setting, we compare to prior model-based optimization methods, and evaluate the exploration technique described in Section 3.4.

Global optimization on benchmark functions. We first compare MINs to prior work in Bayesian optimization on standard benchmark problems from DNGO (Snoek et al., 2015): the 2D Branin function and the 6D Hartmann function. As shown in Table 4, MINs reach within ±0.1 units of the global minimum (minimization is performed here, instead of maximization), performing comparably with commonly used Bayesian optimization methods based on Gaussian processes. We do not expect MINs to be as efficient as GP-based methods, since MINs rely on training parametric neural networks with many parameters, which is less efficient than GPs on low-dimensional tasks. Exact Gaussian processes and adaptive Bayesian linear regression (Snoek et al., 2015) outperform MINs in terms of optimization precision and the number of samples queried, but MINs achieve comparable performance with about 4× more samples. We also report the performance of MINs without the random labeling exploration method, instead selecting the next query point by greedily maximizing the current model with some additive noise. We find that the random relabeling method produces substantially better results than the greedy data collection approach, indicating the importance of effective exploration methods for MINs.

Protein fluorescence maximization. In the next experiment, we study a high-dimensional active MBO task, previously studied by Brookes et al. (2019). This task requires optimizing over protein designs by selecting variable-length sequences of codons, where each codon can take on one of 20 values. In order to model discrete values, we use a Gumbel-softmax GAN, previously employed by Gupta & Zou (2018) and used as a baseline by Brookes et al. (2019). For backpropagation, we choose a temperature τ = 0.75 for the Gumbel-softmax operation (this choice is also noted in Appendix D).
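
For reference, a minimal sketch of this discrete relaxation (PyTorch; all shapes below are toy values, not the real task sizes, and the wrapper name is ours) is:

import torch
import torch.nn.functional as F

def sample_sequence(logits, tau=0.75, hard=False):
    # Gumbel-softmax relaxation: lets the inverse map emit discrete sequences
    # (positions x vocabulary-of-20 logits) while remaining differentiable;
    # tau matches the temperature stated above.
    return F.gumbel_softmax(logits, tau=tau, hard=hard, dim=-1)

logits = torch.randn(8, 50, 20)              # (batch, sequence length, 20 values)
x_soft = sample_sequence(logits)             # differentiable samples, for training
x_hard = sample_sequence(logits, hard=True)  # straight-through, discrete outputs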
The aim in this task is to produce a protein with maximum fluorescence. Each algorithm is provided with a starting dataset, and then allowed an identical, limited number of score function queries. For each query made by an algorithm, it receives a score value from an oracle. We use the trained oracles released by Brookes et al. (2019). These oracles are separately trained forward models, and can potentially be inaccurate, especially for datapoints not observed in the starting static dataset. We compare to CbAS (Brookes et al., 2019) and other baselines, including CEM (Cross Entropy Method), RWR (Reward Weighted Regression), and a method that uses a forward model, GB (Gómez-Bombarelli et al., 2018), as reported by Brookes et al. (2019). For evaluation, we report the ground-truth score of the output of optimization (max), and the 50th-percentile ground-truth score of all the samples produced via sampling (this is without inference in the MIN case), so as to be comparable to Brookes et al. (2019). In Table 5, we show that MINs are comparable to the best-performing method on this task, and produce samples with the highest score among all the methods considered.

These results suggest that MINs can perform competitively with previously proposed model-based optimization methods in the active setting, reaching comparable or better performance when compared both to Bayesian optimization methods and to previously proposed methods for a higher-dimensional protein design task.

5 DISCUSSION

In this work, we presented a novel approach towards model-based optimization (MBO). Instead of learning a proxy forward function f_θ(x) from inputs x to scores y, MINs learn a stochastic inverse mapping from scores y to inputs. MINs are resistant to out-of-distribution inputs and can optimize over high-dimensional x values where valid inputs lie on a narrow manifold. By using simple and principled design decisions, such as re-weighting the data distribution, MINs can perform effective model-based optimization even from static, previously collected datasets in the data-driven setting, without the need for active data collection. We also described ways to perform active data collection if needed. Our experiments showed that MINs are capable of solving MBO tasks in both contextual and non-contextual settings, and are effective with highly semantic score functions such as the age of the person in an image.

Prior work has usually considered MBO in the active or \"on-policy\" setting, where the algorithm actively queries data as it learns. In this work, we introduced the data-driven MBO problem statement and devised a method to perform optimization in such scenarios. This is important in settings where data collection is expensive and where abundant datasets exist, for example, protein design, aircraft design, and drug design. Further, MINs define a family of algorithms that show promising results on MBO problems over extremely large input spaces.

While MINs scale to high-dimensional tasks such as model-based optimization over images, and are performant in both contextual and non-contextual settings, we believe there are a number of interesting open questions for future work. The interaction between active data collection and reweighting should be investigated in more detail, and poses interesting consequences for MBO, bandits, and reinforcement learning. Better and more principled inference procedures are also a direction for future work. Another avenue is to study various choices of training objectives in MIN optimization."
}, { "heading": "A PROBABILISTIC INTERPRETATION OF SECTION 3.2", "text": "In this section, we show that the inference scheme described in Equation 4, Section 3.2 emerges as a deterministic relaxation of the probabilistic inference scheme described below. We re-iterate that in Section 3.2, a singleton x∗ is the output of optimization, however the procedure can be motivated from the perspective of the following probabilistic inference scheme.\nLet p(x|y) denote a stochastic inverse map, and let pf (y|x) be a probabilistic forward map. Consider the following optimization problem:\narg max y,p̂ Ex∼p̂(x|y),ŷ∼pf (ŷ|x) [ŷ]\nsuch that H(ŷ|x) ≤ 1, D(p̂(x|y), pθ(x|y)) ≤ 2,\nwhere pθ(x|y) is the probability distribution induced by the learned inverse map (in our case, this corresponds to the distribution of f−1θ (y, z) induced due to randomness in z ∼ p0(·)), pf (x|y) is the learned forward map, H is Shannon entropy, and D is KL-divergence measure between two distributions. In Equation 4, maximization is carried out over the input y to the inverse-map, and the input z which is captured in p̂ in the above optimization problem, i.e. maximization over z in Equation 4 is equivalent to choosing p̂ subject to the choice of singleton/ Dirac-delta p̂. The Lagrangian is given by:\nL(y, p̂; p, pf ) = Ex∼p̂(x|y),ŷ∼pf (ŷ|x) [ŷ] + λ1 ( Ex∼p̂(x|y),ŷ∼pf (ŷ|x) [log pf (ŷ|x)] + 1 ) +\nλ2 ( 2 −D(p̂(x|y), pθ(x|y)))\nIn order to derive Equation 4, we restrict p̂ to the Dirac-delta distribution generated by querying the learned inverse map f−1θ at a specific value of z. Now note that the first term in the Lagrangian corresponds to maximizing the \"reconstructed\" ŷ similarly to the first term in Equation 4. If pf is assumed to be a Gaussian random variable with a fixed variance, then log pf (ŷ|x) = −||ŷ − µ(x)||22, where µ is the mean of the probabilistic forward map. With deterministic forward maps, we make the assumption that µ(x) = y (the queried value of y), which gives us the second term from Equation 4.\nFinally, in order to obtain the log p0(z) term, note that, D(p̂(x|y), pθ(x|y)) ≤ D(δz(·), p0(·)) = − log p0(z) (by the data processing inequality for KL-divergence). Hence, constraining log p0(z) instead of the true divergence gives us a lower bound on L. Maximizing this lower bound (which is the same as Equation 4) hence also maximizes the true Lagrangian L." }, { "heading": "B BIAS-VARIANCE TRADEOFF DURING MIN TRAINING", "text": "In this section, we provide details on the bias-variance tradeoff that arises in MIN training. Our analysis is primarily based on analysing the bias and variance in the `2 norm of the gradient in two cases – if we had access to infinte samples of the distribution over optimal ys, p∗(y) (this is a Dirac-delta distribution when function f(x) evaluations are deterministic, and a distribution with non-zero variance when the function evaluations are stochastic or are corrupted by noise). Let\nL̂p(D) = 1|Y| ∑ yj∼pD(y) p(yj) pD(yj) ( 1 |Nyj | ∑|Nyj | k=1 D̂(xj,k, f −1(yj)) )\ndenote the empirical objective that the inverse map is trained with. We first analyze the variance of the gradient estimator in Lemma B.2. In order to analyse this, we will need the expression for variance of the importance sampling estimator, which is captured in the following Lemma. Lemma B.1 (Variance of IS (Metelli et al., 2018)). Let P and Q be two probability measures on the space (X ,F) such that d2(P ||Q) <∞. Let x1, · · · ,xN be N randomly drawn samples from Q, and f : X → R is a uniformly-bounded function. 
Then for any δ ∈ (0, 1], with probability at least 1 − δ,

E_{x∼P}[f(x)] ∈ [ (1/N) ∑_{i=1}^N w_{P/Q}(x_i) f(x_i) ± ||f||_∞ √( (1 − δ) d_2(P||Q) / (δN) ) ]

Equipped with Lemma B.1, we are ready to show the variance in the gradient due to reweighting to a distribution for which only a few datapoints are observed.

Lemma B.2 (Gradient Variance Bound for MINs). Let the inverse map be given by f^{-1}_θ. Let N_y denote the number of datapoints observed in D with score equal to y, and let L̂_p(D) be as defined above. Let L_p(p_D) = E[L̂_p(D)], where the expectation is computed with respect to the dataset D. Assume that ||∇_θ D̂(x, f^{-1}(y))||_2 ≤ L and var[∇_θ D̂(x, f^{-1}(y))] ≤ σ². Then, there exist some constants C_1, C_2 such that with confidence at least 1 − δ,

E[ ||∇_θ L̂_p(D) − ∇_θ L_p(p_D)||²_2 ] ≤ C_1 E_{y∼p(y)}[ σ² log(1/δ) / N_y ] + C_2 L² (1 − δ) d_2(p||p_D) / (δ ∑_{y∈D} N_y)

Proof. We first bound the range in which the random variable ∇_θ L̂_p(D) can take values as a function of the number of samples observed for each y. All the steps follow with high probability, i.e., with probability greater than 1 − δ,

∇_θ L̂_p(D) = ∇_θ (1/|Y_D|) ∑_{y_j∼p_D(y)} [p(y_j)/p_D(y_j)] (1/N_{y_j}) ∑_{k=1}^{N_{y_j}} D̂(x_{j,k}, f^{-1}(y_j))
 ∈ (1/|Y_D|) ∑_{y_j∼p_D(y)} [ E_{x_{ij}∼p(x|y_j)}[∇_θ D̂(x_{ij}, y_j)] ± √( var(∇_θ D̂(x, y)) · log(1/δ) / (δ · N_{y_j}) ) ]
 ∈ E_{y_j∼p(y)}[ E_{x_{ij}∼p(x|y_j)}[∇_θ D̂(x_{ij}, y_j)] ] ± √( var(∇_θ D̂(x, y)) · log(1/δ) / (δ · N_{y_j}) ) ± √( (1 − δ) · d_2(p(y)||p_D(y)) / (δ · ∑_{y_j∈D} N_{y_j}) )    (5)

where d_2(p||q) is the exponentiated Rényi divergence between the two distributions p and q, i.e., d_2(p(y)||q(y)) = ∫_y q(y) (p(y)/q(y))² dy. The first step follows by applying Hoeffding's inequality to each inner term in the sum corresponding to y_j, and then bounding the variance due to importance sampling of the ys using the concentration bound on the variance of importance sampling from Lemma B.1.

Thus, the gradient can fluctuate in the entire range of values as defined above with high probability. Thus, with high probability, at least 1 − δ,

E[ ||∇_θ L̂_p(D) − ∇_θ L_p(p_D)||²_2 ] ≤ C_1 E_{y∼p(y)}[ σ² log(1/δ) / N_y ] + C_2 L² (1 − δ) d_2(p||p_D) / (δ ∑_{Y_D} N_y)    (6)

The next step is to bound the bias in the gradient that arises due to training on a different distribution than the distribution of optimal ys, p*(y). This can be written as follows:

|| E_{y∼p*(y)}[E_{x∼p(x|y)}[D(x, y)]] − E_{y∼p(y)}[E_{x∼p(x|y)}[D(x, y)]] ||²_2 ≤ D_TV(p, p*)² · L,    (7)

where D_TV is the total variation divergence between the two distributions p and p*, and L is a constant that depends on the maximum magnitude of the divergence measure D. Combining Lemma B.2 and the above result, we prove Theorem 3.1." }, { "heading": "C ARGUMENT FOR ACTIVE DATA COLLECTION VIA RANDOMIZED LABELING", "text": "In this section, we explain in more detail the randomized labeling algorithm described in Section 3.4. We first revisit Thompson sampling, then provide arguments for how our randomized labeling algorithm relates to it, highlight the differences, and then prove a regret bound for this scheme under mild assumptions. Our proof follows commonly available proof strategies for Thompson sampling.

Algorithm 3 Thompson Sampling (TS)
1: Initialize a policy π : X → R, data so far D_0 = {}, and a prior P(θ*|D_0) over the parameters θ of f_θ
2: for step t in {0, . . . , T−1} do
3:   θ_t ∼ P(θ*|D_t) (sample θ_t from the posterior)
4:   Query x_t = arg max_x E[f_{θ_t}(x) | θ* = θ_t] (query based on the posterior probability that x_t is optimal)
5:   Observe outcome: (x_t, f(x_t))
6:   D_{t+1} = D_t ∪ {(x_t, f(x_t))}
7: end for
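
To make the query rule concrete, here is a toy sketch of one TS iteration over a finite candidate set (Python; the linear score model and the model_posterior sampler are illustrative assumptions, not part of our method):

import numpy as np

def thompson_step(X, model_posterior, f_true, D):
    # One iteration of Algorithm 3: sample parameters from the posterior,
    # query the point that is optimal under the sampled model, then observe.
    theta_t = model_posterior(D)                  # theta_t ~ P(theta* | D_t)
    scores = np.array([theta_t @ x for x in X])   # E[f_{theta_t}(x)] for a linear model
    x_t = X[int(np.argmax(scores))]               # posterior-optimal query point
    D.append((x_t, f_true(x_t)))                  # observe and augment the dataset
    return D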
Notation. The TS algorithm queries the true function f at locations (x_t)_{t∈N} and observes true function values at these points, f(x_t). The true function f(x) is one of many possible functions that can be defined over the space R^{|X|}. Instead of representing the true objective function as a point object, it is common to represent a distribution p* over the true function f. This is justified because, often, multiple parameter assignments θ can give us the same overall function. We parameterize f by a set of parameters θ*.

The T-period regret over queries x_1, · · · , x_T is given by the random variable

Regret(T) := ∑_{t=0}^{T−1} [f(x*) − f(x_t)]

Since the selection of x_t can be stochastic, we analyze the Bayes risk (Russo & Van Roy, 2016; Russo et al., 2018): we define the Bayes risk as the expected regret over randomness in choosing x_t, observing f(x_t), and over the prior distribution P(θ*). This definition is consistent with Russo & Van Roy (2016).

E[Regret(T)] = E[ ∑_{t=0}^{T−1} [f(x*) − f(x_t)] ]

Let π^TS be the policy with which Thompson sampling queries new datapoints. We do not make any assumptions on the stochasticity of π^TS; therefore, it can be a stochastic policy in general. However, we make two assumptions (A1, A2). The same assumptions have been made in Russo & Van Roy (2016).

A1: sup_x f(x) − inf_x f(x) ≤ 1 (the difference between the max and min scores is bounded by 1) – if this is not true, we can scale the function values so that this becomes true.

A2: The effective size of X is finite. (By effective size we refer to the intrinsic dimensionality of X. This does not necessarily imply that X should be discrete. For example, under a linear approximation to the score function f_θ(x), i.e., if f_θ(x) = θ^T x, the domain defines a polyhedron, but analyzing just the finite set of extremal points of the polyhedron works out, thus making |X| effectively finite.)

TS (Alg 3) queries the function value at x based on the posterior probability that x is optimal. More formally, the distribution that TS queries x_t from can be written as: π^TS_t = P(x* = ·|D_t). When we use parameters θ to represent the function, this reduces to sampling an input that is optimal with respect to the current posterior at each iteration: x_t ∈ arg max_{x∈X} E[f_{θ_t}(x)|θ* = θ̂_t].

MINs (Alg 2) train inverse maps f^{-1}_θ(·), parameterized as f^{-1}_θ(z, y), where y ∈ R. We call an inverse map optimal if it is uniformly optimal given θ_t, i.e., ||f^{-1}_{θ_t}(max_x f(x)|θ_t) − δ{arg max_x E[f(x)|θ_t]}|| ≤ ε_t, where ε_t is controllable (as is usually the case in supervised learning, where errors can be controlled by cross-validation).

Now, we are ready to show that the regret incurred by the randomized labeling active data collection scheme is bounded by O(√T). Our proof follows the analysis of Thompson sampling presented in Russo & Van Roy (2016). We first define the information ratio and then use it to prove the regret bound.

Information Ratio. Russo & Van Roy (2016) related the expected regret of TS to its expected information gain, i.e., the expected reduction in the entropy of the posterior distribution of X*. The information ratio captures this quantity, and is defined as:

Γ_t := ( E_t[f(x_t) − f(x*)] )² / I_t(x*; (x_t, f(x_t)))

where I(·, ·) is the mutual information between two random variables and all expectations E_t are defined to be conditioned on D_t. If the information ratio is small, Thompson sampling can only incur large regret when it is expected to gain a lot of information about which x is optimal.
Russo & Van Roy (2016) then bounded the expected regret in terms of the maximum amount of information any algorithm could expect to acquire, which they observed is at most the entropy of the prior distribution of the optimal x. Lemma C.1 (Bayes-regret of vanilla TS)(Russo & Van Roy, 2016)). For any T ∈ N, if Γt ≤ Γ (i.e. information ratio is bounded above) a.s. for each t ∈ {1, . . . , T},\nE[Regret(T, πTS)] ≤ √ ΓH (X ∗)T\nWe refer the readers to the proof of Proposition 1 in Russo & Van Roy (2016). The proof presented in Russo & Van Roy (2016) does not rely specifically on the property that the query made by the Thompson sampling algorithm at each iteration xt is posterior optimal, but rather it suffices to have a bound on the maximum value of the information ratio Γt at each iteration t. Thus, if an algorithm chooses to query the true function at a datapoint xt such that these queries always contribute in learning more about the optimal function, i.e. I(·, ·) appearing in the denominator of Γ is always more than a threshold, then information ratio is lower bounded, and that active data collection algorithm will have a sublinear asymptotic regret. We are interested in the case when the active data collection algorithm queries a datapoint xt at iteration t, such that xt is the optimum for a function f̂θ̂t , where θ̂t is a sample from the posterior distribution over θt, i.e. θ̂t lies in the high confidence region of the posterior distribution over θt given the data Dt seen so far. In this case, the mutual information between the optimal datapoint x? and the observed (xt, f(xt)) input-score pair is likely to be greater than 0. More formally,\nIt(x ?, (xt, f(xt))) ≥ 0 ∀ xt = arg max x fθ̂t(x) where P (θ̂t|Dt) ≥ threshold (8)\nThe randomized labeling scheme for active data collection in MINs performs this step. The algorithm samples a bunch of (x, y) datapoints, sythetically generated, – for example, in our experiments, we add noise to the values of x, and randomly pair them with unobserved or rarely observed values of y. If the underlying true function f is smooth, then there exist a finite number of points that are sufficient to uniquely describe this function f . One measure to formally characterize this finite number of points that are needed to uniquely identify all functions in a function class is given by Eluder dimension (Russo & Van Roy).\nBy augmenting synthetic datapoints and training the inverse map on this data, the MIN algorithm ensures that the inverse map is implicitly trained to be an accurate inverse for the unique function fθ̂t that is consistent with the set of points in the dataset Dt and the augmented set St. Which sets of functions can this scheme represent? The functions should be consistent with the data seen so far Dt, and can take randomly distributed values outside of the seen datapoints. This can roughly argued to be a sample from the posterior over functions, which Thompson sampling would have maintained given identical history Dt. Lemma C.2 (Bounded-error training of the posterior-optimal xt preserves asymptotic Bayes-regret). ∀t ∈ N, let x̂t be any input such that f(x̂t) ≥ maxx E[f(x)|Dt] − εt. If MIN chooses to query the true function at x̂t and if the sequence (εt)t∈N satisfies ∑T t=0 εt = O( √ T ), then, the regret from querying this εt-optimal x̂t which is denoted in general as the policy π̂TS is given by E[Regret(T, π̂TS)] = O( √ T ).\nProof. 
This lemma intuitively shows that if posterior-optimal inputs x_t can be \"approximately\" queried at each iteration, we can still maintain sublinear regret. To see this, note:

f(x*) − f(x̂_t) = [f(x*) − f(x_t)] + [f(x_t) − f(x̂_t)]
⟹ E[Regret(T, π̂^TS)] = E[Regret(T, π^TS)] + E[ ∑_{t=1}^T (f(x_t) − f(x̂_t)) ]

The second term can be bounded by its absolute value in the worst case, which amounts to at most ∑_{t=0}^T ε_t extra Bayesian regret. As the Bayesian regret of TS is O(√T) and ∑_{t=0}^T ε_t = O(√T), the new overall regret is also O(√T).

Theorem C.3 (Bayesian regret of the randomized labeling active data collection scheme proposed in Section 3.4 is O(√T)). The regret incurred by the MIN algorithm with randomized labeling is of the order O(√((Γ̄ H(X*) + C) T)).

Proof. Simply put, we will combine the insight about the mutual information I(x*; (x_t, f(x_t))) > 0 and Lemma C.2 in this proof. Non-zero mutual information indicates that we can achieve an O(√T) regret if we query points x_t that are optimal corresponding to some implicitly defined forward function lying in the high-confidence set of the true posterior given the observed datapoints D_t. Lemma C.2 says that if bounded errors are made in fitting the inverse map, the overall regret remains O(√T).

More formally, if ||f^{-1}_{θ_t}(max_x f(x)|θ_t) − δ{arg max_x E[f(x)|θ_t]}|| ≤ δ_t, this means that

|| E_{x_t∼f^{-1}_{θ_t}}[f(x_t)] − E_{x'_t∼π^TS_t}[f(x'_t)] || ≤ ||f(·)||_∞ · ||f^{-1}_{θ_t} − π^TS_t|| ≤ δ_t R_max ≤ ε_t,

and now an application of Lemma C.2 gives us the extra regret incurred. (Note that this also provides us a way to choose the number of training steps for the inverse map.)

Further, note that if we sample x_t at iteration t from a distribution that shares support with the true posterior over optimal x_t (which is used by TS), we still incur sublinear, bounded O(√(Γ̄ H(X*) T)) regret.

In the worst case, the overall bias caused by the approximations will lead to an additive cumulative increase in the Bayesian regret, and hence there exists a constant C ≥ 0 such that E[Regret(T, f^{-1})] = O(√((Γ̄ H(X*) + C) T))." }, { "heading": "D ADDITIONAL EXPERIMENTS AND DETAILS", "text": "D.1 CONTEXTUAL IMAGE OPTIMIZATION

In this set of static dataset experiments, we study contextual MBO tasks on image pixels. Unlike the contextual bandits case, where x corresponds to an image label, here x corresponds to entire images. We construct several tasks. First, we study stroke width optimization on MNIST characters, where the context is the class of the digit we wish to optimize. Results are shown in Figure 4. MINs correctly produce digits of the right class, and achieve an average score over the digit classes of 237.6, whereas the average score of the digits in the dataset is 149.0.

The next task tests the ability of MINs to complete/inpaint unobserved patches of an image given an observed context patch. We use two masks – mask A, where only the top half of the image is visible, and mask B, where only the top one-fourth is visible – to mask out portions of the image and present the masked image as context c to the MIN, with the goal being to produce a valid completion x while still maximizing the score corresponding to the stroke width. We present some sample completions in Figure 4. The quantitative results are presented in Table 6. We find that MINs are effective in terms of score compared to the completions observed in the dataset for the same contexts, while still producing a visibly valid character.
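
For reference, such masked contexts can be constructed with a few lines of array code; the sketch below (numpy) is illustrative and may differ from the exact preprocessing we used:

import numpy as np

def make_masked_context(images, mask_type="A"):
    # Mask A keeps only the top half of each (N, H, W) image visible;
    # mask B keeps only the top one-fourth. Everything else is zeroed out.
    ctx = np.zeros_like(images)
    h = images.shape[1]
    keep = h // 2 if mask_type == "A" else h // 4
    ctx[:, :keep, :] = images[:, :keep, :]
    return ctx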
" }, { "heading": "D ADDITIONAL EXPERIMENTS AND DETAILS", "text": "D.1 CONTEXTUAL IMAGE OPTIMIZATION\nIn this set of static dataset experiments, we study contextual MBO tasks on image pixels. Unlike the contextual bandits case, where x corresponds to an image label, here x corresponds to entire images. We construct several tasks. First, we study stroke width optimization on MNIST characters, where the context is the class of the digit we wish to optimize. Results are shown in Figure 4. MINs correctly produce digits of the right class, and achieve an average score over the digit classes of 237.6, whereas the average score of the digits in the dataset is 149.0.\nThe next task tests the ability of MINs to complete/inpaint unobserved patches of an image given an observed context patch. We use two masks (mask A: only the top half of the image is visible; mask B: only the top one-fourth is visible) to mask out portions of the image and present the masked image as context c to the MIN, with the goal being to produce a valid completion x while still maximizing the score corresponding to the stroke width. We present some sample completions in Figure 4. The quantitative results are presented in Table 6. We find that MINs produce completions that compare favorably, in terms of score, to the completions for the corresponding contexts in the dataset, while still producing visibly valid characters.\nWe evaluate MINs on a complex semantic optimization task on the CelebA (Liu et al., 2015) dataset. We choose a subset of attributes and provide their one-hot encoding as context to the model. The score is equal to the ℓ1 norm of the binary indicator vector for a different subset of attributes disjoint from the context. We present our results in Figure 3. We observe that MINs produce diverse images consistent with the context, and are also able to effectively infer the score function and learn features that maximize it. Some of the model-produced optimized solutions were presented in Section 4 in Figure 3. In this section, we present the produced generations for some other contexts. Figure 7 shows these results.\nD.2 ADDITIONAL RESULTS FOR NON-CONTEXTUAL IMAGE OPTIMIZATION\nIn this section, we present some additional results for non-contextual image optimization problems. We also evaluated our contextual optimization procedure on the CelebA dataset in a non-contextual setting. The reward function is the same as that in the contextual setting, namely the sum of the attributes: wavy hair, no beard, smiling and eyeglasses. We find that MINs are able to successfully produce solutions in this scenario as well. We show some optimized outputs at different iterations from the model in Figure 5.\ncGAN baseline. We compare our MIN model to a cGAN baseline on the IMDB-Wiki faces dataset for the semantic age optimization task. In general, we found that the cGAN model learned to ignore the score value passed as input, even when trained on the entire dataset (without excluding the youngest faces), and behaved almost like a regular unconditional GAN model when queried to produce images x corresponding to the smallest age. We suspect that this is because a person's age alone does not provide enough direct signal for the model to utilize it, unless tricks like the reweighting proposed in Section 3.3, which explicitly focus the model's attention on datapoints of interest, are used. We present the produced optimized x in Figure 6.\nD.3 QUANTITATIVE SCORES FOR NON-CONTEXTUAL MNIST OPTIMIZATION\nIn Figure 8, we highlight the quantitative score values for the stroke width score function (defined as the number of pixels whose intensity exceeds a threshold). Note that MINs achieve the highest average score while still producing samples that resemble valid digits and stay inside the manifold of valid digits, unlike a forward model, which can attain high values of the score function (many pixels turned on) but does not stay on the manifold of valid digits.\nD.4 EXPERIMENTAL DETAILS AND SETUP\nIn this section, we explain the experimental details and the setup of our model. For the experiments involving MNIST and the optimization of benchmark functions, we used the same architecture as a fully connected GAN, where the generator and discriminator are both fully connected networks. We based our code for this part on the open-source implementation (Linder-Norén). For the forward model experiments in these settings, we used a 3-layer feedforward ReLU network with 256 hidden units per layer. For all experiments on CelebA and IMDB-Wiki faces, we used the VGAN (Peng et al., 2019) model and the associated codebase as our starting setup. For experiments on batch contextual bandits, we used a fully connected discriminator and generator for MNIST, and a convolutional generator and a Resnet18-like discriminator for CIFAR-10.
The prediction in this setting is categorical (1 of 10 labels needs to be predicted), so instead of using REINFORCE or derivative-free optimization to train the inverse map, we used the Gumbel-softmax trick (Jang et al., 2016) with a temperature τ = 0.75, which allows training the model with stochastic gradient descent. For the protein fluorescence maximization experiment, we used a 2-layer, 256-unit feed-forward Gumbel-softmax inverse map and a 2-layer feed-forward discriminator.\nWe trained the models from an open-source implementation of BanditNet (Sachdeva), but were unable to reproduce the results reported by Joachims et al. (2018). We therefore report the numbers published in the BanditNet paper in the main text.\nThe temperature hyperparameter τ, which is used to compute the reweighting distribution, is chosen adaptively based on the 90th-percentile score in the dataset: if the difference between y_max and the 90th-percentile score is α, we choose τ = α. This scheme can adaptively change temperatures in the active setting. In order to select the constant that decides whether the bin corresponding to a particular value of y is small or not, we first convert the expression N_y / (N_y + λ) to use densities rather than absolute counts, that is, p̂_D(y) / (p̂_D(y) + λ), where p̂_D(y) is the empirical density of observing y in D, and we then use the constant λ = 0.003. We did not observe much sensitivity to λ values in the range [0.0001, 0.007], all of which performed similarly. We usually fixed the number of bins to 20 for the purpose of reweighting; note, however, that the inverse map was still trained on continuous y values, which helps it extrapolate. (A short sketch of this reweighting computation is given below.)\nIn the active setting, we train two copies of f^{-1} jointly side by side. One of them is trained on the augmented datapoints generated by the randomized labeling procedure, and the other copy is trained only on the real datapoints. This was done to prevent instabilities while training the inverse maps. Training can also be made more incremental in this manner: we do not need to train an inverse map to optimality inside every iteration of the active MIN algorithm, but can instead train both inverse maps for a fixed number of gradient steps.
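The following is a minimal sketch of the reweighting computation described above. The adaptive temperature and the density smoothing follow the constants given in this section, while the exponential form of the weights is an assumed choice for illustration.

```python
import numpy as np

def reweighting_weights(y, n_bins=20, lam=0.003):
    """Sketch of the datapoint reweighting: adaptive temperature from the
    90th-percentile score, and bin-density smoothing p(y) / (p(y) + lam).
    The exponential weight form is an illustrative assumption."""
    tau = max(np.max(y) - np.quantile(y, 0.9), 1e-8)   # adaptive temperature
    edges = np.linspace(np.min(y), np.max(y), n_bins + 1)
    bin_idx = np.clip(np.digitize(y, edges) - 1, 0, n_bins - 1)
    p_y = np.bincount(bin_idx, minlength=n_bins)[bin_idx] / len(y)
    w = np.exp((y - np.max(y)) / tau) * p_y / (p_y + lam)
    return w / w.sum()                                 # normalized weights
```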
" } ]
2019
null
SP:5260bc0d3c1b956f31d8921a51bbc776843cd6ef
[ " This paper tackles an interesting problem, one-class classification or anomaly detection, using a meta-learning approach. The main contribution is to introduce a parameter such that the inner-loop of the meta-learning algorithm better reflects the imbalance which occurs during meta-testing. Results are shown comparing a few simple baselines to both MAML and the modified variant, on a few datasets such as image-based ones (MNIST, miniImageNet), a synthetic dataset, and a real-world time-series example from CNC milling machines.", "One of promising approach to tackle the few-shot problems is to use meta-learning so that the learner can quickly generalize to an unseen task. One-class classification requires only a set of positive examples to discriminate negative examples from positive examples. The current paper addresses a method of meta-training one-class classifiers in the MAML framework when only a handful of positive examples are available. " ]
Although few-shot learning and one-class classification (OCC), i.e. learning a binary classifier with data from only one class, have been separately well studied, their intersection remains rather unexplored. Our work addresses the few-shot OCC problem and presents a meta-learning approach that requires only few data examples from only one class to adapt to unseen tasks. The proposed method builds upon the model-agnostic meta-learning (MAML) algorithm (Finn et al., 2017) and learns a model initialization particularly suited for learning few-shot OCC tasks. This is done by explicitly optimizing for a parameter initialization which requires only a few gradient steps with one-class minibatches to yield a performance increase on class-balanced test data. We provide a theoretical analysis that explains why our approach works in the few-shot OCC scenario, while other meta-learning algorithms, including MAML, fail. Empirical results on six datasets from the image and time-series domains show that our method substantially outperforms both classical OCC and few-shot classification approaches, and demonstrate the ability to quickly learn unseen tasks from only few normal class samples. Moreover, we successfully learn anomaly detectors for a real-world application on sensor readings recorded during industrial manufacturing of workpieces with a CNC milling machine, using a few examples from the normal class.
[]
[ { "authors": [ "Charu C Aggarwal" ], "title": "Outlier analysis", "venue": "In Data mining,", "year": 2015 }, { "authors": [ "Jinwon An", "Sungzoon Cho" ], "title": "Variational autoencoder based anomaly detection using reconstruction probability", "venue": "Special Lecture on IE,", "year": 2015 }, { "authors": [ "Jerone TA Andrews", "Thomas Tanay", "Edward J Morton", "Lewis D Griffin" ], "title": "Transfer representationlearning for anomaly detection", "venue": null, "year": 2016 }, { "authors": [ "Varun Chandola", "Arindam Banerjee", "Vipin Kumar" ], "title": "Anomaly detection: A survey", "venue": "ACM computing surveys (CSUR),", "year": 2009 }, { "authors": [ "Jinghui Chen", "Saket Sathe", "Charu Aggarwal", "Deepak Turaga" ], "title": "Outlier detection with autoencoder ensembles", "venue": "In Proceedings of the 2017 SIAM International Conference on Data Mining,", "year": 2017 }, { "authors": [ "Sarah M Erfani", "Sutharshan Rajasegarar", "Shanika Karunasekera", "Christopher Leckie" ], "title": "Highdimensional and large-scale anomaly detection using a linear one-class svm with deep learning", "venue": "Pattern Recognition,", "year": 2016 }, { "authors": [ "Chelsea Finn", "Sergey Levine" ], "title": "Meta-learning and universality: Deep representations and gradient descent can approximate any learning", "venue": null, "year": 2017 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine LearningVolume", "year": 2017 }, { "authors": [ "Pedro Garcia-Teodoro", "Jesus Diaz-Verdejo", "Gabriel Maciá-Fernández", "Enrique Vázquez" ], "title": "Anomaly-based network intrusion detection: Techniques, systems and challenges. computers", "venue": null, "year": 2009 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Simon Hawkins", "Hongxing He", "Graham Williams", "Rohan Baxter" ], "title": "Outlier detection using replicator neural networks", "venue": "In International Conference on Data Warehousing and Knowledge Discovery,", "year": 2002 }, { "authors": [ "Geoffrey E Hinton", "Ruslan R Salakhutdinov" ], "title": "Reducing the dimensionality of data with neural networks", "venue": null, "year": 2006 }, { "authors": [ "Kyle Hsu", "Sergey Levine", "Chelsea Finn" ], "title": "Unsupervised learning via meta-learning, 2018", "venue": null, "year": 2018 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Eric Jones", "Travis Oliphant", "Pearu Peterson" ], "title": "SciPy: Open source scientific tools for Python, 2001", "venue": "URL http://www.scipy.org/. [Online; accessed ¡today¿]", "year": 2001 }, { "authors": [ "Shehroz S Khan", "Michael G Madden" ], "title": "One-class classification: taxonomy of study and review of techniques", "venue": "The Knowledge Engineering Review,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Gregory R. 
Koch" ], "title": "Siamese neural networks for one-shot image recognition", "venue": null, "year": 2015 }, { "authors": [ "Jedrzej Kozerawski", "Matthew Turk. Clear" ], "title": "Cumulative learning for one-shot one-class image recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Brenden Lake", "Ruslan Salakhutdinov", "Jason Gross", "Joshua Tenenbaum" ], "title": "One shot learning of simple visual concepts", "venue": "In Proceedings of the annual meeting of the cognitive science society,", "year": 2011 }, { "authors": [ "Brenden M. Lake", "Ruslan Salakhutdinov", "Joshua B. Tenenbaum" ], "title": "Human-level concept learning through probabilistic program induction", "venue": "Science, 350(6266):1332–1338,", "year": 2015 }, { "authors": [ "Yann LeCun", "Corinna Cortes", "Christopher J.C. Burges" ], "title": "The mnist database of handwritten digits", "venue": "http://yann.lecun.com/exdb/mnist/", "year": 2010 }, { "authors": [ "Fei Tony Liu", "Kai Ming Ting", "Zhi-Hua Zhou" ], "title": "Isolation forest", "venue": "Eighth IEEE International Conference on Data Mining,", "year": 2008 }, { "authors": [ "Mary M Moya", "Mark W Koch", "Larry D Hostetler" ], "title": "One-class classifier networks for target recognition applications", "venue": "NASA STI/Recon Technical Report N,", "year": 1993 }, { "authors": [ "Alex Nichol", "John Schulman" ], "title": "Reptile: a scalable metalearning algorithm", "venue": "arXiv preprint arXiv:1803.02999,", "year": 2018 }, { "authors": [ "Mahesh Pal", "Giles M Foody" ], "title": "Feature selection for classification of hyperspectral data by svm", "venue": "IEEE Transactions on Geoscience and Remote Sensing,", "year": 2010 }, { "authors": [ "Marcel Prastawa", "Elizabeth Bullitt", "Sean Ho", "Guido Gerig" ], "title": "A brain tumor segmentation framework based on outlier detection", "venue": "Medical image analysis,", "year": 2004 }, { "authors": [ "Mahdyar Ravanbakhsh", "Moin Nabi", "Enver Sangineto", "Lucio Marcenaro", "Carlo Regazzoni", "Nicu Sebe" ], "title": "Abnormal event detection in videos using generative adversarial nets", "venue": "IEEE International Conference on Image Processing (ICIP), Sep 2017. doi: 10.1109/icip.2017.8296547. URL http://dx.doi.org/10.1109/icip.2017.8296547", "year": 2017 }, { "authors": [ "Sachin Ravi", "Hugo Larochelle" ], "title": "Optimization as a model for few-shot learning", "venue": null, "year": 2016 }, { "authors": [ "Lukas Ruff", "Robert Vandermeulen", "Nico Goernitz", "Lucas Deecke", "Shoaib Ahmed Siddiqui", "Alexander Binder", "Emmanuel Müller", "Marius Kloft" ], "title": "Deep one-class classification", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Andrei A. 
Rusu", "Dushyant Rao", "Jakub Sygnowski", "Oriol Vinyals", "Razvan Pascanu", "Simon Osindero", "Raia Hadsell" ], "title": "Meta-learning with latent embedding optimization, 2018", "venue": null, "year": 2018 }, { "authors": [ "Mohammad Sabokrou", "Mohammad Khalooei", "Mahmood Fathy", "Ehsan Adeli" ], "title": "Adversarially learned one-class classifier for novelty detection", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Thomas Schlegl", "Philipp Seeböck", "Sebastian M Waldstein", "Ursula Schmidt-Erfurth", "Georg Langs" ], "title": "Unsupervised anomaly detection with generative adversarial networks to guide marker discovery", "venue": "In International Conference on Information Processing in Medical Imaging,", "year": 2017 }, { "authors": [ "Bernhard Schölkopf", "John C Platt", "John Shawe-Taylor", "Alex J Smola", "Robert C Williamson" ], "title": "Estimating the support of a high-dimensional distribution", "venue": "Neural computation,", "year": 2001 }, { "authors": [ "Luke Scime", "Jack Beuth" ], "title": "Anomaly detection and classification in a laser powder bed additive manufacturing process using a trained computer vision algorithm", "venue": "Additive Manufacturing,", "year": 2018 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Dan Xu", "Elisa Ricci", "Yan Yan", "Jingkuan Song", "Nicu Sebe" ], "title": "Learning deep representations", "venue": null, "year": 2016 }, { "authors": [ "Hsu" ], "title": "Under review as a conference paper at ICLR 2020 Figure 1: Adaptation to test task Ts from the parameter initializations yielded by OC-MAML and MAML B EXPERIMENT DETAILS For MT-MNIST, we use the same 4-block convolutional architecture", "venue": null, "year": 2018 }, { "authors": [ "Finn" ], "title": "We also do not include the batch normalization layers for the two latter datasets. On the STS datasets, the model architecture used is composed of 3 modules, each including a 5 x 5 convolutional layer with 32 filters, a 2 x 2 pooling and a ReLU non-linearity. The model architecture used for the CNC-MMD experiments is composed of 4 of these aforementioned modules, except that the convolutional layers in the last two modules include 64 filters. The last layer of all architectures", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "The anomaly detection (AD) task (Chandola et al., 2009; Aggarwal, 2015) consists in differentiating between normal and abnormal data samples. AD applications are common in various domains that involve different data types, including medical diagnosis (Prastawa et al., 2004), cybersecurity (Garcia-Teodoro et al., 2009) and quality control in industrial manufacturing (Scime & Beuth, 2018). Due to the rarity of anomalies, the data underlying AD problems exhibits high class-imbalance. Therefore, AD problems are usually formulated as one-class classification (OCC) problems (Moya et al., 1993), where either only a few or no anomalous data samples are available for training the model (Khan & Madden, 2014). While most of the developed approaches (Khan & Madden, 2014) require a substantial amount of normal data to yield good generalization, in many real-world applications, e.g. in industrial manufacturing, only small datasets are available. Data scarcity can have many reasons: data collection itself might be expensive, e.g. in healthcare, or happens only gradually, such as in a cold-start situation. To enable learning from few examples, various viable meta-learning approaches (Lake et al., 2011; Ravi & Larochelle, 2016; Finn et al., 2017) have been developed. However, they rely on having examples from each of the classification task’s classes, which prevents their application to OCC tasks. To the best of our knowledge, the few-shot OCC (FS-OCC) problem has only been addressed by Kozerawski & Turk (2018) in the image domain.\nOur contribution is threefold: Firstly, we show that classical OCC approaches fail in the few-shot data regime. Secondly, we provide a theoretical analysis showing that classical gradient-based metalearning algorithms do not yield initializations suitable for OCC tasks and that second-order derivatives are needed to optimize for such initializations. Thirdly, we propose one-class model-agnostic meta-learning (OC-MAML), a data-domain-agnostic algorithm that quickly learns FS-OCC tasks, to serve as a first, simple and strong baseline for future research in the understudied FS-OCC problem.\nOC-MAML builds upon model-agnostic meta-learning (MAML) (Finn et al., 2017), which is a meta-learning method that explicitly optimizes for few-shot learning and yields a model initialization\nthat enables quick adaptation to a new task using only few of its datapoints. Like MAML, OCMAML yields model parameters that are easily adaptable to unseen tasks. The difference is that the model initialization delivered by OC-MAML is particularly suited for adaptation to OCC tasks and hence requires few examples from only one class of the target task for good adaptation. We provide a theoretical analysis that shows that OC-MAML explicitly optimizes for parameter initializations which yield performance increase on class-balanced test data by taking only a few gradient steps with one-class minibatches. This is done by maximizing the inner product of gradients computed on different minibatches with different class-imbalance rates. While recent meta-learning approaches focused on the few-shot learning problem, i.e. learning to learn with few examples, we extend their use to the OCC problem, i.e. 
We empirically validate our theoretical analysis on six datasets from the image and time-series domains, and demonstrate the robustness and maturity of our approach for real-world application by successfully testing it on a real-world dataset of sensor readings recorded during manufacturing of metal workpieces with a CNC milling machine." }, { "heading": "2 APPROACH", "text": "" }, { "heading": "2.1 PROBLEM STATEMENT", "text": "Our goal is to learn a one-class classification (OCC) task using only a few examples from the normal class. In the following, we first discuss the unique challenges of the few-shot one-class classification (FS-OCC) problem. Subsequently, we formulate the FS-OCC problem as a meta-learning problem.\nIn order to perform one-class classification, i.e. differentiate between in-class and out-of-class examples, approximating a generalized decision boundary for the normal class is necessary. Learning such a class decision boundary in the few-shot regime can be especially challenging for the following reasons. On the one hand, if the model overfits to the few available datapoints, the class decision boundary would be too restrictive, which would prevent generalization to unseen examples. As a result, some normal samples would be predicted as anomalies. On the other hand, if the model overfits to the majority class, e.g. predicting almost everything as normal, the class decision boundary would overgeneralize, and out-of-class (anomalous) examples would not be detected.\nIn our meta-learning problem formulation, we assume access to data from classification tasks T_i^{train} sampled from a task distribution p(T) related to our target OCC tasks. In the few-shot classification context, N-way K-shot learning tasks are usually used to test the learning procedure, in our case the model initialization, yielded by the meta-learning algorithm. An N-way K-shot classification task includes K examples from each of the N classes that are used for learning this task, after which the trained classifier is tested on a disjoint set of data (Vinyals et al., 2016). When the target task is an OCC task, only examples from one class are available for training, which can be viewed as a 1-way K-shot classification task. In order to align with the AD problem, the available examples have to belong to the normal (majority) class, which usually has a lower variance than the anomalous (minority) class. This problem formulation is a prototype for a practical use case where an application-specific anomaly detector is needed and only few normal class examples are available." }, { "heading": "2.2 MODEL-AGNOSTIC META-LEARNING", "text": "Model-agnostic meta-learning (MAML) (Finn et al., 2017) is an optimization-based meta-learning algorithm upon which we build in our present work. MAML learns a model initialization that enables quick adaptation to unseen tasks using only few data samples. For that, MAML trains a model explicitly for few-shot learning on tasks T_i coming from the same task distribution p(T) as the unseen target task T_test. In order to assess the model's adaptation ability to unseen tasks, the available tasks are divided into mutually disjoint task sets: one for meta-training S^tr, one for meta-validation S^val and one for meta-testing S^test. Each task T_i is divided into two disjoint sets of data, each of which is used for a particular MAML operation: D^tr is used for adaptation and D^val is used for validation, i.e. evaluating the adaptation.
The adaptation procedure of a model f_θ to a particular task T_i consists in taking one (or more) gradient descent step(s) using few datapoints sampled from D^tr. We also refer to the adaptation updates as inner loop updates.\nA good measure for the suitability of the initialization parameters θ for few-shot adaptation to a considered task T_i is the loss L^{val}_{T_i}(f_{θ'_i}), which is computed on the validation set D^{val} using the task-specific adapted model f_{θ'_i}. In order to optimize for few-shot learning, the model parameters θ are updated by minimizing the aforementioned loss across all meta-training tasks. This update, called the outer loop update, can be expressed as:\n\theta \leftarrow \theta - \beta \nabla_\theta \sum_{T_i \sim p(T)} L^{val}_{T_i}(f_{\theta'_i}) \qquad (1)\nwhere β is the learning rate used for the outer loop. In order to avoid meta-overfitting, i.e. overfitting to the meta-training tasks, model selection can be done by conducting validation episodes using tasks from S^val throughout meta-training. At meta-test time, the few-shot adaptation to unseen tasks from S^test is evaluated. We note that, in the case of few-shot classification, K datapoints from each class are sampled from D^tr for the adaptation, during training, validation and testing." }, { "heading": "2.3 ONE-CLASS MODEL-AGNOSTIC META-LEARNING", "text": "" }, { "heading": "2.3.1 ALGORITHM", "text": "The primary contribution of our work is to show that second-order gradient-based meta-learning is a viable approach to the underexplored few-shot one-class classification (FS-OCC) problem. We achieve this by adequately modifying the objective of the adaptation step, i.e. the inner loop updates, of the MAML algorithm. We choose to build upon gradient-based meta-learning algorithms because these were shown to be universal learning algorithm approximators (Finn & Levine, 2017), which means that they could approximate a learning algorithm tailored for FS-OCC. As explained in Section 2.2, MAML optimizes explicitly for few-shot adaptation by creating and using auxiliary tasks that have the same characteristic as the target tasks, in this case tasks that include only few datapoints for training. Analogously, OC-MAML trains explicitly for quick adaptation to OCC tasks by creating OCC auxiliary tasks for meta-training. Concretely, this is done by modifying the class-imbalance rate (CIR) of the inner loop data batches to match the one of the test task. The meta-training procedure of OC-MAML is described in Algorithm 1 in Appendix A.\nAs described in Section 1, OCC problems are binary classification scenarios where only few or no minority class samples are available. In order to address both of these cases, we introduce a hyperparameter (c) which sets the CIR of the batch sampled for the inner updates. Hereby, c gives the percentage of the samples belonging to the minority (anomalous) class w.r.t. the total number of samples, e.g. setting c = 0% means that only majority class samples are contained in the data batch. We focus on this latter extreme case, where no anomalous samples are available for learning.\nThe key difference between MAML and OC-MAML is in the sampling operation of the inner loop batch (operation 5 in Algorithm 1 in Appendix A). By reducing the size of the batch used for the adaptation (via the hyperparameter K), MAML trains for few-shot adaptation. OC-MAML extends this approach to train for few-shot one-class adaptation by reducing the CIR of the batch used for adaptation (via the hyperparameter c). In order to evaluate the performance of the adapted model on both classes, we use a class-balanced validation batch B' for the outer loop updates. This way, we maximize the performance of the model in recognizing both classes after having seen examples from only one class during adaptation. Using OCC tasks for adaptation during meta-training favors model initializations that enable a quick adaptation to OCC tasks over those that require class-balanced tasks. From a representation learning standpoint, OC-MAML learns representations that are not only broadly suitable for the data underlying p(T), but also particularly suited for OCC tasks. In Section 2.3.2, we discuss the unique characteristics of the model initializations yielded by OC-MAML and explain why adapting first-order meta-learning algorithms to the OCC scenario does not yield the targeted results.
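As a complement to Algorithm 1 (Appendix A), the following is a minimal sketch of a single OC-MAML meta-training episode for a linear model in PyTorch. The batch-sampling helper, the model form, and the loss are illustrative assumptions; labels are assumed to be y ∈ {0, 1}, with 1 marking the anomalous class.

```python
import torch
import torch.nn.functional as F

def sample_batch(X, y, k, c):
    """Sample k datapoints with class-imbalance rate c (the fraction of
    anomalous examples); c=0.0 yields a one-class (normal-only) batch."""
    n_anom = int(round(c * k))
    normal = torch.nonzero(y == 0).flatten()
    anom = torch.nonzero(y == 1).flatten()
    idx = torch.cat([normal[torch.randperm(len(normal))[:k - n_anom]],
                     anom[torch.randperm(len(anom))[:n_anom]]])
    return X[idx], y[idx].float()

def oc_maml_episode(w, b, task, k=10, c=0.0, q=32, alpha=0.05):
    """One episode: adapt (w, b) on an inner batch with CIR c, then return
    the outer loss on a class-balanced validation batch. create_graph=True
    keeps the second-order terms that Section 2.3.2 shows are essential."""
    Xa, ya = sample_batch(*task["train"], k=k, c=c)       # inner batch, CIR = c
    inner_loss = F.binary_cross_entropy_with_logits(Xa @ w + b, ya)
    gw, gb = torch.autograd.grad(inner_loss, (w, b), create_graph=True)
    w_i, b_i = w - alpha * gw, b - alpha * gb             # adapted parameters
    Xv, yv = sample_batch(*task["val"], k=q, c=0.5)       # balanced outer batch
    return F.binary_cross_entropy_with_logits(Xv @ w_i + b_i, yv)
```

In practice, the returned loss would be summed over a batch of tasks and backpropagated to update the initialization θ = (w, b) with the outer learning rate β.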
" }, { "heading": "2.3.2 THEORETICAL ANALYSIS: WHY DOES OC-MAML WORK?", "text": "In this section we give a theoretical explanation of why OC-MAML works and why it is a more suitable approach than MAML for the few-shot one-class classification (FS-OCC) problem. To address the latter problem, we aim to find a model parameter initialization from which adaptation using few data examples from only one class yields a good performance on both classes, i.e. good generalization to the class-balanced task. We additionally demonstrate that adapting first-order meta-learning algorithms, e.g. First-Order MAML (FOMAML) (Finn et al., 2017) and Reptile (Nichol & Schulman, 2018), to the OCC scenario as done in OC-MAML does not yield initializations with the desired characteristics, as is the case for OC-MAML.\ng_{MAML} = g_2 - \alpha H_2 g_1 - \alpha H_1 g_2 + O(\alpha^2) = g_2 - \alpha \frac{\partial (g_1 \cdot g_2)}{\partial \phi_1} + O(\alpha^2) \qquad (2)\nBy using a Taylor series expansion, Nichol & Schulman (2018) approximate the gradient used in the MAML update. For simplicity of exposition, in Equation 2 we give their results for the case where only 2 gradient-based updates are performed, i.e. one adaptation update on a minibatch including K datapoints from D^tr and one meta-update on a minibatch including Q datapoints from D^val. We use the same notation as Nichol & Schulman (2018), where g_i and H_i denote the gradient and Hessian computed on the i-th minibatch at the initial parameter point φ_1, and α gives the learning rate. Here it is assumed that the same learning rate is used for the adaptation and meta-updates.\nIn Equation 2, Nichol & Schulman (2018) demonstrate that MAML partially optimizes for increasing the inner product of the gradients computed on different minibatches. In fact, when gradients from different minibatches have a positive inner product, taking a gradient step using one of them yields a performance increase on the other (Nichol & Schulman, 2018). Equation 2 holds also for OC-MAML. However, in OC-MAML the minibatches 1 and 2 have different class-imbalance rates (CIRs), since the first minibatch includes data from only one class and the second minibatch is class-balanced. Hence, it optimizes for increasing the inner product of the gradients computed on different minibatches with different CIRs, while MAML does the same but for different minibatches with the same CIR, namely c = 50%. Consequently, OC-MAML optimizes for a parameter initialization from which taking one (or few) gradient step(s) with one-class minibatch(es) results in a performance increase on class-balanced data. In contrast, MAML optimizes for a parameter initialization that requires class-balanced minibatches to yield the same effect (Figure 1 in Appendix A). When adapting to OCC tasks, however, only examples from one class are available. We conclude, therefore, that using minibatches with different CIRs for meta-training, as done in OC-MAML, yields parameter initializations that are more suitable for adapting to OCC tasks.
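As a small numerical sanity check of this expansion, the snippet below compares the exact MAML gradient with the approximation of Equation 2, using toy quadratic losses as stand-ins for the two minibatch losses (an illustrative assumption).

```python
import torch

torch.manual_seed(0)
d, alpha = 5, 1e-3
A1, A2 = torch.randn(d, d), torch.randn(d, d)
M1, M2 = A1 @ A1.T, A2 @ A2.T           # Hessians of the two toy losses
L1 = lambda p: 0.5 * p @ M1 @ p          # "inner" minibatch loss
L2 = lambda p: 0.5 * p @ M2 @ p          # "outer" minibatch loss
phi = torch.randn(d, requires_grad=True)

# Exact MAML gradient: backpropagate L2 through the adapted parameters.
g1 = torch.autograd.grad(L1(phi), phi, create_graph=True)[0]
maml_grad = torch.autograd.grad(L2(phi - alpha * g1), phi, retain_graph=True)[0]

# Approximation from Equation 2: g2 - alpha*H2@g1 - alpha*H1@g2.
g2 = torch.autograd.grad(L2(phi), phi, create_graph=True)[0]
H2g1 = torch.autograd.grad(g2 @ g1.detach(), phi, retain_graph=True)[0]
H1g2 = torch.autograd.grad(g1 @ g2.detach(), phi)[0]
approx = g2.detach() - alpha * H2g1 - alpha * H1g2
print(torch.norm(maml_grad - approx))    # O(alpha^2): tiny versus ||maml_grad||
```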
A natural question is whether applying our modification of MAML, i.e. using only data from the normal class for adaptation during meta-training, to other gradient-based meta-learning algorithms would yield the same desired effect. We investigate this for First-Order MAML (FOMAML) (Finn et al., 2017) and Reptile (Nichol & Schulman, 2018). FOMAML is a first-order approximation of MAML which ignores the second-derivative terms. Reptile is also a first-order meta-learning algorithm that learns an initialization that enables fast adaptation to test tasks using only few examples from each class. In the following we demonstrate that adapting the FOMAML and Reptile algorithms to the one-class classification scenario, to which we refer as OC-FOMAML and OC-Reptile, does not result in optimizing for an initialization suitable for OCC tasks, as is the case for OC-MAML. We note that for OC-Reptile, the first (N − 1) batches contain examples from only one class and the last (N-th) batch is class-balanced. The approximated gradients used in the FOMAML and Reptile updates are given by Equations 3 and 4 (Nichol & Schulman, 2018), respectively.\ng_{FOMAML} = g_2 - \alpha H_2 g_1 + O(\alpha^2) \qquad (3)\ng_{Reptile} = g_1 + g_2 - \alpha H_2 g_1 + O(\alpha^2) \qquad (4)\nWe note that these equations hold also for OC-FOMAML and OC-Reptile. By taking the expectation over minibatch sampling \mathbb{E}_{\tau,1,2} for a meta-training task τ and two class-balanced minibatches, Nichol & Schulman (2018) establish that \mathbb{E}_{\tau,1,2}[H_1 g_2] = \mathbb{E}_{\tau,1,2}[H_2 g_1]. Averaging the two sides of the latter equation results in the following:\n\mathbb{E}_{\tau,1,2}[H_2 g_1] = \frac{1}{2}\mathbb{E}_{\tau,1,2}[H_1 g_2 + H_2 g_1] = \frac{1}{2}\mathbb{E}_{\tau,1,2}\Big[\frac{\partial (g_1 \cdot g_2)}{\partial \phi_1}\Big] \qquad (5)\nEquation 5 shows that, in expectation, FOMAML and Reptile, like MAML, optimize for increasing the inner product of the gradients computed on different minibatches with the same CIR. However, when the minibatches 1 and 2 have different CIRs, which is the case for OC-FOMAML and OC-Reptile, \mathbb{E}_{\tau,1,2}[H_1 g_2] \neq \mathbb{E}_{\tau,1,2}[H_2 g_1] and therefore \mathbb{E}_{\tau,1,2}[H_2 g_1] \neq \frac{1}{2}\mathbb{E}_{\tau,1,2}\big[\frac{\partial (g_1 \cdot g_2)}{\partial \phi_1}\big]. Hence, even though, similarly to OC-MAML, OC-FOMAML and OC-Reptile use minibatches with different CIRs for meta-training, contrarily to OC-MAML they do not optimize for increasing the inner product of the gradients computed on different minibatches with different CIRs. The second-derivative term H_1 g_2 is thus necessary to optimize for an initialization from which a performance increase on a class-balanced task is yielded by taking few gradient steps using only data from one class." }, { "heading": "3 RELATED WORKS", "text": "Our proposed method addresses the few-shot one-class classification (FS-OCC) problem, i.e. solving binary classification problems using only few datapoints from only one class. To the best of our knowledge, this problem was only addressed by Kozerawski & Turk (2018), and exclusively in the image data domain. Kozerawski & Turk (2018) train a feed-forward neural network (FFNN) to learn a transformation from feature vectors, extracted by a CNN pre-trained on ILSVRC 2014 (Russakovsky et al., 2015), to SVM decision boundaries. Hereby, the FFNN is trained on ILSVRC 2012.
At test time, an SVM boundary is inferred by using one image of one class from the test task, and this boundary is then used to classify the test examples. This approach is specific to the image domain since it relies on the availability of very large, well-annotated datasets and uses data augmentation techniques specific to the image domain, e.g. mirroring. OC-MAML offers a more general approach to FS-OCC since it is data-domain-agnostic. In fact, it does not require a pre-trained feature extraction model, which might not be available for some data domains, e.g. sensor readings." }, { "heading": "3.1 FEW-SHOT CLASSIFICATION", "text": "Recent few-shot classification approaches may be broadly categorized into optimization-based methods (Ravi & Larochelle, 2016; Finn et al., 2017; Nichol & Schulman, 2018) and metric-based methods (Koch, 2015; Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2018). The optimization-based approaches aim to learn an optimization algorithm (Ravi & Larochelle, 2016) and/or a parameter initialization (Finn et al., 2017; Nichol & Schulman, 2018) that is tailored for few-shot learning. Metric-based techniques learn a metric space where samples belonging to the same class are close together, which facilitates few-shot classification (Koch, 2015; Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2018). Rusu et al. (2018) develop a hybrid method that combines the advantages of both categories. Prior meta-learning approaches to few-shot classification addressed the N-way K-shot classification problem described in Section 2.1, i.e. they consider only class-balanced test classification tasks. Optimization-based techniques require these samples to finetune the learned initialization. In the metric-based methods, these samples are necessary to compute class prototypes (Snell et al., 2017), embeddings needed for verification (Koch, 2015) or relation scores (Sung et al., 2018). Our approach, however, requires only samples from one of the test task's classes for learning. Moreover, while the evaluation of the previous approaches in the classification context was limited to the image domain, we additionally validate OC-MAML on datasets from the time-series domain." }, { "heading": "3.2 ONE-CLASS CLASSIFICATION", "text": "Classical OCC approaches rely on SVMs (Schölkopf et al., 2001; Tax & Duin, 2004) to distinguish between normal and abnormal samples. Pal & Foody (2010) show that the classification accuracy of SVMs decreases with an increasing number of input features, particularly when small datasets are available for training. Hybrid approaches combining SVM-based techniques with feature extractors were developed to compress the input samples into lower-dimensional representations (Xu et al., 2015; Erfani et al., 2016; Andrews et al., 2016). Fully deep methods that jointly perform the feature extraction step and the OCC step have also been developed (Ruff et al., 2018). Another category of approaches to OCC uses the reconstruction error of autoencoders (Hinton & Salakhutdinov, 2006) trained with only normal class examples as an anomaly score (Hawkins et al., 2002; An & Cho, 2015; Chen et al., 2017). Yet, determining a decision threshold for such an anomaly score requires labeled data from both classes. Other, more recent techniques rely on GANs (Goodfellow et al., 2014) to perform OCC (Schlegl et al., 2017; Ravanbakhsh et al., 2017; Sabokrou et al., 2018).
The aforementioned hybrid and fully deep approaches require a considerable amount of data from the OCC task to train the typically highly parametrized models to learn features specific to the normal class. By leveraging auxiliary OCC tasks and explicitly optimizing for few-shot learning, OC-MAML learns a representation that can be adapted to unseen OCC tasks with only few examples." }, { "heading": "4 EXPERIMENTAL EVALUATION", "text": "The conducted experiments1 aim to address the following key questions: (a) How does OC-MAML perform compared to classical one-class classification (OCC) approaches in the few-shot (FS) data regime? (b) Does using OCC tasks for meta-training improve the adaptation to such tasks, as is the case for few-shot tasks (Finn et al., 2017), and do our theoretical findings (Section 2.3.2) about the differences between the MAML and OC-MAML initializations hold in practice? (c) How does OC-MAML compare to the first-order meta-learning algorithms adapted to the OCC scenario, i.e. OC-FOMAML and OC-Reptile (Section 2.3.2)? (d) How does OC-MAML perform in FS-OCC problems from the time-series domain, which is understudied in the few-shot learning literature?" }, { "heading": "4.1 BASELINES AND DATASETS", "text": "This section provides information about the baselines and datasets we use in our experimental evaluation. We compare OC-MAML to the classical one-class classification (OCC) approaches One-Class SVM (OC-SVM) (Schölkopf et al., 2001) and Isolation Forest (IF) (Liu et al., 2008) (Question (a)), which we fit to the adaptation set of the test task. Here, we apply PCA to reduce the dimensionality of the data by choosing the minimum number of eigenvectors so that at least 95% of the variance is preserved, as done by Erfani et al. (2016). We additionally tune the inverse length scale γ by using 10% of the test set, as done by Ruff et al. (2018), which gives OC-SVM a supervised advantage compared to the other methods. For a fairer comparison to OC-MAML, we let these methods also benefit from the meta-training and meta-validation tasks by additionally training them on embeddings inferred by feature extractors learned on these tasks. Here, we train two types of feature extractors on the meta-training tasks: one is trained in a Multi-Task-Learning (MTL) setting and the other is trained using the ”Finetune” baseline (FB) (Triantafillou et al., 2019). FB is a few-shot classification approach where one multi-class classifier is trained with all the classes available in all meta-training tasks, after which an output layer is finetuned with the few available examples of the target task on top of the learned feature extractor. Moreover, we compare OC-MAML to class-balanced meta-learning algorithms, namely MAML, FOMAML and Reptile, as well as first-order meta-learning algorithms adapted to the OCC scenario, i.e. OC-FOMAML and OC-Reptile (Questions (b) and (c)). Experimental details are provided in Appendix B.\nWe evaluate our approach on six datasets, including 3 from the image domain and 3 from the time-series domain. In the image domain we use 2 few-shot learning benchmark datasets, namely MiniImageNet (Ravi & Larochelle, 2016) and Omniglot (Lake et al., 2015), and 1 OCC benchmark dataset, the Multi-Task MNIST (MT-MNIST) dataset. To adapt the datasets to the OCC scenario, we create binary classification tasks, where the normal class contains examples from one class of the initial dataset and the anomalous class contains examples from multiple other classes.
We create 9 different datasets based on MNIST, where the meta-testing task of each dataset consists in differentiating between a certain digit and the others. We use the same (10th) task for meta-validation in all datasets. Since most of the time-series datasets for anomaly detection include data from only one domain and only one normal class, adapting them to the meta-learning problem formulation, where several different tasks are required, is not possible. Therefore, we create two synthetic time-series (STS) datasets, each including 30 synthetically generated time-series that underlie 30 different anomaly detection tasks, to assess the suitability of OC-MAML to time-series data (Question (d)). The time-series underlying the datasets are sawtooth waveforms (STS-Sawtooth) and sine functions (STS-Sine). We propose the STS datasets as benchmark datasets for the few-shot (one-class) classification problem in the time-series domain. Finally, we validate OC-MAML on a real-world anomaly detection dataset of sensor readings recorded during industrial manufacturing using a CNC milling machine. Various consecutive roughing and finishing operations (pockets, edges, holes, surface finish) were performed on ca. 100 aluminium workpieces to record the CNC Milling Machine Data (CNC-MMD). In Appendix C, we give details about all 6 datasets, the task creation procedures adopted to adapt them to the OCC case, as well as the generation of the STS datasets.\n1Our OC-MAML implementation and experimental evaluation will be made public upon paper acceptance." }, { "heading": "4.2 RESULTS AND DISCUSSION", "text": "Our results of the comparison between OC-MAML and the classical OCC approaches on the 3 image datasets and on the STS-Sawtooth dataset are summarized in Table 1. OC-MAML consistently outperforms all baselines across all datasets and on both adaptation set sizes. While FB and MTL yield relatively good performance when adapting to class-balanced tasks (c = 50%), they completely fail in adapting to OCC tasks. On the MT-MNIST dataset and the STS-Sawtooth dataset, some of the baselines that combine a feature extractor and a shallow model yield high performance when the adaptation set size is K = 10. Our results of the comparison between OC-MAML and the classical few-shot classification approaches on the 3 image datasets and on the STS-Sawtooth dataset are summarized in Table 2. The results on the other 8 MT-MNIST datasets and on the STS-Sine dataset are presented in Appendix D and are consistent with the results in Tables 1 and 2. We observe that OC-MAML consistently outperforms the other meta-learning algorithms by a substantial margin on all datasets and for both adaptation set sizes. This confirms our theoretical findings (Section 2.3.2) that the initializations yielded by class-balanced meta-learning algorithms as well as OC-FOMAML and OC-Reptile are not optimized for adaptation using data from only one class. The latter yield test accuracies close to 50%, showing that they overfit to the normal class (Table 2, top).\nIn an attempt to increase the performance of the other meta-learning algorithms in the OCC scenario, we add a batch normalization (BN) (Ioffe & Szegedy, 2015) layer immediately before the output layer of the network. This BN operation standardizes the latent features using the mean and standard deviation of the K datapoints available for adaptation, which all belong to the normal class. As a result, this layer would output features with mean close to 0 and standard deviation close to 1 for normal class examples. In contrast, anomalous examples would yield features with other statistics, which simplifies their detection. We hypothesize that by enforcing a mapping of the data to a latent space standardized only by examples from the normal class, the anomalies would clearly fall outside the normal-class distribution, making their detection easier. We note that the BN layer is used during meta-training as well. Hereby, we fix the learnable scaling (γ) and centering (β) parameters of the BN layer to 1 and 0, respectively, to prevent it from shifting the standardized distribution.
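A minimal sketch of this modification is given below; the feature dimension of 64 is a hypothetical placeholder for the penultimate layer width of the respective architecture.

```python
import torch.nn as nn

# Non-trainable BN placed immediately before the output layer:
# affine=False freezes the scaling (gamma=1) and centering (beta=0)
# parameters, and track_running_stats=False makes the layer standardize
# with the statistics of the current (one-class) adaptation batch.
classifier_head = nn.Sequential(
    nn.BatchNorm1d(64, affine=False, track_running_stats=False),
    nn.Linear(64, 2),
)
```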
We find that this simple modification increases the performance of the other meta-learning algorithms on all image datasets. However, OC-MAML without BN still yields the highest results, with only one exception. The higher performance increase when a bigger adaptation set is available (K = 10) confirms our hypothesis that enforcing a mapping of the data to a latent space standardized only by examples from the normal class makes the detection of the anomalies easier. In fact, using more examples yields more accurate mean and standard deviation measures, which enables a better approximation of the distribution of the normal class and hence leads to an improved detection of the anomalies. We also tested these algorithms on networks including a trainable BN layer after each convolutional layer. This yielded results comparable to just adding one non-trainable BN layer before the output layer. Even though some of the meta-learning algorithms and OCC approaches sometimes outperform OC-MAML (Tables 2, 5, 8, 9), they do not consistently yield high performance in learning FS-OCC tasks across several datasets, as OC-MAML does. We note that this happens only on a few MT-MNIST datasets and attribute it to the high overlap between the digit classes underlying the meta-training and meta-testing tasks in the MT-MNIST datasets.\nThe results of the OC-MAML experiments on the CNC-MMD dataset are presented in Table 3. We compute F1-scores for evaluation since the test sets are class-imbalanced. OC-MAML consistently achieves high F1-scores across the 6 different milling processes. This high model performance on the minority class, i.e. in detecting anomalous data samples, is reached by using only K = 10 non-anomalous data samples (c = 0%). These results show that OC-MAML yielded a parameter initialization suitable for learning OCC tasks in the time-series data domain. Moreover, the reached performance shows the maturity of this method for industrial real-world applications." }, { "heading": "5 CONCLUSION", "text": "This work addressed the novel and challenging problem of few-shot one-class classification (FS-OCC) and introduced OC-MAML, a robust meta-learning approach to FS-OCC problems that learns model parameters which are easily adaptable to unseen tasks using few examples from only one class. We demonstrated the viability of our method on six datasets from the image and time-series domains, including a real-world dataset of industrial sensor readings, where it significantly outperformed classical OCC and few-shot classification methods. Future work could investigate an unsupervised approach to FS-OCC, as done by Hsu et al. (2018) in the class-balanced scenario."
}, { "heading": "A OC-MAML: ALGORITHM AND PARAMETER INITIALIZATION", "text": "In this section we present the pseudo-code of OC-MAML in Algorithm 1 and a diagram visualizing the parameter initializations yielded by MAML and OC-MAML.\nAlgorithm 1 Few-shot one-class classification with OC-MAML Require: Str: Set of meta-training tasks Require: α, β: Learning rates Require: K,Q: Batch size for the inner and outer updates Require: c: CIR for the inner-updates\n1: Randomly initialize θ 2: while not done do 3: Sample batch of tasks Ti from Str Let {Dtr, Dval} = Ti 4: for all sampled Ti do 5: Sample K datapoints B = {x(l), y(l)} from Dtr such that CIR= c 6: Initialize θ ′\ni = θ 7: for number of adaptation steps do 8: Compute adaptation loss LtrTi(fθ′i ) using B 9: Compute adapted parameters with gradient descent: θ ′\ni = θ ′\ni − α∇θ′iL tr Ti (fθ′i )\n10: end for 11: Sample Q datapoints B ′ = {x′(l), y′(l)} from Dval 12: Compute outer loop loss LvalTi (fθ′i ) using B ′ 13: end for 14: Update θ: θ ← θ − β∇θ ∑ Ti LvalTi (fθ′i ) 15: end while 16: return meta-learned parameters θ\nFigure 1 visualizes the adaptation to a binary classification test task Ts from the parameter initializations yielded by OC-MAML and MAML, denoted by θOCMAML and θMAML respectively. θ∗s,CB denotes the optimal parameters for Ts. Taking a gradient step using a one-class adaptation setDs,OC (gradient direction denoted by ∇Ls,OC), yields a performance increase on Ts when starting from the OC-MAML parameter initialization. In contrast, when starting from the parameter initialization reached by MAML a class-balanced adaptation set Ds,CB (gradient direction denoted by ∇Ls,CB) is required for a performance increase in Ts." }, { "heading": "B EXPERIMENT DETAILS", "text": "For MT-MNIST, we use the same 4-block convolutional architecture as used by Hsu et al. (2018) for their multi-class MNIST experiments. However, we exclude the batch normalization (Ioffe & Szegedy, 2015) layers, as we want to assess their effect in the OCC case, as discussed in Section 4.2. Each convolutional block includes a 3 x 3 convolutional layer with 32 filters, a 2 x 2 pooling and a ReLU non-linearity. The same model architecture is used for the MiniImageNet experiments as done by Ravi & Larochelle (2016). For the Omniglot experiments, we use the same architecture used by Finn et al. (2017). We also do not include the batch normalization layers for the two latter datasets. On the STS datasets, the model architecture used is composed of 3 modules, each including a 5 x 5 convolutional layer with 32 filters, a 2 x 2 pooling and a ReLU non-linearity. The model architecture used for the CNC-MMD experiments is composed of 4 of these aforementioned modules, except that the convolutional layers in the last two modules include 64 filters. The last layer of all architectures is a linear layer followed by softmax. We note that in the experiments on the time-series datasets (STS and CNC-MMD) 1-D convolutional filters are used.\nTable 4 shows the hyperparameters used in the experiments of each model on the different datasets. We note that we did not fix the outer loop size Q in the experiments on the CNC-MMD dataset, because the sizes and CIRS of the validation sets Dval differ across the different tasks. For the meta-learning algorithms, including OC-MAML, we used vanilla SGD in the inner loop and the Adam optimizer (Kingma & Ba, 2014) in the outer loop, as done by Finn et al. (2017). 
Table 4 shows the hyperparameters used in the experiments of each model on the different datasets. We note that we did not fix the outer loop size Q in the experiments on the CNC-MMD dataset, because the sizes and CIRs of the validation sets D^val differ across the different tasks. For the meta-learning algorithms, including OC-MAML, we used vanilla SGD in the inner loop and the Adam optimizer (Kingma & Ba, 2014) in the outer loop, as done by Finn et al. (2017). The MTL and FB baselines are also trained with the Adam optimizer.\nIn the following, we provide details about the meta-training procedure adopted in the meta-learning experiments. We use disjoint sets of data for adaptation (D^tr) and validation (D^val) on the meta-training tasks, as this was empirically found to yield better final performance (Nichol & Schulman, 2018). Hereby, the same sets of data are used in the OC-MAML and baseline experiments. In the MT-MNIST, Omniglot, MiniImageNet and STS experiments, the aforementioned sets of data are class-balanced. The sampling of the batch B used for adaptation ensures that the latter has the appropriate CIR (c = 50% for MAML, FOMAML and Reptile, and c = c_target for OC-MAML, OC-FOMAML and OC-Reptile). For the one-class meta-learning algorithms, c_target = 0%, i.e. no anomalous samples of the target task are available, so that only normal examples are sampled from D^tr during meta-training. In order to ensure that class-balanced and one-class meta-learning algorithms are exposed to the same data during meta-training, we move the anomalous examples from the adaptation set of data (D^tr) to the validation set of data (D^val). We note that this is only done in the experiments using one-class meta-learning algorithms.\nDuring meta-training, meta-validation episodes are conducted to perform model selection. In order to mimic the adaptation to unseen FS-OCC tasks with CIR c = c_target at test time, the CIR of the batches used for adaptation during meta-validation episodes is also set to c = c_target. We note that the hyperparameter K denotes the total number of datapoints, i.e. the batch size, used to perform the adaptation updates, and not the number of datapoints per class as done by Finn et al. (2017). Hence, a task with size K = 10 and CIR c = 50% is equivalent to a 2-way 5-shot classification task.\nIn the following, we provide details about the adaptation to the target task(s) and the subsequent evaluation. In the MT-MNIST and MiniImageNet experiments, we randomly sample 20 adaptation sets from the target task(s)' data, each including K examples with the CIR corresponding to the experiment considered. After each adaptation episode conducted using one of these sets, the adapted model is evaluated on a disjoint class-balanced test set that includes 4,000 images for MT-MNIST and 600 for MiniImageNet. We note that the samples included in the test sets of the test tasks are used neither for meta-training nor for meta-validation. This results in 20 and 400 (20 adaptation sets created from each of the 20 test classes) different test tasks for MT-MNIST and MiniImageNet, respectively. All the results presented give the mean over all adaptation episodes. Likewise, in the STS experiments, we evaluate the model on 10 different adaptation sets from each of the 5 test tasks. In the CNC-MMD experiments, the 30 tasks created from the target operation are used for adaptation and subsequent evaluation. For each of these target tasks, we randomly sample K datapoints belonging to the normal class that we use for adaptation, and use the rest of the datapoints for testing. We do this 5 times for each target task, which results in 150 testing tasks.\nFor the MTL and FB baselines, as well as all the baselines combining these models with shallow models, i.e. IF and OC-SVM, we use the meta-validation task(s) for model choice, as in the meta-learning experiments.
For the MTL baseline, for each validation task, we finetune a fully connected layer on top of the shared multi-task learned layers, as is done at test time." }, { "heading": "C DATASETS AND TASK CREATION PROCEDURES", "text": "In this section, we provide information about the datasets used and the task creation procedures.\nMulti-task MNIST (MT-MNIST): We derive 10 binary classification tasks from the MNIST dataset (LeCun et al., 2010), where every task consists in recognizing one of the digits. This is a classical one-class classification benchmark dataset. For a particular task T_i, images of the digit i are labeled as normal samples, while out-of-distribution samples, i.e. the other digits, are labeled as anomalous samples. We use 8 tasks for meta-training, 1 for meta-validation and 1 for meta-testing. Hereby, images of digits to be recognized in the validation and test tasks are not used as anomalies in the meta-training tasks. This ensures that the model is not exposed to normal samples from the test task during meta-training. Moreover, the sets of anomalous samples of the meta-training, meta-validation and meta-testing tasks are mutually disjoint. We conduct experiments on 9 MT-MNIST datasets, each of which involves a different target task (T_0 − T_8). The task T_9 is used as a meta-validation task across all experiments.\nMiniImageNet: This dataset was proposed by Ravi & Larochelle (2016) and includes 64 classes for training, 16 for validation and 20 for testing, and is a classical, challenging benchmark dataset for few-shot learning. To adapt it to the few-shot one-class classification setting, we create 64 binary classification tasks for meta-training, each of which consists in differentiating one of the training classes from the others, i.e. the anomalous examples of a task T_i are randomly sampled from the 63 classes with labels different from i. We do the same to create 16 meta-validation and 20 meta-testing tasks using the corresponding classes.\nOmniglot: This dataset was proposed by Lake et al. (2015) and includes 20 instances of 1623 handwritten characters from 50 different alphabets. We generate our meta-training and meta-testing tasks based on the official data split (Lake et al., 2015), where 30 alphabets are reserved for training and 20 for evaluation. For each character class, we create a binary classification task, which consists in differentiating between this character and other characters from the same set (meta-training or meta-testing), i.e. the anomalous examples of a task T_i are randomly sampled from the remaining characters. By removing 80 randomly sampled tasks from the meta-training tasks, we create the set of meta-validation tasks.\nSynthetic time-series (STS): In order to investigate the applicability of OC-MAML to time-series (Question (d)), we created two datasets, each including 30 synthetically generated time-series that underlie 30 different anomaly detection tasks. The time-series underlying the datasets are sawtooth waveforms (STS-Sawtooth) and sine functions (STS-Sine). Each time-series is generated with random frequencies, amplitudes, noise boundaries, as well as anomaly width and height boundaries. Additionally, the width of the rising ramp as a proportion of the total cycle is sampled randomly for the sawtooth dataset, which results in tasks having rising and falling ramps with different steepness values. The data samples of a particular task are generated by randomly cropping windows of length 128 from the corresponding time-series. We generate 200 normal and 200 anomalous data examples for each task. For each dataset, we randomly choose 20 tasks for meta-training, 5 for meta-validation and 5 for meta-testing. We propose the STS datasets as benchmark datasets for the few-shot one-class classification problem in the time-series domain, and will make them public upon paper acceptance.\nIn the following, we give details about the generation procedure adopted to create the STS-Sawtooth dataset. The same steps were conducted to generate the STS-Sine dataset. First, we generate the sawtooth waveforms underlying the different tasks by using the Signal package of the SciPy library (Jones et al., 2001–). Thereafter, randomly generated noise is applied to each signal. Subsequently, signal segments with window length l = 128 are randomly sampled from each noisy signal. These represent the normal, i.e. non-anomalous, examples of the corresponding task. Then, some of the normal examples are randomly chosen, and anomalies are added to them to produce the anomalous examples.
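A compact sketch of this generation procedure for a single STS-Sawtooth task is given below; all parameter ranges are illustrative stand-ins for the randomly sampled, task-specific intervals described above.

```python
import numpy as np
from scipy import signal

def make_sawtooth_task(n=400, l=128, series_len=20000, rng=None):
    """Sketch of one STS-Sawtooth task: a random sawtooth series, additive
    noise, window cropping, and anomaly injection into half the windows."""
    rng = np.random.default_rng() if rng is None else rng
    freq = rng.uniform(50, 200)            # number of periods in the series
    amp, offset = rng.uniform(0.5, 2.0), rng.uniform(-1.0, 1.0)
    ramp = rng.uniform(0.0, 1.0)           # rising-ramp proportion of a cycle
    t = np.linspace(0.0, freq, series_len)
    series = offset + amp * signal.sawtooth(2 * np.pi * t, width=ramp)
    series += rng.uniform(0.01, 0.1) * rng.standard_normal(series_len)
    starts = rng.integers(0, series_len - l, size=n)
    X = np.stack([series[s:s + l] for s in starts])   # normal windows
    y = np.zeros(n, dtype=int)
    for i in range(n // 2):                # turn half into anomalous samples
        width = rng.integers(5, 20)
        pos = rng.integers(0, l - width)
        X[i, pos:pos + width] += rng.choice([-1.0, 1.0]) * rng.uniform(0.5, 1.5)
        y[i] = 1
    return X, y
```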
We generate 200 normal and 200 anomalous data examples for each task. For each dataset, we randomly choose 20 tasks for meta-training, 5 for meta-validation and 5 for meta-testing. We propose the STS datasets as benchmark datasets for the few-shot one-class classification problem in the time-series domain, and will make them public upon paper acceptance.\nIn the following, we give details about the generation procedure adopted to create the STS-Sawtooth dataset. The same steps were conducted to generate the STS-Sine dataset. First, we generate the sawtooth waveforms underlying the different tasks by using the signal package of the SciPy library (Jones et al., 2001–). Thereafter, randomly generated noise is applied to each signal. Subsequently, signal segments with window length $l = 128$ are randomly sampled from each noisy signal. These represent the normal, i.e. non-anomalous, examples of the corresponding task. Then, some of the normal examples are randomly chosen, and anomalies are added to them to produce the anomalous examples.\nFigure 2 shows exemplary normal and anomalous samples from the STS-Sawtooth and STS-Sine datasets. In order to increase the variance between the aforementioned synthetic signals underlying the different tasks, we randomly sample the frequency, i.e. the number of periods within the window length $l$, with which each waveform is generated, as well as the amplitude and the vertical position (see Figure 2). For sawtooth waveforms, we also randomly sample the width of the rising ramp as a proportion of the total cycle between 0% and 100%, for each task. Setting this value to 100% and to 0% produces sawtooth waveforms with rising and falling ramps, respectively. Setting it to 50% corresponds to triangle waveforms.\nWe note that the noise applied to the tasks is randomly sampled from task-specific intervals, the boundaries of which are also randomly sampled. Likewise, the width and height of each anomaly are sampled from a random task-specific interval. Moreover, we generate the anomalies of each task such that half of them have a height between the signal's minimum and maximum (e.g. anomalies (a) and (d) in Figure 2), while the other half can surpass these boundaries, i.e. the anomaly is higher than the normal signal's maximum or lower than its minimum at one or more time steps (e.g. anomalies (b) and (c) in Figure 2). We note that an anomalous sample can have more than one anomaly.\nWe preprocess the data by removing the mean and scaling to unit variance. Only the available normal examples are used for the computation of the mean and the variance. This means that in the experiments where the target task's size is $K = 2$ and only normal samples are available ($c = 0\%$), only two examples are used for the mean and variance computation. We note that the time-series in Figure 2 are not preprocessed.\nCNC Milling Machine Data (CNC-MMD): This dataset consists of approximately 100 aluminum workpieces on which various consecutive roughing and finishing operations (pockets, edges, holes, surface finish) are performed. The sensor readings, which were recorded at a rate of 500 Hz, measure various quantities that are important for process monitoring, including the torques of the various axes. Each run of machining a single workpiece can be seen as a multivariate time-series. We segmented the data of each run into the various operations performed on the workpieces. For example, one segment describes the milling of a pocket, while another describes a surface finish operation on the workpiece.
Since most manufacturing processes are highly efficient, anomalies are quite rare but can be very costly if undetected. For this reason, anomalies were provoked for 6 operations during manufacturing to provide a better basis for the analysis. Anomalies were provoked by creating realistic scenarios for deficient manufacturing. Examples include using a workpiece that exhibits deficiencies, which leads to a drop in the torque signal, or using slightly miscalibrated process parameters, which induced various defects on the workpiece surface that harmed production quality. The data was labeled by domain experts from Siemens Digital Industries. It should be noted that this dataset realistically reflects the data situation in many industrial application scenarios: anomalies are rare and data is scarce, so training models on huge class-balanced datasets is not an option.\nFor our experiments, we created 30 tasks per operation by randomly cropping windows of length 2048 from the corresponding time-series of each operation. As a result, the data samples of a particular task $T_i$ cropped from a milling operation $O_j$ correspond to the same trajectory part of $O_j$, but to different workpieces. The task creation procedure ensures that at least two anomalous data samples are available for each task. The resulting tasks include between 15 and 55 normal samples, and between 2 and 4 (9 and 22) anomalous samples for finishing (roughing) operations. We validate our approach on all 6 milling operations in the case where only 10 samples belonging to the normal class ($K = 10$, $c = 0\%$) are available. Given the type of the target milling operation, e.g. finishing, we use the tasks from the other operations of the same type for meta-training. We note that the model is not exposed to any sample belonging to any task of the target operation during training.\nWe preprocess each of the three signals separately by removing the mean and scaling to unit variance, as done for the STS datasets. Likewise, only the available normal examples are used for the computation of the mean and the variance.\nExemplary anomalous signals recorded from a finishing and a roughing operation are shown in Figure 3. These signals are neither mean-centered nor scaled to unit variance. We note that we do not use the labels per time-step; rather, the label "anomalous" is assigned to each time-series that contains at least one anomalous time-step." }, { "heading": "D EXPERIMENTAL RESULTS", "text": "In this Section, we present the results of the experiments on the STS-Sine dataset and the 8 further MT-MNIST datasets." } ]
2019
null
SP:cd63a80ffd1039df8b4b470f26353da3ce0022ec
[ "This paper tackles the task of automatically inducing a curriculum for agents learning through reinforcement. Specifically, they use two agents — a setter agent that sets goals, and a solver agent that solves the goals provided by the setter. While this has been explored before, the difficulty lies in training both agents simultaneously in a robust fashion. If the goals are too difficult, the solver will be unable to solve them and if they are too easy, the solver will be unable to improve. The authors propose a combination of different losses to help the setter balance its goal predictions — validity, feasibility and coverage. In addition, they train a judge model predict the reward that the solver agent would achieve on a goal proposed by the setter. Empirical results on two setups demonstrate the effectiveness of this approach in learning a good curriculum. ", "This paper proposes an autocurricula scheme to train a goal-conditional agent in a dynamic and sparse-rewarding environment. The main idea is to train a setter model to sample goals for next-step training, where the setter can make the decision either based on the training history or the environmental observation (conditional case). The paper proposes three criteria which leads to three types of loss to train the setter model, i.e., goal validity (the goal should be achievable by some existing policy), goal feasibility (how probable the current policy can achieve the goal), and goal coverage (the sampled goals by the setter need to cover all possible goals). A judge model is needed to output the feasibility of a given goal. So the autocurricula scheme contains the solver (agent), the setter, and the judge, each having its own combination of loss and they are trained together. Given a desired goal distribution, the paper proposes to additionally train a discriminator whose optimization objective is Wasserstein loss. In experiments, they evaluate the proposed method on three types of tasks in two environments, i.e., 3D color finding and grid world alchemy. The goals in the two environments are similar in that they all aim to achieve some color or color pairs. The difference lies in that the first one finds colors while the second pick up colors. Each environment can be changed between episodes by changing the colors of objects in the scenes. Experimental results show that different combinations of the three types of losses can bring improvements in some scenarios. Making setter and judge conditioned on environment observation can further improve the success rate. Given a desired distribution of goals, the learning becomes more efficient. The paper compares this method with Goal GAN as a baseline and outperforms it on the three tasks." ]
Reinforcement learning algorithms use correlations between policies and rewards to improve agent performance. But in dynamic or sparsely rewarding environments, these correlations are often too small, or rewarding events are too infrequent, to make learning feasible. Human education instead relies on curricula (the breakdown of tasks into simpler, static challenges with dense rewards) to build up to complex behaviors. While curricula are also useful for artificial agents, handcrafting them is time consuming. This has led researchers to explore automatic curriculum generation. Here we explore automatic curriculum generation in rich, dynamic environments. Using a setter-solver paradigm, we show the importance of considering goal validity, goal feasibility, and goal coverage to construct useful curricula. We demonstrate the success of our approach in rich but sparsely rewarding 2D and 3D environments, where an agent is tasked to achieve a single goal selected from a set of possible goals that varies between episodes, and identify challenges for future work. Finally, we demonstrate the value of a novel technique that guides agents towards a desired goal distribution. Altogether, these results represent a substantial step towards applying automatic task curricula to learn complex, otherwise unlearnable goals, and to our knowledge are the first to demonstrate automated curriculum generation for goal-conditioned agents in environments where the possible goals vary between episodes.
[ { "affiliations": [], "name": "Sébastien Racanière" }, { "affiliations": [], "name": "Andrew K. Lampinen" }, { "affiliations": [], "name": "Adam Santoro" }, { "affiliations": [], "name": "David P. Reichert" }, { "affiliations": [], "name": "Vlad Firoiu" }, { "affiliations": [], "name": "Timothy P. Lillicrap" } ]
[ { "authors": [ "Forest Agostinelli", "Stephen McAleer", "Alexander Shmakov", "Pierre Baldi" ], "title": "Solving the rubiks cube with deep reinforcement learning and search", "venue": "Nature Machine Intelligence,", "year": 2019 }, { "authors": [ "Marcin Andrychowicz", "Filip Wolski", "Alex Ray", "Jonas Schneider", "Rachel Fong", "Peter Welinder", "Bob McGrew", "Josh Tobin", "Pieter Abbeel", "Wojciech Zaremba" ], "title": "Hindsight Experience Replay", "venue": "Advances in Neural Information Processing Systems, (Nips),", "year": 2017 }, { "authors": [ "Adrien Baranes", "Pierre-Yves Oudeyer" ], "title": "Active learning of inverse models with intrinsically motivated goal exploration in robots", "venue": "Robotics and Autonomous Systems,", "year": 2013 }, { "authors": [ "Yoshua Bengio", "Jérôme Louradour", "Ronan Collobert", "Jason Weston" ], "title": "Curriculum learning", "venue": "In Proceedings of the 26th annual international conference on machine learning,", "year": 2009 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale gan training for high fidelity natural image synthesis", "venue": "arXiv preprint arXiv:1809.11096,", "year": 2018 }, { "authors": [ "Yuri Burda", "Harri Edwards", "Deepak Pathak", "Amos Storkey", "Trevor Darrell", "Alexei A Efros" ], "title": "Large-scale study of curiosity-driven learning", "venue": "arXiv preprint arXiv:1808.04355,", "year": 2018 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using real nvp", "venue": "arXiv preprint arXiv:1605.08803,", "year": 2016 }, { "authors": [ "Jeffrey L Elman" ], "title": "Learning and development in neural networks: The importance of starting small", "venue": null, "year": 1993 }, { "authors": [ "Lasse Espeholt", "Hubert Soyer", "Remi Munos", "Karen Simonyan", "Volodymyr Mnih", "Tom Ward", "Yotam Doron", "Vlad Firoiu", "Tim Harley", "Iain Dunning" ], "title": "Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Benjamin Eysenbach", "Abhishek Gupta", "Julian Ibarz", "Sergey Levine" ], "title": "Diversity is all you need: Learning skills without a reward function", "venue": "arXiv preprint arXiv:1802.06070,", "year": 2018 }, { "authors": [ "Carlos Florensa", "David Held", "Xinyang Geng", "Pieter Abbeel" ], "title": "Automatic goal generation for reinforcement learning agents", "venue": "arXiv preprint arXiv:1705.06366,", "year": 2017 }, { "authors": [ "Alex Graves", "Greg Wayne", "Malcolm Reynolds", "Tim Harley", "Ivo Danihelka", "Agnieszka GrabskaBarwińska", "Sergio Gómez Colmenarejo", "Edward Grefenstette", "Tiago Ramalho", "John Agapiou" ], "title": "Hybrid computing using a neural network with dynamic external", "venue": "memory. 
Nature,", "year": 2016 }, { "authors": [ "Alex Graves", "Marc G Bellemare", "Jacob Menick", "Remi Munos", "Koray Kavukcuoglu" ], "title": "Automated curriculum learning for neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Karol Gregor", "Frederic Besse", "Danilo Jimenez Rezende", "Ivo Danihelka", "Daan Wierstra" ], "title": "Towards conceptual compression", "venue": "In Advances In Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Karol Gregor", "Danilo Jimenez Rezende", "Daan Wierstra" ], "title": "Variational intrinsic control", "venue": "arXiv preprint arXiv:1611.07507,", "year": 2016 }, { "authors": [ "Nick Haber", "Damian Mrowca", "Stephanie Wang", "Li F Fei-Fei", "Daniel L Yamins" ], "title": "Learning to play with intrinsically-motivated, self-aware agents", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Mark Harrower", "Cynthia A Brewer" ], "title": "Colorbrewer. org: an online tool for selecting colour schemes for maps", "venue": "The Cartographic Journal,", "year": 2003 }, { "authors": [ "Max Jaderberg", "Wojciech M. Czarnecki", "Iain Dunning", "Luke Marris", "Guy Lever", "Antonio Garcia Castañeda", "Charles Beattie", "Neil C. Rabinowitz", "Ari S. Morcos", "Avraham Ruderman", "Nicolas Sonnerat", "Tim Green", "Louise Deason", "Joel Z. Leibo", "David Silver", "Demis Hassabis", "Koray Kavukcuoglu", "Thore Graepel" ], "title": "Human-level performance in 3d multiplayer games with population-based reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Lars Maaløe", "Marco Fraccaro", "Valentin Liévin", "Ole Winther" ], "title": "Biva: A very deep hierarchy of latent variables for generative modeling", "venue": null, "year": 1902 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional generative adversarial nets", "venue": "arXiv preprint arXiv:1411.1784,", "year": 2014 }, { "authors": [ "Ashvin V Nair", "Vitchyr Pong", "Murtaza Dalal", "Shikhar Bahl", "Steven Lin", "Sergey Levine" ], "title": "Visual reinforcement learning with imagined goals", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Sanmit Narvekar", "Peter Stone" ], "title": "Learning curriculum policies for reinforcement learning", "venue": "In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, pp. 25–33. International Foundation for Autonomous Agents and Multiagent Systems,", "year": 2019 }, { "authors": [ "Sébastien Racanière", "Théophane Weber", "David Reichert", "Lars Buesing", "Arthur Guez", "Danilo Jimenez Rezende", "Adria Puigdomènech Badia", "Oriol Vinyals", "Nicolas Heess", "Yujia Li" ], "title": "Imagination-augmented agents for deep reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Suman Ravuri", "Oriol Vinyals" ], "title": "Classification accuracy score for conditional generative models", "venue": "arXiv preprint arXiv:1905.10887,", "year": 2019 }, { "authors": [ "David Silver", "Thomas Hubert", "Julian Schrittwieser", "Ioannis Antonoglou", "Matthew Lai", "Arthur Guez", "Marc Lanctot", "Laurent Sifre", "Dharshan Kumaran", "Thore Graepel", "Timothy Lillicrap", "Karen Simonyan", "Demis Hassabis" ], "title": "Mastering Chess and Shogi by Self-Play with a General Reinforcement", "venue": "Learning Algorithm. 
Science,", "year": 2018 }, { "authors": [ "Casper Kaae Sønderby", "Tapani Raiko", "Lars Maaløe", "Søren Kaae Sønderby", "Ole Winther" ], "title": "Ladder variational autoencoders", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Rupesh K Srivastava", "Klaus Greff", "Jürgen Schmidhuber" ], "title": "Training very deep networks", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Sainbayar Sukhbaatar", "Zeming Lin", "Ilya Kostrikov", "Gabriel Synnaeve", "Arthur Szlam", "Rob Fergus" ], "title": "Intrinsic motivation and automatic curricula via asymmetric self-play", "venue": "arXiv preprint arXiv:1703.05407,", "year": 2017 }, { "authors": [ "Wu", "Dani Yogatama", "Julia Cohen", "Katrina McKinney", "Oliver Smith", "Tom Schaul", "Timothy Lillicrap", "Chris Apps", "Koray Kavukcuoglu", "Demis Hassabis", "David Silver" ], "title": "AlphaStar: Mastering the Real-Time Strategy Game StarCraft II. https://deepmind.com/blog/ alphastar-mastering-real-time-strategy-game-starcraft-ii/, 2019", "venue": null, "year": 2019 }, { "authors": [ "Rui Wang", "Joel Lehman", "Jeff Clune", "Kenneth O Stanley" ], "title": "Paired open-ended trailblazer (poet): Endlessly generating increasingly complex and diverse learning environments and their solutions", "venue": null, "year": 1901 }, { "authors": [ "Vinicius Zambaldi", "David Raposo", "Adam Santoro", "Victor Bapst", "Yujia Li", "Igor Babuschkin", "Karl Tuyls", "David Reichert", "Timothy Lillicrap", "Edward Lockhart", "Murray Shanahan", "Victoria Langston", "Razvan Pascanu", "Matthew Botvinick", "Oriol Vinyals", "Peter Battaglia" ], "title": "Deep reinforcement learning with relational inductive biases", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Wojciech Zaremba", "Ilya Sutskever" ], "title": "Learning to execute", "venue": "arXiv preprint arXiv:1410.4615,", "year": 2014 }, { "authors": [ "Espeholt" ], "title": "2018), except for the addition of a learner to train the Judge and Setter. We therefore end up with three types of workers, that run asynchronously and communicate data to each other. Below, we write in pseudo code what loops are running on each type of worker", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) algorithms use correlations between policies and environmental rewards to reinforce and improve agent performance. But such correlation-based learning may struggle in dynamic environments with constantly changing settings or goals, because policies that correlate with rewards in one episode may fail to correlate with rewards in a subsequent episode. Correlation-based learning may also struggle in sparsely rewarding environments since by definition there are fewer rewards, and hence fewer instances when policy-reward correlations can be measured and learned from. In the most problematic tasks, agents may fail to begin learning at all.\nWhile RL has been used to achieve expert-level performance in some sparsely rewarding games (Silver et al., 2016; OpenAI, 2018; Vinyals et al., 2019), success has often required carefully engineered curricula to bootstrap learning, such as learning from millions of expert games or hand-crafted shaping rewards. In some cases self-play between agents as they improve can serve as a powerful automatic curriculum for achieving expert or superhuman performance (Silver et al., 2018; Vinyals et al., 2019). But self-play is only possible in symmetric two-player games. Otherwise humans must hand-design a curriculum for the agents, which requires domain knowledge and is time-consuming, especially as tasks and environments grow in complexity. It would be preferable to have an algorithm that could automatically generate a curriculum for agents as they learn.\n∗DeepMind and Stanford University Department of Psychology\nSeveral automatic-curriculum generating algorithms have been proposed, including some that help agents explore and learn about their environments (e.g. Gregor et al., 2016b; Eysenbach et al., 2018), and some that attempt to gradually increase goal difficulty (e.g. Florensa et al., 2017). Most of these approaches have been tested only on simple tasks in simple environments, and often assume that either the environment is fixed from one episode to the next or that the agent’s goal is fixed and unchanging. Ideally, curricula would apply to complex, varying environments and would support goal-conditioning to handle changing tasks.\nSurprise- or difficulty-based exploration may sometimes discover desired agent behaviors (Gregor et al., 2016b; Burda et al., 2018; Haber et al., 2018). This approach may not always be practical, though, since many difficult, but otherwise irrelevant tasks might “distract” exploration objectives. For example, training a self-driving car to successfully do flips might be challenging and novel, but it would not be particularly beneficial. Human curricula efficiently lead learners towards a desired competency, rather than along arbitrary dimensions of difficulty. Analogously, it would be useful for algorithms to leverage knowledge of the desired goal distribution to develop more targeted curricula.\nThis paper take several steps toward automatic, targeted curriculum generation by proposing an algorithm for training a goal-conditioned agent in dynamic task settings with sparse rewards. The approach trains a “setter” model to generate goals for a “solver” agent by optimizing three setter objectives: discovering the subset of expressible goals that are valid, or achievable by an expert solver (goal validity), encouraging exploration in the space of goals (goal coverage), and maintaining goal feasibility given the agent’s current skill (goal feasibility). 
We also propose an extension for targeting a distribution of desired tasks (if one is known) using a Wasserstein discriminator (Arjovsky et al., 2017). We demonstrate our approach in a rich 3D environment and a grid-world wherein observation statistics and possible goals vary between episodes, and show that it substantially outperforms baselines, lesions and prior approaches." }, { "heading": "2 RELATED WORK", "text": "Uniform sampling of sub-tasks: Perhaps the simplest curriculum is training uniformly over sub-tasks of varying difficulty. For example, Agostinelli et al. (2019) trained a Rubik's cube solver on problems sampled uniformly between 1 and $K$ moves from the solved state. This curriculum leverages the fact that some sub-tasks can be solved before others, and that learning of these sub-tasks bootstraps learning of harder sub-tasks, and ultimately the task as a whole. However, in complex settings uniform training may not suffice, either because easier sub-tasks do not exist, they are still too hard to learn, or they do not help learning of harder sub-tasks. When uniform sampling is ineffective, hand-engineered curricula may work (Elman, 1993; Bengio et al., 2009; Zaremba & Sutskever, 2014; Graves et al., 2016). Their effectiveness has led to research on automated ways to derive curricula (Graves et al., 2017). Here we outline a number of such approaches in the RL setting.\nExploration: Some work leverages exploration to encourage state diversity (Gregor et al., 2016b), state-transition surprise (Burda et al., 2018; Haber et al., 2018), or distinguishable skills (Eysenbach et al., 2018). These exploration-based methods are usually validated in relatively simple, unchanging environments, and have not been tested as pre-training for goal-conditioned RL tasks. A few studies have considered varying environments; e.g. Wang et al. (2019) considered evolving environments together with paired agents. However, because each agent is paired to a single environment, the method results in agents that are specialized to single, unchanging environments with fixed goals.\nOptimal task selection: Other approaches include selecting tasks on which learning is progressing (or regressing) the fastest (Baranes & Oudeyer, 2013). However, it can be prohibitively expensive to determine goal regions and track progress within them, especially as task spaces grow larger and more complex. Some approaches work for a set of pre-specified tasks (Narvekar & Stone, 2019), but they require human effort to hand-select tasks from this set. Again, these approaches have also generally been demonstrated in simple, fixed environments.\nAgent-agent interactions: Agent interactions can also generate effective curricula. For example, in symmetric two-player (or two-team) zero-sum games agents jointly improve and thus are forced to face stronger and stronger opponents. This natural curriculum may work on tasks where random play can achieve rewards with reasonable frequency (Silver et al., 2018). But in other cases, hand-engineered auxiliary tasks may be used to avoid the difficult initial problem of learning from sparse rewards, such as imitation learning on data from experts (Silver et al., 2016; Vinyals et al., 2019). Or, dense shaping rewards may be needed (OpenAI, 2018; Jaderberg et al., 2019). Furthermore, this type of curriculum has not been tested in goal-conditioned environments: while the environment might vary because of opponent play, or on a different map, the ultimate goal of winning is fixed. 
More fundamentally, while this type of curriculum works well for two-player zero-sum games, it is less clear how it can be used to train a single agent on a non-competitive, goal-conditioned task.\nAsymmetric agent-agent interactions, for example when one agent tries to repeat or undo another's actions (Sukhbaatar et al., 2017), can also be useful. However, this requires the desired task distribution to be close to the distribution generated by these reversing/repeating tasks. In goal-conditioned settings, guaranteeing this is likely as difficult as the original learning problem.\nGoal conditioning: In the goal-conditioned setting, hindsight experience replay (Andrychowicz et al., 2017) has agents retrospectively imagine that they were trying to achieve the state they actually ended up in. While this is an active curriculum for starting learning, it does not necessarily encourage goal-space exploration, nor does it provide a framework for generating novel goals.\nNair et al. (2018) used a generative model of state space to sample “imagined” goals, rewarding the agent based on similarity to the generative model's latent space. Florensa et al. (2017) used a GAN to generate goals of intermediate difficulty for the agent, which resulted in goals that gradually expanded to fill the goal space. This work is closely related to part of our proposal, and we use it as an important benchmark. Critically, this approach has not been tested in environments which vary substantially from episode to episode, particularly ones where the valid goals change from episode to episode. This is an important distinction because training generative models with non-trivial conditioning can be challenging. In particular, while conditioning directly on an informative latent variable can work well, for example when trying to generate images from a given class (Mirza & Osindero, 2014; Brock et al., 2018), even this problem is not completely solved (Ravuri & Vinyals, 2019). Adding the challenge of trying to discover latent variables with which to condition, and of performing even a simple manipulation of them, makes things much more difficult (Rezende & Viola, 2018) (cf. the difficulty of learning hierarchies of latent variables (Sønderby et al., 2016; Maaløe et al., 2019)). This means that if the valid goals are not trivially observable from the environment, it may be difficult for the goal-setter to discover the goal structure via a generative loss alone. In section 4.2, we demonstrate this particular failure mode, along with some successes.\nSummary: A variety of automated curriculum generation approaches for RL have demonstrated some success, but the challenge of curriculum generation in more complex settings remains open. These approaches have not demonstrated success on tasks whose complexity reflects that of difficult real-world tasks; in particular, no approach can handle goal-conditioned tasks in dynamic environments, wherein the set of possible goals varies from one episode to the next, and the set of possible goals might be tiny compared to the set of expressible goals." }, { "heading": "3 METHOD", "text": "Our model consists of three main components: a solver, the goal-conditioned agent we are training; a setter ($S$), a generative model we are using to generate a curriculum of goals for the agent; and a judge ($J$), a discriminative model that predicts the feasibility of a goal for the agent at present. See appendix B for architectural details.
See fig. 1 for training schematics (see also Appendix B.2). The solver agent trains on setter-generated goals using a distributed learning setup to compute policy gradients (Espeholt et al., 2018). For setter training, three concepts are important: goal validity, goal feasibility and goal coverage. We say a goal is valid if there exists a solver agent policy which has a non-zero probability of achieving this goal. This concept is independent of the current policy of the solver. By contrast, feasibility captures whether the goal is achievable by the solver at present. Specifically, we say a goal has feasibility $f \in [0, 1]$ if the probability that the solver will achieve the goal is $f$. The set of feasible goals will therefore evolve as the solver learns. The judge is a learned model of feasibility, trained via supervised learning on the solver's results. Finally, goal coverage indicates the variability (entropy) of the goals generated by the setter." }, { "heading": "3.1 REWARD AND LOSSES FOR THE SOLVER", "text": "Our solver is a goal-conditioned RL agent. At the beginning of every episode it receives a goal $g$ sampled by the setter, and a single reward $R_g$ at the end of the episode. The reward $R_g$ is 1 if the solver achieved the goal, or 0 if it did not after a fixed maximum amount of time. The solver could be trained by any RL algorithm. We chose to adopt the training setup and losses from Espeholt et al. (2018). The solver consists of a policy $\pi$ and a baseline function $V^{\pi}$, which are trained using the V-trace policy gradient actor-critic algorithm with an entropy regularizer (see Espeholt et al. (2018) for details)." }, { "heading": "3.2 LOSS FOR THE JUDGE", "text": "The judge $J$ is trained as a binary classifier to predict the reward, 0 or 1. Given a goal $g$ (see section 3.3), $J(g)$ are logits such that $\sigma(J(g)) = p(R_g = 1 \mid g)$, where $\sigma$ is the sigmoid function, $R_g$ are returns obtained by the solver when trying to achieve those goals, and $p(R_g = 1 \mid g)$ is the probability assigned by the judge that the agent will have a return of 1 when given goal $g$. We use a cross-entropy loss with the input distribution defined by the setter, and labels obtained by testing the solver on these goals: $$\mathcal{L}_{Judge} = -\,\mathbb{E}_{z \sim \mathcal{N}(0,1),\, f \sim \mathrm{Unif}(0,1),\, g = S(z,f)} \big[ R_g \log \sigma(J(g)) + (1 - R_g) \log(1 - \sigma(J(g))) \big]$$" },
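A minimal sketch of this judge and its loss is shown below. This is an illustration, not the released code: PyTorch is an assumption, the data pipeline is omitted, and the layer sizes only loosely follow the MLP described in Appendix B.

```python
import torch
import torch.nn as nn

class Judge(nn.Module):
    """MLP mapping a goal to a logit J(g); sigma(J(g)) estimates p(R_g = 1 | g)."""
    def __init__(self, goal_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(goal_dim, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, goals):
        return self.net(goals).squeeze(-1)

def judge_loss(judge, goals, returns):
    # Binary cross-entropy between the judge's feasibility estimate and the
    # 0/1 return the solver actually obtained on each setter-generated goal.
    logits = judge(goals)
    return nn.functional.binary_cross_entropy_with_logits(logits, returns.float())
```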
{ "heading": "3.3 LOSSES FOR THE SETTER", "text": "Our setter takes as input a desired goal feasibility $f \in (0, 1)$. In particular, we can sample a goal $g = S(z, f)$ for some sample $z$ from a Gaussian prior $\mathcal{N}(0, 1)$ and a desired feasibility $f$, or we can map backwards from a goal $g$ to a latent $z = S^{-1}(g, f)$, for which we can then compute the probability under the prior. Both directions are used in training. With these features in mind, we define three losses for the setter that reflect the concepts of goal validity, feasibility, and coverage:\nValidity: A generative loss that increases the likelihood of the setter generating goals which the solver has achieved. This is analogous to the hindsight of Andrychowicz et al. (2017), but from the setter perspective rather than the solver's. Specifically: $$\mathcal{L}_{val.} = \mathbb{E}_{g \text{ achieved by solver},\ \xi \sim \mathrm{Unif}(0,\delta),\ f \sim \mathrm{Unif}(0,1)} \big[ -\log p\big( S^{-1}(g + \xi, f) \big) \big]$$ where $g$ is sampled from goals that the solver achieved, regardless of what it was tasked with on that episode, $\xi$ is a small amount of noise to avoid overfitting (this is common practice in generative models of images or discrete data; see e.g. Gregor et al. (2016a)), and $p(\cdot)$ denotes the probability of sampling that latent under a fixed Gaussian prior for the latent of $S$. This loss may not cover all valid goals, but it is a good estimate available without any other source of knowledge.\nFeasibility: A loss that encourages the setter to choose goals which match the judge's feasibility estimates for the solver at present. Specifically: $$\mathcal{L}_{feas.} = \mathbb{E}_{z \sim \mathcal{N}(0,1),\, f \sim \mathrm{Unif}(0,1)} \big[ \big( J(S(z, f)) - \sigma^{-1}(f) \big)^2 \big]$$ This loss uniformly samples a desired feasibility $f$ (to train the setter to provide goals at a range of difficulties), then attempts to make the setter produce goals that the judge rates as matching that desired feasibility. Note that although gradients pass through the judge, its parameters are not updated.\nCoverage: A loss that encourages the setter to pick more diverse goals. This helps the setter to cover the space of possible goals, and to avoid collapse. Specifically: $$\mathcal{L}_{cov.} = \mathbb{E}_{z \sim \mathcal{N}(0,1),\, f \sim \mathrm{Unif}(0,1)} \big[ \log p\big( S(z, f) \big) \big]$$ This loss maximises the average of the conditional entropy of the setter. Since the density of $f$ is constant, adding a term $\log p(f)$ in the above formula only changes the loss by a constant, and shows that our loss is equivalent to maximising the entropy of the joint distribution $(S(z, f), f)$.\nThe setter is trained to minimize the total loss $\mathcal{L}_{setter} = \mathcal{L}_{val.} + \mathcal{L}_{feas.} + \mathcal{L}_{cov.}$. Note that the sum $\mathcal{L}_{feas.} + \mathcal{L}_{cov.}$ can be interpreted as a KL-divergence between an energy model and the setter's distribution. Specifically, for a fixed feasibility $f$, define an energy function on the space of goals by $E_f(g) = (J(g) - \sigma^{-1}(f))^2$. Let $p_f(g) = e^{-E_f(g)}/Z$ be the density of the distribution defined by this energy, where $Z$ is a normalizing constant. Then the sum of the feasibility and coverage losses is, up to a constant, the average over $f \in [0, 1]$ of the divergence $\mathrm{KL}(p_f \,\|\, p(S(g, f)))$. We also demonstrate two important extensions to our framework which are critical in more complicated environments:\nVariable environments and conditioned setters: While prior work has often focused on fixed environments, such as the same maze each episode, we would like to train agents in variable worlds where the possible goals vary from one episode to the next. For this to be possible, our setter and judge must condition on an environmental observation. However, learning these conditional generative models can be challenging if the valid goals are not trivially observable (see the related work section above). We demonstrate the success of our approach in these environments, and advantages with a conditioned setter and judge.\nDesired goal distributions: In complex task spaces, the goals we want agents to accomplish will likely lie in a small region within the space of all possible goals. Thus it may not be efficient to uniformly expand difficulty. We propose an additional loss for optimizing the setter towards a desired goal distribution, when such a distribution is known. Specifically, we propose training a Wasserstein discriminator (Arjovsky et al., 2017) to discriminate setter-generated goals from goals sampled from the desired goal distribution. The Wasserstein discriminator has the beneficial property that it can give useful gradients even when the distributions are non-overlapping, which is critical in this setting, since the easy goals the setter generates initially may not have any overlap with the target goal distribution. Specifically, the desirability discriminator loss is: $$\mathcal{L}_{disc.} = \mathbb{E}_{g \sim \text{desired goal distribution}} \big[ D(g) \big] - \mathbb{E}_{z \sim \mathcal{N}(0,1),\, f \sim \mathrm{Unif}(0,1)} \big[ D(S(z, f)) \big]$$ and the setter is trained with the loss: $$\mathcal{L}_{des.} = \beta_{des.}\, \mathbb{E}_{z \sim \mathcal{N}(0,1),\, f \sim \mathrm{Unif}(0,1)} \big[ D(S(z, f)) \big]$$
where $\beta_{des.}$ is a hyperparameter. While targeting the desired distribution can be helpful, it is usually not sufficient on its own: the desired tasks may be infeasible at first, so the other setter losses are needed to develop a feasible curriculum. The desirability loss just tries to aim this curriculum in the right direction." }, { "heading": "3.4 ENVIRONMENTS", "text": "We work in two environments, which are briefly described below (see appendix C for further details). In each, the solver receives a goal as input during each episode, which it must attempt to achieve.\n3D color finding: A semi-realistic 3D environment built in Unity (http://unity3d.com), consisting of a room containing colored objects and furniture (fig. 2a). The agent can move and look around, and can pick up, manipulate, and drop objects. This results in a complex 46-dimensional action space. Objects and furniture are randomly placed around the room at the beginning of each episode. The agent receives a color (or pair of colors) as a goal, and is rewarded if a patch (or two adjacent patches) in the center of its view contains average colors close to this goal. Both of these tasks sometimes require complex behavior (see the video at https://drive.google.com/drive/folders/1ue8EnmPTQyN9aBlUocw2ZPtVvyxNBQS?usp=sharing). For example, the agent might have to pick up an object of a yellow color, move it to an object of a blue color, and look in between to obtain a green that was not otherwise present in the room. Our agents trained within our framework do indeed exhibit these behaviors. For our extensions, we also used a version of this environment in which the walls, ceiling, and floor of the room, as well as all objects, are procedurally recolored into one of two randomly chosen colors each episode (fig. 2b). This makes the achievable colors in each episode lie in a small subset of color space that overlaps little, if at all, with the achievable colors in other episodes.\nGrid-world alchemy: A 2D grid-world environment, containing a variety of two-colored objects (fig. 2c). The colors of the objects are randomly sampled each episode. The solver can move around the grid, and can walk over an object to pick it up. It cannot put down an object once it has picked it up. If it is already carrying another object, the two objects will systematically combine to make a new object (specifically, the colors are combined by a component-wise max). The solver receives a goal object as input, and is rewarded if it produces a similar object. Because of the combinatorics of the possible object combinations, the irreversibility of picking an object up, and the difficulty of inferring the result of combining two objects, this environment is challenging for both the setter and the solver. Both have the challenging task of learning what is achievable in any particular episode, since each episode contains colors never seen before.\nEvaluation: In each experiment we evaluate on a fixed test distribution of tasks, regardless of what setter is used for training, in order to have a fair comparison between conditions. In both environments, the space of valid tasks (those that could be done by an expert) occupies a small volume in the space of tasks expressible by the setter. In the colour-finding tasks, we do not even know which goals are valid, because of color averaging, shadows, etc. We therefore test on the full set of expressible goals (most of which are invalid), but report performance as a % of the best observed scores." },
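To make the setter objectives concrete, the sketch below assembles the three setter losses and the optional desirability term. This is an illustrative reconstruction, not the released code: `setter.sample`, `setter.log_prob`, and the `discriminator` interface are assumed stand-ins for the RNVP setter and Wasserstein critic described above and in Appendix B, and batching, conditioning, and the distributed setup are omitted.

```python
import torch

def setter_loss(setter, judge, achieved_goals, batch_size=128, delta=0.05,
                discriminator=None, beta_des=1.0):
    """Combine L_val + L_feas + L_cov (+ beta_des * L_des) for one setter update.

    Assumed stand-in interface for the flow-based setter:
      setter.sample(f)      -> g = S(z, f) with z ~ N(0, 1), differentiable in S
      setter.log_prob(g, f) -> log p(g | f) under the flow and its Gaussian prior
    """
    # Desired feasibilities f ~ Unif(0, 1), kept away from {0, 1} for logit().
    f = torch.rand(batch_size).clamp(1e-4, 1 - 1e-4)

    # Validity: hindsight likelihood of goals the solver actually achieved,
    # perturbed by a little noise to avoid overfitting.
    f_val = torch.rand(len(achieved_goals)).clamp(1e-4, 1 - 1e-4)
    noise = delta * torch.rand_like(achieved_goals)
    l_val = -setter.log_prob(achieved_goals + noise, f_val).mean()

    # Feasibility: the judge should rate generated goals as sigma(J(g)) = f.
    # Gradients flow through the judge, but only the setter is updated.
    goals = setter.sample(f)
    l_feas = ((judge(goals) - torch.logit(f)) ** 2).mean()

    # Coverage: minimizing the log-density of the setter's own samples
    # maximizes the entropy of the generated goal distribution.
    l_cov = setter.log_prob(goals, f).mean()

    loss = l_val + l_feas + l_cov
    if discriminator is not None:
        # Desirability: push generated goals toward the desired distribution,
        # as scored by a separately trained Wasserstein critic D.
        loss = loss + beta_des * discriminator(goals).mean()
    return loss
```

In such a setup the judge and the critic would each be optimized separately (on their own losses, as defined above); this function would update only the setter's parameters.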
}, { "heading": "4 EXPERIMENTS3", "text": "" }, { "heading": "4.1 COMPLEX ENVIRONMENTS REQUIRE ALL THREE LOSSES", "text": "2See video in https://drive.google.com/drive/folders/1ue8EnmPTQyN9aBlUocw2ZPtVvyxNBQS?usp=sharing.\n3To help with reproducibility, we provide code for the networks used for the Setter: https://drive.google.com/drive/folders/1yjhztFeX67tHEImXCiP UAQfQ-wFvV4Y?usp=sharing.\n4.1 Complex environments require all three losses\nFirst, we demonstrate that it is necessary to consider all of goal validity, feasibility, and coverage in complex environments (fig. 3). In the alchemy environment the validity and coverage losses are necessary, while the feasibility loss is not necessary, but does improve consistency (fig. 3a). In the 3D single-color-finding task, various subsets of the losses suffice for learning the task (fig. 3b). However, when the agent must find color pairs, fewer goals are possible and achieving a goal more often requires difficult manipulation of objects. Removing any of the setter losses results in substantially worse performance (fig. 3c). See Appendix B.3 for further analysis of the losses, and supplemental fig. 9 for a visualization of the generated curriculum on a simple location-finding task." }, { "heading": "4.2 ENVIRONMENTS THAT VARY REQUIRE OBSERVATION CONDITIONING", "text": "4.2 Environments that vary require observation conditioning\nWhile much prior work in automated curriculum generation focused on varying goals within a fixed environment, we would like RL systems to perform well on varied tasks in varied environments. For this, they will need to experience a diversity of environments during training, creating the unique challenge of generating curricula that take into account both the current environment and the current abilities of the agent.\nTo address this we implemented a randomly colored version of our color-finding environment, and the grid-world alchemy task. In both, the set of possible goals changes each episode. We compare a version of our algorithm in which the setter and judge condition on an environmental observation before generating (or evaluating) a task to the basic unconditioned version used in the previous experiments, as well as a random baseline (fig. 4). Solvers trained by the basic version of our\nmodel still outperform those trained with randomly generated goals. However, the version of our model which conditions on an observation results in better solver performance. To the best of our knowledge, these are the first results demonstrating the success of any automated curriculum approach for goal-conditioned RL in a varying environment.\nThere are a few points worth noting about our results in the alchemy environment. First, the unconditioned setter had a tendency to not produce stable solver performance. Solver performance would generally degrade after reaching a maximum, while the conditioned setter was able to more steadily maintain solver performance. This was observed across a variety of hyperparameter settings, and merits further investigation. Second, even our conditioned setters are not leading the agents to perfect performance on this task.\nHowever, in grid-world alchemy, the conditioned setter teaches the solver to reach performance close to that of a solver trained by an oracle which samples from the true distribution of possible tasks (fig. 4b). 
This suggests the limitation is not our setter algorithm, but rather the limitations of the solver agent, for example, the fact that it lacks features like planning (Racanière et al., 2017) or relational inductive biases (Zambaldi et al., 2019) that have proven useful for similar tasks.\nIn more complex settings the setter may also need auxiliary supervision or stronger inductive biases to overcome the challenges of learning conditional generative models. Indeed, we found that conditioning on a compressed representation (closer to the latent variables) in the recolored color-finding environment gave better results than conditioning on raw observations (see Fig. 10 in the Appendix). Furthermore, in more complex versions of the alchemy environment (for example, introducing more objects with more colors), even our conditioned setter algorithm could not learn to reliably generate feasible goals from raw observations. These results again highlight the challenges of learning conditional generative models when conditioning requires extracting latent variables and performing complex relational reasoning. This will be an important area for future work. Despite this caveat, the success of our setter-solver approach in varied environments represents an important step towards generating curricula in environments closer to the richness of the real world." }, { "heading": "4.3 TARGETING A DESIRED GOAL DISTRIBUTION IS MORE EFFICIENT", "text": "In complex task environments, discovering desired behaviors through difficulty-based exploration may not be feasible. There may be many ways a task can be difficult, most of which are irrelevant to what we would ultimately like the agent to achieve. By targeting the desired goal distribution with our desired-goal loss, the setter can push the solver toward mastering the desired tasks more efficiently (fig. 5a). In reality, the path will not be perfectly direct, as the setter trades off feasibility, validity, and coverage with targeting the desired tasks. However, it will generally be more efficient than untargeted setting, or training on only the desired tasks (if they are difficult).\nWe first explore this in the 3D color-finding environment. We target a distribution of pairs of 12 bright colors. These pairs are rarely achieved by a random policy, so discovering them is difficult without a setter. Training on only the desired distribution thus results in no learning. The untargeted setter-solver setup does eventually learn these tasks. However, with targeting it discovers them much more rapidly (fig. 5b), and has a lasting advantage over the untargeted version (see supp. fig. 7).\nIn the alchemy environment, the story is somewhat different (fig. 5c). We chose the desired distribution to be the most difficult tasks in the environment, consisting of combining half the objects in the room. However, because the setter has the difficult challenge of learning the conditional generative distribution (which is built into the desired distribution), we find that learning from the desired distribution (if available) results in earlier learning. This is in contrast to the 3D color-finding environment, where the desired distribution alone resulted in no learning. This again highlights the complexity of learning to generate goals when the valid goal distribution is conditional in complex, non-linear ways on the environment state. 
However, once the setter figures out the task structure, it is more easily able to train the solver, and so it surpasses desired-distribution training to reach asymptotic mastery sooner. Furthermore, the fact that the desired tasks are somewhat feasible early in learning means that the targeted setter has less of an advantage over the regular setter." }, { "heading": "4.4 COMPARISON TO PRIOR WORK", "text": "We compare to the Goal GAN (Florensa et al., 2017), which is the closest approach to ours. Our notion of goal feasibility is related to their binary partitioning of goals into those that are of intermediate difficulty, and those that are not. However, our continuous notion of feasibility has advantages: it allows uniformly sampling feasibility, can be estimated from one run per goal, and may be easier to learn. Furthermore, while their discriminator may implicitly encourage increasing coverage and identifying valid goals, training GANs can be difficult, and our explicit losses may be more effective.\nWe implemented an asynchronous version of the algorithm outlined in Florensa et al. (2017), which continuously trains the GAN and the agent, rather than iterating between training each. This allowed us to equalize computational resources between approaches, and to apply their approach to the same distributed solver agent we used, in order to have as fair a comparison as possible. See appendix D for details. We first demonstrate that our implementation of their approach achieves similar performance to our method on a simple $(x, y)$ location-finding task like that used in their paper, but implemented in our more complex 3D environment (fig. 6a), and learnt from pixels rather than state. However, we show that on our more complex color-finding tasks their approach is not as successful as ours (fig. 6b-c, and supp. fig. 8). Furthermore, maintaining and sampling from a large memory buffer, and running the agents on each goal multiple times to get a label of whether it was of intermediate difficulty, were very costly, and their approach thus required more memory and wall-clock time than ours for an equivalent number of agent steps. In addition, the instabilities introduced by the adversarial training resulted in less consistent results from their approach, even on the simple location-finding task.\nOverall, our results suggest that our approach is more stable and more effective on complex tasks. Furthermore, as noted above, Florensa et al. did not attempt the challenge of curriculum generation in environments that vary (which is why we did not compare to their algorithm in the alchemy environment), while we have also demonstrated success in that setting." }, { "heading": "5 DISCUSSION", "text": "In this paper we outlined a strategy for automated curriculum generation for goal-conditioned RL agents in complex environments. The curriculum is generated by training a setter to propose goals for a solver agent. The setter is trained to choose goals based on their validity, feasibility and coverage, and we demonstrated that all three of these components are necessary in a complex environment. Furthermore, we showed that this approach substantially outperforms a prior approach and baselines on complex tasks, including 3D environments with rich visual experiences, interactions with objects, and complex control (a nearly 50-dimensional action space). 
These results represent a substantial step towards automated curriculum generation in rich environments.\nWe also highlighted the necessity of employing curriculum generation in environments that vary from episode to episode. To address this challenge, we demonstrated that by providing an environmental observation to the setter and judge, our algorithm can learn to generate reasonable curricula in variable environments. This approach outperformed a lesioned version without the environmental observation, as well as other baselines, and nearly reached the performance of an oracle curriculum based on the true task distribution (where available). To our knowledge, these are the first results to demonstrate automated curriculum generation for goal-conditioned agents in environments where the possible goals vary from episode to episode. This is an important step towards developing automated curricula in environments with complexity closer to the real world.\nHowever, our work also highlights challenges when the environment varies. Learning a conditional generative model for the setter in combinatorially complex environments like our alchemy setting can be challenging. From only a generative loss, it is difficult for the model to learn how to extract the appropriate latent variables from an observation and manipulate them appropriately. Training setters in rich environments may require auxiliary information about the structure of the world, or breakthroughs in conditional generative modelling. This is an important direction for future work.\nFinally, we pointed out the challenge of efficiently achieving competence on desired goals which are distributed in a small region of goal space. We demonstrated a loss that can help to solve this problem by targeting the setter toward the desired goal distribution.\nOverall, we showed the success of our setter-solver approach in rich environments, and extensions that allowed it to work in complex tasks with varying environments and guide the solver efficiently towards mastering desired goals. Although work remains to be done, we believe that the strategies we have outlined here will be a useful starting point for automatically devising curricula for RL agents in the increasingly complex tasks we desire them to solve." }, { "heading": "6 ACKNOWLEDGEMENTS", "text": "We would like to thank the DeepMind Worlds Team for their help with the environments used in this paper." }, { "heading": "A SUPPLEMENTAL FIGURES", "text": "" }, { "heading": "B ARCHITECTURE & TRAINING DETAILS", "text": "Solver: The solver consists of a ResNet (3 sections, each with 2 residual blocks, with 16, 32, and 32 filters respectively, all of size 3 and stride 1) for vision and a goal input (processed through a linear transformation) to a core LSTM (256 hidden units). The output of the LSTM is fed into separate MLPs for policy (1 hidden layer, 64 hidden units) and value (1 hidden layer, 128 hidden units) outputs. The judge is an MLP (3 hidden layers, 64 units each).\nSetter: The setter consists of a Real-Valued Non-Volume Preserving (RNVP) network (Dinh et al., 2016), which has the useful property of providing access to the exact log-likelihood of a sample, thanks to its exact invertibility. The basis of this invertibility is a clever trick used in the RNVP design, where at each layer only half of the latent variables are updated, but their update depends multiplicatively on the other half of the latent variables. This allows for representing complex transformations, but because the conditioning latent variables are preserved in the output of the layer, it can be exactly inverted. By alternating between updating different subsets of the latent variables at each layer, the variables can undergo a complex evolution, with that evolution still remaining precisely invertible. For more details see sec. 3.3 and fig. 2 of Dinh et al. (2016).
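For illustration, a single affine coupling layer of this kind might look as follows. This is a simplified, unconditioned sketch, not the paper's setter: the actual setter uses Highway-network blocks and feasibility/observation conditioning, as described below.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One RNVP-style coupling layer: half the variables are transformed,
    conditioned on the other (untouched) half, so the map is exactly invertible."""
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.half = dim // 2
        self.scale = nn.Sequential(nn.Linear(dim - self.half, hidden), nn.LeakyReLU(),
                                   nn.Linear(hidden, self.half), nn.Tanh())
        self.shift = nn.Sequential(nn.Linear(dim - self.half, hidden), nn.LeakyReLU(),
                                   nn.Linear(hidden, self.half))

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s = self.scale(x2)
        y1 = x1 * torch.exp(s) + self.shift(x2)   # transform half of the variables...
        log_det = s.sum(dim=1)                    # ...and track log|det Jacobian|
        return torch.cat([y1, x2], dim=1), log_det

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s = self.scale(y2)
        x1 = (y1 - self.shift(y2)) * torch.exp(-s)  # exact inverse of forward()
        return torch.cat([x1, y2], dim=1)
```

Stacking several such layers while swapping which half is transformed gives the complex-but-invertible map described above, and the accumulated log-determinants give the exact sample log-likelihood.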
The setter has 3 blocks. Each block is a fully connected Highway Network (Srivastava et al., 2015) with 3 layers, and a hidden size of 32, 128, or 512, depending on the complexity of the generative task. Specifically, for the basic color-finding tasks (both single colors and pairs), the hidden size was 32; for the recolored color-finding tasks it was 128; and for the alchemy tasks it was 512. Nonlinearities in all networks are leaky ReLUs, except when generating goals, where arctans were used to get outputs between -1 and 1, which were then scaled appropriately for the goal domain.\nDesirability discriminator: As in Arjovsky et al. (2017), we ensure that $D$ is Lipschitz by clipping the parameters of the discriminator $D$ to the range $[-0.1, 0.1]$ before each setter training step.\nB.1 OBSERVATION-CONDITIONED SETTER & JUDGE\nWhen the setter and judge are observation-conditioned, their vision architecture is the same as the agent ResNet, and the setter and judge share its weights (but do not share weights with the agent vision architecture). In the setter, the conditioning information, including the desired feasibility $f$ and the output of the ResNet (where applicable), is concatenated with the part of the latent being transformed in each layer. In the judge, the output of the ResNet, if applicable, is concatenated with the goal before the goal is input to the judge. We found it was useful to down-weight the vision information by fixed constants before inputting it to the setter and the judge, likely because it allowed them to learn first to respond to the goals alone before trying to incorporate visual input. These constants were determined via a hyperparameter sweep, and were 0.1 for the setter in all conditioned tasks, and $10^{-7}$ and $10^{-6}$ respectively for the judge in the alchemy tasks and recolored color-finding tasks. (Despite the small weight on the visual input, the judge appears to still use it, as the final solver performance is worse if the weight is reduced by another order of magnitude, or increased.)\nB.2 TRAINING\nThe solver agents were trained using the framework of Espeholt et al. (2018), with the RMSProp optimizer, without momentum and with a learning rate of $2 \cdot 10^{-4}$. The setters were trained using Adam, with learning rates of $2 \cdot 10^{-4}$ on the 3D tasks and $3 \cdot 10^{-4}$ on the grid-world alchemy tasks. The training setup is described in fig. 1 at a high level. We now provide more details. We use a distributed setup that is identical to Espeholt et al. (2018), except for the addition of a learner to train the Judge and Setter. We therefore end up with three types of workers that run asynchronously and communicate data to each other. Below, we write in pseudocode what loops are running on each type of worker.
Algorithm 1: Solver-Actor loop\n// The Solver-Actor collects data and sends it to the learners\nStart environment;\nSample feasibility uniformly in [0, 1];\nGet first observation from environment and sample goal from Setter with given feasibility;\nwhile True do\n  trajectory = [];\n  for n = 1...N do\n    Get observation from environment;\n    Choose action using the agent and observation;\n    Apply action to environment;\n    Get reward and discount;\n    trajectory += (observation, reward, discount, action);\n    if this was the last step of the episode then\n      Send (first observation, goal, reward) to the Setter-Learner;\n      Restart environment to start new episode;\n      Sample feasibility uniformly in [0, 1];\n      Get first observation from environment and sample goal from Setter with given feasibility;\n    end\n  end\n  Send trajectory to the Solver-Learner;\n  Request and apply updated weights for agent from Solver-Learner;\n  Request and apply updated weights for setter from Setter-Learner;\nend\nAlgorithm 2: Setter-Learner loop\n// The Setter-Learner receives data from the Solver-Actor and uses it to train the Judge and Setter\nInitialise weights of Setter and Judge;\nwhile True do\n  Wait until we have received a batch of tuples (first observation, goal, reward);\n  Use the batch to train the Judge with a gradient update;\n  Do a gradient update on the Setter using the batch of first observations as conditioning;\nend\nAlgorithm 3: Solver-Learner loop\n// The Solver-Learner is identical to, and uses the same losses as, a learner from Espeholt et al. (2018)\nInitialise weights of Solver (agent);\nwhile True do\n  Wait until we have received a batch of trajectories from the Solver-Actor;\n  Do a gradient update using the batch;\nend\nB.3 EXAMINING THE INDIVIDUAL LOSSES\nOur experiments demonstrate that our setup leads to better agent performance. In this section, we examine the components of our system to evaluate whether they are performing as intended.\nFirst, we look at coverage, which we measure using entropy. In the color pair finding task, the setter defines a distribution in the space [0, 1]^6. The uniform distribution is a maximum entropy distribution on this space, with an entropy of 0. Entropy is a particularly easy loss to maximise. Indeed, looking at Fig. 11a, we see it immediately climbs to a maximal value, before slowly decreasing as the pressure from the two other losses builds up. In the same figure, we see that removing the coverage loss leads to lower entropy early on, and a larger collapse later. This means that without the coverage loss the setter will offer a far less varied set of tasks for the solver to train on. Finally, we observe that using only the coverage loss leads to near-maximum entropy being reached very quickly (the entropy then keeps slowly climbing during training). However, by ignoring validity and feasibility, most of the tasks the setter proposes are impossible for an expert, or at least impossible for the agent at present. Thus the overall performance is worse.\nWhile the feasibility loss is optimised as soon as training starts, this loss can only become useful once the judge has learnt how to measure feasibility efficiently. Despite training on a non-stationary distribution, we expect the supervised training of the judge to result in quicker convergence. We can indeed see in Fig. 11b that, after some very rapid oscillations, the judge's cross-entropy loss drops very quickly. 
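Returning to the coverage term for a moment: to make the entropy measurement concrete, the following is a minimal Python sketch (our own illustration under assumed interfaces, not the authors' code) of how the setter's differential entropy can be estimated by Monte Carlo, using the exact log-likelihoods that the RNVP architecture provides; setter.sample and setter.log_prob are hypothetical methods.

import torch

def estimate_setter_entropy(setter, num_samples=1024):
    # Draw goals from the setter and score them with its exact RNVP log-likelihood.
    goals = setter.sample(num_samples)    # assumed shape: (num_samples, goal_dim)
    log_probs = setter.log_prob(goals)    # exact because RNVP is invertible
    # Monte Carlo estimate of differential entropy: H = -E[log p(g)].
    # For the uniform distribution on [0, 1]^6 this is 0, the maximum on that cube.
    return -log_probs.mean()

Maximising such an estimate (equivalently, minimising the mean log-likelihood of sampled goals) is one way the coverage pressure described above could be implemented.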
Another indication that the judge is indeed training well is that if we compare the average score of the agent on tasks generated by the setter to the average feasibility of those same tasks as measured by the judge, we see in Fig. 11c that these two values match very quickly. Finally, we note that the setter is learning to produce tasks of approximately appropriate feasibility, as shown in fig. 12. The solver performs slightly better than expected, presumably because of the lag in training the judge and solver, but the difference is not too great, suggesting that the lag is not too severe.\nFinally, we see the validity loss in Fig. 11d drop very quickly to a minimum value when training starts, and slowly increase throughout training. We see this as good behaviour, since the hindsight provided by this loss is most useful early on, when the setter needs to choose which tasks are reasonable to train on even though it has not yet learnt much about the structure of tasks." }, { "heading": "C ENVIRONMENT & TASK DETAILS", "text": "C.1 COLOR FINDING\nThe room is L-shaped, which means that some objects may be hidden from some locations in the room. The objects present consist of pillow-like objects, a bookshelf, a chair, a few wall-mounted shelves, and a table. The furniture is placed randomly along the walls of the room at the beginning of each episode. The pillows are randomly placed throughout the room, with some scattered on the floor and some placed on shelves.\nThe actions for this environment consist of multiple speeds of moving forward, back, left, and right, rotating left and right, looking up and down, grabbing objects and moving with them, and manipulating held objects, rotating them along various axes or moving them closer or farther away. This results in a total of 45 actions. The agent has to choose 15 actions per second. Observations are in the form of 72 × 96 RGB pixel images. Each pixel value was normalised to the range [0, 1].\nFor single color finding, the agent received a reward of 1 if the color averaged over an 8 × 8 patch in the center of the screen was within an ℓ2 distance of ε = 0.1 of the goal color in [0, 1]^3. For color pair finding, the agent received a reward of 1 if the color in an 8 × 8 patch left of the center of the screen was within ε of the first goal color, and similarly for a patch right of the center of the screen and the second goal color. That is, the agent needed to get both colors correct to receive any reward on the pair color finding task. If the agent received a non-zero reward the episode would terminate; otherwise the episode would terminate with a reward of 0 after 500 environment steps.\nDesired distribution: Our desired distribution for the targeting experiments consisted of pairs of 12 colors: the 3 primary colors and 3 secondary colors, and slightly more muted shades of these (all components moved by 0.3 towards the middle). We found β_des = 5 to be optimal, though results in fig. 7b are from runs with β_des = 1.
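As an illustration of the color-matching reward just described, here is a minimal sketch (our own, under assumed image shapes; not the environment code):

import numpy as np

def single_color_reward(frame, goal_color, eps=0.1):
    # frame: (72, 96, 3) RGB array with values in [0, 1]; goal_color: (3,) in [0, 1]^3.
    h, w = frame.shape[0] // 2, frame.shape[1] // 2
    patch = frame[h - 4:h + 4, w - 4:w + 4]          # central 8 x 8 patch
    mean_color = patch.reshape(-1, 3).mean(axis=0)   # average color over the patch
    # Reward 1 if the averaged color lies within an l2 ball of radius eps around the goal.
    return 1.0 if np.linalg.norm(mean_color - goal_color) < eps else 0.0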
C.2 GRID-WORLD ALCHEMY\nThe actions consist of movement in the four cardinal directions. The room is a 9 × 9 grid surrounded by an impassable wall of width 1 (for a total grid size of 11 × 11), with four objects randomly placed in it. Each object has two colors, which are sampled uniformly from [0, 1]^2 (only the red and blue components were sampled, with the green component fixed at zero; this makes the conditional generative problem slightly easier). The agent receives visual input in which every grid square is rendered as a 2 × 2 block in order to display the two-colored objects, i.e. it receives visual input of size 22 × 22. To avoid trivial solutions for the setter, it was necessary to avoid rewarding the solver (or training the setter's possibility loss) if the agent failed to pick up an object.\nDesired distribution: Our desired distribution for the targeting experiments consisted of the most difficult tasks in this world: combining half of the objects in the level. We found β_des = 1.5 to be optimal for this task." }, { "heading": "D DETAILS OF COMPARISON TO FLORENSA ET AL. (2017)", "text": "In order to compare to the Goal GAN approach proposed by Florensa et al. (2017), it was necessary to make a few changes. We wanted to make as fair a comparison as possible, so we wanted to use their approach to train the same distributed solver agent that we used. In order to do this, we had to modify their algorithm to run asynchronously. Specifically, instead of alternating between training the GAN and training the agent, we trained both simultaneously in separate processes. Because of the asynchronous approach it was also difficult to have a single unified memory buffer; instead, each copy of the agent had its own memory buffer, which could hold up to 10,000 goals, and added goals to it with a probability of 0.01 rather than performing an expensive comparison to the prior goals at each step. As in the original paper, we sampled 1/3 of the goals from the memory buffer and 2/3 from the setter. Even with our modifications and the simpler MLP architecture (see below), their approach required more computation time than ours for the same number of agent steps.\nAs in our RNVP architecture, we use a latent noise vector sampled from a standard normal distribution of the same dimensionality as the goal outputs. We originally tried implementing their GAN with the same RNVP architecture we used for our setter (see above), but we had substantial issues with mode collapse, so we switched to an MLP architecture as was used in their original paper. We used 3 hidden layers with 64 units each in the location tasks and 128 units each in the color tasks, and for the discriminator we used 3 hidden layers with 128 units in both tasks. The GAN was trained via the Adam optimizer with a learning rate of 5 · 10^-4. All these hyperparameters were determined by a sweep." }, { "heading": "E MISCELLANEOUS", "text": "Color palettes for plots were modified from Harrower & Brewer (2003)." } ]
2,020
null
SP:41de1f1971e860acdbb74dcd266fd308c035b47b
[ "This work proposes a new environment, Read to Fight Monsters (RTFM), and correspondingly a new algorithm, txt2\\pi, for solving this problem. The RTFM requires the agent to read a description of the rules (x beats y, etc) and a description of the goal (to eliminate y), and perform the task correctly to win the game. The txt2\\pi algorithm uses the newly proposed FiLM^2 module and consists of integrating the visual input (grid world configuration) and the text input (descriptions) to learn a policy and baseline (actor-critic). ", "This paper constructs a new game that requires combining visual reasoning with text understanding to win. The authors propose a new model txt2π, based on a new layer called FiLM², which combines text and visual features in a way that allows visual features to be encoded with knowledge of the text features (as in the FiLM layer from previous work), as well as text to be encoded with knowledge of the visual features. The model is trained to play the game using IMPALA. Ablation studies show that the FiLM² layer leads to a substantial improvement, and also shows the necessity of curriculum learning. Performance is still below human performance suggesting this is a promising area for future work." ]
Obtaining policies that can generalise to new environments in reinforcement learning is challenging. In this work, we demonstrate that language understanding via a reading policy learner is a promising vehicle for generalisation to new environments. We propose a grounded policy learning problem, Read to Fight Monsters (RTFM), in which the agent must jointly reason over a language goal, relevant dynamics described in a document, and environment observations. We procedurally generate environment dynamics and corresponding language descriptions of the dynamics, such that agents must read to understand new environment dynamics instead of memorising any particular information. In addition, we propose txt2π, a model that captures three-way interactions between the goal, document, and observations. On RTFM, txt2π generalises to new environments with dynamics not seen during training via reading. Furthermore, our model outperforms baselines such as FiLM and language-conditioned CNNs on RTFM. Through curriculum learning, txt2π produces policies that excel on complex RTFM tasks requiring several reasoning and coreference steps.
[ { "affiliations": [], "name": "Victor Zhong" }, { "affiliations": [], "name": "Paul G. Allen" }, { "affiliations": [], "name": "Tim Rocktäschel" } ]
[ { "authors": [ "Peter Anderson", "Qi Wu", "Damien Teney", "Jake Bruce", "Mark Johnson", "Niko Sünderhauf", "Ian D. Reid", "Stephen Gould", "Anton van den Hengel" ], "title": "Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments", "venue": null, "year": 2018 }, { "authors": [ "Jacob Andreas", "Dan Klein", "Sergey Levine" ], "title": "Learning with latent language", "venue": "In NAACL,", "year": 2018 }, { "authors": [ "Dzmitry Bahdanau", "Felix Hill", "Jan Leike", "Edward Hughes", "Pushmeet Kohli", "Edward Grefenstette" ], "title": "Learning to follow language instructions with adversarial reward induction", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "K. Barnard", "D. Forsyth" ], "title": "Learning the semantics of words and pictures", "venue": "In ICCV,", "year": 2001 }, { "authors": [ "S.R.K. Branavan", "David Silver", "Regina Barzilay" ], "title": "Learning to win by reading manuals in a monte-carlo framework", "venue": "In ACL,", "year": 2011 }, { "authors": [ "S.R.K. Branavan", "Nate Kushman", "Tao Lei", "Regina Barzilay" ], "title": "Learning high-level planning from text", "venue": "In ACL,", "year": 2012 }, { "authors": [ "S.R.K. Branavan" ], "title": "Grounding Linguistic Analysis in Control Applications", "venue": "PhD thesis,", "year": 2012 }, { "authors": [ "David L. Chen", "Raymond J. Mooney" ], "title": "Learning to sportscast: A test of grounded language acquisition", "venue": "In ICML,", "year": 2008 }, { "authors": [ "John D. Co-Reyes", "Abhishek Gupta", "Suvansh Sanjeev", "Nick Altieri", "John DeNero", "Pieter Abbeel", "Sergey Levine" ], "title": "Guiding policies with language via meta-learning", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Lasse Espeholt", "Hubert Soyer", "Rémi Munos", "Karen Simonyan", "Volodymyr Mnih", "Tom Ward", "Yotam Doron", "Vlad Firoiu", "Tim Harley", "Iain Dunning", "Shane Legg", "Koray Kavukcuoglu" ], "title": "IMPALA: scalable distributed deep-rl with importance weighted actor-learner architectures", "venue": null, "year": 2018 }, { "authors": [ "Daniel Fried", "Ronghang Hu", "Volkan Cirik", "Anna Rohrbach", "Jacob Andreas", "Louis-Philippe Morency", "Taylor Berg-Kirkpatrick", "Kate Saenko", "Dan Klein", "Trevor Darrell" ], "title": "Speaker-follower models for vision-and-language navigation", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Karl Moritz Hermann", "Felix Hill", "Simon Green", "Fumin Wang", "Ryan Faulkner", "Hubert Soyer", "David Szepesvari", "Wojciech Marian Czarnecki", "Max Jaderberg", "Denis Teplyashin", "Marcus Wainwright", "Chris Apps", "Demis Hassabis", "Phil Blunsom" ], "title": "Grounded language learning in a simulated 3d world", "venue": null, "year": 2017 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Compututation,", "year": 1997 }, { "authors": [ "Hengyuan Hu", "Denis Yarats", "Qucheng Gong", "Yuandong Tian", "Mike Lewis" ], "title": "Hierarchical decision making by generating and following natural language", "venue": null, "year": 1906 }, { "authors": [ "Yiding Jiang", "Shixiang Gu", "Kevin Murphy", "Chelsea Finn" ], "title": "Language as an abstraction for hierarchical deep reinforcement learning", "venue": null, "year": 1906 }, { "authors": [ "Thomas Kollar", "Stefanie Tellex", "Deb Roy", "Nicholas Roy" ], "title": "Toward understanding natural language directions", "venue": "In HRI,", "year": 2010 }, { "authors": [ "Heinrich Küttler", "Nantas Nardelli", "Thibaut Lavril", 
"Marco Selvatici", "Viswanath Sivakumar", "Tim Rocktäschel", "Edward Grefenstette" ], "title": "TorchBeast: A PyTorch Platform for Distributed RL", "venue": "arXiv preprint arXiv:1910.03552,", "year": 2019 }, { "authors": [ "Kenton Lee", "Luheng He", "Mike Lewis", "Luke Zettlemoyer" ], "title": "End-to-end neural coreference resolution", "venue": "In EMNLP,", "year": 2017 }, { "authors": [ "Jiwei Li", "Will Monroe", "Alan Ritter", "Dan Jurafsky", "Michel Galley", "Jianfeng Gao" ], "title": "Deep reinforcement learning for dialogue generation", "venue": null, "year": 2016 }, { "authors": [ "Timothy P. Lillicrap", "Jonathan J. Hunt", "Alexander Pritzel", "Nicolas Manfred Otto Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "CoRR, abs/1509.02971,", "year": 2015 }, { "authors": [ "Jelena Luketina", "Nantas Nardelli", "Gregory Farquhar", "Jakob Foerster", "Jacob Andreas", "Edward Grefenstette", "Shimon Whiteson", "Tim Rocktäschel" ], "title": "A Survey of Reinforcement Learning Informed by Natural Language", "venue": null, "year": 2019 }, { "authors": [ "Minh-Thang Luong", "Hieu Pham", "Christopher D Manning" ], "title": "Effective approaches to attentionbased neural machine translation", "venue": "In ACL,", "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Alex Graves", "Ioannis Antonoglou", "Daan Wierstra", "Martin A. Riedmiller" ], "title": "Playing atari with deep reinforcement learning", "venue": "CoRR, abs/1312.5602,", "year": 2013 }, { "authors": [ "Karthik Narasimhan", "Regina Barzilay", "Tommi S. Jaakkola" ], "title": "Deep transfer in reinforcement learning by language grounding", "venue": null, "year": 2018 }, { "authors": [ "Ethan Perez", "Florian Strub", "Harm de Vries", "Vincent Dumoulin", "Aaron C. Courville" ], "title": "Film: Visual reasoning with a general conditioning layer", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Stefanie Tellex", "Thomas Kollar", "Steven Dickerson", "Matthew R. Walter", "Ashis Gopal Banerjee", "Seth Teller", "Nicholas Roy" ], "title": "Understanding natural language commands for robotic navigation and mobile manipulation", "venue": "In AAAI,", "year": 2011 }, { "authors": [ "T. Tieleman", "G. Hinton" ], "title": "Lecture 6.5—RmsPropG: Divide the gradient by a running average of its recent magnitude", "venue": "COURSERA: Neural Networks for Machine Learning,", "year": 2012 }, { "authors": [ "Sida I. Wang", "Percy Liang", "Christopher D. Manning" ], "title": "Learning language games through interaction", "venue": "In ACL,", "year": 2016 }, { "authors": [ "Victor Zhong", "Caiming Xiong", "Richard Socher" ], "title": "Global-locally self-attentive dialogue state tracker", "venue": "In ACL,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) has been successful in a variety of areas such as continuous control (Lillicrap et al., 2015), dialogue systems (Li et al., 2016), and game-playing (Mnih et al., 2013). However, RL adoption in real-world problems is limited due to poor sample efficiency and failure to generalise to environments even slightly different from those seen during training. We explore language-conditioned policy learning, where agents use machine reading to discover strategies required to solve a task, thereby leveraging language as a means to generalise to new environments.\nPrior work on language grounding and language-based RL (see Luketina et al. (2019) for a recent survey) are limited to scenarios in which language specifies the goal for some fixed environment dynamics (Branavan et al., 2011; Hermann et al., 2017; Bahdanau et al., 2019; Fried et al., 2018; Co-Reyes et al., 2019), or the dynamics of the environment vary and are presented in language for some fixed goal (Branavan et al., 2012). In practice, changes to goals and to environment dynamics tend to occur simultaneously—given some goal, we need to find and interpret relevant information to understand how to achieve the goal. That is, the agent should account for variations in both by selectively reading, thereby generalising to environments with dynamics not seen during training.\nOur contributions are two-fold. First, we propose a grounded policy learning problem that we call Read to Fight Monsters (RTFM). In RTFM, the agent must jointly reason over a language goal, a document that specifies environment dynamics, and environment observations. In particular, it must identify relevant information in the document to shape its policy and accomplish the goal. To necessitate reading comprehension, we expose the agent to ever changing environment dynamics and corresponding language descriptions such that it cannot avoid reading by memorising any particular environment dynamics. We procedurally generate environment dynamics and natural language templated descriptions of dynamics and goals to produced a combinatorially large number of environment dynamics to train and evaluate RTFM.\n∗Work done during an internship at Facebook AI Research.\nSecond, we propose txt2π to model the joint reasoning problem in RTFM. We show that txt2π generalises to goals and environment dynamics not seen during training, and outperforms previous language-conditioned models such as language-conditioned CNNs and FiLM (Perez et al., 2018; Bahdanau et al., 2019) both in terms of sample efficiency and final win-rate on RTFM. Through curriculum learning where we adapt txt2π trained on simpler tasks to more complex tasks, we obtain agents that generalise to tasks with natural language documents that require five hops of reasoning between the goal, document, and environment observations. Our qualitative analyses show that txt2π attends to parts of the document relevant to the goal and environment observations, and that the resulting agents exhibit complex behaviour such as retrieving correct items, engaging correct enemies after acquiring correct items, and avoiding incorrect enemies. Finally, we highlight the complexity of RTFM in scaling to longer documents, richer dynamics, and natural language variations. We show that significant improvement in language-grounded policy learning is needed to solve these problems in the future." }, { "heading": "2 RELATED WORK", "text": "Language-conditioned policy learning. 
A growing body of research studies learning policies that follow imperative instructions. The granularity of instructions varies from high-level instructions for application control (Branavan, 2012) and games (Hermann et al., 2017; Bahdanau et al., 2019) to step-by-step navigation (Fried et al., 2018). In contrast to learning policies for imperative instructions, Branavan et al. (2011; 2012); Narasimhan et al. (2018) infer a policy for a fixed goal using features extracted from high-level strategy descriptions and general information about domain dynamics. Unlike prior work, we study the combination of imperative instructions and descriptions of dynamics. Furthermore, we require that the agent learn to filter out irrelevant information to focus on dynamics relevant to accomplishing the goal.\nLanguage grounding. Language grounding refers to interpreting language in a non-linguistic context. Examples of such context include images (Barnard & Forsyth, 2001), games (Chen & Mooney, 2008; Wang et al., 2016), robot control (Kollar et al., 2010; Tellex et al., 2011), and navigation (Anderson et al., 2018). We study language grounding in interactive games similar to Branavan (2012); Hermann et al. (2017) or Co-Reyes et al. (2019), where executable semantics are not provided and the agent must learn through experience. Unlike prior work, we require grounding between an underspecified goal, a document of environment dynamics, and world observations. In addition, we focus on generalisation not only to new goal descriptions but also to new environment dynamics." }, { "heading": "3 READ TO FIGHT MONSTERS", "text": "We consider a scenario where the agent must jointly reason over a language goal, relevant environment dynamics specified in a text document, and environment observations. In reading the document, the agent should identify relevant information key to solving the goal in the environment. A successful agent needs to perform this language grounding to generalise to new environments with dynamics not seen during training.\nTo study generalisation via reading, the environment dynamics must differ every episode such that the agent cannot avoid reading by memorising a limited set of dynamics. Consequently, we procedurally generate a large number of unique environment dynamics (e.g. effective(blessed items, poison monsters)), along with language descriptions of environment dynamics (e.g. blessed items are effective against poison monsters) and goals (e.g. Defeat the order of the forest). We couple a large, customisable ontology inspired by rogue-like games such as NetHack or Diablo, with natural language templates, to create a combinatorially rich set of environment dynamics to learn from and evaluate on.\nIn RTFM, the agent is given a document of environment dynamics, observations of the environment, and an underspecified goal instruction. Figure 1 illustrates an instance of the game. Concretely, we design a set of dynamics that consists of monsters (e.g. wolf, goblin), teams (e.g. Order of the Forest), element types (e.g. fire, poison), item modifiers (e.g. fanatical, arcane), and items (e.g. sword, hammer). When the player is in the same cell with a monster or weapon, the player picks up the item or engages in combat with the monster. The player can possess one item at a time, and drops existing weapons if they pick up a new weapon. A monster moves towards the player with 60% probability, and otherwise moves randomly. The dynamics, the agent’s inventory, and the underspecified goal are rendered as text. 
The game world is rendered as a matrix of text in which each cell describes the entity occupying the cell. We use human-written templates for stating which monsters belong to which team, which modifiers are effective against which element, and which team the agent should defeat (see appendix H for details on collection and appendix G for a list of entities in the game). In order to achieve the goal, the agent must cross-reference relevant information in the document as well as in the observations.\nDuring every episode, we subsample a set of groups, monsters, modifiers, and elements to use. We randomly generate group assignments of which monsters belong to which team and which modifier is effective against which element. A document that consists of randomly ordered statements corresponding to this group assignment is presented to the agent. We sample one element, one team, and a monster from that team (e.g. “fire goblin” from “Order of the forest”) to be the target monster. Additionally, we sample one modifier that beats the element and an item to be the item that defeats the target monster (e.g. “fanatical sword”). Similarly, we sample an element, a team, and a monster from a different team to be the distractor monster (e.g. poison bat), as well as an item that defeats the distractor monster (e.g. arcane hammer).\nIn order to win the game (e.g. Figure 1), the agent must\n1. identify the target team from the goal (e.g. Order of the Forest)\n2. identify the monsters that belong to that team (e.g. goblin, jaguar, and ghost)\n3. identify which monster is in the world (e.g. goblin), and its element (e.g. fire)\n4. identify the modifiers that are effective against this element (e.g. fanatical, shimmering)\n5. find which modifier is present (e.g. fanatical), and the item with the modifier (e.g. sword)\n6. pick up the correct item (e.g. fanatical sword)\n7. engage the correct monster in combat (e.g. fire goblin).\nIf the agent deviates from this trajectory (e.g. does not have the correct item before engaging in combat, or engages the distractor monster), it cannot defeat the target monster and therefore will lose the game. The agent receives a reward of +1 if it wins the game and -1 otherwise.\nRTFM presents challenges not found in prior work in that it requires a large number of grounding steps in order to solve a task. In order to perform this grounding, the agent must jointly reason over a language goal and a document of dynamics, as well as environment observations. In addition to the environment, the positions of the target and distractor within the document are randomised; the agent cannot memorise ordering patterns in order to solve the grounding problems, and must instead identify information relevant to the goal and environment at hand.\nWe split environments into train and eval sets. No assignments of monster-team-modifier-element are shared between train and eval, to test whether the agent is able to generalise to new environments with dynamics not seen during training via reading. There are more than 2 million train or eval environments without considering the natural language templates, and 200 million otherwise. With random ordering of templates, the number of unique documents exceeds 15 billion.
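To make this episode-generation procedure concrete, the following is a minimal Python sketch (our own illustration; the function and variable names are assumptions, not the released environment code):

import random

def sample_episode(monsters, teams, elements, modifiers, items):
    # Deal monsters out to teams so that every team is non-empty.
    monsters = list(monsters)
    random.shuffle(monsters)
    team_of = {m: teams[i % len(teams)] for i, m in enumerate(monsters)}
    # For each element, pick the modifiers that are effective against it (e.g. two of them).
    effective = {e: random.sample(modifiers, 2) for e in elements}
    # Target: a team, one of its monsters, an element, and an item that defeats it.
    target_team, other_team = random.sample(teams, 2)
    target_monster = random.choice([m for m in monsters if team_of[m] == target_team])
    target_element = random.choice(elements)
    good_item = (random.choice(effective[target_element]), random.choice(items))
    # Distractor: a monster from a different team with a different element, plus its item.
    distractor = random.choice([m for m in monsters if team_of[m] == other_team])
    distractor_element = random.choice([e for e in elements if e != target_element])
    bad_item = (random.choice(effective[distractor_element]), random.choice(items))
    return (team_of, effective, (target_element, target_monster), good_item,
            (distractor_element, distractor), bad_item)

The document shown to the agent would then be composed from randomly ordered templated statements describing team_of and effective.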
" }, { "heading": "4 MODEL", "text": "We propose the txt2π model, which builds representations that capture three-way interactions between the goal, the document describing environment dynamics, and environment observations. We begin with the definition of the Bidirectional Feature-wise Linear Modulation (FiLM2) layer, which forms the core of our model.\n4.1 BIDIRECTIONAL FEATURE-WISE LINEAR MODULATION (FILM2) LAYER\nFeature-wise linear modulation (FiLM), which modulates visual inputs using representations of textual instructions, is an effective method for image captioning (Perez et al., 2018) and instruction following (Bahdanau et al., 2019). In RTFM, the agent must not only filter concepts in the visual domain using language but also filter concepts in the text domain using visual observations. To support this, FiLM2 builds codependent representations of text and visual inputs by further incorporating conditional representations of the text given visual observations. Figure 2 shows the FiLM2 layer.\nWe use upper-case bold letters to denote tensors, lower-case bold letters for vectors, and non-bold letters for scalars. Exact dimensions of these variables are shown in Table 4 in appendix B. Let x_text denote a fixed-length d_text-dimensional representation of the text and X_vis the representation of visual inputs with height H, width W, and d_vis channels. Let Conv denote a convolution layer. Let + and ∗ denote element-wise addition and multiplication operations that broadcast over spatial dimensions. We first modulate visual features using text features:\nγ_text = W_γ x_text + b_γ   (1)\nβ_text = W_β x_text + b_β   (2)\nV_vis = ReLU((1 + γ_text) ∗ Conv_vis(X_vis) + β_text)   (3)\nUnlike FiLM, we additionally modulate text features using visual features:\nΓ_vis = Conv_γ(X_vis)   (4)\nB_vis = Conv_β(X_vis)   (5)\nV_text = ReLU((1 + Γ_vis) ∗ (W_text x_text + b_text) + B_vis)   (6)\nThe output of the FiLM2 layer consists of the sum of the modulated features V, as well as a max-pooled summary s over this sum across spatial dimensions:\nV = V_vis + V_text   (7)\ns = MaxPool(V)   (8)
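As a concrete rendering of equations (1)-(8), the following is a minimal PyTorch sketch of a FiLM2 layer (our own illustration with assumed dimensions; it is not the authors' released implementation):

import torch
import torch.nn as nn

class FiLM2(nn.Module):
    def __init__(self, d_text, d_vis, d_out):
        super().__init__()
        self.w_gamma = nn.Linear(d_text, d_out)                   # eq. (1)
        self.w_beta = nn.Linear(d_text, d_out)                    # eq. (2)
        self.conv_vis = nn.Conv2d(d_vis, d_out, 3, padding=1)
        self.conv_gamma = nn.Conv2d(d_vis, d_out, 3, padding=1)   # eq. (4)
        self.conv_beta = nn.Conv2d(d_vis, d_out, 3, padding=1)    # eq. (5)
        self.w_text = nn.Linear(d_text, d_out)

    def forward(self, x_vis, x_text):
        # x_vis: (B, d_vis, H, W); x_text: (B, d_text)
        gamma = self.w_gamma(x_text)[:, :, None, None]
        beta = self.w_beta(x_text)[:, :, None, None]
        v_vis = torch.relu((1 + gamma) * self.conv_vis(x_vis) + beta)                  # eq. (3)
        t = self.w_text(x_text)[:, :, None, None]
        v_text = torch.relu((1 + self.conv_gamma(x_vis)) * t + self.conv_beta(x_vis))  # eq. (6)
        v = v_vis + v_text                                                             # eq. (7)
        s = v.flatten(2).max(dim=2).values                                             # eq. (8)
        return v, s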
\n4.2 THE TXT2π MODEL\nWe model interactions between observations from the environment, goal, and document using FiLM2 layers. We first encode text inputs using bidirectional LSTMs, then compute summaries using self-attention and conditional summaries using attention. We concatenate text summaries into text features, which, along with visual features, are processed through consecutive FiLM2 layers. In the case of a textual environment, we consider the grid of word embeddings to be the visual features for FiLM2. The final FiLM2 output is further processed by MLPs to compute a policy distribution over actions and a baseline for advantage estimation. Figure 3 shows the txt2π model.\nLet E_obs denote word embeddings corresponding to the observations from the environment, where E_obs[:, :, i, j] represents the embeddings corresponding to the l_obs-word string that describes the objects in location (i, j) in the grid-world. Let E_doc, E_inv, and E_goal respectively denote the embeddings corresponding to the l_doc-word document, the l_inv-word inventory, and the l_goal-word goal. We first compute a fixed-length summary c_goal of the goal using a bidirectional LSTM (Hochreiter & Schmidhuber, 1997) followed by self-attention (Lee et al., 2017; Zhong et al., 2018):\nH_goal = BiLSTM_goal(E_goal)   (9)\na'_{goal,i} = w_goal h_{goal,i}^T + b_goal   (10)\na_goal = softmax(a'_goal)   (11)\nc_goal = Σ_{i=1}^{l_goal} a_{goal,i} h_{goal,i}   (12)\nWe abbreviate self-attention over the goal as c_goal = selfattn(H_goal). We similarly compute a summary of the inventory as c_inv = selfattn(BiLSTM_inv(E_inv)). Next, we represent the document encoding conditioned on the goal using dot-product attention (Luong et al., 2015):\nH_doc = BiLSTM_goal-doc(E_doc)   (13)\na'_{doc,i} = c_goal h_{doc,i}^T   (14)\na_doc = softmax(a'_doc)   (15)\nc_doc = Σ_{i=1}^{l_doc} a_{doc,i} h_{doc,i}   (16)\nWe abbreviate attention over the document encoding conditioned on the goal summary as c_doc = attend(H_doc, c_goal). Next, we build the joint representation of the inputs using successive FiLM2 layers. At each layer, the visual input to the FiLM2 layer is the concatenation of the output of the previous layer with positional features. For each cell, the positional feature X_pos consists of the x and y distance from the cell to the agent's position, normalised by the width and height of the grid-world. The text input is the concatenation of the goal summary, the inventory summary, the attention over the document given the goal, and the attention over the document given the previous visual summary. Let [a; b] denote the feature-wise concatenation of a and b. For the ith layer, we have\nR^(i) = [V^(i−1); X_pos]   (17)\nT^(i) = [c_goal; c_inv; c_doc; attend(BiLSTM_vis-doc(E_doc), s^(i−1))]   (18)\nV^(i), s^(i) = FiLM2^(i)(R^(i), T^(i))   (19)\nBiLSTM_vis-doc(E_doc) is another encoding of the document similar to H_goal, produced using a separate LSTM, such that the document is encoded differently for attention with the visual features and with the goal. For i = 0, we concatenate the bag-of-words embeddings of the grid with positional features as the initial visual features V^(0) = [Σ_j E_{obs,j}; X_pos]. We max-pool a linear transform of the initial visual features to compute the initial visual summary s^(0) = MaxPool(W_ini V^(0) + b_ini). Let s^(last) denote the visual summary of the last FiLM2 layer. We compute the policy y_policy and baseline y_baseline as\no = ReLU(W_o s^(last) + b_o)   (20)\ny_policy = MLP_policy(o)   (21)\ny_baseline = MLP_baseline(o)   (22)\nwhere MLP_policy and MLP_baseline are 2-layer multi-layer perceptrons with ReLU activation.\nTable 1: Final win rate on the simplest variant of RTFM. The models are trained on one set of dynamics (the training set) and evaluated on another set of dynamics (the evaluation set). “Train” and “Eval” show final win rates on training and eval environments.\nModel | Train | Eval 6×6 | Eval 10×10\nconv | 24 ± 0 | 25 ± 1 | 13 ± 1\nFiLM | 49 ± 1 | 49 ± 2 | 32 ± 3\nno task attn | 49 ± 2 | 49 ± 2 | 35 ± 6\nno vis attn | 49 ± 2 | 49 ± 1 | 40 ± 12\nno text mod | 49 ± 1 | 49 ± 2 | 35 ± 2\ntxt2π | 84 ± 21 | 83 ± 21 | 66 ± 22\nWe train using TorchBeast (Küttler et al., 2019), an implementation of IMPALA (Espeholt et al., 2018). Please refer to appendix D for details.
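The selfattn and attend operators in equations (9)-(16) can be sketched as follows (our own illustrative PyTorch with assumed shapes, not the released code):

import torch
import torch.nn as nn

def attend(H, c):
    # Eqs. (13)-(16). H: (B, L, d) encoder states; c: (B, d) conditioning summary.
    scores = torch.einsum('bld,bd->bl', H, c)      # a'_i = c h_i^T
    weights = torch.softmax(scores, dim=1)         # a = softmax(a')
    return torch.einsum('bl,bld->bd', weights, H)  # weighted sum of the states

class SelfAttn(nn.Module):
    # Eqs. (10)-(12): scores come from a learned projection instead of a summary.
    def __init__(self, d):
        super().__init__()
        self.scorer = nn.Linear(d, 1)

    def forward(self, H):
        weights = torch.softmax(self.scorer(H).squeeze(-1), dim=1)
        return torch.einsum('bl,bld->bd', weights, H)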
" }, { "heading": "5 EXPERIMENTS", "text": "We consider variants of RTFM by varying the size of the grid-world (6 × 6 vs 10 × 10), allowing many-to-one group assignments to make disambiguation more difficult (group), allowing dynamic, moving monsters that hunt down the player (dyna), and using natural language templated documents (nl). In the absence of many-to-one assignments, the agent does not need to perform steps 3 and 5 in section 3, as there is no need to disambiguate among many assignees, making it easier to identify relevant information.\nWe compare txt2π to the FiLM model by Bahdanau et al. (2019) and a language-conditioned residual CNN model. We train on one set of dynamics (e.g. group assignments of monsters and modifiers) and evaluate on a held-out set of dynamics. We also study three variants of txt2π. In no task attn, the document attention conditioned on the goal utterance (equation 16) is removed and the goal is instead represented through self-attention and concatenated with the rest of the text features. In no vis attn, we do not attend over the document given the visual output of the previous layer (equation 18), and the document is instead represented through self-attention. In no text mod, text modulation using visual features (equation 6) is removed. Please see appendix C for details on our model and baselines, and appendix D for training details." }, { "heading": "5.1 COMPARISON TO BASELINES AND ABLATIONS", "text": "We compare txt2π to baselines and ablated variants on a simplified variant of RTFM in which there are one-to-one group assignments (no group), stationary monsters (no dyna), and no natural language templated descriptions (no nl). Figure 4 shows that, compared to baselines and ablated variants, txt2π is more sample efficient and converges to higher performance. Moreover, no ablated variant is able to solve the tasks; it is the combination of the ablated features that enables txt2π to win consistently. Qualitatively, the ablated variants converge to locally optimal policies in which the agent often picks up a random item and then attacks the correct monster, resulting in a ∼50% win rate. Table 1 shows that all models, with the exception of the CNN baseline, generalise to new evaluation environments with dynamics and world configurations not seen during training, with txt2π outperforming FiLM and the CNN model.\nWe find similar results for txt2π, its ablated variants, and baselines on a separate, language-based rock-paper-scissors task in which the agent needs to deduce cyclic dependencies (which type beats which other type) through reading in order to acquire the correct item and defeat a monster. We observe that the performance of reading models transfers from training environments to new environments with unseen types and unseen dependencies. Compared to ablated variants and baselines, txt2π is more sample efficient and achieves higher performance on both training and new environment dynamics. When transferring to new environments, txt2π remains more sample efficient than the other models. Details on these experiments are found in appendix E." }, { "heading": "5.2 CURRICULUM LEARNING FOR COMPLEX ENVIRONMENTS", "text": "Due to the long sequence of co-references the agent must perform in order to solve the full RTFM (10 × 10 with moving monsters, many-to-one group assignments, and natural language templated documents), we design a curriculum to facilitate policy learning by starting with simpler variants of RTFM. We start with the simplest variant (no group, no dyna, no nl) and then add in an additional dimension of complexity. We repeatedly add more complexity until we obtain 10 × 10 worlds with moving monsters, many-to-one group assignments, and natural language templated descriptions. The performance across the curriculum is shown in Table 2 (see Figure 13 in appendix F for training curves of each stage). We see that curriculum learning is crucial to making progress on RTFM, and that initial policy training (first row of Table 2) with additional complexities in any of the dimensions results in significantly worse performance. We take each of the 5 runs after training through the whole curriculum and evaluate them on dynamics not seen during training. 
Table 3 shows variants of the last stage of the curriculum in which the model was trained on 6 × 6 versions of the full RTFM and in which the model was trained on 10 × 10 versions of the full RTFM. We see that models trained on smaller worlds generalise to bigger worlds. Despite curriculum learning, however, the performance of the final model trails that of human players, who can consistently solve RTFM. This highlights the difficulty of the RTFM problem and suggests that there is significant room for improvement in developing better language-grounded policy learners.\nAttention maps. Figure 5 shows attention conditioned on the goal and on observation summaries produced by intermediate FiLM2 layers. Goal-conditioned attention consistently locates the clause that contains the team the agent is supposed to attack. Intermediate layer attentions focus on regions near modifiers and monsters, particularly those that are present in the observations. These results suggest that the attention mechanisms in txt2π help identify relevant information in the document.\nAnalysis of trajectories and failure modes. We examine trajectories from well-performing policies (80% win rate) as well as poorly-performing policies (50% win rate) on the full RTFM. We find that well-performing policies exhibit a number of consistent behaviours, such as identifying the correct item to pick up to fight the target monster, avoiding distractors, and engaging target monsters after acquiring the correct item. In contrast, the poorly-performing policies occasionally pick up the wrong item, causing the agent to lose when engaging with a monster. In addition, they occasionally get stuck evading monsters indefinitely, causing the agent to lose when the time runs out. Replays of both policies can be found in GIFs in the supplementary materials.1" }, { "heading": "6 CONCLUSION", "text": "We proposed RTFM, a grounded policy learning problem in which the agent must jointly reason over a language goal, relevant dynamics specified in a document, and environment observations. In order to study RTFM, we procedurally generated a combinatorially large number of environment dynamics such that the model cannot memorise a set of environment dynamics and must instead generalise via reading. We proposed txt2π, a model that captures three-way interactions between the goal, document, and observations, and that generalises to new environments with dynamics not seen during training. txt2π outperforms baselines such as FiLM and language-conditioned CNNs. Through curriculum learning, txt2π performs well on complex RTFM tasks that require several reasoning and coreference steps with natural language templated goals and descriptions of the dynamics. Our work suggests that language understanding via reading is a promising way to learn policies that generalise to new environments. Despite curriculum learning, our best models trail the performance of human players, suggesting that there is ample room for improvement in grounded policy learning on complex RTFM problems. In addition to jointly learning policies based on external documentation and language goals, we are interested in exploring how to use supporting evidence in external documentation to reason about plans (Andreas et al., 2018) and induce hierarchical policies (Hu et al., 2019; Jiang et al., 2019).\n1 Trajectories by txt2π on RTFM can be found at https://gofile.io/?c=9k7ZLk" 
}, { "heading": "ACKNOWLEDGEMENT", "text": "We thank Heinrich Küttler and Nantas Nardelli for their help in adapting TorchBeast and the FAIR London team for their feedback and support." }, { "heading": "A PLAYTHROUGH EXAMPLES", "text": "These figures shows key snapshots from a trained policy on randomly sampled environments.\nB VARIABLE DIMENSIONS\nLet xtext ∈ Rdtext denote a fixed-length dtext-dimensional representation of the text and Xvis ∈ Rdvis×H×W denote the representation of visual inputs with\nVariable Symbol Dimension\ndtext-dim text representation xtext dtext dvis-dim visual representation with height H , width W , dvis channels Xvis dvis ×H ×W\nEnvironment observations embeddings Eobs lobs × demb ×H ×W lobs-word string that describes the objects in location (i, j) in the grid-world Eobs[:, :, i, j] lobs × demb" }, { "heading": "C MODEL DETAILS", "text": "C.1 TXT2π\nHyperparameters. The txt2π used in our experiments consists of 5 consecutive FiLM2 layers, each with 3x3 convolutions and padding and stride sizes of 1. The txt2π layers have channels of 16, 32, 64, 64, and 64, with residual connections from the 3rd layer to the 5th layer. The Goal-doc LSTM (see Figure 3) shares weight with the Goal LSTM. The Inventory and Goal LSTMs have a hidden dimension of size 10, whereas the Vis-doc LSTM has a dimension of 100. We use a word embedding dimension of 30.\nC.2 CNN WITH RESIDUAL CONNECTIONS\nLike txt2π, the CNN baseline consists of 5 layers of convolutions with channels of 16, 32, 64, 64, and 64. There are residual connections from the 3rd layer to the 5th layer. The input to each layer consists of the output of the previous layer, concatenated with positional features.\nThe input to the network is the concatenation of the observations V (0) and text representations. The text representations consist of self-attention over bidirectional LSTM-encoded goal, document, and inventory. These attention outputs are replicated over the dimensions of the grid and concatenated feature-wise with the observation embeddings in each cell. Figure 8 illustrates the CNN baseline.\nC.3 FILM BASELINE\nThe FiLM baseline encodes text in the same fashion as the CNN model. However, instead of using convolutional layers, each layer is a FiLM layer from Bahdanau et al. (2019). Note that in our case, the language representation is a self-attention over the LSTM states instead of a concatenation of terminal LSTM states." }, { "heading": "D TRAINING PROCEDURE", "text": "We train using an implementation of IMPALA (Espeholt et al., 2018). In particular, we use 20 actors and a batch size of 24. When unrolling actors, we use a maximum unroll length of 80 frames. Each episode lasts for a maximum of 1000 frames. We optimise using RMSProp (Tieleman & Hinton, 2012) with a learning rate of 0.005, which is annealed linearly for 100 million frames. We set α = 0.99 and = 0.01.\nDuring training, we apply a small negative reward for each time step of −0.02 and a discount factor of 0.99 to facilitate convergence. We additionally include a entropy cost to encourage exploration. Let ypolicy denote the policy. The entropy loss is calculated as\nLpolicy = − ∑ i ypolicyi log ypolicyi (23)\nIn addition to policy gradient, we add in the entropy loss with a weight of 0.005 and the baseline loss with a weight of 0.5. The baseline loss is computed as the root mean square of the advantages (Espeholt et al., 2018).\nWhen tuning models, we perform a grid search using the training environments to select hyperparameters for each model. 
When tuning models, we perform a grid search using the training environments to select hyperparameters for each model. We train 5 runs for each configuration in order to report the mean and standard deviation. When transferring, we transfer each of the 5 runs to the new task and once again report the mean and standard deviation.\nE ROCK-PAPER-SCISSORS\nIn addition to the main RTFM tasks, we also study a simpler formulation called Rock-paper-scissors that has a fixed goal. In Rock-paper-scissors, the agent must interpret a document that describes the environment dynamics in order to solve the task. Given a set of characters (e.g. a-z), we sample 3 characters and set up a rock-paper-scissors-like dependency graph between the characters (e.g. “a beats b, b beats c, c beats a”). We then spawn a monster in the world with a randomly assigned type (e.g. “b goblin”), as well as an item corresponding to each type (e.g. “a”, “b”, and “c”). The attributes of the agent, monster, and items are set up such that the player must obtain the correct item and then engage the monster in order to win. Any other sequence of actions (e.g. engaging the monster without the correct weapon) results in a loss. The winning policy should then be to first identify the type of monster present, then cross-reference the document to find which item defeats that type, then pick up the item, and finally engage the monster in combat. Figure 9 shows an instance of Rock-paper-scissors.
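A minimal sketch of this cyclic dependency construction (our own illustration, not the environment code):

import random

def sample_rps_dynamics(alphabet):
    # Sample 3 distinct characters and wire them into a cycle: a beats b beats c beats a.
    a, b, c = random.sample(list(alphabet), 3)
    beats = {a: b, b: c, c: a}                    # winner -> loser
    monster_type = random.choice([a, b, c])
    # The correct item is the type whose victim is the monster's type.
    correct_item = next(t for t, loser in beats.items() if loser == monster_type)
    return beats, monster_type, correct_item

The document presented to the agent would then be a templated rendering of beats, and the winning policy must recover correct_item from it by reading.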
When transferring to the held-out set, txt2π remains more sample efficient than the other models.\n2There are 24360 unique grid configurations given a particular dependency graph, 4060 unique dependency graphs in the training set, and 50 million frames seen during training. After training, the model finishes an episode in approximately 10 frames. Hence the probability of seeing a redundant initial frame is 5e7/10\n24360∗4060 = 5%." }, { "heading": "F CURRICULUM LEARNING TRAINING CURVES", "text": "" }, { "heading": "G ENTITIES AND MODIFIERS", "text": "Below is a list of entities and modifiers contained in RTFM:\nMonsters: wolf, jaguar, panther, goblin, bat, imp, shaman, ghost, zombie\nWeapons: sword, axe, morningstar, polearm, knife, katana, cutlass, spear\nElements: cold, fire, lightning, poison\nModifiers: Grandmaster’s, blessed, shimmering, gleaming, fanatical, mysterious, Soldier’s, arcane\nTeams: Star Alliance, Order of the Forest, Rebel Enclave" }, { "heading": "H LANGUAGE TEMPLATES", "text": "We collect human-written natural language templates for the goal and the dynamics. The goal statements in RTFM describe which team the agent should defeat. We collect 12 language templates for goal statements. The document of environment dynamics consists of two types of statements. The first type describes which monsters are assigned to with team. The second type describes which modifiers, which describe items, are effective against which element types, which are associated with monsters. We collection 10 language templates for each type of statements. The entire document is composed from statements, which are randomly shuffled. We randomly sample a template for each statement, which we fill with the monsters and team for the first type and modifiers and element for the second type." } ]
2,020
null
SP:5eed765bdae8974a4dc216b49631d9709767e29e
[ "In this paper, the author maps the problem of time series PDE into a naive reinforcement learning problem. Under the MDP assumption, the author sets the initial state of the particles as the current state, the flux at all spaces as the possible actions, and map the state-action pair deterministically to the next state of the particle diffusion. The reward is defined as the two norms between the prediction and the Burger’s equation. The naiveness comes from the fact that the typical reinforcement learning problem, the agent needs to decide how to choose an action. In this paper, it is formulated as an intrinsic proper that follows Burger’s equation instead. ", "This paper proposes to use reinforcement learning for constructing discretziation stencils of numerical schemes. More specifically, the method focuses on the widely used WENO schemes, which are an established class of finite difference schemes. Within this context, the method aims for training models to infer the weighting for a specific stencil with eight flux terms." ]
Conservation laws are considered to be fundamental laws of nature. They have broad applications in many fields, including physics, chemistry, biology, geology, and engineering. Solving the differential equations associated with conservation laws is a major branch of computational mathematics. The recent success of machine learning, especially deep learning, in areas such as computer vision and natural language processing has attracted a lot of attention from the community of computational mathematics and inspired many intriguing works combining machine learning with traditional methods. In this paper, we are the first to view numerical PDE solvers as an MDP and to use (deep) RL to learn new solvers. As a proof of concept, we focus on 1-dimensional scalar conservation laws. We deploy the machinery of deep reinforcement learning to train a policy network that can decide how the numerical solutions should be approximated in a sequential and spatio-temporally adaptive manner. We will show that the problem of solving conservation laws can be naturally viewed as a sequential decision making process, and that the numerical schemes learned in such a way can easily enforce long-term accuracy. Furthermore, the learned policy network is carefully designed to determine a good local discrete approximation based on the current state of the solution, which essentially makes the proposed method a meta-learning approach. In other words, the proposed method is capable of learning how to discretize for a given situation, mimicking human experts. Finally, we will provide details on how the policy network is trained, how well it performs compared with some state-of-the-art numerical solvers such as WENO schemes, and how well it generalizes. Our code is released anonymously at https://github.com/qwerlanksdf/L2D.
[]
[ { "authors": [ "Marcin Andrychowicz", "Misha Denil", "Sergio Gomez", "Matthew W Hoffman", "David Pfau", "Tom Schaul", "Brendan Shillingford", "Nando De Freitas" ], "title": "Learning to learn by gradient descent by gradient descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Christian Beck", "E Weinan", "Arnulf Jentzen" ], "title": "Machine learning approximation algorithms for highdimensional fully nonlinear partial differential equations and second-order backward stochastic differential equations", "venue": "Journal of Nonlinear Science,", "year": 2017 }, { "authors": [ "Samy Bengio", "Yoshua Bengio", "Jocelyn Cloutier", "Jan Gecsei" ], "title": "On the optimization of a synaptic learning rule", "venue": "In Preprints Conf. Optimality in Artificial and Biological Neural Networks,", "year": 1992 }, { "authors": [ "Gert-Jan Both", "Subham Choudhury", "Pierre Sens", "Remy Kusters" ], "title": "Deepmod: Deep learning for model discovery in noisy data", "venue": null, "year": 1904 }, { "authors": [ "Bo Chang", "Lili Meng", "Eldad Haber", "Frederick Tung", "David Begert" ], "title": "Multi-level residual networks from dynamical systems view", "venue": "arXiv preprint arXiv:1710.10348,", "year": 2017 }, { "authors": [ "Tian Qi Chen", "Yulia Rubanova", "Jesse Bettencourt", "David K Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Niccolo’ Discacciati", "Jan S Hesthaven", "Deep Ray" ], "title": "Controlling oscillations in high-order discontinuous galerkin schemes using artificial viscosity tuned by neural networks", "venue": "Technical report,", "year": 2019 }, { "authors": [ "Qingnan Fan", "Dongdong Chen", "Lu Yuan", "Gang Hua", "Nenghai Yu", "Baoquan Chen" ], "title": "Decouple learning for parameterized image operators", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Yuwei Fan", "Lin Lin", "Lexing Ying", "Leonardo" ], "title": "Zepeda-Núnez. 
A multiscale neural network based on hierarchical matrices", "venue": "arXiv preprint arXiv:1807.01883,", "year": 2018 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Jiequn Han", "Arnulf Jentzen", "E Weinan" ], "title": "Solving high-dimensional partial differential equations using deep learning", "venue": "Proceedings of the National Academy of Sciences,", "year": 2018 }, { "authors": [ "Silvia Jerez Galiano", "Miguel Uh Zapata" ], "title": "A new tvd flux-limiter method for solving nonlinear hyperbolic equations", "venue": "Journal of Computational and Applied Mathematics,", "year": 2010 }, { "authors": [ "Meiguang Jin", "Stefan Roth", "Paolo Favaro" ], "title": "Noise-blind image deblurring", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Yuehaw Khoo", "Lexing Ying" ], "title": "Switchnet: a neural network model for forward and inverse scattering problems", "venue": "arXiv preprint arXiv:1810.09675,", "year": 2018 }, { "authors": [ "Yuehaw Khoo", "Jianfeng Lu", "Lexing Ying" ], "title": "Solving parametric pde problems with artificial neural networks", "venue": "arXiv preprint arXiv:1707.03351,", "year": 2017 }, { "authors": [ "Randall J LeVeque" ], "title": "Numerical methods for conservation laws, volume 132", "venue": null, "year": 1992 }, { "authors": [ "Randall J LeVeque" ], "title": "Finite volume methods for hyperbolic problems, volume 31", "venue": "Cambridge university press,", "year": 2002 }, { "authors": [ "Ke Li", "Jitendra Malik" ], "title": "Learning to optimize", "venue": "arXiv preprint arXiv:1606.01885,", "year": 2016 }, { "authors": [ "Yingzhou Li", "Jianfeng Lu", "Anqi Mao" ], "title": "Variational training of neural network approximations of solution maps for physical models", "venue": null, "year": 1905 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "arXiv preprint arXiv:1509.02971,", "year": 2015 }, { "authors": [ "Xu-Dong Liu", "Stanley Osher", "Tony Chan" ], "title": "Weighted essentially non-oscillatory schemes", "venue": "Journal of computational physics,", "year": 1994 }, { "authors": [ "Zichao Long", "Yiping Lu", "Bin Dong" ], "title": "Pde-net 2.0: Learning pdes from data with a numericsymbolic hybrid deep network", "venue": "arXiv preprint arXiv:1812.04426,", "year": 2018 }, { "authors": [ "Zichao Long", "Yiping Lu", "Xianzhong Ma", "Bin Dong" ], "title": "Pde-net: Learning pdes from data", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Yiping Lu", "Aoxiao Zhong", "Quanzheng Li", "Bin Dong" ], "title": "Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations", "venue": null, "year": 2018 }, { "authors": [ "Jim Magiera", "Deep Ray", "Jan S Hesthaven", "Christian Rohde" ], "title": "Constraint-aware neural networks for riemann problems", "venue": "arXiv preprint arXiv:1904.12794,", "year": 2019 }, { "authors": [ "Stéphane G Mallat", "Zhifeng Zhang" ], "title": "Matching pursuits with time-frequency dictionaries", "venue": "IEEE Transactions on signal processing,", "year": 1993 }, { "authors": [ "Craig Michoski", "Milos 
Milosavljevic", "Todd Oliver", "David Hatch" ], "title": "Solving irregular and dataenriched differential equations using deep neural networks", "venue": null, "year": 1905 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "Tong Qin", "Kailiang Wu", "Dongbin Xiu" ], "title": "Data driven governing equations approximation using deep neural networks", "venue": "arXiv preprint arXiv:1811.05537,", "year": 2018 }, { "authors": [ "Maziar Raissi", "Paris Perdikaris", "George Em Karniadakis" ], "title": "Physics informed deep learning (part i): Data-driven solutions of nonlinear partial differential equations", "venue": "arXiv preprint arXiv:1711.10561,", "year": 2017 }, { "authors": [ "Maziar Raissi", "Paris Perdikaris", "George Em Karniadakis" ], "title": "Physics informed deep learning (part ii): Data-driven discovery of nonlinear partial differential equations", "venue": "arXiv preprint arXiv:1711.10566,", "year": 2017 }, { "authors": [ "Deep Ray", "Jan S Hesthaven" ], "title": "An artificial neural network as a troubled-cell indicator", "venue": "Journal of Computational Physics,", "year": 2018 }, { "authors": [ "Jurgen Schmidhuber" ], "title": "Evolutionary principles in self-referential learning. On learning how to learn: The meta-meta-.", "venue": "hook.) Diploma thesis, Institut f. Informatik, Tech. Univ. Munich,", "year": 1987 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In Proceedings of the 32nd International Conference on Machine Learning", "year": 2015 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Chi-Wang Shu" ], "title": "Essentially non-oscillatory and weighted essentially non-oscillatory schemes for hyperbolic conservation laws. In Advanced numerical approximation of nonlinear hyperbolic equations", "venue": null, "year": 1998 }, { "authors": [ "David Silver", "Aja Huang", "Chris J Maddison", "Arthur Guez", "Laurent Sifre", "George Van Den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot" ], "title": "Mastering the game of go with deep neural networks and tree", "venue": "search. 
Nature,", "year": 2016 }, { "authors": [ "Joaquin Vanschoren" ], "title": "Meta-learning: A survey", "venue": "arXiv preprint arXiv:1810.03548,", "year": 2018 }, { "authors": [ "Shiyin Wei", "Xiaowei Jin", "Hui Li" ], "title": "General solutions for nonlinear differential equations: a rule-based self-learning approach using deep reinforcement learning", "venue": "Computational Mechanics,", "year": 2019 }, { "authors": [ "E Weinan" ], "title": "A proposal on machine learning via dynamical systems", "venue": "Communications in Mathematics and Statistics,", "year": 2017 }, { "authors": [ "Stephan Hoyer" ], "title": "Data-driven discretization: a method for systematic coarse graining of partial differential equations", "venue": "arXiv preprint arXiv:1808.04930,", "year": 2018 }, { "authors": [ "Xiaoshuai Zhang", "Yiping Lu", "Jiaying Liu", "Bin Dong" ], "title": "Dynamically unfolding recurrent restorer: A moving endpoint control method for image restoration", "venue": null, "year": 2019 }, { "authors": [ "Hongkai Zhao" ], "title": "A fast sweeping method for eikonal equations", "venue": "Mathematics of computation,", "year": 2005 }, { "authors": [ "Es∼ρπθ", "a∼πθ" ], "title": "∇θlogπθ(a|s)Q (s, a)] where ρθ is the state distribution deduced by the policy πθ. In this paper we focus on the case where the action space A is continuous, and a lot of mature algorithms has been proposed for such a case, e.g., the Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015), the Trust Region Policy Optimization algorithm (Schulman et", "venue": null, "year": 2015 } ]
[ { "heading": null, "text": "Conservation laws are considered to be fundamental laws of nature. It has broad application in many fields including physics, chemistry, biology, geology, and engineering. Solving the differential equations associated with conservation laws is a major branch in computational mathematics. Recent success of machine learning, especially deep learning, in areas such as computer vision and natural language processing, has attracted a lot of attention from the community of computational mathematics and inspired many intriguing works in combining machine learning with traditional methods. In this paper, we are the first to view numerical PDE solvers as a MDP and to use (deep) RL to learn new solvers. As a proof of concept, we focus on 1-dimensional scalar conservation laws. We deploy the machinery of deep reinforcement learning to train a policy network that can decide on how the numerical solutions should be approximated in a sequential and spatial-temporal adaptive manner. We will show that the problem of solving conservation laws can be naturally viewed as a sequential decision making process and the numerical schemes learned in such a way can easily enforce long-term accuracy. Furthermore, the learned policy network is carefully designed to determine a good local discrete approximation based on the current state of the solution, which essentially makes the proposed method a meta-learning approach. In other words, the proposed method is capable of learning how to discretize for a given situation mimicking human experts. Finally, we will provide details on how the policy network is trained, how well it performs compared with some state-of-the-art numerical solvers such as WENO schemes, and how well it generalizes. Our code is released anomynously at https://github.com/qwerlanksdf/L2D." }, { "heading": "1 INTRODUCTION", "text": "Conservation laws are considered to be one of the fundamental laws of nature, and has broad applications in multiple fields such as physics, chemistry, biology, geology, and engineering. For example, Burger’s equation, a very classic partial differential equation (PDE) in conservation laws, has important applications in fluid mechanics, nonlinear acoustics, gas dynamics, and traffic flow.\nSolving the differential equations associated with conservation laws has been a major branch of computational mathematics (LeVeque, 1992; 2002), and a lot of effective methods have been proposed, from classic methods such as the upwind scheme, the Lax-Friedrichs scheme, to the advanced ones such as the ENO/WENO schemes (Liu et al., 1994; Shu, 1998), the flux-limiter methods (Jerez Galiano & Uh Zapata, 2010), and etc. In the past few decades, these traditional methods have been proven successful in solving conservation laws. Nonetheless, the design of some of the high-end methods heavily relies on expert knowledge and the coding of these methods can be a laborious process. To ease the usage and potentially improve these traditional algorithms, machine learning, especially deep learning, has been recently incorporated into this field. For example, the ENO scheme requires lots of ‘if/else’ logical judgments when used to solve complicated system of equations or high-dimensional equations. This very much resembles the old-fashioned expert systems. The recent trend in artificial intelligence (AI) is to replace the expert systems by the so-called ‘connectionism’, e.g., deep neural networks, which leads to the recent bloom of AI. 
Therefore, it\nis natural and potentially beneficial to introduce deep learning in traditional numerical solvers of conservation laws." }, { "heading": "1.1 RELATED WORKS", "text": "In the last few years, neural networks (NNs) have been applied to solving ODEs/PDEs or the associated inverse problems. These works can be roughly classified into three categories according to the way that the NN is used.\nThe first type of works propose to harness the representation power of NNs, and are irrelevant to the numerical discretization based methods. For example, Raissi et al. (2017a;b); Yohai Bar-Sinai (2018) treated the NNs as new ansatz to approximate solutions of PDEs. It was later generalized by Wei et al. (2019) to allow randomness in the solution which is trained using policy gradient. More recent works along this line include (Magiera et al., 2019; Michoski et al., 2019; Both et al., 2019). Besides, several works have focused on using NNs to establish direct mappings between the parameters of the PDEs (e.g. the coefficient field or the ground state energy) and their associated solutions (Khoo et al., 2017; Khoo & Ying, 2018; Li et al., 2019; Fan et al., 2018b). Furthermore, Han et al. (2018); Beck et al. (2017) proposed a method to solve very high-dimensional PDEs by converting the PDE to a stochastic control problem and use NNs to approximate the gradient of the solution.\nThe second type of works focus on the connection between deep neural networks (DNNs) and dynamic systems (Weinan, 2017; Chang et al., 2017; Lu et al., 2018; Long et al., 2018b; Chen et al., 2018). These works observed that there are connections between DNNs and dynamic systems (e.g. differential equations or unrolled optimization algorithms) so that we can combine deep learning with traditional tools from applied and computational mathematics to handle challenging tasks in inverse problems (Long et al., 2018b;a; Qin et al., 2018).The main focus of these works, however, is to solve inverse problems, instead of learning numerical discretizations of differential equations. Nonetheless, these methods are closely related to numerical differential equations since learning a proper discretization is often an important auxiliary task for these methods to accurately recover the form of the differential equations.\nThe third type of works, which target at using NNs to learn new numerical schemes, are closely related to our work. However, we note that these works mainly fall in the setting of supervised learning (SL). For example, Discacciati et al. (2019) proposed to integrate NNs into high-order numerical solvers to predict artificial viscosity; Ray & Hesthaven (2018) trained a multilayer perceptron to replace traditional indicators for identifying troubled-cells in high-resolution schemes for conservation laws. These works greatly advanced the development in machine learning based design of numerical schemes for conservation laws. Note that in Discacciati et al. (2019), the authors only utilized the one-step error to train the artificial viscosity networks without taking into account the longterm accuracy of the learned numerical scheme. Ray & Hesthaven (2018) first constructed several functions with known regularities and then used them to train a neural network to predict the location of discontinuity, which was later used to choose a proper slope limiter. Therefore, the training of the NNs is separated from the numerical scheme. 
Then, a natural question is whether we can learn discretization of differential equations in an end-to-end fashion and the learned discrete scheme also takes long-term accuracy into account. This motivates us to employ reinforcement learning to learn good solvers for conservation laws." }, { "heading": "1.2 OUR APPROACH", "text": "The main objective of this paper is to design new numerical schemes in an autonomous way. We propose to use reinforcement learning (RL) to aid the process of solving the conservation laws. To our best knowledge, we are the first to regard numerical PDE solvers as a MDP and to use (deep) RL to learn new solvers. We carefully design the proposed RL-based method so that the learned policy can generate high accuracy numerical schemes and can well generalize in varied situations. Details will be given in section 3.\nHere, we first provide a brief discussion on the benefits of using RL to solve conservation laws (the arguments apply to general evolution PDEs as well):\n• Most of the numerical solvers of conservation law can be interpreted naturally as a sequential decision making process (e.g., the approximated grid values at the current time instance definitely\naffects all the future approximations). Thus, it can be easily formulated as a Markov Decision Process (MDP) and solved by RL.\n• In almost all the RL algorithms, the policy π (which is the AI agent who decides on how the solution should be approximated locally) is optimized with regards to the values Qπ(s0, a0) = r(s0, a0) + ∑∞ t=1 γ\ntr(st, at), which by definition considers the long-term accumulated reward (or, error of the learned numerical scheme), thus could naturally guarantee the long-term accuracy of the learned schemes, instead of greedily deciding the local approximation which is the case for most numerical PDEs solvers. Furthermore, it can gracefully handle the cases when the action space is discrete, which is in fact one of the major strength of RL.\n• By optimizing towards long-term accuracy and effective exploration, we believe that RL has a good potential in improving traditional numerical schemes, especially in parts where no clear design principles exist. For example, although the WENO-5 scheme achieves optimal order of accuracy at smooth regions of the solution (Shu, 1998), the best way of choosing templates near singularities remains unknown. Our belief that RL could shed lights on such parts is later verified in the experiments: the trained RL policy demonstrated new behaviours and is able to select better templates than WENO and hence approximate the solution better than WENO near singularities.\n• Non-smooth norms such as the infinity norm of the error is often used to evaluate the performance of the learned numerical schemes. As the norm of the error serves as the loss function for the learning algorithms, computing the gradient of the infinity norm can be problematic for supervised learning, while RL does not have such problem since it does not explicitly take gradients of the loss function (i.e. the reward function for RL).\n• Learning the policy π within the RL framework makes the algorithm meta-learning-like (Schmidhuber, 1987; Bengio et al., 1992; Andrychowicz et al., 2016; Li & Malik, 2016; Finn et al., 2017). The learned policy π can decide on which local numerical approximation to use by judging from the current state of the solution (e.g. local smoothness, oscillatory patterns, dissipation, etc). 
This is vastly different from regular (non-meta-) learning where the algorithms directly make inference on the numerical schemes without the aid of an additional network such as π. As subtle the difference as it may seem, meta-learning-like methods have been proven effective in various applications such as in image restoration (Jin et al., 2017; Fan et al., 2018a; Zhang et al., 2019). See (Vanschoren, 2018) for a comprehensive survey on meta-learning.\n• Another purpose of this paper is to raise an awareness of the connection between MDP and numerical PDE solvers, and the general idea of how to use RL to improve PDE solvers or even finding brand new ones. Furthermore, in computational mathematics, a lot of numerical algorithms are sequential, and the computation at each step is expert-designed and usually greedy, e.g., the conjugate gradient method, the fast sweeping method (Zhao, 2005), matching pursuit (Mallat & Zhang, 1993), etc. We hope our work could motivate more researches in combining RL and computational mathematics, and stimulate more exploration on using RL as a tool to tackle the bottleneck problems in computational mathematics.\nOur paper is organized as follows. In section 2 we briefly review 1-dimensional conservation laws and the WENO schemes. In section 3, we discuss how to formulate the process of numerically solving conservation laws into a Markov Decision Process. Then, we present details on how to train a policy network to mimic human expert in choosing discrete schemes in a spatial-temporary adaptive manner by learning upon WENO. In section 4, we conduct numerical experiments on 1-D conservation laws to demonstrate the performance of our trained policy network. Our experimental results show that the trained policy network indeed learned to adaptively choose good discrete schemes that offer better results than the state-of-the-art WENO scheme which is 5th order accurate in space and 4th order accurate in time. This serves as an evidence that the proposed RL framework has the potential to design high-performance numerical schemes for conservation laws in a data-driven fashion. Furthermore, the learned policy network generalizes well to other situations such as different initial conditions, mesh sizes, temporal discrete schemes, etc. The paper ends with a conclusion in section 5, where possible future research directions are also discussed." }, { "heading": "2 PRELIMINARIES", "text": "" }, { "heading": "2.1 NOTATIONS", "text": "In this paper, we consider solving the following 1-D conservation laws:\nut(x, t) + fx(u(x, t)) = 0, a ≤ x ≤ b, t ∈ [0, T ], u(x, 0) = u0(x). (1)\nFor example, f = u 2\n2 is the famous Burger’s Equation. We discretize the (x, t)-plane by choosing a mesh with spatial size ∆x and temporal step size ∆t, and define the discrete mesh points (xj , tn) by\nxj = a+ j∆x, tn = n∆t with j = 0, 1, ..., J = b− a ∆x , n = 0, 1, ..., N = T ∆t .\nWe denote xj+ 12 = xj + ∆x/2 = a + (j + 1 2 )∆x. The finite difference methods will produce approximations Unj to the solution u(xj , tn) on the given discrete mesh points. We denote pointwise values of the true solution to be unj = u(xj , tn), and the true point-wise flux values to be fnj = f(u(xj , tn))." }, { "heading": "2.2 WENO – WEIGHTED ESSENTIALLY NON-OSCILLATORY SCHEMES", "text": "WENO (Weighted Essentially Non-Oscillatory) (Liu et al., 1994) is a family of high order accurate finite difference schemes for solving hyperbolic conservation laws, and has been successful for many practical problems. 
The key idea of WENO is a nonlinear adaptive procedure that automatically chooses the smoothest local stencil to reconstruct the numerical flux. Generally, a finite difference method solves Eq.1 by using a conservative approximation to the spatial derivative of the flux: $\frac{du_j(t)}{dt} = -\frac{1}{\Delta x}\left(\hat{f}_{j+1/2} - \hat{f}_{j-1/2}\right)$, (2) where $u_j(t)$ is the numerical approximation to the point value $u(x_j, t)$ and $\hat{f}_{j+1/2}$ is the numerical flux generated by a numerical flux policy $\hat{f}_{j+1/2} = \pi^f(u_{j-r}, ..., u_{j+s})$, which is manually designed. Note that the term “numerical flux policy” is a new terminology that we introduce in this paper, and it is exactly the policy we shall learn using RL. In WENO, $\pi^f$ works as follows. Using the physical flux values $\{f_{j-2}, f_{j-1}, f_j\}$, we can obtain a 3rd order accurate polynomial interpolation $\hat{f}^{-2}_{j+1/2}$, where the index set $\{j-2, j-1, j\}$ is called a ‘stencil’. We can also use the stencils $\{j-1, j, j+1\}$, $\{j, j+1, j+2\}$ or $\{j+1, j+2, j+3\}$ to obtain another three interpolants $\hat{f}^{-1}_{j+1/2}$, $\hat{f}^{0}_{j+1/2}$ and $\hat{f}^{1}_{j+1/2}$. The key idea of WENO is to average (with properly designed weights) all these interpolants to obtain the final reconstruction: $\hat{f}_{j+1/2} = \sum_{r=-2}^{1} w_r \hat{f}^{r}_{j+1/2}$, with $\sum_{r=-2}^{1} w_r = 1$. The weight $w_i$ depends on the smoothness of the stencil. A general principle is: the smoother the stencil, the more accurate the interpolant, and hence the larger the weight. To ensure convergence, we need the numerical scheme to be consistent and stable (LeVeque, 1992). It is known that WENO schemes as described above are consistent. For stability, upwinding is required in constructing the flux. The easiest way is to use the sign of the Roe speed $\bar{a}_{j+1/2} = \frac{f_{j+1} - f_j}{u_{j+1} - u_j}$ to determine the upwind direction: if $\bar{a}_{j+1/2} \geq 0$, we only average among the three interpolants $\hat{f}^{-2}_{j+1/2}$, $\hat{f}^{-1}_{j+1/2}$ and $\hat{f}^{0}_{j+1/2}$; if $\bar{a}_{j+1/2} < 0$, we use $\hat{f}^{-1}_{j+1/2}$, $\hat{f}^{0}_{j+1/2}$ and $\hat{f}^{1}_{j+1/2}$.
Some further thoughts. WENO achieves optimal order of accuracy (up to 5) in the smooth regions of the solutions (Shu, 1998), but a lower order of accuracy at singularities. The key of the WENO method lies in how to compute the weight vector $(w_1, w_2, w_3, w_4)$, which primarily depends on the smoothness of the solution on the local stencils. In WENO, such smoothness is characterized by handcrafted formulas, and this approach has proven successful in many practical problems when coupled with high-order temporal discretization. However, it remains unknown whether there are better ways to combine the stencils so that the optimal order of accuracy in smooth regions can be preserved while, at the same time, higher accuracy can be achieved near singularities. Furthermore, estimating the upwind directions is another key component of WENO, which can get quite complicated in high-dimensional situations and requires lots of logical judgments (i.e., “if/else”). Can we ease the (sometimes painful) coding and improve the estimation with the aid of machine learning?" }, { "heading": "3 METHODS", "text": "In this section we present how to employ reinforcement learning to solve the conservation laws given by Eq.1. To better illustrate our idea, we first show in general how to formulate the process of numerically solving a conservation law as an MDP. We then discuss how to incorporate a policy network with the WENO scheme. Our policy network targets the following two key aspects of WENO: (1) Can we learn to choose better weights to combine the constructed fluxes?
(2) Can we learn to automatically judge the upwind direction, without complicated logical judgments?" }, { "heading": "3.1 MDP FORMULATION", "text": "Algorithm 1: A Conservation Law Solving Procedure 1 Input: initial values u00, u01, ..., u0J , flux f(u), ∆x, ∆t, evolve time N , left shift r and right shift s. 2 Output: {Unj | j = 0, ..., J, n = 1, ..., N} 3 U0j = u 0 j , j = 0, ..., J 4 for n = 1 to N do 5 for j = 0 to J do 6 Compute the numerical flux f̂n\nj− 1 2\n= πf (Un−1j−r−1, U n−1 j−r , ..., U n−1 j+s−1) and f̂ n j+ 1 2 = πf (Un−1j−r ,\nUn−1j−r+1, ..., U n−1 j+s ), e.g., using the WENO scheme\n7 Compute duj(t) dt = − 1 ∆x (f̂n j+ 1 2 − f̂n j− 1 2 ) 8 Compute Unj = π t(Un−1j , duj(t) dt ), e.g., using the Euler scheme Unj = U n−1 j + ∆t duj(t) dt\n9 Return {Unj | j = 0, ..., J, n = 1, ..., N}\nAs shown in Algorithm 1, the procedure of numerically solving a conservation law is naturally a sequential decision making problem. The key of the procedure is the numerical flux policy πf and the temporal scheme πt as shown in line 6 and 8 in Algorithm 1. Both policies could be learned using RL. However, in this paper, we mainly focus on using RL to learn the numerical flux policy πf , while leaving the temporal scheme πt with traditional numerical schemes such as the Euler scheme or the Runge–Kutta methods. A quick review of RL is given in the appendix.\nNow, we show how to formulate the above procedure as an MDP and the construction of the state S, action A, reward r and transition dynamics P . Algorithm 2 shows in general how RL is incorporated into the procedure. In Algorithm 2, we use a single RL agent. Specifically, when computing Unj :\n• The state for the RL agent is snj = gs(U n−1 j−r−1, ..., U n−1 j+s ), where gs is the state function.\n• In general, the action of the agent is used to determine how the numerical fluxes f̂n j+ 12 and f̂n j− 12 is computed. In the next subsection, we detail how we incorporate anj to be the linear weights of the fluxes computed using different stencils in the WENO scheme.\n• The reward should encourage the agent to generate a scheme that minimizes the error between its approximated value and the true value. Therefore, we define the reward function as rnj = gr(U n j−r−1 − unj−r−1, · · · , Unj+s − unj+s), e.g., a simplest choice is gr = −|| · ||2. • The transition dynamics P is fully deterministic, and depends on the choice of the temporal scheme at line 10 in Algorithm 2. Note that the next state can only be constructed when we have obtained all the point values in the next time step, i.e., sn+1j = gs(U n j−r−1, ..., U n j+s) does not\nonly depends on action anj , but also on actions a n j−r−1, ..., a n j+s (action a n j can only determine the value Unj ). This subtlety can be resolved by viewing the process under the framework of multi-agent RL, in which at each mesh point j we use a distinct agent ARLj , and the next state sn+1j = gs(U n j−r−1, ..., U n j+s) depends on these agents’ joint action a n j = (a n j−r−1, ..., a n j+s).\nHowever, it is impractical to train J different agents as J is usually very large, therefore we enforce the agents at different mesh point j to share the same weight, which reduces to case of using just a single agent. 
The single agent can be viewed as a counterpart of a human designer who decides on the choice of a local scheme based on the current state in traditional numerical methods.\nAlgorithm 2: General RL Running Procedure 1 Input: initial values u00, ..., u0J , flux f(u), ∆x, ∆t, evolve time N , left shift r, right shift s and RL policy π RL 2 Output: {Unj | j = 0, ..., J, n = 1, ..., N} 3 U0j = u 0 j , j = 0, ..., J 4 for Many iterations do 5 Construct initial states s0j = gs(U 0 j−r−1, ..., U 0 j+s) for j = 0, ..., J 6 for n = 1 to N do 7 for j = 0 to J do 8 Compute the action anj = π RL(snj ) that determines how f̂ n j+ 1\n2 and f̂n j− 1 2 is computed\n9 Compute duj(t) dt = − 1 ∆x (f̂n j+ 1 2 − f̂n j− 1 2 )\n10 Compute Unj = π t(Un−1j , duj(t) dt ), e.g., the Euler scheme Unj = U n−1 j + ∆t duj(t) dt 11 Compute the reward rnj = gr(U n j−r−1 − unj−r−1, · · · , Unj+s − unj+s).\n12 Construct the next states sn+1j = gs(u n j−r−1, ..., u n j+s) for j = 0, ..., J 13 Use any RL algorithm to train the RL policy πRL with the transitions {(snj , anj , rnj , sn+1j )} J j=0.\n14 Return the well-trained RL policy πRL." }, { "heading": "3.2 RL EMPOWERED WENO", "text": "We now present how to transfer the actions of the RL policy to the weights of WENO fluxes. Instead of directly using πRL to generate the numerical flux, we use it to produce the weights of numerical fluxes computed using different stencils in WENO. Since the weights are part of the configurations of the WENO scheme, our design of action essentially makes the RL policy a meta-learner, and enables more stable learning and better generalization power than directly generating the fluxes.\nSpecifically, at point xj (here we drop the time superscript n for simplicity), to compute the numerical flux f̂j− 12 and f̂j+ 12 , we first construct four fluxes {f̂ i j− 12 }1i=−2 and {f̂ ij+ 12 } 1 i=−2 using four different stencils just as in WENO, and then use the RL policy πRL to generate the weights of these fluxes:\nπRL(sj) = ( w−2 j− 12 , w−1 j− 12 , w0j− 12 , w1j− 12 , w−2 j+ 12 , w−1 j+ 12 , w0j+ 12 , w1j+ 12 ) .\nThe numerical flux is then constructed by averaging these fluxes: f̂j− 12 = ∑1 i=−2 w i j− 12 f̂ i j− 12 , and\nf̂j+ 12 = ∑1 i=−2 w i j+ 12 f̂ i j+ 12 .\nNote that the determination of upwind direction is automatically embedded in the RL policy since it generates four weights at once. For instance, when the roe speed āj+ 12 ≥ 0, we expect the 4 th weight w1 j+ 12 ≈ 0 and when āj+ 12 < 0, we expect w −2 j+ 12 ≈ 0. Note that the upwind direction can be very complicated in a system of equations or in the high-dimensional situations, and using the policy network to automatically embed such a process could save lots of efforts in algorithm design and implementation. Our numerical experiments show that πRL can indeed automatically determine upwind directions for 1D scalar cases. Although this does not mean that it works for systems and/or in high-dimensions, it shows the potential of the proposed framework and value for further studies." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we describe training and testing of the proposed RL conservation law solver and compare it with WENO. More comparisons and discussions can be found in the appendix." }, { "heading": "4.1 SETUP", "text": "In this subsection, we explain the general training setup. We train the RL policy network on the Burger’s equation, whose flux is computed as f(u) = 12u\n2. In all the experiments, we set the left-shift r = 2 and the right shift s = 3. 
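To make the flux-weighting step of section 3.2 concrete, below is a minimal Python sketch (not the authors' released code; the softmax normalization, the example numbers, and all variable names are illustrative assumptions) of how the eight policy outputs at a grid point could be turned into the two averaged fluxes and the conservative update of Eq. 2.

```python
import numpy as np

def weighted_flux(f_hat, w):
    # f_hat: (4,) candidate fluxes from the four WENO stencils; w: (4,) raw policy outputs.
    w = np.exp(w - w.max())
    w = w / w.sum()  # normalize so the four weights sum to 1 (assumed softmax)
    return float(np.dot(w, f_hat))

dx = 0.02
policy_out = np.random.randn(8)               # stand-in for pi_RL(s_j): 8 weights per point
f_hat_minus = np.array([1.0, 1.1, 0.9, 1.2])  # candidate fluxes for f_{j-1/2}, i = -2..1
f_hat_plus = np.array([1.3, 1.2, 1.4, 1.1])   # candidate fluxes for f_{j+1/2}, i = -2..1
f_minus = weighted_flux(f_hat_minus, policy_out[:4])
f_plus = weighted_flux(f_hat_plus, policy_out[4:])
du_dt = -(f_plus - f_minus) / dx              # conservative approximation, Eq. 2
```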
The state function gs(sj) = gs(Uj−r−1, ..., Uj+s) will generate two vectors: sl = (fj−r−1, ..., fj+s−1, āj− 12 ), and s r = (fj−r, ..., fj+s, āj+ 12 ) for computing f̂j− 12 and f̂j+ 12 respectively. sl and sr will be passed into the same policy neural network π RL θ to produce the desired actions, as described in section 3.2. The reward function gr simply computes the infinity norm, i.e., gr(Uj−r−1 − uj−r−1, ..., Uj+s − uj+s) = −||(Uj−r−1 − uj−r−1, ..., Uj+s − uj+s)||∞.\nThe policy network πRLθ is a feed-forward Multi-layer Perceptron with 6 hidden layers, each has 64 neurons and use Relu (Goodfellow et al., 2016) as the activation function. We use the Deep Deterministic Policy Gradient Algorithm (Lillicrap et al., 2015) to train the RL policy.\nTo guarantee the generalization power of the trained RL agent, we randomly sampled 20 initial conditions in the form u0(x) = a + b · func(cπx), where |a| + |b| ≤ 3.5, func ∈ {sin, cos} and c ∈ {2, 4, 6}. The goal of generating such kind of initial conditions is to ensure they have similar degree of smoothness and thus similar level of difficulty in learning. The computation domain is −1 ≤ x ≤ 1 and 0 ≤ t ≤ 0.8 with ∆x = 0.02, ∆t = 0.004, and evolve steps N = 200 (which ensures the appearance of shocks). When training the RL agent, we use the Euler scheme for temporal discretization. The true solution needed for reward computing is generated using WENO on the same computation domain with ∆x = 0.001, ∆t = 0.0002 and the 4th order Runge-Kutta (RK4).\nIn the following, we denote the policy network that generates the weights of the WENO fluxes (as described in section 3.2) as RL-WENO. We randomly generated another different 10 initial conditions in the same form as training for testing." }, { "heading": "4.2 RESULTS", "text": "We compare the performance of RL-WENO and WENO. We also test whether the trained RL policy can generalize to different temporal discretization schemes, mesh sizes and flux functions that are not included in training. Table 1 and Table 2 present the comparison results, where the number shows the relative error (computed as ||U−u||2||u||2 with the 2-norm taking over all x) between the approximated solution U and the true solution u, averaged over 250 evolving steps (T = 1.0) and 10 random initial values. Numbers in the bracket shows the standard deviation over the 10 initial conditions. Several entries in the table are marked as ‘-’ because the corresponding CFL number is not small enough\nto guarantee convergence. Recall that training of the RL-WENO was conducted with Euler time discretization, (∆x,∆t) = (0.02, 0.004), T = 0.8 and f(u) = 12u 2.\nOur experimental results show that, compared with the high order accurate WENO (5th order accurate in space and 4th order accurate in time), the linear weights learned by RL not only achieves smaller errors, but also generalizes well to: 1) longer evolving time (T = 0.8 for training and T = 1.0 for testing); 2) new time discretization schemes (trained on Euler, tested on RK4); 3) new mesh sizes (see Table 1 and Table 2 for results of varied ∆x and ∆t); and 4) a new flux function (trained on f(u) = 12u 2 shown in Table 1, tested on 116u 4 Table 2).\nFigure 1 shows some examples of the solutions. As one can see, the solutions generated by RL-WENO not only achieve the same accuracy as WENO at smooth regions, but also have clear advantage over WENO near singularities which is particularly challenging for numerical PDE solvers and important in applications. 
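For reference, here is a short sketch (illustrative only; the array names and shapes are assumptions) of the relative error reported in Tables 1 and 2, computed as ||U - u||2/||u||2 over x and averaged over the evolving steps:

```python
import numpy as np

def relative_error(U, u):
    # U: approximated solutions, u: true solutions; both of shape (N_steps, J + 1)
    per_step = np.linalg.norm(U - u, axis=1) / np.linalg.norm(u, axis=1)
    return per_step.mean()
```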
Figure 2 shows that the learned numerical flux policy can indeed correctly determine upwind directions and generate local numerical schemes in an adaptive fashion. More interestingly, Figure 2 further shows that comparing to WENO, RL-WENO seems to be able to select stencils in a different way from it, and eventually leads to a more accurate solution. This shows that the proposed RL framework has the potential to surpass human experts in designing numerical schemes for conservation laws." }, { "heading": "5 CONCLUSION", "text": "In this paper, we proposed a general framework to learn how to solve 1-dimensional conservation laws via deep reinforcement learning. We first discussed how the procedure of numerically solving conservation laws can be naturally cast in the form of Markov Decision Process. We then elaborated how to relate notions in numerical schemes of PDEs with those of reinforcement learning. In particular, we introduced a numerical flux policy which was able to decide on how numerical flux should be designed locally based on the current state of the solution. We carefully design the action of our RL policy to make it a meta-learner. Our numerical experiments showed that the proposed RL based solver was able to outperform high order WENO and was well generalized in various cases.\nAs part of the future works, we would like to consider using the numerical flux policy to inference more complicated numerical fluxes with guaranteed consistency and stability. Furthermore, we can use the proposed framework to learn a policy that can generate adaptive grids and the associated numerical schemes. Lastly, we would like consider system of conservation laws in 2nd and 3rd dimensional space." }, { "heading": "A COMPLEMENTARY EXPERIMENTS", "text": "A.1 COMPARISON WITH SUPERVISED LEARNING (SL) BASED METHODS\nWe first note that most of the neural network based numerical PDE solvers cited in the introduction requires retraining when the initialization, terminal time, or the form of the PDE is changed; while the proposed RL solver is much less restricted as shown in our numerical experiments. This makes proper comparisons between existing NN-based solvers and our proposed solver very difficult. Therefore, to demonstrate the advantage of our proposed RL PDE solver, we would like to propose a new SL method that does not require retraining when the test setting (e.g. initialization, flux function, etc.) is different from the training.\nHowever, as far as we are concerned, it is challenging to design such SL methods without formulating the problem into an MDP. One may think that we can use WENO to generate the weights for the stencil at a particular grid point on a dense grid, and use the weights of WENO generated from the dense grid as the label to train a neural network in the coarse grid. But such setting has a fatal flaw in that the stencils computed in the dense grids are very different from those in the coarse grids, especially near singularities. Therefore, good weights on dense grids might perform very poorly on coarse grids. In other words, simple imitation of WENO on dense grids is not a good idea. One might also argue that instead of learning the weights of the stencils, we could instead generate the discrete operators, such as the spatial discretization of ∂uj∂x , or the temporal discretization of ∂uj ∂t , the numerical fluxes fj+ 12 (u), fj− 12 (u), etc., on a dense grid, and then use them as labels to train a neural network in the supervised fashion on a coarse grid. 
However, the major problem with such design is that there is no guarantee that the learned discrete operators obey the conservation property of the equations, and thus they may also generalize very poorly.\nAfter formulating the problem into a MDP, there is indeed one way that we can use back-propagation (BP) instead of RL algorithms to optimize the policy network. Because all the computations on using the stencils to calculate the next-step approximations are differentiable, we can indeed use SL to train the weights. One possible way is to minimize the error (e.g. 2 norm) between the approximated and the true values, where the true value is pre-computed using a more accurate discretization on a fine mesh. The framework to train the SL network is described in Algorithm 3. Note that the framework to train the SL network is essentially the same as that of the proposed RL-WENO (Algorithm 2). The only difference is that we train the SL network using BP and the RL network using DDPG.\nAlgorithm 3: Using BP instead of RL algorithm to train the policy 1 Input: initial values u00, ..., u0J , flux f(u), ∆x, ∆t, evolve time N , left shift r, right shift s and a neural network\nπθ\n2 Output: {Unj | j = 0, ..., J, n = 1, ..., N} 3 U0j = u 0 j , j = 0, ..., J 4 for Many iterations do 5 Construct initial states s0j = gs(U 0 j−r−1, ..., U 0 j+s) for j = 0, ..., J 6 for n = 1 to N do 7 for j = 0 to J do 8 Compute the weights (wn,−2\nj− 1 2\n, wn,−1 j− 1\n2\n, wn,0 j− 1\n2\n, wn,1 j− 1\n2\n, wn,−2 j+ 1\n2\n, wn,−1 j+ 1\n2\n, wn,0 j+ 1\n2\n, wn,1 j+ 1\n2\n) = πθ(snj )\n9 Compute the fluxes f̂n j− 1\n2 =\n∑1 i=−2 w n,i\nj− 1 2\nf̂n,i j− 1\n2\n, f̂j+ 1 2\n= ∑1 i=−2 w i j+ 1 2 f̂n,i j+ 1\n2\n, where f̂n,i j± 1\n2\nare\nthe fluxes computed by WENO 10 Compute duj(t)\ndt = − 1 ∆x (f̂n j+ 1 2 − f̂n j− 1 2 )\n11 Compute Unj = π t(Un−1j , duj(t) dt ), e.g., the Euler scheme Unj = U n−1 j + ∆t duj(t) dt 12 Compute the loss for θ: Lnj (θ) = ||(Unj−r−1−unj−r−1, · · · , Unj+s−unj+s)−(Unj−r−1−unj−r−1, · · · , Unj+s−unj+s)||22. 13 Perform a gradient descent on θ w.r.t Lnj (θ)\n14 Construct the next states sn+1j = gs(u n j−r−1, ..., u n j+s) for j = 0, ..., J\n15 Return the BP optimized policy πθ .\nHowever, we argue that the main drawback of using SL (BP) to optimize the stencils in such a way is that it cannot enforce long-term accuracy and thus cannot outperform the proposed RL-WENO. To support such claims, we have added experiments using SL to train the weights of the stencils, and the results are shown in table 3 and 4. The SL policy is trained till it achieves very low loss (i.e., converges) in the training setting. However, as shown in the table, the SL-trained policy does not perform well overall. To improve longer time stability, one may argue that we could design the loss of SL to be the accumulated loss over multiple prediction steps, but in practice as the dynamics of our problem (computations for obtaining multiple step approximations) is highly non-linear, thus the gradient flow through multiple steps can be highly numerically unstable, making it difficult to obtain a decent result.\nA.2 RL-WENO’S PERFORMANCE ON SMOOTH AND SINGULAR REGIONS\nAs mentioned in section 2.2, WENO itself already achieves an optimal order of accuracy in the smooth regions. Since RL-WENO can further improve upon WENO, it must have obtained higher accuracy especially near singularities. Here we provide additional demonstrations on how RL-WENO performs in the smooth/singular regions. 
We run RL-WENO and WENO on a set of initial conditions, and record the approximation errors at every locations and then separate the errors in the smooth and singular regions for every time step. We then compute the distribution of the errors on the entire spatial-temporal grids with multiple initial conditions. The results are shown in figure 3. In figure 3, the x-axis is the logarithmic (base 10) value of the error and the y-axis is the number of grid points whose error is less than the corresponding value on the x-axis, i.e., the accumulated distribution of the errors. The results show that RL-WENO indeed performs better than WENO near singularities. RL-WENO even achieves better accuracy than WENO in the smooth region when the flux function is 1 16u 4.\nA.3 INFERENCE TIME OF RL-WENO AND WENO\nIn this subsection we report the inference time of RL-WENO and WENO. Although the computation complexity of the trained RL policy (a MLP) is higher than that of WENO, we could parallel and accelerate the computations using GPU.\nOur test is conducted in the following way: for each grid size ∆x, we fix the initial condition as u0(x) = 1 + cos(6πx), the evolving time T = 0.8 and the flux function f = u2. We then use RL-WENO and WENO to solve the problem 20 times, and report the average running time. For completeness, we also report the relative error of RL-WENO and WENO in each of these grid sizes in table 6. Note that the relative error is computed on average of several initial functions, and our RL-WENO policy is only trained on grid (∆x,∆t) = (0.02, 0.004).\nFor RL-WENO, we test it on both CPU and on GPU; For WENO, we test it purely on CPU, with a well-optimized version (e.g., good numpy vectorization in python), and a poor-implemented version (e.g., no vectorization, lots of loops). The CPU used for the tests is a custom Intel CORE i7, and the GPU is a custom NVIDIA GTX 1080. The results are shown in table 5.\nFrom the table we can tell that as ∆x decreases, i.e., as the grid becomes denser, all methods, except for the RL-WENO (GPU), requires significant more time to finish the computation. The reason that the time cost of the GPU-version of RL-WENO does not grow is that on GPU, we can compute\nall approximations in the next step (i.e., to compute (U t+10 , U t+1 1 , ..., U t+1 J ) given (U t 0, U t 1, ..., U t J), which dominates the computation cost of the algorithm) together in parallel. Thus, the increase of grids does not affect much of the computation time. Therefore, for coarse grid, well-optimized WENO indeed has clear speed advantage over RL-WENO (even on GPU), but on a much denser grid, RL-WENO (GPU) can be faster than well-optimized WENO by leveraging the paralleling nature of the algorithm." }, { "heading": "B REVIEW OF REINFORCEMENT LEARNING", "text": "B.1 REINFORCEMENT LEARNING\nReinforcement Learning (RL) is a general framework for solving sequential decision making problems. Recently, combined with deep neural networks, RL has achieved great success in various tasks such as playing video games from raw screen inputs (Mnih et al., 2015), playing Go (Silver et al., 2016), and robotics control (Schulman et al., 2017). The sequential decision making problem RL tackles is usually formulated as a Markov Decision Process (MDP), which comprises five elements: the state space S, the action space A, the reward r : S ×A→ R, the transition probability of the environment P : S ×A× S → [0, 1], and the discounting factor γ. 
The interactions between an RL agent and the environment forms a trajectory τ = (s0, a0, r0, ..., sT , aT , rT , ...). The return of τ is the discounted sum of all its future rewards:\nG(τ) = ∞∑ t=0 γtrt\nSimilarly, the return of a state-action pair (st, at) is:\nG(st, at) = ∞∑ l=t γl−trl\nA policy π in RL is a probability distribution on the action A given a state S: π : S ×A→ [0, 1]. We say a trajectory τ is generated under policy π if all the actions along the trajectory is chosen following π, i.e., τ ∼ π means at ∼ π(·|st) and st+1 ∼ P (·|st, at). Given a policy π, the value of a state s is defined as the expected return of all the trajectories when the agent starts at s and then follows π:\nV π(s) = Eτ [G(τ)|τ(s0) = s, τ ∼ π]\nSimilarly, the value of a state-action pair is defined as the expected return of all trajectories when the agent starts at s, takes action a, and then follows π:\nQπ(s, a) = Eτ [G(τ)|τ(s0) = s, τ(a0) = a, τ ∼ π]\nAs aforementioned in introduction, in most RL algorithms the policy π is optimized with regards to the values Qπ(s, a), thus naturally guarantees the long-term accumulated rewards (in our setting, the long-term accuracy of the learned schemes). Bellman Equation, one of the most important equations in RL, connects the value of a state and the value of its successor state:\nQπ(s, a) = r(s, a) + γEs′∼P (·|s,a),a′∼π(·|s′)[Q π(s′, a′)]\nV π(s) = Ea∼π(·|s),s′∼P (·|s′,a)[r(s, a) + γV π(s′)]\nThe goal of RL is to find a policy π to maximize the expected discounted sum of rewards starting from the initial state s0, J(π) = Es0∼ρ[V\nπ(s0)], where ρ is the initial state distribution. If we parameterize π using θ, then we can optimize it using the famous policy gradient theorem:\ndJ(πθ)\ndθ = Es∼ρπθ ,a∼πθ [∇θlogπθ(a|s)Qπθ (s, a)]\nwhere ρπθ is the state distribution deduced by the policy πθ. In this paper we focus on the case where the action space A is continuous, and a lot of mature algorithms has been proposed for such a case, e.g., the Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015), the Trust Region Policy Optimization algorithm (Schulman et al., 2015), and etc." } ]
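To illustrate the quantities reviewed in this appendix, the following is a generic, textbook-style PyTorch sketch (not code from the paper) of the discounted return G(tau) and a REINFORCE-style surrogate loss whose gradient matches the policy gradient; `log_probs` is assumed to hold the log-probabilities log pi_theta(a_t|s_t) collected along a finite trajectory.

```python
import torch

def discounted_returns(rewards, gamma=0.99):
    # G_t = r_t + gamma * G_{t+1}, computed backwards along a finite trajectory
    g, out = 0.0, []
    for r in reversed(rewards):
        g = r + gamma * g
        out.append(g)
    return list(reversed(out))

def pg_surrogate(log_probs, returns):
    # negative of E[log pi(a|s) * G]; minimizing it ascends the policy gradient
    return -(torch.stack(log_probs) * torch.tensor(returns)).mean()
```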
2019
null
SP:d3e5ddd5bff36693dda6d3fb3fc19ab47706ec74
[ "Virtual Adversarial Training (Miyato et al., 2017) can be viewed as a form of Lipschitz regularization. Inspired by this, the paper proposes a Lipschitz regularization technique that tries to ensure that the function being regularized doesn’t change a lot in virtual adversarial directions. This method is shown to be effective in training Wasserstein GANs. ", "It is an interesting idea about how to enforce the Lipsthitz constrain in WGAN by using virtual adversarial training. The connection between virtual adversarial and this paper method - ALR is quite simple and clear. In the experiments, the FID score in the table is not complete which can not clearly compare the ability of the Lipschitz regularization to other regularization methods. The paper addresses that the approximation of r_{adv} will affect the performance of ALR. How to balance the quality and computation complexity is quite important. This paper did not provide the reason about why this method can not work better than GP method in high-dimensional setting." ]
Generative adversarial networks (GANs) are one of the most popular approaches when it comes to training generative models, among which variants of Wasserstein GANs are considered superior to the standard GAN formulation in terms of learning stability and sample quality. However, Wasserstein GANs require the critic to be 1-Lipschitz, which is often enforced implicitly by penalizing the norm of its gradient, or by globally restricting its Lipschitz constant via weight normalization techniques. Training with a regularization term penalizing the violation of the Lipschitz constraint explicitly, instead of through the norm of the gradient, was found to be practically infeasible in most situations. Inspired by Virtual Adversarial Training, we propose a method called Adversarial Lipschitz Regularization, and show that using an explicit Lipschitz penalty is indeed viable and leads to competitive performance when applied to Wasserstein GANs, highlighting an important connection between Lipschitz regularization and adversarial training.
[ { "affiliations": [], "name": "Dávid Terjék" }, { "affiliations": [], "name": "Robert Bosch" } ]
[ { "authors": [ "C. Anil", "J. Lucas", "R.B. Grosse" ], "title": "Sorting out lipschitz function approximation", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "M. Arjovsky", "L. Bottou" ], "title": "Towards principled methods for training generative adversarial networks", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "M. Arjovsky", "S. Chintala", "L. Bottou" ], "title": "Wasserstein generative adversarial networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "P.L. Bartlett" ], "title": "The sample complexity of pattern classification with neural networks: The size of the weights is more important than the size of the network", "venue": "IEEE Trans. Information Theory,", "year": 1998 }, { "authors": [ "A. Brock", "J. Donahue", "K. Simonyan" ], "title": "Large scale GAN training for high fidelity natural image synthesis", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "M. Deza", "E. Deza" ], "title": "Encyclopedia of Distances. Encyclopedia of Distances", "venue": null, "year": 2009 }, { "authors": [ "H. Drucker", "Y. LeCun" ], "title": "Improving generalization performance using double backpropagation", "venue": "IEEE Trans. Neural Networks,", "year": 1992 }, { "authors": [ "Y. Dukler", "W. Li", "A.T. Lin", "G. Montúfar" ], "title": "Wasserstein of wasserstein loss for learning generative models", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "A. Galloway", "A. Golubeva", "T. Tanay", "M. Moussa", "G.W. Taylor" ], "title": "Batch normalization is a cause of adversarial vulnerability", "venue": "CoRR, abs/1905.02161,", "year": 2019 }, { "authors": [ "M. Gemici", "Z. Akata", "M. Welling" ], "title": "Primal-dual wasserstein GAN", "venue": "CoRR, abs/1805.09575,", "year": 2018 }, { "authors": [ "I.J. Goodfellow", "J. Pouget-Abadie", "M. Mirza", "B. Xu", "D. Warde-Farley", "S. Ozair", "A.C. Courville", "Y. Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems", "year": 2014 }, { "authors": [ "I.J. Goodfellow", "J. Shlens", "C. Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "H. Gouk", "E. Frank", "B. Pfahringer", "M.J. Cree" ], "title": "Regularisation of neural networks by enforcing lipschitz continuity", "venue": "CoRR, abs/1804.04368,", "year": 2018 }, { "authors": [ "Y. Grandvalet", "Y. Bengio" ], "title": "Semi-supervised learning by entropy minimization", "venue": "In Advances in Neural Information Processing Systems 17 [Neural Information Processing Systems,", "year": 2004 }, { "authors": [ "I. Gulrajani", "F. Ahmed", "M. Arjovsky", "V. Dumoulin", "A.C. Courville" ], "title": "Improved training of wasserstein gans", "venue": "In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "year": 2017 }, { "authors": [ "M. Heusel", "H. Ramsauer", "T. Unterthiner", "B. Nessler", "S. 
Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "year": 2017 }, { "authors": [ "J. Ho", "S. Ermon" ], "title": "Generative adversarial imitation learning", "venue": "In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems", "year": 2016 }, { "authors": [ "A. Householder" ], "title": "The Theory of Matrices in Numerical Analysis. A Blaisdell book in pure and applied sciences : introduction to higher mathematics", "venue": null, "year": 1964 }, { "authors": [ "T. Karras", "T. Aila", "S. Laine", "J. Lehtinen" ], "title": "Progressive growing of gans for improved quality, stability, and variation", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "V. Khrulkov", "I.V. Oseledets" ], "title": "Art of singular vectors and universal adversarial perturbations", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "D.P. Kingma", "J. Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "T. Miyato", "T. Kataoka", "M. Koyama", "Y. Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "T. Miyato", "S. Maeda", "M. Koyama", "S. Ishii" ], "title": "Virtual adversarial training: A regularization method for supervised and semi-supervised learning", "venue": "IEEE Trans. Pattern Anal. Mach. Intell.,", "year": 1979 }, { "authors": [ "A.M. Oberman", "J. Calder" ], "title": "Lipschitz regularized deep neural networks converge and generalize", "venue": "CoRR, abs/1808.09540,", "year": 2018 }, { "authors": [ "H. Petzka", "A. Fischer", "D. Lukovnikov" ], "title": "On the regularization of wasserstein gans", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "T. Salimans", "I.J. Goodfellow", "W. Zaremba", "V. Cheung", "A. Radford", "X. Chen" ], "title": "Improved techniques for training gans", "venue": "In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems", "year": 2016 }, { "authors": [ "A. Shafahi", "M. Najibi", "A. Ghiasi", "Z. Xu", "J.P. Dickerson", "C. Studer", "L.S. Davis", "G. Taylor", "T. Goldstein" ], "title": "Adversarial training for free", "venue": "URL http://arxiv", "year": 1904 }, { "authors": [ "C. Villani" ], "title": "Optimal Transport: Old and New", "venue": "Grundlehren der mathematischen Wissenschaften. Springer Berlin Heidelberg,", "year": 2008 }, { "authors": [ "X. Wei", "B. Gong", "Z. Liu", "W. Lu", "L. Wang" ], "title": "Improving the improved training of wasserstein gans: A consistency term and its dual effect", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "E. Wong", "F.R. Schmidt", "J.Z. Kolter" ], "title": "Wasserstein adversarial examples via projected sinkhorn iterations", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Q. Xu", "G. Huang", "Y. Yuan", "C. Guo", "Y. Sun", "F. Wu", "K.Q. 
Weinberger" ], "title": "An empirical study on evaluation metrics of generative adversarial networks", "venue": "CoRR, abs/1806.07755,", "year": 2018 }, { "authors": [ "Z. Zhou", "J. Liang", "Y. Song", "L. Yu", "H. Wang", "W. Zhang", "Y. Yu", "Z. Zhang" ], "title": "Lipschitz generative adversarial nets", "venue": null, "year": 2019 }, { "authors": [ "Z. Zhou", "J. Shen", "Y. Song", "W. Zhang", "Y. Yu" ], "title": "Towards efficient and unbiased implementation of lipschitz continuity in gans", "venue": "CoRR, abs/1904.01184,", "year": 2019 }, { "authors": [ "Miyato" ], "title": "spaces, let us restrict the divergence D from the VAT formulation to be a metric dY", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "In recent years, Generative adversarial networks (GANs) (Goodfellow et al., 2014) have been becoming the state-of-the-art in several generative modeling tasks, ranging from image generation (Karras et al., 2018) to imitation learning (Ho and Ermon, 2016). They are based on an idea of a two-player game, in which a discriminator tries to distinguish between real and generated data samples, while a generator tries to fool the discriminator, learning to produce realistic samples on the long run. Wasserstein GAN (WGAN) was proposed as a solution to the issues present in the original GAN formulation. Replacing the discriminator, WGAN trains a critic to approximate the Wasserstein distance between the real and generated distributions. This introduced a new challenge, since Wasserstein distance estimation requires the function space of the critic to only consist of 1-Lipschitz functions.\nTo enforce the Lipschitz constraint on the WGAN critic, Arjovsky et al. (2017) originally used weight clipping, which was soon replaced by the much more effective method of Gradient Penalty (GP) (Gulrajani et al., 2017), which consists of penalizing the deviation of the critic’s gradient norm from 1 at certain input points. Since then, several variants of gradient norm penalization have been introduced (Petzka et al., 2018; Wei et al., 2018; Adler and Lunz, 2018; Zhou et al., 2019b).\nVirtual Adversarial Training (VAT) (Miyato et al., 2019) is a semi-supervised learning method for improving robustness against local perturbations of the input. Using an iterative method based on power iteration, it approximates the adversarial direction corresponding to certain input points. Perturbing an input towards its adversarial direction changes the network’s output the most.\nInspired by VAT, we propose a method called Adversarial Lipschitz Regularization (ALR), enabling the training of neural networks with regularization terms penalizing the violation of the Lipschitz constraint explicitly, instead of through the norm of the gradient. It provides means to generate a pair for each input point, for which the Lipschitz constraint is likely to be violated with high probability. In general, enforcing Lipschitz continuity of complex models can be useful for a lot of applications. In this work, we focus on applying ALR to Wasserstein GANs, as regularizing or constraining Lipschitz continuity has proven to have a high impact on training stability and reducing mode collapse. Source code to reproduce the presented experiments is available at https://github.com/dterjek/adversarial_lipschitz_regularization.\nOur contributions are as follows:\n• We propose Adversarial Lipschitz Regularization (ALR) and apply it to penalize the violation of the Lipschitz constraint directly, resulting in Adversarial Lipschitz Penalty (ALP).\n• Applying ALP on the critic in WGAN (WGAN-ALP), we show state-of-the-art performance in terms of Inception Score and Fréchet Inception Distance among non-progressive growing methods trained on CIFAR-10, and competitive performance in the high-dimensional setting when applied to the critic in Progressive Growing GAN trained on CelebA-HQ." 
}, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 WASSERSTEIN GENERATIVE ADVERSARIAL NETWORKS", "text": "Generative adversarial networks (GANs) provide generative modeling by a generator network g that transforms samples of a low-dimensional latent space Z into samples from the data space X , transporting mass from a fixed noise distribution PZ to the generated distribution Pg . The generator is trained simultaneously with another network f called the discriminator, which is trained to distinguish between fake samples drawn from Pg and real samples drawn from the real distribution Pr, which is often represented by a fixed dataset. This network provides the learning signal to the generator, which is trained to generate samples that the discriminator considers real. This iterative process implements the minimax game\nmin g max f Ex∼Pr log(f(x)) + Ez∼PZ log(1− f(g(z))) (1)\nplayed by the networks f and g. This training procedure minimizes the approximate Jensen-Shannon divergence (JSD) between Pr and Pg (Goodfellow et al., 2014). However, during training these two distributions might differ strongly or even have non-overlapping supports, which might result in gradients received by the generator that are unstable or zero (Arjovsky and Bottou, 2017).\nWasserstein GAN (WGAN) (Arjovsky et al., 2017) was proposed as a solution to this instability. Originating from Optimal Transport theory (Villani, 2008), the Wasserstein metric provides a distance between probability distributions with much better theoretical and practical properties than the JSD. It provides a smooth optimizable distance even if the two distributions have non-overlapping supports, which is not the case for JSD. It raises a metric dX from the space X of the supports of the probability distributions P1 and P2 to the space of the probability distributions itself. For these purposes, the Wasserstein-p distance requires the probability distributions to be defined on a metric space and is defined as\nWp(P1, P2) =\n( inf\nπ∈Π(P1,P2) E(x1,x2)∼πdX(x1, x2)\np\n) 1 p\n, (2)\nwhere Π(P1, P2) is the set of distributions on the product space X ×X whose marginals are P1 and P2, respectively. The optimal π achieving the infimum in (2) is called the optimal coupling of P1 and P2, and is denoted by π∗. The case of p = 1 has an equivalent formulation\nW1(P1, P2) = sup ‖f‖L≤1 Ex∼P1f(x)− Ex∼P2f(x), (3)\ncalled the Kantorovich-Rubinstein formula (Villani, 2008), where f : X → R is called the potential function, ‖f‖L ≤ 1 is the set of all functions that are 1-Lipschitz with respect to the ground metric dX , and the Wasserstein-1 distance corresponds to the supremum over all 1-Lipschitz potential functions. The smallest Lipschitz constant for a real-valued function f with the metric space (X, dX) as its domain is given by\n‖f‖L = sup x,y∈X;x 6=y |f(x)− f(y)| dX(x, y) . (4)\nBased on (3), the critic in WGAN (Arjovsky et al., 2017) implements an approximation of the Wasserstein-1 distance between Pg and Pr. The minimax game played by the critic f and the generator g becomes\nmin g max ‖f‖L≤1 Ez∼PZf(g(z))− Ex∼Prf(x), (5)\na formulation that proved to be superior to the standard GAN in practice, with substantially more stable training behaviour and improved sample quality (Arjovsky et al., 2017), although recent GAN variants do not always use this objective (Brock et al., 2019). 
With WGAN, the challenge became effectively restricting the smallest Lipschitz constant of the critic $f$, sparking the birth of a plethora of Lipschitz regularization techniques for neural networks." }, { "heading": "2.2 LIPSCHITZ FUNCTION APPROXIMATION", "text": "A general definition of the smallest Lipschitz constant of a function $f : X \to Y$ is

$$\|f\|_L = \sup_{x, y \in X; \, x \neq y} \frac{d_Y(f(x), f(y))}{d_X(x, y)}, \quad (6)$$

where the metric spaces $(X, d_X)$ and $(Y, d_Y)$ are the domain and codomain of the function $f$, respectively. The function $f$ is called Lipschitz continuous if there exists a real constant $K \geq 0$ for which $d_Y(f(x), f(y)) \leq K \cdot d_X(x, y)$ for any $x, y \in X$. Then, the function $f$ is also called K-Lipschitz. Theoretical properties of neural networks with low Lipschitz constants were explored in Oberman and Calder (2018), Bartlett (1998) and Drucker and LeCun (1992), showing that they induce better generalization.

Learning mappings with Lipschitz constraints became prevalent in the field of deep learning with the introduction of WGAN (Arjovsky et al., 2017). Enforcing the Lipschitz property on the critic was first done by clipping the weights of the network. This approach achieved superior results compared to the standard GAN formulation, but still sometimes yielded poor quality samples or even failed to converge. While clipping the weights enforces a global Lipschitz constant, it also reduces the function space, which might not include the optimal critic any more. Soon this method was replaced by a softened one called Gradient Penalty (GP) (Gulrajani et al., 2017). Motivated by the fact that the optimal critic should have unit gradient norm on lines connecting the coupled points $(x_1, x_2) \sim \pi^*$ according to (2), they proposed a regularizer that enforces unit gradient norm along these lines, which not only enforces the Lipschitz constraint, but other properties of the optimal solution as well. However, $\pi^*$ is not known in practice, which is why Gulrajani et al. (2017) proposed to apply GP on samples of the induced distribution $P_i$, obtained by interpolating samples from the marginals $P_1$ and $P_2$. The critic in the WGAN-GP formulation is regularized with the loss

$$\lambda \, \mathbb{E}_{x \sim P_i} \left( \|\nabla_x f(x)\|_2 - 1 \right)^2 \quad (7)$$

where $P_i$ denotes the distribution of samples obtained by interpolating pairs of samples drawn from $P_r$ and $P_g$, and $\lambda$ is a hyperparameter acting as a Lagrange multiplier.

Theoretical arguments against GP were pointed out by Petzka et al. (2018) and Gemici et al. (2018), arguing that unit gradient norm on samples of the distribution $P_i$ is not valid, as the pairs of samples being interpolated are generally not from the optimal coupling $\pi^*$, and thus do not necessarily need to match gradient norm 1. Furthermore, they point out that the differentiability assumptions of the optimal critic are not met. Therefore, the regularizing effect of GP might be too strong. As a solution, Petzka et al. (2018) suggested using a loss penalizing the violation of the Lipschitz constraint either explicitly with

$$\lambda \, \mathbb{E}_{x, y \sim P_\tau} \left( \frac{|f(x) - f(y)|}{\|x - y\|_2} - 1 \right)_+^2 \quad (8)$$

or implicitly with

$$\lambda \, \mathbb{E}_{x \sim P_\tau} \left( \|\nabla_x f(x)\|_2 - 1 \right)_+^2 \quad (9)$$

where in both cases $(a)_+$ denotes $\max(0, a)$. The first method only proved viable when used on toy datasets, and led to considerably worse results on relatively more complex datasets like CIFAR-10, which is why Petzka et al. (2018) used the second one, which they termed Lipschitz Penalty (LP). Compared to GP, this term only penalizes the gradient norm when it exceeds 1.
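A sketch of the penalties (7) and (9), assuming PyTorch and equally sized real and fake minibatches. The sampling of $P_i$ by per-example interpolation follows Gulrajani et al. (2017), while the function name and the `one_sided` switch (selecting LP instead of GP) are our own.

```python
import torch

def gradient_penalty(critic, x_real, x_fake, one_sided=False):
    # Sample from P_i: interpolate each real/fake pair with a random weight.
    shape = (x_real.size(0),) + (1,) * (x_real.dim() - 1)
    t = torch.rand(shape, device=x_real.device)
    x_hat = (t * x_real + (1 - t) * x_fake).requires_grad_(True)
    # Per-example gradient of the critic at the interpolated points.
    grad = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)[0]
    grad_norm = grad.flatten(start_dim=1).norm(2, dim=1)
    deviation = grad_norm - 1
    if one_sided:                      # LP (9): penalize only norms above 1
        deviation = deviation.clamp(min=0)
    return (deviation ** 2).mean()     # GP (7) when one_sided=False
```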
As $P_\tau$, they evaluated the interpolation method described above, and also sampling random local perturbations of real and generated samples, but found no significant improvement compared to $P_i$. Wei et al. (2018) proposed dropout in the critic as a way of creating perturbed input pairs to evaluate the explicit Lipschitz penalty (8), which led to improvements, but still relied on using GP simultaneously.

A second family of Lipschitz regularization methods is based on weight normalization, restricting the Lipschitz constant of a network globally instead of only at points of the input space. One such technique is called spectral normalization (SN), proposed in Miyato et al. (2018), which is a very efficient and simple method for enforcing a Lipschitz constraint with respect to the 2-norm on a per-layer basis, applicable to neural networks consisting of affine layers and K-Lipschitz activation functions. Gouk et al. (2018) proposed a similar approach, which can be used to enforce a Lipschitz constraint with respect to the 1-norm and ∞-norm in addition to the 2-norm, while also being compatible with batch normalization and dropout. Anil et al. (2019) argued that any Lipschitz-constrained neural network must preserve the norm of the gradient during backpropagation, and to this end proposed another weight normalization technique (showing that it compares favorably to SN, which is not gradient norm preserving), and an activation function based on sorting." }, { "heading": "2.3 VIRTUAL ADVERSARIAL TRAINING", "text": "VAT (Miyato et al., 2019) is a semi-supervised learning method that is able to regularize networks to be robust to local adversarial perturbation. Virtual adversarial perturbation means perturbing input sample points in such a way that the change in the output of the network induced by the perturbation is maximal in terms of a distance between distributions. This defines a direction for each sample point called the virtual adversarial direction, in which the perturbation is performed. It is called virtual to make the distinction with the adversarial direction introduced in Goodfellow et al. (2015) clear, as VAT uses unlabeled data with virtual labels, assigned to the sample points by the network being trained. The regularization term of VAT is called Local Distributional Smoothness (LDS). It is defined as

$$L_{LDS} = D\left(p(y|x), \, p(y|x + r_{vadv})\right), \quad (10)$$

where $p$ is a conditional distribution implemented by a neural network, $D(p, p')$ is a divergence between two distributions $p$ and $p'$, for which Miyato et al. (2019) chose the Kullback-Leibler divergence (KLD), and

$$r_{vadv} = \arg\max_{\|r\|_2 \leq \epsilon} D\left(p(y|x), \, p(y|x + r)\right) \quad (11)$$

is the virtual adversarial perturbation, where $\epsilon$ is a hyperparameter. VAT is defined as a training method with the regularizer (10) applied to labeled and unlabeled examples. An important detail is that (10) is minimized by keeping $p(y|x)$ fixed and optimizing $p(y|x + r_{vadv})$ to be close to it. The adversarial perturbation is approximated by the power iteration $r_{vadv} \approx \epsilon r_k$, where

$$r_{i+1} \approx \frac{\nabla_r D\left(p(y|x), \, p(y|x + r)\right) \big|_{r = \xi r_i}}{\left\| \nabla_r D\left(p(y|x), \, p(y|x + r)\right) \big|_{r = \xi r_i} \right\|_2}, \quad (12)$$

$r_0$ is a randomly sampled unit vector and $\xi$ is another hyperparameter. This iterative scheme is an approximation of the direction at $x$ that induces the greatest change in the output of $p$ in terms of the divergence $D$. Miyato et al. (2019) found that $k = 1$ iteration is sufficient in practical situations."
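The power iteration (12) can be written compactly; below is a minimal sketch assuming PyTorch, a classifier `model` returning logits, and our own helper names. The KL divergence and the roles of $\xi$, $\epsilon$ and $k$ follow the description above; no concrete default values are implied.

```python
import torch
import torch.nn.functional as F

def _unit(v):
    # Normalize each example in the batch to unit l2 norm.
    norm = v.flatten(start_dim=1).norm(2, dim=1).clamp(min=1e-12)
    return v / norm.view(-1, *([1] * (v.dim() - 1)))

def virtual_adversarial_perturbation(model, x, eps, xi, k=1):
    with torch.no_grad():
        p_logits = model(x)                    # "virtual labels", kept fixed
    r = _unit(torch.randn_like(x))             # random unit vector r_0
    for _ in range(k):                         # power iteration (12)
        r.requires_grad_(True)
        d = F.kl_div(F.log_softmax(model(x + xi * r), dim=1),
                     F.softmax(p_logits, dim=1), reduction="batchmean")
        r = _unit(torch.autograd.grad(d, r)[0])
    return eps * r                             # r_vadv, scaled to length eps
```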
}, { "heading": "3 ADVERSARIAL LIPSCHITZ REGULARIZATION", "text": "Adler and Lunz (2018) argued that penalizing the norm of the gradient as in (9) is more effective than penalizing the Lipschitz quotient directly as in (8), as the former penalizes the slope of f in all spatial directions around x, unlike the latter, which does so only along (x− y). We hypothesize that using the explicit Lipschitz penalty in itself is insufficient because if one takes pairs of samples x, y randomly from Pr, Pg or Pi (or just one sample and generates a pair for it with random perturbation), the violation of the Lipschitz penalty evaluated at these sample pairs will be far from its maximum, hence a more sophisticated strategy for sampling pairs is required. As we will show, a carefully chosen sampling strategy can in fact make the explicit penalty favorable over the implicit one.\nConsider the network f as a mapping from the metric space (X, dX) to the metric space (Y, dY ). Let us rewrite (6) with y = x+ r to get\n‖f‖L = sup x,x+r∈X;0<dX(x,x+r)\ndY (f(x), f(x+ r))\ndX(x, x+ r) . (13)\nA given mapping f is K-Lipschitz if and only if for any given x ∈ X , taking the supremum over r in (13) results in a value K or smaller. Assuming that this supremum is always achieved for some r, we\ncan define a notion of adversarial perturbation with respect to the Lipschitz continuity for a given x ∈ X as\nradv = arg max x+r∈X;0<dX(x,x+r)\ndY (f(x), f(x+ r))\ndX(x, x+ r) , (14)\nand the corresponding maximal violation of the K-Lipschitz constraint as\nLALP =\n( dY (f(x), f(x+ radv))\ndX(x, x+ radv) −K ) + . (15)\nWe define Adversarial Lipschitz Regularization (ALR) as the method of adding (15) as a regularization term to the training objective that penalizes the violation of the Lipschitz constraint evaluated at sample pairs obtained by adversarial perturbation. We call this term Adversarial Lipschitz Penalty (ALP).\nTo put it in words, ALP measures the deviation of f from being K-Lipschitz evaluated at pairs of sample points where one is the adversarial perturbation of the other. If added to the training objective, it makes the learned mapping approximately K-Lipschitz around the sample points it is applied at. We found that in the case of the WGAN critic it is best to minimize (15) without keeping f(x) fixed. See Appendix A.1 for the semi-supervised case and Appendix A.2 for how VAT can be seen as a special case of Lipschitz regularization.\n3.1 APPROXIMATION OF radv\nIn general, computing the adversarial perturbation (14) is a nonlinear optimization problem. A crude and cheap approximation is radv ≈ rk, where\nri+1 ≈ ∇rdY (f(x), f(x+ r)) ∣∣∣ r=ξri∥∥∥∥∇rdY (f(x), f(x+ r)) ∣∣∣ r=ξri ∥∥∥∥ 2 , (16)\nis the approximated adversarial direction with r0 being a randomly sampled unit vector. The derivation of this formula is essentially the same as the one described in Miyato et al. (2019), but is included in Appendix A.3 for completeness. Unlike in VAT, we do not fix , but draw it randomly from a predefined distribution P over R+ to apply the penalty at different scales.\nTheoretically, ALR can be used with all kinds of metrics dX and dY , and any kind of model f , but the approximation of radv imposes a practical restriction. It approximates the adversarial perturbation of x as a translation with length with respect to the 2-norm in the adversarial direction, which is only a perfect approximation if the ratio in (15) is constant for any > 0. 
This idealized setting is hardly ever the case, which is why we see the search for other approximation schemes as an important future direction. There is a large number of methods for generating adversarial examples besides the one proposed in VAT (Shafahi et al., 2019; Wong et al., 2019; Khrulkov and Oseledets, 2018), which could possibly be combined with ALR either to improve the approximation performance or to make it possible with new kinds of metrics. The latter is important since one of the strengths of the Wasserstein distance is that it can be defined with any metric $d_X$, a fact that Adler and Lunz (2018) and Dukler et al. (2019) built on by extending GP to work with metrics other than the Euclidean distance. Adler and Lunz (2018) emphasized the fact that through explicit Lipschitz penalties one could extend WGANs to more general metric spaces as well." }, { "heading": "3.2 HYPERPARAMETERS", "text": "In practice, one adds the Monte Carlo approximation of the expectation (averaged over a minibatch of samples) of either (15) or the square of (15) (or both) to the training objective, multiplied by a Lagrange multiplier $\lambda$. While VAT adds the expectation of (10) to the training objective, for WGAN we added the expectation of the square of (15). To train the Progressive GAN, we added both the expectation and that of the square. In the semi-supervised setting, we added only the expectation, similarly to VAT. We found these choices to work best in these scenarios, but a principled answer to this question is beyond the scope of this paper. The target Lipschitz constant $K$ can be tuned by hand, or in the presence of labeled data it is possible to calculate the Lipschitz constant of the dataset (Oberman and Calder, 2018). The hyperparameters of the approximation scheme are $k$, $\xi$ and those of $P$.

Choosing the right hyperparameters can be done by monitoring the number of adversarial perturbations found by the algorithm for which the Lipschitz constraint is violated (and which hence contribute a nonzero value to the expectation of (15)), and tuning the hyperparameters in order to keep this number balanced between its maximum (which is the minibatch size) and its minimum (which is 0). If it is too high, it means that either $K$ is too small and should be increased, or the regularization effect is too weak, so one should increase $\lambda$. If it is too low, then either the regularization effect is too strong, or ALR is parametrized in a way that it cannot find Lipschitz constraint violations efficiently. In the former case, one should decrease $\lambda$. In the latter, one should either decrease $K$, tune the parameters of $P$, or increase the number of power iterations $k$ at the price of increased runtime. We have not observed any significant effect when changing the value of $\xi$ in any of the tasks considered." }, { "heading": "3.3 COMPARISON WITH OTHER LIPSCHITZ REGULARIZATION TECHNIQUES", "text": "In terms of efficiency when applied to WGANs, ALR compares favorably to the implicit methods penalizing the gradient norm, and to weight normalization techniques as well, as demonstrated in the experiments section. See Appendix A.4 for a showcase of the differences between weight normalization methods, implicit penalty methods and explicit penalty methods, represented by SN, LP and ALR, respectively.
The key takeaways are that

• penalty methods result in a softer regularization effect than SN,

• ALR is preferable when the regularized network contains batch normalization (BN) layers, and

• ALR gives more control over the regularization effect, which also means there are more hyperparameters to tune.

The performance of ALR mostly depends on the speed of the approximation of $r_{adv}$. The current method requires 1 step of backpropagation for each power iteration step, which means that the running time will be similar to that of LP and GP with $k = 1$. SN is much cheaper computationally than each penalty method, although we believe ALR has the potential to become relatively cheap as well by adopting new techniques for obtaining adversarial examples (Shafahi et al., 2019)." }, { "heading": "4 WGAN-ALP", "text": "We specialize the ALP formula (15) with $f$ being the critic, $d_X(x, y) = \|x - y\|_2$, $d_Y(x, y) = |x - y|$ and $K = 1$, and apply it to the WGAN objective to arrive at a version with the explicit penalty, which uses adversarial perturbations as a sampling strategy. It is formulated as

$$\mathbb{E}_{z \sim P_Z} f(g(z)) - \mathbb{E}_{x \sim P_r} f(x) + \lambda \, \mathbb{E}_{x \sim P_{r,g}} \left( \frac{|f(x) - f(x + r_{adv})|}{\|r_{adv}\|_2} - 1 \right)_+^2, \quad (17)$$

where $P_{r,g}$ is a combination of the real and generated distributions (meaning that a sample $x$ can come from both), $\lambda$ is the Lagrange multiplier, and the adversarial perturbation is defined as

$$r_{adv} = \arg\max_{r; \, 0 < \|r\|_2} \frac{|f(x) - f(x + r)|}{\|r\|_2}. \quad (18)$$

This formulation of WGAN results in a stable explicit Lipschitz penalty, overcoming the difficulties experienced when one tries to apply it to random sample pairs, as shown in Petzka et al. (2018).

To evaluate the performance of WGAN-ALP, we trained one on CIFAR-10, consisting of $32 \times 32$ RGB images, using the residual architecture from Gulrajani et al. (2017), implemented in TensorFlow. Closely following Gulrajani et al. (2017), we used the Adam optimizer (Kingma and Ba, 2015) with parameters $\beta_1 = 0$, $\beta_2 = 0.9$ and an initial learning rate of $2 \times 10^{-4}$ decaying linearly to 0 over 100000 iterations, training the critic for 5 steps and the generator for 1 per iteration with minibatches of size 64 (doubled for the generator). We used (17) as a loss function to optimize the critic. $K = 1$ was an obvious choice, and we found $\lambda = 100$ to be optimal (the training diverged for $\lambda = 0.1$, and
We report these values for WGAN-ALP and other relevant GANs (Gulrajani et al., 2017; Petzka et al., 2018; Zhou et al., 2019a; Wei et al., 2018; Miyato et al., 2018; Adler and Lunz, 2018; Karras et al., 2018) in Table 1. We did not run experiments to evaluate competing models, but included the values reported in the corresponding papers (with the exception of the FID for WGAN-GP, which was taken from Zhou et al. (2019a)). They used different methods to arrive at the cited results, from which that of Adler and Lunz (2018) is the one closest to ours. We show some generated samples in Figure 1a.\nWe also trained WGAN-LP in our implementation. During training, the best observed Inception Score and FID were 8.13 and 18.49, while at the end of training the best final Inception Score and\nFID were 8.01 and 15.42. To see that ALR indeed restricts the Lipschitz constant of the critic, we monitored the gradient norms during training, which converged to ≈ 5 with λ = 100. This was also the case using LP with λ = 0.1, but the number of Lipschitz constraint violations found by the algorithm were much higher in this case than with ALR.\nOur toy example in Appendix A.4 showed that when the regularized network contains BN layers, ALR seems to work better than competing methods. In order to see if this still applies in more complex settings, we have trained a variant of WGAN in which the critic contains BN layers (WGAN-BN). Gulrajani et al. (2017) did not use BN in the critic as they argued that GP is not valid in that setting, and indeed when we trained WGAN-BN with GP, the best Inception Score observed during training was only 6.29. When we applied ALP to WGAN-BN, the results were nearly on par with the original setting without BN, producing an even better maximal Inception Score of 8.71. We leave the question of how BN affects Lipschitz continuity for future work. Generated samples are shown in Figure 1b.\nGulrajani et al. (2017) made the distinction between one-sided and two-sided penalties, represented by (9) and (7). The latter is based on the fact that in WGAN, the optimal critic has unit gradient norm on lines connecting points from the optimal coupling π∗. Petzka et al. (2018) showed that since π∗ is not known in practice, one should use the one-sided penalty, while Gemici et al. (2018) proposed a method to approximate π∗ with an auto-encoding scheme. In the limit ‖r‖2 → 0 the expression inside the arg max operator in (18) is equivalent to the directional derivative of f along r, and the vector radv corresponding to the maximum value of the directional derivative at x is equivalent to ∇xf(x). Since the critic f corresponds to the potential function in the dual formulation of the optimal transport problem, at optimality its gradient at x points towards its coupling y, where (x, y) ∼ π∗. From this perspective, sampling pairs (x, x+ radv) using (18) can be seen as an approximation of the optimal coupling π∗. To test how reasonable this approximation is, we have trained a WGAN variant with the two-sided explicit penalty formulated as\nEz∼PZf(g(z))− Ex∼Prf(x) + λEx∼Pr,g ( |f(x)− f(x+ radv)| ‖radv‖2 − 1 )2 , (19)\nwhich performed similarly to the one-sided case with λ = 10, but was less stable for other values of λ. The findings of Petzka et al. (2018) were similar for the case of the implicit penalty. 
Improving the approximation scheme of $r_{adv}$ might render the formulation using the two-sided penalty (19) preferable in the future.

To show that ALR works in a high-dimensional setting as well, we trained a Progressive GAN on the CelebA-HQ dataset (Karras et al., 2018), consisting of $1024 \times 1024$ RGB images. We took the official TensorFlow implementation and replaced the loss function of the critic, which originally used GP, with a version of ALP. Using (17) as the training objective was stable until the last stage of progressive growing, but to make it work at the highest resolution, we had to replace it with

$$\mathbb{E}_{z \sim P_Z} f(g(z)) - \mathbb{E}_{x \sim P_r} f(x) + \lambda \, \mathbb{E}_{x \sim P_{r,g}} \left( \left( \frac{|f(x) - f(x + r_{adv})|}{\|r_{adv}\|_2} - 1 \right)_+^2 + \left( \frac{|f(x) - f(x + r_{adv})|}{\|r_{adv}\|_2} - 1 \right)_+ \right), \quad (20)$$

meaning that we used the sum of the absolute and squared values of the Lipschitz constraint violation as the penalty. The optimal hyperparameters were $\lambda = 0.1$, $P$ being the uniform distribution over $[0.1, 100]$, $\xi = 10$ and $k = 1$ step of power iteration. The best FID seen during training with the original GP version was 8.69, while for the modified ALP version it was 14.65. The example shows that while ALP did not beat GP in this case (possibly because the implementation was fine-tuned using GP), it does work in the high-dimensional setting as well. For samples generated by the best performing ALR and GP variants, see Appendix A.5." }, { "heading": "5 CONCLUSIONS", "text": "Inspired by VAT, we proposed ALR and showed that it is an efficient and powerful method for learning Lipschitz constrained mappings implemented by neural networks. Resulting in competitive performance when applied to the training of WGANs, ALR is a generally applicable regularization method. It draws an important parallel between Lipschitz regularization and adversarial training, which we believe can prove to be a fruitful line of future research." }, { "heading": "ACKNOWLEDGEMENTS", "text": "The author would like to thank Michael Herman from the Bosch Center for Artificial Intelligence (BCAI) for the fruitful discussions, and the Advanced Engineering team in Budapest, especially Géza Velkey." }, { "heading": "A APPENDIX", "text": "A.1 SEMI-SUPERVISED LEARNING

Since VAT is a semi-supervised learning method, it is important to see how ALR fares in that regime. To show this, we replicated one of the experiments from Miyato et al. (2019). We trained the ConvLarge architecture to classify images from CIFAR-10 with the same setting as described in Miyato et al. (2019), except that we did not decay the learning rate, but kept it fixed at $3 \times 10^{-4}$. We split the 50000 training examples into 4000 samples for the classification loss, 45000 samples for regularization and 1000 for validation, with equally distributed classes. Test performance was evaluated on the 10000 test examples. We found that unlike in the unsupervised setting, here it was important to assume $f(x)$ fixed when minimizing the regularization loss, and also to complement the smoothing effect with entropy minimization (Grandvalet and Bengio, 2004). The baseline VAT method was ALR specialized with $K = 0$, $d_X$ being the Euclidean metric, $d_Y$ being the KL divergence, fixed $\epsilon = 8$ and $\lambda = 1$. This setting achieved a maximal validation performance of 84.2% and a test performance of 82.46%.
After some experimentation, the best performing choice was $K = 0$, $d_X$ being the $l_2$ metric, $d_Y$ the mean squared difference over the logit space (which parametrizes the categorical output distribution over which the KL divergence is computed in the case of VAT), $P$ being the uniform distribution over $[1, 10]$ and $\lambda = 1$. This way, the maximal validation performance was 85.3% and the test performance 83.54%. Although this ≈1% improvement is not very significant, it shows that ALR can be a competitive choice as a semi-supervised learning method as well.

A.2 VIRTUAL ADVERSARIAL TRAINING AS LIPSCHITZ REGULARIZATION

VAT was defined by considering neural networks implementing conditional distributions $p(y|x)$, where the distribution over discrete labels $y$ is conditioned on the input image $x$ (Miyato et al., 2019). To see why LDS (10), the regularization term of VAT, can be seen as a special kind of Lipschitz continuity, we will use a different perspective. Consider a mapping $f : X \to Y$ with domain $X$ and codomain $Y$, where $X$ is the space of images and $Y$ is the probability simplex (the space of distributions over the finite set of labels).

Since a divergence is in general a premetric (prametric, quasi-distance) on the space of probability measures (Deza and Deza, 2009), and Lipschitz continuity is defined for mappings between metric spaces, let us restrict the divergence $D$ from the VAT formulation to be a metric $d_Y$. Miyato et al. (2019) used KLD in their experiments, which is not a metric, but one can use e.g. the square root of JSD or the Hellinger distance, which are metrics. Let us metrize the space of images $X$ with $d_X$ being the Euclidean metric. From this perspective, the network $f$ is a mapping from the metric space $(X, d_X)$ to the metric space $(Y, d_Y)$. Let us also assume that we aim to learn a mapping $f$ with the smallest possible $\|f\|_L$ by setting $K$ to 0. To enforce the condition $x + r \in X$ in (14), we bound the Euclidean norm of $r$ from above by some predefined $\epsilon > 0$. If we make the additional assumption that the supremum is always achieved with an $r$ of maximal norm $\epsilon$, the denominator in (14) will be constant, hence the formulas with and without it will be equivalent up to a scaling factor. With these simplifications, (14) and (15) reduce to

$$r_{adv}^{VAT} = \arg\max_{0 \leq \|r\|_2 \leq \epsilon} d_Y(f(x), f(x + r)) \quad (21)$$

and

$$L_{ALP}^{VAT} = d_Y(f(x), f(x + r_{adv})), \quad (22)$$

which are equivalent to (11) and (10), respectively. Let us consider the question of keeping $f(x)$ fixed when minimizing (22) an implementation detail. With this discrepancy aside, we have recovered VAT as a special case of Lipschitz regularization.

A.3 DERIVATION OF THE APPROXIMATION OF $r_{adv}$

We assume that $f$ and $d_Y$ are both twice differentiable with respect to their arguments almost everywhere, the latter specifically at $x = y$. Note that one can easily find a $d_Y$ for which the last assumption does not hold, for example the $l_1$ distance. If $d_Y$ is translation invariant, meaning that $d_Y(x, y) = d_Y(x + u, y + u)$ for each $u \in Y$, then its subderivatives at $x = y$ will be independent of $x$, hence the method described below will still work. Otherwise, one can resort to using a proxy metric in place of $d_Y$ for the approximation, for example the $l_2$ distance.

We denote $d_Y(f(x), f(x + r))$ by $d(r, x)$ for simplicity. Because $d(r, x) \geq 0$ and $d(0, x) = 0$, it is easy to see that

$$\nabla_r d(r, x) \big|_{r=0} = 0, \quad (23)$$

so that the second-order Taylor approximation of $d(r, x)$ is $d(r, x) \approx \frac{1}{2} r^T H(x) r$, where $H(x) = \nabla \nabla_r d(r, x) \big|_{r=0}$ is the Hessian matrix.
The eigenvector $u$ of $H(x)$ corresponding to its eigenvalue with the greatest absolute value is the direction of greatest curvature, which is approximately the adversarial direction that we are looking for. The power iteration (Householder, 1964) defined by

$$r_{i+1} := \frac{H(x) r_i}{\|H(x) r_i\|_2}, \quad (24)$$

where $r_0$ is a randomly sampled unit vector, converges to $u$ if $u$ and $r_0$ are not perpendicular. Calculating $H(x)$ is computationally heavy, which is why $H(x) r_i$ is approximated using the finite differences method as

$$H(x) r_i \approx \frac{\nabla_r d(r, x) \big|_{r = \xi r_i} - \nabla_r d(r, x) \big|_{r = 0}}{\xi} = \frac{\nabla_r d(r, x) \big|_{r = \xi r_i}}{\xi}, \quad (25)$$

where the equality follows from (23). The hyperparameter $\xi \neq 0$ is introduced here. In summary, the adversarial direction is approximated by the iterative scheme

$$r_{i+1} := \frac{\nabla_r d(r, x) \big|_{r = \xi r_i}}{\left\| \nabla_r d(r, x) \big|_{r = \xi r_i} \right\|_2}, \quad (26)$$

of which one iteration is found to be sufficient and necessary in practice.

A.4 TOY EXAMPLE

To showcase the differences between weight normalization methods, implicit penalty methods and explicit penalty methods, represented by SN, LP and ALR, respectively, we devised the following toy example. Suppose that we want to approximate the following real-valued mapping on the 2-dimensional interval $[-4, 4]^2$:

$$f(x, y) = \begin{cases} 0 & \text{if } 1 \leq \sqrt{x^2 + y^2} \leq 2, \\ 1 & \text{otherwise} \end{cases} \quad (27)$$

for $-4 \leq x, y \leq 4$. In addition, we want the approximation to be 1-Lipschitz. It is easy to see that the optimal approximation with respect to the mean squared error is

$$\hat{f}_{opt}(x, y) = \begin{cases} 1 & \text{if } \sqrt{x^2 + y^2} \leq 0.5, \\ 1.5 - \sqrt{x^2 + y^2} & \text{if } 0.5 < \sqrt{x^2 + y^2} \leq 1.5, \\ \sqrt{x^2 + y^2} - 1.5 & \text{if } 1.5 < \sqrt{x^2 + y^2} \leq 2.5, \\ 1 & \text{otherwise.} \end{cases} \quad (28)$$

This example has connections to WGAN, as the optimal critic is 1-Lipschitz, and its approximation will provide the learning signal to the generator in the form of gradients. Therefore, it is important to closely approximate the gradient of the optimal critic, which is achieved indirectly by Lipschitz regularization. In this example, we will see how closely the different Lipschitz regularization methods can match the gradient of the optimal approximation $\hat{f}_{opt}$.
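For reference, the target (27) and the optimal 1-Lipschitz approximation (28) are easy to implement directly; a minimal NumPy sketch, with our own helper names (the experiment itself, described next, was implemented in PyTorch):

```python
import numpy as np

def target(x, y):
    # Target mapping (27): 0 on the annulus 1 <= sqrt(x^2 + y^2) <= 2, else 1.
    rho = np.sqrt(x ** 2 + y ** 2)
    return np.where((rho >= 1.0) & (rho <= 2.0), 0.0, 1.0)

def f_opt(x, y):
    # Optimal 1-Lipschitz approximation (28) w.r.t. mean squared error;
    # conditions are checked in order, so each branch matches (28) exactly.
    rho = np.sqrt(x ** 2 + y ** 2)
    return np.select([rho <= 0.5, rho <= 1.5, rho <= 2.5],
                     [np.ones_like(rho), 1.5 - rho, rho - 1.5],
                     default=1.0)
```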
We implemented the example in PyTorch. For the approximation $\hat{f}$, we use an MLP with 3 hidden layers containing 20, 40 and 20 neurons, respectively, with ReLU activations after the hidden layers, and a variant which also has batch normalization (BN) before the activations, since it has been found that BN hurts adversarial robustness (Galloway et al., 2019), and hence it should also hurt Lipschitz continuity. We trained the networks for $2^{14}$ iterations, with batches consisting of an input, a corresponding output, and an additional input for regularization. The inputs are drawn uniformly at random from $[-4, 4]^2$ and the output is defined by (27). The minibatch size was 64 for input-output pairs, and 1024 for regularization inputs. We used heatmaps to visualize the gradient norm surfaces of the optimal and learned mappings, with the color gradient going from black at 0 to white at 1; see Figure 2. This example is not intended to rank the competing Lipschitz regularization methods, as it always depends on the particular application which one is best suited, but to show that they are fundamentally different and competent in their own way.

Without any kind of regularization, the network learned to approximate the target function very well, but its gradients look nothing like those of $\hat{f}_{opt}$, although somehow it is a better match with BN.

When we apply SN to the MLP layers, the result without BN will be a very smooth mapping with maximum gradient norm far below 1. SN is not compatible with BN, the result being only slightly better than the unregularized case. A detail not visible here is that because SN considers weight matrices as linear maps from $\mathbb{R}^n$ to $\mathbb{R}^m$ and normalizes them layer-wise, it regularizes globally instead of around actual data samples; in this case, on the whole of $\mathbb{R}^2$ instead of just $[-4, 4]^2$. For WGANs trained on CIFAR-10, the input space consists of $32 \times 32$ RGB images with pixel values in $[-1, 1]$, but the trained mapping is regularized on $\mathbb{R}^{32 \times 32 \times 3}$ instead of just $[-1, 1]^{32 \times 32 \times 3}$ (which contains the supports of the real and fake distributions). This can hurt performance if the optimal mapping implemented by a particular network architecture is K-Lipschitz inside these supports, but not in some other parts of $\mathbb{R}^{32 \times 32 \times 3}$.

When the network is regularized using LP (9), the regularization strength can be controlled by tuning the value of $\lambda$. We trained with $\lambda = 0.1$, 1 and 10. Without BN, the highest of these values seems to work best. With BN, the resulting mapping is visibly highly irregular.

With ALR, in addition to $\lambda$, we have additional control over the regularization through the hyperparameters of the approximation scheme of $r_{adv}$. After some experimentation, we found that the best $P$ for this case was the uniform distribution over $[10^{-6}, 10^{-5}]$. We trained with $\lambda = 0.1$, 1 and 10, and $k = 0$, 1 and 5 power iterations. Arguably, both with and without BN the $\lambda = 1$ and $k = 5$ case seems like the best choice. Without BN, the results are quite similar to the LP case, but when BN is introduced, the resulting mappings are much smoother than the ones obtained with LP.

A.5 IMAGES GENERATED BY PROGRESSIVE GAN TRAINED ON CELEBA-HQ" } ]
2020
ADVERSARIAL LIPSCHITZ REGULARIZATION
SP:50780fb6b72c0da68cb960a12530c54a831222de
[ "This paper investigates the degree to which we might view attention weights as explanatory across NLP tasks and architectures. Notably, the authors distinguish between single and \"pair\" sequence tasks, the latter including NLI, and generation tasks (e.g., translation). The argument here is that attention weights do not provide explanatory power for single sequence tasks like classification, but do for NLI and generation. Another notable distinction from most (although not all; see the references below) prior work on the explainability of attention mechanisms in NLP is the inclusion of transformer/self-attentive architectures. ", "I use (unqualified) “self-attention” to refer to attention of tokens in a sequence to other tokens in the same sequence, as described by [some corrected version of] Eq (1) and the paragraph following it (citing Bahdanau et al. 2015). This contrasts with “Transformer self-attention” and “cross-sequence attention”." ]
The attention layer in a neural network model provides insights into the model's reasoning behind its prediction, reasoning which is otherwise usually criticized for being opaque. Recently, seemingly contradictory viewpoints have emerged about the interpretability of attention weights (Jain & Wallace, 2019; Vig & Belinkov, 2019). Amid such confusion arises the need to understand the attention mechanism more systematically. In this work, we attempt to fill this gap by giving a comprehensive explanation which justifies both kinds of observations (i.e., when attention is interpretable and when it is not). Through a series of experiments on diverse NLP tasks, we validate our observations and reinforce our claim of interpretability of attention through manual evaluation.
[]
[ { "authors": [ "Jimmy Ba", "Volodymyr Mnih", "Koray Kavukcuoglu" ], "title": "Multiple object recognition with visual attention", "venue": "In Proc. of ICLR,", "year": 2014 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "In Proc. of ICLR,", "year": 2015 }, { "authors": [ "Samuel R. Bowman", "Gabor Angeli", "Christopher Potts", "Christopher D. Manning" ], "title": "A large annotated corpus for learning natural language inference", "venue": "In Proc. of EMNLP,", "year": 2015 }, { "authors": [ "S. Chakraborty", "R. Tomsett", "R. Raghavendra", "D. Harborne", "M. Alzantot", "F. Cerutti", "M. Srivastava", "A. Preece", "S. Julier", "R.M. Rao", "T.D. Kelley", "D. Braines", "M. Sensoy", "C.J. Willis", "P. Gurram" ], "title": "Interpretability of deep learning models: A survey of results", "venue": "In Proc. of UbiComp,", "year": 2017 }, { "authors": [ "Kevin Clark", "Urvashi Khandelwal", "Omer Levy", "Christopher D. Manning" ], "title": "What does BERT look at? an analysis of bert’s attention", "venue": "In Proc. of BlackBoxNLP,", "year": 2019 }, { "authors": [ "J. Cohen" ], "title": "A Coefficient of Agreement for Nominal Scales", "venue": "Educational and Psychological Measurement,", "year": 1960 }, { "authors": [ "Yann N. Dauphin", "Angela Fan", "Michael Auli", "David Grangier" ], "title": "Language modeling with gated convolutional networks", "venue": "In Proc. of ICML,", "year": 2017 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "venue": "In Proc. of NAACL,", "year": 2019 }, { "authors": [ "Desmond Elliott", "Stella Frank", "Khalil Sima’an", "Lucia Specia" ], "title": "Multi30k: Multilingual englishgerman image descriptions", "venue": "In Proc of the 5th Workshop on Vision and Language,", "year": 2016 }, { "authors": [ "Reza Ghaeini", "Xiaoli Z. Fern", "Prasad Tadepalli" ], "title": "Interpreting recurrent and attention-based neural models: a case study on natural language inference", "venue": "In Proc. of EMNLP,", "year": 2018 }, { "authors": [ "Karl Moritz Hermann", "Tomáš Kočiský", "Edward Grefenstette", "Lasse Espeholt", "Will Kay", "Mustafa Suleyman", "Phil Blunsom" ], "title": "Teaching machines to read and comprehend", "venue": "In Proc. of NIPS,", "year": 2015 }, { "authors": [ "Sarthak Jain", "Byron C. Wallace" ], "title": "Attention is not Explanation", "venue": "In Proc. of ACL,", "year": 2019 }, { "authors": [ "J. Richard Landis", "Gary G. Koch" ], "title": "The measurement of observer agreement for categorical data", "venue": null, "year": 1977 }, { "authors": [ "Zhouhan Lin", "Minwei Feng", "Cı́cero Nogueira dos Santos", "Mo Yu", "Bing Xiang", "Bowen Zhou", "Yoshua Bengio" ], "title": "A structured self-attentive sentence embedding", "venue": "In Proc. of ICLR,", "year": 2017 }, { "authors": [ "Yang Liu", "Mirella Lapata" ], "title": "Learning structured text representations", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Thang Luong", "Hieu Pham", "Christopher D. Manning" ], "title": "Effective approaches to attention-based neural machine translation", "venue": "In Proc. of EMNLP,", "year": 2015 }, { "authors": [ "Andrew L. Maas", "Raymond E. Daly", "Peter T. Pham", "Dan Huang", "Andrew Y. 
Ng", "Christopher Potts" ], "title": "Learning word vectors for sentiment analysis", "venue": "In Proc. of ACL,", "year": 2011 }, { "authors": [ "Diego Marcheggiani", "Ivan Titov" ], "title": "Encoding sentences with graph convolutional networks for semantic role labeling", "venue": "In Proc. of EMNLP,", "year": 2017 }, { "authors": [ "Aäron van den Oord", "Nal Kalchbrenner", "Oriol Vinyals", "Lasse Espeholt", "Alex Graves", "Koray Kavukcuoglu" ], "title": "Conditional image generation with pixelcnn decoders", "venue": "In Proc. of NIPS,", "year": 2016 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher D. Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Proc. of EMNLP,", "year": 2014 }, { "authors": [ "Tim Rocktäschel", "Edward Grefenstette", "Karl Moritz Hermann", "Tomas Kocisky", "Phil Blunsom" ], "title": "Reasoning about entailment with neural attention", "venue": "In Proc. of ICLR,", "year": 2016 }, { "authors": [ "Sofia Serrano", "Noah A. Smith" ], "title": "Is attention interpretable", "venue": "In Proc. of ACL,", "year": 2019 }, { "authors": [ "Richard Socher", "Alex Perelygin", "Jean Wu", "Jason Chuang", "Christopher D. Manning", "Andrew Ng", "Christopher Potts" ], "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "venue": "In Proc. of EMNLP,", "year": 2013 }, { "authors": [ "Ian Tenney", "Dipanjan Das", "Ellie Pavlick" ], "title": "BERT rediscovers the classical NLP pipeline", "venue": "In Proc. of ACL,", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Ł ukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Proc. of NIPS,", "year": 2017 }, { "authors": [ "Jesse Vig", "Yonatan Belinkov" ], "title": "Analyzing the structure of attention in a transformer language model", "venue": "In Proc. of BlackBoxNLP,", "year": 2019 }, { "authors": [ "Yequan Wang", "Minlie Huang", "Xiaoyan Zhu", "Li Zhao" ], "title": "Attention-based LSTM for aspect-level sentiment classification", "venue": "In Proc. of EMNLP,", "year": 2016 }, { "authors": [ "Jason Weston", "Antoine Bordes", "Sumit Chopra", "Tomas Mikolov" ], "title": "Towards ai-complete question answering: A set of prerequisite toy tasks", "venue": "In Proc. of ICLR,", "year": 2015 }, { "authors": [ "Sarah Wiegreffe", "Yuval Pinter" ], "title": "Attention is not not explanation", "venue": "In Proc. of EMNLP,", "year": 2019 }, { "authors": [ "Adina Williams", "Nikita Nangia", "Samuel Bowman" ], "title": "A broad-coverage challenge corpus for sentence understanding through inference", "venue": "In Proc. of NAACL,", "year": 2018 }, { "authors": [ "Kelvin Xu", "Jimmy Lei Ba", "Ryan Kiros", "Kyunghyun Cho", "Aaron Courville", "Ruslan Salakhutdinov", "Richard S. Zemel", "Yoshua Bengio" ], "title": "Show, attend and tell: Neural image caption generation with visual attention", "venue": "In Proc. of ICML,", "year": 2015 }, { "authors": [ "Zichao Yang", "Diyi Yang", "Chris Dyer", "Xiaodong He", "Alex Smola", "Eduard Hovy" ], "title": "Hierarchical attention networks for document classification", "venue": "In Proc. of NAACL,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Attention is a way of obtaining a weighted sum of the vector representations of a layer in a neural network model (Bahdanau et al., 2015). It is used in diverse tasks ranging from machine translation (Luong et al., 2015), language modeling (Liu & Lapata, 2018) to image captioning (Xu et al., 2015), and object recognition (Ba et al., 2014). Apart from substantial performance benefit (Vaswani et al., 2017), attention also provides interpretability to neural models (Wang et al., 2016; Lin et al., 2017; Ghaeini et al., 2018) which are usually criticized for being black-box function approximators (Chakraborty et al., 2017).\nThere has been substantial work on understanding attention in neural network models. On the one hand, there is work on showing that attention weights are not interpretable, and altering them does not significantly affect the prediction (Jain & Wallace, 2019; Serrano & Smith, 2019). While on the other hand, some studies have discovered how attention in neural models captures several linguistic notions of syntax and coreference (Vig & Belinkov, 2019; Clark et al., 2019; Tenney et al., 2019). Amid such contrasting views arises a need to understand the attention mechanism more systematically. In this paper, we attempt to fill this gap by giving a comprehensive explanation which justifies both kinds of observations.\nThe conclusions of Jain & Wallace (2019); Serrano & Smith (2019) have been mostly based on text classification experiments which might not generalize to several other NLP tasks. In Figure 1, we report the performance on text classification, Natural Language Inference (NLI) and Neural Machine Translation (NMT) of two models: one trained with neural attention and the other trained with attention weights fixed to a uniform distribution. The results show that the attention mechanism in text classification does not have an impact on the performance, thus, making inferences about interpretability of attention in these models might not be accurate. However, on tasks such as NLI and NMT uniform attention weights degrades the performance substantially, indicating that attention is a crucial component of the model for these tasks and hence the analysis of attention’s interpretability here is more reasonable.\nIn comparison to the existing work on interpretability, we analyze attention mechanism on a more diverse set of NLP tasks that include text classification, pairwise text classification (such as NLI), and text generation tasks like neural machine translation (NMT). Moreover, we do not restrict ourselves to a single attention mechanism and also explore models with self-attention. For examining the interpretability of attention weights, we perform manual evaluation. Our key contributions are:\n1. We extend the analysis of attention mechanism in prior work to diverse NLP tasks and provide a comprehensive picture which alleviates seemingly contradicting observations.\n2. We identify the conditions when attention weights are interpretable and correlate with feature importance measures – when they are computed using two vectors which are both functions of the input (Figure 1b, c). We also explain why attention weights are not interpretable when the input has only single sequence (Figure 1a), an observation made by Jain & Wallace (2019), by showing that they can be viewed as a gating unit.\n3. We validate our hypothesis of interpretability of attention through manual evaluation." 
}, { "heading": "2 TASKS AND DATASETS", "text": "We investigate the attention mechanism on the following three task categories.\n1. Single Sequence tasks are those where the input consists of a single text sequence. For instance, in sentiment analysis, the task is to classify a review as positive or negative. This also includes other text classification tasks such as topic categorization. For the experiments, in this paper, we use three review rating datasets: (1) Stanford Sentiment Treebank (Socher et al., 2013), (2) IMDB (Maas et al., 2011) and (3) Yelp 20171 and one topic categorization dataset AG News Corpus (business vs world).2\n2. Pair Sequence tasks comprise of a pair of text sequences as input. The tasks like NLI and question answering come under this category. NLI involves determining whether a hypothesis entails, contradicts, or is undetermined given a premise. We use Stanford Natural Language Inference (SNLI) (Bowman et al., 2015) and Multi-Genre Natural Language Inference (MultiNLI) (Williams et al., 2018) datasets for our analysis. For question answering, similar to Jain & Wallace (2019), we use CNN News Articles (Hermann et al., 2015) and three tasks of the original babI dataset (Weston et al., 2015) in our experiments, i.e., using one, two and three supporting statements as the context for answering the questions.\n3. Generation tasks involve generating a sequence based on the input sequence. Neural Machine translation is an instance of generation task which comprises of translating a source text to a target language given translation pairs from a parallel corpus. For our experiments, we use\n1from www.yelp.com/dataset challenge 2www.di.unipi.it/ gulli/AG corpus of news articles.html\nthree English-German datasets: Multi30k (Elliott et al., 2016), En-De News Commentary v11 from WMT16 translation task3 and full En-De WMT13 dataset." }, { "heading": "3 NEURAL ATTENTION MODELS", "text": "In this section, we give a brief overview of the neural attention-based models we analyze for different categories of tasks listed in Section 2. The overall architecture for each category is shown in Fig 1." }, { "heading": "3.1 SINGLE SEQUENCE MODELS:", "text": "For single sequence tasks, we adopt the model architecture from Jain & Wallace (2019); Wiegreffe & Pinter (2019). For a given input sequence x ∈ RT×|V |, where T and |V | are the number of tokens and vocabulary size, we first represent each token with its d-dimensional GloVe embedding Pennington et al. (2014) to obtain xe ∈ RT×d. Next, we use a Bi-RNN encoder (Enc) to obtain an m-dimensional contextualized representation of tokens: h = Enc(xe) ∈ RT×m. Then, we use the additive formulation of attention proposed by Bahdanau et al. (2015) for computing attention weights αi for all tokens defined as:\nui = tanh(Whi + b); αi = exp(uTi c)∑ j exp(u T j c) , (1)\nwhere W ∈ Rd′×m, b, c ∈ Rd′ are the parameters of the model. Finally, the weighted instance representation: hα = ∑T i=1 αihi ∈ Rm is fed to a dense layer (Dec) followed by softmax to obtain prediction ŷ = σ(Dec(hα)) ∈ R|Y|, where |Y| denotes the label set size. We also analyze the hierarchical attention model (Yang et al., 2016), which involves first computing attention over the tokens to obtain a sentence representation. This is followed by attention over sentences to obtain an instance representation hα, which is fed to a dense layer for obtaining prediction (ŷ). At both word and sentence level the attention is computed similar to as defined in Equation 1." 
}, { "heading": "3.2 PAIR SEQUENCE MODELS:", "text": "For pair sequence, the input consists of two text sequences: x ∈ RT1×|V |,y ∈ RT2×|V | of length T1 and T2. In NLI, x indicates premise and y is hypothesis while in question answering, it is the question and paragraph respectively. Following Bowman et al. (2015), we use two separate RNNs for encoding both the sequences to obtain {hx1 , ...,hxT1} and {h y 1, ...,h y T2 }. Now, similar to Jain & Wallace (2019), attention weight αi over each token of x is computed as:\nui = tanh(W1h x i +W2h y T2 ); αi = exp(uTi c)∑ j exp(u T j c) , (2)\nwhere similar to Equation 1, W1,W2 ∈ Rd×d ′ denotes the projection matrices and c ∈ Rd′ is a parameter vector. Finally, the representation obtained from a weighted sum of tokens in x: hα =∑T i=1 αih x i is fed to a classifier for prediction.\nWe also explore a variant of the above attention proposed by Rocktäschel et al. (2016). Instead of keeping the RNN encoders of both the sequences independent, Rocktäschel et al. (2016) use conditional encoding where the encoder of y is initialized with the final state of x’s encoder. This allows the model to obtain a conditional encoding {h′y1 , ...,h ′y T2 } of y given the sequence x. Moreover, unlike the previous model, attention over the tokens of x is defined as follows:\nM = tanh(W1X +W2h ′y T2 ⊗ eT1); α = softmax(wTM), (3)\nwhere X = [hx1 , ...,h x T1 ], eT1 ∈ RT1 is a vector of ones and outer product W2h ′y T2 ⊗ eT1 denotes repeating linearly transformed h′yT2 as many times as words in the sequence x (i.e. T1 times).\n3http://www.statmt.org/wmt16/translation-task.html" }, { "heading": "3.3 GENERATION TASK MODELS:", "text": "In this paper, for generation tasks, we focus on Neural Machine Translation (NMT) problem which involves translating a given source text sentence x ∈ RT1×|V1| to a sequence y ∈ RT2×|V2| in the target language. The model comprises of two components: (a) an encoder which computes a representation for each source sentence and (b) a decoder which generates a target word at each time step. In this work, we utilize RNN based encoder and decoder models. For each input sentence x, we first obtain a contextualized representation {h1, ...,hT1} of its tokens using a multi-layer Bi-RNN. Then, at each time step t, the decoder has a hidden state defined as\nct = f(ct−1, yt−1,h t α),where h t α = T1∑ i=1 αt,ihi.\nIn our work, we compute αt,i as proposed by Bahdanau et al. (2015) and Luong et al. (2015). The former computes attention weights using a feed-forward network, i.e., αt,i = wT tanh(W [ct;hi]) while the latter define it simply as αt,i = cTt hi." }, { "heading": "3.4 SELF-ATTENTION BASED MODELS:", "text": "We also examine self-attention based models on all three categories of tasks. For single and pair sequence tasks, we fine-tune pre-trained BERT (Devlin et al., 2019) model on the downstream task. In pair sequence tasks, instead of independently encoding each text, we concatenate both separated by a delimiter and pass it to BERT model. Finally, the embedding corresponding to [CLS] token is fed to a feed-forward network for prediction. For neural machine translation, we use Transformer model proposed by Vaswani et al. (2017) with base configuration." }, { "heading": "4 IS ATTENTION AN EXPLANATION?", "text": "In this section, we attempt to address the question: Is attention an explanation? through a series of experiments which involve analyzing attention weights in a variety of models (§3) on multiple tasks (§2). 
}, { "heading": "4 IS ATTENTION AN EXPLANATION?", "text": "In this section, we attempt to address the question: Is attention an explanation? through a series of experiments which involve analyzing attention weights in a variety of models (§3) on multiple tasks (§2). Following Jain & Wallace (2019), we take the definition of explainability of attention as: inputs with high attention weights are responsible for the model output. Jain & Wallace (2019); Serrano & Smith (2019) have extensively investigated this aspect for a certain class of problems and have shown that attention does not provide an explanation. However, another series of works (Vig & Belinkov, 2019; Clark et al., 2019; Tenney et al., 2019) has shown that attention does encode several linguistic notions. In our work, we claim that the findings of both lines of work are consistent. We note that the observations of the former works can be explained based on the following proposition.

Proposition 4.1. The attention mechanism as defined in Equation 1 as

$$u_i = \tanh(W h_i + b); \quad \alpha_i = \frac{\exp(u_i^T c)}{\sum_j \exp(u_j^T c)}$$

for single sequence tasks can be interpreted as a gating unit in the network.

Proof: The attention weighted averaging computed in Equation 1 for single sequence tasks can be interpreted as the gating proposed by Dauphin et al. (2017), which is defined as

$$h(x) = f(x) \times \sigma(g(x)),$$

where $x \in \mathbb{R}^m$ is the input and $\times$ denotes the product between the transformed input $f(x) \in \mathbb{R}^m$ and its computed gating scores $\sigma(g(x)) \in \mathbb{R}$. Equation 1 can be reduced to the above form by taking $f$ as the identity function, defining $g(x) = c^T \tanh(Wx + b) \in \mathbb{R}$, and replacing $\sigma$ with softmax. We note that the same reduction does not hold in the case of pair sequence and generation tasks, where attention, along with the input, also depends on another text sequence $Y$ and the current hidden state $c_t$, respectively. Thus, the attention mechanism for these tasks takes the form

$$h(x, y) = f(x) \times \sigma(g(x, y)),$$

which does not reduce to the above equation for a gating unit.

Based on the above proposition, we argue that weights learned in single sequence tasks cannot be interpreted as attention, and therefore, they do not reflect the reasoning behind the model's prediction. This justifies the observation that for the single sequence tasks examined in Jain & Wallace (2019); Serrano & Smith (2019), attention weights do not correlate with feature importance measures and permuting them does not change the prediction of the model. In light of this observation, we revisit the explainability of attention weights by asking the following questions." }, { "heading": "4.1 HOW DOES ALTERING ATTENTION WEIGHTS AFFECT MODEL OUTPUT ON TASKS?", "text": "In this section, we compare the performance of the various attention mechanisms described in §3 for the different categories of tasks listed in §2. For each model, we analyze its three variants, defined as follows (a minimal sketch of all three variants is given after this list):

• Uniform denotes the case when all the inputs are given equal weights, i.e., $\alpha_i = 1/T$, $\forall i \in \{1, \dots, T\}$. This is similar to the analysis performed by Wiegreffe & Pinter (2019). However, we consider two scenarios: when the weights are kept fixed both during training and inference (Train+Infer) and only during inference (Infer).

• Random refers to the variant where all the weights are randomly sampled from a uniform distribution, $\alpha_i \sim U(0, 1)$, $\forall i \in \{1, \dots, T\}$; this is followed by normalization. Similar to Uniform, we analyze both Train+Infer and Infer.

• Permute refers to the case when the learned attention weights are randomly permuted during inference, i.e., $\alpha = \text{shuffle}(\alpha)$. Unlike the previous two, here we restrict our analysis to permuting only during inference, as TensorFlow currently does not support backpropagation through the shuffle operation.⁴
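The sketch promised above; all three variants are simple substitutions for the learned weights, assuming `alpha` is a (batch, T) tensor (for Permute we shuffle with one permutation per batch for brevity; per-instance shuffling is a straightforward extension):

```python
import torch

def alter_attention(alpha, mode):
    if mode == "uniform":              # alpha_i = 1/T for every token
        return torch.full_like(alpha, 1.0 / alpha.size(1))
    if mode == "random":               # alpha_i ~ U(0, 1), then normalized
        w = torch.rand_like(alpha)
        return w / w.sum(dim=1, keepdim=True)
    if mode == "permute":              # shuffle the learned weights
        perm = torch.randperm(alpha.size(1), device=alpha.device)
        return alpha[:, perm]
    raise ValueError(f"unknown mode: {mode}")
```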
{ "heading": "4.1 HOW DOES ALTERING ATTENTION WEIGHTS AFFECT MODEL OUTPUT ON TASKS?", "text": "In this section, we compare the performance of the various attention mechanisms described in §3 on the different categories of tasks listed in §2. For each model, we analyze three variants, defined as follows:

• Uniform denotes the case when all the inputs are given equal weights, i.e., α_i = 1/T, ∀i ∈ {1, ..., T}. This is similar to the analysis performed by Wiegreffe & Pinter (2019). However, we consider two scenarios: when the weights are kept fixed both during training and inference (Train+Infer) and only during inference (Infer).

• Random refers to the variant where all the weights are randomly sampled from a uniform distribution, α_i ∼ U(0, 1), ∀i ∈ {1, ..., T}; this is followed by normalization. Similar to Uniform, we analyze both Train+Infer and Infer.

• Permute refers to the case when the learned attention weights are randomly permuted during inference, i.e., α = shuffle(α). Unlike the previous two, here we restrict our analysis to permuting only during inference, as TensorFlow currently does not support backpropagation through the shuffle operation.4

4https://github.com/tensorflow/tensorflow/issues/6269

Effect on single sequence tasks: The evaluation results on the single sequence datasets SST, IMDB, AG News, and YELP are presented in Table 1. We observe that the Train+Infer cases of Uniform and Random attention give around a 0.5 and 0.9 average decrease in accuracy compared to the base model, respectively. However, in the Infer scenario the degradation increases to 3.9 and 4.5 absolute points on average. This is because in the former case the model becomes more robust to handling altered weights. The reduction in performance from Permute is around 4.2 across all datasets and models. The results support the observation of Jain & Wallace (2019); Serrano & Smith (2019) that altering attention in text classification tasks does not have much effect on the model output. The slight decrease in performance can be attributed to corrupting the existing gating mechanism, which has been shown to give some improvement (Oord et al., 2016; Dauphin et al., 2017; Marcheggiani & Titov, 2017).

Effect on pair sequence and generation tasks: The results on pair sequence and generation tasks are summarized in Table 2 and Table 3, respectively. Overall, we find that the degradation in performance from altering attention weights on pair sequence and generation tasks is much more substantial than on single sequence tasks. For instance, in Uniform (Train+Infer), the average relative decrease in performance on single sequence tasks is 0.1%, whereas on pair sequence and generation tasks it is 49.5% and 51.2%, respectively. The results thereby validate our Proposition 4.1 and show that altering attention does affect model output for a task where the attention layer cannot be modeled as a gating unit in the network.

Visualizing the effect of permuting attention weights: To further reinforce our claim, similar to Jain & Wallace (2019), we report the median Total Variation Distance (TVD) between the new and original predictions on permuting the attention weights for each task. The TVD between two predictions ŷ_1 and ŷ_2 is defined as TVD(ŷ_1, ŷ_2) = \frac{1}{2} \sum_{i=1}^{|Y|} |ŷ_{1i} − ŷ_{2i}|, where |Y| denotes the total number of classes in the problem. We use TVD to measure the change in the output distribution on permuting the attention weights. In Figure 2, we report the relationship between the maximum attention value and the median induced change in model output over 100 permutations on all categories of tasks. For the NMT task, we present the change in output at the 25th-percentile sentence length for both datasets. Overall, we find that for single sequence tasks, even with the maximum attention weight in the range [0.75, 1.0], the change in prediction is considerably small (the violin plots are to the left of the figure) compared to the pair sequence and generation tasks (the violin plots are to the right of the figure)." },
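The TVD statistic used above is straightforward to compute. A sketch follows, in which `predict_fn` is a hypothetical hook that returns the model's class distribution when it is forced to use the given attention weights; the helper names are ours.

```python
import numpy as np

def tvd(p, q):
    # Total Variation Distance between two output distributions
    return 0.5 * np.abs(p - q).sum()

def median_tvd_under_permutation(predict_fn, alpha, n_perm=100):
    # Median induced change in output over random permutations of the
    # attention weights, as reported in Figure 2.
    base = predict_fn(alpha)
    tvds = [tvd(base, predict_fn(np.random.permutation(alpha)))
            for _ in range(n_perm)]
    return np.median(tvds)
```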
{ "heading": "4.2 DO ATTENTION WEIGHTS CORRELATE WITH FEATURE IMPORTANCE MEASURES?", "text": "In this section, similar to the analysis of Serrano & Smith (2019), we investigate the importance of attention weights when only one weight is removed. Let i* be the input corresponding to the highest attention weight and let r be any randomly selected input. We denote the original model's prediction as p and the outputs after removing the i* and r inputs as q_{i*} and q_{r}, respectively. Now, to measure the impact of removing i* relative to any randomly chosen input r on the model output, we compute the difference of the Jensen-Shannon (JS) divergences JS(p, q_{i*}) and JS(p, q_{r}), given as ΔJS = JS(p, q_{i*}) − JS(p, q_{r}). The relationship between the difference of the attention weights corresponding to i* and r, i.e., α_{i*} − α_r, and ΔJS for the different tasks is presented in Figure 3. In general, we find that for single sequence tasks, the change in JS divergence is small even in cases where the difference in attention weight is considerable. However, for pair sequence and generation tasks, there is a substantial change in the model output." },
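For reference, ΔJS can be computed as below; the helper name is ours, and note that SciPy's `jensenshannon` returns the JS distance (the square root of the divergence), so it is squared here.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def delta_js(p, q_istar, q_rand):
    # Delta JS = JS(p, q_{i*}) - JS(p, q_{r}) for the single-removal analysis.
    # scipy's jensenshannon gives the JS *distance*, i.e. sqrt(JSD).
    return jensenshannon(p, q_istar) ** 2 - jensenshannon(p, q_rand) ** 2
```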
{ "heading": "4.3 HOW DOES PERMUTING DIFFERENT LAYERS OF SELF-ATTENTION BASED MODELS AFFECT PERFORMANCE?", "text": "In this section, we analyze the importance of attention weights for the performance of the self-attention based models described in §3.4. We report the accuracy on single and pair sequence tasks, and the BLEU score for NMT on the WMT13 dataset, when permuting the attention weights of layers cumulatively. For the Transformer model, we analyze the effect of altering attention weights in the encoder, the decoder, and across encoder-decoder (denoted by Across). The results are presented in Figure 4. Overall, we find that, unlike the pattern observed in §4.1 and §4.2 for single sequence tasks, altering weights in self-attention based models does have a substantial effect on performance. We note that this is because Proposition 4.1 does not hold when computing attention weights over all tokens with respect to a given token. Thus, altering them does have an impact across all three tasks. We also note that in the case of the Transformer model, altering the weights in the first step of the Decoder and in Across has the maximum effect, as it almost stops the flow of information from encoder to decoder." }, { "heading": "4.4 ARE ATTENTION WEIGHTS HUMAN INTERPRETABLE?", "text": "To determine whether attention weights are human interpretable, we manually analyze them on a representative dataset for the single and pair sequence tasks. For each task, we randomly sample 100 examples with original attention weights and 100 with randomly permuted weights. We then shuffle all 200 samples together and present them to annotators, who decide whether the top three highest-weighted words are relevant for the model's prediction.

The overall results are reported in Figure 5. The Cohen's kappa scores of inter-annotator agreement (Cohen, 1960) on IMDB and babI are 0.84 and 0.82, respectively, which indicates near-perfect agreement (Landis & Koch, 1977). We find that in both single and pair sequence tasks, the attention weights in samples with original weights do make sense in general (highlighted with blue color). However, in the former case, the attention mechanism learns to give higher weights to tokens relevant to both kinds of sentiment. For instance, in "This is a great movie. Too bad it is not available on home video.", the tokens great, too, and bad get the highest weights. Such examples demonstrate that the attention mechanism in single sequence tasks works like a gating unit, as shown in §4.1. For permuted samples, in the case of single sequence tasks, the prediction remains correct in the majority of cases although the attention weights are meaningless. For example, in "This movie was terrible . the acting was lame , but it 's hard to tell since the writing was so bad .", the prediction remains the same on changing the attention weights from the underlined to the bold tokens. However, this does not hold for the pair sequence task. This shows that attention weights in single sequence tasks do not provide a reason for the prediction, whereas in the case of pair sequence tasks attention does reflect the reasoning behind the model output." }, { "heading": "5 CONCLUSION", "text": "In this paper, we addressed the seemingly contradictory viewpoints on the explainability of attention weights in NLP. On the one hand, some works have demonstrated that attention weights are not interpretable and that altering them does not affect the model output, while several others have shown that attention captures several linguistic notions in the model. We extend the analysis of prior works to diverse NLP tasks and demonstrate that attention weights are interpretable and correlate with feature importance measures. However, this holds only for cases where attention weights are essential for the model's prediction and cannot simply be reduced to a gating unit. Through a battery of experiments, we validate our claims and reinforce them through manual evaluation." } ]
2019
ATTENTION INTERPRETABILITY ACROSS NLP TASKS
SP:eb76b7126106346a97624a40277bb28c57f0629b
[ "In this paper, authors propose a spectral nonlocal block. First, they re-interpret the nonlocal blocks in a graph view and then use Chebyshev approximation to obtain the spectral nonlocal block which is quite simple by adding a ZW_1 term. Furthermore, they analyze the steady-state to build up a deeper nonlocal structure. Also, the gSNL is simple by adding a (2A-I)ZW_3 term.", "The paper proposes a spectral non-local block, which is a generalized method of the non-local block and non-local stage in the literature. The proposed spectral non-local block can be plugged into a neural network to improve its effectiveness. The paper also provides theoretical analyses of the stability of the proposed method, and also extend the method by including more Chebyshev polynomial terms. Experiments are conducted on image classification and action recognition tasks, and they valid the effectiveness of the proposed method. " ]
The nonlocal network is designed for capturing long-range spatial-temporal dependencies in several computer vision tasks. Although having shown excellent performance, it needs an elaborate preparation for both the number and position of the building blocks. In this paper, we propose a new formulation of the nonlocal block and interpret it from the general graph signal processing perspective, where we view it as a fully-connected graph filter approximated by Chebyshev polynomials. The proposed nonlocal block is more efficient and robust, and is a generalized form of existing nonlocal blocks (e.g. the nonlocal block and the nonlocal stage). Moreover, we give the stable hypothesis and show that the steady-state of the deeper nonlocal structure should satisfy it. Based on the stable hypothesis, a full-order approximation of the nonlocal block is derived for consecutive connections. Experimental results illustrate the clear-cut improvement and practical applicability of the generalized nonlocal block on both image and video classification tasks.
[]
[ { "authors": [ "Liang-Chieh Chen", "George Papandreou", "Iasonas Kokkinos", "Kevin Murphy", "Alan L Yuille" ], "title": "Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI),", "year": 2017 }, { "authors": [ "Yunpeng Chen", "Yannis Kalantidis", "Jianshu Li", "Shuicheng Yan", "Jiashi Feng" ], "title": "Aˆ 2-nets: Double attention networks", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Nieves Crasto", "Philippe Weinzaepfel", "Karteek Alahari", "Cordelia Schmid" ], "title": "Mars: Motionaugmented rgb stream for action recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2016 }, { "authors": [ "Christoph Feichtenhofer", "Haoqi Fan", "Jitendra Malik", "Kaiming He" ], "title": "Slowfast networks for video recognition", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Jiyang Gao", "Ram Nevatia" ], "title": "Revisiting temporal modeling for video-based person reid", "venue": "arXiv preprint arXiv:1805.02104,", "year": 2018 }, { "authors": [ "Kensho Hara", "Hirokatsu Kataoka", "Yutaka Satoh" ], "title": "Can spatiotemporal 3d cnns retrace the history of 2d cnns and imagenet", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2016 }, { "authors": [ "Xiangyu He", "Ke Cheng", "Qiang Chen", "Qinghao Hu", "Peisong Wang", "Jian Cheng" ], "title": "Compact global descriptor for neural networks", "venue": null, "year": 1907 }, { "authors": [ "Zilong Huang", "Xinggang Wang", "Lichao Huang", "Chang Huang", "Yunchao Wei", "Wenyu Liu" ], "title": "Ccnet: Criss-cross attention for semantic segmentation", "venue": "arXiv preprint arXiv:1811.11721,", "year": 2018 }, { "authors": [ "Will Kay", "Joao Carreira", "Karen Simonyan", "Brian Zhang", "Chloe Hillier", "Sudheendra Vijayanarasimhan", "Fabio Viola", "Tim Green", "Trevor Back", "Paul Natsev" ], "title": "The kinetics human action video dataset", "venue": "arXiv preprint arXiv:1705.06950,", "year": 2017 }, { "authors": [ "Alexander Kozlov", "Vadim Andronov", "Yana Gritsenko" ], "title": "Lightweight network architecture for real-time action recognition", "venue": "arXiv preprint arXiv:1905.08711,", "year": 2019 }, { "authors": [ "A. Krizhevsky", "G. Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Computer Science Department, University of Toronto, Tech. 
Rep,", "year": 2009 }, { "authors": [ "Xingyu Liao", "Lingxiao He", "Zhouwang Yang", "Chi Zhang" ], "title": "Video-based person re-identification via 3d convolutional networks and non-local attention", "venue": "In Asian Conference on Computer Vision (ACCV),", "year": 2018 }, { "authors": [ "Wenjie Luo", "Yujia Li", "Raquel Urtasun", "Richard Zemel" ], "title": "Understanding the effective receptive field in deep convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2016 }, { "authors": [ "Chao Peng", "Xiangyu Zhang", "Gang Yu", "Guiming Luo", "Jian Sun" ], "title": "Large kernel matters–improve semantic segmentation by global convolutional network", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "George M Phillips" ], "title": "Interpolation and approximation by polynomials, volume 14", "venue": "Springer Science & Business Media,", "year": 2003 }, { "authors": [ "Zhaofan Qiu", "Ting Yao", "Tao Mei" ], "title": "Learning spatio-temporal representation with pseudo-3d residual networks", "venue": "In proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International Journal of Computer Vision (IJCV),", "year": 2015 }, { "authors": [ "David I Shuman", "Sunil K Narang", "Pascal Frossard", "Antonio Ortega", "Pierre Vandergheynst" ], "title": "The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains", "venue": "IEEE Signal Processing Magazine,", "year": 2013 }, { "authors": [ "Khurram Soomro", "Amir Roshan Zamir", "Mubarak Shah" ], "title": "Ucf101: A dataset of 101 human actions classes from videos in the wild", "venue": "arXiv preprint arXiv:1212.0402,", "year": 2012 }, { "authors": [ "Yunzhe Tao", "Qi Sun", "Qiang Du", "Wei Liu" ], "title": "Nonlocal neural networks, nonlocal diffusion and nonlocal modeling", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Xiaolong Wang", "Ross Girshick", "Abhinav Gupta", "Kaiming He" ], "title": "Non-local neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "P. Welinder", "S. Branson", "T. Mita", "C. Wah", "F. Schroff", "S. Belongie", "P. 
Perona" ], "title": "Caltech-UCSD Birds 200", "venue": "Technical Report CNS-TR-2010-001, California Institute of Technology,", "year": 2010 }, { "authors": [ "Kaiyu Yue", "Ming Sun", "Yuchen Yuan", "Feng Zhou", "Errui Ding", "Fuxin Xu" ], "title": "Compact generalized non-local network", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Ruimao Zhang", "Jingyu Li", "Hongbin Sun", "Yuying Ge", "Ping Luo", "Xiaogang Wang", "Liang Lin" ], "title": "Scan: Self-and-collaborative attention network for video person re-identification", "venue": "IEEE Transactions on Image Processing (TIP),", "year": 2019 }, { "authors": [ "Hengshuang Zhao", "Jianping Shi", "Xiaojuan Qi", "Xiaogang Wang", "Jiaya Jia" ], "title": "Pyramid scene parsing network", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Gao", "Nevatia" ], "title": "2018) that uses the pooling (RTMtp) and attention (RTMta) to fuse the spatial", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Capturing the long-range spatial-temporal dependencies is crucial for the Deep Convolutional Neural Networks (CNNs) to extract discriminate features in vision tasks such as image and video classification. However, the traditional convolution operator only focuses on processing local neighborhood at a time. This makes the CNNs need to go deeper with convolutional operations to enlarge the receptive fields, which lead to higher computation and memory. Moreover, going deeper cannot always increase the effective receptive fields due to the Gaussian distribution of the kernel weight (Luo et al. (2016)). To eliminate this limitation, some recent works focus on designing the network architecture with wider and well-designed modules to catch the long-range dependencies such as (Peng et al. (2017), Chen et al. (2017), Zhao et al. (2017)). Although having larger receptive fields, these modules still need to be applied recursively to catch the dependencies of the pairs in large distances.\nInspired by the classical non-local means method in image denoising, Wang et al. (2018) proposes the nonlocal neural network which uses the nonlocal (NL) block to concern the “full-range” dependencies in only one module by exploring the correlations between each position and all other positions. In the NL block, the affinity matrix is first computed to represent the correlations between each position pair. Then the weight means of features are calculated based on the affinity matrix to refine the feature representation. Finally, the residual connection is added to the refined feature map. Due to its simplicity and effectiveness, the nonlocal block has been widely used in image and video classification (Wang et al. (2018); Yue et al. (2018); Tao et al. (2018); Chen et al. (2018)), image segmentation (Huang et al. (2018); Yue et al. (2018); Wang et al. (2018)) and person re-identification (Liao et al. (2018); Zhang et al. (2019)) recently.\nHowever, due to the complexity of the affinity matrix, the nonlocal block 1 needs much more computational effort and is sensitive to its number and position in the neural network (Tao et al. (2018)). Some works solve the first problem by simplifying the calculation of the affinity matrix such as Huang et al. (2018), He et al. (2019), Yue et al. (2018), Chen et al. (2018). Only a few works try to solve the second problem which limits the robustness of the nonlocal network 2. Tao et al. (2018)\n1The nonlocal block is composed of a nonlocal operator and a residual connection 2The nonlocal network is composed of several nonlocal blocks\nproposes the nonlocal stage (NS) block which concerns the diffusion nature and maintains the same affinity matrix for all the nonlocal units in the NS block. Comparing with the NL block, the NS block is insensitive to the numbers and allows deeper nonlocal structure. However, the deeper nonlocal structure of NS block increases the complexity and do not have a remarkable improvement.\nIn this work, we focus on elaborating a robust nonlocal block which is more flexible when using in the neural network. We prove that the nonlocal operator in the nonlocal block is equivalent to the Chebyshev-approximated fully-connected graph filter with irrational constraints that limits its liberty for learning. To remove these irrational constraints, we propose the Spectral-based Nonlocal (SNL) block which is more robust and can degrade into the NL and NS with specific assumptions. 
We also prove that the deeper nonlocal structure satisfies the stable hypothesis with the help of steady-state analysis. Based on this hypothesis, we give the full-order approximated spectral nonlocal (gSNL) block, which performs well in deeper nonlocal structures. Finally, we add our proposed nonlocal blocks into deep networks and evaluate them on image and video classification tasks. Experiments show that networks with our proposed blocks are more robust and achieve higher accuracy than those using other types of nonlocal blocks. To summarize, our contributions are threefold:

• We propose a spectral nonlocal (SNL) block as an efficient, simple, and generic component for capturing long-range spatial-temporal dependencies with deep neural networks, which is a generalization of the classical nonlocal blocks.

• We propose the stable hypothesis, which enables a deeper nonlocal structure without an elaborate preparation for both the number and position of the building blocks. We further extend SNL into the generalized SNL (gSNL), which enables multiple nonlocal blocks to be plugged into existing computer vision architectures with stable learning dynamics.

• Both SNL and gSNL outperform other nonlocal blocks across both image and video classification tasks with a clear-cut improvement." }, { "heading": "2 PRELIMINARY", "text": "Nonlocal block The NL block consists of an NL operator with a residual connection and is expressed as:

Y = X + F(A, Z) \quad \text{with} \quad Z = X W_g, (1)

where X ∈ R^{N×C_1} is the input feature map, F(A, Z) is the NL operator, and Z ∈ R^{N×C_s} is the transferred feature map that compresses the channels of X by a linear transformation with kernel W_g ∈ R^{C_1×C_s}. Here N is the number of positions. The affinity matrix A ∈ R^{N×N} is composed of the pairwise correlations between pixels.

In the NL block, the NL operator explores the "full-range" dependencies by concerning the relationships between all position pairs:

F(A, Z) = A Z W \quad \text{with} \quad A = (a_{ij})_{N×N}, \; a_{ij} = f(X_{i,:}, X_{j,:}), (2)

where W ∈ R^{C_s×C_1} is the weight matrix of a linear transformation and f(·) is the affinity kernel, which can adopt the "Dot Product", "Traditional Gaussian", "Embedded Gaussian" or another kernel matrix with a finite Frobenius norm.

Nonlocal stage To make the NL operator follow the diffusion nature that allows a deeper nonlocal structure (Tao et al. (2018)), the nonlocal stage (NS) operator uses the graph Laplacian L = D_A − A to replace the affinity matrix A in the NL operator:

F̄(A, Z) = (A − D_A) Z W \quad \text{with} \quad D_A = \mathrm{diag}(d_i), (3)

where F̄(A, Z) is the NS operator and d_i = \sum_j a_{ij} is the degree of node i. Moreover, when adding multiple blocks with the same affinity matrix A and replacing the NL operator by the NS operator, these consecutively-connected blocks become the NS block. We call the nonlocal blocks inside the NS block the NS units." },
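To fix notation before the derivations, here is a minimal sketch of the NL and NS operators of Eqs. (1)-(3); PyTorch is assumed, and the function names are ours.

```python
import torch

def nl_operator(A, Z, W):
    # Eq. (2): F(A, Z) = A Z W, a weighted mean over all N positions
    return A @ Z @ W

def ns_operator(A, Z, W):
    # Eq. (3): replace A with -(L) = A - D_A, the negated graph Laplacian
    D = torch.diag(A.sum(dim=1))      # D_A = diag(d_i), d_i = sum_j a_ij
    return (A - D) @ Z @ W

# NL block, Eq. (1): Y = X + F(A, X W_g)
# Y = X + nl_operator(A, X @ Wg, W)
```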
{ "heading": "3 METHOD", "text": "The nonlocal operator can be divided into two steps: calculating the affinity matrix A to represent the correlations between each pair of positions, and refining the feature map by calculating the weighted means based on A. In this section, a fully-connected graph filter is utilized to explain the nonlocal operator. With the Chebyshev approximation, we propose the SNL operator, which is proved to be a generalized form of the NL and NS operators and is more robust with higher performance in computer vision tasks. Furthermore, based on the stable hypothesis that a deeper nonlocal structure tends to learn a stable affinity matrix, we extend our SNL operator into a full-order Chebyshev approximation version, i.e. the gSNL." }, { "heading": "3.1 THE PROPOSED SPECTRAL NONLOCAL OPERATOR", "text": "Nonlocal operator in the graph view The nonlocal operator F(A, Z) is a filter that computes a weighted mean of all the positions in the feature map Z based on the affinity matrix A and then conducts a feature transformation with the kernel W. This is the same as filtering the signal Z by a graph filter Ω in the graph domain defined by the affinity matrix A (Shuman et al. (2013)). Based on this perspective (Shuman et al. (2013)), we further define the nonlocal operator as:

Theorem 1. Given an affinity matrix A ∈ R^{N×N} and the signal Z ∈ R^{N×C_s}, the nonlocal operator is the same as filtering the signal Z in the graph domain of a fully-connected weighted graph G:

F(A, Z) = Z ∗ g = U g_θ(Λ) U^T Z = U Ω U^T Z \quad \text{with} \quad L = D_A − A = U Λ U^T, (4)

where the graph filter Ω ∈ R^{N×N} is a diagonal parameter matrix, i.e. Ω = \mathrm{diag}(ω), ω = (ω_1, ω_2, ..., ω_N). G = (V, A) is a fully-connected graph with vertex set V and affinity matrix A. Λ = \mathrm{diag}({λ_1, λ_2, ..., λ_i, ..., λ_N}) and U = {u_1, u_2, ..., u_i, ..., u_N} are the eigenvalues and eigenvectors of the graph Laplacian L, respectively.

This definition requires that the graph Laplacian L have non-singular eigenvalues and eigenvectors, so the affinity matrix A should be a symmetric, non-negative, row-normalized matrix. To meet this requirement, the affinity matrix A can be obtained by the following steps. First, the affinity kernel is used to calculate the matrix A (we use the dot product with embedded weight matrices W_φ ∈ R^{C_1×C_s} and W_ϕ ∈ R^{C_1×C_s} as the affinity kernel, i.e. A = (X W_φ)(X W_ϕ)^T). Then we make the matrix A symmetric: Ā = (A^T + A)/2. Finally, we normalize the rows of Ā so that d_i = 1, obtaining Ǎ = D_Ā^{−1} Ā. For simplicity, in the following sections the symmetric, non-negative, row-normalized matrix Ǎ is denoted as A.

The proposed spectral nonlocal operator The graph filter Ω in Eq. (4) contains N parameters. To simplify it, we use Chebyshev polynomials, which reduce the N parameters to k (k ≪ N). For simplicity, we first assume that the input Z and the output F(A, Z) have only one channel.

Following a similar method to Defferrard et al. (2016), K-th order Chebyshev polynomials are used to approximate the graph filter function g_θ(Λ):

F(A, Z) = \sum_{k=0}^{K−1} θ_k T_k(L') Z \quad \text{with} \quad L' = 2L/λ_{max} − I_N, \quad \text{s.t.} \quad T_0(L') = I_N, \; T_1(L') = L', \; T_k(L') = 2 L' T_{k−1}(L') − T_{k−2}(L'). (5)

Since L is a random walk Laplacian, the maximum eigenvalue λ_{max} satisfies λ_{max} = 2, which makes L' = A (Shuman et al. (2013)). Then Eq. (5) becomes:

F(A, Z) = \sum_{k=0}^{K−1} θ_k T_k(A) Z = θ_0 Z + θ_1 A Z + \sum_{k=2}^{K−1} θ_k T_k(A) Z, (6)

If k = 1, the first-order Chebyshev approximation of Eq. (6) becomes:

F(A, Z) = θ_0 Z + θ_1 A Z, (7)

where θ_0 and θ_1 are the coefficients of the first and second terms, which are approximated by learning with SGD. Then, extending Eq. (7) to the multi-channel condition, we obtain the formulation of our SNL operator:

F_s(A, Z) = Z W_1 + A Z W_2, (8)

where F_s(A, Z) is the SNL operator and W_1, W_2 ∈ R^{C_s×C_1}. Finally, a residual connection is added to the SNL operator to form the SNL block:

Y = X + F_s(A, Z) = X + Z W_1 + A Z W_2. (9)
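A minimal sketch of the SNL block of Eq. (9) for a single flattened feature map follows; PyTorch is assumed, the layer names are ours, and the clamp used to keep the affinity non-negative is our own simplification of the construction described above rather than the paper's exact recipe.

```python
import torch
import torch.nn as nn

class SNLBlock(nn.Module):
    # Sketch of Eq. (9) for a flattened feature map X of shape (N, C1), N = H*W.
    def __init__(self, c1, cs):
        super().__init__()
        self.phi = nn.Linear(c1, cs, bias=False)   # W_phi
        self.psi = nn.Linear(c1, cs, bias=False)   # W_varphi
        self.g = nn.Linear(c1, cs, bias=False)     # W_g
        self.w1 = nn.Linear(cs, c1, bias=False)    # W_1
        self.w2 = nn.Linear(cs, c1, bias=False)    # W_2

    def forward(self, x):
        a = self.phi(x) @ self.psi(x).T                      # dot-product affinity, (N, N)
        a = 0.5 * (a + a.T)                                  # symmetrize: (A^T + A) / 2
        a = a.clamp(min=0)                                   # keep entries non-negative (our assumption)
        a = a / a.sum(dim=1, keepdim=True).clamp(min=1e-6)   # row-normalize so d_i = 1
        z = self.g(x)
        return x + self.w1(z) + self.w2(a @ z)               # Y = X + Z W1 + A Z W2
```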
Relation with other nonlocal operators As shown in Fig. 1, our SNL operator can degrade into the NL operator by setting W_1 = 0, i.e. θ_0 = 0. However, its analytic solution, θ_0 = \frac{2}{N} \sum_{j=0}^{N} ω_j, controls the total filtering intensity, which cannot be guaranteed to be 0. This setting limits the search space when training the network and reduces the robustness of the NL block. The NL operator cannot magnify features over a large range and damps some discriminative features such as the beak of the waterfowl. Our SNL operator can also degrade into the NS operator by setting W_1 = −W_2, i.e. θ_1 + θ_0 = 0. However, the analytic solution of this equation is θ_1 + θ_0 = \frac{2}{N} \sum_{j=0}^{N} ω_j(λ_j + 1) = 0. When setting it to zero, the filtering strength of high-frequency signals (with high λ), such as small parts or twigs, is suppressed. Thus, it still cannot magnify discriminative parts such as the beak of the waterfowl, as shown in Fig. 1. Compared with NL and NS, our SNL does not have these irrational constraints and gives these two parameters a liberal learning space. Thus, θ_0 can control the preservation strength of the discriminative features, while θ_1 can pay more attention to the low-frequency signal to diminish the noise." }, { "heading": "3.2 THE PROPOSED GENERALIZED SPECTRAL NONLOCAL OPERATOR", "text": "To fully exploit the "full-range" dependencies, the nonlocal block should have the ability to be consecutively stacked into the network to form a deeper nonlocal structure. However, some types of nonlocal blocks, such as the NL and CGNL blocks, cannot achieve this purpose (Tao et al. (2018)). To show the robustness of our SNL block when used in a deeper nonlocal structure, we first study the steady-state of the deeper nonlocal structure formed by consecutively adding our SNL block. We also prove the stable hypothesis that the deeper nonlocal structure tends to learn a stable affinity. Based on this hypothesis, we extend our SNL block into a full-order Chebyshev approximation, i.e. the gSNL block, which is more applicable for deeper nonlocal structures.

The stable hypothesis Steady-state analysis can be used to analyze the stable dynamics of the nonlocal block. Here we give the steady-state analysis of our SNL block when it is consecutively added into the network structure, and obtain the Stable Hypothesis:

Lemma 1. The Stable Hypothesis: when adding more than two consecutively-connected SNL blocks with the same affinity matrix A into the network structure, these SNL blocks are stable when the affinity matrix A satisfies A^k = A.

Proof. The stability holds when the weight parameters in W_1, W_2 and W are small enough that the CFL condition is satisfied (Tao et al. (2018)), so we ignore them for simplicity. The discrete nonlinear operator of our SNL has a similar formulation to the NS operator:

L_h Z^N := −L Z^N,

where h is the discretization parameter and Z^N is the input of the N-th block in the deeper nonlocal structure, with Z^0 = X. The stability assumption demands that Z^{N+1} = Z^N, so the steady-state equation of the last SNL block can be written as:

Z^{N+1} − Z^N = L_h Z^N = −L Z^N = 0.

The deeper nonlocal structure has more than one SNL block, so Z^{N−1} and L_h Z^{N−1} can be used to express Z^N:

−L Z^N = −(I − A) Z^N = −(I − A)(Z^{N−1} + L_h Z^{N−1}) = −(I − A) Z^{N−1} + (I − A)(I − A) Z^{N−1} = 0.

Finally, the steady-state equation becomes:

(I − A) Z^{N−1} = (I − A)^2 Z^{N−1} \iff A^2 = A.

This equation naturally extends to the k-hop affinity matrix A^k, i.e. A^k = A.
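As a quick numerical check of this hypothesis, one can compare the element distributions of A, A^2, A^3, A^4 — this is what the verification experiment described next (and Figure 2) reports. A sketch, assuming a non-negative row-normalized NumPy affinity matrix:

```python
import numpy as np

def khop_affinity_stats(A, max_hop=4, bins=10):
    # Histogram the elements of A, A^2, ..., A^{max_hop}; under the stable
    # hypothesis A^k = A, the histogram rows should nearly coincide.
    hists = []
    Ak = A.copy()
    for _ in range(max_hop):
        hists.append(np.histogram(Ak, bins=bins, range=(0, 1))[0])
        Ak = Ak @ A
    return np.stack(hists)   # one histogram row per hop
```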
To verify the stable hypothesis, we add five consecutively-connected SNL blocks (and NS blocks) into PreResNet-56 (He et al. (2016)) and train this model on the train set of the CIFAR-100 dataset with an initial learning rate of 0.1, which is subsequently divided by 10 at 150 and 250 epochs (300 epochs in total). A weight decay of 1e−4 and momentum of 0.9 are also used. We then test the trained model on the test set and output the affinity matrix of each image. Figure 2 shows the statistics reflecting the strength of the affinity matrix and of the 2-hop, 3-hop, and 4-hop affinity matrices: A, A^2, A^3, A^4. We can see that the numbers of elements in each histogram bin are nearly the same. This means that A, A^2, A^3, A^4 have similar distributions of elements across the k-hop affinity matrices, which empirically verifies the steady-state equation A^k = A.

Full-order spectral nonlocal operator With the stable hypothesis, the Chebyshev polynomials can be simplified into a piecewise function (details in Appendix B). Taking this piecewise function into Eq. (6), we can get the full-order approximation of the SNL operator:

F^*_s(A, Z) = \sum_k θ_k T_k(A) Z = Z θ̃_1 + A Z θ̃_2 + (2A − I) Z θ̃_3, (10)

where θ̃_1 = \sum_{k\%4=0} θ_k, θ̃_2 = \sum_{k\%4=1 \,||\, k\%4=3} θ_k, θ̃_3 = \sum_{k\%4=2} θ_k, whose upper bounds are less than 1. Then, extending it to multi-channel input and output with the residual connection, we get our gSNL block:

Y = X + F^*_s(A, Z) = X + Z W_1 + A Z W_2 + (2A − I) Z W_3 (11)

The gSNL block performs well when the stable affinity hypothesis is satisfied, i.e. when adding more than two nonlocal blocks with the same affinity matrix, as shown in Table 4." }, { "heading": "3.3 IMPLEMENTATION DETAILS", "text": "The implementation details of the gSNL block are shown in Fig. 3. The input feature map X ∈ R^{W×H×C_1} is first fed into three 1x1 convolutions with the weight kernels W_φ ∈ R^{C_1×C_s}, W_ϕ ∈ R^{C_1×C_s}, W_g ∈ R^{C_1×C_s} to reduce the number of channels. One of the outputs, Z ∈ R^{W×H×C_s}, is used as the transferred feature map to reduce the computational complexity, while the other two outputs, Φ ∈ R^{W×H×C_s} and Ψ ∈ R^{W×H×C_s}, are used to compute the affinity matrix A. The number of sub-channels C_s is usually at least two times smaller than the number of input channels C_1. The affinity matrix is calculated by the affinity kernel function f(·) and then made non-negative, symmetric, and row-normalized using the operations in Sec. 3.1. Finally, with the affinity matrix A and the transferred feature map Z, the output of the nonlocal block can be obtained by Eq. (11). Specifically, the three weight matrices W_1, W_2, W_3 ∈ R^{C_s×C_1} are implemented as three 1x1 convolutions." },
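A batched sketch of the gSNL block of Eq. (11), using the 1x1 convolutions of Sec. 3.3; as in the earlier SNL sketch, the non-negativity clamp and the layer names are our own assumptions rather than the paper's exact recipe.

```python
import torch
import torch.nn as nn

class GSNLBlock(nn.Module):
    # Sketch of the gSNL block of Eq. (11) for a (B, C1, H, W) feature map.
    def __init__(self, c1, cs=None):
        super().__init__()
        cs = cs or c1 // 2
        self.phi = nn.Conv2d(c1, cs, 1, bias=False)   # W_phi
        self.psi = nn.Conv2d(c1, cs, 1, bias=False)   # W_varphi
        self.g = nn.Conv2d(c1, cs, 1, bias=False)     # W_g
        self.w1 = nn.Conv2d(cs, c1, 1, bias=False)    # W_1
        self.w2 = nn.Conv2d(cs, c1, 1, bias=False)    # W_2
        self.w3 = nn.Conv2d(cs, c1, 1, bias=False)    # W_3

    def forward(self, x):
        b, c, h, w = x.shape
        n = h * w
        phi = self.phi(x).view(b, -1, n)                    # (B, Cs, N)
        psi = self.psi(x).view(b, -1, n)                    # (B, Cs, N)
        z = self.g(x).view(b, -1, n)                        # (B, Cs, N)
        a = torch.bmm(phi.transpose(1, 2), psi)             # (B, N, N) dot-product affinity
        a = 0.5 * (a + a.transpose(1, 2))                   # symmetrize
        a = a.clamp(min=0)                                  # non-negativity (our assumption)
        a = a / a.sum(dim=2, keepdim=True).clamp(min=1e-6)  # row-normalize
        az = torch.bmm(z, a)                                # A Z per channel; A is symmetric
        z_map, az_map = z.view(b, -1, h, w), az.view(b, -1, h, w)
        # Y = X + Z W1 + A Z W2 + (2A - I) Z W3
        return x + self.w1(z_map) + self.w2(az_map) + self.w3(2 * az_map - z_map)
```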
{ "heading": "4 EXPERIMENT", "text": "" }, { "heading": "4.1 SETTING", "text": "Datasets Our proposed SNL and gSNL blocks are evaluated across several computer vision tasks, including image classification and video-based action recognition. For image classification, both the CIFAR-10 and CIFAR-100 datasets (Krizhevsky & Hinton (2009)) are tested. The CIFAR-10 dataset contains 60,000 images of 10 classes, and the CIFAR-100 dataset contains 60,000 images of 100 classes. For these two datasets, we use 50,000 images as the train set and 10,000 images as the test set. We also run experiments on fine-grained classification on the Birds-200-2011 (CUB-200) dataset (Welinder et al. (2010)), which contains 11,788 images of 200 bird categories. For action recognition, the experiments are conducted on the UCF-101 dataset (Soomro et al. (2012)), which contains 101 different actions.

Backbones For image classification, ResNet-50 and the PreResNet variants (including both PreResNet-20 and PreResNet-56) are used as the backbone networks. For the video classification task, we follow the I3D structure (Hara et al. (2018)), which uses k × k × k kernels to replace the convolution operator in the residual block.

Setting for the network In the main experiments, we set C_s = C_1/2. Without loss of generality, we use the "Dot Product" as the affinity kernel in the experiments. We add one SNL (or gSNL) block into these backbone networks to construct the SNL (or gSNL) network. For ResNet and I3D (Hara et al. (2018)), following Wang et al. (2018), we add the SNL block right before the last residual block of res4. For the PreResNet series, we add the SNL block right after the second residual block in res1. For the other nonlocal-based blocks, including NL (Wang et al. (2018)), NS (Tao et al. (2018)), the Compact Generalized Nonlocal block (CGNL) (Yue et al. (2018)) and the Double Attention block (A2), the settings are all the same as ours. The differences between these blocks are shown in Table 1, in which the Approximated Condition column shows the strategy for the Chebyshev approximation and the Channel-wise column reflects whether channel relations are considered.

Setting for the training For image classification on the CIFAR-10 and CIFAR-100 datasets, we train the models end-to-end without using a pretrained model. An initial learning rate of 0.1 is used for these two datasets, with weight decay 1e−4 and momentum 0.9. The learning rate is divided by 10 at 150 and 250 epochs. The models are trained for 300 epochs in total.

For fine-grained classification on the CUB-200 dataset, we use models pretrained on ImageNet (Russakovsky et al. (2015)) to initialize the weights. We train the models for 200 epochs in total with an initial learning rate of 0.1, which is subsequently divided by 10 at 31, 61, and 81 epochs. The weight decay and momentum are the same as in the CIFAR-10 and CIFAR-100 setting.

For video classification on the UCF-101 dataset, the weights are initialized by the I3D model pretrained on the Kinetics dataset (Kay et al. (2017)). We train the models with an initial learning rate of 0.1, which is subsequently divided by 10 every 40 epochs. Training stops at 100 epochs. The weight decay and momentum are the same as in the CIFAR-10 and CIFAR-100 setting." },
{ "heading": "4.2 ABLATION EXPERIMENT", "text": "The number of channels in the transferred feature space The nonlocal-based block first reduces the channels of the original feature map, C_1, into the transferred feature space, C_s, by a 1×1 convolution to reduce the computational complexity. When C_s is too large, the feature map contains redundant information, which introduces noise when calculating the affinity matrix A. However, if C_s is too small, it is hard to reconstruct the output feature map due to inadequate features. To test the robustness with respect to C_s, we generate three types of models with different numbers of transferred channels: "Sub 1" (C_s = C_1), "Sub 2" (C_s = C_1/2), and "Sub 4" (C_s = C_1/4), as shown in Table 2. The other parameters of the models and the training steps are the same as the setting in Sec. 4.1. Table 2 shows the experimental results of the three types of models with the different nonlocal blocks. Our SNL and gSNL blocks outperform the other models, profiting from their flexibility for learning. Moreover, from Table 2, we can see that the performance of CGNL drops steeply when the number of transferred channels increases. This is because the CGNL block concerns the relationships between channels; when the number of sub-channels increases, the relationships between the redundant channels seriously interfere with its effect. Overall, our proposed nonlocal block is the most robust for a large number of transferred channels (our model rises 1.1% in Top-1 accuracy, while the best of the others rises only 0.4% compared to the baseline).

The stage for adding the nonlocal blocks The nonlocal-based blocks can be added into different stages of the PreResNet (or ResNet) to form the nonlocal net. In Tao et al. (2018), the nonlocal-based blocks are added into the early stage of the PreResNet to catch the long-range correlations. Here we test the performance of adding the different types of nonlocal blocks into the three stages (the first, second, and third stage of the PreResNet) and train the models on the CIFAR-100 dataset with the same setting discussed in Sec. 4.1. The experimental results are shown in Table 3. We can see that the performance of the NL block is lower than the backbone's when it is added into the early stage. However, our proposed SNL block achieves a 0.81% improvement over the backbone when added into any of the three stages, which is much higher than for the other types of nonlocal blocks (only 0.42% in the best case).

To intuitively show the stability and robustness of our SNL, we give the spectrum analysis of the estimated weight matrices (Tao et al. (2018)). We extract the self-attention weight matrices W_g, W of the NL block and the NS block, and W_g, W_2 of our proposed SNL block. The dimensions of the weight matrices satisfy W_g ∈ R^{C_1×C_s}, W ∈ R^{C_s×C_1}, W_2 ∈ R^{C_s×C_1}. To make all the eigenvalues real, we let W̃ = ((W_g W) + (W_g W)^T)/2, and do the same for W_2. Figure 5 shows the top thirty-two eigenvalues of the weight matrix W̃ for the models in Table 3. We can see that the density of negative eigenvalues is higher than that of positive eigenvalues for the NL block when it is added into any of the three stages. This phenomenon makes the NL operator F(A, Z) in Eq. (1) less than zero, so the output feature map is smaller than the input feature map, i.e. Y < X (more details of this phenomenon can be found in Tao et al. (2018)). The NS block can avoid "the damping effect" to some extent by concerning the diffusion nature. However, when it is added into the early stage, only six eigenvalues of the nonlocal stage are not equal to zero. This phenomenon means the nonlocal stage cannot effectively magnify the discriminative features. Compared with these two models, our proposed SNL block has more positive eigenvalues, which take effect to enhance the discriminative features while also avoiding the "damping effect".
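The spectrum analysis above amounts to symmetrizing the composed weight matrix and reading off its eigenvalues. A NumPy sketch (selecting the "top" eigenvalues by magnitude is our assumption, as the paper does not specify the ordering):

```python
import numpy as np

def top_eigenvalues(Wg, W, k=32):
    # W_tilde = ((Wg W) + (Wg W)^T) / 2; the symmetrization makes the
    # spectrum real, so eigvalsh applies.
    M = Wg @ W                       # composed self-attention weight, (C1, C1)
    W_tilde = 0.5 * (M + M.T)
    eig = np.linalg.eigvalsh(W_tilde)
    return eig[np.argsort(-np.abs(eig))[:k]]   # k largest in magnitude
```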
The number of the nonlocal blocks We test the robustness of adding multiple nonlocal blocks into the backbone network, which forms the three types of networks "Different Position 3 (DP 3)", "Same Position 3 (SP 3)", and "Same Position 5 (SP 5)", as shown in Table 4. The results are shown in Table 4. For the model "DP 3", three blocks are added into stage 1, stage 2, and stage 3 (right after the second residual block). We can see that adding three of our proposed nonlocal operators into different stages of the backbone generates a larger improvement than the NS operator and NL operator (a 2.4% improvement). This is because when NS and NL are added into the early stage, these two models cannot aggregate the low-level features well and interfere with the following blocks. For the models "SP 3" ("SP 5"), we add three (five) consecutively-connected nonlocal blocks into stage 1. Note that, different from the experiments in Tao et al. (2018) and Wang et al. (2018), these consecutively-connected nonlocal blocks have the same affinity matrix. From Table 4, we can see that, profiting from the stable hypothesis discussed in Sec. 3.2, our gSNL outperforms all other models when adding consecutively-connected nonlocal blocks (rising on average 0.72% over the backbone, 0.41% higher than the best of the other types of nonlocal blocks) and has a relatively stable performance. However, one drawback is that our gSNL may interfere with the learning when adding only one nonlocal block (the stable hypothesis is not satisfied)." }, { "heading": "4.3 MAIN RESULTS", "text": "We test the networks with the Nonlocal block (NL), the Nonlocal Stage (NS), the Compact Generalized Nonlocal block (CGNL), the Double Attention block (A2) and our SNL (gSNL) blocks on different visual learning tasks. The experiment settings are discussed in Sec. 4.1. Our models outperform the other types of nonlocal blocks across several standard benchmarks. Table 5 shows the experimental results on the CIFAR-10 dataset; we can see that adding one proposed block raises Top-1 accuracy by about 0.65%, which is higher than adding the other types of nonlocal blocks (0.3%). As the experiments on the CIFAR-100 dataset in Table 7 show, using our proposed block brings an improvement of about 1.8% with ResNet-50. With the simpler backbone PreResNet-56, our model can still generate a 1.1% improvement, as shown in Table 6.

Table 9 shows the experimental results for the fine-grained image classification task on the CUB-200 dataset. Our model outperforms the other non-channel-concerning blocks and generates a 0.42% improvement. Compared with the channel-wise concerning CGNL block, our model is only slightly lower in Top-1 accuracy. Fig. 4 also shows visualized feature maps, which are formed by adding the upsampled feature output to the source image. We can see that the feature maps of our proposed block can cover more of the critical areas of the birds. For example, both the left and right wings (red square) of the birds can be focused on, profiting from the better long-range concerning of our SNL. Moreover, benefiting from the flexibility of W_1, our proposed SNL can also catch a relatively large range of the discriminative parts. Table 8 shows the experimental results on the action recognition task. The network with our proposed block generates a 1.8% improvement over the I3D model and outperforms all other nonlocal models on the UCF-101 dataset.

Table 8: The Results on UCF-101

model | top1 | top5
I3D | 81.57% | 95.40%
+ NL | 81.37% | 95.76%
+ NS | 82.50% | 95.84%
+ A2 | 82.68% | 95.85%
+ CGNL | 83.16% | 96.16%
+ *SNL | 82.30% | 95.56%
+ *gSNL | 83.21% | 96.53%

Table 9: The Results on CUB

model | top1 | top5
R-50 | 85.43% | 96.70%
+ NL | 85.34% | 96.77%
+ NS | 85.54% | 96.56%
+ A2 | 86.02% | 96.56%
+ CGNL | 86.14% | 96.34%
+ *SNL | 85.91% | 96.65%
+ *gSNL | 85.95% | 96.79%" },
{ "heading": "5 CONCLUSION", "text": "In this paper, we explain the nonlocal block from the graph view and propose the spectral nonlocal (SNL) block, which is more robust and well-behaved. Our SNL block is a generalized version of the NL and NS blocks and has more liberty for parameter learning. We also give the stable hypothesis for deeper nonlocal structures and extend the SNL to the gSNL, which can be applied to deeper nonlocal structures. The experiments on multiple computer vision tasks show the high robustness and performance of our proposed nonlocal block. Future work will focus on using the SNL block in different vision tasks and on its robustness in other types of neural networks, such as Generative Adversarial Networks (GANs)." }, { "heading": "A ANALYTIC SOLUTION OF THE CHEBYSHEV APPROXIMATION", "text": "Here we give the analytic solution for the coefficients in the Chebyshev polynomials (Phillips (2003)):

Theorem 2. Given a function f(x), x = {x_1, x_2, ..., x_N}, it can be optimally approximated by Chebyshev polynomials, f(x) ≈ \sum_{k=0}^{K−1} a_k T_k(x), only when a_k satisfies a_k = \frac{2}{N} \sum_{j=0}^{N} f(x_j) T_k(x_j). We call a_k the analytic solution of the Chebyshev coefficients.

Based on this theorem, we can get the analytic solution of the parameters θ in Eq. (7):

Lemma 2. The spectral nonlocal operator can be best approximated when the function g(λ) = ω can be best approximated by the Chebyshev polynomials, i.e. the analytic solutions of the Chebyshev coefficients satisfy:

θ_k = a_k = \frac{2}{N} \sum_{j=0}^{N} g(λ_j) T_k(λ_j) = \frac{2}{N} \sum_{j=0}^{N} ω_j T_k(λ_j) (12)" }, { "heading": "B THE PIECEWISE CHEBYSHEV POLYNOMIALS", "text": "Taking A^k = A into the Chebyshev polynomials of the affinity matrix A, the Chebyshev polynomials become:

T_0(A) = I
T_1(A) = A
T_2(A) = 2A T_1(A) − T_0(A) = 2AA − I = 2A − I
T_3(A) = 2A T_2(A) − T_1(A) = 2A(2A − I) − A = A
T_4(A) = 2A T_3(A) − T_2(A) = 2AA − 2A + I = I = T_0(A)
T_5(A) = 2A T_4(A) − T_3(A) = 2AI − A = A = T_1(A)
T_6(A) = 2A T_5(A) − T_4(A) = 2AA − I = 2A − I = T_2(A) (13)

This cyclic form of the Chebyshev polynomials T_k(A) can be reformulated as a piecewise function:

T_k(A) = \begin{cases} I & k\%4 = 0 \\ A & k\%4 = 1 \,||\, k\%4 = 3 \\ 2A − I & k\%4 = 2 \end{cases} (14)" },
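The cycle in Eqs. (13)-(14) is easy to reproduce numerically; a NumPy sketch, in which the idempotency A @ A = A of the stable hypothesis is assumed:

```python
import numpy as np

def chebyshev_terms(A, K):
    # T_k(A) via the recurrence of Eq. (5); with A @ A = A the terms cycle
    # through {I, A, 2A - I, A, I, ...} as in the piecewise form of Eq. (14).
    I = np.eye(A.shape[0])
    T = [I, A.copy()]
    for _ in range(2, K):
        T.append(2 * A @ T[-1] - T[-2])
    return T

# e.g., for an idempotent A: np.allclose(chebyshev_terms(A, 5)[4], np.eye(len(A)))
```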
{ "heading": "C EXPERIMENT OF SEMANTIC SEGMENTATION ON VOC2012 DATASET", "text": "For the semantic segmentation task, we conduct an experiment on the VOC2012 dataset with the model proposed by Chen et al. (2017). We add the different types of nonlocal blocks right before the last residual block in res4 of ResNet-50. The models are trained for 50 epochs with the SGD optimization algorithm. The learning rate is set to 0.007 with weight decay 5e−4 and momentum 0.9. Experimental results show that the model with our proposed block achieves the best results." }, { "heading": "D THE EXAMPLE OF THE AFFINITY MATRIX ON CUB DATASETS", "text": "Experiments to verify the stable hypothesis are also conducted on the CUB dataset: we add three consecutively-connected SNL blocks (and NS blocks) into ResNet-50 (right before the last residual block of res4) and train this model on the train set of the CUB dataset with an initial learning rate of 0.1, which is subsequently divided by 10 at 31, 61 and 81 epochs (200 epochs in total). A weight decay of 1e−4 and momentum of 0.9 are also used. Figure 6 shows the histogram of the strength statistics of the affinity matrix A. We can see that although a different backbone and dataset are used, the distributions of the k-hop affinity matrices correspond with the experiments on CIFAR-100." }, { "heading": "E EXPERIMENTS ON VIDEO-BASED PERSON RE-IDENTIFICATION", "text": "Experiments are also conducted on challenging datasets for the video-based person re-identification task, including MARS, ILIDS-VID and PRID2011. For the backbone, we follow the strategy of Gao & Nevatia (2018), which uses pooling (RTMtp) and attention (RTMta) to fuse the spatial-temporal features. Note that the models are trained entirely on ILIDS-VID and PRID2011 rather than fine-tuning a model pre-trained on the MARS dataset. The experimental results are shown in Tables 11, 12, and 13. We can see that on these datasets, our proposed block still generates consistent improvements.

Table 11: The Results on Mars dataset

model | mAP | Rank1
RTMta | 77.70% | 79.10%
+ NL | 72.90% | 80.90%
+ *SNL | 74.00% | 81.98%
RTMtp | 75.70% | 82.30%
+ NL | 75.54% | 83.40%
+ *SNL | 76.80% | 99.92%

Table 12: The Results on ILIDSVID dataset

model | mAP | Rank1
RTMta | 69.70% | 58.70%
+ NL | 66.30% | 56.00%
+ *SNL | 79.40% | 70.00%
RTMtp | 81.60% | 74.70%
+ NL | 83.00% | 75.30%
+ *SNL | 84.80% | 76.60%

Table 13: The Results on PRID2011 dataset

model | mAP | Rank1
RTMta | 86.60% | 79.80%
+ NL | 90.70% | 85.40%
+ *SNL | 91.50% | 86.50%
RTMtp | 90.50% | 86.50%
+ NL | 89.70% | 85.40%
+ *SNL | 92.40% | 88.80%" }, { "heading": "F ADDITIONAL EXPERIMENTS ON ACTION CLASSIFICATION", "text": "Our SNL can also improve the performance of other network structures such as the Pseudo-3D Convolutional Network (P3D) (Qiu et al. (2017)), the Motion-augmented RGB Stream (MARS) (Crasto et al. (2019)), the Slow-Fast Network (Slow-Fast) (Feichtenhofer et al. (2019)) and the Video Transformer Network (VTN) (Kozlov et al. (2019)). For P3D and MARS, our SNL block is inserted right before the last residual layer of res3. For Slow-Fast, we replace its original NL block with our SNL block. For VTN, we replace its multi-head self-attention blocks (parallel-connected NL blocks) with our SNL blocks. The Slow-Fast network is trained end-to-end on the UCF-101 dataset, while the others use models pretrained on the Kinetics-400 dataset and fine-tuned on the UCF-101 dataset. From Table 14, we can see that all the performances are improved when adding our proposed SNL block.

Experiments on the Kinetics-400 dataset are also given in Table 15. We can see that inserting the SNL block into the Slow-Fast Network generates a 2.1% improvement." } ]
2019
null